Kubernetes
Kubernetes the Hard Way - Installing Without kubeadm or Kubespray
0. Purpose of This Post
The goal of this exercise is to install every Kubernetes component by hand, without kubeadm or Kubespray.
https://github.com/kelseyhightower/kubernetes-the-hard-way
The GitHub link above is the original source of this lab.
I worked through it in my first study group with Gasida (가시다), and this post documents the process.
1. Overall Architecture Summary

Server layout
- jumpbox
  - The administration host, from which the actual installation onto the control plane and worker nodes is driven.
- server
  - The node that acts as the control plane.
- node-0, node-1
  - These machines act as the worker nodes.
All Kubernetes components are started with systemd rather than running as pods.
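As a quick illustration, once the whole lab is finished you can list those units on any of the machines (a small verification of my own, assuming the setup below completed):
# on server : etcd, kube-apiserver, kube-controller-manager, kube-scheduler
# on node-0/1 : containerd, kubelet, kube-proxy
systemctl list-units --type=service --state=running | grep -E 'kube|etcd|containerd'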
2. Walkthrough
Most of this section is simply running the commands as shown.
01 - Installing and Configuring the Virtual Machines
Prepare the lab VMs. The layout matches the diagram above, and the basic specs are as follows.
Each VM has a second NIC (NIC2), added for Kubernetes traffic between the cluster nodes.
| NAME | Description | CPU | RAM | NIC1 | NIC2 | HOSTNAME |
|---|---|---|---|---|---|---|
| jumpbox | Administration host | 2 | 1536 MB | 10.0.2.15 | 192.168.10.10 | jumpbox |
| server | Kubernetes server | 2 | 2GB | 10.0.2.15 | 192.168.10.100 | server.kubernetes.local server |
| node-0 | Kubernetes worker | 2 | 2GB | 10.0.2.15 | 192.168.10.101 | node-0.kubernetes.local node-0 |
| node-1 | Kubernetes worker | 2 | 2GB | 10.0.2.15 | 192.168.10.102 | node-1.kubernetes.local node-1 |
The VMs run on VirtualBox and are managed with Vagrant.
With Vagrant, VMs are created from code rather than through the VirtualBox UI.
- Install VirtualBox - https://www.virtualbox.org/wiki/Downloads
- Install Vagrant - https://developer.hashicorp.com/vagrant/downloads#windows
* Vagrant kept failing for me with the following error:
VBoxManage.exe: error: Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole
It seems installing Docker Desktop after VirtualBox caused the problem.
I spent over an hour on repair attempts, but the answer was a simple uninstall and reinstall. If you hit this, just remove VirtualBox and install it again.
- Vagrantfile
# Base Image : https://portal.cloud.hashicorp.com/vagrant/discover/bento/debian-12
BOX_IMAGE = "bento/debian-12"
BOX_VERSION = "202510.26.0"
Vagrant.configure("2") do |config|
  # jumpbox
  config.vm.define "jumpbox" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--groups", "/Hardway-Lab"]
      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      vb.name = "jumpbox"
      vb.cpus = 2
      vb.memory = 1536 # 2048 2560 3072 4096
      vb.linked_clone = true
    end
    subconfig.vm.host_name = "jumpbox"
    subconfig.vm.network "private_network", ip: "192.168.10.10"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "init_cfg.sh"
  end
  # server
  config.vm.define "server" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--groups", "/Hardway-Lab"]
      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      vb.name = "server"
      vb.cpus = 2
      vb.memory = 2048
      vb.linked_clone = true
    end
    subconfig.vm.host_name = "server"
    subconfig.vm.network "private_network", ip: "192.168.10.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60100, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "init_cfg.sh"
  end
  # node-0
  config.vm.define "node-0" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--groups", "/Hardway-Lab"]
      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      vb.name = "node-0"
      vb.cpus = 2
      vb.memory = 2048
      vb.linked_clone = true
    end
    subconfig.vm.host_name = "node-0"
    subconfig.vm.network "private_network", ip: "192.168.10.101"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60101, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "init_cfg.sh"
  end
  # node-1
  config.vm.define "node-1" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--groups", "/Hardway-Lab"]
      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      vb.name = "node-1"
      vb.cpus = 2
      vb.memory = 2048
      vb.linked_clone = true
    end
    subconfig.vm.host_name = "node-1"
    subconfig.vm.network "private_network", ip: "192.168.10.102"
    subconfig.vm.network "forwarded_port", guest: 22, host: 60102, auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    subconfig.vm.provision "shell", path: "init_cfg.sh"
  end
end
- init_cfg.sh
#!/usr/bin/env bash
echo ">>>> Initial Config Start <<<<"
echo "[TASK 1] Setting Profile & Bashrc"
echo "sudo su -" >> /home/vagrant/.bashrc
echo 'alias vi=vim' >> /etc/profile
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime # Change Timezone
echo "[TASK 2] Disable AppArmor"
systemctl stop apparmor && systemctl disable apparmor >/dev/null 2>&1
echo "[TASK 3] Disable and turn off SWAP"
swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
echo "[TASK 4] Install Packages"
apt update -qq >/dev/null 2>&1
apt install tree git jq yq unzip vim sshpass -y -qq >/dev/null 2>&1
echo "[TASK 5] Setting Root Password"
echo "root:qwe123" | chpasswd
echo "[TASK 6] Setting Sshd Config"
cat << EOF >> /etc/ssh/sshd_config
PasswordAuthentication yes
PermitRootLogin yes
EOF
systemctl restart sshd >/dev/null 2>&1
echo "[TASK 7] Setting Local DNS Using Hosts file"
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
cat << EOF >> /etc/hosts
192.168.10.10 jumpbox
192.168.10.100 server.kubernetes.local server
192.168.10.101 node-0.kubernetes.local node-0
192.168.10.102 node-1.kubernetes.local node-1
EOF
echo ">>>> Initial Config End <<<<"
Create the VMs from this Vagrantfile with the vagrant up command. To SSH into a VM, use vagrant ssh jumpbox.
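For reference, the full Vagrant cycle looks like this (run from the directory holding the Vagrantfile and init_cfg.sh):
# Bring up all four VMs and provision them with init_cfg.sh
vagrant up
# Confirm jumpbox, server, node-0, node-1 are running
vagrant status
# Connect to the jumpbox (the rest of the lab is driven from here)
vagrant ssh jumpbox
# Later, tear everything down with: vagrant destroy -f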
02 - Set Up The Jumpbox
Download the components that match the target Kubernetes version.
# Confirm we are the root account
whoami
root
## When logging in as the vagrant user, 'sudo su -' runs (appended to .bashrc) and switches to root
cat /home/vagrant/.bashrc | tail -n 1
sudo su -
# Install tools : already applied by init_cfg.sh
apt-get update && apt install tree git jq yq unzip vim sshpass -y
# Sync GitHub Repository
## --depth 1 : a shallow clone that fetches only the latest commit. The full git history isn't needed, so this saves download time and space.
## Source (a fellow study member's write-up) : https://sirzzang.github.io/kubernetes/Kubernetes-Cluster-The-Hard-Way-02/
pwd
git clone --depth 1 https://github.com/kelseyhightower/kubernetes-the-hard-way.git
cd kubernetes-the-hard-way
tree
pwd
# Download Binaries : the components needed to build the cluster
# Check the CPU architecture
dpkg --print-architecture
arm64 # macOS users
amd64 # Windows users
# The download list differs per CPU architecture
ls -l downloads-*
-rw-r--r-- 1 root root 839 Jan 4 10:30 downloads-amd64.txt
-rw-r--r-- 1 root root 839 Jan 4 10:30 downloads-arm64.txt
# https://kubernetes.io/releases/download/
cat downloads-$(dpkg --print-architecture).txt
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubectl
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-apiserver
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-controller-manager
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-scheduler
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-proxy
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-arm64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm64-v1.6.2.tgz
https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-arm64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-arm64.tar.gz
# Download with wget : roughly 500MB in total
wget -q --show-progress \
--https-only \
--timestamping \
-P downloads \
-i downloads-$(dpkg --print-architecture).txt
# Verify
ls -oh downloads
total 544M
-rw-r--r-- 1 root 48M Jan 7 2025 cni-plugins-linux-arm64-v1.6.2.tgz
-rw-r--r-- 1 root 34M Mar 18 2025 containerd-2.1.0-beta.0-linux-arm64.tar.gz
-rw-r--r-- 1 root 17M Dec 9 2024 crictl-v1.32.0-linux-arm64.tar.gz
-rw-r--r-- 1 root 21M Mar 28 2025 etcd-v3.6.0-rc.3-linux-arm64.tar.gz
-rw-r--r-- 1 root 87M Mar 12 2025 kube-apiserver
-rw-r--r-- 1 root 80M Mar 12 2025 kube-controller-manager
-rw-r--r-- 1 root 54M Mar 12 2025 kubectl
-rw-r--r-- 1 root 72M Mar 12 2025 kubelet
-rw-r--r-- 1 root 63M Mar 12 2025 kube-proxy
-rw-r--r-- 1 root 62M Mar 12 2025 kube-scheduler
-rw-r--r-- 1 root 11M Mar 4 2025 runc.arm64
# Extract the component binaries from the release archives and organize them under the downloads directory.
ARCH=$(dpkg --print-architecture)
echo $ARCH
mkdir -p downloads/{client,cni-plugins,controller,worker}
tree -d downloads
downloads
├── client
├── cni-plugins
├── controller
└── worker
# Extract the archives
tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
-C downloads/worker/ && tree -ug downloads
tar -xvf downloads/containerd-2.1.0-beta.0-linux-${ARCH}.tar.gz \
--strip-components 1 \
-C downloads/worker/ && tree -ug downloads
tar -xvf downloads/cni-plugins-linux-${ARCH}-v1.6.2.tgz \
-C downloads/cni-plugins/ && tree -ug downloads
## --strip-components 1 : strip the leading directory from paths like etcd-v3.6.0-rc.3-linux-amd64/etcd
tar -xvf downloads/etcd-v3.6.0-rc.3-linux-${ARCH}.tar.gz \
-C downloads/ \
--strip-components 1 \
etcd-v3.6.0-rc.3-linux-${ARCH}/etcdctl \
etcd-v3.6.0-rc.3-linux-${ARCH}/etcd && tree -ug downloads
# Verify
tree downloads/worker/
tree downloads/cni-plugins
ls -l downloads/{etcd,etcdctl}
# Move the files into place
mv downloads/{etcdctl,kubectl} downloads/client/
mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} downloads/controller/
mv downloads/{kubelet,kube-proxy} downloads/worker/
mv downloads/runc.${ARCH} downloads/worker/runc
# Verify
tree downloads/client/
tree downloads/controller/
tree downloads/worker/
# Remove the archives that are no longer needed
ls -l downloads/*gz
rm -rf downloads/*gz
# Make the binaries executable.
ls -l downloads/{client,cni-plugins,controller,worker}/*
chmod +x downloads/{client,cni-plugins,controller,worker}/*
ls -l downloads/{client,cni-plugins,controller,worker}/*
# Change ownership of some files
tree -ug downloads # cat /etc/passwd | grep vagrant && cat /etc/group | grep vagrant
chown root:root downloads/client/etcdctl
chown root:root downloads/controller/etcd
chown root:root downloads/worker/crictl
tree -ug downloads
# Install kubectl, the Kubernetes client tool
ls -l downloads/client/kubectl
cp downloads/client/kubectl /usr/local/bin/
# can be verified by running the kubectl command:
kubectl version --client
Client Version: v1.32.3
Kustomize Version: v5.5.0
03 - Provisioning Compute Resources
Run the commands below to set up basic SSH access between the machines.
# Machine database (a file of per-machine attributes) : IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
## Note) the server (control plane) runs no kubelet, so it needs no pod subnet
cat <<EOF > machines.txt
192.168.10.100 server.kubernetes.local server
192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
EOF
cat machines.txt
while read IP FQDN HOST SUBNET; do
echo "${IP} ${FQDN} ${HOST} ${SUBNET}"
done < machines.txt
# Configuring SSH Access
# Check sshd_config : password-based authentication is already enabled
grep "^[^#]" /etc/ssh/sshd_config
...
PasswordAuthentication yes
PermitRootLogin yes
# Generate a new SSH key
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
ls -l /root/.ssh
-rw------- 1 root root 2602 Jan 2 21:07 id_rsa
-rw-r--r-- 1 root root 566 Jan 2 21:07 id_rsa.pub
# Copy the SSH public key to each machine
while read IP FQDN HOST SUBNET; do
sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@${IP}
done < machines.txt
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} cat /root/.ssh/authorized_keys
done < machines.txt
# Once each key is added, verify SSH public key access is working
# Below, verify access by IP address
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} hostname
done < machines.txt
# Configure hostnames
# Verify : already set by init_cfg.sh
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} cat /etc/hosts
done < machines.txt
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} hostname --fqdn
done < machines.txt
# Below, verify SSH access by hostname
cat /etc/hosts
while read IP FQDN HOST SUBNET; do
sshpass -p 'qwe123' ssh -n -o StrictHostKeyChecking=no root@${HOST} hostname
done < machines.txt
while read IP FQDN HOST SUBNET; do
sshpass -p 'qwe123' ssh -n root@${HOST} uname -o -m -n
done < machines.txt
04 - Provisioning a CA and Generating TLS Certificates
| Item | Private Key | CSR | Certificate | Details | X509v3 Extended Key Usage |
|---|---|---|---|---|---|
| Root CA | ca.key | X | ca.crt | | |
| admin | admin.key | admin.csr | admin.crt | CN = admin, O = system:masters | TLS Web Client Authentication |
| node-0 | node-0.key | node-0.csr | node-0.crt | CN = system:node:node-0, O = system:nodes | TLS Web Server / Client Authentication |
| node-1 | node-1.key | node-1.csr | node-1.crt | CN = system:node:node-1, O = system:nodes | TLS Web Server / Client Authentication |
| kube-proxy | kube-proxy.key | kube-proxy.csr | kube-proxy.crt | CN = system:kube-proxy, O = system:node-proxier | TLS Web Server / Client Authentication |
| kube-scheduler | kube-scheduler.key | kube-scheduler.csr | kube-scheduler.crt | CN = system:kube-scheduler, O = system:kube-scheduler | TLS Web Server / Client Authentication |
| kube-controller-manager | kube-controller-manager.key | kube-controller-manager.csr | kube-controller-manager.crt | CN = system:kube-controller-manager, O = system:kube-controller-manager | TLS Web Server / Client Authentication |
| kube-api-server | kube-api-server.key | kube-api-server.csr | kube-api-server.crt | CN = kubernetes, SAN: IP(127.0.0.1, 10.32.0.1), DNS(kubernetes, ...) | TLS Web Server / Client Authentication |
| service-accounts | service-accounts.key | service-accounts.csr | service-accounts.crt | CN = service-accounts | TLS Web Client Authentication |
Honestly, the certificate part didn't fully click for me. I knew only that it enables secure communication; I'll study it further and write a separate post.
Issue the CA.
# Generate the CA configuration file, certificate, and private key
# Generate the Root CA private key : ca.key
openssl genrsa -out ca.key 4096
ls -l ca.key
-rw------- 1 root root 3272 Jan 3 09:41 ca.key
cat ca.key
openssl rsa -in ca.key -text -noout # inspect the private key structure
# Generate the Root CA certificate : ca.crt
## -x509 : issue an X.509 certificate directly without a CSR, i.e. a self-signed certificate
## -noenc : do not encrypt the private key, i.e. no passphrase on the CA key (ca.key)
## -config ca.conf : certificate details are read from the config file; the [req] section is used - DN info → [req_distinguished_name], CA extensions → [ca_x509_extensions]
openssl req -x509 -new -sha512 -noenc \
-key ca.key -days 3653 \
-config ca.conf \
-out ca.crt
ls -l ca.crt
-rw-r--r-- 1 root root 1899 Jan 3 09:46 ca.crt
# The relevant part of ca.conf
cat ca.conf
-------------------------------------------
[req]
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = ca_x509_extensions
[ca_x509_extensions]
basicConstraints = CA:TRUE # this certificate may act as a CA
keyUsage = cRLSign, keyCertSign # cRLSign: may sign certificate revocation lists (CRLs), keyCertSign: may sign other certificates
[req_distinguished_name]
C = US
ST = Washington
L = Seattle
CN = CA
-------------------------------------------
cat ca.crt
openssl x509 -in ca.crt -text -noout # inspect the full certificate
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
56:fb:42:82:5e:2f:96:cf:f5:83:2e:78:46:98:6e:3f:08:ee:99:67
Signature Algorithm: sha512WithRSAEncryption
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 00:46:22 2026 GMT
Not After : Jan 4 00:46:22 2036 GMT
Subject: C = US, ST = Washington, L = Seattle, CN = CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (4096 bit)
Modulus:
00:ae:ce:95:7e:db:50:a2:4a:71:8b:99:54:0d:b4:
ce:1e:6f:13:3c:a6:54:30:0f:5b:0a:76:56:8c:44:
75:98:58:a6:57:7d:d2:38:e8:05:3c:cc:a5:e9:86:
57:73:98:c5:17:52:7c:7e:c8:48:6c:6b:86:13:1c:
7a:72:5d:10:3a:15:72:8d:66:35:e3:55:06:3e:f7:
44:7f:1b:fc:9e:4a:2b:4a:28:dd:2c:34:63:8d:26:
cc:39:50:3b:44:e5:f8:fe:68:c8:c0:a5:94:ba:b1:
d5:e3:55:1d:d9:98:0c:03:23:3f:9d:d9:a0:79:2c:
e9:ce:c9:92:b8:1f:6e:83:cb:08:1e:e6:28:cf:55:
29:b3:f3:19:1b:fe:c2:d8:30:6e:ee:68:7e:80:c3:
9a:53:77:d1:ae:2a:21:ee:82:94:d7:b5:f3:8f:a3:
98:f8:85:c6:c9:94:72:f3:1e:61:45:84:97:e4:25:
69:c8:5e:11:2c:75:2a:85:a6:b4:75:50:5f:a1:6c:
0a:54:1e:78:ab:25:a3:2e:04:18:21:68:86:11:3d:
90:09:95:02:aa:fc:32:2c:c5:ed:ac:d1:14:3c:d7:
fc:c3:a3:9f:dd:52:07:eb:2f:a7:fc:22:5e:2c:23:
ad:f6:f5:1e:90:db:3f:32:eb:10:38:34:c3:f0:40:
5d:c9:0b:d0:01:fd:78:73:0b:80:92:75:0c:24:76:
c1:6d:93:42:86:4b:a0:6d:99:7b:72:46:b6:52:b1:
2f:47:90:a0:ed:d8:93:71:23:c4:20:c8:63:04:a1:
f6:b6:d8:6e:6b:20:1a:2b:56:43:02:47:5e:77:ae:
4e:00:d5:ec:05:f6:e8:a4:ab:aa:8b:14:8b:b9:da:
d4:a6:e6:c6:c2:35:5a:fd:24:51:7d:29:bb:3c:d3:
fd:a7:bf:a9:5b:77:5a:e1:b1:7b:51:ab:29:a4:15:
e7:ac:f6:2a:1e:38:68:bb:f6:6f:60:e4:26:34:cc:
45:08:2b:e0:71:9a:8e:67:e3:0d:d4:67:63:0b:76:
27:bd:ff:8d:9c:78:e5:b8:55:f0:ce:c1:35:b7:b6:
e7:44:60:60:25:ae:f1:0f:3d:c6:7e:25:03:a3:c8:
87:f8:3d:cd:4b:06:1b:d1:94:63:31:50:33:5b:3f:
3c:66:a8:4d:df:2a:b4:76:a4:fa:54:73:43:09:ac:
6c:21:0b:9c:35:e9:14:ca:25:cd:f1:72:c1:fe:0f:
aa:56:59:1d:ea:45:a7:ab:f5:41:a5:d1:50:3d:da:
f0:71:ff:8b:d2:3b:04:0a:d2:80:e9:17:d6:9a:a3:
1a:5f:19:9b:a0:ef:08:36:d4:88:65:2b:50:42:10:
14:e1:a9
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:TRUE
X509v3 Key Usage:
Certificate Sign, CRL Sign
X509v3 Subject Key Identifier:
B3:5D:82:13:B6:1C:44:59:8C:0A:4E:DB:2B:18:98:77:0D:7A:2F:5B
Create the admin client certificate.
# Create Client and Server Certificates : admin
openssl genrsa -out admin.key 4096
ls -l admin.key
# The admin section in ca.conf
cat ca.conf
-------------------------------------------
[admin]
distinguished_name = admin_distinguished_name
prompt = no
req_extensions = default_req_extensions
[admin_distinguished_name]
CN = admin
O = system:masters
[default_req_extensions] # common CSR extensions
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash
-------------------------------------------
# Generate the CSR : use the admin.key private key to request a Kubernetes admin client certificate (admin.csr) with 'CN=admin, O=system:masters'
openssl req -new -key admin.key -sha256 \
-config ca.conf -section admin \
-out admin.csr
ls -l admin.csr
openssl req -in admin.csr -text -noout # inspect the full CSR
# Have the CA sign the CSR to produce the crt file
## -req : take a CSR as input and issue a certificate; not self-signed - the CA signs it
## -days 3653 : certificate valid for 3653 days (about 10 years)
## -copy_extensions copyall : copy all X.509 extensions from the CSR into the certificate
## -CAcreateserial : auto-create the CA serial number file (ca.srl by default), reused for subsequent issuance
openssl x509 -req -days 3653 -in admin.csr \
-copy_extensions copyall \
-sha256 -CA ca.crt \
-CAkey ca.key \
-CAcreateserial \
-out admin.crt
Certificate request self-signature ok
subject=CN = admin, O = system:masters
ls -l admin.crt
openssl x509 -in admin.crt -text -noout
...
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 01:01:40 2026 GMT
Not After : Jan 4 01:01:40 2036 GMT
Subject: CN = admin, O = system:masters
...
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
Netscape Cert Type:
SSL Client
Netscape Comment:
Admin Client Certificate
...
Create the remaining client and server certificates.
# Fix ca.conf : the kube-scheduler O field has a doubled "system:" prefix
cat ca.conf | grep system:kube-scheduler
CN = system:kube-scheduler
O = system:system:kube-scheduler
sed -i 's/system:system:kube-scheduler/system:kube-scheduler/' ca.conf
cat ca.conf | grep system:kube-scheduler
CN = system:kube-scheduler
O = system:kube-scheduler
# Define the cert list
certs=(
"node-0" "node-1"
"kube-proxy" "kube-scheduler"
"kube-controller-manager"
"kube-api-server"
"service-accounts"
)
# Verify
echo ${certs[*]}
node-0 node-1 kube-proxy kube-scheduler kube-controller-manager kube-api-server service-accounts
# Generate a private key, CSR, and certificate for each
for i in ${certs[*]}; do
openssl genrsa -out "${i}.key" 4096
openssl req -new -key "${i}.key" -sha256 \
-config "ca.conf" -section ${i} \
-out "${i}.csr"
openssl x509 -req -days 3653 -in "${i}.csr" \
-copy_extensions copyall \
-sha256 -CA "ca.crt" \
-CAkey "ca.key" \
-CAcreateserial \
-out "${i}.crt"
done
Certificate request self-signature ok
subject=CN = system:node:node-0, O = system:nodes, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = system:node:node-1, O = system:nodes, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = system:kube-proxy, O = system:node-proxier, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = system:kube-scheduler, O = system:kube-scheduler, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = system:kube-controller-manager, O = system:kube-controller-manager, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = kubernetes, C = US, ST = Washington, L = Seattle
Certificate request self-signature ok
subject=CN = service-accounts
ls -1 *.crt *.key *.csr
admin.crt
admin.csr
admin.key
ca.crt
ca.key
kube-api-server.crt
kube-api-server.csr
kube-api-server.key
kube-controller-manager.crt
kube-controller-manager.csr
kube-controller-manager.key
kube-proxy.crt
kube-proxy.csr
kube-proxy.key
kube-scheduler.crt
kube-scheduler.csr
kube-scheduler.key
node-0.crt
node-0.csr
node-0.key
node-1.crt
node-1.csr
node-1.key
service-accounts.crt
service-accounts.csr
service-accounts.key
# Inspect the certificate details
openssl x509 -in node-0.crt -text -noout
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 02:41:09 2026 GMT
Not After : Jan 4 02:41:09 2036 GMT
Subject: CN = system:node:node-0, O = system:nodes, C = US, ST = Washington, L = Seattle
...
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Extended Key Usage:
TLS Web Client Authentication, TLS Web Server Authentication
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
Netscape Cert Type:
SSL Client
Netscape Comment:
Node-0 Certificate
X509v3 Subject Alternative Name:
DNS:node-0, IP Address:127.0.0.1
openssl x509 -in node-1.crt -text -noout
Subject: CN = system:node:node-1, O = system:nodes, C = US, ST = Washington, L = Seattle
X509v3 Subject Alternative Name:
DNS:node-1, IP Address:127.0.0.1
openssl x509 -in kube-proxy.crt -text -noout
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 02:41:13 2026 GMT
Not After : Jan 4 02:41:13 2036 GMT
Subject: CN = system:kube-proxy, O = system:node-proxier, C = US, ST = Washington, L = Seattle
...
X509v3 extensions:
Netscape Cert Type:
SSL Client
Netscape Comment:
Kube Proxy Certificate
X509v3 Subject Alternative Name:
DNS:kube-proxy, IP Address:127.0.0.1
openssl x509 -in kube-scheduler.crt -text -noout
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 02:41:13 2026 GMT
Not After : Jan 4 02:41:13 2036 GMT
Subject: CN = system:kube-scheduler, O = system:kube-scheduler, C = US, ST = Washington, L = Seattle
Netscape Cert Type:
SSL Client
Netscape Comment:
Kube Scheduler Certificate
X509v3 Subject Alternative Name:
DNS:kube-scheduler, IP Address:127.0.0.1
openssl x509 -in kube-controller-manager.crt -text -noout
Validity
Not Before: Jan 3 02:41:16 2026 GMT
Not After : Jan 4 02:41:16 2036 GMT
Subject: CN = system:kube-controller-manager, O = system:kube-controller-manager, C = US, ST = Washington, L = Seattle
Netscape Cert Type:
SSL Client
Netscape Comment:
Kube Controller Manager Certificate
X509v3 Subject Alternative Name:
DNS:kube-controller-manager, IP Address:127.0.0.1
# api-server : 10.32.0.1 in the SAN is the ClusterIP of the kubernetes Service. Note the extra SSL Server role, unlike the other certs
openssl x509 -in kube-api-server.crt -text -noout
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 02:41:17 2026 GMT
Not After : Jan 4 02:41:17 2036 GMT
Subject: CN = kubernetes, C = US, ST = Washington, L = Seattle
Netscape Cert Type:
SSL Client, SSL Server
Netscape Comment:
Kube API Server Certificate
X509v3 Subject Alternative Name:
IP Address:127.0.0.1, IP Address:10.32.0.1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.svc.cluster.local, DNS:server.kubernetes.local, DNS:api-server.kubernetes.local
# service-accounts
openssl x509 -in service-accounts.crt -text -noout
Issuer: C = US, ST = Washington, L = Seattle, CN = CA
Validity
Not Before: Jan 3 02:41:17 2026 GMT
Not After : Jan 4 02:41:17 2036 GMT
Subject: CN = service-accounts
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
Netscape Cert Type:
SSL Client
Netscape Comment:
Admin Client Certificate
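As a sanity check (my addition, not part of the original lab), each issued certificate can be verified against the CA to confirm the chain of trust:
for i in admin ${certs[*]}; do
  openssl verify -CAfile ca.crt "${i}.crt" # prints "<name>.crt: OK" on success
done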
Distribute the generated certificates to each machine.
# Copy the appropriate certificates and private keys to the node-0 and node-1 machines
for host in node-0 node-1; do
ssh root@${host} mkdir /var/lib/kubelet/
scp ca.crt root@${host}:/var/lib/kubelet/
scp ${host}.crt \
root@${host}:/var/lib/kubelet/kubelet.crt
scp ${host}.key \
root@${host}:/var/lib/kubelet/kubelet.key
done
# Verify
ssh node-0 ls -l /var/lib/kubelet
ssh node-1 ls -l /var/lib/kubelet
# Copy the appropriate certificates and private keys to the server machine
scp \
ca.key ca.crt \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
root@server:~/
# Verify
ssh server ls -l /root
05 - Generating Kubernetes Configuration Files for Authentication
The commands for this part are long, so I'll defer to the link instead. [GitHub link]
06 - Generating the Data Encryption Config and Key
Configure encryption for the data stored in etcd.
# The Encryption Key
# Generate an encryption key
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo $ENCRYPTION_KEY
JMnUP1PUUORZE9iadPdzYifnvPVIniSzOW6NUoMofVc=
# The Encryption Config File
# Create the encryption-config.yaml encryption config file
# (Note) the header actually written into the etcd value : k8s:enc:aescbc:v1:key1:<ciphertext>
cat configs/encryption-config.yaml
kind: EncryptionConfiguration # defines how kube-apiserver encrypts resources before storing them in etcd
apiVersion: apiserver.config.k8s.io/v1 # referenced by the --encryption-provider-config flag
resources:
  - resources:
      - secrets # the Kubernetes resources to encrypt : only Secrets here
    providers: # supported providers (applied top-down) : identity, aescbc, aesgcm, kms v2, secretbox
      - aescbc: # encrypt Secrets stored in etcd with AES-CBC
          keys:
            - name: key1 # key identifier (recorded in the etcd data)
              secret: ${ENCRYPTION_KEY}
      - identity: {} # no encryption (plaintext) - mainly for backward compatibility / gradual migration
# Listing aescbc first and identity second means "always encrypt new writes, but still read data that was previously stored in plaintext" - a backward-compatibility strategy
# Source of this explanation : https://yyeon2.medium.com/bootstrap-kubernetes-the-hard-way-48644e868550
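# (Side note, my addition - not needed here since no Secrets exist yet)
# If Secrets had been created before enabling encryption, the Kubernetes docs' trick to re-store them encrypted is:
# kubectl get secrets --all-namespaces -o json | kubectl replace -f -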
envsubst < configs/encryption-config.yaml > encryption-config.yaml
cat encryption-config.yaml
# Copy the encryption-config.yaml encryption config file to each controller instance:
scp encryption-config.yaml root@server:~/
ssh server ls -l /root/encryption-config.yaml
07 - Bootstrapping the etcd Cluster
Install the etcd cluster.
# Prerequisites
# Hostname change : controller (upstream) -> server (this lab)
# Note that etcd here talks plaintext HTTP!
# Each etcd member must have a unique name within an etcd cluster.
# Set the etcd name to match the hostname of the current compute instance:
cat units/etcd.service | grep controller
ETCD_NAME=server
cat > units/etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--initial-advertise-peer-urls http://127.0.0.1:2380 \\
--listen-peer-urls http://127.0.0.1:2380 \\
--listen-client-urls http://127.0.0.1:2379 \\
--advertise-client-urls http://127.0.0.1:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster ${ETCD_NAME}=http://127.0.0.1:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat units/etcd.service | grep server
# Copy etcd binaries and systemd unit files to the server machine
scp \
downloads/controller/etcd \
downloads/client/etcdctl \
units/etcd.service \
root@server:~/
# The commands below are run after connecting to the server VM
# The commands in this lab must be run on the server machine. Login to the server machine using the ssh command. Example:
ssh root@server
-------------------------------------------------------------------
# Bootstrapping an etcd Cluster
# Install the etcd Binaries
# Extract and install the etcd server and the etcdctl command line utility
pwd
mv etcd etcdctl /usr/local/bin/
# Configure the etcd Server
mkdir -p /etc/etcd /var/lib/etcd
chmod 700 /var/lib/etcd
cp ca.crt kube-api-server.key kube-api-server.crt /etc/etcd/
# Create the etcd.service systemd unit file:
mv etcd.service /etc/systemd/system/
tree /etc/systemd/system/
# Start the etcd Server
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
# Verify
systemctl status etcd --no-pager
ss -tnlp | grep etcd
LISTEN 0 4096 127.0.0.1:2380 0.0.0.0:* users:(("etcd",pid=2829,fd=3))
LISTEN 0 4096 127.0.0.1:2379 0.0.0.0:* users:(("etcd",pid=2829,fd=6))
# List the etcd cluster members
etcdctl member list
702b0a34e2cfd39, started, server, http://127.0.0.1:2380, http://127.0.0.1:2379, false
etcdctl member list -w table
etcdctl endpoint status -w table
exit
-------------------------------------------------------------------
08 - Bootstrapping the Kubernetes Control Plane
Start the control plane components on the server node.
# Prerequisites
# Modify kube-apiserver.service : add service-cluster-ip-range
# https://github.com/kelseyhightower/kubernetes-the-hard-way/issues/905
# The service-cluster-ip-range must cover the Service IP defined under [kube-api-server_alt_names] in ca.conf
cat ca.conf | grep '\[kube-api-server_alt_names' -A2
[kube-api-server_alt_names]
IP.0 = 127.0.0.1
IP.1 = 10.32.0.1
cat units/kube-apiserver.service
cat << EOF > units/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--allow-privileged=true \\
--apiserver-count=1 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-servers=http://127.0.0.1:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-accounts.crt \\
--service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \\
--service-account-issuer=https://server.kubernetes.local:6443 \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat units/kube-apiserver.service
# 'System-internal' RBAC that allows kube-apiserver to reach the kubelet (Node) API
cat configs/kube-apiserver-to-kubelet.yaml ; echo
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true" # automatically managed by Kubernetes across upgrades
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - "" # core API group (v1) : the node subresources belong to the core group
    resources: # covers most of the kubelet API, as below
      - nodes/proxy ## exec, attach, port-forward, container logs (kubectl logs)
      - nodes/stats ## node/pod resource statistics (cAdvisor summary API)
      - nodes/log ## node-level logs (the kubelet /logs endpoint)
      - nodes/spec ## node spec / machine info
      - nodes/metrics ## the kubelet /metrics endpoint (metrics-server / kubectl top)
    verbs:
      - "*" # scope is limited to the nodes subresources above + all verbs allowed (get, list, watch, create, proxy, etc.)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver # who gets this permission? → kube-apiserver itself
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes # the user "kubernetes" - the CN of the client certificate kube-apiserver presents
# api-server : check the Subject CN
openssl x509 -in kube-api-server.crt -text -noout
Subject: CN = kubernetes,
# Flow when the API server calls the kubelet
kube-apiserver (client)
|
| (TLS client cert, CN=kubernetes)
↓
kubelet acting as a server (/stats, /log, /metrics)
|
↓
RBAC evaluation:
User = kubernetes
→ matches ClusterRoleBinding system:kube-apiserver
→ grants ClusterRole system:kube-apiserver-to-kubelet
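# (Optional probe, my addition) After the ClusterRole/Binding above are applied (done later in this lab),
# the grant can be checked from the server machine using impersonation:
kubectl auth can-i get nodes --subresource=proxy --as kubernetes --kubeconfig admin.kubeconfig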
# kube-scheduler
cat units/kube-scheduler.service ; echo
cat configs/kube-scheduler.yaml ; echo
# kube-controller-manager : cluster-cidr is a range containing the pod CIDRs; service-cluster-ip-range matches the apiserver setting.
cat units/kube-controller-manager.service ; echo
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--root-ca-file=/var/lib/kubernetes/ca.crt \
--service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# Connect to the jumpbox and copy Kubernetes binaries and systemd unit files to the server machine
scp \
downloads/controller/kube-apiserver \
downloads/controller/kube-controller-manager \
downloads/controller/kube-scheduler \
downloads/client/kubectl \
units/kube-apiserver.service \
units/kube-controller-manager.service \
units/kube-scheduler.service \
configs/kube-scheduler.yaml \
configs/kube-apiserver-to-kubelet.yaml \
root@server:~/
# Verify
ssh server ls -l /root
Move the certificates and binaries into place and register the systemd services.
ssh root@server
---------------------------------------------------------------
# Create the Kubernetes configuration directory:
pwd
mkdir -p /etc/kubernetes/config
# Install the Kubernetes binaries:
mv kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl \
/usr/local/bin/
ls -l /usr/local/bin/kube-*
# Configure the Kubernetes API Server
mkdir -p /var/lib/kubernetes/
mv ca.crt ca.key \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
encryption-config.yaml \
/var/lib/kubernetes/
ls -l /var/lib/kubernetes/
## Create the kube-apiserver.service systemd unit file:
mv kube-apiserver.service \
/etc/systemd/system/kube-apiserver.service
tree /etc/systemd/system
# Configure the Kubernetes Controller Manager
## Move the kube-controller-manager kubeconfig into place:
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
## Create the kube-controller-manager.service systemd unit file:
mv kube-controller-manager.service /etc/systemd/system/
# Configure the Kubernetes Scheduler
## Move the kube-scheduler kubeconfig into place:
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
## Create the kube-scheduler.yaml configuration file:
mv kube-scheduler.yaml /etc/kubernetes/config/
## Create the kube-scheduler.service systemd unit file:
mv kube-scheduler.service /etc/systemd/system/
# Start the Controller Services : Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
systemctl daemon-reload
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler
# Verify
ss -tlp | grep kube
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=3071,fd=3))
LISTEN 0 4096 *:10257 *:* users:(("kube-controller",pid=3072,fd=3))
LISTEN 0 4096 *:10259 *:* users:(("kube-scheduler",pid=3073,fd=3))
systemctl is-active kube-apiserver
systemctl status kube-apiserver --no-pager
journalctl -u kube-apiserver --no-pager
systemctl status kube-scheduler --no-pager
systemctl status kube-controller-manager --no-pager
# Verify this using the kubectl command line tool:
kubectl cluster-info dump --kubeconfig admin.kubeconfig
kubectl cluster-info --kubeconfig admin.kubeconfig
Kubernetes control plane is running at https://127.0.0.1:6443
kubectl get node --kubeconfig admin.kubeconfig
kubectl get pod -A --kubeconfig admin.kubeconfig
kubectl get service,ep --kubeconfig admin.kubeconfig
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 110m
NAME ENDPOINTS AGE
endpoints/kubernetes 10.0.2.15:6443 110m
# Check the clusterroles
kubectl get clusterroles --kubeconfig admin.kubeconfig
NAME CREATED AT
admin 2026-01-03T08:42:54Z
cluster-admin 2026-01-03T08:42:54Z
edit 2026-01-03T08:42:54Z
system:aggregate-to-admin 2026-01-03T08:42:54Z
system:aggregate-to-edit 2026-01-03T08:42:54Z
system:aggregate-to-view 2026-01-03T08:42:54Z
system:auth-delegator 2026-01-03T08:42:54Z
system:basic-user 2026-01-03T08:42:54Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient 2026-01-03T08:42:54Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2026-01-03T08:42:54Z
system:certificates.k8s.io:kube-apiserver-client-approver 2026-01-03T08:42:54Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2026-01-03T08:42:54Z
system:certificates.k8s.io:kubelet-serving-approver 2026-01-03T08:42:54Z
system:certificates.k8s.io:legacy-unknown-approver 2026-01-03T08:42:54Z
system:controller:attachdetach-controller 2026-01-03T08:42:54Z
system:controller:certificate-controller 2026-01-03T08:42:54Z
system:controller:clusterrole-aggregation-controller 2026-01-03T08:42:54Z
system:controller:cronjob-controller 2026-01-03T08:42:54Z
system:controller:daemon-set-controller 2026-01-03T08:42:54Z
system:controller:deployment-controller 2026-01-03T08:42:54Z
system:controller:disruption-controller 2026-01-03T08:42:54Z
system:controller:endpoint-controller 2026-01-03T08:42:54Z
system:controller:endpointslice-controller 2026-01-03T08:42:54Z
system:controller:endpointslicemirroring-controller 2026-01-03T08:42:54Z
system:controller:ephemeral-volume-controller 2026-01-03T08:42:54Z
system:controller:expand-controller 2026-01-03T08:42:54Z
system:controller:generic-garbage-collector 2026-01-03T08:42:54Z
system:controller:horizontal-pod-autoscaler 2026-01-03T08:42:54Z
system:controller:job-controller 2026-01-03T08:42:54Z
system:controller:legacy-service-account-token-cleaner 2026-01-03T08:42:54Z
system:controller:namespace-controller 2026-01-03T08:42:54Z
system:controller:node-controller 2026-01-03T08:42:54Z
system:controller:persistent-volume-binder 2026-01-03T08:42:54Z
system:controller:pod-garbage-collector 2026-01-03T08:42:54Z
system:controller:pv-protection-controller 2026-01-03T08:42:54Z
system:controller:pvc-protection-controller 2026-01-03T08:42:54Z
system:controller:replicaset-controller 2026-01-03T08:42:54Z
system:controller:replication-controller 2026-01-03T08:42:54Z
system:controller:resourcequota-controller 2026-01-03T08:42:54Z
system:controller:root-ca-cert-publisher 2026-01-03T08:42:54Z
system:controller:route-controller 2026-01-03T08:42:54Z
system:controller:service-account-controller 2026-01-03T08:42:54Z
system:controller:service-controller 2026-01-03T08:42:54Z
system:controller:statefulset-controller 2026-01-03T08:42:54Z
system:controller:ttl-after-finished-controller 2026-01-03T08:42:54Z
system:controller:ttl-controller 2026-01-03T08:42:54Z
system:controller:validatingadmissionpolicy-status-controller 2026-01-03T08:42:54Z
system:discovery 2026-01-03T08:42:54Z
system:heapster 2026-01-03T08:42:54Z
system:kube-aggregator 2026-01-03T08:42:54Z
system:kube-controller-manager 2026-01-03T08:42:54Z
system:kube-dns 2026-01-03T08:42:54Z
system:kube-scheduler 2026-01-03T08:42:54Z
system:kubelet-api-admin 2026-01-03T08:42:54Z
system:monitoring 2026-01-03T08:42:54Z
system:node 2026-01-03T08:42:54Z
system:node-bootstrapper 2026-01-03T08:42:54Z
system:node-problem-detector 2026-01-03T08:42:54Z
system:node-proxier 2026-01-03T08:42:54Z
system:persistent-volume-provisioner 2026-01-03T08:42:54Z
system:public-info-viewer 2026-01-03T08:42:54Z
system:service-account-issuer-discovery 2026-01-03T08:42:54Z
system:volume-scheduler 2026-01-03T08:42:54Z
view 2026-01-03T08:42:54Z
kubectl describe clusterroles system:kube-scheduler --kubeconfig admin.kubeconfig
# Check the kube-scheduler subject
kubectl get clusterrolebindings --kubeconfig admin.kubeconfig
kubectl describe clusterrolebindings system:kube-scheduler --kubeconfig admin.kubeconfig
Role:
Kind: ClusterRole
Name: system:kube-scheduler
Subjects:
Kind Name Namespace
---- ---- ---------
User system:kube-scheduler
---------------------------------------------------------------
Configure the RBAC permissions.
ssh root@server # already connected to server from the previous step
---------------------------------------------------------------
# RBAC setup for apiserver -> kubelet access
# Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat kube-apiserver-to-kubelet.yaml
kubectl apply -f kube-apiserver-to-kubelet.yaml --kubeconfig admin.kubeconfig
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
# Verify
kubectl get clusterroles system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebindings system:kube-apiserver --kubeconfig admin.kubeconfig
---------------------------------------------------------------
Test the control plane.
curl -s -k --cacert ca.crt https://server.kubernetes.local:6443/version | jq
{
"major": "1",
"minor": "32",
"gitVersion": "v1.32.3",
"gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
"gitTreeState": "clean",
"buildDate": "2025-03-11T19:52:21Z",
"goVersion": "go1.23.6",
"compiler": "gc",
"platform": "linux/arm64"
}
09 - Bootstrapping the Kubernetes Worker Nodes
The worker nodes need to be set up as well.
Copy the kubelet config and CNI files over.
# Render the cni (bridge) config and kubelet-config files and send them to node-0/1
cat configs/10-bridge.conf | jq
cat configs/kubelet-config.yaml | yq # works fine through the smoke test even without clusterDomain/clusterDNS -> this lab does not use coredns
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "0.0.0.0" # kubelet HTTPS server bind address : listen on port 10250 on all interfaces
authentication:
  anonymous:
    enabled: false # disable anonymous authentication
  webhook:
    enabled: true # delegate authentication to kube-apiserver : ServiceAccount and bootstrap tokens can be handled
  x509: # CA used to verify client certificates presented to the kubelet
    clientCAFile: "/var/lib/kubelet/ca.crt" # (as above) clients : kube-apiserver, metrics-server, kubectl (on direct access)
authorization:
  mode: Webhook # delegate authorization to kube-apiserver : Node Authorizer + RBAC apply
cgroupDriver: systemd
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock" # CRI endpoint
enableServer: true # enable the kubelet API server; if false, the apiserver cannot reach the kubelet
failSwapOn: false
maxPods: 16 # at most 16 pods per node
memorySwap:
  swapBehavior: NoSwap
port: 10250 # kubelet HTTPS API port : used for logs, exec, stats, metrics
resolvConf: "/etc/resolv.conf" # DNS config file handed down to pods
registerNode: true # kubelet registers its Node object with the API server automatically
runtimeRequestTimeout: "15m" # max wait for CRI requests : image pulls, container starts, etc.
tlsCertFile: "/var/lib/kubelet/kubelet.crt" # the kubelet's own TLS server certificate for its HTTPS server
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
for HOST in node-0 node-1; do
SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
sed "s|SUBNET|$SUBNET|g" \
configs/10-bridge.conf > 10-bridge.conf
sed "s|SUBNET|$SUBNET|g" \
configs/kubelet-config.yaml > kubelet-config.yaml
scp 10-bridge.conf kubelet-config.yaml \
root@${HOST}:~/
done
# Verify
ssh node-0 ls -l /root
ssh node-1 ls -l /root
# Check the remaining files and copy them to node-0/1
cat configs/99-loopback.conf ; echo
cat configs/containerd-config.toml ; echo
cat configs/kube-proxy-config.yaml ; echo
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
cat units/containerd.service
cat units/kubelet.service
cat units/kube-proxy.service
for HOST in node-0 node-1; do
scp \
downloads/worker/* \
downloads/client/kubectl \
configs/99-loopback.conf \
configs/containerd-config.toml \
configs/kube-proxy-config.yaml \
units/containerd.service \
units/kubelet.service \
units/kube-proxy.service \
root@${HOST}:~/
done
for HOST in node-0 node-1; do
scp \
downloads/cni-plugins/* \
root@${HOST}:~/cni-plugins/
done
# Verify
ssh node-0 ls -l /root
ssh node-1 ls -l /root
ssh node-0 ls -l /root/cni-plugins
ssh node-1 ls -l /root/cni-plugins
node-0
#
ssh root@node-0
-----------------------------------------------------------
pwd
ls -l
# Install the OS dependencies : The socat binary enables support for the kubectl port-forward command.
apt-get -y install socat conntrack ipset kmod psmisc bridge-utils
# Disable Swap : Verify if swap is disabled:
swapon --show
# Create the installation directories
mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
# Install the worker binaries:
mv crictl kube-proxy kubelet runc /usr/local/bin/
mv containerd containerd-shim-runc-v2 containerd-stress /bin/
mv cni-plugins/* /opt/cni/bin/
# Configure CNI Networking
## Create the bridge network configuration file:
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
cat /etc/cni/net.d/10-bridge.conf
## To ensure network traffic crossing the CNI bridge network is processed by iptables, load and configure the br-netfilter kernel module:
lsmod | grep netfilter
modprobe br-netfilter
echo "br-netfilter" >> /etc/modules-load.d/modules.conf
lsmod | grep netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/kubernetes.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
# Configure containerd : Install the containerd configuration files:
mkdir -p /etc/containerd/
mv containerd-config.toml /etc/containerd/config.toml
mv containerd.service /etc/systemd/system/
cat /etc/containerd/config.toml ; echo
version = 2
[plugins."io.containerd.grpc.v1.cri"] # enable the CRI plugin : kubelet talks to containerd through it
  [plugins."io.containerd.grpc.v1.cri".containerd] # containerd base runtime settings
    snapshotter = "overlayfs" # how container filesystem layers are managed : the Linux standard, performance-optimized
    default_runtime_name = "runc" # default OCI runtime : pods use runc unless they specify otherwise
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] # runc runtime details
      runtime_type = "io.containerd.runc.v2" # containerd's current runc shim
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] # runc options
        SystemdCgroup = true # containerd manages cgroups through systemd
  [plugins."io.containerd.grpc.v1.cri".cni] # CNI settings
    bin_dir = "/opt/cni/bin" # where the CNI plugin binaries live
    conf_dir = "/etc/cni/net.d" # where the CNI network configs live
# kubelet ↔ containerd connection flow
kubelet
↓ CRI (gRPC)
unix:///var/run/containerd/containerd.sock
↓
containerd CRI plugin
↓
runc
↓
Linux namespaces / cgroups
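# (Optional check, my addition) Once containerd is started below, the CRI endpoint can be queried directly:
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info | head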
# Configure the Kubelet : Create the kubelet-config.yaml configuration file:
mv kubelet-config.yaml /var/lib/kubelet/
mv kubelet.service /etc/systemd/system/
# Configure the Kubernetes Proxy
mv kube-proxy-config.yaml /var/lib/kube-proxy/
mv kube-proxy.service /etc/systemd/system/
# Start the Worker Services
systemctl daemon-reload
systemctl enable containerd kubelet kube-proxy
systemctl start containerd kubelet kube-proxy
# Verify
systemctl status kubelet --no-pager
systemctl status containerd --no-pager
systemctl status kube-proxy --no-pager
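# (Optional check, my addition) In iptables mode, kube-proxy programs a KUBE-SERVICES chain in the nat table:
iptables -t nat -L KUBE-SERVICES -n | head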
exit
-----------------------------------------------------------
# From the jumpbox, SSH to server and check the node info with kubectl
ssh server "kubectl get nodes node-0 -o yaml --kubeconfig admin.kubeconfig" | yq
ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-0 Ready <none> 2m48s v1.32.3 192.168.10.101 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-40-arm64 containerd://2.1.0-beta.0
ssh server "kubectl get pod -A --kubeconfig admin.kubeconfig"
node-1
#
ssh root@node-1
-----------------------------------------------------------
# Install the OS dependencies : The socat binary enables support for the kubectl port-forward command.
apt-get -y install socat conntrack ipset kmod psmisc bridge-utils
# Create the installation directories
mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
# Install the worker binaries:
mv crictl kube-proxy kubelet runc /usr/local/bin/
mv containerd containerd-shim-runc-v2 containerd-stress /bin/
mv cni-plugins/* /opt/cni/bin/
# Configure CNI Networking
## Create the bridge network configuration file:
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
cat /etc/cni/net.d/10-bridge.conf
## To ensure network traffic crossing the CNI bridge network is processed by iptables, load and configure the br-netfilter kernel module:
modprobe br-netfilter
echo "br-netfilter" >> /etc/modules-load.d/modules.conf
lsmod | grep netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/kubernetes.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
# Configure containerd : Install the containerd configuration files:
mkdir -p /etc/containerd/
mv containerd-config.toml /etc/containerd/config.toml
mv containerd.service /etc/systemd/system/
# Configure the Kubelet : Create the kubelet-config.yaml configuration file:
mv kubelet-config.yaml /var/lib/kubelet/
mv kubelet.service /etc/systemd/system/
# Configure the Kubernetes Proxy
mv kube-proxy-config.yaml /var/lib/kube-proxy/
mv kube-proxy.service /etc/systemd/system/
# Start the Worker Services
systemctl daemon-reload
systemctl enable containerd kubelet kube-proxy
systemctl start containerd kubelet kube-proxy
# Verify
systemctl status kubelet --no-pager
systemctl status containerd --no-pager
systemctl status kube-proxy --no-pager
exit
-----------------------------------------------------------
# From the jumpbox, SSH to server and check the node info with kubectl
ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-0 Ready <none> 93s v1.32.3 192.168.10.101 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-40-arm64 containerd://2.1.0-beta.0
node-1 Ready <none> 15s v1.32.3 192.168.10.102 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-40-arm64 containerd://2.1.0-beta.0
ssh server "kubectl get pod -A --kubeconfig admin.kubeconfig"
10 - Configuring kubectl for Remote Access
Set up kubectl on the jumpbox so the cluster we just built can be controlled remotely.
# The Admin Kubernetes Configuration File
# You should be able to ping server.kubernetes.local based on the /etc/hosts DNS entry from a previous lab.
curl -s --cacert ca.crt https://server.kubernetes.local:6443/version | jq
# Generate a kubeconfig file suitable for authenticating as the admin user:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
# The commands above create a kubeconfig at ~/.kube/config, the default location used by the kubectl command line tool.
# This also means kubectl commands can now be run without specifying a config.
# Check the version of the remote Kubernetes cluster:
kubectl version
# List the nodes in the remote Kubernetes cluster
kubectl get nodes -v=6
I0104 16:27:17.687800 2735 loader.go:402] Config loaded from file: /root/.kube/config
cat /root/.kube/config
kubectl get nodes -owide
kubectl get pod -A
11 - Provisioning Pod Network Routes
Edit the machines' routing tables by hand so pod traffic can flow between nodes.
# The Routing Table
# In this section you will gather the information required to create routes in the kubernetes-the-hard-way VPC network.
# Print the internal IP address and Pod CIDR range for each worker instance:
SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
echo $SERVER_IP $NODE_0_IP $NODE_0_SUBNET $NODE_1_IP $NODE_1_SUBNET
192.168.10.100 192.168.10.101 10.200.0.0/24 192.168.10.102 10.200.1.0/24
ssh server ip -c route
ssh root@server <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
ssh server ip -c route
ssh node-0 ip -c route
ssh root@node-0 <<EOF
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
ssh node-0 ip -c route
ssh node-1 ip -c route
ssh root@node-1 <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
EOF
ssh node-1 ip -c route
12 - Smoke Test
Create an nginx pod and verify that traffic actually flows.
Below is the layout for this test.
We enabled etcd encryption earlier; now confirm the data really is stored encrypted.
# Create a generic secret
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
# Verify
kubectl get secret kubernetes-the-hard-way
kubectl get secret kubernetes-the-hard-way -o yaml
kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' ; echo
kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 -d ; echo
# Print a hexdump of the kubernetes-the-hard-way secret stored in etcd
## etcdctl get … : reads an etcd key directly, bypassing the Kubernetes API (a very powerful access path)
## The actual etcd storage path for a Secret: /registry/<resource>/<namespace>/<name> -> /registry/secrets/default/kubernetes-the-hard-way
ssh root@server \
'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| # the etcd key name is always plaintext : the resource is identifiable
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 44 61 dc 08 37 97 eb |:v1:key1:Da..7..|
00000050 d4 d0 5b 14 39 23 a5 74 1b 3c a4 56 e4 a1 d1 17 |..[.9#.t.<.V....|
... # output proving the Kubernetes Secret is stored in etcd encrypted with AES-CBC
# k8s:enc : Kubernetes encryption format
# aescbc : the encryption algorithm (AES-CBC)
# v1 : the encryption provider version
# key1 : the name of the encryption key used
# everything after that is ciphertext
Create an nginx deployment and test that traffic actually flows.
# Deployments
# Create a deployment for the nginx web server:
kubectl get pod
kubectl create deployment nginx --image=nginx:latest
kubectl scale deployment nginx --replicas=2
kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-54c98b4f84-pxp6c 1/1 Running 0 108s 10.200.1.2 node-1 <none> <none>
nginx-54c98b4f84-qxpbn 1/1 Running 0 12s 10.200.0.2 node-0 <none> <none>
ssh node-0 crictl ps
ssh node-1 crictl ps
ssh node-0 pstree -ap
ssh node-1 pstree -ap
ssh node-0 brctl show
ssh node-1 brctl show
ssh node-0 ip addr # confirm a veth interface was created per pod
ssh node-1 ip addr # confirm a veth interface was created per pod
# From the server node, curl the pod IPs
ssh server curl -s 10.200.1.2 | grep title
ssh server curl -s 10.200.0.2 | grep title
# Port Forwarding
# Retrieve the full name of the nginx pod:
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
# Forward port 8080 on your local machine to port 80 of the nginx pod:
kubectl port-forward $POD_NAME 8080:80 &
ps -ef | grep kubectl
# In a new terminal make an HTTP request using the forwarding address:
curl --head http://127.0.0.1:8080
# Log
# Print the nginx pod logs
kubectl logs $POD_NAME
curl --head http://127.0.0.1:8080
kubectl logs $POD_NAME
# After checking, kill the port-forward
kill -9 $(pgrep kubectl)
# Exec
# Print the nginx version by executing the nginx -v command in the nginx container:
kubectl exec -ti $POD_NAME -- nginx -v
# Service
# Expose the nginx deployment using a NodePort service:
kubectl expose deployment nginx --port 80 --type NodePort
# Verify
kubectl get service,ep nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx NodePort 10.32.0.149 <none> 80:31410/TCP 10s
NAME ENDPOINTS AGE
endpoints/nginx 10.200.0.2:80,10.200.1.2:80 10s
# Retrieve the node port assigned to the nginx service:
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
echo $NODE_PORT
# Make an HTTP request using the IP address and the nginx node port:
curl -s -I http://node-0:${NODE_PORT}
curl -s -I http://node-1:${NODE_PORT}
3. Impressions
Installing each Kubernetes component by hand taught me more about the parts I didn't know well.
I had only a vague sense of RBAC and of each component's role, and this was a chance to dig into them further.
I didn't understand everything while following along, but I plan to repeat this lab several times and deepen my understanding.
The study group was a great opportunity and I learned a lot. I also heard of kind (Kubernetes in Docker) for the first time, though I haven't written about it here.
4. Things to Learn from This Study
- Understanding certificates
- Understanding the Kubernetes components
- Understanding miscellaneous Linux commands
