kubeadm is the official CLI tool for initializing and configuring a Kubernetes cluster in a standard, consistent way. It is provided directly by the Kubernetes project and is designed to focus solely on cluster bootstrapping.
# Base Image https://portal.cloud.hashicorp.com/vagrant/discover/bento/rockylinux-10.0
BOX_IMAGE = "bento/rockylinux-10.0" # "bento/rockylinux-9"
BOX_VERSION = "202510.26.0"
N = 2 # max number of Worker Nodes
Vagrant.configure("2") do |config|
  # ControlPlane Nodes
  config.vm.define "k8s-ctr" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.box_version = BOX_VERSION
    subconfig.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--groups", "/K8S-Upgrade-Lab"]
      vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      vb.name = "k8s-ctr"
      vb.cpus = 4
      vb.memory = 3072 # 2048 2560 3072 4096
      vb.linked_clone = true
    end
    subconfig.vm.host_name = "k8s-ctr"
    subconfig.vm.network "private_network", ip: "192.168.10.100"
    subconfig.vm.network "forwarded_port", guest: 22, host: "60000", auto_correct: true, id: "ssh"
    subconfig.vm.synced_folder "./", "/vagrant", disabled: true
  end
  # Worker Nodes
  (1..N).each do |i|
    config.vm.define "k8s-w#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/K8S-Upgrade-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-w#{i}"
        vb.cpus = 2
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-w#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.10#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
    end
  end
end

The cluster runs Kubernetes v1.32.11, with the following components:

| Item | Version | k8s version compatibility |
|---|---|---|
| Rocky Linux | 10.0-1.6 | distribution built from RHEL 10 sources; refer to RHEL information |
| containerd | v2.1.5 | CRI Version(v1), supports k8s 1.32~1.35 - Link |
| runc | v1.3.3 | needs further research: https://github.com/opencontainers/runc |
| kubelet | v1.32.11 | see the k8s version policy docs - Docs |
| kubeadm | v1.32.11 | same as above |
| kubectl | v1.32.11 | same as above |
| helm | v3.18.6 | supports k8s 1.30.x ~ 1.33.x - Docs |
| flannel cni | v0.27.3 | k8s 1.28 and later - Release |
vagrant ssh k8s-ctr
# User info
whoami
id # uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)
pwd # /home/vagrant
# cpu, mem
lscpu
free -h
# Disk
lsblk # sda 8:0 0 64G 0 disk
df -hT
# Network
ip -br -c -4 addr # enp0s9 UP 192.168.10.100/24
ip -c route
ip addr
# Host Info, Kernel
hostnamectl # Kernel: Linux 6.12.0-55.39.1.el10_0.aarch64
uname -r
rpm -aq | grep release
rocky-release-10.0-1.6.el10.noarch
# Check the cgroup version
stat -fc %T /sys/fs/cgroup
cgroup2fs
findmnt
mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
## Inspect the systemd cgroup hierarchy
systemd-cgls --no-pager
# Process
pstree
lsns
All subsequent work will be done as root.
sudo su -
Time synchronization matters, so check the time settings first.
# Check timedatectl info
timedatectl status # RTC in local TZ: yes -> Warning:...
timedatectl set-local-rtc 0
timedatectl status
# Set the system timezone to Korea (KST, UTC+9): the system clock stays in UTC; only the display converts to KST
date
timedatectl set-timezone Asia/Seoul
date
# systemd is configured to manage the time-sync service (chronyd): chrony instead of ntpd (default on Rocky 9/10)
timedatectl status
timedatectl set-ntp true # System clock synchronized: yes -> NTP service: active
# Check with chronyc
# Shows which NTP servers chrony knows about and which one it is currently tracking.
## Stratum 2: a highly reliable server
## Reach 377: the last 8 polls were all answered (maximum value)
chronyc sources -v
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 211.108.117.211 2 6 377 9 -56us[ -31us] +/- 3253us
...
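The `Reach 377` value above is worth decoding: it is an octal bitmask of the last eight polls. A quick check with plain shell arithmetic (nothing chrony-specific) confirms it means 8/8 replies received:

```shell
# Reach is printed in octal: 377 (octal) = 255 (decimal) = 11111111 (binary),
# i.e. every one of the last 8 NTP polls got a response.
printf '%d\n' 0377    # 255 (leading 0 makes printf parse it as octal)

# Expand 255 into its 8 bits, most significant first.
bits=""
for i in 7 6 5 4 3 2 1 0; do bits="${bits}$(( (255 >> i) & 1 ))"; done
echo "$bits"          # 11111111
```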
# A summary report card of how accurate the system clock currently is
chronyc tracking
Reference ID : D36C75D3 (211.108.117.211)
...

# SELinux configuration : Kubernetes recommends Permissive
getenforce
sestatus # Current mode: enforcing
setenforce 0
getenforce
sestatus # Current mode: permissive
# Keep Permissive across reboots
cat /etc/selinux/config | grep ^SELINUX
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
cat /etc/selinux/config | grep ^SELINUX
# Disable firewalld (the firewall)
systemctl status firewalld
systemctl disable --now firewalld
systemctl status firewalld

# Disable swap
lsblk
free -h
free -h | grep Swap
swapoff -a
lsblk
free -h | grep Swap
# Keep swap disabled across reboots by removing the swap line from /etc/fstab
cat /etc/fstab | grep swap
sed -i '/swap/d' /etc/fstab
cat /etc/fstab | grep swap

lsmod
lsmod | grep -iE 'overlay|br_netfilter'
# Load kernel modules
modprobe overlay
modprobe br_netfilter
lsmod | grep -iE 'overlay|br_netfilter'
#
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
tree /etc/modules-load.d/
# Kernel parameters : networking - make bridged traffic pass through iptables
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
tree /etc/sysctl.d/
# Apply the settings
sysctl --system
# Verify
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward

# hosts configuration
cat /etc/hosts
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
cat << EOF >> /etc/hosts
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
EOF
cat /etc/hosts
# Verify
ping -c 1 k8s-ctr
ping -c 1 k8s-w1
ping -c 1 k8s-w2
In this lab we install the 2.x line because the config format changed: the syntax of /etc/containerd/config.toml differs between containerd 1.x and 2.x, so installing 2.x keeps the lab simple.
# dnf == yum; check versions
dnf
yum
dnf --version
yum --version
# Add the Docker repository : we will NOT install dockerd, only containerd
dnf repolist
tree /etc/yum.repos.d/
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf repolist
tree /etc/yum.repos.d/
cat /etc/yum.repos.d/docker-ce.repo
dnf makecache
# List every installable containerd.io version
dnf list --showduplicates containerd.io
Available Packages
containerd.io.aarch64 1.7.23-3.1.el10 docker-ce-stable
containerd.io.aarch64 1.7.24-3.1.el10 docker-ce-stable
containerd.io.aarch64 1.7.25-3.1.el10 docker-ce-stable
containerd.io.aarch64 1.7.26-3.1.el10 docker-ce-stable
containerd.io.aarch64 1.7.27-3.1.el10 docker-ce-stable
containerd.io.aarch64 1.7.28-1.el10 docker-ce-stable
containerd.io.aarch64 1.7.28-2.el10 docker-ce-stable
containerd.io.aarch64 1.7.29-1.el10 docker-ce-stable
containerd.io.aarch64 2.1.5-1.el10 docker-ce-stable
containerd.io.aarch64 2.2.0-2.el10 docker-ce-stable
containerd.io.aarch64 2.2.1-1.el10 docker-ce-stable
# Install containerd
dnf install -y containerd.io-2.1.5-1.el10
Downloading Packages:
containerd.io-2.1.5-1.el10.aarch64.rpm
# Verify the installed binaries
which runc && runc --version
which containerd && containerd --version
which containerd-shim-runc-v2 && containerd-shim-runc-v2 -v
which ctr && ctr --version
cat /etc/containerd/config.toml
tree /usr/lib/systemd/system | grep containerd
cat /usr/lib/systemd/system/containerd.service
# Generate the default config and enable SystemdCgroup (very important)
containerd config default | tee /etc/containerd/config.toml
version = 3 # when containerd is 2.0 or later
root = '/var/lib/containerd'
state = '/run/containerd'
...
# https://v1-32.docs.kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#cgroupfs-cgroup-driver
# The cgroupfs driver is kubelet's default cgroup driver.
# With the cgroupfs driver, kubelet and the container runtime interact directly with the cgroup filesystem to configure cgroups.
# The cgroupfs driver is NOT recommended when systemd is the init system,
# because systemd expects to be the sole cgroup manager on the system.
# Also, when using cgroup v2, use the systemd cgroup driver instead of cgroupfs.
# -----------------------------------------------------------
# https://github.com/containerd/containerd/blob/main/docs/cri/config.md
## In containerd 2.x
version = 3
[plugins.'io.containerd.cri.v1.images']
snapshotter = "overlayfs"
## In containerd 1.x
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "overlayfs"
# -----------------------------------------------------------
cat /etc/containerd/config.toml | grep -i systemdcgroup
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep -i systemdcgroup
# Re-read the systemd unit files
systemctl daemon-reload
# Start and enable containerd
systemctl enable --now containerd
#
systemctl status containerd --no-pager
journalctl -u containerd.service --no-pager
pstree -alnp
systemd-cgls --no-pager
# Check containerd's unix domain socket : used by kubelet and by the three containerd clients (ctr, nerdctl, crictl)
containerd config dump | grep -n containerd.sock
ls -l /run/containerd/containerd.sock
ss -xl | grep containerd
ss -xnp | grep containerd
# Check the plugins
ctr --address /run/containerd/containerd.sock version
ctr plugins ls
TYPE ID PLATFORMS STATUS
io.containerd.content.v1 content - ok # stores image layers
...
io.containerd.snapshotter.v1 native linux/arm64/v8 ok
io.containerd.snapshotter.v1 overlayfs linux/arm64/v8 ok # the default snapshotter for Kubernetes
io.containerd.snapshotter.v1 zfs linux/arm64/v8 skip
...
io.containerd.metadata.v1 bolt - ok # metadata DB (bolt)

We configured the Docker repository; containerd is distributed from that repository.
In the config, SystemdCgroup is switched to true (use the systemd cgroup driver instead of cgroupfs); the comments around that command explain why.
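As a sanity check, the sed one-liner used above can be exercised against a small sample fragment. The key path below mirrors the containerd 2.x config layout, but this is an illustration on a throwaway file, not the real /etc/containerd/config.toml:

```shell
# Abridged sample of the runc options section from a containerd 2.x config.
cat > /tmp/sample-config.toml <<'EOF'
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same edit as applied to /etc/containerd/config.toml above.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /tmp/sample-config.toml
grep -i systemdcgroup /tmp/sample-config.toml   # SystemdCgroup = true
```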


Among the plugins is the snapshotter; performance reportedly varies depending on which snapshotter plugin is chosen.

# Add the repo
## exclude=... : prevents an accidental dnf update from auto-upgrading kubelet
dnf repolist
tree /etc/yum.repos.d/
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
dnf makecache
# Install
## --disableexcludes=... : ignore the kubernetes repo's exclude rule for this install only (a one-shot option)
dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Installing:
kubeadm aarch64 1.32.11-150500.1.1 kubernetes 10 M
kubectl aarch64 1.32.11-150500.1.1 kubernetes 9.4 M
kubelet aarch64 1.32.11-150500.1.1 kubernetes 13 M
Installing dependencies:
cri-tools aarch64 1.32.0-150500.1.1 kubernetes 6.2 M
kubernetes-cni aarch64 1.6.0-150500.1.1 kubernetes 7.2 M
# Enable kubelet (it only actually starts after kubeadm init)
systemctl enable --now kubelet
ps -ef |grep kubelet
# Verify the installed binaries
which kubeadm && kubeadm version -o yaml
which kubectl && kubectl version --client=true
Client Version: v1.32.11
Kustomize Version: v5.5.0
which kubelet && kubelet --version
Kubernetes v1.32.11
# cri-tools
which crictl && crictl version
WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml"
# Write /etc/crictl.yaml
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info | jq
{
  "cniconfig": {
    "Networks": [
      {
        "Config": {
          "CNIVersion": "0.3.1",
          "Name": "cni-loopback",
          "Plugins": [
            {
              "Network": {
                "ipam": {},
                "type": "loopback"
              },
              "Source": "{\"type\":\"loopback\"}"
            }
          ],
          "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n \"type\": \"loopback\"\n}]\n}"
        },
        "IFName": "lo"
      }
    ],
    "PluginConfDir": "/etc/cni/net.d",
    "PluginDirs": [
      "/opt/cni/bin"
    ],
    "PluginMaxConfNum": 1,
    "Prefix": "eth"
  },
  ...
  "containerdEndpoint": "/run/containerd/containerd.sock",
  "containerdRootDir": "/var/lib/containerd",
  ...
  "status": {
    ...
    {
      "message": "Network plugin returns error: cni plugin not initialized",
      "reason": "NetworkPluginNotReady",
      "status": false,
      "type": "NetworkReady"
    },
# kubernetes-cni : check the CNI binaries used to build Pod networks
ls -al /opt/cni/bin
tree /opt/cni
/opt/cni
└── bin
├── bandwidth
├── bridge
├── portmap
...
tree /etc/cni/
/etc/cni/
└── net.d
#
systemctl is-active kubelet
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager
tree /usr/lib/systemd/system | grep kubelet -A1
├── kubelet.service
├── kubelet.service.d
│ └── 10-kubeadm.conf
cat /usr/lib/systemd/system/kubelet.service
cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
...
tree /etc/kubernetes
tree /var/lib/kubelet
cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
#
systemd-cgls --no-pager
lsns
# Check containerd's unix domain socket : used by kubelet and by the three containerd clients (ctr, nerdctl, crictl)
ls -l /run/containerd/containerd.sock
ss -xl | grep containerd
ss -xnp | grep containerd

`dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes` means "install kubelet, kubeadm, and kubectl, ignoring the kubernetes repo's exclude rule for this one transaction."
When installing Kubernetes with kubeadm, the following steps are executed. There really are a lot of them, so they are written down first:
- /etc/kubernetes/pki is created and the cluster PKI is generated there.
- /etc/kubernetes is created with the kubeconfig files: /etc/kubernetes/bootstrap-kubelet.conf, /etc/kubernetes/controller-manager.conf, /etc/kubernetes/scheduler.conf, /etc/kubernetes/admin.conf, and /etc/kubernetes/super-admin.conf.
- Static Pod manifests for the control plane are written; the Pods run in the kube-system namespace with tier:control-plane and component:{component-name} labels and the system-node-critical priority class. hostNetwork: true is set on all static Pods to allow control plane startup, and they expose /healthz or /livez endpoints.
- The configuration used is uploaded to the **kubeadm-config** ConfigMap (kubectl get cm -n kube-system kubeadm-config).
- The control-plane node gets the label node-role.kubernetes.io/control-plane="" and the taint node-role.kubernetes.io/control-plane:NoSchedule ← ordinary workloads are not scheduled onto the Control Plane.
- A cluster-info ConfigMap is created in the kube-public namespace, readable even by system:unauthenticated; it can be used later during kubeadm join.
- kube-proxy is created in the kube-system namespace. CoreDNS is deployed in the kube-system namespace behind a Service named kube-dns, for compatibility reasons with the legacy kube-dns addon; the coredns ServiceAccount is bound to the privileges in the system:coredns ClusterRole (installing kube-dns with kubeadm was removed).
# Save baseline environment info
crictl images
crictl ps
cat /etc/sysconfig/kubelet
tree /etc/kubernetes | tee -a etc_kubernetes-1.txt
tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
pstree -alnp | tee -a pstree-1.txt
systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
lsns | tee -a lsns-1.txt
ip addr | tee -a ip_addr-1.txt
ss -tnlp | tee -a ss-1.txt
df -hT | tee -a df-1.txt
findmnt | tee -a findmnt-1.txt
sysctl -a | tee -a sysctl-1.txt
# Write the kubeadm configuration file
cat << EOF > kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
- token: "123456.1234567890123456"
  ttl: "0s"
  usages:
  - signing
  - authentication
nodeRegistration:
  kubeletExtraArgs:
  - name: node-ip
    value: "192.168.10.100" # without this, 10.0.2.15 would be used
  criSocket: "unix:///run/containerd/containerd.sock"
localAPIEndpoint:
  advertiseAddress: "192.168.10.100"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "1.32.11"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/16"
EOF
cat kubeadm-init.yaml
#
kubeadm init --config="kubeadm-init.yaml"
[init] Using Kubernetes version: v1.32.11
...
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-ctr kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.003793571s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 3.004974627s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-ctr as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-ctr as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 123456.1234567890123456
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
# Check with crictl
crictl images
IMAGE TAG IMAGE ID SIZE
registry.k8s.io/coredns/coredns v1.11.3 2f6c962e7b831 16.9MB
registry.k8s.io/etcd 3.5.24-0 1211402d28f58 21.9MB
registry.k8s.io/kube-apiserver v1.32.11 58951ea1a0b5d 26.4MB
registry.k8s.io/kube-controller-manager v1.32.11 82766e5f2d560 24.2MB
registry.k8s.io/kube-proxy v1.32.11 dcdb790dc2bfe 27.6MB
registry.k8s.io/kube-scheduler v1.32.11 cfa17ff3d6634 19.2MB
registry.k8s.io/pause 3.10 afb61768ce381 268kB
crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
a04be00090580 dcdb790dc2bfe 26 seconds ago Running kube-proxy 0 1fd91b0a982bb kube-proxy-7w44b kube-system
b005f34739da5 82766e5f2d560 37 seconds ago Running kube-controller-manager 0 555d146c3ec07 kube-controller-manager-k8s-ctr kube-system
eb42b9c47fdce cfa17ff3d6634 37 seconds ago Running kube-scheduler 0 e649514d0a1b7 kube-scheduler-k8s-ctr kube-system
bbe8495d2a205 58951ea1a0b5d 37 seconds ago Running kube-apiserver 0 be25c00dd555c kube-apiserver-k8s-ctr kube-system
c00a944599500 1211402d28f58 37 seconds ago Running etcd 0 ce6b89dea28da etcd-k8s-ctr kube-system
# Set up kubeconfig
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config
# Verify
kubectl cluster-info
kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr NotReady control-plane 6m45s v1.32.11 192.168.10.100 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-668d6bf9bc-bmdjw 0/1 Pending 0 6m55s <none> <none> <none> <none>
coredns-668d6bf9bc-cbtn9 0/1 Pending 0 6m55s <none> <none> <none> <none>
etcd-k8s-ctr 1/1 Running 0 7m2s 192.168.10.100 k8s-ctr <none> <none>
kube-apiserver-k8s-ctr 1/1 Running 0 7m4s 192.168.10.100 k8s-ctr <none> <none>
kube-controller-manager-k8s-ctr 1/1 Running 0 7m2s 192.168.10.100 k8s-ctr <none> <none>
kube-proxy-zfr9d 1/1 Running 0 6m55s 192.168.10.100 k8s-ctr <none> <none>
kube-scheduler-k8s-ctr 1/1 Running 0 7m3s 192.168.10.100 k8s-ctr <none> <none>
# Check the coredns Service name : kube-dns
kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h26m
# The cluster-info ConfigMap is public : cluster-info is 'minimal trust-bootstrap data, available before identity is verified'
kubectl -n kube-public get configmap cluster-info
kubectl -n kube-public get configmap cluster-info -o yaml
kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' | grep certificate-authority-data | cut -d ':' -f2 | tr -d ' ' | base64 -d | openssl x509 -text -noout
curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
curl -s -k https://192.168.10.100:6443/api/v1/namespaces/default/pods # X
kubectl -n kube-public get role
kubectl -n kube-public get rolebinding
# Objects created at kubeadm init
- Namespace: kube-public
- ConfigMap: cluster-info
- Role + RoleBinding
>> subject: system:unauthenticated (unauthenticated users)
>> permission: get on configmaps/cluster-info
👉 Needed so a (worker) node that does not yet have cluster certificates can contact the API server for the first time (before kubeadm join) and fetch the minimum information (endpoint + CA).

Looking at the init log, you can see it proceeds exactly as described above.
In the middle, note the `kubeletExtraArgs` (node-ip) setting:

As the capture above shows, if the node IP were picked from the lowest-numbered interface, it would land in the 10.x.x.x range. We communicate over the 192.168.10.x network, which is why node-ip was set explicitly.

Why the Node is NotReady
For a Node to become Ready, Kubernetes requires all of the following conditions to be met.
Right now number 3 is missing (the network plugin: the NetworkReady=false / "cni plugin not initialized" status seen earlier).
The same goes for coredns, whose Pods stay Pending.

Also, curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info is reachable from outside:
the RBAC setup grants access even to anonymous users, and this was made deliberately so worker nodes can join the cluster later.
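With a live cluster you would inspect this via `kubectl get node k8s-ctr -o json`. The snippet below parses an abridged sample of that output instead (the field names match the real Node API; the message is the one seen in the crictl output earlier) to show where the NotReady reason surfaces:

```shell
# Abridged .status.conditions entry of a node whose CNI is not yet installed.
cat > /tmp/node-conditions.json <<'EOF'
{"type":"Ready","status":"False","reason":"KubeletNotReady",
 "message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
EOF

# This reason/message pair is what `kubectl describe node` prints for Ready.
grep -o '"reason":"[^"]*"' /tmp/node-conditions.json   # "reason":"KubeletNotReady"
grep -o 'cni plugin not initialized' /tmp/node-conditions.json
```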
#
echo "sudo su -" >> /home/vagrant/.bashrc
# Source the completion
source <(kubectl completion bash)
source <(kubeadm completion bash)
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'source <(kubeadm completion bash)' >> /etc/profile
kubectl get <tab twice>
# Alias kubectl to k
alias k=kubectl
complete -o default -F __start_kubectl k
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -o default -F __start_kubectl k' >> /etc/profile
k get node
# Install kubecolor : https://kubecolor.github.io/setup/install/
dnf install -y 'dnf-command(config-manager)'
dnf config-manager --add-repo https://kubecolor.github.io/packages/rpm/kubecolor.repo
dnf repolist
dnf install -y kubecolor
kubecolor get node
alias kc=kubecolor
echo 'alias kc=kubecolor' >> /etc/profile
kc get node
kc describe node
# Install Kubectx & Kubens
dnf install -y git
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
# Install Kubeps & Setting PS1
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
cat << "EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
# Exit the shell
exit
exit
# Reconnect
vagrant ssh k8s-ctr
whoami
pwd
kubectl config rename-context "kubernetes-admin@kubernetes" "HomeLab"
kubens default
# Install helm 3 : https://helm.sh/docs/intro/install
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | DESIRED_VERSION=v3.18.6 bash
helm version
# Install k9s : https://github.com/derailed/k9s
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
wget https://github.com/derailed/k9s/releases/latest/download/k9s_linux_${CLI_ARCH}.tar.gz
tar -xzf k9s_linux_*.tar.gz
ls -al k9s
chown root:root k9s
mv k9s /usr/local/bin/
chmod +x /usr/local/bin/k9s
k9s

# Check the cluster-wide Pod CIDR of the current k8s cluster
kc describe pod -n kube-system kube-controller-manager-k8s-ctr
...
Command:
kube-controller-manager
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
--service-cluster-ip-range=10.96.0.0/16
...
# Check each node's Pod CIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr 10.244.0.0/24
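The controller-manager (running with --allocate-node-cidrs=true) carves the /16 cluster CIDR into one /24 per node, which is why k8s-ctr got 10.244.0.0/24. A minimal sketch of that sequential slicing (illustration only; the real allocator lives inside kube-controller-manager):

```shell
cluster_cidr="10.244.0.0/16"
base=${cluster_cidr%.0.0/16}            # -> 10.244
# Each joining node receives the next /24 out of the /16.
for node_index in 0 1 2; do
  echo "node $node_index -> ${base}.${node_index}.0/24"
done
# node 0 -> 10.244.0.0/24
# node 1 -> 10.244.1.0/24
# node 2 -> 10.244.2.0/24
```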
# Deploying Flannel with Helm
# https://github.com/flannel-io/flannel/blob/master/Documentation/configuration.md
helm repo add flannel https://flannel-io.github.io/flannel
helm repo update
kubectl create namespace kube-flannel
cat << EOF > flannel.yaml
podCidr: "10.244.0.0/16"
flannel:
  cniBinDir: "/opt/cni/bin"
  cniConfDir: "/etc/cni/net.d"
  args:
  - "--ip-masq"
  - "--kube-subnet-mgr"
  - "--iface=enp0s9"
  backend: "vxlan"
EOF
helm install flannel flannel/flannel --namespace kube-flannel --version 0.27.3 -f flannel.yaml
# Verify
helm list -A
helm get values -n kube-flannel flannel
kubectl get ds,pod,cm -n kube-flannel -owide
kc describe cm -n kube-flannel kube-flannel-cfg
kc describe ds -n kube-flannel
...
Command:
/opt/bin/flanneld
--ip-masq
--kube-subnet-mgr
--iface=enp0s9
# Verify the flannel CNI binary was installed
ls -l /opt/cni/bin/
-rwxr-xr-x. 1 root root 2974540 Jan 17 01:35 flannel
...
# Check the CNI configuration
tree /etc/cni/net.d/
cat /etc/cni/net.d/10-flannel.conflist | jq
# After installing the CNI, confirm the conditions below are healthy
crictl info | jq
"status": {
  "conditions": [
    {
      "message": "",
      "reason": "",
      "status": true,
      "type": "RuntimeReady"
    },
    {
      "message": "",
      "reason": "",
      "status": true,
      "type": "NetworkReady"
    },
    {
      "message": "",
      "reason": "",
      "status": true,
      "type": "ContainerdHasNoDeprecationWarnings"
    }
  ]
}
}
# Confirm the coredns Pods are now running
kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-668d6bf9bc-bmdjw 1/1 Running 0 137m 10.244.0.3 k8s-ctr <none> <none>
coredns-668d6bf9bc-cbtn9 1/1 Running 0 137m 10.244.0.2 k8s-ctr <none> <none>
...
# Check network info
ip -c route | grep 10.244.
ip addr # look for cni0, flannel.1, vethY..
bridge link
lsns -t net
# Check iptables rules
iptables -t nat -S
iptables -t filter -S
iptables-save

During installation we added the "--iface=enp0s9" argument so flannel communicates over that interface.
# Confirm kubelet is active : it only actually started after kubeadm init
systemctl is-active kubelet
systemctl status kubelet --no-pager
# Check node info : ordinary workloads are not scheduled onto the Control Plane
kc describe node
Labels: ...
node-role.kubernetes.io/control-plane=
Taints: node-role.kubernetes.io/control-plane:NoSchedule
# Save baseline environment info again
cat /etc/sysconfig/kubelet
tree /etc/kubernetes | tee -a etc_kubernetes-2.txt
tree /var/lib/kubelet | tee -a var_lib_kubelet-2.txt
tree /run/containerd/ -L 3 | tee -a run_containerd-2.txt
pstree -alnp | tee -a pstree-2.txt
systemd-cgls --no-pager | tee -a systemd-cgls-2.txt
lsns | tee -a lsns-2.txt
ip addr | tee -a ip_addr-2.txt
ss -tnlp | tee -a ss-2.txt
df -hT | tee -a df-2.txt
findmnt | tee -a findmnt-2.txt
sysctl -a | tee -a sysctl-2.txt
# Compare the saved outputs (quit each diff with ':q' -> ':q') => investigate what each changed item does and why!
vi -d etc_kubernetes-1.txt etc_kubernetes-2.txt
vi -d var_lib_kubelet-1.txt var_lib_kubelet-2.txt
vi -d run_containerd-1.txt run_containerd-2.txt
vi -d pstree-1.txt pstree-2.txt
vi -d systemd-cgls-1.txt systemd-cgls-2.txt
vi -d lsns-1.txt lsns-2.txt
vi -d ip_addr-1.txt ip_addr-2.txt
vi -d ss-1.txt ss-2.txt
vi -d df-1.txt df-2.txt
vi -d findmnt-1.txt findmnt-2.txt
# kubelet runs with --protect-kernel-defaults=false, so its code applies these sysctl kernel parameters itself : see the link below
## if that flag were set to true instead, kubelet errors out whenever any of these tunable kernel parameters differs from its expected default
vi -d sysctl-1.txt sysctl-2.txt
kernel.panic = 0 -> 10 (changed)
kernel.panic_on_oops = 1 (unchanged)
vm.overcommit_memory = 0 -> 1 (changed)
vm.panic_on_oom = 0 (unchanged)
sysctl kernel.keys.root_maxkeys # 1000000 (unchanged)
sysctl kernel.keys.root_maxbytes # 25000000 # root_maxkeys * 25 (unchanged)
# kube-proxy also applies sysctl kernel parameters in its code : see the link below
net.nf_conntrack_max = 65536 -> 131072
net.netfilter.nf_conntrack_max = 65536 -> 131072
net.netfilter.nf_conntrack_count = 1 -> 282
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60 -> 3600
net.netfilter.nf_conntrack_tcp_timeout_established = 432000 -> 86400

These system kernel parameters are changed as a result.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-ctr kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-ctr localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
# Check kubeadm-config
kc describe cm -n kube-system kubeadm-config
...
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s # CA certificates: 10 years
certificateValidityPeriod: 8760h0m0s # leaf certificates: 1 year
certificatesDir: /etc/kubernetes/pki # certificate location
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
...
# Checks expiration for the certificates in the local PKI managed by kubeadm.
kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 16, 2027 14:33 UTC 364d ca no
apiserver Jan 16, 2027 14:33 UTC 364d ca no
apiserver-etcd-client Jan 16, 2027 14:33 UTC 364d etcd-ca no
apiserver-kubelet-client Jan 16, 2027 14:33 UTC 364d ca no
controller-manager.conf Jan 16, 2027 14:33 UTC 364d ca no
etcd-healthcheck-client Jan 16, 2027 14:33 UTC 364d etcd-ca no
etcd-peer Jan 16, 2027 14:33 UTC 364d etcd-ca no
etcd-server Jan 16, 2027 14:33 UTC 364d etcd-ca no
front-proxy-client Jan 16, 2027 14:33 UTC 364d front-proxy-ca no
scheduler.conf Jan 16, 2027 14:33 UTC 364d ca no
super-admin.conf Jan 16, 2027 14:33 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Jan 14, 2036 14:33 UTC 9y no
etcd-ca Jan 14, 2036 14:33 UTC 9y no
front-proxy-ca Jan 14, 2036 14:33 UTC 9y no
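The RESIDUAL TIME column above can be reproduced by hand with openssl. The sketch below does the same date arithmetic against a throwaway self-signed certificate rather than /etc/kubernetes/pki/apiserver.crt (assumes GNU date):

```shell
# Generate a 365-day throwaway cert as a stand-in for apiserver.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# notAfter -> epoch seconds -> days remaining.
end_date=$(openssl x509 -enddate -noout -in /tmp/demo.crt | cut -d= -f2)
days_left=$(( ($(date -d "$end_date" +%s) - $(date +%s)) / 86400 ))
echo "days until expiry: $days_left"   # ~364
```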
#
tree /etc/kubernetes/
tree /etc/kubernetes/pki
# CA certificate
cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
...
Issuer: CN=kubernetes
Validity
Not Before: Jan 16 14:28:05 2026 GMT
Not After : Jan 14 14:33:05 2036 GMT
Subject: CN=kubernetes
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment, Certificate Sign
X509v3 Basic Constraints: critical
CA:TRUE
# apiserver certificate : confirm the 'TLS Web Server' key usage
cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
...
Issuer: CN=kubernetes
Validity
Not Before: Jan 16 14:28:05 2026 GMT
Not After : Jan 16 14:33:05 2027 GMT
Subject: CN=kube-apiserver
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
40:6A:34:8B:CD:FC:94:4C:19:69:E6:0F:07:E3:4E:7B:29:2F:26:C6
X509v3 Subject Alternative Name:
DNS:k8s-ctr, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.10.100
# apiserver-kubelet-client certificate : confirm the 'TLS Web Client' key usage
cat /etc/kubernetes/pki/apiserver-kubelet-client.crt | openssl x509 -text -noout
Subject: O=kubeadm:cluster-admins, CN=kube-apiserver-kubelet-client
...
X509v3 Extended Key Usage:
TLS Web Client Authentication
# Check the remaining certificates the same way!
# Admin kubeconfigs
cat /etc/kubernetes/admin.conf
cat /etc/kubernetes/super-admin.conf
# kcm
cat /etc/kubernetes/controller-manager.conf
# scheduler
cat /etc/kubernetes/scheduler.conf
# kubelet
cat /etc/kubernetes/kubelet.conf
...
users:
- name: system:node:k8s-ctr
user:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
#
ls -l /var/lib/kubelet/pki
-rw-------. 1 root root 2822 Jan 17 09:34 kubelet-client-2026-01-17-09-34-41.pem
lrwxrwxrwx. 1 root root 59 Jan 17 09:34 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-01-17-09-34-41.pem
-rw-r--r--. 1 root root 2262 Jan 17 09:34 kubelet.crt
-rw-------. 1 root root 1679 Jan 17 09:34 kubelet.key
# kubelet server role: check Subject, Key Usage, SAN
cat /var/lib/kubelet/pki/kubelet.crt | openssl x509 -text -noout
Issuer: CN=k8s-ctr-ca@1768610081
Validity
Not Before: Jan 16 23:34:41 2026 GMT
Not After : Jan 16 23:34:41 2027 GMT
Subject: CN=k8s-ctr@1768610081
...
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Authority Key Identifier:
D5:FE:B1:E9:89:9F:46:A8:56:D4:4E:4B:01:7B:2B:49:44:FA:61:85
X509v3 Subject Alternative Name:
DNS:k8s-ctr
# kubelet client role: check Subject, Key Usage
cat /var/lib/kubelet/pki/kubelet-client-current.pem | openssl x509 -text -noout
Issuer: CN=kubernetes
Validity
Not Before: Jan 17 00:28:48 2026 GMT
Not After : Jan 17 00:33:48 2027 GMT
Subject: O=system:nodes, CN=system:node:k8s-ctr
...
X509v3 Extended Key Usage:
TLS Web Client Authentication
# Manifest directory for the static pods started by kubelet
tree /etc/kubernetes/manifests/
/etc/kubernetes/manifests/
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
└── kube-scheduler.yaml
# Check the kubelet configuration
cat /var/lib/kubelet/config.yaml
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
...
cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10"
# Check the static pods
kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
etcd-k8s-ctr 1/1 Running 0 4h9m 192.168.10.100 k8s-ctr <none> <none>
kube-apiserver-k8s-ctr 1/1 Running 0 4h9m 192.168.10.100 k8s-ctr <none> <none>
kube-controller-manager-k8s-ctr 1/1 Running 0 4h9m 192.168.10.100 k8s-ctr <none> <none>
kube-scheduler-k8s-ctr 1/1 Running 0 4h9m 192.168.10.100 k8s-ctr <none> <none>
...
# etcd: clients call https://192.168.10.100:2379; metrics are served on http://127.0.0.1:2381
tree /var/lib/etcd/
cat /etc/kubernetes/manifests/etcd.yaml
- --advertise-client-urls=https://192.168.10.100:2379
- --listen-client-urls=https://127.0.0.1:2379,https://192.168.10.100:2379
- --listen-metrics-urls=http://127.0.0.1:2381
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
...
# kube-apiserver
cat /etc/kubernetes/manifests/kube-apiserver.yaml
- command:
- kube-apiserver
# Listen https://<IP>:6443
- --advertise-address=192.168.10.100
- --secure-port=6443
# etcd client -> etcd server(https://127.0.0.1:2379)
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
# kubelet-client -> kubelet
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
# k8s service CIDR
- --service-cluster-ip-range=10.96.0.0/16
ss -tnlp | grep apiserver
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=6400,fd=3))
## API calls from inside the cluster use https://10.96.0.1 or https://kubernetes.default.svc.cluster.local
kubectl get svc,ep
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h26m
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.10.100:6443 4h26m
# scheduler
cat /etc/kubernetes/manifests/kube-scheduler.yaml
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
## Check the TCP 10259 listen port
ss -tnlp | grep scheduler
LISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=6397,fd=3))
## When there is more than one scheduler pod, check which one is the leader
## A Lease is a lightweight k8s coordination resource: leader election, node/component heartbeats, low-overhead (high-scale) status updates
kubectl get leases.coordination.k8s.io -n kube-system kube-scheduler -o yaml
kubectl get leases.coordination.k8s.io -n kube-system kube-scheduler
NAME HOLDER AGE
kube-scheduler k8s-ctr_7d815157-fdd5-4753-8a64-023d115d3704 4h31m
## Node heartbeats (node status): a namespace dedicated to node heartbeat Leases
kubectl get lease -n kube-node-lease
kubectl get lease -n kube-node-lease -o yaml
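The leader's identity lives in the Lease's spec.holderIdentity. A small sketch parsing it out of a sample Lease object — the values here are illustrative; on the live cluster you would feed it the output of `kubectl get lease -n kube-system kube-scheduler -o yaml`:

```shell
set -eu
# sample Lease (illustrative values, same shape as the real object)
cat > /tmp/lease-sample.yaml <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  holderIdentity: k8s-ctr_7d815157-fdd5-4753-8a64-023d115d3704
  leaseDurationSeconds: 15
EOF
# extract the holder; kubeadm-style holder IDs are <nodename>_<uuid>
HOLDER=$(sed -n 's/^ *holderIdentity: //p' /tmp/lease-sample.yaml)
echo "leader node: ${HOLDER%%_*}"
```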
# kube-controller-manager
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
- command:
- kube-controller-manager
# kcm bind address
- --bind-address=127.0.0.1
# Allocate a pod CIDR per node
- --allocate-node-cidrs=true
- --cluster-cidr=10.244.0.0/16
# k8s service CIDR
- --service-cluster-ip-range=10.96.0.0/16
# kcm controller
- --controllers=*,bootstrapsigner,tokencleaner
# Use a Lease for leader election
- --leader-elect=true
# Instead of every controller sharing kcm's single identity, each controller uses its own ServiceAccount + RBAC
- --use-service-account-credentials=true
## Check the TCP 10257 listen port
ss -tnlp | grep controller
LISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=6393,fd=3))
## Check per-node pod CIDRs
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr 10.244.0.0/24
## When there is more than one kcm pod, check which one is the leader
kubectl get lease -n kube-system kube-controller-manager -o yaml
kubectl get lease -n kube-system kube-controller-manager
## Each controller uses its own ServiceAccount + RBAC
kubectl get sa -n kube-system | grep controller
attachdetach-controller 0 4h44m
certificate-controller 0 4h44m
clusterrole-aggregation-controller 0 4h44m
...(생략)...
# Check CoreDNS
kc describe deploy -n kube-system coredns
kubectl get deploy -n kube-system coredns -owide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
coredns 2/2 2 2 4h46m coredns registry.k8s.io/coredns/coredns:v1.11.3 k8s-app=kube-dns
## The label still uses the legacy kube-dns name
kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-668d6bf9bc-cspcd 1/1 Running 0 4h48m 10.244.0.2 k8s-ctr <none> <none>
coredns-668d6bf9bc-gh225 1/1 Running 0 4h48m 10.244.0.3 k8s-ctr <none> <none>
##
kubectl get svc,ep -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h50m
NAME ENDPOINTS AGE
endpoints/kube-dns 10.244.0.2:53,10.244.0.3:53,10.244.0.2:53 + 3 more... 4h49m
## Check the Prometheus metrics endpoint
curl -s http://10.96.0.10:9153/metrics | head
## Check the ConfigMap
kc describe cm -n kube-system coredns
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30 {
disable success cluster.local
disable denial cluster.local
}
loop
reload
loadbalance
}
## Check the forward target
cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 168.126.63.1
nameserver 8.8.8.8
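The `forward . /etc/resolv.conf` line means any name outside cluster.local is relayed to the node's upstream resolvers. A sketch extracting those upstreams the way the plugin effectively does — it uses a sample file here; on a node you would read /etc/resolv.conf itself:

```shell
set -eu
# sample resolv.conf mirroring the node's file above
cat > /tmp/resolv-sample.conf <<'EOF'
# Generated by NetworkManager
nameserver 168.126.63.1
nameserver 8.8.8.8
EOF
# out-of-cluster queries are forwarded to these addresses in order
UPSTREAMS=$(awk '/^nameserver/{print $2}' /tmp/resolv-sample.conf)
echo "$UPSTREAMS"
```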
# Check kube-proxy
kubectl get ds -n kube-system -owide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 4h53m kube-proxy registry.k8s.io/kube-proxy:v1.32.11 k8s-app=kube-proxy
kc describe pod -n kube-system -l k8s-app=kube-proxy
kubectl get pod -n kube-system -l k8s-app=kube-proxy -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-proxy-7w44b 1/1 Running 0 4h54m 192.168.10.100 k8s-ctr <none> <none>
kc describe cm -n kube-system kube-proxy
bindAddress: 0.0.0.0
metricsBindAddress: ""
clusterCIDR: 10.244.0.0/16
conntrack:
maxPerCore: null
min: null
mode: "" # default mode: iptables
nodePortAddresses: null # NodePort services bind on all node interfaces (IPs)
portRange: "" # kube-proxy itself does not restrict the port range
##
ss -tnlp | grep kube-proxy
LISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:(("kube-proxy",pid=6631,fd=10)) # metrics port (metricsBindAddress default)
LISTEN 0 4096 *:10256 *:* users:(("kube-proxy",pid=6631,fd=9)) # health check port (healthz)
curl 127.0.0.1:10249/healthz ; echo
ok
# Try the endpoints: 10256 serves healthz, not /metrics, so the calls below 404
curl http://127.0.0.1:10256/metrics
curl http://192.168.10.100:10256/metrics
404 page not found
## Check the iptables rules
iptables -t nat -S
iptables -t filter -S
iptables-save
## Install the conntrack tools
dnf install -y conntrack-tools
conntrack -V
## Using the conntrack tool
conntrack -L # list all conntrack entries
conntrack -L -p tcp # TCP connections only
conntrack -L -p tcp --state ESTABLISHED # filter by a specific state
conntrack -L | grep dport=443 # connections involving a given port
conntrack -E # follow events in real time
## Key conntrack sysctl parameters
## nf_conntrack_max : maximum number of entries
## nf_conntrack_count : entries currently in use
## nf_conntrack_tcp_timeout_established : how long established TCP entries are kept
sysctl -a | grep conntrack
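The two counters above can be turned into a utilization figure; when the table approaches nf_conntrack_max, new connections get dropped with "nf_conntrack: table full" in dmesg. A sketch that falls back to sample values when nf_conntrack is not loaded (e.g. outside the lab VMs):

```shell
# read the counters; fall back to sample values outside the lab VM
MAX=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 262144)
CNT=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 1310)
PCT=$((CNT * 100 / MAX))
echo "conntrack: $CNT / $MAX entries ($PCT% used)"
```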
Apply the following pre-configuration on k8s-w1 and k8s-w2.
# Switch to root (login shell)
echo "sudo su -" >> /home/vagrant/.bashrc
sudo su -
# Time / NTP settings
timedatectl set-local-rtc 0
# Set the system timezone to Korea (KST, UTC+9): the system clock stays UTC; only the display changes
timedatectl set-timezone Asia/Seoul
# SELinux: Kubernetes recommends Permissive
setenforce 0
# Keep Permissive across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Disable firewalld
systemctl disable --now firewalld
# Disable swap
swapoff -a
# Remove the swap line from /etc/fstab so swap stays off after reboot
sed -i '/swap/d' /etc/fstab
# Load kernel modules
modprobe overlay
modprobe br_netfilter
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Kernel parameters (networking): make bridged traffic traverse iptables
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
sysctl --system >/dev/null 2>&1
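To double-check that the three parameters took effect, you can read them back. A sketch that falls back to the expected value 1 when sysctl or the keys are unavailable (e.g. outside the VM):

```shell
OUT=$(for key in net.bridge.bridge-nf-call-iptables \
                 net.bridge.bridge-nf-call-ip6tables \
                 net.ipv4.ip_forward; do
  # fall back to the expected value when run outside the lab VM
  val=$(sysctl -n "$key" 2>/dev/null || echo 1)
  echo "$key = $val"
done)
echo "$OUT"
```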
# /etc/hosts entries
sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
cat << EOF >> /etc/hosts
192.168.10.100 k8s-ctr
192.168.10.101 k8s-w1
192.168.10.102 k8s-w2
EOF
cat /etc/hosts
# Add the Docker repository
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install containerd
dnf install -y containerd.io-2.1.5-1.el10
# Generate the default config and enable SystemdCgroup (very important)
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Reload the systemd unit files
systemctl daemon-reload
# Start and enable containerd
systemctl enable --now containerd
# Add the Kubernetes repository
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install
dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable kubelet (it only starts for real after kubeadm init/join)
systemctl enable --now kubelet
# Write /etc/crictl.yaml
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
The join process consists of the following steps.
1. Using the bootstrap token, the node anonymously requests the kube-public/cluster-info ConfigMap from the API server.
2. It verifies the CA in that ConfigMap against the supplied hash (--discovery-token-ca-cert-hash sha256:xxxx), proving that the control plane it is joining is genuine.
3. The certificate signer confirms the CSR came in through a valid token and approves it automatically.
4. The node writes its /etc/kubernetes/kubelet.conf file.
5. kubelet starts with this configuration and registers itself with the API server as a "Node" resource.
6. kube-proxy and friends are deployed, and the node gets ready to report Ready.
# Save baseline environment info
crictl images
crictl ps
cat /etc/sysconfig/kubelet
tree /etc/kubernetes | tee -a etc_kubernetes-1.txt
tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
pstree -alnp | tee -a pstree-1.txt
systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
lsns | tee -a lsns-1.txt
ip addr | tee -a ip_addr-1.txt
ss -tnlp | tee -a ss-1.txt
df -hT | tee -a df-1.txt
findmnt | tee -a findmnt-1.txt
sysctl -a | tee -a sysctl-1.txt
# Write the kubeadm configuration file
NODEIP=$(ip -4 addr show enp0s9 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
echo $NODEIP
cat << EOF > kubeadm-join.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
bootstrapToken:
token: "123456.1234567890123456"
apiServerEndpoint: "192.168.10.100:6443"
unsafeSkipCAVerification: true
nodeRegistration:
criSocket: "unix:///run/containerd/containerd.sock"
kubeletExtraArgs:
- name: node-ip
value: "$NODEIP"
EOF
cat kubeadm-join.yaml
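The join config above skips CA verification (unsafeSkipCAVerification: true). The safer alternative is to pin the control plane's CA via the discovery hash, which is the SHA-256 of the CA certificate's DER-encoded public key. A sketch computing it — on a throwaway CA here; on k8s-ctr you would point it at /etc/kubernetes/pki/ca.crt:

```shell
set -eu
WORK=$(mktemp -d)
# throwaway CA standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -days 3650 -keyout "$WORK/ca.key" -out "$WORK/ca.crt" 2>/dev/null
# SHA-256 over the DER-encoded public key, the form kubeadm expects
HASH=$(openssl x509 -pubkey -in "$WORK/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
rm -rf "$WORK"
```

In the JoinConfiguration this value goes under discovery.bootstrapToken.caCertHashes as "sha256:&lt;hash&gt;", replacing unsafeSkipCAVerification.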
# join
kubeadm join --config="kubeadm-join.yaml"
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.164948ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
# Check with crictl
crictl images
crictl ps
# Confirm the cluster-info ConfigMap is reachable
curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
# Check the joined worker nodes
kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 6h58m v1.32.11 192.168.10.100 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w1 Ready <none> 2m29s v1.32.11 192.168.10.101 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w2 Ready <none> 2m29s v1.32.11 192.168.10.102 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
# Check per-node pod CIDRs
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1 10.244.1.0/24
k8s-w2 10.244.2.0/24
# Routes to the other nodes' pod CIDRs are added to the kernel routing table automatically: VXLAN routing via flannel.1
ip -c route | grep flannel
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
# From k8s-ctr, confirm 10.244.1.0 is reachable (over the VXLAN overlay)
ping -c 1 10.244.1.0
PING 10.244.1.0 (10.244.1.0) 56(84) bytes of data.
64 bytes from 10.244.1.0: icmp_seq=1 ttl=64 time=1.19 ms
# Check Taints on a worker node
kc describe node k8s-w1
Taints: <none>
# Check pods scheduled on k8s-w1
kubectl get pod -A -owide | grep k8s-w2
kubectl get pod -A -owide | grep k8s-w1
kube-flannel kube-flannel-ds-nhfhh 1/1 Running 0 4m3s 192.168.10.101 k8s-w1 <none> <none>
kube-system kube-proxy-7zczb 1/1 Running 0 4m3s 192.168.10.101 k8s-w1 <none> <none>
Check node information, environment info, and sysctl values.
# Confirm kubelet is active
systemctl status kubelet --no-pager
# Save baseline environment info
cat /etc/sysconfig/kubelet
tree /etc/kubernetes | tee -a etc_kubernetes-2.txt
/etc/kubernetes
├── kubelet.conf
├── manifests
└── pki
└── ca.crt
cat /etc/kubernetes/kubelet.conf
tree /var/lib/kubelet | tee -a var_lib_kubelet-2.txt
tree /run/containerd/ -L 3 | tee -a run_containerd-2.txt
pstree -alnp | tee -a pstree-2.txt
systemd-cgls --no-pager | tee -a systemd-cgls-2.txt
lsns | tee -a lsns-2.txt
ip addr | tee -a ip_addr-2.txt
ss -tnlp | tee -a ss-2.txt
df -hT | tee -a df-2.txt
findmnt | tee -a findmnt-2.txt
sysctl -a | tee -a sysctl-2.txt
# With --protect-kernel-defaults=false, kubelet adjusts the related sysctl kernel parameters itself
vi -d sysctl-1.txt sysctl-2.txt
kernel.panic changed 0 -> 10
vm.overcommit_memory changed 0 -> 1
# Compare the saved outputs (quit with ':q' then ':q') => look into what each changed item does!
vi -d etc_kubernetes-1.txt etc_kubernetes-2.txt
vi -d var_lib_kubelet-1.txt var_lib_kubelet-2.txt
vi -d run_containerd-1.txt run_containerd-2.txt
vi -d pstree-1.txt pstree-2.txt
vi -d systemd-cgls-1.txt systemd-cgls-2.txt
vi -d lsns-1.txt lsns-2.txt
vi -d ip_addr-1.txt ip_addr-2.txt
vi -d ss-1.txt ss-2.txt
vi -d df-1.txt df-2.txt
vi -d findmnt-1.txt findmnt-2.txt
# Add the Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# Create the values file
cat <<EOT > monitor-values.yaml
prometheus:
prometheusSpec:
scrapeInterval: "20s"
evaluationInterval: "20s"
externalLabels:
cluster: "myk8s-cluster"
service:
type: NodePort
nodePort: 30001
grafana:
defaultDashboardsTimezone: Asia/Seoul
adminPassword: prom-operator
service:
type: NodePort
nodePort: 30002
alertmanager:
enabled: true
defaultRules:
create: true
kubeProxy:
enabled: false
prometheus-windows-exporter:
prometheus:
monitor:
enabled: false
EOT
cat monitor-values.yaml
# Deploy
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 \
-f monitor-values.yaml --create-namespace --namespace monitoring
# Verify
helm list -n monitoring
kubectl get pod,svc,ingress,pvc -n monitoring
kubectl get prometheus,servicemonitors,alertmanagers -n monitoring
kubectl get crd | grep monitoring
# Open each web UI via NodePort
open http://192.168.10.100:30001 # prometheus
open http://192.168.10.100:30002 # grafana: login admin / prom-operator
# Check the Prometheus version
kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
prometheus, version 3.9.1
# Check the Grafana version
kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- grafana --version
grafana version 12.3.1
With the stack installed, add dashboards.
[Add K8S dashboards] Dashboard → New → Import → enter 15661 and 15757, Load ⇒ pick the Prometheus data source, then click Import
There was no data source yet, so I configured one as shown below.

Dashboard 15661 up and running.

Open Prometheus and click Status -> Target health.

Some targets are not being scraped, as shown below.

Prometheus scrapes from outside the node, but these components bind to loopback, so change the settings.
# Change kube-controller-manager bind-address 127.0.0.1 => 0.0.0.0
sed -i 's|--bind-address=127.0.0.1|--bind-address=0.0.0.0|g' /etc/kubernetes/manifests/kube-controller-manager.yaml
cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep bind-address
- --bind-address=0.0.0.0
# Change kube-scheduler bind-address 127.0.0.1 => 0.0.0.0
sed -i 's|--bind-address=127.0.0.1|--bind-address=0.0.0.0|g' /etc/kubernetes/manifests/kube-scheduler.yaml
cat /etc/kubernetes/manifests/kube-scheduler.yaml | grep bind-address
- --bind-address=0.0.0.0
# Add 192.168.10.100 to the etcd metrics URL (http) alongside 127.0.0.1
sed -i 's|--listen-metrics-urls=http://127.0.0.1:2381|--listen-metrics-urls=http://127.0.0.1:2381,http://192.168.10.100:2381|g' /etc/kubernetes/manifests/etcd.yaml
cat /etc/kubernetes/manifests/etcd.yaml | grep listen-metrics-urls
- --listen-metrics-urls=http://127.0.0.1:2381,http://192.168.10.100:2381
When these manifests change, kubelet detects it and restarts the static pods, after which scraping works.
Deploy a sample application.
# Deploy a sample application
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: webpod
spec:
replicas: 2
selector:
matchLabels:
app: webpod
template:
metadata:
labels:
app: webpod
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- sample-app
topologyKey: "kubernetes.io/hostname"
containers:
- name: webpod
image: traefik/whoami
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: webpod
labels:
app: webpod
spec:
selector:
app: webpod
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
EOF
Verify the application, then call it once per second in a loop.
# Verify the deployment
kubectl get deploy,svc,ep webpod -owide
# Store the webpod service ClusterIP in a variable
SVCIP=$(kubectl get svc webpod -o jsonpath='{.spec.clusterIP}')
echo $SVCIP
# Check connectivity
curl -s $SVCIP
curl -s $SVCIP | grep Hostname
# Repeated calls (new terminal)
while true; do curl -s $SVCIP | grep Hostname; sleep 1; done
# Check certificate expiration for the Kubernetes cluster
kc describe cm -n kube-system kubeadm-config | grep -i cert
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
# The current certs were issued at 00:34 UTC on Jan 17, so with a 365-day (1-year) validity they expire at 00:33 UTC on Jan 17, 2027.
kubeadm certs check-expiration -v 6
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
...
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 17, 2027 00:33 UTC 364d ca no
apiserver Jan 17, 2027 00:33 UTC 364d ca no
apiserver-etcd-client Jan 17, 2027 00:33 UTC 364d etcd-ca no
apiserver-kubelet-client Jan 17, 2027 00:33 UTC 364d ca no
controller-manager.conf Jan 17, 2027 00:33 UTC 364d ca no
etcd-healthcheck-client Jan 17, 2027 00:33 UTC 364d etcd-ca no
etcd-peer Jan 17, 2027 00:33 UTC 364d etcd-ca no
etcd-server Jan 17, 2027 00:33 UTC 364d etcd-ca no
front-proxy-client Jan 17, 2027 00:33 UTC 364d front-proxy-ca no
scheduler.conf Jan 17, 2027 00:33 UTC 364d ca no
super-admin.conf Jan 17, 2027 00:33 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Jan 15, 2036 00:33 UTC 9y no
etcd-ca Jan 15, 2036 00:33 UTC 9y no
front-proxy-ca Jan 15, 2036 00:33 UTC 9y no
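Besides kubeadm certs check-expiration, plain openssl can answer "does this cert expire within N days?" via its -checkend option. A sketch, demonstrated on a throwaway self-signed cert; point CRT at /etc/kubernetes/pki/apiserver.crt on a real control-plane node:

```shell
set -eu
WORK=$(mktemp -d)
# throwaway cert; on a real node use /etc/kubernetes/pki/apiserver.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -days 365 -keyout "$WORK/demo.key" -out "$WORK/demo.crt" 2>/dev/null
CRT="$WORK/demo.crt"
# -checkend N exits 0 if the cert is still valid N seconds from now
if openssl x509 -in "$CRT" -noout -checkend $((30*24*3600)) >/dev/null; then
  STATUS="OK: more than 30 days left"
else
  STATUS="WARN: expires within 30 days"
fi
echo "$STATUS"
rm -rf "$WORK"
```

Run per certificate, this is an easy building block for a cron-based expiry alert.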
tree /etc/kubernetes/pki/
ls -l /etc/kubernetes/pki/etcd/
ls -l /etc/kubernetes/pki
-rw-r--r--. 1 root root 1281 Jan 17 09:34 apiserver.crt
-rw-r--r--. 1 root root 1123 Jan 17 09:34 apiserver-etcd-client.crt
-rw-------. 1 root root 1679 Jan 17 09:34 apiserver-etcd-client.key
-rw-------. 1 root root 1675 Jan 17 09:34 apiserver.key
...
# apiserver certificate (example):
cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9019049356910942135 (0x7d2a199aea6457b7)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Jan 17 00:28:48 2026 GMT
Not After : Jan 17 00:33:48 2027 GMT
Subject: CN=kube-apiserver
...
kubeadm certs renew re-signs the control-plane certificates while keeping the CA, and regenerates the kubeconfig files. → The static pods must then be restarted!
Impact of renewal
The kubelet.conf file is not included (kubelet rotates its own client certificate). If you run a cluster with an HA control plane, this command must be executed on every control-plane node.
The command performs the renewal using the CA certificate and key stored in /etc/kubernetes/pki ⇒ the CA itself (cert/key, 10-year validity by default) is not regenerated.
/etc/kubernetes/pki/
├── **ca.crt / ca.key** (❌ not a renew target)
├── **etcd/**
│   ├── **ca.crt / ca.key** (❌ not a renew target)
After running the command, the control-plane Pods must be restarted.
Temporarily remove the manifests from the /etc/kubernetes/manifests/ directory and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct); kubelet then stops the static pods, and moving the files back recreates them with the new certificates.
touch /etc/kubernetes/manifests/*.yaml
watch -d crictl ps
kubeadm certs renew can renew a single certificate, or renew every certificate at once with the all subcommand.
# On renewal: the old cert is removed -> a new cert re-signed by the CA is generated
kubeadm certs renew apiserver
kubeadm certs renew etcd-server
kubeadm certs renew all
# Call the sample application repeatedly (new terminal)
SVCIP=$(kubectl get svc webpod -o jsonpath='{.spec.clusterIP}')
while true; do curl -s $SVCIP | grep Hostname; sleep 1; done
# Back up first: on every HA control-plane node
cp -r /etc/kubernetes/pki /etc/kubernetes/pki.backup.$(date +%F)
ls -l /etc/kubernetes/pki.backup.$(date +%F)
mkdir /etc/kubernetes/backup-conf.$(date +%F)
cp /etc/kubernetes/*.conf /etc/kubernetes/backup-conf.$(date +%F)
ls -l /etc/kubernetes/backup-conf.$(date +%F)
# Check certificate expiration
kubeadm certs check-expiration
# Renew all certificates: old certs are removed -> new certs re-signed by the CA are generated
kubeadm certs renew all
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
certificate embedded in the kubeconfig file for the super-admin renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
# Check certificate expiration
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 17, 2027 12:25 UTC 364d ca no
apiserver Jan 17, 2027 12:25 UTC 364d ca no
apiserver-etcd-client Jan 17, 2027 12:25 UTC 364d etcd-ca no
apiserver-kubelet-client Jan 17, 2027 12:25 UTC 364d ca no
controller-manager.conf Jan 17, 2027 12:25 UTC 364d ca no
etcd-healthcheck-client Jan 17, 2027 12:25 UTC 364d etcd-ca no
etcd-peer Jan 17, 2027 12:25 UTC 364d etcd-ca no
etcd-server Jan 17, 2027 12:25 UTC 364d etcd-ca no
front-proxy-client Jan 17, 2027 12:25 UTC 364d front-proxy-ca no
scheduler.conf Jan 17, 2027 12:25 UTC 364d ca no
super-admin.conf Jan 17, 2027 12:25 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Jan 15, 2036 00:33 UTC 9y no
etcd-ca Jan 15, 2036 00:33 UTC 9y no
front-proxy-ca Jan 15, 2036 00:33 UTC 9y no
# CA certs untouched; all the other certs newly generated
ls -lt /etc/kubernetes/pki/
-rw-r--r--. 1 root root 1119 Jan 17 21:25 front-proxy-client.crt
-rw-------. 1 root root 1675 Jan 17 21:25 front-proxy-client.key
-rw-r--r--. 1 root root 1176 Jan 17 21:25 apiserver-kubelet-client.crt
-rw-------. 1 root root 1675 Jan 17 21:25 apiserver-kubelet-client.key
-rw-r--r--. 1 root root 1123 Jan 17 21:25 apiserver-etcd-client.crt
-rw-------. 1 root root 1675 Jan 17 21:25 apiserver-etcd-client.key
-rw-r--r--. 1 root root 1281 Jan 17 21:25 apiserver.crt
-rw-------. 1 root root 1675 Jan 17 21:25 apiserver.key
-rw-------. 1 root root 1675 Jan 17 09:34 sa.key
-rw-------. 1 root root 451 Jan 17 09:34 sa.pub
drwxr-xr-x. 2 root root 162 Jan 17 09:34 etcd
-rw-r--r--. 1 root root 1123 Jan 17 09:34 front-proxy-ca.crt
-rw-------. 1 root root 1675 Jan 17 09:34 front-proxy-ca.key
-rw-r--r--. 1 root root 1107 Jan 17 09:34 ca.crt
-rw-------. 1 root root 1679 Jan 17 09:34 ca.key
ls -lt /etc/kubernetes/pki/etcd
-rw-r--r--. 1 root root 1196 Jan 17 21:25 server.crt
-rw-------. 1 root root 1675 Jan 17 21:25 server.key
-rw-r--r--. 1 root root 1196 Jan 17 21:25 peer.crt
-rw-------. 1 root root 1679 Jan 17 21:25 peer.key
-rw-r--r--. 1 root root 1123 Jan 17 21:25 healthcheck-client.crt
-rw-------. 1 root root 1679 Jan 17 21:25 healthcheck-client.key
-rw-r--r--. 1 root root 1094 Jan 17 09:34 ca.crt
-rw-------. 1 root root 1675 Jan 17 09:34 ca.key
# apiserver certificate
cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
Issuer: CN=kubernetes
Validity
Not Before: Jan 17 12:20:10 2026 GMT
Not After : Jan 17 12:25:10 2027 GMT
Subject: CN=kube-apiserver
# Confirm the control components' kubeconfigs were regenerated
ls -lt /etc/kubernetes/*.conf
-rw-------. 1 root root 5682 Jan 17 21:25 /etc/kubernetes/super-admin.conf
-rw-------. 1 root root 5626 Jan 17 21:25 /etc/kubernetes/scheduler.conf
-rw-------. 1 root root 5682 Jan 17 21:25 /etc/kubernetes/controller-manager.conf
-rw-------. 1 root root 5654 Jan 17 21:25 /etc/kubernetes/admin.conf
-rw-------. 1 root root 1974 Jan 17 09:34 /etc/kubernetes/kubelet.conf
# In particular, check that admin.conf changed
ls -l /etc/kubernetes/admin.conf
ls -l /etc/kubernetes/backup-conf.$(date +%F)/admin.conf
vi -d /etc/kubernetes/backup-conf.$(date +%F)/admin.conf /etc/kubernetes/admin.conf
After renewing the certificates, the output tells us to restart kube-apiserver and friends ourselves.

Restart the control-plane static pods & re-apply the admin.conf kubeconfig
# Back up the static pod manifests first
cp -r /etc/kubernetes/manifests /etc/kubernetes/manifests.backup.$(date +%F)
ls -l /etc/kubernetes/manifests.backup.$(date +%F)
# Monitor the static pods (new terminal)
watch -d crictl ps
# Delete the static pod manifests
rm -rf /etc/kubernetes/manifests/*.yaml
# Confirm the static pods are gone
crictl ps
# Copy the manifests back -> the pods restart
cp /etc/kubernetes/manifests.backup.$(date +%F)/*.yaml /etc/kubernetes/manifests
tree /etc/kubernetes/manifests
# Confirm the pods came up: the CA didn't change, so old certs would still be trusted >> but it's easy to lose track of an old cert's expiry, so renew them all together!
crictl ps
kubectl get pod -n kube-system -owide -v=6
# In particular, check that admin.conf changed
ls -l /root/.kube/config
ls -l /etc/kubernetes/admin.conf
ls -l /etc/kubernetes/backup-conf.$(date +%F)/admin.conf
vi -d /root/.kube/config /etc/kubernetes/admin.conf
vi -d /etc/kubernetes/backup-conf.$(date +%F)/admin.conf /etc/kubernetes/admin.conf
# Re-apply the admin.conf kubeconfig
yes | cp /etc/kubernetes/admin.conf ~/.kube/config ; echo
chown $(id -u):$(id -g) ~/.kube/config
kubectl config rename-context "kubernetes-admin@kubernetes" "HomeLab"
kubens default
The certificate renewal is done.
After renewal, the Kubernetes manifest files are removed briefly and then moved back: kubelet watches the manifest directory and recreates the static pods from it, so I removed the files, confirmed the pods were gone, and copied the files back to restart them.
The screenshot below shows the certificate status in Grafana.
Looking at the Grafana graph, you can see the connection drop briefly during the restart window.

As it happens, the host machine running Vagrant powered off, which caused a couple of issues.
First, swap kept turning itself back on after the reboot. I disabled it with swapoff -a and restarted kubelet.
On top of that, vagrant reload did not work.
The .bashrc contains a line that runs sudo su - to drop straight into root on login; during scp this emitted a "last login" banner that made Vagrant fail. If you need to reload, comment out that line before continuing the lab.