
Even after the Kubernetes certificates are renewed, a previously used kubeconfig keeps working as long as the client-certificate-data entry inside it has not expired. Below is how to check the expiry date of that data.

 

root@master:~/tmp $ cat ~/.kube/config # check the client-certificate-data value
root@master:~/tmp $ echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL~~~~~" | base64 --decode > client.crt
root@master:~/tmp $ ls
client.crt
root@master:~/tmp $ openssl x509 -in client.crt -noout -enddate
notAfter=Apr  7 13:49:20 2025 GMT
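The copy-paste step can be scripted into a single pipeline. A minimal sketch, assuming a GNU userland with `base64` and `openssl`; here a throwaway self-signed certificate stands in for the real value in `~/.kube/config`:

```shell
# Create a throwaway self-signed cert to play the role of the kubeconfig's
# client-certificate-data field (stand-in for the real ~/.kube/config value).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 30 \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
printf 'client-certificate-data: %s\n' "$(base64 -w0 /tmp/demo.crt)" > /tmp/demo-config

# The actual check: pull the field, decode it, print the expiry date.
# Prints a "notAfter=..." line.
grep 'client-certificate-data' /tmp/demo-config \
    | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate
```

Against a real cluster, point the `grep` at `~/.kube/config` instead of the demo file.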

 

 


 

During server maintenance, kubelet.service kept terminating whenever it was restarted. I checked the logs with journalctl -xeu kubelet, but a flood of irrelevant errors at the time wasted a lot of troubleshooting effort.

(While reading the logs I only looked for E (Error) lines and missed the F (Fatal) one.)

 

The error below turned out to be the reason kubelet would not restart, and readjusting kernel.panic resolved it.

Failed to start ContainerManager invalid kernel flag: kernel/panic, expected value: 10, actual value: 0

 

This problem does not occur when k8s is installed with kubeadm, but when k8s is installed with kubespray, kernel.panic=10 is applied by default.

The related kubelet option compares the kernel flags against expected values at startup and either passes or fails accordingly; someone had changed kernel.panic to 0, which caused the error.

 

To fix it, change the value of the kernel.panic option in /etc/sysctl.conf.

# vi /etc/sysctl.conf
kernel.panic=10
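The running kernel's value can be checked and corrected without a reboot. A sketch (the two write commands need root, so they are shown commented out):

```shell
# The value kubelet actually compares against is the live one in procfs:
cat /proc/sys/kernel/panic

# Fix it immediately and reload /etc/sysctl.conf so the change also
# survives a reboot (requires root):
# sysctl -w kernel.panic=10
# sysctl -p
```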

 

 

Notes.

When k8s is installed with kubespray, arguments are injected into the kubelet service file (/etc/systemd/system/kubelet.service).

One of these arguments appears to be what checks the applied kernel.panic value and makes kubelet either start normally or fail.

# cat /etc/systemd/system/kubelet.service

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=/usr/local/bin/kubelet \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBELET_API_SERVER \
                $KUBELET_ADDRESS \
                $KUBELET_PORT \
                $KUBELET_HOSTNAME \
                $KUBELET_ARGS \
                $DOCKER_SOCKET
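The mismatch check described above comes from kubelet's `--protect-kernel-defaults` flag, which (assumption) kubespray passes via `KUBELET_ARGS` in `/etc/kubernetes/kubelet.env`. A quick way to confirm is to grep the env file; shown here against a simulated file, since the real one only exists on a kubespray-managed node:

```shell
# Simulated kubelet.env line (sample data, not the real file on your node)
printf 'KUBELET_ARGS="--protect-kernel-defaults=true --node-labels="\n' > /tmp/kubelet.env

# On a real node, run this against /etc/kubernetes/kubelet.env instead.
# Prints: --protect-kernel-defaults=true
grep -o -- '--protect-kernel-defaults=[a-z]*' /tmp/kubelet.env
```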

 


 

This guide will be updated continuously. Please leave a comment with any questions about behavior or code.

 

As you use k8s, container images accumulate on the nodes; this is a CronJob that cleans them up.

The image the CronJob uses can be found on Docker Hub below (amd64 and arm64 architectures available).

https://hub.docker.com/r/pangyeons/image-prune

 


 

Current version - 1.2

 

Through options it can prune images using the crictl command as well as docker, and you can choose whether or not to prune the Control Plane nodes as well.

 

Usage is as follows.

 

1. Below is the base yaml file; the command array, mountPath, API_TOKEN, API_URL, KEY_NAME, and defaultMode are required.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-prune
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: image-prune
            image: pangyeons/image-prune:latest
            imagePullPolicy: IfNotPresent
            command: # do not modify or remove the command array below
            - /bin/sh
            - -c
            - chmod +x image_prune.sh; /image_prune.sh
            volumeMounts:
            - mountPath: /etc/sshkey # do not modify or remove
              name: secret-sshkey
            env:
            - name: API_TOKEN # do not modify or remove
              valueFrom:
                secretKeyRef:
                  key:
                  name:
            - name: API_URL # do not modify or remove
              value: ""
            - name: KEY_NAME # do not modify or remove
              value: ""
            - name: CRI_TYPE
              value: ""
            - name: CONTROL_PLANE
              value: ""
            - name: OS_USER
              value: ""
            - name: PORT
              value: "6443"
            - name: LOG_FILE
              value: ""
          restartPolicy: OnFailure
          volumes:
          - name: secret-sshkey
            secret:
              defaultMode: 0600 # do not modify or remove
              secretName:

 

2. Generate and register an ssh key

Generate an ssh key with ssh-keygen.

ssh-keygen -t rsa # e.g. creates id_rsa and id_rsa.pub

 

 

Register the resulting public key on every node.

# register id_rsa.pub
vi ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNbPyWARlsD1OmjgHcQAewXvmTbAJYAYMlRgjgUKu69uVyKB8ZS0n3KuLJy9JoTF4y/VOL5DTCU2TFb1A1eIhM4Ox5sPoNTWIG7h/crH

 

Register the generated ssh private key as a k8s secret.

kubectl create secret generic sshkey --from-file=privatekey=./id_rsa

 

 

3. Create an API token for the k8s API (used to identify currently Ready nodes and to distinguish Master/Worker nodes)

Create a Service Account for the API token and grant it the Role needed for the API queries.

vi test-token.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-token
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-nodes
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes-binding
subjects:
- kind: ServiceAccount
  name: test-token
  namespace: default
roleRef:
  kind: ClusterRole
  name: read-nodes
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: test-token-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: test-token

 

 

Retrieve the API token for the created account.

API_TOKEN=$(kubectl get secret test-token-secret -o jsonpath="{.data.token}" | base64 --decode)

 

4. Store the retrieved API token as a k8s secret

kubectl create secret generic apitoken --from-literal=apitoken=$API_TOKEN

 

5. Create the CronJob

- API_TOKEN: the apitoken Secret created above (key: apitoken, name: apitoken) - required
- API_URL: Control Plane API URL (e.g. 127.0.0.1) - required
- KEY_NAME: the ssh key Secret created above (privatekey) - required
- OS_USER: OS account used to connect to the nodes (e.g. user) - default: root
- CRI_TYPE: container runtime CLI, docker/crictl - default: docker
- CONTROL_PLANE: whether to prune the Control Plane too, true/false - default: true
- PORT: k8s API port (e.g. 6443) - default: 6443
- LOG_FILE: whether to save the deletion log to a file (/var/log/image-prune_yyyymmddhhMM.log), true/false - default: false

 

apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-prune
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: image-prune
            image: pangyeons/image-prune:latest
            imagePullPolicy: IfNotPresent
            command: # do not modify or remove the command array below
            - /bin/sh
            - -c
            - chmod +x image_prune.sh; /image_prune.sh
            volumeMounts:
            - mountPath: /etc/sshkey # do not modify or remove
              name: secret-sshkey
            env:
            - name: API_TOKEN # do not modify or remove
              valueFrom:
                secretKeyRef:
                  key: apitoken # the token created per the guide above
                  name: apitoken # the token created per the guide above
            - name: API_URL # do not modify or remove
              value: "172.1.1.1" # Control Plane API IP
            - name: KEY_NAME # the SSH key Secret created per the guide above
              value: "privatekey"
            - name: CRI_TYPE # when the container runtime is crictl
              value: "crictl"
            - name: CONTROL_PLANE # with false, the Control Plane is not pruned
              value: "false"
            - name: PORT
              value: "6443"
            - name: LOG_FILE
              value: "false"
          restartPolicy: OnFailure
          volumes:
          - name: secret-sshkey
            secret:
              defaultMode: 0600 # do not modify or remove
              secretName: sshkey # the SSH key Secret created per the guide above

 


 

I previously wrote a post on installing Kubernetes with Docker/containerd on Ubuntu 20.04,

(previous post - 2023.08.21 - [Develop/k8s] - Ubuntu 20.04 kubernetes(k8s) 설치)

but when I recently followed it as-is on Ubuntu 22.04 on a Mac M2 (Arm architecture), kube-apiserver kept terminating. It seems to be a cgroup configuration issue between k8s and containerd, so I took the opportunity to reinstall with cri-o and set up cgroups through it.

 

 

Everything runs as the root account, and steps up through 5 below are identical on the Master and Worker nodes.

1. Update packages and install prerequisites

apt-get update
apt-get install -y software-properties-common curl

 

 

2. Set the Kubernetes and cri-o download variables

These are unnecessary if you specify the version and path directly.

KUBERNETES_VERSION=v1.29
PROJECT_PATH=prerelease:/main

 

 

3. Configure the Kubernetes and cri-o repositories

# kubernetes
mkdir -p /etc/apt/keyrings # make sure the keyring directory exists
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list
    
# cri-o 
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/deb/ /" |
    tee /etc/apt/sources.list.d/cri-o.list

 

 

4. Install the packages

apt-get update
apt-get install -y cri-o kubelet kubeadm kubectl

 

cri-o is not running right after installation, so start the service.

systemctl start crio.service

 

 

5. Base configuration for the cluster to run

swapoff -a
modprobe br_netfilter
sysctl -w net.ipv4.ip_forward=1
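Note that these three settings do not survive a reboot on their own. A couple of read-only checks confirm they took effect (standard Linux procfs paths; persisting them via `/etc/sysctl.d/` and `/etc/modules-load.d/` is left commented as a sketch):

```shell
# Read-only verification of the prerequisites:
cat /proc/swaps                       # header line only = swap is off
cat /proc/sys/net/ipv4/ip_forward     # 1 after the sysctl -w above

# To persist across reboots (requires root; conventional paths):
#   echo br_netfilter > /etc/modules-load.d/k8s.conf
#   echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf
```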

 

 

6. kubernetes init (Master node only)

kubeadm init

 

When it completes, create the config file as the printed message instructs.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

On success it returns the values below; save them temporarily in a notepad.

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

 

 

7. Register nodes (Worker nodes only)

Run the kubeadm join command saved in step 6.

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

 

 

8. Verify from the Master

Run the command below on the Master.

kubectl get nodes -o wide

 

Confirm the nodes are connected and in Ready status.

 

 

9. Install cilium (Master node only)

# This machine is Arm, so the arm64 build; check GitHub and pick the build matching your architecture
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz

 

After downloading, extract it and run cilium install.

sudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin
cilium install

 

Confirm cilium is installed (it takes a few minutes for the pods to come up) and that the core pods are healthy.

 

 

 

When installing v1.28 or lower rather than v1.29, it appears cri-o must be installed separately first and then k8s installed in much the same way as before.

Reference - https://github.com/cri-o/cri-o/blob/main/install.md#installation-instructions

Installing cri-o v1.29 and above - https://github.com/cri-o/packaging/blob/main/README.md

Installing cri-o v1.28 and below - https://github.com/cri-o/cri-o/blob/main/install-legacy.md

cilium - https://kubernetes.io/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/


With the kubectl tool you can use a k8s cluster installed on another server from Windows, and when there are multiple k8s clusters you can choose which one to use.

 

1. Install kubectl

Open a Terminal or PowerShell on Windows and download kubectl with the command below.

curl.exe -LO "https://dl.k8s.io/release/v1.28.4/bin/windows/amd64/kubectl.exe"

 

After downloading, you can either add it to your Path or call kubectl directly from the folder it lives in.

# when the kubectl binary was downloaded to the C:\test folder
C:\test> ./kubectl

 

 

2. Copy the k8s config file

Copy the contents of the k8s config file on the server.

cat ~/.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.1.1:6443
  name: dev-k8s
contexts:
- context:
    cluster: dev-k8s
    user: pangyeon
  name: pangyeon-dev
current-context:  pangyeon-dev
kind: Config
preferences: {}
users:
- name: pangyeon
  user:
    client-certificate-data: LS0tLS1CRUdJTiBSUZJQDQWdtZ0F3SUJBZ0lJSGpubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkpkU3lrMnNPakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laS
EtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBS0tLQo=

 

 

3. Place the copied file and query

Either of the two methods below works.

 

3.1 Move it into your Windows user folder

Go to C:\Users\<your account>\, create a .kube folder, and move the config file into it.

 

If the file is at C:\Users\<your account>\.kube\config, it can be used right away.

# when the kubectl binary is in C:\test
PS C:\test> ./kubectl get pods -A
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

3.2 Keep the config file anywhere convenient and use it via kubectl's --kubeconfig option

# when the config file is in the C:\test folder
./kubectl get pods -A --kubeconfig=C:\test\config
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

4. When there are multiple k8s clusters

Register them all in the config file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJTC9XdXZlY1ZxL013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.1.1:6443
  name: dev-k8s
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJTC9XdXZlY1ZxL013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.2.1:6443
  name: stg-k8s
contexts:
- context:
    cluster: dev-k8s
    user: pangyeon-dev
  name: pangyeon-dev
- context:
    cluster: stg-k8s
    user: pangyeon-stg
  name: pangyeon-stg
current-context: pangyeon-dev
kind: Config
preferences: {}
users:
- name: pangyeon-dev
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJSGpubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMnRBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkp
kU3lrMnNPakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laSEtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- name: pangyeon-stg
  user:
    client-certificate-data: LS0tLS1CRUdJTiGSDKLNMFKGHtLS0tCk1JSURJVENDQWdubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJGSNRYJASQAWpNSUlFcFFJQkFBS0NBUUVBMnRBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkpkU3lrMnN
PakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laSEtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

 

 

5. Select a cluster with the use-context option.

PS C:\test> ./kubectl config use-context pangyeon-dev
Switched to context "pangyeon-dev".

 

 

6. Query after selecting.

./kubectl get pods -A
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

References

https://kubernetes.io/ko/docs/tasks/tools/install-kubectl-windows/

https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/


1. Delete all pods

# Delete all Pods in every namespace
kubectl delete pods --all --all-namespaces
kubectl delete pods --all -A

# Delete all Pods in a specific namespace
kubectl delete pods --all -n test

 

 

2. Delete pods matching a name pattern

Put the namespace in place of <namespace> — e.g. test

Put the name pattern to match in place of /application/ — e.g. /nginx/

kubectl get pods -n <namespace> --no-headers=true | awk '/application/{print $1}' | xargs kubectl delete -n <namespace> pod

# Delete every pod in the test namespace whose name contains nginx
kubectl get pods -n test --no-headers=true | awk '/nginx/{print $1}' | xargs kubectl delete -n test pod
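The awk filter in this pipeline can be sanity-checked without a cluster by feeding it simulated `kubectl get pods` output (the pod names below are hypothetical):

```shell
# Simulated output of `kubectl get pods -n test --no-headers=true` (made-up pod names)
sample="nginx-7d9c5b6f4-abc12   1/1   Running   0   5m
redis-6f8d9c7b5-xyz99   1/1   Running   0   5m
nginx-7d9c5b6f4-def34   1/1   Running   0   5m"

# Same filter as above: keep lines containing "nginx", print the pod name (column 1)
names=$(printf '%s\n' "$sample" | awk '/nginx/{print $1}')
echo "$names"
```

Only the two nginx pod names survive the filter; piping that list into `xargs kubectl delete -n test pod` then deletes exactly those pods and nothing else.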

 

 

References:

https://www.baeldung.com/linux/kubernetes-delete-all-pods

https://stackoverflow.com/questions/59473707/kubernetes-pod-delete-with-pattern-match-or-wildcard


 

Version

- Ubuntu 20.04

- Docker 24.0.5

- Kubernetes 1.28.0

 

※ 1 Master node, 2 Worker nodes

 

1. Start as the root user on every server.

sudo -i

 

2. Change the hostname on every server

If the hostnames are not changed, an error occurs when registering the nodes.

# Master server 1
hostnamectl set-hostname Master

# Worker server 1
hostnamectl set-hostname Node1

# Worker server 2
hostnamectl set-hostname Node2

 

3. Install Docker (on every Master and Worker node)

apt-get update
apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
# To install Docker (containerd included)
apt-get install docker-ce docker-ce-cli containerd.io

# To install only containerd
apt-get install containerd.io

After installing on every server, check the Docker version:

 docker version

 

4. Install Kubernetes (k8s) (run on every Master and Worker node)

swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
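The `sed` expression above comments out every `/etc/fstab` line containing `swap` so that swap stays disabled after a reboot. Its effect can be checked safely on a throwaway copy (the fstab entries below are made up):

```shell
# Work on a temporary copy instead of the real /etc/fstab
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same substitution as the guide: prefix lines containing "swap" with '#'
sed -i '/swap/s/^/#/' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Only the swap line gets the `#` prefix; the root filesystem entry is left untouched.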

 

Configure iptables bridge settings

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

 

Install kubectl, kubeadm, and kubelet

(As of 2024-03-12, configure the curl and deb commands by following https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/#how-to-migrate, since the legacy apt.kubernetes.io repository has been deprecated.)

apt-get update
apt-get install -y apt-transport-https ca-certificates curl

 

mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
systemctl daemon-reload
systemctl restart kubelet

 

5. kubeadm init (Master only)

Running kubeadm init fails with the error below:

unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService" 

Reference: https://kubernetes.io/docs/setup/production-environment/container-runtimes/

(See the Note under "Configuring the systemd cgroup driver".)

Perform only this step on every server (Master and Worker):

# Run this step on every server (Master and Worker)
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
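The `sed` call flips `SystemdCgroup` to `true` in the runc options of the CRI plugin. The substitution can be verified on a minimal sample of the generated config (this is only a fragment, not a full `config.toml`):

```shell
# Minimal fragment mimicking the relevant part of `containerd config default` output
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution as the guide, then count the updated lines
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$tmp"
flag=$(grep -c 'SystemdCgroup = true' "$tmp")
echo "$flag"
rm -f "$tmp"
```

On the real file, `grep SystemdCgroup /etc/containerd/config.toml` after the edit should likewise show `true` before restarting containerd.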

After that, run:

kubeadm init

On success, the command returns the output below; save it somewhere temporarily for later use.

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

Register the kubeconfig so the user can run kubectl:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

 

6. Install a Pod network (Master only)

The https://cloud.weave.works server is unreliable, so download from GitHub instead. As of this writing, 2.8.1 is the latest release.

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml

 

7. Register the Nodes (Workers only)

Run the kubeadm join command saved in step 5:

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

 

8. Verify on the Master

Run the command below on the Master:

kubectl get nodes -o wide

If you followed this guide, the node name in the output should now read master instead of ubuntu.

 

 

