
To list the pods currently using a particular serviceAccount, query with the --field-selector option:

$ kubectl get pods -n kube-system --field-selector spec.serviceAccountName=cilium -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
cilium-2nrnz   1/1     Running   0          13m   172.16.1.1   worker1   <none>           <none>
cilium-ldsn5   1/1     Running   0          13m   172.16.1.2   worker2   <none>           <none>
cilium-pcm6x   1/1     Running   0          13m   172.16.1.3   master    <none>           <none>

 

Besides serviceAccountName, many other fields can be used as selectors; see the field selector docs:

https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/field-selectors/

 

 


 

Scaling out a k8s cluster requires a kubeadm token.

kubeadm tokens have an expiration date and disappear after a certain period, so they need to be recreated.

There are roughly two approaches: create a token and then use it, or generate the token together with the full join command in one step.

 

1. If a token already exists, it can be listed with the kubeadm token list command

kubeadm token list

 

Use an existing token, or if there is none (or it has expired), create a new one:

kubeadm token create

root@master:~# kubeadm token create
ck9j53.uiwl5qd5s9vwdevzv
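kubeadm tokens follow the fixed format `[a-z0-9]{6}.[a-z0-9]{16}`, so a quick shell check can catch copy-paste truncation before running join. The token below is a made-up sample, not one from this post:

```shell
# Validate the kubeadm token format: 6 chars, a dot, then 16 chars (a-z0-9).
TOKEN="abcdef.0123456789abcdef"   # made-up sample token
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format ok"
else
  echo "token format invalid"
fi
```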

 

 

Retrieve the discovery-token-ca-cert-hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

root@master:~# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c8158df76056620db625ba08a4faaf47f795e4bf0517d6a296657dc6f59192e3
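The pipeline depends only on openssl, so it can be sanity-checked against any certificate; the digest should always come out as 64 hex characters. A throwaway self-signed cert (generated here purely for the check, with assumed /tmp paths) works:

```shell
# Generate a throwaway self-signed cert, then run the same digest pipeline
# that is used against /etc/kubernetes/pki/ca.crt above.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test-ca" \
  -keyout /tmp/test-ca.key -out /tmp/test-ca.crt -days 1 2>/dev/null
HASH=$(openssl x509 -pubkey -in /tmp/test-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$HASH" | grep -Eq '^[0-9a-f]{64}$' && echo "hash format ok"
```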

 

 

With both values in hand, add the new node using kubeadm join:

kubeadm join 127.0.0.1:6443 --token ck9j53.uiwl5qd5s9vwdevzv \
--discovery-token-ca-cert-hash sha256:c8158df76056620db625ba08a4faaf47f795e4bf0517d6a296657dc6f59192e3

 

 

2. Print the join command directly with --print-join-command

kubeadm token create --print-join-command

root@master:~# kubeadm token create --print-join-command
kubeadm join 127.0.0.1:6443 --token ae46ls.ggne5zqvxtyv2153 --discovery-token-ca-cert-hash sha256:c8158df76056620db625ba08y4fabf37f715e4af0517d6x296657dc6f59194e2

 


 

While working on a server, kubelet.service kept dying on every restart. I checked the logs with journalctl -xeu kubelet, but a flood of unrelated errors at the time made the problem take far longer to solve than it should have.

(While scanning the logs I was only looking at E(rror) lines and missed the F(atal) one.)

 

The error below turned out to be the reason kubelet would not start, and readjusting kernel.panic resolved it.

Failed to start ContainerManager invalid kernel flag: kernel/panic, expected value: 10, actual value: 0

 

This problem does not occur when k8s is installed with kubeadm, but when it is installed with kubespray, kernel.panic=10 is set by default.

This flag is one of the values kubelet validates at startup, passing or failing depending on whether the actual value matches the expected one; someone had changed kernel.panic to 0, which triggered the error.

 

The fix is to change the kernel.panic value in /etc/sysctl.conf:

# vi /etc/sysctl.conf
kernel.panic=10
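After editing the file, apply it with sysctl -p (as root) and read the live value back; kubelet checks the runtime value, not the file:

```shell
# Apply /etc/sysctl.conf without a reboot (run as root): sysctl -p
# Then read the live value back; with the kubespray default this should be 10.
cat /proc/sys/kernel/panic
```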

 

 

Note.

When k8s is installed with kubespray, kubelet's arguments are injected through the service file (/etc/systemd/system/kubelet.service).

The configured kernel.panic flag appears to be checked through these arguments, which makes kubelet either start normally or fail.

# cat /etc/systemd/system/kubelet.service

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.env
ExecStart=/usr/local/bin/kubelet \
                $KUBE_LOGTOSTDERR \
                $KUBE_LOG_LEVEL \
                $KUBELET_API_SERVER \
                $KUBELET_ADDRESS \
                $KUBELET_PORT \
                $KUBELET_HOSTNAME \
                $KUBELET_ARGS \
                $DOCKER_SOCKET

 


 

This guide will be updated continuously. For questions about behavior or code, please leave a comment.

 

When running k8s, container images pile up on the nodes over time; this is a CronJob that cleans them up.

The image used by the CronJob can be found on Docker Hub below (amd64 and arm64 architectures supported).

https://hub.docker.com/r/pangyeons/image-prune

 


 

Current version: 1.1

 

Via its options it can prune images with either the docker or the crictl command,

and you can choose whether or not the Control Plane nodes are pruned as well.

 

Usage is as follows.

 

1. Below is the base yaml file; the command array, mountPath, API_TOKEN, API_URL, KEY_NAME, and defaultMode are required.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-prune
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: image-prune
            image: pangyeons/image-prune:1.1
            imagePullPolicy: IfNotPresent
            command: # do not modify or remove the command array below
            - /bin/sh
            - -c
            - chmod +x image_prune.sh; /image_prune.sh
            volumeMounts:
            - mountPath: /etc/sshkey # do not modify or remove
              name: secret-sshkey
            env:
            - name: API_TOKEN # do not modify or remove
              valueFrom:
                secretKeyRef:
                  key:
                  name:
            - name: API_URL # do not modify or remove
              value: ""
            - name: KEY_NAME # do not modify or remove
              value: ""
            - name: CRI_TYPE
              value: ""
            - name: CONTROL_PLANE
              value: ""
            - name: OS_USER
              value: ""
            - name: PORT
              value: "6443"
          restartPolicy: OnFailure
          volumes:
          - name: secret-sshkey
            secret:
              defaultMode: 0600 # do not modify or remove
              secretName:

 

2. Generate and register an ssh key

Generate an ssh key with ssh-keygen:

ssh-keygen -t rsa # e.g. creates id_rsa, id_rsa.pub

 

 

After generation, register the resulting public key on every node:

# register id_rsa.pub
vi ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNbPyWARlsD1OmjgHcQAewXvmTbAJYAYMlRgjgUKu69uVyKB8ZS0n3KuLJy9JoTF4y/VOL5DTCU2TFb1A1eIhM4Ox5sPoNTWIG7h/crH

 

Register the generated ssh private key as a k8s secret:

kubectl create secret generic sshkey --from-file=privatekey=./id_rsa

 

 

3. Create an API token for calling the k8s API (used to find nodes that are currently Ready and to distinguish Master/Worker nodes)

Create a Service Account for the API token and grant it the Role needed for the API queries:

vi test-token.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-token
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-nodes
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes-binding
subjects:
- kind: ServiceAccount
  name: test-token
  namespace: default
roleRef:
  kind: ClusterRole
  name: read-nodes
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: test-token-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: test-token

 

 

Retrieve the API token for the account just created:

API_TOKEN=$(kubectl get secret test-token-secret -o jsonpath="{.data.token}" | base64 --decode)
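The trailing `| base64 --decode` is needed because every value under a Secret's `.data` field is stored base64-encoded. The round trip can be reproduced locally (the token string below is a made-up placeholder):

```shell
# Simulate how a token value is stored in .data and read back out.
SAMPLE_TOKEN="eyJhbGciOiJSUzI1NiJ9.made-up.payload"            # placeholder, not a real token
ENCODED=$(printf '%s' "$SAMPLE_TOKEN" | base64 | tr -d '\n')   # form stored in the Secret
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)            # form handed to the CronJob
[ "$DECODED" = "$SAMPLE_TOKEN" ] && echo "round trip ok"
```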

 

4. Store the generated API token as a k8s secret

kubectl create secret generic apitoken --from-literal=apitoken=$API_TOKEN

 

5. Create the CronJob

The environment variables used by the CronJob are:

API_TOKEN      the apitoken secret created above       key: apitoken / name: apitoken   required
API_URL        Control Plane API URL                   e.g. 127.0.0.1                   required
KEY_NAME       key name of the ssh key secret above    e.g. privatekey                  required
OS_USER        OS account used to ssh into the nodes   e.g. user                        default: root
CRI_TYPE       container runtime CLI                   docker/crictl                    default: docker
CONTROL_PLANE  also prune the Control Plane            true/false                       default: true
PORT           k8s API port                            e.g. 6443                        default: 6443

 

apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-prune
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: image-prune
            image: pangyeons/image-prune:1.1
            imagePullPolicy: IfNotPresent
            command: # do not modify or remove the command array below
            - /bin/sh
            - -c
            - chmod +x image_prune.sh; /image_prune.sh
            volumeMounts:
            - mountPath: /etc/sshkey # do not modify or remove
              name: secret-sshkey
            env:
            - name: API_TOKEN # do not modify or remove
              valueFrom:
                secretKeyRef:
                  key: apitoken # the token created in the guide above
                  name: apitoken # the token created in the guide above
            - name: API_URL # do not modify or remove
              value: "172.1.1.1" # Control Plane API IP
            - name: KEY_NAME # the SSH KEY secret created in the guide above
              value: "privatekey"
            - name: CRI_TYPE # when the container runtime is crictl
              value: "crictl"
            - name: CONTROL_PLANE # when false, the Control Plane is not pruned
              value: "false"
            - name: PORT
              value: "6443"
          restartPolicy: OnFailure
          volumes:
          - name: secret-sshkey
            secret:
              defaultMode: 0600 # do not modify or remove
              secretName: sshkey # the SSH KEY secret created in the guide above

 


 

After building an image on Alpine Linux to run a shell script, the Pod failed with a

No such file or directory error.

 

It turned out the #!/bin/bash line at the top of the shell script was the problem;

on Alpine Linux the shebang must be #!/bin/sh.

#!/bin/bash -> #!/bin/sh
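The difference is easy to reproduce: the shebang names the interpreter, and the kernel fails with "No such file or directory" when that path does not exist (as /bin/bash does not on stock Alpine). A minimal check using /bin/sh, which exists on both:

```shell
# Write a one-line script with a /bin/sh shebang and run it.
TMP=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$TMP"
chmod +x "$TMP"
"$TMP"        # with #!/bin/bash instead, this fails on Alpine
rm -f "$TMP"
```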

 

If you do need bash, just install it:

apk add bash

 


 

When installing cri-o, the service would not start, failing with the error below:

validating runtime config: cannot enable checkpoint/restore support without the criu binary in $PATH

 

Simply installing criu fixes it; searching the apt list turned up the following two packages:

criu/jammy 3.16.1-2 arm64

golang-github-checkpoint-restore-go-criu-dev/jammy 5.1.0-1 all

 

So I installed both (one alone might have been enough):

apt-get install criu/jammy
apt-get install golang-github-checkpoint-restore-go-criu-dev/jammy

# restart cri-o after installing
systemctl restart crio.service

 

 


 

I previously wrote a post about installing Kubernetes on Ubuntu 20.04 with Docker/containerd,

(previous post - 2023.08.21 - [Develop/k8s] - Installing kubernetes(k8s) on Ubuntu 20.04)

but recently, following the same steps on Ubuntu 22.04 on a Mac M2 (Arm architecture), kube-apiserver

kept terminating. It is probably a cgroup configuration mismatch between k8s and containerd,

so I took the opportunity to reinstall using cri-o and set up cgroups with it.

 

 

Everything below is run as the root account, and steps 1 through 5 are performed identically on the Master and Worker nodes.

1. Update packages and install prerequisites

apt-get update
apt-get install -y software-properties-common curl

 

 

2. Set the Kubernetes and cri-o download paths

This step is unnecessary if you specify the version and path directly.

KUBERNETES_VERSION=v1.29
PROJECT_PATH=prerelease:/main

 

 

3. Configure the Kubernetes and cri-o repositories

# kubernetes
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list
    
# cri-o 
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/$PROJECT_PATH/deb/ /" |
    tee /etc/apt/sources.list.d/cri-o.list

 

 

4. Install the packages

apt-get update
apt-get install -y cri-o kubelet kubeadm kubectl

 

cri-o is not running after installation, so start the service:

systemctl start crio.service

 

 

5. Base configuration for running the cluster

swapoff -a
modprobe br_netfilter
sysctl -w net.ipv4.ip_forward=1
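These three commands only change the running system and are lost on reboot. To persist them, the conventional places are the config files below (paths are the usual Linux ones; adjust as needed):

```
# /etc/fstab: comment out the swap entry so swap stays off

# /etc/modules-load.d/k8s.conf
br_netfilter

# /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
```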

 

 

6. kubernetes init (Master node only)

kubeadm init

 

When it completes, create the config file as the output message instructs.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

On success it returns the following; save it temporarily in a notepad:

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

 

 

7. Register nodes (Worker nodes only)

Run the kubeadm join command saved in step 6:

kubeadm join 192.x.x.x:6443 --token c5l89v.9ao1r5texepx06d8 \
	--discovery-token-ca-cert-hash sha256:50cb3eaxe334612e81c2342790130801afd70ddb9967a06bb0b202141748354f

 

 

8. Verify from the Master

Run the following command on the Master:

kubectl get nodes -o wide

 

Confirm that the nodes show up as Ready.

 

 

9. Install cilium (Master node only)

# This machine is Arm, so the arm64 build is downloaded; check GitHub and pick the build for your architecture
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz

 

After downloading, extract it and run cilium install:

sudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin
cilium install

 

Verify the cilium installation (it takes a few minutes for the pods to come up) and check that the core pods are healthy.

 

 

 

To install v1.28 or below rather than v1.29, it seems cri-o has to be installed separately and k8s installed in a way similar to before.

Reference - https://github.com/cri-o/cri-o/blob/main/install.md#installation-instructions

cri-o v1.29 and above - https://github.com/cri-o/packaging/blob/main/README.md

cri-o v1.28 and below - https://github.com/cri-o/cri-o/blob/main/install-legacy.md

cilium - https://kubernetes.io/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/


 

All resources can be listed with the kubectl api-resources command.

root@master:/# kubectl api-resources
NAME                              SHORTNAMES         APIVERSION                             NAMESPACED   KIND
bindings                                             v1                                     true         Binding
componentstatuses                 cs                 v1                                     false        ComponentStatus
configmaps                        cm                 v1                                     true         ConfigMap
endpoints                         ep                 v1                                     true         Endpoints
events                            ev                 v1                                     true         Event
limitranges                       limits             v1                                     true         LimitRange
namespaces                        ns                 v1                                     false        Namespace
nodes                             no                 v1                                     false        Node
persistentvolumeclaims            pvc                v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv                 v1                                     false        PersistentVolume
pods                              po                 v1                                     true         Pod
podtemplates                                         v1                                     true         PodTemplate
replicationcontrollers            rc                 v1                                     true         ReplicationController
resourcequotas                    quota              v1                                     true         ResourceQuota
secrets                                              v1                                     true         Secret
serviceaccounts                   sa                 v1                                     true         ServiceAccount
services                          svc                v1                                     true         Service
mutatingwebhookconfigurations                        admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                      admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds           apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                          apiregistration.k8s.io/v1              false        APIService
controllerrevisions                                  apps/v1                                true         ControllerRevision
daemonsets                        ds                 apps/v1                                true         DaemonSet
deployments                       deploy             apps/v1                                true         Deployment
replicasets                       rs                 apps/v1                                true         ReplicaSet
statefulsets                      sts                apps/v1                                true         StatefulSet
applications                      app,apps           argoproj.io/v1alpha1                   true         Application
applicationsets                   appset,appsets     argoproj.io/v1alpha1                   true         ApplicationSet
appprojects                       appproj,appprojs   argoproj.io/v1alpha1                   true         AppProject
selfsubjectreviews                                   authentication.k8s.io/v1               false        SelfSubjectReview
tokenreviews                                         authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                            authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                             authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                              authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                                 authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa                autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj                 batch/v1                               true         CronJob
jobs                                                 batch/v1                               true         Job
certificatesigningrequests        csr                certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                               coordination.k8s.io/v1                 true         Lease
endpointslices                                       discovery.k8s.io/v1                    true         EndpointSlice
events                            ev                 events.k8s.io/v1                       true         Event
flowschemas                                          flowcontrol.apiserver.k8s.io/v1beta3   false        FlowSchema
prioritylevelconfigurations                          flowcontrol.apiserver.k8s.io/v1beta3   false        PriorityLevelConfiguration
ingressclasses                                       networking.k8s.io/v1                   false        IngressClass
ingresses                         ing                networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol             networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                       node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb                policy/v1                              true         PodDisruptionBudget
clusterrolebindings                                  rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                         rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                         rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                                rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc                 scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                           storage.k8s.io/v1                      false        CSIDriver
csinodes                                             storage.k8s.io/v1                      false        CSINode
csistoragecapacities                                 storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc                 storage.k8s.io/v1                      false        StorageClass
volumeattachments                                    storage.k8s.io/v1                      false        VolumeAttachment

 

 

Used well, this command lets you back up the yaml files for every resource.

#!/bin/sh

DIR=/k8s-bak/resources # backup path

mkdir -p ${DIR}

for NS in $(kubectl get ns --no-headers | awk '{ print $1 }') # list every Namespace
do
    mkdir -p ${DIR}/${NS} # create a folder per Namespace
    for RESOURCE in $(kubectl api-resources --no-headers | awk '{ print $1 }') # list resources
    do
        kubectl get ${RESOURCE} -n ${NS} -o yaml > ${DIR}/${NS}/${RESOURCE}.yaml # back up the YAML
    done
done
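One caveat with the script: cluster-scoped resources (nodes, storageclasses, and so on) get dumped into every namespace folder as well, as the directory listing below shows. A variant restricted to namespaced resources can be sketched as a shell function; kubectl must be on PATH when the function is actually invoked:

```shell
# Back up only namespaced resources for every namespace.
# Defined as a function, so the kubectl calls run only when it is invoked.
backup_namespaced() {
    DIR=${1:?usage: backup_namespaced <backup-dir>}
    for NS in $(kubectl get ns --no-headers | awk '{ print $1 }')
    do
        mkdir -p "${DIR}/${NS}"
        # --namespaced=true filters out cluster-scoped kinds such as nodes
        for RESOURCE in $(kubectl api-resources --namespaced=true --no-headers | awk '{ print $1 }')
        do
            kubectl get "${RESOURCE}" -n "${NS}" -o yaml > "${DIR}/${NS}/${RESOURCE}.yaml"
        done
    done
}
```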

 

Execution result:

root@master:~# ./bak.yaml

root@master:/k8s-bak/resources# ls
argocd  default  ingress-nginx  kube-node-lease  kube-public  kube-system

root@master:/k8s-bak/resources# cd argocd/

root@master:/k8s-bak/resources/argocd# ls
apiservices.yaml                 mutatingwebhookconfigurations.yaml
applicationsets.yaml             namespaces.yaml
applications.yaml                networkpolicies.yaml
appprojects.yaml                 nodes.yaml
bindings.yaml                    persistentvolumeclaims.yaml
certificatesigningrequests.yaml  persistentvolumes.yaml
clusterrolebindings.yaml         poddisruptionbudgets.yaml
clusterroles.yaml                pods.yaml
componentstatuses.yaml           podtemplates.yaml
configmaps.yaml                  priorityclasses.yaml
controllerrevisions.yaml         prioritylevelconfigurations.yaml
cronjobs.yaml                    replicasets.yaml
csidrivers.yaml                  replicationcontrollers.yaml
csinodes.yaml                    resourcequotas.yaml
csistoragecapacities.yaml        rolebindings.yaml
customresourcedefinitions.yaml   roles.yaml
daemonsets.yaml                  runtimeclasses.yaml
deployments.yaml                 secrets.yaml
endpointslices.yaml              selfsubjectaccessreviews.yaml
endpoints.yaml                   selfsubjectreviews.yaml
events.yaml                      selfsubjectrulesreviews.yaml
flowschemas.yaml                 serviceaccounts.yaml
horizontalpodautoscalers.yaml    services.yaml
ingressclasses.yaml              statefulsets.yaml
ingresses.yaml                   storageclasses.yaml
jobs.yaml                        subjectaccessreviews.yaml
leases.yaml                      tokenreviews.yaml
limitranges.yaml                 validatingwebhookconfigurations.yaml
localsubjectaccessreviews.yaml   volumeattachments.yaml

With the kubectl tool you can use a k8s cluster installed on another server from Windows,

and when there are multiple k8s clusters you can pick which one to use.

 

1. Install kubectl

Open a Terminal or Powershell window on Windows and download kubectl with the command below:

curl.exe -LO "https://dl.k8s.io/release/v1.28.4/bin/windows/amd64/kubectl.exe"

 

After downloading, you can either add it to your Path or call kubectl directly from the folder where it lives.

# when the kubectl binary was downloaded to the test folder on the C drive
C:\test> ./kubectl

 

 

2. Copy the k8s config file

Copy the contents of the k8s config file on the server:

cat ~/.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.1.1:6443
  name: dev-k8s
contexts:
- context:
    cluster: dev-k8s
    user: pangyeon
  name: pangyeon-dev
current-context:  pangyeon-dev
kind: Config
preferences: {}
users:
- name: pangyeon
  user:
    client-certificate-data: LS0tLS1CRUdJTiBSUZJQDQWdtZ0F3SUJBZ0lJSGpubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkpkU3lrMnNPakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laS
EtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBS0tLQo=

 

 

3. Move the copied file and use it

Either of the two methods below works.

 

3.1 Move it into your Windows account folder

Go to C:\Users\<your account>\, create a .kube folder, and move the config file into it.

 

If the file is at C:\Users\<your account>\.kube\config, it can be used right away.

# when the kubectl binary is in the test folder on the C drive
PS C:\test> ./kubectl get pods -A
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

3.2 Keep the config file anywhere convenient and pass it with the kubectl --kubeconfig option

# when the config file is in the test folder on the C drive
./kubectl get pods -A --kubeconfig=C:\test\config
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

4. If you have multiple k8s clusters

Register them all in the config file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJTC9XdXZlY1ZxL013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.1.1:6443
  name: dev-k8s
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJTC9XdXZlY1ZxL013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHpNekE0TVRneE16VTJOVE5hTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURmZTZSREhXZE90Q25qczZnNHFESVprZHFzR2ZRUEE5MXlWYS9LN3AwcTZIRllvRzRQcGZHMkdUeFIKWi9HeXFqU3ZIS01KYWo1WUIvdHJ1TEtrMnoreTFGZXp0eHJFc1JkVi96UXlNOENrcGZVd1FhV2dNMGhSUTF3NgpKcTZGOHhsMlBSaDFlRjJ1eG9YT0pPOFFGam1EV2lUWVQrNEhTL0dRRE5hTlYvNlAxOXllM2VNQWhyZndQeU4xCkJKWXNwRnBTRFJLMTNpNG1IYTk3ZS9WMU9iQkFaQlJNZHhYc3FsaHQvaDR3UWFaRGF4N05tb2huVC9QOGlJTzYKamdtMC82cDBiRmhaaU5nam10TFkwdW9sd2R5Z1JyV1laSCtsclJ6b3FmbW0wUzRieEwrNC9lUXVCeFNlYzBmQwozWjlTaFJPaitiVkt6MFdmRG5PUEQ4S29kTmJqQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUU0x6QVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUJpSXNhK0EvNgpkdzIzWVNhTm8wU0FaQ3ZUckc1eE1wVWtpRktDcXd0eXU5OFhOR29JNWlhY3pyMllsS2ZueGh6eVNzSFJuWTBNCnZUT3k3aUp2ZGhseldqUlUvcGZMS0hRMVZuSkZ6ZzQ1NDhrdE5MUHQ3eVFTL25HelJDZnJhZGhzc0FKTzJMc1cKTnRxVm5kM3hibVpFTDREamM4ZWNmYVJZMHJTMm9yQ3RoZzZQZFZhMTdQaWlWTk1zS0hGSnJpMVRPdEY5VzdKYQpNQkRLN2ttL2syZThnRmJPYzhlZHc5VGVmS09QTFBLenN6NXVzbWRCNnVyNnZxRFlUYWhtVi9UV2NrQW04aFd6ClU4bkxDcEE1VVllV3hkbTBJSGU2L0NOUmNEeXQzUEtOQTlMZm5zeHcxZjh5c24zSElSSTc2REhyVE9OK3lQUjQKRFpacjI5YjRsWkVtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.2.1:6443
  name: stg-k8s
contexts:
- context:
    cluster: dev-k8s
    user: pangyeon-dev
  name: pangyeon-dev
- context:
    cluster: stg-k8s
    user: pangyeon-stg
  name: pangyeon-stg
current-context: pangyeon-dev
kind: Config
preferences: {}
users:
- name: pangyeon-dev
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJSGpubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMnRBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkp
kU3lrMnNPakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laSEtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- name: pangyeon-stg
  user:
    client-certificate-data: LS0tLS1CRUdJTiGSDKLNMFKGHtLS0tCk1JSURJVENDQWdubTVYZWxsbk13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNE1qRXhNelV4TlROYUZ3MHlOREE0TWpBeE16VTJOVFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0QVJiWnBiUmNsdFZXRjAKaldFVCt4ZlJ6NC9naG0rYlhCM3Y1clphVzNNSkxxT0JaSGNaZUJPTEFCTmtvbk5nK3BQTVRndmNQSzQ1TGlBRgo3a0orbXg5UWVUbm0va2U5Rm1FQSswNDRIMlFmczFQOTk1ZWRrZHdZZ1RBNzhUWHc5MnNkWUZ4ZjJKNHpEV3YvCjJiSjkxR0ozSUhMYTNzeGwxRlEvS1Q3ZmFYa0U1Sk5Id283ZStYYVdBTmNET0hUT1Z5SjBYTzdEamJZeHpvZ2YKRVZxeWxEYjlrM0JmZlFHNmdNZGVtR251NGZJNHMybFlUZWtzVGZBdE1Eb0VzYk1TWVpsaGh5VXl4U0wvTGc5MApQSzJTMUFoZUhuYXArT1FKUEczaHh0cW1vKzltY0RDYlZKbkx5d2g3ejBnVWtmNXYzcnpLOE8wamxMSVBSTlAzCmtwNEF4UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSNHR3RnVGY2JTN3VxZXhPcFdCdTEreTFUUwpMekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBc3MzUVpGT3V2Nmk1MGsrNTlrU252cUxuSDhNZkNod0ZxSHdZCmVESThQTUVXQXUvdDZ4Ty9CSUFhNm1QOVNvUk9WbHcwejRaRmVMeVIvU0tRQjZINXZjbzJGanpNUFNXaGVsdEYKdjBqTGdleTVjemkvVnJJUkNQNEttRjhqZ1JVMnJPRUsxblB4Tm5jOGp3d1NDamdrSmp6THNhRjBEVng3bjI1NwpBUER3b3NMMGNPMXA1OVVHOEZmWXNCUVhmZDZpZm9vb0VmVjJLSEdyZkZ1WVlqNmNhQjQ0ZjZEVWF0bmIrcXNZCk00VWd0dDhpRklKUEdwQlBIMGlGWjQ2R0dVbFZ0NGw5cFhSRVRQVEQ0K0txekM1UjIvbHJQRkdoVnpJUUFwWlkKYnRHeUI1ejlhRFB0UWJSSTNUaloxTHY2em1HUkk3OUNvSDJ6ZnJmN0svdHBTMk9JSVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJGSNRYJASQAWpNSUlFcFFJQkFBS0NBUUVBMnRBUmJacGJSY2x0VldGMGpXRVQreGZSejQvZ2htK2JYQjN2NXJaYVczTUpMcU9CClpIY1plQk9MQUJOa29uTmcrcFBNVGd2Y1BLNDVMaUFGN2tKK214OVFlVG5tL2tlOUZtRUErMDQ0SDJRZnMxUDkKOTVlZGtkd1lnVEE3OFRYdzkyc2RZRnhmMko0ekRXdi8yYko5MUdKM0lITGEzc3hsMUZRL0tUN2ZhWGtFNUpOSAp3bzdlK1hhV0FOY0RPSFRPVnlKMFhPN0RqYll4em9nZkVWcXlsRGI5azNCZmZRRzZnTWRlbUdudTRmSTRzMmxZClRla3NUZkF0TURvRXNiTVNZWmxoaHlVeXhTTC9MZzkwUEsyUzFBaGVIbmFwK09RSlBHM2h4dHFtbys5bWNEQ2IKVkpuTHl3aDd6MGdVa2Y1djNyeks4TzBqbExJUFJOUDNrcDRBeFFJREFRQUJBb0lCQVFDTGxKcnBmY09uZXR5QgowSThXK014VUtsZXV2aXNOMXZnV0JRclo4NDBrTlBld2hxQ3R3OE85YzBvQ0hGemZ2QllyQWtrYnFEa3ZoRHY1CmpuZjZDdlRVWTE5a1ZXbGkzOFJoR0RRV0cwbDF6TnJqL0RwUHpLbTVOOXR4M2FEL045ZWxIUEU2WFBMUExldUgKTGxPaFBWbERPQ1NoMEdLS0tYenp1MkluSDNKSXhyby9XYk12MmkwdWtVcEllTkdta1JMZ1VDR09zL25qNmpjdAowOHl2ZHpZUVdwQW5QcHRvZ2RQbTgrUzJHVHBXWktlOUhEbDRqd3NpWVRVZW9IYnpRWWJBUys1SzRmdkJmUGN5CmUyZmU5ZGUxL3lpc1h1b3VHcmt1cGs1ejgveXN6eGJTVlZ2ZlhlWmhjV2dub0xrOEJ2RkY4WC9KV0ZIWE5oUisKY3JhUEg2bkJBb0dCQU81Z2FtVDdzQjhCUWJaQVR1VXlDMk13ZHkzV0l6K21NKzdyVFFLU3htR296VnJlOVhUcgpyN0dub1J4dnpVUWxkWk1HNWQ5RTB5ZFBqOFMzeW81NG16LzZOaW5iQnBaT1NDN2N4eHlOTnNRMFY4TWZxRVEwCnB4T094K1FJaG9GRC9vRXFlVmhvSFYvQ2IzOEN3K1p5VVlyejdxZk40WEZkQ2ovYTMrVnR5S0ROQW9HQkFPcjkKWVp4aU1PaXkzR0pJVlJ1SzY1N005QWsyZFF3blJDeWZTbUZXNzdHMTdRc1RWQy9uVXVueHRMNGNiVEhaRndVMgo2dXVIUUg2QUNMS2p1ek03djJ3MERsZUNlbG1TKzFaelZvS2I2Mmc2S3pUZXZ2bWhrczI0Vkxtc1BMT0lta1pGCndHSmJoT1lFcDhXcktZalZQNzJXSmg4bXhLVFBUSVhFNTZKSzZIL1pBb0dCQU0yQXUxaHhqdVU3M1IyMGxRK00KTkRyL3hrN3l4QktVUTBOZkFWWU5tUThLU25kanJYSnQyVnFyNi80cStHZ2VieDBnbmozOEJKbG9Rc1pSdUVOWgpBR2FJVy9kN2hsTkFDNFN5K3NqSGlRWmZKYVhtL2RaSEdoNkhRaFo1cnhOenZjNDNBc1BQaGp0TzBYWkt1UDVMClliY01FcHdCcHJCbmlIV0NTUEZ1MHI2bEFvR0FSUGdYUlJIZ3J2dUlDV1NYYmgwSTZMUFkwRGRtaFNtbExiK1cKMGhqMUF1Q1ZjUkc4UE04Vkc4cXdOTGdkS0d0Q0FXckw2bExwRC9lK0ZjaE9ja3dQODg4WGdvR3VMVW9oY0k4cgpqZXY3WEx6dDMzZWM3NkdIZDgrcE5sR2lBME9ObkNCdXhhOTh3eElNdDh4enhWQnBnOWhrMmZIRDkyZE1XMXFlCmJaaTB3b2tDZ1lFQTdoWUNYSXlXQkpkU3lrMnN
PakVndHdLY3AwY2VxUC9sd2QwbGFya2VDU1laSEtweGY5TSsKMm93dGd6UzFTZ1pibHlvRytMQzVFRkF6cXJIK002aHdXZCtMcG8yeWhBZ1hVNm9SMDlNdG56ZUo0UGhBTzI5WQo1ejNiZHp5Q1RNZlN4RUYweWNOL21yZnI1N2VGVk51d1ZnUkVySWxkVGw5NkRaVENXS2ZDb0h3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
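Once the clusters are registered, the available contexts can be listed before switching. A minimal sketch, assuming kubectl is installed and the kubeconfig above is in place (the per-cluster file paths in the second half are hypothetical):

```shell
# List every context registered in the kubeconfig; the active one is marked with '*'
kubectl config get-contexts

# Kubeconfig files can also be kept separate per cluster and merged at run time
# via the KUBECONFIG environment variable (colon-separated on Linux/macOS)
export KUBECONFIG=$HOME/.kube/dev-k8s.yaml:$HOME/.kube/stg-k8s.yaml
kubectl config view --flatten   # show the merged result as a single document
```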

 

 

5. Select a cluster with the use-context option.

PS C:\test> ./kubectl config use-context pangyeon-dev
Switched to context "pangyeon-dev".
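Switching is not required for one-off queries; kubectl also accepts a context on a per-command basis. A sketch using the context names defined above:

```shell
# Run a single command against another cluster without changing current-context
kubectl --context pangyeon-stg get pods -A

# Confirm which context is currently active
kubectl config current-context
```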

 

 

6. Query after selecting a context

./kubectl get pods -A
NAMESPACE       NAME                                                READY   STATUS        RESTARTS       AGE
ingress-nginx   ingress-nginx-controller-6544f7745b-z4lsr           1/1     Running       1 (52m ago)    24h
kube-system     coredns-5dd5756b68-2tl8x                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-6b55b                            1/1     Terminating   0              1d
kube-system     coredns-5dd5756b68-7fxrc                            1/1     Running       8 (52m ago)    1d
kube-system     coredns-5dd5756b68-h982d                            1/1     Running       9 (52m ago)    1d
kube-system     etcd-ubuntu                                         1/1     Running       0              1d
kube-system     kube-apiserver-ubuntu                               1/1     Running       0              1d
kube-system     kube-controller-manager-ubuntu                      1/1     Running       0              1d
kube-system     kube-proxy-2sngh                                    1/1     Running       9 (52m ago)    1d
kube-system     kube-proxy-mmsbf                                    1/1     Running       10 (52m ago)   1d
kube-system     kube-proxy-zh22k                                    1/1     Running       0              1d
kube-system     kube-scheduler-ubuntu                               1/1     Running       0              1d
kube-system     weave-net-94f8g                                     2/2     Running       24 (52m ago)   1d
kube-system     weave-net-gszsj                                     2/2     Running       30 (51m ago)   1d
kube-system     weave-net-hxknh                                     2/2     Running       1 (99d ago)    1d

 

 

References

https://kubernetes.io/ko/docs/tasks/tools/install-kubectl-windows/

https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/


 

1. If "Error: context deadline exceeded" occurs when running etcdctl leader/member-list queries, pass the cacert, cert, and key explicitly.

# Use either of the two command sets below
etcdctl member list --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/peer.crt" --key="/etc/kubernetes/pki/etcd/peer.key"
etcdctl endpoint status --cluster --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/peer.crt" --key="/etc/kubernetes/pki/etcd/peer.key"
or
etcdctl member list --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key"
etcdctl endpoint status --cluster --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key"
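To avoid repeating the certificate flags on every invocation, etcdctl (v3) also reads them from environment variables. A sketch assuming the standard kubeadm certificate paths:

```shell
# Environment-variable equivalents of the --cacert/--cert/--key flags
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key

# Subsequent calls no longer need the certificate flags
etcdctl member list
etcdctl endpoint status --cluster
```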

 

 

2. If "open /etc/kubernetes/pki/etcd/ca.crt: permission denied" occurs when using etcdctl

The certificate files under /etc/kubernetes/pki/etcd are readable only by root, so run etcdctl with sudo (or as root). In addition, match the etcdctl version to the etcd image used in kube-system.

cat /etc/kubernetes/manifests/etcd.yaml | grep image
# image: registry.k8s.io/etcd:3.5.9-0
# Check the image version above, then download the matching etcdctl release
# https://github.com/etcd-io/etcd/releases
curl https://storage.googleapis.com/etcd/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz -o ./etcd-v3.5.9-linux-amd64.tar.gz
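After the download, the binary still has to be extracted and put on the PATH. A sketch (the install location /usr/local/bin is a common choice, not a requirement):

```shell
# Unpack the release archive and install only the etcdctl binary
tar xzvf etcd-v3.5.9-linux-amd64.tar.gz
sudo cp etcd-v3.5.9-linux-amd64/etcdctl /usr/local/bin/

# Verify that the client version matches the cluster's etcd image (3.5.9)
etcdctl version
```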
