Cluster management

Delete an EKS cluster

eksctl delete cluster --name cluster01 --wait

List clusters

eksctl get cluster

Other usage

eksctl get nodegroup --cluster=cluster01

# Scale node groups

eksctl scale nodegroup --cluster=cluster01 --nodes=2 --name=cluster01-nodes

eksctl scale nodegroup --cluster=cluster01 --nodes=3 --nodes-max=3 --name=cluster01-nodes

eksctl scale nodegroup --cluster=<clusterName> --nodes=<desiredCount> --name=<nodegroupName> [ --nodes-min=<minSize> ] [ --nodes-max=<maxSize> ]


Using kubectl

kubectl get pod --all-namespaces
kubectl get pod --all-namespaces -o wide

Total pod count (`--no-headers` keeps the header line out of the count):

kubectl get pod --all-namespaces --no-headers | wc -l

Pod count per node (check the node names first, then count for each node):

kubectl get node
kubectl get pod --all-namespaces -o wide | grep ip-192-168-10-183 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-28-3 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-4-220 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-65-172 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-69-253 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-78-242 | wc -l
kubectl get pod --all-namespaces -o wide | grep ip-192-168-9-123 | wc -l
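Instead of repeating the grep for every node, the same per-node counts can be produced in one loop. A sketch, assuming a configured kubectl, that uses `--field-selector` to match pods by the node they run on:

```shell
# Count pods on every node in one pass.
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  count=$(kubectl get pod --all-namespaces --field-selector spec.nodeName="$node" --no-headers | wc -l)
  echo "$node: $count"
done
```

The field selector avoids the risk of one node name being a prefix of another, which a plain grep would miscount.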

Max pod count

๊ณต์‹์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.

ENI * (# of IPv4 per ENI - 1)  + 2

https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/using-eni.html

์—ฌ๊ธฐ์—์„œ eni(์ตœ๋Œ€ ๋„คํŠธ์›Œํฌ ์ธํ„ฐํŽ˜์ด์Šค ์ˆ˜) ํ•˜๊ณ  ์ธํ„ฐํŽ˜์ด์Šค๋‹น ํ”„๋ผ์ด๋ฐ‹ ์ฃผ์†Œ ์•Œ์ˆ˜ ์žˆ๋‹ค.

For t3.small (3 ENIs, 4 IPv4 addresses per ENI):

3 * (4 - 1) + 2 = 11

So each node can run up to 11 pods.

A precomputed list of max pods per instance type is available here:

https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt

์•„๋ž˜ ํ‘œ์—์„œ ๊ฐ€๊ฒฉ์„ ๊ตฌํ• ์ˆ˜ ์žˆ๋‹ค.

https://aws.amazon.com/ec2/pricing/on-demand

pod ๊ฐฏ์ˆ˜๋ฅผ ์ž˜ ํ™•์ธํ•ด์„œ ์–ด๋А ํƒ€์ž…์ด ํŽธํ•œ๊ฑด์ง€ ๊ณ ๋ ค์•ผํ• ๋“ฏํ•˜๋‹ค.

Node NotReady status

node๊ฐ€ ๊ฐ‘์ž๊ธฐ not ready ์ƒํƒœ์ด๋‹ค.

k9s๋กœ ๋…ธ๋“œ ์„ ํƒํ›„ cordon ==> drain ==> delete๋ฅผ ์ˆœ์„œ๋Œ€๋กœ ํ•ด์ฃผ์—ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‹ˆ ์ƒˆ๋กœ์šด ๋…ธ๋“œ๋ฅผ ๋งŒ๋“ค์–ด ์ค€๋‹ค.

Changing node groups

Add a new node group

nodegroup.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster01
  region: us-west-1

managedNodeGroups:
  - name: nodegroup-2
    instanceType: t3.medium
    desiredCapacity: 4
    volumeSize: 80
    minSize: 3
    maxSize: 10
    ssh:
      allow: true

eksctl create nodegroup --config-file nodegroup.yaml

eksctl get nodegroup --cluster=cluster01

Delete the old node group

eksctl delete nodegroup --cluster=cluster01 --name=cluster01-nodes
