CKA Certification Practice Questions


Question 1: HPA

  In the autoscale namespace there is a Deployment named apache-server. Create a new HorizontalPodAutoscaler (HPA) named apache-server for this Deployment.

  Set the HPA to target 50% CPU utilization per Pod. Configure it with a minimum of 1 Pod and a maximum of 4 Pods. In addition, set the scale-down stabilization window to 30 seconds.

  Check the environment prepared for this question:

root@k8s-master:~# kubectl get pod -n autoscale
NAME                             READY   STATUS    RESTARTS   AGE
apache-server-8495b5dd5b-rlkb9   1/1     Running   0          3d13h

root@k8s-master:~# kubectl get deployments.apps -n autoscale
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
apache-server   1/1     1            1           3d13h

root@k8s-master:~# kubectl get hpa -n autoscale
No resources found in autoscale namespace.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# vim hpa.yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: apache-server
  namespace: autoscale
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: apache-server
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 30

root@k8s-master:~# kubectl create -f hpa.yml
horizontalpodautoscaler.autoscaling/apache-server created

  Check:

root@k8s-master:~# kubectl get -f hpa.yml
NAME            REFERENCE                  TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
apache-server   Deployment/apache-server   cpu: 1%/50%   1         4         1          8m46s

root@k8s-master:~# kubectl describe hpa apache-server -n autoscale
Name:                                                  apache-server
Namespace:                                             autoscale
......
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  1% (1m) / 50%
Min replicas:                                          1
Max replicas:                                          4
Behavior:
......
  Scale Down:
    Stabilization Window: 30 seconds
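
  Optional alternative (a sketch, not part of the graded steps above): kubectl autoscale can create the same HPA imperatively, but it cannot set spec.behavior, so the scale-down stabilization window would still have to be added afterwards, for example with kubectl patch:

root@k8s-master:~# kubectl autoscale deployment apache-server -n autoscale --name=apache-server --cpu-percent=50 --min=1 --max=4
root@k8s-master:~# kubectl patch hpa apache-server -n autoscale --type=merge -p '{"spec":{"behavior":{"scaleDown":{"stabilizationWindowSeconds":30}}}}'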

Question 2: Ingress

  Create an Ingress resource as follows:

   Name: echo

   Namespace: sound-repeater

   Expose the service named echoserver-service at http://ingress-q2.linuxcenter.cn/echo using service port 8080

  Verification:

   curl -kL http://ingress-q2.linuxcenter.cn/echo returns hello world

   curl -o /dev/null -s -w "%{http_code}\n" http://ingress-q2.linuxcenter.cn/echo outputs 200

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# vim ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: sound-repeater
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: ingress-q2.linuxcenter.cn
    http:
      paths:
      - pathType: Prefix
        path: "/echo"
        backend:
          service:
            name: echoserver-service
            port:
              number: 8080

root@k8s-master:~# kubectl create -f ingress.yml
ingress.networking.k8s.io/echo created
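
  An equivalent imperative form, if you prefer not to write the YAML (a sketch; a trailing * in the rule path makes the pathType Prefix, and the rewrite-target annotation is only needed if the backend expects requests at /):

root@k8s-master:~# kubectl create ingress echo -n sound-repeater --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target=/ --rule="ingress-q2.linuxcenter.cn/echo*=echoserver-service:8080"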

  Check:

root@k8s-master:~# kubectl get ingress -n sound-repeater
NAME   CLASS   HOSTS                       ADDRESS                   PORTS   AGE
echo   nginx   ingress-q2.linuxcenter.cn   192.168.8.4,192.168.8.5   80      34s

root@k8s-master:~# curl -kL http://ingress-q2.linuxcenter.cn/echo
hello world

root@k8s-master:~# curl -o /dev/null -s -w "%{http_code}\n" http://ingress-q2.linuxcenter.cn/echo
200

Question 3: Sidecar

  You need to integrate a legacy application into Kubernetes' logging architecture (e.g. kubectl logs).

  The usual way to do this is to add a co-located container that streams the log file:

  Update the existing Deployment named synergy-leverager:

   Add a sidecar-style container named busybox using the image busybox:stable

   The sidecar's startup command is /bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log'

   Both the sidecar and the existing container must mount a volume named logs at /var/log/

  Apart from adding the required volume mount, do not modify the existing container's specification.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# kubectl get deployments.apps synergy-leverager
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
synergy-leverager   1/1     1            1           3d15h

# Export the existing Deployment's YAML as a backup
root@k8s-master:~# kubectl get deployments.apps synergy-leverager -o yaml > sidecar.yml

# Edit the resource in place
root@k8s-master:~# kubectl edit deployments.apps synergy-leverager
apiVersion: apps/v1
kind: Deployment
....
spec:
  template:
    spec:
      containers:
      - args:
        ...
        imagePullPolicy: IfNotPresent
        # find the first container and add a volumeMounts entry below it for the logs volume
        volumeMounts:
          - name: logs
            mountPath: /var/log
          ...
        # add a new container
      - name: busybox
        image: busybox:stable # in the lab, point this at a mirror or any reachable registry
        args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
        volumeMounts:
          - name: logs
            mountPath: /var/log
      ...
  # define the volume; note: if a volumes section already exists, add the logs volume under it, otherwise add the volumes section yourself, watching the indentation
      volumes:
        - name: logs
          emptyDir: {}

root@k8s-master:~# kubectl edit deployments.apps synergy-leverager
deployment.apps/synergy-leverager edited

# Just confirm it looks right; your Pod name will differ, use Tab completion

  Verification:

root@k8s-master:~# kubectl get pod
NAME                                      READY   STATUS             RESTARTS   AGE
nfs-client-provisioner-5998cfc9bc-gmkrh   1/1     Running            0          3d16h
synergy-leverager-67ff8bb4bf-p88f2        1/1     Running            0          3d16h
synergy-leverager-75f46879f9-v77gc        1/2     ImagePullBackOff   0          24m
web-6f7fff8685-zmntq                      1/1     Running            0          3d16h

root@k8s-master:~# kubectl get pod synergy-leverager-75f46879f9-v77gc
NAME                                 READY   STATUS             RESTARTS   AGE
synergy-leverager-75f46879f9-v77gc   2/2     Running   0          37m
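
  To confirm the sidecar is actually streaming the legacy log, you can also read its logs (an optional check; -c selects the busybox container):

root@k8s-master:~# kubectl logs deployment/synergy-leverager -c busybox --tail=5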

  Fixing the image pull problem:

# On the real exam you do not need to fix image pull failures

root@k8s-master:~# kubectl describe pod synergy-leverager-75f46879f9-v77gc
......
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  26m                  default-scheduler  Successfully assigned default/synergy-leverager-75f46879f9-v77gc to k8s-worker2
  Warning  Failed     26m                  kubelet            Failed to pull image "busybox:stable": Error response from daemon: Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled     26m                  kubelet            Container image "docker.io/library/busybox" already present on machine
  Normal   Created    26m                  kubelet            Created container: legacy
  Normal   Started    26m                  kubelet            Started container legacy
  Normal   Pulling    22m (x5 over 26m)    kubelet            Pulling image "busybox:stable"
  Warning  Failed     22m (x5 over 26m)    kubelet            Error: ErrImagePull
  Warning  Failed     22m (x4 over 25m)    kubelet            Failed to pull image "busybox:stable": Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff    94s (x98 over 26m)   kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     71s (x100 over 26m)  kubelet            Error: ImagePullBackOff

Question 4: StorageClass

  First, create a new StorageClass named cka-sc for the provisioner named cnlxh/nfs-storage.

  Set the volume binding mode to WaitForFirstConsumer.

Note: leaving the volume binding mode unset, or setting it to anything other than WaitForFirstConsumer, will reduce your score.

  Next, configure the cka-sc StorageClass as the default StorageClass.

Do not modify any existing Deployment or PersistentVolumeClaim, or your score will be reduced.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# vim sc.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cka-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cnlxh/nfs-storage
volumeBindingMode: WaitForFirstConsumer

root@k8s-master:~# kubectl create -f sc.yml
storageclass.storage.k8s.io/cka-sc created

  Verification:

root@k8s-master:~# kubectl get storageclasses.storage.k8s.io
NAME               PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cka-sc (default)   cnlxh/nfs-storage   Delete          WaitForFirstConsumer   false                  4m20s
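
  If the StorageClass had been created without the annotation, it could also be marked as default afterwards with a patch (the annotation is the one documented for default StorageClasses):

root@k8s-master:~# kubectl patch storageclass cka-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'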

Question 5: Service

  Reconfigure the Deployment named front-end in the spline-reticulator namespace.

  In the container named nginx, add a port specification named http that exposes port 80/tcp.

  Then create a Service named front-end-svc of type NodePort to expose the container's http port.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# kubectl edit deployments.apps -n spline-reticulator front-end
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - image: nginx
        ports: # these lines were added
        - containerPort: 80
          name: http
          protocol: TCP
...

# Expose the Deployment as a NodePort Service
root@k8s-master:~# kubectl expose deployment -n spline-reticulator front-end --port=80 --target-port=80 --name=front-end-svc --type=NodePort
service/front-end-svc exposed

  Verification:

root@k8s-master:~# kubectl get service -n spline-reticulator
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
front-end-svc   NodePort   10.100.109.43   <none>        80:31403/TCP   56s

root@k8s-master:~# kubectl describe -n spline-reticulator service front-end-svc
Name:                     front-end-svc
Namespace:                spline-reticulator
Labels:                   app=front-end
Annotations:              <none>
Selector:                 app=front-end
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.109.43
IPs:                      10.100.109.43
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31403/TCP
Endpoints:                172.16.194.77:80
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>

root@k8s-master:~# curl 10.100.109.43
......
<title>Welcome to nginx!</title>

root@k8s-master:~# curl 192.168.8.3:31403
......
<title>Welcome to nginx!</title>
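
  For reference, the Service created by kubectl expose corresponds roughly to the following manifest; pointing targetPort at the named container port http (instead of the number 80) is a slightly more robust variant. This is a sketch, assuming the app=front-end label shown in the describe output above:

apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  namespace: spline-reticulator
spec:
  type: NodePort
  selector:
    app: front-end          # label shown by kubectl describe above
  ports:
  - port: 80
    targetPort: http        # the named container port added earlier
    protocol: TCP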

Question 6: PriorityClass

  Create a new PriorityClass named high-priority for user workloads, with a value one less than the highest existing user-defined PriorityClass value.

  Modify the existing busybox-logger Deployment running in the priority namespace to use the high-priority PriorityClass.

  Make sure the busybox-logger Deployment deploys successfully after the new PriorityClass is set.

Do not modify other Deployments running in the priority namespace, or your score may be reduced.

  Analysis:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# First check the highest value among the existing user-defined PriorityClasses; it turns out to be 1000000000
root@k8s-master:~# kubectl get priorityclasses.scheduling.k8s.io
NAME                      VALUE        GLOBAL-DEFAULT   AGE     PREEMPTIONPOLICY
max-user-prloruty         1000000000   false            3d18h   PreemptLowerPriority
system-cluster-critical   2000000000   false            55d     PreemptLowerPriority
system-node-critical      2000001000   false            55d     PreemptLowerPriority

root@k8s-master:~# bc
bc 1.07.1
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1000000000 - 1
999999999
quit

  Solution:

# Create a new one whose value is one less than the existing user-defined maximum
root@k8s-master:~# vim new-priority.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 999999999
globalDefault: false
description: "This priority class is created by yu luo."

root@k8s-master:~# kubectl create -f new-priority.yaml
priorityclass.scheduling.k8s.io/high-priority created
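
  The same PriorityClass can also be created imperatively (a sketch; kubectl create priorityclass accepts --value and --description):

root@k8s-master:~# kubectl create priorityclass high-priority --value=999999999 --description="This priority class is created by yu luo."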

# Verify the new PriorityClass is in place
root@k8s-master:~# kubectl get priorityclasses.scheduling.k8s.io
NAME                      VALUE        GLOBAL-DEFAULT   AGE     PREEMPTIONPOLICY
high-priority             999999999    false            5m39s   PreemptLowerPriority
max-user-prloruty         1000000000   false            3d18h   PreemptLowerPriority
system-cluster-critical   2000000000   false            55d     PreemptLowerPriority
system-node-critical      2000001000   false            55d     PreemptLowerPriority

# From the docs, the priorityClassName field sits directly under the Pod spec in the Deployment template; if you are unsure how to write it, check kubectl explain
root@k8s-master:~# kubectl explain deploy.spec.template.spec.priorityClassName

root@k8s-master:~# kubectl edit deployments.apps busybox-logger -n priority
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      priorityClassName: high-priority
      containers:
...

  Verification:

root@k8s-master:~# kubectl get pod -n priority
NAME                              READY   STATUS    RESTARTS   AGE
busybox-logger-5db5f9df74-xfgtf   1/1     Running   0          81s

root@k8s-master:~# kubectl get pod -n priority -o yaml | more
......
priority: 999999999
......

root@k8s-master:~# kubectl describe pod -n priority busybox-logger-5db5f9df74-xfgtf | grep -i priority: -A 2
Priority:             999999999
Priority Class Name:  high-priority
Service Account:      default

Question 7: Argo CD

  Documentation: Argo Helm Charts

  Install Argo CD in the cluster by performing the following tasks:

   Add the official Argo CD Helm repository named argo.

  Note: The Argo CD CRDs are already pre-installed in the cluster.

   Generate a template of the Argo CD Helm chart, version 5.5.22, for the argocd namespace and save it to ~/argo-helm.yaml, configuring the chart not to install the CRDs.

   Use Helm to install Argo CD with the release name argocd, using the same configuration and version (5.5.22) as the template, into the argocd namespace, likewise configured not to install the CRDs.

  Note: You do not need to configure access to the Argo CD server UI.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# Add the Helm repository locally
root@k8s-master:~# helm repo add argo https://argoproj.github.io/argo-helm

# The Helm repo lives on GitHub and may be unreachable from the lab; use this existing mirror instead. On the real exam, use the URL given in the question.
root@k8s-master:~# helm repo add argo https://oss.linuxcenter.cn/files/cka/helm
"argo" has been added to your repositories

# Update the repository index. On the exam the Argo CD chart version may not be exactly this one; that is fine, just follow the same steps.
root@k8s-master:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "argo" chart repository
Update Complete. ⎈Happy Helming!

# Check the Argo CD charts in the repository (if you use the mirror above, it only carries the chart this question needs)
root@k8s-master:~# helm search repo argo
NAME                   CHART VERSION   APP VERSION     DESCRIPTION
argo/argo-cd           5.5.22          v2.4.14         A Helm chart for Argo CD, a declarative, GitOps...
argo/kubernetes-dashboard   7.13.0                     General-purpose web UI for Kubernetes clusters

# Also check which argo-cd chart versions are available
root@k8s-master:~# helm search repo argo-cd --versions | more
NAME            CHART VERSION   APP VERSION     DESCRIPTION
argo/argo-cd    5.5.22          v2.4.14         A Helm chart for Argo CD, a declarative, GitOps...

# By default the chart installs the CRDs, but the question says not to install them, so crds.install must be set to false
root@k8s-master:~# helm show values argo/argo-cd | grep -A 8 ^crds
crds:
  # -- Install and upgrade CRDs
  install: true
  # -- Keep CRDs on chart uninstall
  keep: true
  # -- Annotations to be added to all CRDs
  annotations: {}

global:

# Generate the template; all of these flags come from helm template -h
root@k8s-master:~# helm template argocd argo/argo-cd --namespace argocd --version 5.5.22 --set crds.install=false > ~/argo-helm.yaml

# Now install
root@k8s-master:~# helm install argocd argo/argo-cd --namespace argocd --version 5.5.22 --set crds.install=false
NAME: argocd
LAST DEPLOYED: Thu Aug 28 16:47:08 2025
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:

1. kubectl port-forward service/argocd-server -n argocd 8080:443

    and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
      - Add the annotation for ssl passthrough: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-1-ssl-passthrough
      - Add the `--insecure` flag to `server.extraArgs` in the values file and terminate SSL at your ingress: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-2-multiple-ingress-objects-and-hosts


After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md#4-login-using-the-cli)

  Verification:

# With limited network access in the lab, don't worry about the Pod status; as long as these Pods exist, the installation is in place
root@k8s-master:~# kubectl get pod -n argocd
NAME                                                READY   STATUS              RESTARTS   AGE
argocd-application-controller-0                     1/1     Running             0          106s
argocd-applicationset-controller-69f65f94d4-qqjqx   0/1     ContainerCreating   0          106s
argocd-dex-server-8496b55dcc-jtsfw                  0/1     Init:0/1            0          106s
argocd-notifications-controller-6f68d54df5-tl5jz    1/1     Running             0          106s
argocd-redis-5df9769596-xg75q                       1/1     Running             0          106s
argocd-repo-server-5d5b6c4466-nm87m                 0/1     Init:0/1            0          106s
argocd-server-98cd79c7c-fmmcg                       1/1     Running             0          106s
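
  Optionally, confirm that the Helm release itself was recorded; helm list shows the deployed release and its chart version:

root@k8s-master:~# helm list -n argocd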

Question 8: PVC

  The MariaDB Deployment in the mariadb namespace was accidentally deleted. Restore the Deployment and ensure data persistence. Follow these steps:

   Create a PersistentVolumeClaim (PVC) named mariadb in the mariadb namespace with the following spec:

   Access mode ReadWriteOnce

   Storage 250Mi

  A PersistentVolume already exists in the cluster; you must use that existing PersistentVolume (PV).

   Edit the MariaDB Deployment file located at ~/mariadb-deployment.yaml to use the PVC created in the previous step.

   Apply the updated Deployment file to the cluster.

   Make sure the MariaDB Deployment is running and stable.

  Check the environment prepared for this question:

root@k8s-master:~# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
mariadb-pv   250Mi      RWO            Retain           Available         <unset>                    3d23h

  Solution: create the PVC

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# Create a PVC that uses the PV mentioned in the question
root@k8s-master:~# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  storageClassName: ""  # left empty so the PVC binds to the existing PV instead of provisioning from the default StorageClass
  volumeName: mariadb-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
      
root@k8s-master:~# kubectl create -f pvc.yml
persistentvolumeclaim/mariadb created

# Confirm the PV and PVC are bound
root@k8s-master:~# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
mariadb-pv   250Mi      RWO      Retain     Bound    mariadb/mariadb         <unset>               4d
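
  The PVC side should report Bound as well; a quick optional check:

root@k8s-master:~# kubectl get pvc -n mariadb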

  Solution: edit the Deployment

# Edit the MariaDB Deployment file at ~/mariadb-deployment.yaml to use the PVC created in the previous step
root@k8s-master:~# vim ~/mariadb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
  namespace: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      volumes:
        - name: mariadb-storage
          persistentVolumeClaim:
            claimName: mariadb
      containers:
      - name: mariadb
        image: docker.io/library/mariadb
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootpassword
        securityContext:
          runAsUser: 999
        volumeMounts:
          - mountPath: "/var/lib/mysql"
            name: mariadb-storage
            
root@k8s-master:~# kubectl apply -f ~/mariadb-deployment.yaml
deployment.apps/mariadb created

  Verification:

root@k8s-master:~# kubectl get -f ~/mariadb-deployment.yaml
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
mariadb   0/1     1            0           2m12s

# On the real exam you do not need to worry about image pull problems
root@k8s-worker1:~# docker pull mariadb
root@k8s-worker2:~# docker pull mariadb
root@k8s-worker1:~# docker tag mariadb docker.io/library/mariadb
root@k8s-worker2:~# docker tag mariadb docker.io/library/mariadb

root@k8s-master:~# kubectl get -f ~/mariadb-deployment.yaml
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
mariadb   1/1     1            1           14m
root@k8s-master:~# kubectl get pod -n mariadb
NAME                       READY   STATUS    RESTARTS   AGE
mariadb-59dd7c57dc-rj5nv   1/1     Running   0          16m

root@k8s-master:~# kubectl describe deployments.apps -n mariadb | grep -A 5 Volumes:
  Volumes:
   mariadb-storage:
    Type:          PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:     mariadb
    ReadOnly:      false
  Node-Selectors:  <none>

Question 9: Gateway

  Migrate an existing web application from Ingress to the Gateway API. You must maintain HTTPS access.

  Note: A GatewayClass named nginx is installed in the cluster.

   First, create a Gateway named web-gateway with hostname gateway.linuxcenter.cn, keeping the existing TLS and listener configuration of the existing Ingress resource named web.

   Next, create an HTTPRoute named web-route with hostname gateway.linuxcenter.cn, keeping the existing routing rules of the existing Ingress resource named web.

   You can test the Gateway API configuration with the following command:

root@k8s-master:~# curl -Lk https://gateway.linuxcenter.cn

  Finally, delete the existing Ingress resource named web.

  Check the environment prepared for this question:

root@k8s-master:~# kubectl get gatewayclasses.gateway.networking.k8s.io
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d

root@k8s-master:~# kubectl get ingress
NAME   CLASS   HOSTS                    ADDRESS                   PORTS     AGE
web    nginx   ingress.linuxcenter.cn   192.168.8.4,192.168.8.5   80, 443   4d

root@k8s-master:~# kubectl describe ingress web
Name:             web
Labels:           <none>
Namespace:        default
Address:          192.168.8.4,192.168.8.5
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-cert terminates ingress.linuxcenter.cn
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  ingress.linuxcenter.cn
                          /   web:80 (172.16.194.83:80)
Annotations:              nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    17m (x3 over 4d)   nginx-ingress-controller  Scheduled for sync
  Normal  Sync    16m (x3 over 17m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    16m                nginx-ingress-controller  Scheduled for sync
  
# Check whether its Ingress is reachable
root@k8s-master:~# curl -kL http://ingress.linuxcenter.cn
......
<title>Welcome to nginx!</title>
......

  Solution: create the Gateway

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

root@k8s-master:~# vim gateway.yml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: gateway.linuxcenter.cn
      tls:
        certificateRefs:
          - kind: Secret
            group: ""
            name: web-cert
                    
root@k8s-master:~# kubectl create -f gateway.yml
gateway.gateway.networking.k8s.io/web-gateway created

root@k8s-master:~# kubectl describe -f gateway.yml
......
  Gateway Class Name:  nginx
  Listeners:
    Allowed Routes:
      Namespaces:
        From:  Same
    Hostname:  gateway.linuxcenter.cn
    Name:      web
    Port:      443
    Protocol:  HTTPS
    Tls:
      Certificate Refs:
        Group:
        Kind:   Secret
        Name:   web-cert
      Mode:     Terminate
......

  Solution: create the HTTPRoute

root@k8s-master:~# vim httproute.yml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: web-gateway
  hostnames:
    - "gateway.linuxcenter.cn"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80

root@k8s-master:~# kubectl create -f httproute.yml
httproute.gateway.networking.k8s.io/web-route created

root@k8s-master:~# kubectl get httproutes.gateway.networking.k8s.io
NAME        HOSTNAMES                    AGE
web-route   ["gateway.linuxcenter.cn"]   51s

# Test that it can be reached
root@k8s-master:~# curl -kL https://gateway.linuxcenter.cn

# Finally, delete the web Ingress
root@k8s-master:~# kubectl delete ingress web
ingress.networking.k8s.io "web" deleted
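
  If the curl test does not respond as expected, checking whether the Gateway accepted its listener and whether the HTTPRoute was accepted by its parent usually points at the problem (optional checks):

root@k8s-master:~# kubectl describe gateways.gateway.networking.k8s.io web-gateway
root@k8s-master:~# kubectl describe httproutes.gateway.networking.k8s.io web-route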

Question 10: NetworkPolicy

  Review the provided YAML samples and apply the appropriate NetworkPolicy.

   Make sure the chosen NetworkPolicy is not overly permissive, while allowing communication between the frontend and backend Deployments running in the frontend and backend namespaces.

   First, analyze the frontend and backend Deployments to determine the specific requirements for the NetworkPolicy that needs to be applied.

   Next, examine the NetworkPolicy YAML samples located in the ~/networkpolicy folder.

  Note: do not delete or modify the provided samples. Apply only one of them. Doing otherwise may reduce your score.

   Finally, apply the NetworkPolicy that enables communication between the frontend and backend Deployments, without being overly permissive.

  Note: do not delete or modify the existing default deny-all ingress/egress NetworkPolicy. Doing so may result in zero points.

  Check the environment prepared for this question:

# Per the question, first look at the Pods in the two Deployments for labels a NetworkPolicy can select on
# It turns out the app label is what identifies the Pods
root@k8s-master:~# kubectl get namespaces --show-labels | grep -E "backend|fronted"
backend              Active   4d13h   kubernetes.io/metadata.name=backend

root@k8s-master:~# kubectl get pod -n frontend --show-labels
NAME                        READY   STATUS    RESTARTS      AGE     LABELS
frontend-6546f6985b-lpkmd   1/1     Running   1 (13h ago)   4d13h   app=frontend,pod-template-hash=6546f6985b

root@k8s-master:~# kubectl get pod -n backend --show-labels
NAME                       READY   STATUS    RESTARTS      AGE     LABELS
backend-5c687b8b4b-5lp5x   1/1     Running   1 (13h ago)   4d13h   app=backend,pod-template-hash=5c687b8b4b

# Having looked at the Pods, now check the default policy the question mentions
# It is very strict: every Pod in the namespace is isolated for ingress. The question says not to touch it, so we only inspect it

root@k8s-master:~# kubectl get networkpolicies.networking.k8s.io -n frontend
No resources found in frontend namespace.

root@k8s-master:~# kubectl get networkpolicies.networking.k8s.io -n backend
NAME       POD-SELECTOR   AGE
deny-all   <none>         4d13h

root@k8s-master:~# kubectl describe networkpolicies deny-all -n backend
Name:         deny-all
Namespace:    backend
Created on:   2025-08-24 20:27:31 +0800 CST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Not affecting egress traffic
  Policy Types: Ingress

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# Then, per the question, analyze the NetworkPolicy YAML samples in the ~/networkpolicy folder and pick the one that fits exactly without being too permissive
root@k8s-master:~# ls ~/networkpolicy/
policy1.yaml  policy2.yaml  policy3.yaml

# The first one: its empty podSelector applies the rule to every Pod in the backend namespace, which is broader than what is needed for communication between just these two Deployments
root@k8s-master:~# cd networkpolicy/
root@k8s-master:~/networkpolicy# cat policy1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend

# The second one looks right: it applies only to Pods labeled app: backend in the backend namespace, and allows ingress from the frontend namespace or from Pods labeled app: frontend
root@k8s-master:~/networkpolicy# cat policy2.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
    - podSelector:
        matchLabels:
          app: frontend
          
# The third one is irrelevant: we do not use that IP range, and there are no Pods labeled app: database in the backend namespace
root@k8s-master:~/networkpolicy# cat policy3.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy3
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
      podSelector:
        matchLabels:
          app: frontend
    - ipBlock:
        cidr: 10.0.0.0/24
# All things considered, policy2 fits best: it is not overly permissive and matches exactly what is needed, so just apply it
root@k8s-master:~# kubectl apply -f ~/networkpolicy/policy2.yaml
networkpolicy.networking.k8s.io/policy2 created
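
  An optional end-to-end check is to call the backend Pod from the frontend Pod. This is a sketch with assumptions: it presumes the backend container listens on port 80 and that the frontend image ships wget (busybox-style); substitute whatever the Deployments actually run, and replace <backend-pod-ip> with the IP shown by the first command:

root@k8s-master:~# kubectl get pod -n backend -o wide
root@k8s-master:~# kubectl exec -n frontend deploy/frontend -- wget -qO- --timeout=3 http://<backend-pod-ip>:80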

Question 11: CRD

  Verify the cert-manager application deployed in the cert-manager namespace.

  Using kubectl, save a list of all cert-manager Custom Resource Definitions (CRDs) to ~/resources.yaml.

  Note: you must use kubectl's default output format. Do not set an output format, or your score will be reduced.

  Using kubectl, extract the documentation of the subject field under spec of the Certificate resource type and save it to ~/subject.yaml.

  Note: you may use any output format kubectl supports. If unsure, use the default output format.

  Check the environment prepared for this question:

# First confirm the application mentioned in the question is present
root@k8s-master:~# kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS      AGE
cert-manager-7979fbf6b6-6cp85              1/1     Running   2 (13h ago)   4d14h
cert-manager-cainjector-68b64d44c7-tnx72   1/1     Running   1 (13h ago)   4d14h
cert-manager-webhook-ff897cd5d-dfpzd       1/1     Running   1 (13h ago)   4d14h

root@k8s-master:~# kubectl get deployments.apps -n cert-manager
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           4d14h
cert-manager-cainjector   1/1     1            1           4d14h
cert-manager-webhook      1/1     1            1           4d14h

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# See which CRDs it has, then save the list as the question requires
root@k8s-master:~# kubectl get crd | grep cert-manager
certificaterequests.cert-manager.io                     2025-08-24T12:27:39Z
certificates.cert-manager.io                            2025-08-24T12:27:39Z
challenges.acme.cert-manager.io                         2025-08-24T12:27:39Z
clusterissuers.cert-manager.io                          2025-08-24T12:27:39Z
issuers.cert-manager.io                                 2025-08-24T12:27:39Z
orders.acme.cert-manager.io                             2025-08-24T12:27:39Z

root@k8s-master:~# kubectl get crd | grep cert-manager > ~/resources.yaml
root@k8s-master:~# cat resources.yaml
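
  If the cert-manager CRDs carry the usual app.kubernetes.io labels (verify first with --show-labels; this is an assumption, not something the question guarantees), a label selector is an alternative to grep that still keeps kubectl's default output format:

root@k8s-master:~# kubectl get crd --show-labels | grep cert-manager
root@k8s-master:~# kubectl get crd -l app.kubernetes.io/name=cert-manager > ~/resources.yaml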

# Get the subject field under spec that the question asks about, then save it
root@k8s-master:~# kubectl explain certificate.spec.subject
GROUP:      cert-manager.io
KIND:       Certificate
VERSION:    v1

FIELD: subject <Object>


DESCRIPTION:
    Requested set of X509 certificate subject attributes.
    More info: https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6

    The common name attribute is specified separately in the `commonName` field.
    Cannot be set if the `literalSubject` field is set.

FIELDS:
  countries     <[]string>
    Countries to be used on the Certificate.

  localities    <[]string>
    Cities to be used on the Certificate.

  organizationalUnits   <[]string>
    Organizational Units to be used on the Certificate.

  organizations <[]string>
    Organizations to be used on the Certificate.

  postalCodes   <[]string>
    Postal codes to be used on the Certificate.

  provinces     <[]string>
    State/Provinces to be used on the Certificate.

  serialNumber  <string>
    Serial number to be used on the Certificate.

  streetAddresses       <[]string>
    Street addresses to be used on the Certificate.

root@k8s-master:~# kubectl explain certificate.spec.subject > ~/subject.yaml
root@k8s-master:~# cat subject.yaml

Question 12: ConfigMap

  A Deployment named nginx-static is running in the nginx-static namespace and is configured through the nginx-config ConfigMap.

  Update the nginx-config ConfigMap to allow only TLSv1.3 connections.

  Note: you may re-create, restart, or scale resources as needed.

  You can test the change with the following command:

root@k8s-master:~# curl -k --tls-max 1.3 https://ssl.linuxcenter.cn

  Check the environment prepared for this question:

root@k8s-master:~# kubectl get deployments.apps -n nginx-static
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-static   1/1     1            1           4d14h

root@k8s-master:~# kubectl get deployments.apps -n nginx-static -o yaml
......
        volumes:
        - configMap:
            defaultMode: 420
            name: nginx-tls
          name: tls
        - configMap:
            defaultMode: 420
            name: nginx-config
          name: config
......

root@k8s-master:~# kubectl describe configmaps -n nginx-static nginx-config
......
server {
  listen 443 ssl default_server;
  server_name ssl.linuxcenter.cn;

  ssl_certificate /etc/nginx/ssl/tls.crt;
  ssl_certificate_key /etc/nginx/ssl/tls.key;
  ssl_prefer_server_ciphers on;
  ssl_protocols TLSv1.2 TLSv1.3;
  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
  }
}
......

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# Rather than editing the ConfigMap in place, export a copy, modify it, and re-create it
root@k8s-master:~# kubectl get configmaps -n nginx-static nginx-config -o yaml > configmap.yml

# In the ssl_protocols directive, keep only TLSv1.3
root@k8s-master:~# vim configmap.yml
apiVersion: v1
data:
  default.conf: |
    server {
      listen 443 ssl default_server;
      server_name ssl.linuxcenter.cn;

      ssl_certificate /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      ssl_prefer_server_ciphers on;
      ssl_protocols TLSv1.3;   # change this line: keep only TLSv1.3
      location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
      }
    }
......

root@k8s-master:~# kubectl delete configmaps -n nginx-static nginx-config
configmap "nginx-config" deleted

root@k8s-master:~# kubectl apply -f configmap.yml
configmap/nginx-config created

  Verification:

# After re-creating the ConfigMap, it now lists only TLSv1.3
root@k8s-master:~# kubectl describe configmaps -n nginx-static nginx-config
Name:         nginx-config
Namespace:    nginx-static
Labels:       <none>
Annotations:  <none>

Data
====
default.conf:
----
server {
  listen 443 ssl default_server;
  server_name ssl.linuxcenter.cn;

  ssl_certificate /etc/nginx/ssl/tls.crt;
  ssl_certificate_key /etc/nginx/ssl/tls.key;
  ssl_prefer_server_ciphers on;
  ssl_protocols TLSv1.3;
  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
  }
}
......

  Roll out a new revision of the Deployment and test that only TLSv1.3 is accepted:

root@k8s-master:~# kubectl rollout restart deployment -n nginx-static nginx-static
deployment.apps/nginx-static restarted
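
  Waiting for the rollout to finish before testing avoids hitting an old Pod that still mounts the previous ConfigMap content (optional):

root@k8s-master:~# kubectl rollout status deployment -n nginx-static nginx-static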

root@k8s-master:~# curl -k --tls-max 1.3 https://ssl.linuxcenter.cn
......
<title>Welcome to nginx!</title>

root@k8s-master:~# curl -k --tls-max 1.2 https://ssl.linuxcenter.cn
curl: (35) OpenSSL/3.0.13: error:0A00042E:SSL routines::tlsv1 alert protocol version

Question 13: Resources

  You manage a WordPress application. Some Pods fail to start because their resource requests are too high.

  Task:

  The WordPress application in the relative-fawn namespace consists of:

   A WordPress Deployment with 3 replicas

  Adjust all Pod resource requests as follows:

   Divide the node's resources evenly across the 3 Pods

   Give each Pod a fair share of CPU and memory

   Leave enough overhead to keep the node stable

  Make sure to use exactly the same requests for the containers and the init containers.

  You do not need to change any resource limits.

  Temporarily scaling the WordPress Deployment to 0 replicas while updating the resource requests may help.

  After the update, confirm that:

   WordPress still has 3 replicas

   All Pods are running and ready

  Check the environment prepared for this question:

root@k8s-master:~# kubectl get deployments.apps -n relative-fawn
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
wordpress   0/3     3            0           4d18h

root@k8s-master:~# kubectl get pod -n relative-fawn
NAME                         READY   STATUS    RESTARTS   AGE
wordpress-5c56f7fd5c-2v5hm   0/1     Pending   0          4d18h
wordpress-5c56f7fd5c-4ht2p   0/1     Pending   0          4d18h
wordpress-5c56f7fd5c-lcccn   0/1     Pending   0          4d18h

# The target node here is k8s-worker1. Don't worry about which node it is; on the exam the cluster for this question has only one worker node.
root@k8s-master:~# kubectl describe deployments.apps -n relative-fawn wordpress
......
  Node-Selectors:  kubernetes.io/hostname=k8s-worker1
......

  Analysis:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# Check why the Pods won't start
# The reason is insufficient CPU and memory, just as the question implies. The question asks to split the node's resources evenly; in this lab your own machine is limited and there is no dedicated node, so we only simulate it, but you need to know how to do the math

root@k8s-master:~# kubectl get pod -n relative-fawn -o yaml
......
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2025-08-24T12:30:07Z"
      message: '0/3 nodes are available: 1 Insufficient cpu, 1 Insufficient memory,

root@k8s-master:~# kubectl describe pod -n relative-fawn wordpress-5c56f7fd5c-2v5hm
......
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  10m (x523 over 4d18h)  default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

# First go to the target node and check how much memory it has
root@k8s-master:~# ssh root@k8s-worker1

# Our node has this much memory in total
root@k8s-worker1:~# lsmem
RANGE                                 SIZE  STATE REMOVABLE BLOCK
0x0000000000000000-0x000000007fffffff   2G online       yes  0-15

Memory block size:       128M
Total online memory:       2G
Total offline memory:      0B

root@k8s-worker1:~# lscpu

# Or check it from the master like this
root@k8s-master:~# kubectl describe node k8s-worker1
......
Allocatable:
  cpu:                2
  ephemeral-storage:  94580335255
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1863712Ki
  pods:               110
......

# Back on the master, check how much is already allocated so you can work out what is left
root@k8s-master:~# kubectl describe node k8s-worker1
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                200m (10%)   0 (0%)
  memory             290Mi (15%)  0 (0%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
......

# So the node has 2 CPUs and 2G of memory
# Go by what you actually see. Here the describe output shows 290Mi of memory already requested, leaving roughly 2048Mi - 290Mi = 1758Mi, and 200m of CPU already requested, leaving roughly 2000m - 200m = 1800m
# Now count the containers and divide
# Looking at the Deployment below: each Pod has one init container and one regular container, so with 3 replicas that is 6 containers

root@k8s-master:~# kubectl describe deployments.apps -n relative-fawn wordpress
......
Pod Template:
  Labels:  app=wordpress
  Init Containers:
   initcontainer:
    Image:      docker.io/library/busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      echo "Initializing..." && sleep 10
    Limits:
      cpu:        2500m
      memory:     2600Mi
    Environment:  <none>
    Mounts:       <none>
  Containers:
   wordpress:
    Image:      docker.io/library/nginx
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:         2500m
      memory:      2600Mi
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  kubernetes.io/hostname=k8s-worker1
  Tolerations:     <none>
 
 Dividing:
1. CPU: 1800m / 6 = 300m
2. Memory: 1758Mi / 6 ≈ 293Mi
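
  On the real exam you would feed the per-container values computed above into the same command that is used below; afterwards, check with kubectl describe that the init container's requests were updated as well, and set them via kubectl edit if they were not (a sketch using the numbers worked out above):

root@k8s-master:~# kubectl -n relative-fawn set resources deployment wordpress --requests=cpu=300m,memory=293Mi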

  Solution:

# Once the math is done, just update the Deployment. Don't take it too literally here, though: your machine is limited, so in this lab we use 100Mi of memory and 0.1 CPU instead. On the real exam, use the values you calculated; for this practice environment, follow the steps below
root@k8s-master:~# kubectl -n relative-fawn set resources deployment wordpress --requests 'cpu=0.1,memory=100Mi'
deployment.apps/wordpress resource requirements updated

# Roll the Deployment out again and see whether the Pods start
root@k8s-master:~# kubectl rollout restart deployment -n relative-fawn wordpress
deployment.apps/wordpress restarted

root@k8s-master:~# kubectl get pod -n relative-fawn
NAME                         READY   STATUS    RESTARTS   AGE
wordpress-7dc57b5fd7-9mjl9   1/1     Running   0          77s
wordpress-7dc57b5fd7-h6bgp   1/1     Running   0          65s
wordpress-7dc57b5fd7-hghb2   1/1     Running   0          53s

Question 14: cri-dockerd

  You must connect to the k8s-worker2 host. Failing to do so may result in zero points.

  Context

  Your task is to prepare a Linux system for Kubernetes. Docker is already installed, but you need to configure it for kubeadm.

  Task

  Complete the following tasks to prepare the system for Kubernetes:

  Set up cri-dockerd:

   Install the Debian package ~/cri-dockerd_0.3.20.3-0.ubuntu-jammy_amd64.deb using dpkg.

   Enable and start the cri-docker service.

  Configure the following system parameters:

    net.bridge.bridge-nf-call-iptables set to 1

    net.ipv6.conf.all.forwarding set to 1

    net.ipv4.ip_forward set to 1

    net.netfilter.nf_conntrack_max set to 131072

  Make sure these system parameters persist across reboots and are applied to the running system.

  Check the environment prepared for this question:

# First check the currently installed version
root@k8s-master:~# ssh root@k8s-worker2
root@k8s-worker2:~# sudo cri-dockerd --version
cri-dockerd 0.3.17 (483e3b6)

# Then check whether the expected file is there
root@k8s-worker2:~# ls
cri-dockerd_0.3.18.3-0.ubuntu-jammy_amd64.deb  readme.txt

  Solution:

# On the exam, ssh in the way the question tells you; if you are not root after ssh, run sudo -i or prefix commands with sudo
root@k8s-worker2:~# ls
cri-dockerd_0.3.18.3-0.ubuntu-jammy_amd64.deb  readme.txt

root@k8s-worker2:~# sudo dpkg -i cri-dockerd_0.3.18.3-0.ubuntu-jammy_amd64.deb

# Check the version again
root@k8s-worker2:~# sudo cri-dockerd --version
cri-dockerd 0.3.18 (5709af9)

# Enable and start the cri-docker service
root@k8s-worker2:~# sudo systemctl daemon-reload
root@k8s-worker2:~# sudo systemctl enable cri-docker
root@k8s-worker2:~# sudo systemctl restart cri-docker
root@k8s-worker2:~# sudo systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; preset: enabled)
     Active: active (running) since Fri 2025-08-29 15:49:26 CST; 9s ago
......

# The service upgrade is done; now configure the kernel parameters. If you did not become root with sudo -i, prefix the command with sudo, otherwise you will not be able to save the file
root@k8s-worker2:~# sudo vim /etc/sysctl.d/cka.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
net.netfilter.nf_conntrack_max = 131072

# Make the settings take effect on the running system with the following command
root@k8s-worker2:~# sudo sysctl --system
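
  If sysctl --system reports that net.bridge.bridge-nf-call-iptables is an unknown key, the br_netfilter kernel module is probably not loaded; loading and persisting it fixes that (only needed if you actually see that error):

root@k8s-worker2:~# sudo modprobe br_netfilter
root@k8s-worker2:~# echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
root@k8s-worker2:~# sudo sysctl --system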

# Check
root@k8s-worker2:~# sudo sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1

root@k8s-worker2:~# sudo sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1

root@k8s-worker2:~# sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

root@k8s-worker2:~# sudo sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 131072

Question 15: etcd

  You must connect to the correct host. Failing to do so may result in zero points.

root@k8s-master:~# ssh root@k8s-standalone

  Context

   A kubeadm-provisioned cluster has been migrated to a new machine. It needs configuration changes to run successfully.

  Task

   Fix the single-node cluster that was broken during the machine migration.

   First, identify the broken cluster component and investigate what caused the breakage.

   Note: the affected cluster uses an external etcd server.

   Next, fix the configuration of all broken cluster components.

   Note: make sure to restart all necessary services and components for the changes to take effect, or your score may be reduced.

   Finally, make sure the cluster is healthy: every node and all Pods must be in the Ready state.

  Solution:

# Don't forget to switch clusters. Each question gives you an ssh command to connect to the right machine, or a command to switch context; just copy and paste it.
root@k8s-master:~# ssh root@k8s-master

# On the exam, ssh in the way the question tells you; if you are not root after ssh, run sudo -i or prefix commands with sudo
root@k8s-master:~# ssh root@k8s-standalone

# See whether we can still query the cluster

root@k8s-standalone:~# kubectl get nodes
E0829 16:05:59.114692  112032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:05:59.116590  112032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:05:59.118419  112032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:05:59.119941  112032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:05:59.121723  112032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
The connection to the server 192.168.8.6:6443 was refused - did you specify the right host or port?

# The API server is unreachable, but the address it reports, 192.168.8.6:6443, is correct, so ~/.kube/config is fine. Next, look at the static Pod manifests.
root@k8s-standalone:~# cd /etc/kubernetes/manifests/
root@k8s-standalone:/etc/kubernetes/manifests# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# Compare the manifests: the etcd address in the kube-apiserver manifest does not match the address etcd actually uses, and that is the cause
root@k8s-standalone:/etc/kubernetes/manifests# sudo vim kube-apiserver.yaml
...
apiVersion: v1
kind: Pod
spec:
  containers:
  - command:
    - --advertise-address=192.168.8.6 # also check this; it should normally be this machine's IP
    - --etcd-servers=https://1.1.1.1:2379  # change to: - --etcd-servers=https://127.0.0.1:2379
...

# The API server points at the wrong etcd address. What is the correct one? Since this is a single node, 127.0.0.1 would work, but check the etcd manifest to confirm.
root@k8s-standalone:/etc/kubernetes/manifests# sudo vim etcd.yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.8.6:2379
...
# OK, the problem is confirmed: fix the --etcd-servers address in the kube-apiserver manifest. After fixing it, wait at least 30 seconds before retrying, because the kubelet can take up to 30s to pick up static Pod changes; there is no need to restart the kubelet service.

# Node NotReady? Check the Pods. Something is still off: the kube-scheduler component is also unhealthy, and it too runs as a static Pod
root@k8s-standalone:/etc/kubernetes/manifests# kubectl get nodes
E0829 16:21:59.983501  112606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:21:59.985084  112606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:21:59.987313  112606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:21:59.989136  112606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
E0829 16:21:59.990962  112606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.8.6:6443/api?timeout=32s\": dial tcp 192.168.8.6:6443: connect: connection refused"
The connection to the server 192.168.8.6:6443 was refused - did you specify the right host or port?

root@k8s-standalone:/etc/kubernetes/manifests# kubectl get pod -A

# The scheduler also runs as a static Pod. If a static Pod is not being created automatically, its manifest content is probably at fault; try creating it directly to see what error comes back.
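
  If the scheduler's static Pod still does not appear, the kubelet log usually shows why (for example a YAML error in its manifest); crictl can list failed containers if its runtime endpoint is configured. A sketch of these optional checks:

root@k8s-standalone:/etc/kubernetes/manifests# journalctl -u kubelet --since "10 min ago" | grep -i scheduler
root@k8s-standalone:/etc/kubernetes/manifests# crictl ps -a | grep -i scheduler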

Question 16: Calico

  You must connect to the correct host. Failing to do so may result in zero points.

root@k8s-master:~# ssh root@k8s-standalone

  Documentation:

  Flannel Manifest

   https://github.com/flannel-io/flannel/releases/download/v0.26.1/kube-flannel.yml

  Calico Manifest

   https://raw.githubusercontent.com/projectcalico/calico/refs/tags/v3.30.2/manifests/tigera-operator.yaml

  Context

   The cluster's CNI failed a security audit and has been removed. You must install a new CNI that can enforce network policies.

  Task

  Install and set up a Container Network Interface (CNI) that meets the following requirements:

  Choose and install one of the following CNI options:

   Flannel version 0.26.1

   Calico version 3.29.3

  The chosen CNI must:

   Allow Pods to communicate with each other

   Support NetworkPolicy enforcement

   Be installed from a manifest file (do not use Helm)
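
  Of the two options, only Calico enforces NetworkPolicy (Flannel does not implement it), so Calico is the choice that satisfies the requirements. A minimal sketch using the manifest URL given above: with the tigera-operator manifest, Calico itself is then rolled out by creating an Installation custom resource (the custom-resources.yaml sample from the same manifests directory of that release), after which the calico-system Pods and the nodes should become Ready:

root@k8s-standalone:~# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/refs/tags/v3.30.2/manifests/tigera-operator.yaml
root@k8s-standalone:~# kubectl get pods -A
root@k8s-standalone:~# kubectl get nodes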

