
K8s binary deployment: kube-apiserver fails to start with Error: unknown flag: --etcdservers

systemctl status kube-apiserver shows that the service failed to start.

Check the error in the logs:

cat /var/log/messages | grep kube-apiserver | grep -i error

Jan 11 11:22:44 m1 kube-apiserver: --logtostderr                      log to standard error instead of files
Jan 11 11:25:16 m1 kube-apiserver: Error: unknown flag: --etcdservers
Jan 11 11:25:16 m1 kube-apiserver: --alsologtostderr                  log to standard error as well as files
Jan 11 11:25:16 m1 kube-apiserver: --logtostderr    

The message "Error: unknown flag: --etcdservers" means I wrote the flag wrong: the correct flag is --etcd-servers.

I had copied the content from a textbook PDF; when --etcd-servers was copied out and pasted into Notepad, I found that the inner [-] symbol was missing.

Copying line-wrapped PDF content from the Chrome browser into Notepad++ loses the [-] symbol at the line break.

So pay attention to what actually gets pasted when copying content from a PDF.

After correcting the flag, kube-apiserver started successfully.
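
For reference, the difference is a single hyphen inside the flag name; the etcd endpoint below is just a placeholder value:

--etcdservers=https://192.168.0.11:2379     # what the PDF paste produced: the inner '-' was lost, so kube-apiserver rejects it
--etcd-servers=https://192.168.0.11:2379    # the real kube-apiserver flag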

K8S Error: no metrics known for node [How to Solve]

Today, after deploying metrics-server, I checked the pod logs and found a pile of errors.

The error information is as follows:

# kubectl logs -f -n kube-system metrics-server-d8669575f-xl6mw
I1202 09:09:31.217954       1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I1202 09:09:37.725863       1 secure_serving.go:116] Serving securely on [::]:443
E1202 09:09:49.807117       1 reststorage.go:135] unable to fetch node metrics for node "master": no metrics known for node
E1202 09:09:49.807185       1 reststorage.go:135] unable to fetch node metrics for node "node1": no metrics known for node
E1202 09:09:49.807202       1 reststorage.go:135] unable to fetch node metrics for node "node2": no metrics known for node
E1202 09:09:50.940606       1 reststorage.go:160] unable to fetch pod metrics for pod linux40/nginx-deployment-7d8599fbc9-68pf8: no metrics known for pod
E1202 09:09:53.825493       1 reststorage.go:135] unable to fetch node metrics for node "node1": no metrics known for node
E1202 09:09:53.825540       1 reststorage.go:135] unable to fetch node metrics for node "node2": no metrics known for node
E1202 09:09:53.825551       1 reststorage.go:135] unable to fetch node metrics for node "master": no metrics known for node
E1202 09:10:05.976306       1 reststorage.go:160] unable to fetch pod metrics for pod linux40/nginx-deployment-7d8599fbc9-68pf8: no metrics known for pod
E1202 09:10:21.291923       1 reststorage.go:160] unable to fetch pod metrics for pod linux40/nginx-deployment-7d8599fbc9-68pf8: no metrics known for pod
E1202 09:10:31.601208       1 reststorage.go:135] unable to fetch node metrics for node "master": no metrics known for node
E1202 09:10:31.601330       1 reststorage.go:135] unable to fetch node metrics for node "node1": no metrics known for node
E1202 09:10:31.601353       1 reststorage.go:135] unable to fetch node metrics for node "node2": no metrics known for node
E1202 09:10:31.610963       1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/kube-flannel-ds-64qdh: no metrics known for pod
E1202 09:10:31.611032       1 reststorage.go:160] unable to fetch pod metrics for pod linux40/magedu-tomcat-app1-deployment-6cd664c5bd-wprjb: no metrics known for pod

No useful error message was found when viewing the pod details either:

# kubectl describe pod metrics-server-6c97c89fd5-j2rql -n kube-system
Name:                 metrics-server-6c97c89fd5-j2rql
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 node2/192.168.64.112
Start Time:           Thu, 02 Dec 2021 16:50:50 +0800
Labels:               k8s-app=metrics-server
                      pod-template-hash=6c97c89fd5
Annotations:          <none>
Status:               Running
IP:                   10.244.2.61
IPs:
  IP:           10.244.2.61
Controlled By:  ReplicaSet/metrics-server-6c97c89fd5
Containers:
  metrics-server:
    Container ID:  docker://eac4a2db02ca75315047eb778b7d3e1d7543d10ed6d33b4b1eddb006f824e34e
    Image:         mirrorgooglecontainers/metrics-server-amd64:v0.3.6
    Image ID:      docker://sha256:9dd718864ce61b4c0805eaf75f87b95302960e65d4857cb8b6591864394be55b
    Port:          4443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-preferred-address-types=InternalIP
      --kubelet-use-node-status-port
      --kubelet-insecure-tls
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 02 Dec 2021 16:51:10 +0800
      Finished:     Thu, 02 Dec 2021 16:51:11 +0800
    Ready:          False
    Restart Count:  2
    Liveness:       http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get https://:https/readyz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-4xrbc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  metrics-server-token-4xrbc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-server-token-4xrbc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned kube-system/metrics-server-6c97c89fd5-j2rql to node2
  Normal   Pulled     16s (x3 over 34s)  kubelet, node2     Container image "mirrorgooglecontainers/metrics-server-amd64:v0.3.6" already present on machine
  Normal   Created    16s (x3 over 34s)  kubelet, node2     Created container metrics-server
  Normal   Started    16s (x3 over 34s)  kubelet, node2     Started container metrics-server
  Warning  BackOff    8s (x5 over 32s)   kubelet, node2     Back-off restarting failed container

The cause: when the cluster was deployed, the CA-signed certificates did not include each node's IP, so when metrics-server connects to the kubelets by IP, certificate validation fails (error: x509: cannot validate certificate for 192.168.33.11 because it doesn't contain any IP SANs).
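
You can confirm the cause by inspecting the SANs of a kubelet's serving certificate; the node IP here is the one from the error, and 10250 is the standard kubelet port:

echo | openssl s_client -connect 192.168.33.11:10250 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
# if the node IP is absent from the printed SAN list, the x509 error above is expected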

We can add the --kubelet-insecure-tls parameter to skip certificate verification:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --kubelet-insecure-tls                         # skip TLS verification of the kubelets
        - --kubelet-preferred-address-types=InternalIP   # use the internal IP to communicate
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        resources:
          limits:
            cpu: 300m
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 100Mi
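
After applying the edited manifest, metrics should start flowing within a minute or so. A quick check (the manifest filename here is an assumption; use whatever you saved it as):

kubectl apply -f metrics-server.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server    # wait until the pod is Running and Ready
kubectl top nodes                                            # should now print CPU/memory usage per node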

[Solved] K8s cannot delete a namespace: it stays in the Terminating state

Symptom:

After kubectl delete ns XXXX, the namespace stays in the Terminating state.

Even kubectl delete ns monitoring --grace-period=0 --force cannot delete it.

Error message:

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.

Error from server (Conflict): Operation cannot be fulfilled on namespaces "monitoring": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.

Solution:

1. Export namespace information

kubectl get namespace monitoring -o json > monitoring.json

2. Delete the contents under spec (the finalizers list) in the JSON file

The purpose of this step is to overwrite the original namespace with one whose spec is empty, so that no finalizer remains to block the deletion.
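
After editing, the spec block in monitoring.json should be empty; typically the only thing removed is the built-in kubernetes finalizer:

Before:
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
After:
    "spec": {},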

3. Overwrite the namespace in the k8s cluster with the emptied object through the API server's finalize interface

curl -k -H "Content-Type: application/json" -X PUT --data-binary @monitoring.json http://127.0.0.1:8081/api/v1/namespaces/monitoring/finalize

4. For clusters without authentication, you can perform steps 1-3 directly. For clusters with authentication, you need to go through kubectl proxy:

Open two terminal windows on the k8s master node.
In one window, run: kubectl proxy --port=8081 (kubectl proxy listens on 8001 by default, so the port must match the curl below)
In the other window, run: curl -k -H "Content-Type: application/json" -X PUT --data-binary @monitoring.json http://127.0.0.1:8081/api/v1/namespaces/monitoring/finalize
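
Once the PUT succeeds, the namespace should disappear within seconds; verify with:

kubectl get ns monitoring    # expect: Error from server (NotFound): namespaces "monitoring" not found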

A newly deployed k8s cluster on virtual machines reports errors when executing kubectl logs and kubectl exec

[root@k8smain01 ~]# kubectl exec -it deploy-nginx-8458f6dbbb-vc2jd -- bash
error: unable to upgrade connection: pod does not exist

[root@k8smain01 ~]# kubectl logs nginx-test
Error from server (NotFound): the server could not find the requested resource ( pods/log nginx-test)

Cause: the virtual machine has two network cards, and k8s registered the node using the wrong one.

# Add the -v=9 parameter to the command to increase the log output level
[root@k8smain01 ~]# kubectl logs nginx-test -v=9
I1022 16:45:22.676030  418422 loader.go:372] Config loaded from file:  /etc/kubernetes/admin.conf
I1022 16:45:22.684784  418422 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.22.1 (linux/amd64) kubernetes/632ed30" 'https://192.168.56.108:6443/api/v1/namespaces/default/pods/nginx-test'
I1022 16:45:22.701372  418422 round_trippers.go:454] GET https://192.168.56.108:6443/api/v1/namespaces/default/pods/nginx-test 200 OK in 16 milliseconds
I1022 16:45:22.701423  418422 round_trippers.go:460] Response Headers:
I1022 16:45:22.701448  418422 round_trippers.go:463]     Audit-Id: 3617bb1d-11dd-49ca-8e9d-fc2e048e9db1
I1022 16:45:22.701459  418422 round_trippers.go:463]     Cache-Control: no-cache, private
I1022 16:45:22.701472  418422 round_trippers.go:463]     Content-Type: application/json
I1022 16:45:22.701480  418422 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: b85220fe-e7c3-47df-a746-6624f4a44353
I1022 16:45:22.701486  418422 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: 46ccf519-799d-40e5-8bdf-1a4250479d75
I1022 16:45:22.701493  418422 round_trippers.go:463]     Date: Fri, 22 Oct 2021 08:45:22 GMT
I1022 16:45:22.702103  418422 request.go:1181] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-test","namespace":"default","uid":"ab3ccffc-f477-4556-8db8-54fc2a2f3632","resourceVersion":"10005","creationTimestamp":"2021-10-21T11:11:40Z","labels":{"run":"nginx-test"},"annotations":{"cni.projectcalico.org/containerID":"c136174460e3d55deb4ef60e14e5f1af000ee6ea500a03e2e96f517fce22da4f","cni.projectcalico.org/podIP":"10.50.249.65/32","cni.projectcalico.org/podIPs":"10.50.249.65/32"},"managedFields":[{"manager":"kubectl-run","operation":"Update","apiVersion":"v1","time":"2021-10-21T11:11:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:run":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx-test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},{"manager":"calico","operation":"Update","apiVersion":"v1","time":"2021-10-21T11:11:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-21T11:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.50.249.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-4kv5q","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"nginx-test","image":"nginx","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-4kv5q","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8snode01","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-21T11:11:40Z"},{"type":"Ready","status":"True","las
tProbeTime":null,"lastTransitionTime":"2021-10-21T11:50:42Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-21T11:50:42Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-21T11:11:40Z"}],"hostIP":"10.0.2.15","podIP":"10.50.249.65"

From the output you can see that the node registered with "hostIP":"10.0.2.15", not the IP used for mutual access between the virtual machines (192.168.56.x).
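
This matches a typical two-NIC VirtualBox layout: a NAT adapter on 10.0.2.x and a host-only adapter on 192.168.56.x (the interface names below are typical examples and vary by system):

ip -4 addr show    # e.g. enp0s3 -> 10.0.2.15 (NAT), enp0s8 -> 192.168.56.x (host-only)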

Solution:

# CentOS
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--node-ip=192.168.56.xxx

systemctl restart kubelet

# Ubuntu: in the drop-in file, add the Environment line before the existing ExecStart lines
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.xxx"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload
systemctl restart kubelet
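
After the restart, confirm the node re-registered with the host-only address; kubectl logs and kubectl exec should then work again:

kubectl get nodes -o wide    # the INTERNAL-IP column should now show the 192.168.56.x addresses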