
K3S Logging is not working in Alma 9.3 environment #10357

Closed
sjose1x opened this issue Jun 14, 2024 · 7 comments

Comments

@sjose1x

sjose1x commented Jun 14, 2024

Environmental Info:
K3s Version:
k3s version v1.29.4+k3s1 (94e29e2)

Node(s) CPU architecture, OS, and Version:
Linux localhost.localdomain 5.14.0-362.18.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 29 07:05:48 EST 2024 x86_64 x86_64 x86_64 GNU/Linux

AlmaLinux release 9.3 (Shamrock Pampas Cat)

Cluster Configuration:
Single pod is running

Describe the bug:
Unable to retrieve log information for pods, ingress, etc.

# k3s -v
k3s version v1.29.4+k3s1 (94e29e2e)
go version go1.21.9
# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
auth-app-6f97875c8-2mrc7   1/1     Running   0          31h
# kubectl logs auth-app-6f97875c8-2mrc7
Error from server: Get "https://192.168.180.45:10250/containerLogs/default/auth-app-6f97875c8-2mrc7/auth-app": net/http: TLS handshake timeout
# kubectl logs --insecure-skip-tls-verify-backend auth-app-6f97875c8-2mrc7
Error from server: Get "https://192.168.180.45:10250/containerLogs/default/auth-app-6f97875c8-2mrc7/auth-app": net/http: TLS handshake timeout
# cat /etc/redhat-release
AlmaLinux release 9.3 (Shamrock Pampas Cat)

Steps To Reproduce:

  • Install K3s
  • Run the command below to retrieve logs from a pod:
    kubectl logs auth-app-6f97875c8-2mrc7

Expected behavior:
Logs should appear

Actual behavior:
Error - Error from server: Get "https://192.168.180.45:10250/containerLogs/default/auth-app-6f97875c8-2mrc7/auth-app": net/http: TLS handshake timeout
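One way to narrow down this kind of failure (a hypothetical diagnostic, not from the thread) is to probe the kubelet's port directly from the node, since the error URL shows the apiserver proxying to https://192.168.180.45:10250:

```shell
# Probe the kubelet's TLS endpoint directly (sketch; IP and port taken from
# the error message above). If this also hangs, the problem is between the
# node and its own kubelet, not in kubectl or the apiserver.
curl -vk --max-time 10 https://192.168.180.45:10250/healthz

# Check whether a TLS handshake completes at all on that port.
openssl s_client -connect 192.168.180.45:10250 </dev/null
```

A handshake that stalls here points at something intercepting local traffic (firewall rules, or a proxy), rather than at certificates.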

@brandond
Member

Please show the full steps you use to install and configure k3s, as well as a description of the type and count of nodes in the cluster, and the output of kubectl get node -o yaml.

@sjose1x
Author

sjose1x commented Jun 15, 2024

Installation command: curl -sfL https://get.k3s.io | sh -
After installation I started running containers with K3s; I did not make any other config changes.
I don't have a cluster set up; I'm running K3s on a single Alma 9 node with 4 GB RAM and 2 vCPUs.

[root@localhost ~]# kubectl get node -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 192.168.180.45
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"22:a6:15:19:7d:20"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 192.168.180.45
      k3s.io/hostname: localhost.localdomain
      k3s.io/internal-ip: 192.168.180.45
      k3s.io/node-args: '["server"]'
      k3s.io/node-config-hash: X4QWIEC5NTH4VTNX3IIGXUVFE5UB53UOJLKXHXWT5OQRMY3VP5DA====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b159f6e26663d8c92285e7bc4a6881d85bd8c81efc55eb2cf191c54100387fbb"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2024-05-15T07:24:56Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: localhost.localdomain
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/master: "true"
      node.kubernetes.io/instance-type: k3s
    name: localhost.localdomain
    resourceVersion: "826226"
    uid: add5d9fe-7707-4bba-964a-ccba128a792f
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: k3s://localhost.localdomain
  status:
    addresses:
    - address: 192.168.180.45
      type: InternalIP
    - address: localhost.localdomain
      type: Hostname
    allocatable:
      cpu: "2"
      ephemeral-storage: "45898478352"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 3735436Ki
      pods: "110"
    capacity:
      cpu: "2"
      ephemeral-storage: 46076Mi
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 3735436Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-06-15T06:12:51Z"
      lastTransitionTime: "2024-05-15T07:24:56Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-06-15T06:12:51Z"
      lastTransitionTime: "2024-05-15T07:24:56Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-06-15T06:12:51Z"
      lastTransitionTime: "2024-05-15T07:24:56Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-06-15T06:12:51Z"
      lastTransitionTime: "2024-05-29T08:40:17Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/sijod/shinken-k3s-poller@sha256:a83234eef01fd147b59ddb6b540326cdf44d9c7d8c3812eefd4c17d5c170b193
      sizeBytes: 1192796222
    - names:
      - docker.io/sijod/shinken-k3s-poller@sha256:9b5e8cda20584249c8c5fb20bf81b884c94a60cb9d131da319c949ddf6f553ff
      - docker.io/sijod/shinken-k3s-poller:latest
      sizeBytes: 1192794514
    - names:
      - docker.io/sijod/idm-celery@sha256:2c529d26b4dc0752d2d6336756b44441360d4d952bb1d311bbae7029932f0b5e
      sizeBytes: 574158800
    - names:
      - docker.io/sijod/idm-celery@sha256:c4ee4f73b546b5b6aebf92518c020eefecb9bc08c33c3c68aa0d21e6f96b0890
      - docker.io/sijod/idm-celery:latest
      sizeBytes: 574158734
    - names:
      - docker.io/sijod/idm-celery@sha256:6690412d8c76b41fc77f74a590d4d22754a53ab79f1726ff41fc437c45571603
      sizeBytes: 573563229
    - names:
      - docker.io/sijod/shinken-k3s-receiver@sha256:b116b41b627d5ea2d542cce322e1e317179c3b8d02bca37d07c4a57e8cf75fb1
      - docker.io/sijod/shinken-k3s-receiver:latest
      sizeBytes: 455403138
    - names:
      - docker.io/sijod/shinken-k3s-receiver@sha256:b05ba51fbe17f276c2684e0532ee0ee33ef26ffc9db23966d4519f0b5e4d4a05
      sizeBytes: 453820096
    - names:
      - docker.io/sijod/shinken-k3s-receiver@sha256:c97de0dc07f70ee65cfab7a1989fa300c1845354fc51d96bd334e9cbbe8d044e
      sizeBytes: 453279349
    - names:
      - docker.io/sijod/shinken-k3s-arbiter@sha256:99da9c7baf14857fe69c5488dcbc144f63db00602a70ca19152bdfd120e0d121
      - docker.io/sijod/shinken-k3s-arbiter:latest
      sizeBytes: 452252985
    - names:
      - docker.io/sijod/shinken-k3s-broker@sha256:19cc3a18d4244d53282abaa4934f70b01b06304e7286ff00383e3dd8d3b3f885
      - docker.io/sijod/shinken-k3s-broker:latest
      sizeBytes: 452059107
    - names:
      - docker.io/sijod/shinken-k3s-scheduler@sha256:26a3bc07037d9b2c88d0a6eccea51941db3fbf6b14b8302eca5de74c4febe194
      - docker.io/sijod/shinken-k3s-scheduler:latest
      sizeBytes: 452058996
    - names:
      - docker.io/sijod/shinken-k3s-reactionner@sha256:3bc1da38b3e021d785b973b943068fcf2ad230fbc3b8b9f5f3b75a68e545c5ec
      - docker.io/sijod/shinken-k3s-reactionner:latest
      sizeBytes: 452058949
    - names:
      - docker.io/sijod/shinken-k3s-receiver@sha256:3b8d29ad74559002fff28037b9b038a4e3480eaeefe457cfe219c5575fb982f5
      sizeBytes: 452058918
    - names:
      - docker.io/sijod/auth-app2@sha256:02310e76bb7164aabd1243cbd4a3b048c7d0ab15bb5a0dfc7303830164b919bb
      sizeBytes: 373376614
    - names:
      - docker.io/sijod/auth-appt@sha256:b9382f1560cfe71143ba9ff83602c3ae741166fddf92c9ca89348317c0a8e63f
      - docker.io/sijod/auth-appt:latest
      sizeBytes: 373376612
    - names:
      - docker.io/sijod/auth-app@sha256:386bf9962ffe99ef11b5cad105c4888de1ca9bd8626bfeb3ec96271cccf4b1de
      sizeBytes: 373376586
    - names:
      - docker.io/sijod/auth-app@sha256:9fc02c27525cb2ae44c815f529743fc955734c419785a9e906fff6838091a542
      sizeBytes: 373376582
    - names:
      - docker.io/sijod/auth-app@sha256:8b959f4f84b5d2d22373b4749fb88a0499a1c525324d897c9a13ae7d85de3997
      sizeBytes: 373376448
    - names:
      - docker.io/sijod/auth-app2@sha256:a5005204b90097f590b19ed6b5d935a8545694b848ae8c19c9154f06247e3588
      - docker.io/sijod/auth-app2:latest
      sizeBytes: 373375450
    - names:
      - docker.io/sijod/auth-app@sha256:e4e5dc04dc7c9bec3af9ecc92007b0824df80628ff3675315631f2e82b20ed7a
      - docker.io/sijod/auth-app:latest
      sizeBytes: 373375330
    - names:
      - docker.io/sijod/auth-app4@sha256:80dfc47c16ae2466eb3d28a7129a5b3f1f662129959aa8e2aa9340f9affc353c
      - docker.io/sijod/auth-app4:latest
      sizeBytes: 373371814
    - names:
      - docker.io/sijod/shinken-k3s-thruk@sha256:b532573692927721e809e8e822f2a06a146d48cc91a8601f55cce321e6487bcb
      - docker.io/sijod/shinken-k3s-thruk:latest
      sizeBytes: 162704666
    - names:
      - docker.io/library/rabbitmq@sha256:eee9afbc17c32424ba6309dfd2d9efc9b9b1863ffe231b3d2be2815758b0d649
      - docker.io/library/rabbitmq:management
      sizeBytes: 114946889
    - names:
      - docker.io/rancher/klipper-helm@sha256:87db3ad354905e6d31e420476467aefcd8f37d071a8f1c8a904f4743162ae546
      - docker.io/rancher/klipper-helm:v0.8.3-build20240228
      sizeBytes: 91162124
    - names:
      - docker.io/library/redis@sha256:b32ea6ea4d5b38496c6e93d02083e97f461cb09d6b8672462b53071236ef4b12
      sizeBytes: 45533889
    - names:
      - docker.io/library/redis@sha256:01afb31d6d633451d84475ff3eb95f8c48bf0ee59ec9c948b161adb4da882053
      - docker.io/library/redis:latest
      sizeBytes: 45533269
    - names:
      - docker.io/library/redis@sha256:5a93f6b2e391b78e8bd3f9e7e1e1e06aeb5295043b4703fb88392835cec924a0
      - docker.io/library/redis@sha256:bf2eef6365155332a8a9f86255818c8cef43f1ebb70ed0335712d596662c1510
      sizeBytes: 45532180
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:606c4c924d9edd6d028a010c8f173dceb34046ed64fabdbce9ff29b2cf2b3042
      - docker.io/rancher/mirrored-library-traefik:2.10.7
      sizeBytes: 43240420
    - names:
      - docker.io/rancher/mirrored-metrics-server@sha256:20b8b36f8cac9e25aa2a0ff35147b13643bfec603e7e7480886632330a3bbc59
      - docker.io/rancher/mirrored-metrics-server:v0.7.0
      sizeBytes: 19434712
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:aee53cadc62bd023911e7f077877d047c5b3c269f9bba25724d558654f43cea0
      - docker.io/rancher/local-path-provisioner:v0.0.26
      sizeBytes: 17182090
    - names:
      - docker.io/rancher/mirrored-coredns-coredns@sha256:a11fafae1f8037cbbd66c5afa40ba2423936b72b4fd50a7034a7e8b955163594
      - docker.io/rancher/mirrored-coredns-coredns:1.10.1
      sizeBytes: 16190137
    - names:
      - docker.io/rancher/klipper-lb@sha256:558dcf96bf0800d9977ef46dca18411752618cd9dd06daeb99460c0a301d0a60
      - docker.io/rancher/klipper-lb:v0.4.7
      sizeBytes: 4777465
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 301463
    nodeInfo:
      architecture: amd64
      bootID: 2861a063-535e-4bad-a9a8-ba0700a5fbcb
      containerRuntimeVersion: containerd://1.7.15-k3s1
      kernelVersion: 5.14.0-362.18.1.el9_3.x86_64
      kubeProxyVersion: v1.29.4+k3s1
      kubeletVersion: v1.29.4+k3s1
      machineID: fa2d6e6d175a435bbca225720b4f9a06
      operatingSystem: linux
      osImage: AlmaLinux 9.3 (Shamrock Pampas Cat)
      systemUUID: dbf91342-c893-bea5-c4b9-118c1865e420
kind: List
metadata:
  resourceVersion: ""

Let me know if anything else is required.

@brandond
Member

brandond commented Jun 15, 2024

So it is timing out accessing its own kubelet port?

Make sure you have disabled the local firewall, or opened the correct ports as documented at https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-nodes
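On an RPM-based distro like AlmaLinux, the two options above might look like this (a sketch; the port and CIDR list follows the linked K3s requirements page, and assumes the default cluster/service CIDRs):

```shell
# Option 1: disable firewalld entirely (the K3s docs recommend this for
# RHEL-family distros)
systemctl disable --now firewalld

# Option 2: keep firewalld but open what a single-node K3s server needs
firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
firewall-cmd --permanent --add-port=10250/tcp   # kubelet (logs, exec, metrics)
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16   # pod CIDR
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16   # service CIDR
firewall-cmd --reload
```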

@sjose1x
Author

sjose1x commented Jun 17, 2024

It's not working even after disabling the firewall:

[root@localhost ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; preset: enabled)
     Active: inactive (dead) since Mon 2024-06-17 09:02:15 IST; 2min 46s ago
   Duration: 2w 4d 18h 52min 31.529s
       Docs: man:firewalld(1)
    Process: 855 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 855 (code=exited, status=0/SUCCESS)
        CPU: 1.557s

Jun 17 09:02:11 localhost.localdomain systemd[1]: Stopping firewalld - dynamic firewall daemon...
Jun 17 09:02:15 localhost.localdomain systemd[1]: firewalld.service: Deactivated successfully.
Jun 17 09:02:15 localhost.localdomain systemd[1]: Stopped firewalld - dynamic firewall daemon.
Jun 17 09:02:15 localhost.localdomain systemd[1]: firewalld.service: Consumed 1.557s CPU time.
Notice: journal has been rotated since unit was started, output may be incomplete.
[root@localhost ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
auth-app-6f97875c8-2mrc7   1/1     Running   0          3d22h
[root@localhost ~]# kubectl logs auth-app-6f97875c8-2mrc7
Error from server: Get "https://192.168.180.45:10250/containerLogs/default/auth-app-6f97875c8-2mrc7/auth-app": net/http: TLS handshake timeout

@sjose1x
Author

sjose1x commented Jun 18, 2024

@brandond In this environment we have set up HTTP and HTTPS proxies for internet access. Could the requests be blocked by the proxy?

@brandond
Member

brandond commented Jun 18, 2024

It sure could be. Have you included your node IP ranges in the NO_PROXY environment variable? This is covered in the docs at https://docs.k3s.io/advanced#configuring-an-http-proxy

K3s will automatically add the cluster internal Pod and Service IP ranges and cluster DNS domain to the list of NO_PROXY entries. You should ensure that the IP address ranges used by the Kubernetes nodes themselves (i.e. the public and private IPs of the nodes) are included in the NO_PROXY list, or that the nodes can be reached through the proxy.
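To see which proxy settings the k3s service actually picked up, one can inspect its environment (a hypothetical check; the file path is where the install script writes proxy variables by default):

```shell
# k3s reads HTTP_PROXY/HTTPS_PROXY/NO_PROXY from its systemd env file
cat /etc/systemd/system/k3s.service.env

# Show the environment systemd passes to the running k3s unit
systemctl show k3s --property=Environment
```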

I don't usually see this sort of problem manifesting as TLS handshake timeouts, but rather errors from the proxy. Something is definitely causing k3s to not be able to connect back to a port on the local node, though.

@sjose1x
Author

sjose1x commented Jun 18, 2024

Logging is working after setting the NO_PROXY variable in the /etc/systemd/system/k3s.service.env file.

Thanks, Brad
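For reference, the fix might look something like this (a sketch based on the K3s proxy docs; the proxy host is a placeholder and the exact NO_PROXY ranges depend on your network):

```shell
# /etc/systemd/system/k3s.service.env
# Include the node's own IP range in NO_PROXY so the apiserver can reach
# the kubelet on https://<node-ip>:10250 without going through the proxy.
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
```

After editing the file, `systemctl daemon-reload && systemctl restart k3s` applies the change.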
