
chroot can not read from /dev/urandom #9549

Closed
schoentoon opened this issue Jan 27, 2023 · 12 comments
Labels: kind/bug, lifecycle/frozen, needs-priority, needs-triage

@schoentoon
Contributor

This is basically #8680, but unlike what we thought at the time, it doesn't seem fixed on newer kernel/Fedora CoreOS/cri-o versions.

What happened:

When starting the controller, it errors with the following:

2023/01/27 16:17:52 [warn] 37#37: *1 [lua] lua_ingress.lua:25: get_seed_from_urandom(): failed to open /dev/urandom: /dev/urandom: Permission denied, context: init_worker_by_lua*
2023/01/27 16:17:52 [warn] 37#37: *1 [lua] lua_ingress.lua:59: randomseed(): failed to get seed from urandom, context: init_worker_by_lua*

What you expected to happen:

I expected the controller to start without errors.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller
Release: v1.5.1
Build: d003aae
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6


Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: libvirt virtual machine
  • OS (e.g. from /etc/os-release):
NAME="Fedora Linux"
VERSION="37.20230110.3.1 (CoreOS)"
ID=fedora
VERSION_ID=37
VERSION_CODENAME=""
PLATFORM_ID="platform:f37"
PRETTY_NAME="Fedora CoreOS 37.20230110.3.1"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:37"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=37
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=37
SUPPORT_END=2023-11-14
VARIANT="CoreOS"
VARIANT_ID=coreos
OSTREE_VERSION='37.20230110.3.1'
  • Kernel (e.g. uname -a): Linux node1 6.0.18-300.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Jan 7 17:10:00 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: I installed this cluster using kubespray (2.18, to be more precise).
  • Basic cluster related info:

  • NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION           CONTAINER-RUNTIME
    node1   Ready    control-plane,master   9d    v1.23.1   192.168.122.82    <none>        Fedora CoreOS 37.20230110.3.1   6.0.18-300.fc37.x86_64   cri-o://1.24.1
    node2   Ready    control-plane,master   9d    v1.23.1   192.168.122.176   <none>        Fedora CoreOS 37.20230110.3.1   6.0.18-300.fc37.x86_64   cri-o://1.24.1
    node3   Ready    <none>                 9d    v1.23.1   192.168.122.110   <none>        Fedora CoreOS 37.20230110.3.1   6.0.18-300.fc37.x86_64   cri-o://1.24.1

  • How was the ingress-nginx-controller installed:
    I took https://github.com/kubernetes/ingress-nginx/blob/release-1.5/deploy/static/provider/baremetal/deploy.yaml and made the modifications needed for chroot: I changed the image and added the SYS_CHROOT capability, then applied it with kubectl apply -f ingress-nginx.yml.
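
For reference, a roughly equivalent change expressed as a kubectl patch (a sketch only; the deployment name and container index are assumed from the stock manifest, and the actual edit was made directly in deploy.yaml):

kubectl -n ingress-nginx patch deployment ingress-nginx-controller --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/image",
   "value": "registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1"},
  {"op": "add", "path": "/spec/template/spec/containers/0/securityContext/capabilities/add/-",
   "value": "SYS_CHROOT"}
]'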

  • Current State of the controller:

    • kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.5.1
Annotations:  <none>
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAMESPACE       NAME                                            READY   STATUS             RESTARTS         AGE     IP                NODE    NOMINATED NODE   READINESS GATES
ingress-nginx   pod/ingress-nginx-admission-create-szrss        0/1     Completed          0                56m     10.233.92.71      node3   <none>           <none>
ingress-nginx   pod/ingress-nginx-admission-patch-fdlm6         0/1     Completed          0                56m     10.233.90.29      node1   <none>           <none>
ingress-nginx   pod/ingress-nginx-controller-598fcf4865-9bnks   1/1     Running            1                28m     10.233.90.33      node1   <none>           <none>
ingress-nginx   pod/ingress-nginx-controller-598fcf4865-h4zvd   0/1     Terminated         0                56m     10.233.92.72      node3   <none>           <none>
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name:         ingress-nginx-controller-598fcf4865-9bnks
Namespace:    ingress-nginx
Priority:     0
Node:         node1/192.168.122.82
Start Time:   Fri, 27 Jan 2023 17:45:35 +0100
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=598fcf4865
Annotations:  cni.projectcalico.org/containerID: d2fd40e01752dbbe6daeac2551c407167992f957dcf36b6c445a0fe85a8c838e
              cni.projectcalico.org/podIP: 10.233.90.33/32
              cni.projectcalico.org/podIPs: 10.233.90.33/32
Status:       Running
IP:           10.233.90.33
IPs:
  IP:           10.233.90.33
Controlled By:  ReplicaSet/ingress-nginx-controller-598fcf4865
Containers:
  controller:
    Container ID:  cri-o://8990cfd588f100bf2f846ac428e2ce47ec9d0374cd68b399b7585e193d77c032
    Image:         registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1
    Image ID:      registry.k8s.io/ingress-nginx/controller-chroot@sha256:404043cd0073e4cafe4e68a785ae76b4a67f24d7a58d8a3487e915f24a2db0cd
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Fri, 27 Jan 2023 18:06:16 +0100
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-598fcf4865-9bnks (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rqc6v (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-rqc6v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason        Age   From                      Message
  ----     ------        ----  ----                      -------
  Normal   Scheduled     31m   default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-598fcf4865-9bnks to node1
  Normal   Pulling       31m   kubelet                   Pulling image "registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1"
  Normal   Pulled        31m   kubelet                   Successfully pulled image "registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1" in 23.574040172s
  Normal   Created       31m   kubelet                   Created container controller
  Normal   Started       31m   kubelet                   Started container controller
  Normal   RELOAD        31m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  Warning  NodeNotReady  11m   node-controller           Node is not ready
  Warning  FailedMount   10m   kubelet                   MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulled        10m   kubelet                   Container image "registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1" already present on machine
  Normal   Created       10m   kubelet                   Created container controller
  Normal   Started       10m   kubelet                   Started container controller
  Normal   RELOAD        10m   nginx-ingress-controller  NGINX reload triggered due to a change in configuration
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.5.1
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.233.42.169
IPs:                      10.233.42.169
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32275/TCP
Endpoints:                10.233.90.33:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30702/TCP
Endpoints:                10.233.90.33:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30847
Events:                   <none>
  • Current state of ingress object, if applicable:
    This is just a testing cluster, so not applicable.

    • kubectl -n <appnamespace> get all,ing -o wide
    • kubectl -n <appnamespace> describe ing <ingressname>
    • If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
  • Others:

    • Any other related information, like:
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

Anything else we need to know:

@schoentoon schoentoon added the kind/bug Categorizes issue or PR as related to a bug. label Jan 27, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Jan 27, 2023
@Volatus
Contributor

Volatus commented Jan 31, 2023

@schoentoon Could you check how the filesystems are mounted in the container? If they are mounted with the nodev option, that would inhibit block and character special device operations.
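
For illustration, the nodev effect can be reproduced on a scratch tmpfs (a sketch; run as root on a test host, paths are illustrative):

# Device nodes on a filesystem mounted with nodev cannot be opened.
mkdir -p /mnt/nodev-test
mount -t tmpfs -o nodev tmpfs /mnt/nodev-test
mknod /mnt/nodev-test/urandom c 1 9   # same major/minor as /dev/urandom
chmod 666 /mnt/nodev-test/urandom
head -c 16 /mnt/nodev-test/urandom    # fails with "Permission denied"
umount /mnt/nodev-test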

@schoentoon
Contributor Author

It looks like / is, but /dev isn't. Not sure whether that matters or not.

$ kubectl exec -it --namespace ingress-nginx ingress-nginx-controller-598fcf4865-9bnks -- /bin/sh
/chroot/etc/nginx $ mount
overlay on / type overlay (rw,nodev,relatime,seclabel,lowerdir=/var/lib/containers/storage/overlay/l/H66SR4LHSWGRWJHJ5GYUT37B5L:/var/lib/containers/storage/overlay/l/J4AA262VH4LRDGBZOULIGKXQSB:/var/lib/containers/storage/overlay/l/DYQ3P6K7OWCGFLX62WPH4UVG7Z:/var/lib/containers/storage/overlay/l/YCM7KTR6RLZLKITYVXYYDEVUUI:/var/lib/containers/storage/overlay/l/4E3CUFTIHKEBCGFO2CYSRS7ZJT:/var/lib/containers/storage/overlay/l/FVIVWB7OI525NAF6WA2QAXDXTQ:/var/lib/containers/storage/overlay/l/7NMUXOIFZQHROTGP3ZGAODCWIN:/var/lib/containers/storage/overlay/l/DCY6VIAHMZA3HVGXKMT3GOEHPL:/var/lib/containers/storage/overlay/l/NMZHSQOG3TTBWQ7LE4PPFLCIDD:/var/lib/containers/storage/overlay/l/XRQ7MBGRTVUDGV4ZLLFNN54CQF:/var/lib/containers/storage/overlay/l/SNB5TSC2VMFCG5I2RQ5P6AYOVZ:/var/lib/containers/storage/overlay/l/OP5AT6VAXU6TPHKPVB45QAWAQ2:/var/lib/containers/storage/overlay/l/34IDGNNUAIN25QDDEOFFI7BQPM,upperdir=/var/lib/containers/storage/overlay/f41713e7727fc9d9933adae5a73785a316e19ee960a4fc8c8002ce5d59c267c8/diff,workdir=/var/lib/containers/storage/overlay/f41713e7727fc9d9933adae5a73785a316e19ee960a4fc8c8002ce5d59c267c8/work,metacopy=on,volatile)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=666)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime,seclabel)
cgroup on /sys/fs/cgroup type cgroup2 (ro,nosuid,nodev,noexec,relatime,seclabel)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k,inode64)
tmpfs on /etc/resolv.conf type tmpfs (rw,nosuid,nodev,noexec,seclabel,size=802260k,nr_inodes=819200,mode=755,inode64)
tmpfs on /etc/hostname type tmpfs (rw,nosuid,nodev,seclabel,size=802260k,nr_inodes=819200,mode=755,inode64)
tmpfs on /run/.containerenv type tmpfs (rw,nosuid,nodev,seclabel,size=802260k,nr_inodes=819200,mode=755,inode64)
/dev/vda4 on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/vda4 on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
tmpfs on /run/secrets type tmpfs (rw,nosuid,nodev,seclabel,size=802260k,nr_inodes=819200,mode=755,inode64)
tmpfs on /usr/local/certificates type tmpfs (ro,relatime,seclabel,size=3384612k,inode64)
tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel,size=3384612k,inode64)
proc on /proc/asound type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/fs type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/irq type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime,seclabel,inode64)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755,inode64)
tmpfs on /proc/keys type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755,inode64)
tmpfs on /proc/latency_stats type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755,inode64)
tmpfs on /proc/timer_list type tmpfs (rw,nosuid,seclabel,size=65536k,mode=755,inode64)
tmpfs on /proc/scsi type tmpfs (ro,relatime,seclabel,inode64)
tmpfs on /sys/firmware type tmpfs (ro,relatime,seclabel,inode64)
/chroot/etc/nginx $

@Volatus
Contributor

Volatus commented Feb 1, 2023

It looks like / is, but /dev isn't. Not sure whether that matters or not.
[mount output quoted in full above]

/dev seems mounted properly. Can you check ls -la | grep rand? Also, can you try running cat /dev/urandom | head? It's weird that you wouldn't be able to read from it as root.

@Volatus
Contributor

Volatus commented Feb 1, 2023

/assign

@Volatus
Contributor

Volatus commented Feb 1, 2023

It looks like / is, but /dev isn't. Not sure whether that matters or not.
[mount output quoted in full above]

What's the version of CRI-O?

@schoentoon
Contributor Author

I'm not root inside the container though. As for your questions, here you go.

/chroot/etc/nginx $ id
uid=101(www-data) gid=82(www-data) groups=82(www-data)
/chroot/etc/nginx $ cd /dev/
/dev $ ls -la | grep rand
crw-rw-rw-    1 root     root        1,   8 Feb  1 08:18 random
crw-rw-rw-    1 root     root        1,   9 Feb  1 08:18 urandom
/dev $ cd /chroot/dev/
/chroot/dev $ ls -la | grep rand
crw-rw-rw-    1 root     root        1,   8 Nov  8 22:47 random
crw-rw-rw-    1 root     root        1,   9 Nov  8 22:47 urandom
/chroot/dev $ cat /dev/urandom | head
<snip gibberish>
/chroot/dev $ cat /chroot/dev/urandom | head
cat: can't open '/chroot/dev/urandom': Permission denied
/chroot/dev $

It does seem a bit weird to me that the creation dates of the devices in /chroot/dev are so far in the past, though; I thought those were supposed to be created by the init container?

cri-o version is 1.24.1
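
For reference, this is consistent with the nodev theory above: /dev is a separate tmpfs mounted without nodev, while /chroot/dev has no mount entry of its own and sits on the overlay root, which is mounted nodev. One way to confirm from inside the container (findmnt may not be present in the image; /proc/self/mounts shows the same information):

findmnt -T /dev/urandom          # resolves to the tmpfs on /dev, no nodev
findmnt -T /chroot/dev/urandom   # falls back to the overlay on /, mounted nodev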

@longwuyuan
Contributor

@schoentoon do you know where/how/why that seed is getting used in lua_ingress.lua?

@schoentoon
Contributor Author

I have no idea about that; I just run the controller as shown in the deploy folder of this repository. The only modifications I made were for the chroot.
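
For reference, one way to locate where the seed is read, using the pod shown earlier (the lua path inside the chroot image is an assumption based on the /chroot/etc/nginx prompt above):

kubectl -n ingress-nginx exec -it ingress-nginx-controller-598fcf4865-9bnks -- \
  grep -n urandom /chroot/etc/nginx/lua/lua_ingress.lua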

@github-actions

This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions or want this prioritized, please reach out on #ingress-nginx-dev on Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Mar 13, 2023
@longwuyuan
Contributor

The project has decided to deprecate the chrooted image, as its end goal of increasing the controller's security is being implemented in the regular image.

The project also needs to minimize the support/maintenance burden of features that are not directly implied by, or closely tied to, the Ingress API spec, because resources such as developer time are scarce. Parallel efforts are in progress to implement the Gateway API.

Since this issue is adding to the tally of open issues without any action item, I will close this issue now.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

[the comment above, quoted in full]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
