
container has runAsNonRoot and image will run as root #31

Open
yobome opened this issue Feb 23, 2021 · 9 comments

@yobome
Contributor

yobome commented Feb 23, 2021

First I tried

helm install jupyterhub-ssh jupyterhub-ssh/jupyterhub-ssh --version 0.0.1-n077.h0c9caba --set hubUrl=172.17.45.96 --set-file hostKey="/root/id_rsa" --namespace jhub

and I got this error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0].persistentVolumeClaim): missing required field "claimName" in io.k8s.api.core.v1.PersistentVolumeClaimVolumeSource
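
(For reference, this validation error means the rendered manifest contains a persistentVolumeClaim volume with no claimName set. A valid entry would look roughly like the sketch below; the volume and claim names are just illustrative:)

    volumes:
      - name: home                       # illustrative volume name
        persistentVolumeClaim:
          claimName: some-existing-claim  # claimName is the required field the error complains about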

Referring to issue #24, I git cloned the repo to my host and used this command:

helm install jupyterhub-ssh jupyterhub-ssh/jupyterhub-ssh -f values.yaml --version 0.0.1-n077.h0c9caba --set hubUrl=172.17.45.96 --set-file hostKey="/root/id_rsa" --namespace jhub

(I changed sftp.enable to false in "values.yaml")

and I got this:

NAMESPACE     NAME                                        READY   STATUS                       RESTARTS   AGE
jhub          jupyterhub-ssh-c7446f988-wxwqr              0/1     CreateContainerConfigError   0          43s

Then I checked the pod and got this error event:

  Type     Reason     Age               From               Message
  Warning  Failed     1s (x3 over 38s)  kubelet            Error: container has runAsNonRoot and image will run as root (pod: "jupyterhub-ssh-c7446f988-wxwqr_jhub(f313c345-6767-42f7-abd6-020e396a3fc8)", container: server)

I think the user should not be given root privileges in the jhub pod. What should I do?
I would appreciate it if you could help me. 😄

@yuvipanda
Owner

Ah, that's strange - we do set the UID to be non-root in the Dockerfile, but that doesn't seem to pass through here.

We could set runAsUser explicitly to 1000 in https://github.com/yuvipanda/jupyterhub-ssh/blob/main/helm-chart/jupyterhub-ssh/templates/ssh/deployment.yaml#L36 to fix that. Would love it if you could make a PR :D
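
Something like this under the server container in that template should do it (a sketch; the exact indentation depends on where in the container spec it ends up):

    securityContext:
      runAsUser: 1000   # matches the non-root UID the image is built to run as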

@consideRatio
Collaborator

consideRatio commented Feb 23, 2021

I assume that runAsNonRoot, as a pod security policy, doesn't know that the container image will start as non-root, and so requires the user to be set explicitly.

I suggest adding a containerSecurityContext configuration option, of which runAsUser is a k8s-native option that can be set in the default values.yaml.

@yobome
Contributor Author

yobome commented Feb 24, 2021

containerSecurityContext:
  runAsUser: 1000

Like this? Where should I add it?

@consideRatio
Collaborator

consideRatio commented Feb 24, 2021

@yobome yepp like that.

As this Helm chart contains two separate k8s Deployments with their associated pods, could you add the logic for both?

  1. Add ssh.containerSecurityContext and sftp.containerSecurityContext to be {} by default in values.yaml.
  2. Visit deployment.yaml for both ssh and sftp and add something like this to both.
              {{- with .Values.ssh.containerSecurityContext }}
              securityContext:
                {{- . | toYaml | trimSuffix "\n" | nindent 12 }}
              {{- end }}
  3. Update values.yaml to default to runAsUser: 1000 for the deployment you considered; I don't know if that was for ssh or sftp specifically.
  4. Bonus: change the default for the other deployment as well, to match what the Dockerfile uses.
  5. Bonus: consider other related defaults. A sensible default for a containerSecurityContext could be the following, for example (a sketch of the rendered result follows after this list):
        containerSecurityContext:
          runAsUser: 65534 # nobody user (the main point it isn't root)
          runAsGroup: 65534 # nobody group (the main point it isn't root)
          allowPrivilegeEscalation: false
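
With those defaults, the rendered server container should end up looking roughly like this (a sketch of the rendered output, not of the template itself; other container fields are omitted):

    containers:
      - name: server
        securityContext:
          runAsUser: 65534              # nobody user
          runAsGroup: 65534             # nobody group
          allowPrivilegeEscalation: false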

@yobome
Contributor Author

yobome commented Feb 24, 2021

That's why I love open source :) 😄, thanks for guiding me.

(Actually I don't quite understand what "Bonus" means, because of my English. I mean that I know what "bonus" is, but I don't know what the word means in your context.
Is it a suggestion for my project? Do you advise me to change my other deployment?
Or do you advise me to make a PR to this project? I didn't see anything else that needs fixing.)

I'm still new to this project, but I'll try to make a PR if I can.

Thank you all again. 👍

@consideRatio
Collaborator

I'm not a native English speaker myself either; I'm not sure it's a sensible way to use the word "bonus" like that ;D

What I meant with "bonus" was that it would be a relevant change in addition to the others, but not a required one. Also, I was only considering this specific GitHub repository, but it contains two Helm templates representing two separate k8s Deployment resources, each of which would benefit from a container securityContext.

I appreciate your positive spirit @yobome, thanks for your contributions ❤️ 🎉!

@yobome
Contributor Author

yobome commented Feb 26, 2021


I tried these:

  1. add ssh.containerSecurityContext and sftp.containerSecurityContext and set them in values.yaml to:
       containerSecurityContext:
         runAsUser: 1000
  2. add the following before the last line in both the sftp and ssh deployment.yaml:
       {{- with .Values.ssh.containerSecurityContext }}
       securityContext:
         {{- . | toYaml | trimSuffix "\n" | nindent 12 }}
       {{- end }}

But I still get the error: container has runAsNonRoot and image will run as root (pod: "jupyterhub-ssh-f88c485dc-gwhvl_jhub(87d98be5-89af-41e6-bead-555a3de62642)", container: server)

Sad :(

@yuvipanda
Owner

@yobome if you run your helm upgrade or helm install commands with --debug and --dry-run, it will show you the generated manifests before applying them. When you do that, do these values show up correctly?
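
For example, reusing the install command from earlier (same chart reference and flags, just with the two extra options added):

    helm upgrade --install jupyterhub-ssh jupyterhub-ssh/jupyterhub-ssh -f values.yaml --version 0.0.1-n077.h0c9caba --set hubUrl=172.17.45.96 --set-file hostKey="/root/id_rsa" --namespace jhub --dry-run --debug

If the values rendered correctly, securityContext should show up under the server container in the rendered Deployment manifest.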

@consideRatio
Collaborator

When I'm debugging rendering of templates, I typically do...

    helm template myhelmrelease <chart reference>

    helm template myhelmrelease <chart reference> --validate

    helm template myhelmrelease <chart reference> --show-only templates/ssh/deployment.yaml
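
For this particular issue, you can also pipe the last one through grep to see whether the container securityContext made it into the rendered manifest (the chart path below assumes you run it from a local clone of this repo):

    helm template myhelmrelease ./helm-chart/jupyterhub-ssh --show-only templates/ssh/deployment.yaml | grep -A 4 securityContext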
