
Cannot get efs-csi-node to Assume Role #746

Closed

sinkr opened this issue Jul 29, 2022 · 13 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


sinkr commented Jul 29, 2022

/kind bug

What happened?

Hello, thanks for your help in advance!

Chart 2.2.7 (I have also tried chart 2.1.5 to no avail):

efs-csi-controller, according to CloudWatch, correctly assumes the EFS role using the policy prescribed in the documentation here and creates an access point; however, efs-csi-node, with the same annotation, will not assume the role.

When I exec into an efs-csi-node pod, install the AWS CLI, and run aws sts get-caller-identity, the assumed role is correct.

The annotation is set correctly on the DaemonSet's service account, and the AWS_ROLE_ARN environment variable is correctly set on the pod. However, while I see efs-csi-controller correctly assuming the role for activities such as CreateAccessPoint, anything coming from efs-csi-node comes across as ANONYMOUS_PRINCIPAL.
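
For reference, the annotated service account looks roughly like this (a minimal sketch: the role ARN, account ID, and service account name are placeholders; the actual name comes from the Helm chart values):

```yaml
# Sketch only: the role ARN is a placeholder, and the service account name
# should match whatever the Helm chart creates for the node DaemonSet.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-node-sa          # assumed name; check the chart values
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/EFSCSIDriverRole
```

Both the controller and the node service accounts carry this annotation, and both are allowed in the role's trust policy (steps 3 and 5 below).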

What you expected to happen?
I expect efs-csi-node to assume the EFS-specific role using the policy prescribed in the documentation, just like efs-csi-controller correctly does, per the annotation on the SA and the AWS_ROLE_ARN shown on the pod.

How to reproduce it (as minimally and precisely as possible)?

  1. Helm install chart v2.2.7
  2. Create the IAM role w/ the prescribed policy in the documentation here.
  3. Update the trust policy to allow both node and controller service accounts to assume the IAM role.
  4. Apply the EFS policy allowing the IAM role access to the EFS filesystem.
  5. Annotate the service accounts for both the controller and node w/ the IAM role from step 2 above.
  6. Create the storage class using the FSid in examples/kubernetes/dynamic_provisioning/specs (a sketch follows this list).
  7. Create the pod using pod.yaml in examples/kubernetes/dynamic_provisioning/specs
  8. Inspect the efs-csi logs from Kubernetes (permission denied)
  9. Inspect the CloudWatch logs for all elasticfilesystem activity in the last minute (see that efs-csi-controller correctly assumes the EFS role and that efs-csi-node comes across as ANONYMOUS_PRINCIPAL).
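
For step 6, the storage class is roughly the following (a sketch based on the dynamic provisioning example; the name and file system ID are placeholders):

```yaml
# Sketch of the dynamic-provisioning StorageClass; fs-0123456789abcdef0 is a
# placeholder for the real file system ID.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap        # provision one access point per volume
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
```

The pod from the same examples directory (step 7) then claims a PVC backed by this class.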

Anything else we need to know?:

efs-csi-controller correctly assumes the role, but
efs-csi-node does not attempt to assume the role and issues no error, even at logLevel 9; instead it attempts to mount EFS as ANONYMOUS_PRINCIPAL.

Environment

  • Kubernetes version (use kubectl version):
    EKS 1.22

  • Driver version:
    1.4.0

@k8s-ci-robot k8s-ci-robot added the kind/bug label Jul 29, 2022
@sinkr sinkr changed the title from "Cannot Get efs-csi-node to Assume Role" to "Cannot get efs-csi-node to Assume Role" Jul 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Oct 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 26, 2022

sinkr commented Jun 16, 2023

Ping, this is still an issue with 1.5.4.


sinkr commented Jun 16, 2023

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Jun 16, 2023
@k8s-ci-robot

@sinkr: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nicoodle

Seeing the same on v1.5.6

@gadiener

Same on v1.5.7


andrewhharmon commented Sep 7, 2023

As a workaround, I think you can specify the iam mount option on the StorageClass or the PV. That seemed to resolve it for me. I set a file system policy and it started failing because it was coming in as anonymous.

#280
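
For anyone on static provisioning, the same workaround on a PersistentVolume looks roughly like this (a sketch; the names and file system ID are placeholders):

```yaml
# Sketch of a statically provisioned PersistentVolume using the iam mount
# option so the mount helper authenticates with IAM instead of mounting
# anonymously. fs-0123456789abcdef0 is a placeholder file system ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                  # EFS ignores this, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls
    - iam
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0
```

The same two mountOptions can go on the StorageClass instead, as in the comment below.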


goyertp commented Sep 20, 2023

Hello, I have added the iam option. Creating the PVC and mounting the volume in my pod is successful, but a closer look at the AWS organization trail reveals the problem.

The access is anonymous.

Here is an example:
{type=AWSAccount, principalid=, arn=null, accountid=ANONYMOUS_PRINCIPAL, invokedby=null, accesskeyid=null, username=null, sessioncontext=null}

My StorageClass seems to be set up correctly:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs
provisioner: efs.csi.aws.com
mountOptions:
  - tls
  - iam

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 20, 2024