
eksctl create iamserviceaccount with --override-existing-serviceaccounts does not update existing serviceaccounts #2665

Closed
kishorj opened this issue Sep 23, 2020 · 5 comments
Labels
kind/feature New feature or request priority/important-soon Ideally to be resolved in time for the next release

Comments

kishorj commented Sep 23, 2020

What happened?
I had an existing k8s ServiceAccount object, and when I ran eksctl create iamserviceaccount with --override-existing-serviceaccounts, the serviceaccount was not updated with the eks.amazonaws.com/role-arn annotation.

$ eksctl create iamserviceaccount  --override-existing-serviceaccounts --cluster=my-cluster --namespace=kube-system --name=my-controller --attach-policy-arn=arn:aws:iam::<account_id>:policy/ALBIngressControllerIAMPolicy --approve
[ℹ]  eksctl version 0.28.1
[ℹ]  using region us-west-2
[ℹ]  3 existing iamserviceaccount(s) (kube-system/alb-xxx,kube-system/my-controller,kube-system/ebs-xxx) will be excluded
[ℹ]  1 iamserviceaccount (kube-system/my-controller) was excluded (based on the include/exclude rules)
[!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
[ℹ]  no tasks

I had to delete the existing role via eksctl delete iamserviceaccount first and run eksctl create iamserviceaccount again for the serviceaccount object to get updated.

What you expected to happen?
When the --override-existing-serviceaccounts flag was specified, I expected the k8s serviceaccount to get updated with the annotation.

How to reproduce it?

  1. eksctl create iamserviceaccount --cluster=my-cluster --namespace=kube-system --name=my-controller --attach-policy-arn=arn:aws:iam::<account_id>:policy/ALBIngressControllerIAMPolicy --approve
  2. kubectl delete serviceaccount my-controller -n kube-system
  3. run step 1 again with the additional --override-existing-serviceaccounts option
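For reference, the behavior being requested amounts to a single annotation on the ServiceAccount. A minimal sketch of what the manifest should look like after a successful run, with a grep check for the annotation key (the role ARN below is a hypothetical placeholder, not taken from this issue):

```shell
# Sample ServiceAccount manifest as eksctl would leave it after annotating.
# The role ARN is a hypothetical placeholder for illustration only.
cat <<'EOF' > /tmp/my-controller-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-controller-role
EOF

# Check that the IRSA annotation is present. This is exactly what is
# missing in the reproduction above: after step 3, the SA has no
# eks.amazonaws.com/role-arn annotation.
grep -c 'eks.amazonaws.com/role-arn' /tmp/my-controller-sa.yaml
```

In the failing case described above, the same check against the live SA (e.g. `kubectl get sa my-controller -n kube-system -o yaml`) finds no match, since the annotation never gets applied.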

Anything else we need to know?

Versions
Please paste in the output of these commands:

$ eksctl version
0.28.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.0-beta.0.192+a79c711191d5c0", GitCommit:"xxxx", GitTreeState:"clean", BuildDate:"2020-05-26T22:07:45Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Logs

@michaelbeaumont michaelbeaumont added the priority/important-soon Ideally to be resolved in time for the next release label Sep 23, 2020

michaelbeaumont commented Sep 23, 2020

Hi, was the SA created outside of eksctl? That would be the reason it didn't work. I agree this should be fixed, and the log output isn't especially helpful.


kishorj commented Sep 23, 2020

If the SA gets deleted, whether created by eksctl or outside, eksctl will not create/update the SA even if it is run with --override-existing-serviceaccounts.

@cdenneen

@kishorj the only reason I can think of here is that eksctl relies heavily on existing stacks: whether or not a stack exists for a resource determines its behavior. So if you do a kubectl delete sa foo but the stack still exists, then I can see why eksctl didn't recreate it. You'd need to delete the SA via eksctl in order for the stack to get deleted.

I've run into a similar issue with nodegroup updates. Back in 0.24.0 I was able to use eksctl to update an existing, non-eksctl managed nodegroup; I could issue an update on the nodegroup to the next Kubernetes version. However, with 0.28.1 it scanned for a matching stack for the managed nodegroup, couldn't find it, and refused to upgrade via the API.

I think scanning for stacks is helpful, but eksctl should always fall back to API calls as the source of truth (the Kubernetes API in the case of the service account, or the AWS API in the case of the managed nodegroup).
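One way to check this theory is to look for a leftover per-SA CloudFormation stack; eksctl names them along the lines of eksctl-&lt;cluster&gt;-addon-iamserviceaccount-&lt;namespace&gt;-&lt;name&gt; (naming sketched from observed behavior, not guaranteed across versions). A minimal simulation of that check against a canned stack list (all names hypothetical):

```shell
# Simulated stack-name list, as you might get from
# `aws cloudformation list-stacks` on a real cluster.
# Cluster and SA names are hypothetical examples.
cat <<'EOF' > /tmp/stacks.txt
eksctl-my-cluster-addon-iamserviceaccount-kube-system-my-controller
eksctl-my-cluster-addon-iamserviceaccount-kube-system-alb-xxx
eksctl-my-cluster-cluster
EOF

# If a stack matching the SA still exists, eksctl treats the SA as
# already created and excludes it, even after `kubectl delete sa`.
grep -c 'iamserviceaccount-kube-system-my-controller' /tmp/stacks.txt
```

If this finds a match while the ServiceAccount itself is gone from the cluster, you are in exactly the stale-stack state described above, and `eksctl delete iamserviceaccount` is currently the way out.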

@michaelbeaumont
Contributor

Ultimately, this is because create iamserviceaccount is not intended to reconcile the config with the cluster; i.e. --override-existing-serviceaccounts is intended for cases where the Kubernetes SA exists but the IAM stack does not.
But it's definitely a worthy feature, which we're tracking at #1497.

@michaelbeaumont
Contributor

Duplicate of #1497
