Describe the bug
In the latest release v1.8.1, aws-cdk-lib was upgraded to v2.81.0, which included PR aws/aws-cdk#25473 fixing the security issue aws/aws-cdk#25674. This breaking change was reconciled in EKS Blueprints by creating another masters role with an equally permissive trust policy in https://github.com/aws-quickstart/cdk-eks-blueprints/pull/702/files#diff-2deda5874d3f9e558bf44658acfe823824902966d1c87c6c0eb87660f43d4d61
The effect of this is twofold:
1. When upgrading an existing CDK EKS application to blueprints 1.8.1 from a previous version, the masters role used to be created as MastersRole* by CDK EKS. That role is referenced in both the aws eks update-kubeconfig and aws eks get-token commands that cdk deploy outputs. Since the role used to authenticate to the cluster is now named something like AccessRole*, those commands changed, and users who had already updated their kubeconfig with the output command (using the MastersRole) must update it again with the new command before they can authenticate to the cluster (illustrated right after this list).
2. The newly created role keeps an equally permissive * (wildcard) trust policy, so the underlying security concern remains.
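To make point 1 concrete, here is roughly what the change looks like from the user's side; the cluster name, region, account ID and generated role names are placeholders for illustration, not the exact values cdk deploy prints:

```sh
# Pre-1.8.1 deploys printed an update-kubeconfig command pointing at the generated MastersRole
aws eks update-kubeconfig --name my-cluster --region eu-west-1 \
  --role-arn arn:aws:iam::111122223333:role/my-stack-MastersRoleXXXXXXXX

# After upgrading to 1.8.1 and redeploying, the printed command references the new role instead
aws eks update-kubeconfig --name my-cluster --region eu-west-1 \
  --role-arn arn:aws:iam::111122223333:role/my-stack-AccessRoleXXXXXXXX
```

A kubeconfig created with the first command keeps invoking aws eks get-token with the old role ARN, which fails once that role no longer exists after the upgrade.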
Expected Behavior
This breaking change (the change in the EKS masters role) should be documented in the release notes.
Current Behavior
A user would lose their kubectl access to the cluster if they created their kubeconfig via the default commands.
Reproduction Steps
1. Deploy a cluster with blueprints < 1.8.1.
2. Update kubeconfig via the aws eks update-kubeconfig command output by cdk deploy.
3. Make sure you have access to the cluster via kubectl cluster-info.
4. Upgrade the blueprints and linked dependencies to 1.8.1. Run npm i.
5. Deploy the stack with the new version. kubectl commands will not work anymore. An error is returned because the previously configured role does not exist anymore.
6. Re-run the aws eks update-kubeconfig command output from the second cdk deploy.
7. You should now be able to issue kubectl commands again (the full sequence is sketched as commands below).
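The same sequence, condensed into commands; the cluster name, region, role ARNs and the npm package specifier are assumptions for illustration:

```sh
# Steps 1-3: deploy with blueprints < 1.8.1, configure kubeconfig from the deploy output, verify access
cdk deploy
aws eks update-kubeconfig --name my-cluster --region eu-west-1 \
  --role-arn arn:aws:iam::111122223333:role/my-stack-MastersRoleXXXXXXXX
kubectl cluster-info

# Steps 4-5: upgrade blueprints and linked dependencies, redeploy; kubectl stops working
npm i @aws-quickstart/eks-blueprints@1.8.1
cdk deploy
kubectl cluster-info    # fails: the role referenced in the kubeconfig no longer exists

# Steps 6-7: rerun the update-kubeconfig command printed by the second deploy; access is restored
aws eks update-kubeconfig --name my-cluster --region eu-west-1 \
  --role-arn arn:aws:iam::111122223333:role/my-stack-AccessRoleXXXXXXXX
kubectl cluster-info
```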
Possible Solution
The change in master role between versions should be documented in the release notes.
Furthermore, we should address the security concern of creating, by default, a cluster masters role with an insecure wildcard trust policy (a quick way to inspect the deployed role's trust policy is shown below).
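As a hedged way to verify the concern on a deployed stack (the role name below is a placeholder), the generated role's trust policy can be inspected with the AWS CLI:

```sh
# Show the assume-role (trust) policy of the CDK-generated masters/access role
aws iam get-role --role-name my-stack-AccessRoleXXXXXXXX \
  --query 'Role.AssumeRolePolicyDocument' --output json
```

If the Principal effectively covers the whole account (or is a literal *), any principal in the account that is allowed to call sts:AssumeRole can obtain cluster-admin access, which is the concern raised above.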
Additional Information/Context
No response
CDK CLI Version
2.81.0 (build bd920f2)
EKS Blueprints Version
1.8.1
Node.js Version
v16.20.0
Environment details (OS name and version, etc.)
Linux
Other information
No response
Please note that, since I use SSO on my laptop to log in to AWS services, I created an IAM identity mapping in aws-auth, mapping the SSO-based role I assume in that AWS account to the cluster-admin user in the EKS cluster.
I then configured an AWS CLI profile to assume that role. When configuring the kubeconfig, I end up with something like this:
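For illustration, with the profile, cluster and region names as placeholders rather than the original values:

```sh
# Build the kubeconfig entry against the SSO-backed CLI profile; no --role-arn is passed,
# so the CDK-generated masters role plays no part in authentication
aws eks update-kubeconfig --name my-cluster --region eu-west-1 --profile my-sso-admin
```

The resulting kubeconfig then calls aws eks get-token using that profile, and authorization in the cluster comes from the aws-auth identity mapping rather than from the generated masters role.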
In this scenario, the change in master role should not cause any issues, since I am not using it to authenticate.
The security concern still stands, since the masters role still exists and can be assumed by anyone in the AWS account.
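To spell out the exposure (all names are placeholders), any principal in the account that is permitted to call sts:AssumeRole on the generated role could do something along these lines:

```sh
# Assume the CDK-generated masters/access role...
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/my-stack-AccessRoleXXXXXXXX \
  --role-session-name anyone-in-the-account
# ...export the returned temporary credentials, then point a kubeconfig at the cluster
aws eks update-kubeconfig --name my-cluster --region eu-west-1
kubectl auth can-i '*' '*'   # effectively cluster-admin via the system:masters mapping
```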