Given that RFC-0003 needs more debate, I propose we add an optional flag --default-service-account to kustomize-controller and helm-controller, so that cluster admins can set up Flux on multi-tenant clusters without having to enforce tenant impersonation with an admission controller.
When the flag is set to a non-empty value, all Kustomizations and HelmReleases that don't have spec.serviceAccountName specified will use the service account named by --default-service-account, in the namespace of the object.
When --default-service-account is not set, Flux behaves the same as before, using the cluster-admin role binding to reconcile resources.
Components:
kustomize-controller
helm-controller
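For illustration, here is a minimal sketch of how a cluster admin might enable this, assuming the flag lands as proposed. It patches the standard flux-system manifests; the service account name "tenant-reconciler" is only an example.

```yaml
# flux-system/kustomization.yaml (sketch, assumes the proposed flag exists):
# appends --default-service-account to the kustomize-controller and
# helm-controller container args. "tenant-reconciler" is illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --default-service-account=tenant-reconciler
```

With that in place, any Kustomization or HelmRelease in a tenant namespace that omits spec.serviceAccountName would be reconciled using the tenant-reconciler service account in its own namespace, instead of the controller's cluster-admin credentials.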
I like this enhancement very much; it greatly simplifies deploying Flux in multi-tenant scenarios by removing the need to deploy Gatekeeper or Kyverno alongside it.
Maybe add an ability to force SA inheritance by child KS? Suppose I want to apply a restricted SA to a single tenant, not cluster-wide; what should I do? Right now a tenant can create a child KS object without the parent's SA and do whatever they want.
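To make the concern concrete, here is a hypothetical child Kustomization a tenant could commit to their own repository. If the admin only restricted this tenant via spec.serviceAccountName on the parent object and did not set --default-service-account cluster-wide, this child object would fall back to the controller's cluster-admin binding. All names below are illustrative.

```yaml
# Hypothetical child Kustomization created by a tenant. Without a
# cluster-wide --default-service-account, the omitted
# spec.serviceAccountName means it is reconciled with the controller's
# cluster-admin role binding rather than the tenant's restricted SA.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: escape-hatch
  namespace: tenant-a
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: tenant-a-repo
  path: ./anything
  prune: true
  # spec.serviceAccountName deliberately omitted
```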