Subscription Content Access #384
Conversation
> projected-resource CSI driver will be responsible for conducting a subject access review (SAR) check
> to ensure that the service account for the pod has permission to mount any projected resources.
>
> The full details of this component are specified in the
Noting here so I don’t forget later, but is the csi driver itself only able to read the one named secret?
the CSI driver design is generic, the intent is that it can read any configmap or secret in the cluster, because any of them could be the target of a projection/mount. Auth/admission logic is what prevents a particular user from accessing/mounting a particular secret.
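That auth check can be pictured as a SubjectAccessReview against the shared resource. A minimal sketch, where the service account, API group, and resource names are illustrative assumptions (not a settled API):

```yaml
# Sketch of the SAR the auth/admission logic might issue before allowing a
# mount. All names here are illustrative assumptions.
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # the pod's service account
  user: system:serviceaccount:app-ns:default
  resourceAttributes:
    verb: get
    group: projected-resource.storage.openshift.io
    resource: projectedresources
    name: shared-cool-secret
```

If the review comes back denied, the mount request is rejected rather than exposing the backing secret.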
If it helps @derekwaynecarr the sample yaml we settled on based on prompting from @deads2k :
```yaml
kind: ProjectedResource
apiVersion: projected-resource.storage.openshift.io/v1
metadata:
  name: shared-cool-secret
spec:
  backingResource:
    kind: Secret
    apiVersion: v1
    namespace: ns-one
    name: cool-secret
  description: "used to access my internal git repo, RHEL entitlements, or something else cool"
status:
  conditions:
  ...
```
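As a sketch of how a pod might consume such a resource, the CSI driver name and the `share` volume attribute below are assumptions for illustration, not a finalized API:

```yaml
# Hypothetical pod volume backed by the projected-resource CSI driver.
# The driver name and the "share" attribute key are assumed, not settled.
apiVersion: v1
kind: Pod
metadata:
  name: entitled-workload
spec:
  containers:
  - name: main
    image: registry.access.redhat.com/ubi8/ubi
    volumeMounts:
    - name: shared-secret
      mountPath: /etc/pki/entitlement
      readOnly: true
  volumes:
  - name: shared-secret
    csi:
      driver: projected-resource.storage.openshift.io
      volumeAttributes:
        share: shared-cool-secret
```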
Furthermore we will need a ConfigMap to mount in the right redhat.repo
file (these need to be different if you're running a rhel7 or rhel8 container).
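A hedged sketch of what such a ConfigMap could look like; the names, repo id, and field values are illustrative (real `redhat.repo` content comes from subscription-manager on a registered host):

```yaml
# Hypothetical ConfigMap carrying a RHEL 8 repo definition; everything here
# is illustrative, not a finalized schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redhat-repo-rhel8
  namespace: openshift-config
data:
  redhat.repo: |
    [rhel-8-for-x86_64-baseos-rpms]
    name = Example RHEL 8 BaseOS repo (illustrative)
    baseurl = https://cdn.example.com/rhel8/x86_64/baseos/os
    enabled = 1
    gpgcheck = 1
    sslclientkey = /etc/pki/entitlement/entitlement-key.pem
    sslclientcert = /etc/pki/entitlement/entitlement.pem
```

A RHEL 7 variant would carry different repo ids and paths, which is why the two need separate ConfigMaps.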
@gabemontero food for thought - k8s supports a ProjectedVolume which lets you combine Secrets and ConfigMaps [1]. We should separately discuss if this is something we want to support in the CSI driver API.

[1] https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#volumeprojection-v1-core
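For reference, the upstream projected volume combines both source types into a single mount; the resource names below are examples only:

```yaml
# Upstream Kubernetes projected volume combining a Secret and a ConfigMap
# into one mount point. Resource names are illustrative.
volumes:
- name: entitlement-and-repos
  projected:
    sources:
    - secret:
        name: etc-pki-entitlement
    - configMap:
        name: redhat-repo-rhel8
```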
interesting @adambkaplan ... yeah add it as a note to the appropriate Jira in this space you recently opened
> ### Version Skew Strategy
>
> Each component will be responsible for managing version skew during upgrade.
Worth discussing the behavior if the customer is using RHEL 7 hosts in a cluster.
Worth discussing how an existing cluster will get the secret when upgrading. Will we alert in the absence of the secret? What was the thinking around that?
i don't think the presence of rhel7 hosts changes this. Yes technically those hosts might have subscriptions we could hostmount, but i don't think we should hostmount them. Even if the pod is running on a rhel7 host, we could just continue using the CSI driver to get the cluster's subscription config/creds into the pods.
This could be interesting, because if a RHEL 7 host is subscribed independently of the cluster, the containers on that host already have access to entitlements. The defaults in cri-o pass those through to containers (via mounts.conf). In this scenario, "do nothing" means your container could consume the host entitlement, but adding the annotation lets you consume a different entitlement with potentially different access.
@derekwaynecarr I added this as a "Risk" - we could address by having the Subscription Injection operator apply a default annotation to every pod.
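For illustration, the opt-in could be a single pod annotation applied (or defaulted) by the injection operator; the annotation key below is hypothetical, since the proposal does not fix a name:

```yaml
# Hypothetical opt-in to entitlement injection; the annotation key is an
# assumption, not part of the proposal.
apiVersion: v1
kind: Pod
metadata:
  name: entitled-pod
  annotations:
    subscription.openshift.io/inject-entitlement: "true"  # hypothetical key
spec:
  containers:
  - name: main
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "infinity"]
```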
Updated. However, adding the annotation to every pod introduces a worse risk of pods being denied. The current proposal for the injection operator + webhook will use a custom resource to expose the entitlement, with a SAR check against the custom resource to ensure the requesting service account has permission to read the entitlement.
Is there an easy way to make a custom resource readable by every service account?
Maybe aggregate it to the view role, but the view role is not bound to every SA by default IIRC.
So we are talking about an operator that monitors namespace creation, a la what is done with the builder SA, etc.
At most it would be something a cluster admin would opt into.
Whether that is "easy" ....
Per @dmage - we can address this by granting read permissions to the `system:serviceaccount` group.
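A sketch of that grant; the role and resource names are assumptions, and note that the built-in group covering every service account in the cluster is spelled `system:serviceaccounts` (plural):

```yaml
# Hypothetical ClusterRole/ClusterRoleBinding granting every service account
# read access to the custom resource. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: projectedresource-reader
rules:
- apiGroups: ["projected-resource.storage.openshift.io"]
  resources: ["projectedresources"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: projectedresource-reader-all-sa
subjects:
- kind: Group
  name: system:serviceaccounts  # all service accounts, cluster-wide
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: projectedresource-reader
  apiGroup: rbac.authorization.k8s.io
```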
i like the idea of introducing this(the subscription manager, subscription admission webhook, as well as the CSI driver) as a community olm operator first (with no support/expectations). that seems like a good way to bypass all the upgrade/support/tech preview conversations while iterating on it. Not sure if @derekwaynecarr has thoughts on how acceptable that is. (we'd basically abandon the community operator in place once we were ready to release a supported one)
A few days later than everyone else but I've got some comments/suggestions below. Nothing too major I think.
> #### Projected-resource CSI Driver
>
> Once the entitlement key(s) and yum repository definitions are stored on the cluster, the next step
> is to share these definitions across namespaces. This task will fall to the projected-resource
make `projected-resource ...` a link to https://github.com/openshift/enhancements/blob/master/enhancements/cluster-scope-secret-volumes/csi-driver-host-injections.md
will do on the next push.
Could this possibly live on as an OLM operator if we end up supporting installation of OLM operators ootb at install time, kinda the "on-top-of.." story?
updates stemming from my last set of comments look good @adambkaplan
> The cluster subscription manager must also manage entitlement key rotation events. In the event a
> cluster's entitlement key is changed, the cluster subscription manager must update the key stored
> in `openshift-config/etc-pki-entitlement`.
Updated to include certificate rotation. Entitlement keys can be revoked and rotated.
How does the cluster subscription manager receive the key rotation events @adambkaplan? Is it the thing that registers the cluster with Red Hat's cloud manager (and the registration includes supplying some sort of callback/listener)? Or is some other component within OCP already doing the registration and recording events somewhere?
Some hint on the mechanics could be useful here IMO.
@gabemontero to be honest, I don't know enough about how entitlement rotation/revocation works today to say how this would be implemented. I can add this as an open question - presumably there are existing APIs/infra to do this in RHEL today.
Meta-enhancement to describe how workloads can access RHEL subscription content in OpenShift 4 clusters. This outlines the key features and components that will allow workloads to have entitlement keys injected. These keys and optional yum repo configurations are needed to access RHEL subscription RPMs.

The following components are proposed:

1. A CSI driver to project Secrets and ConfigMaps across namespaces, with appropriate RBAC.
2. A Cluster Subscription Manager to store entitlement keys and yum repo configurations on the cluster. This subscription manager would also be responsible for rotating entitlement keys.
3. A mutating admission webhook that allows workloads to opt into having entitlement keys injected. This webhook would be deployed via an operator, with custom resources to control configuration and access.

A separate enhancement for OpenShift Builds leveraging these capabilities is also proposed.
Force-pushed from `f31170f` to `7eb5231`.
@bparees I think this is ready for final review. Commits have been squashed.
nice job summarizing the high level pieces and how they all interact/come together to provide the overall function. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adambkaplan, bparees

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Would it be possible for users to specify their own subscription instead of only using the cluster subscription? The cluster might be operating under whatever production subscription the organization has, but developers may want to try channels not available under it.