diff --git a/enhancements/machine-config/auth-token.md b/enhancements/machine-config/auth-token.md
new file mode 100644
index 00000000000..5da6c683667
--- /dev/null
+++ b/enhancements/machine-config/auth-token.md

---
title: mco-auth-token
authors:
  - "@cgwalters"
reviewers:
  - "@crawford"
approvers:
  - "@ashcrow"
  - "@crawford"
  - "@imcleod"
  - "@runcom"
creation-date: 2020-08-05
last-updated: 2020-08-19
status: provisional
see-also:
replaces:
superseded-by:
---

# Support a provisioning token for the Machine Config Server

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)

## Summary

This enhancement proposes that new (e.g. 4.7+) installations require a "provisioning token" by default in order to access the Machine Config Server.

For IPI/machineAPI scenarios, this will be handled fully automatically.

For user-provisioned installations, `openshift-install` will generate a token internally via the equivalent of:

```
$ dd if=/dev/urandom bs=32 count=1 | base64 > token
$ oc -n openshift-config create secret generic machineconfig-auth --from-file=token
```

This token will then be injected into the "pointer Ignition configs" it creates.

## Motivation

See https://github.com/openshift/machine-config-operator/pull/784

The Ignition configuration can contain secrets, and we want to avoid it being accessible both inside and outside the cluster.

### Goals

- Increase security
- Avoid breaking upgrades
- Easy to implement
- Easy to understand for operators
- Eventually support being replaced by something more sophisticated (e.g.
TPM2 attestation)
- Avoid too strong a dependence on platform-specific functionality (i.e. UPI AWS should remain close to UPI bare metal)

### Non-Goals

- Completely reworking major parts of today's provisioning process

## Proposal

Change the Machine Config Server to support requiring a token in order to retrieve the Ignition config for new installs.

In IPI/machineAPI managed clusters, `openshift-install` and the MCO would automatically handle this end to end; the initial implementation would work similarly to the UPI case, but may change in the future to be even stronger.

### Implementation Details/Notes/Constraints

For UPI, the documentation would need to be enhanced to describe this.

For IPI, it is now possible to implement automatic rotation because the `user-data` provided to nodes is owned by the MCO: https://github.com/openshift/enhancements/blob/master/enhancements/machine-config/user-data-secret-managed.md

In order to avoid race conditions, the MCS might accept the "previous" token for a limited period of time (e.g. one hour or one day).

#### IPI/machineAPI details

It is probably easiest to start by automatically generating a rotating secret that works the same way as in UPI, and also (per above) accepting the previous token for a period of time.

### Risks and Mitigations

#### Disaster recovery

See: [Disaster recovery](https://docs.openshift.com/container-platform/4.5/backup_and_restore/disaster_recovery/about-disaster-recovery.html)

This proposes that the secret for provisioning a node is stored in the cluster itself. But so is the configuration for the cluster, so this is not a new problem.

That said, see "Rotating the token" below.

#### Non-Ignition components fetching the Ignition config

As part of the Ignition spec 3 transition, we needed to deal with non-Ignition consumers of the Ignition config, such as [openshift-ansible](https://github.com/openshift/openshift-ansible/) and Windows nodes.
These should generally fetch the rendered `MachineConfig` object from the cluster instead, and not access the MCS port via TCP.

#### Rotating the token

In UPI scenarios, we should document rotating the token, though doing so incurs some danger if the administrator forgets to update the pointer configuration. We should recommend against administrators rotating the token *without* also implementing a "periodic reprovisioning" policy that validates they can restore worker (and ideally control plane) machines.

Or better, migrate to an IPI/machineAPI managed cluster.

That said, we will need to discuss this in a "node provisioning" section in the documentation, and also link to it from the disaster recovery documentation, so that an administrator trying to re-provision a failed control plane node has some guidance if they forgot to update the pointer configuration for the control plane.

This scenario would be easier to debug if [Ignition could report failures to the MCS](https://github.com/coreos/ignition/issues/585).

### Upgrades

This will not apply by default on upgrades, even in IPI/machineAPI managed clusters (to start). However, an administrator absolutely could enable this "day 2". Particularly for "static" user-provisioned infrastructure where new nodes are mostly manually provisioned, the burden of adding the token to the provisioning process would likely be quite low.

For existing machineAPI managed installs, it should be possible to automatically adapt them to use this, but we would need to test and roll out such a change very carefully; that would be done as a separate phase.

## Alternatives

#### Firewalling

Restricting access to the MCS port with firewall rules is not obvious to system administrators in e.g. bare metal environments, and is difficult to enforce with 3rd-party SDNs.
#### Move all secret data out of Ignition

This would gate access on CSR approval instead, but it introduces bootstrapping problems for the kubelet, such as fetching the pull secret.

#### Encrypt secrets in Ignition

Similar to the above, we have a bootstrapping problem around fetching the decryption key.

#### Improved node identity

@cgwalters believes that we should support [Trusted Platform Module](https://en.wikipedia.org/wiki/Trusted_Platform_Module)s for the whole provisioning process, including [CSR approval](https://github.com/openshift/cluster-machine-approver). But this is a much bigger change, and while TPM2 is available in GCP and on most modern bare metal, it would not apply in e.g. AWS, and we need a "baseline" solution soon.
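Returning to the baseline proposal for concreteness: since Ignition spec 3.1 supports attaching HTTP headers to config fetches, the pointer config generated by `openshift-install` could carry the token roughly as follows. The header name, URL, and placement are illustrative assumptions, not a settled interface:

```json
{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [
        {
          "source": "https://api-int.example.com:22623/config/worker",
          "httpHeaders": [
            { "name": "Authorization", "value": "Bearer <contents of the generated token>" }
          ]
        }
      ]
    }
  }
}
```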