remote config via configmap and secrets #22
To flesh this out a bit more: another variant of this is that we should support fetching all this configuration from cloud user data too.
So this design would work in an obvious way outside of Kubernetes: we're using a standard API type, but we're not depending on an API server or kubelet, etc. But in a Kubernetes context, it would make total sense to fetch this data from the API server instead of just fetching over HTTP (and potentially use proper watches to react to events instead of polling). I'm thinking we probably want to make the mechanism to fetch this data pluggable. This intersects with openshift/machine-config-operator#3327 too.
A hugely interesting topic here of course is whether we should have a declarative way to set this stuff up. Today, for example, we have an imperative interface. But, following the kubelet "static pod" model, and building on the pod analogy above, perhaps we could have a declarative equivalent. (This topic also relates to coreos/rpm-ostree#2326.)
It may also be interesting here to try to support something like an annotation which says the configmap changes should be live-applied by default. In the general case we'd probably need some sort of "post-apply hook" to ask affected services to reload. Or, such services could use inotify of course (at the risk of seeing partial states).
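As a concrete sketch of that idea (the annotation names and hook mechanism below are purely hypothetical, not an existing bootc API):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chrony-config
  annotations:
    # Hypothetical: opt this configmap into live-apply on change
    bootc.example.io/live-apply: "true"
    # Hypothetical: a post-apply hook so the affected service reloads
    bootc.example.io/post-apply-hook: "systemctl try-reload-or-restart chronyd"
data:
  chrony.conf: |
    server time.example.com iburst
```

A hook like this would let the host apply the new file content and then ask the owning service to pick it up, avoiding the partial-state risk of inotify.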
To implement this I'm thinking:
Now...what many people will want is a read-only...
Trying to play around with this, the ergonomics are definitely somewhat annoying because of what configmaps can't have...

Edit: or, we could accept a...
This is part of containers#22

This interactively replaces the specified state, in a similar way as `kubectl edit`. As of right now, the only supported state to change is the desired image. But this will help unblock [configmap support]. [configmap support]: containers#22 Signed-off-by: Colin Walters <[email protected]>
Also, we should add support for kernel arguments via ConfigMap too; it'd be a really good match.
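A sketch of what that could look like; the `kargs` key name and the one-argument-per-line convention are hypothetical, not something bootc defines today:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: host-kargs
data:
  # Hypothetical convention: one kernel argument per line
  kargs: |
    mitigations=auto
    console=ttyS0,115200
```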
Dug into this a bit more, and actually we have all the bits we need for client-side support for configmaps-in-registry today, which is awesome. The main gap is documenting support for uploading them (xref containers/buildah#5091).
@cgwalters, apart from being closer to existing Kubernetes objects, what do ConfigMap/Secret bring compared to systemd-confext? Systemd expects to find the config extensions in a specific folder.
This is a complex topic. First, I think it makes sense to enable confext use on top of bootc. However, the other case here is that I think many will want support for "lifecycle binding" between configs and the OS. And yes, there is the Kubernetes orientation versus the DDI stuff.
The other thing that seems obvious to support is per-node state, i.e. associating configmaps with a specific node. This couldn't be used for pre-kubelet state (static IPs, hostname), but I can imagine it being useful for things like dynamic tuning. That said, there's definitely a coordination issue for any config changes that imply disruption (hence require drains etc.), so ultimately there'd need to be something MCO-like managing the rollout, I would say.
For clarity, when you say a ConfigMap, what would it look like? Currently, it's a single file or a list of files with their content base64-encoded. So I see it as just a way to store the data, but it could as well be the files in an OCI image that is mounted as an overlay... I like the idea of the...
Yes, though a key difference here is: if you're shipping binaries, you need to care about multi-arch in the general case. For uploading generic configuration, you don't. See #22 (comment)
https://docs.podman.io/en/v4.2/markdown/podman-play-kube.1.html exists and is usable without an API server. See also https://www.redhat.com/en/blog/running-containers-cars, which argues for reusing container-ecosystem-adjacent tooling without an API server.
Also, these things are connected, because it'd make sense to support attaching bootc configmaps which contain podman-play-kube definitions!
Yeah, I definitely see the argument for trying to keep this clean and using OCI artifacts, which seem like a better fit. That said, UX-wise, having configs just be a regular container image is very tempting too. Building them is trivial in any container image build infra, and it's more intuitive to write since the files live as files, as they would on the target system, rather than being compiled into one big YAML file. So 👍 to supporting both. Re. multi-arch, since bootc knows that it's just config files (given that you passed it to e.g. ...)
Yes, though we could easily add...
Just for the record, quadlet supports...
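To illustrate the connection to quadlet: assuming configmap keys can be targeted at the `/etc/containers/systemd/` directory that quadlet scans, a ConfigMap could carry a `.container` unit directly (the image name here is an example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-quadlet
data:
  # Would land at /etc/containers/systemd/webapp.container, where quadlet
  # discovers it and generates a systemd service at boot
  webapp.container: |
    [Container]
    Image=quay.io/examplecorp/webapp:latest

    [Install]
    WantedBy=multi-user.target
```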
Do you plan to support other APIs? Also, the Kubernetes documentation says that ConfigMaps are limited to 1 MiB:

> The data stored in a ConfigMap cannot exceed 1 MiB.
I think bootc's role is scoped just to host management, so Pod/Deployment/DaemonSet are clearly out of scope, right? Those things should be handled by podman. Whether we do a mapping around PersistentVolume for the host seems more open, but my instinct says that's just confusing and we should point administrators to systemd...
Files have a lot of metadata such as file mode, paths, owners, etc. Rather than encode all of this into an arbitrary JSON blob and stuff it inside a ConfigMap, why not use the metadata part of the ConfigMap to store these values as annotations? Here's a rough idea of some annotations that could be used for this purpose:
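For illustration, the kind of annotations meant here might look like the following; all the annotation names are hypothetical, not an existing convention:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sshd-hardening
  annotations:
    # All annotation names below are hypothetical illustrations
    config.bootc.example.io/path: /etc/ssh/sshd_config.d
    config.bootc.example.io/mode: "0600"
    config.bootc.example.io/owner: root
    config.bootc.example.io/group: root
data:
  50-hardening.conf: |
    PermitRootLogin no
```

The advantage is that the file contents stay as plain `data` keys, while the filesystem metadata rides along in standard Kubernetes metadata rather than an ad-hoc JSON blob.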
(From an organizational perspective note bootc is part of the containers/ GH org, not coreos/)
Right, this one already exists in the WIP code... I think maybe what we should do is make a shared hackmd doc, refine it into a spec, and then update the first comment here or so? As for the other metadata... maybe. So far, there hasn't been a use case for it in Kubernetes, right? I think specifically having executables in configmaps is probably something we should think of as an anti-pattern; we have container images for that. That significantly reduces the need for...
Would it make sense to enable only the configmap to be updated without updating the bootc image as well, or would these actions/lifecycles be bound together somehow? Would you be able to use an image manifest to store the configmaps with the...? These could then be customized per-architecture and distributed as an image index, so a configuration file could be applied to bootc installations regardless of the architecture.
Here are some of the ways we've discussed so far in how configs could be packaged:
ISTM those are completely orthogonal to how they're bound to the host (in the case of systemd-confext, it could be none at all, but... it could also be "managed" like the other options would be). Even in the managed option, there's e.g. ... configured in ...
I've mentioned this before OOB, but just to surface it here: from the CoreOS side we're very interested in this capability, because there the default entrypoint is not to do a custom build, but day-2 config management is still extremely useful. IOW, we want this to reduce the need for either reprovisioning (current Ignition model) or maintaining a CI pipeline (bootc model).
We had some in-person discussions related to this last week. Just wanted to jot down some of what was mentioned:
This overlaps with #7 some.
Here, the basic idea is something like (with support for OCI Artifacts, the config source can be a registry reference):

```
bootc config add [--root=/etc] registry:quay.io/examplecorp/config-server-base:latest
```

Where the fetched `config.yml` is a standard Kubernetes ConfigMap. By default, we "mount" the keys to `/etc`. Then, `bootc upgrade` looks for updates to all provided configmaps; if any change, it triggers the same upgrade logic as for the base image.

We also fetch and handle Secret objects in the same way. It'd be cool, though, to support something like handling encrypted secrets (and configmaps) which need to be decrypted via a key (which could be in a TPM or so).

We also need to think carefully about file permissions; mode 0644 for all configmap files and 0600 for secrets may make sense. In addition, we could support special annotations to override these.

(This should also work when invoked immediately after `bootc install`, to have the config ready on the first boot, i.e. we also have a `--root` argument or so.)
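For illustration, a minimal ConfigMap in this scheme might look like the following (names and contents are examples); each key under `data` would become a file under the mount root:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  # With the default root of /etc, this key would become /etc/motd
  motd: |
    Welcome to this example bootc-managed host.
  issue: |
    Managed by bootc config; local edits may be overwritten.
```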