Best practices for delivering container content that executes on the host #354
To be clear, the result of this could be some documentation, or it could be code. I think if we do nothing, though, people are going to do manifestly bad things.
Yet another reason I'd like these binaries in …
See also https://issues.redhat.com/browse/SDN-695 for some older thoughts specifically on the deploying-CNI-plugins angle. We definitely need something simpler than what we're doing now. (Oh... this is an FCOS bug, and that link probably isn't public.) Well, the suggestion was to have some sort of config somewhere, along the lines of the sketch below, and then something would know how to pull the binaries out of those images and ensure they got installed correctly.
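A minimal sketch of what that config might look like; the format, field names, and image names are all invented here for illustration:

```yaml
# Hypothetical config: which container images carry CNI binaries that
# should be extracted and installed onto the host.
cniPluginInstalls:
  - image: quay.io/example/sdn-cni-plugins:latest
    binaries:
      - /usr/libexec/cni/openshift-sdn
  - image: quay.io/example/ovn-cni:latest
    binaries:
      - /usr/libexec/cni/ovn-k8s-cni-overlay
```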
Yeah, …
One thing we could do to generalize this is to have first-class support in ostree (and rpm-ostree) for transient package installs; this is strongly related to live updates, except here we'd want the package install to skip the "persistence" step. On the libostree side it'd be like … This would allow e.g. CNI to use what appears to be …
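As a sketch of how that might look from the command line, assuming a hypothetical `--transient` switch that rpm-ostree does not actually have today:

```sh
# Hypothetical: install a package as a transient overlay on the live root.
# The files land in the running system but are never committed to a new
# deployment, so they disappear on the next reboot.
rpm-ostree install --transient cni-plugins

# No rollback or uninstall step needed; rebooting discards the overlay.
systemctl reboot
```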
mrunalp suggested using …
Re: the config file, that file has to be dynamic. Who generates it?
Yeah, I don't think it could actually be a single config file. It would be more like: the CNO takes every object of a certain CRD type and combines them to generate the list of plugins to install. In particular, one of the use cases was that we wanted to make it easier for third-party network plugins to install their CNI binaries without needing to know what directories we've configured cri-o to use. In that case, no OpenShift component would know what plugin needed to be installed, so there couldn't be a single config file generated by an OpenShift component.
(The original idea partly quoted above was that everyone would just add elements to an array in the … )
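To make the CRD idea concrete, a hypothetical object of that type might look like this (the group, kind, and fields are invented for illustration; nothing here is an actual OpenShift API):

```yaml
# Hypothetical CRD instance: a third-party network plugin declares its CNI
# binaries, and the CNO aggregates all such objects into the install list,
# so the plugin never needs to know which directories cri-o is using.
apiVersion: network.example.openshift.io/v1
kind: CNIBinaryInstall
metadata:
  name: my-third-party-plugin
spec:
  image: quay.io/example/my-cni-plugin:v1.2
  binaries:
    - /usr/src/plugin/bin/my-plugin
```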
Just to make sure I understand the idea correctly: …

I may be wrong, but I think that things would have increased chances of working right (e.g. the RHEL7 vs RHEL8 openssl binary example) if you also added the libraries, i.e. if you also had a … Going further, and staying in a "container" spirit, I believe that you would practically need to do a … So in the end, I think that what confuses me is: …

So maybe to help me understand better, how do you see things running: …
No. This model is intended for binaries that need to execute in the host's mount namespace or otherwise interact with things that run on the host (the kubelet, for example). No one should use …
Basically, this is not 'in a "container" spirit'; this is abusing containers as a distribution mechanism for host binaries.
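For context, the shape of that pattern today is roughly the following (a sketch, assuming a privileged pod with `hostPID: true` and the host rootfs mounted at /host; the binary name is illustrative):

```sh
# Copy a binary out of the container image onto the host, then run it in
# the host's mount namespace (with hostPID, PID 1 is the host's init).
mkdir -p /host/run/bin
cp /usr/bin/my-host-binary /host/run/bin/
nsenter --target 1 --mount -- /run/bin/my-host-binary
```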
You're right about this; basically what one needs to do here, if one wants to support multiple operating systems, is to have e.g. rhel7/ and rhel8/ subdirectories in the container, inspect the host's OS release, and install the matching set. That said, see #354 (comment) for the "use RPMs" path.
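A sketch of that multi-OS layout, assuming the host rootfs is mounted at /host and the image ships rhel7/ and rhel8/ directories (all paths illustrative):

```sh
#!/bin/sh
# Install binaries matching the *host's* userspace, not the container's.
set -eu
. /host/etc/os-release    # sets ID and VERSION_ID for the host
case "${VERSION_ID%%.*}" in
  7) srcdir=/rhel7 ;;
  8) srcdir=/rhel8 ;;
  *) echo "unsupported host: ${ID} ${VERSION_ID}" >&2; exit 1 ;;
esac
# /run is a tmpfs, so these copies go away on reboot.
mkdir -p /host/run/bin
install -m 0755 "${srcdir}"/* /host/run/bin/
```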
This also relates to #401, which is more about persistent extensions: things one needs when the host boots, before the kubelet, etc.
Support read-only queries, but deny any attempts to make changes with a clear error. Now... in the future, we probably do want to support making "transient" overlays. See: coreos/fedora-coreos-tracker#354 (comment) Closes: ostreedev#1921
See: coreos/fedora-coreos-tracker#354

Basically we want to support things like containers that contain binaries designed to execute on the host. These should really be "lifecycle bound" to the container image. Let's at least have an obvious place to drop them that goes away on reboot.
We emphasize containers, but there are cases where code needs to execute on the host. Package layering is one way to do that; it has some advantages and major disadvantages.

In OpenShift today, we have a pattern of privileged containers that lay down and execute binaries on the host. This avoids the reboots inherent in layering (and doesn't require RPMs).

Recently, however, this failed: binaries built for e.g. RHEL7 may break on a RHEL8 host if they link against e.g. openssl. The general best practice here is that the binaries need to target the same userspace as the host. With e.g. statically linked Go/Rust-type code one can avoid most issues, but not all (and you really want to dynamically link openssl).
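As an illustration (the binary name is hypothetical), checking the linkage quickly exposes the mismatch, since RHEL7's openssl ships libssl.so.10 while RHEL8's ships libssl.so.1.1:

```sh
# A RHEL7-built binary copied onto a RHEL8 host: the old soname is missing.
$ ldd /run/bin/my-host-binary | grep libssl
        libssl.so.10 => not found
```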
See this Dockerfile which pairs with this PR.
Further, I think we should move these binaries into e.g. `/run/bin` or so. If there's a higher-level process that pulls the containers to a node on reboot (e.g. a systemd unit created via Ignition, or in the OpenShift case the kubelet), then having the binaries in `/run` helps ensure that if e.g. the container is removed, at least the binaries will go away on reboot.

That said... we may consider even shipping something like a

`/host/usr/bin/coreos-host-overlay install <name> <rootfs>`

tool that assumes the host rootfs is mounted at `/host` and handles the case where e.g. a container delivering host binaries is upgraded (or removed) before reboot; a sketch follows below.

(This problem set quickly generalizes, of course, to something like "transient RPMs".)
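A minimal sketch of what such a (so far hypothetical) tool could look like; the name, the state directory under /run, and the manifest format are all invented here:

```sh
#!/bin/sh
# Hypothetical coreos-host-overlay: copy binaries from a container rootfs
# into the host's /run/bin (a tmpfs, so gone on reboot), recording which
# files belong to which <name> so upgrades and removals can clean up.
set -eu
cmd=$1; name=$2
statedir=/host/run/coreos-host-overlay/$name

case "$cmd" in
  install)
    rootfs=$3
    # Re-installing under the same name replaces the previous set.
    [ -e "$statedir/files" ] && "$0" remove "$name"
    mkdir -p /host/run/bin "$statedir"
    for f in "$rootfs"/usr/bin/*; do
      install -m 0755 "$f" "/host/run/bin/${f##*/}"
      echo "/run/bin/${f##*/}" >> "$statedir/files"
    done
    ;;
  remove)
    while read -r f; do rm -f "/host$f"; done < "$statedir/files"
    rm -rf "$statedir"
    ;;
  *)
    echo "usage: coreos-host-overlay install|remove <name> [rootfs]" >&2
    exit 1
    ;;
esac
```

With that shape, a delivery container upgraded before reboot would rerun `install` under the same `<name>` and replace its earlier files, addressing the upgrade/removal case described above.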