Network configuration hotplug support #241
Comments
@mcastelino Is there a concrete plan for this feature? Looking forward to it.
@mcastelino Do we have any plan for this feature? If you need some extra help, feel free to contact us. :)
@guangxuli @amshinde is planning to add this feature.
@guangxuli -- is this something that you can take a look at implementing? This would help enhance virtcontainers greatly and we'd appreciate your contribution here. @mcastelino @amshinde and I can help review and guide as needed. WDYT?
@guangxuli it'd be so great if you could contribute such a feature. From a high-level perspective, I think we need to extend the API with:

```go
func AddNetwork(podID string) error {
	if podID == "" {
		return errNeedPodID
	}

	lockFile, err := rwLockPod(podID)
	if err != nil {
		return err
	}
	defer unlockPod(lockFile)

	p, err := fetchPod(podID)
	if err != nil {
		return err
	}

	// Add the new network
	return p.addNetwork()
}
```

And you will need to appropriately implement:

```go
func (p *Pod) addNetwork() error {
	...
}
```

so that it calls into the hypervisor interface to hotplug the new network interface into the existing network namespace.
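As a rough sketch of how `addNetwork` might delegate to the hypervisor, the snippet below uses illustrative names only (`NetInterface`, `hotplugNetDevice`, `fakeHV` are all hypothetical, not the actual virtcontainers API):

```go
package main

import "fmt"

// NetInterface describes a network device to hot-plug
// (illustrative type, not the real virtcontainers definition).
type NetInterface struct {
	Name    string
	TapName string
	MacAddr string
}

// hypervisor is a hypothetical subset of the hypervisor interface,
// extended with a hot-plug method.
type hypervisor interface {
	hotplugNetDevice(iface NetInterface) error
}

// Pod is a simplified stand-in for the virtcontainers Pod type.
type Pod struct {
	hypervisor hypervisor
	ifaces     []NetInterface
}

// addNetwork asks the hypervisor to hot-plug each newly discovered
// interface and records it on the pod.
func (p *Pod) addNetwork(newIfaces []NetInterface) error {
	for _, iface := range newIfaces {
		if err := p.hypervisor.hotplugNetDevice(iface); err != nil {
			return fmt.Errorf("hotplug %s: %v", iface.Name, err)
		}
		p.ifaces = append(p.ifaces, iface)
	}
	return nil
}

// fakeHV is a trivial hypervisor used only to demonstrate the flow.
type fakeHV struct{ calls int }

func (f *fakeHV) hotplugNetDevice(iface NetInterface) error {
	f.calls++
	return nil
}

func main() {
	hv := &fakeHV{}
	p := &Pod{hypervisor: hv}
	if err := p.addNetwork([]NetInterface{{Name: "eth1", TapName: "tap1"}}); err != nil {
		fmt.Println("hotplug failed:", err)
		return
	}
	fmt.Printf("pod now has %d interface(s)\n", len(p.ifaces))
}
```

The key design point is that `addNetwork` only orchestrates; the device-specific work stays behind the hypervisor interface, so other hypervisors can implement hotplug their own way.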
After taking a closer look at this and discussing this with @sboeuf, I think the following design could be a possible solution:
@sameo @mcastelino What do you think of the above approach?
Thanks for summarizing our discussion here @amshinde.
As a side note, we need to eventually move towards hot plugging all network devices including at startup : #665 |
@amshinde We need the ns monitoring to handle "connect", right? We can still move to a hotplug model for the actual network connection first. This may help parallelize some of the flow in the runtime.
@mcastelino It'd be great to have hotplug used also for the simple case of adding our first interface when creating the pod, but this is a separate issue IMO. The problem with adding additional network interfaces is that this does not come from a direct call through the runtime. There is no command that translates from `docker network connect` down to the runtime.
@mcastelino @egernst @sboeuf @amshinde thanks for all your responses. Okay, we will dig into the details of the shim/runtime implementation in a few days. TBH, we aren't familiar with the code details yet :) but even so we will do our best to drive this feature and participate in the implementation.
@sboeuf yes, the single biggest challenge will be the fact that `docker network connect` does not have an associated OCI lifecycle event, hence we will need an active component to monitor the network connection. However, as this is a pod-level event, should the shim be the one monitoring it? If the shim is the one monitoring it, then this becomes very Docker-centric. Ideally the "sandbox" should be the entity monitoring for pod-level events.
@mcastelino We can create the shim process with a parameter that tells it whether to monitor the network connection or not. Only the shim that belongs to the first container of the pod would monitor the network connection.
@mcastelino you're right, I forgot that we're falling into a container semantic instead of a pod semantic here. This will work well for the Docker case, but it would not work for the k8s case. Seems like we need a dedicated watcher (a separate process) to monitor the network namespace at the pod level.
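A minimal sketch of what such a pod-level watcher could do, assuming a simple polling strategy over the interfaces visible in the current network namespace (a real implementation would subscribe to rtnetlink events instead of polling; all names here are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// diffIfaces returns interface names present in cur but not in prev.
func diffIfaces(prev, cur []string) []string {
	seen := make(map[string]bool, len(prev))
	for _, n := range prev {
		seen[n] = true
	}
	var added []string
	for _, n := range cur {
		if !seen[n] {
			added = append(added, n)
		}
	}
	return added
}

// ifaceNames lists the interface names visible in the current network
// namespace; the watcher would run inside the pod's netns.
func ifaceNames() ([]string, error) {
	ifs, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(ifs))
	for _, i := range ifs {
		names = append(names, i.Name)
	}
	return names, nil
}

// watch polls for newly added interfaces and sends their names on ch,
// until stop is closed.
func watch(interval time.Duration, ch chan<- string, stop <-chan struct{}) {
	prev, _ := ifaceNames()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			cur, err := ifaceNames()
			if err != nil {
				continue
			}
			for _, name := range diffIfaces(prev, cur) {
				ch <- name
			}
			prev = cur
		}
	}
}

func main() {
	// Demonstrate the diff step the watcher relies on.
	added := diffIfaces([]string{"lo", "eth0"}, []string{"lo", "eth0", "eth1"})
	fmt.Println("added:", added)
}
```

On detection, the watcher would trigger the `AddNetwork`-style API discussed above so the new interface gets hot-plugged into the guest.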
This issue was moved to kata-containers/runtime#161
Currently the network interfaces are auto-detected at container/POD launch. This does not allow use cases like `docker network connect`, where network interfaces can be dynamically added to the container. This is currently the only way to support multiple network interfaces with a Docker container (outside of swarm).
Also, firewall rules and routes may be added to the container post-creation.
Note: The runtime is a passive component. Hence hotplug needs to be implemented by an active component running in the network namespace of the container/POD.
To implement network hotplug support, the active component needs to handle:
- interface creation/configuration
- route add/delete
- firewall rules setup

Hence the shim will need an interface into the CC proxy to send the updated configuration.
Note: This is different from today, where all setup goes through the runtime -> proxy -> hyperstart path.
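To make the "updated configuration" idea concrete, here is one possible shape for the message the shim could send to the proxy. The types, field names, and JSON layout below are purely hypothetical; the actual proxy/hyperstart protocol is not defined in this issue:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Route describes a route to add inside the guest (hypothetical schema).
type Route struct {
	Dest    string `json:"dest"`
	Gateway string `json:"gateway"`
	Device  string `json:"device"`
}

// IfaceUpdate describes a hot-added interface and its routes
// (hypothetical schema).
type IfaceUpdate struct {
	Name    string   `json:"name"`
	MacAddr string   `json:"mac_addr"`
	IPAddrs []string `json:"ip_addrs"`
	Routes  []Route  `json:"routes"`
}

// encodeUpdate serializes the update for transport to the proxy.
func encodeUpdate(u IfaceUpdate) ([]byte, error) {
	return json.Marshal(u)
}

func main() {
	u := IfaceUpdate{
		Name:    "eth1",
		MacAddr: "02:00:00:00:00:01",
		IPAddrs: []string{"172.18.0.2/16"},
		Routes:  []Route{{Dest: "0.0.0.0/0", Gateway: "172.18.0.1", Device: "eth1"}},
	}
	b, err := encodeUpdate(u)
	if err != nil {
		fmt.Println("encode failed:", err)
		return
	}
	fmt.Println(string(b))
}
```

Whatever the wire format ends up being, the point is that interface, route, and firewall updates all travel over the same shim-to-proxy channel rather than the runtime path used at startup.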
The first step in this implementation would be not to add dynamic network hotplug itself, but to perform all device attaches to QEMU via QMP. This will allow any device to be hotplugged in the future.
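Attaching a network device over QMP takes two commands: `netdev_add` to create the host-side backend, then `device_add` to plug the guest-visible virtio-net PCI device that references it. The sketch below only builds the JSON command payloads; actually sending them over the QMP socket (and the capabilities handshake) is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// qmpCommand is the generic QMP wire format:
// {"execute": "...", "arguments": {...}}.
type qmpCommand struct {
	Execute   string      `json:"execute"`
	Arguments interface{} `json:"arguments,omitempty"`
}

// buildNetHotplug returns the two QMP commands that attach a tap-backed
// virtio-net device to a running QEMU instance.
func buildNetHotplug(netdevID, ifname string) ([]string, error) {
	cmds := []qmpCommand{
		{
			Execute: "netdev_add",
			Arguments: map[string]interface{}{
				"type":   "tap",
				"id":     netdevID,
				"ifname": ifname,
			},
		},
		{
			Execute: "device_add",
			Arguments: map[string]interface{}{
				"driver": "virtio-net-pci",
				"netdev": netdevID,
				"id":     "virtio-" + netdevID,
			},
		},
	}
	var out []string
	for _, c := range cmds {
		b, err := json.Marshal(c)
		if err != nil {
			return nil, err
		}
		out = append(out, string(b))
	}
	return out, nil
}

func main() {
	cmds, _ := buildNetHotplug("net1", "tap1")
	for _, c := range cmds {
		fmt.Println(c)
	}
}
```

Driving all attaches through this path at startup as well means hot-add later is just a repeat of the same two commands with a new backend ID.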