
[question] How do packets arrive into the application containers? #1742

Open
mrafatpanah opened this issue Jul 11, 2024 · 3 comments

mrafatpanah commented Jul 11, 2024

Hello. I'm new to using SONiC.
I have read many documents, such as the Architecture, Configuration, and Design specs, but I don't clearly understand how a packet arrives in application containers such as BGP or LDP.
What happens from the moment a packet enters an interface until it leaves the switch?
I also don't understand how the SDK and the kernel interact. For example, when a new route arrives on one of the interfaces, how is it converted to a kernel netlink message?

@mrafatpanah mrafatpanah changed the title [question] How packets arrives into the application containers? [question] How do packets arrive into the application containers? Jul 11, 2024
@nazariig (Collaborator) commented:

Hi, @mrafatpanah

Regarding your questions:

  1. I don't clearly understand how a packet arrives in application containers such as BGP or LDP

By default, a Docker container gets its own network namespace, isolated from the host, so host interfaces can't easily be accessed from inside it.
In SONiC, the networking containers (bgp, lldp, etc.) share the host's network namespace, so all interfaces are accessible from within them. With this feature, you can operate on netdevs in the same way as you would on the host:

admin@sonic:~$ docker inspect bgp | jq .[].HostConfig.NetworkMode
"host"
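The same check as the `docker inspect | jq` one-liner above can be done programmatically. This is a hedged sketch, not SONiC tooling: on the switch you would feed it the real output of `docker inspect bgp`, while here a canned sample (mirroring the structure Docker actually emits) is parsed so the snippet can run off-box.

```python
import json

# Canned stand-in for `docker inspect bgp` output; the real command
# returns a JSON array with one object per inspected container.
sample_inspect_output = json.dumps([{"HostConfig": {"NetworkMode": "host"}}])

data = json.loads(sample_inspect_output)
network_mode = data[0]["HostConfig"]["NetworkMode"]
print(network_mode)  # host
```

A value of "host" confirms the container shares the host's network namespace rather than getting an isolated one.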
  2. What happens when a packet enters an interface until it leaves the switch?

The packet is processed by the IP stack in the kernel until a forwarding decision is taken.

Usually the flow is:

  1. Packet is received at the physical port
  2. Packet is trapped to the CPU
  3. Packet is placed on the relevant netdev in the kernel
  4. Packet is processed by the IP stack
  5. Forwarding decision is taken by the kernel (e.g., answering a ping request)
  6. Packet is sent over the netdev
  7. Packet is sent out from the physical port

For packets not intended for the control plane, the forwarding decision is made exclusively by the data plane: the packet is processed only by the ASIC.

Note: netdevs are created in the kernel by the SAI/SDK and represent the relevant physical interfaces of the switch.
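The kernel-side work in steps 4-5 above boils down to parsing the packet's headers and acting on the fields. As a rough, self-contained illustration (this is not SONiC code, and all field values here are made up), the sketch below packs a minimal IPv4 header for an ICMP ping and decodes it the way the IP stack's parser would:

```python
import struct

# IPv4 header fields in RFC 791 order: version/IHL, TOS, total length,
# ID, flags/fragment offset, TTL, protocol, checksum, source, destination.
ipv4_header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,          # version 4, IHL 5 (20-byte header)
    0,                     # TOS
    28,                    # total length: 20B header + 8B ICMP
    0x1234,                # identification (arbitrary)
    0,                     # flags / fragment offset
    64,                    # TTL
    1,                     # protocol 1 = ICMP (i.e., a ping)
    0,                     # checksum (left zero in this sketch)
    bytes([10, 0, 0, 1]),  # source 10.0.0.1
    bytes([10, 0, 0, 2]),  # destination 10.0.0.2
)

# Decoding, as the kernel does before deciding to forward or deliver locally:
version_ihl, _, total_len, _, _, ttl, proto, _, src, dst = struct.unpack(
    "!BBHHHBBH4s4s", ipv4_header
)
print("version:", version_ihl >> 4)    # 4
print("protocol:", proto)              # 1 (ICMP)
print("dst:", ".".join(map(str, dst))) # 10.0.0.2
```

If the destination address is local to the switch (as for a ping to the switch itself), the kernel answers; otherwise the forwarding decision applies.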

  3. When a new route arrives on one of the interfaces, how is it converted to a kernel netlink message?

Usually the flow is:

  1. Route messages are delivered over the BGP protocol to the FRR routing stack (bgpd, zebra, etc.)
  2. FRR generates a netlink message - RTM_NEWROUTE
  3. fpmsyncd listens for netlink messages and, once one is received, generates an event for routeorch
  4. routeorch programs the routes via sairedis to syncd
  5. syncd programs the routes via the SAI/SDK to HW
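The RTM_NEWROUTE message in step 2 is just a binary rtnetlink structure. The sketch below is a hedged illustration, not fpmsyncd's actual code: it packs a minimal RTM_NEWROUTE for the made-up route 192.0.2.0/24 and decodes it back, the way a netlink listener would (constants are from `<linux/rtnetlink.h>`; native byte order, as the kernel uses).

```python
import struct

RTM_NEWROUTE = 24  # from <linux/rtnetlink.h>
RTA_DST = 1        # route attribute: destination prefix
AF_INET = 2

prefix = bytes([192, 0, 2, 0])  # 192.0.2.0 (example route)
prefix_len = 24

# struct rtattr carrying the destination prefix: len, type, payload
rta = struct.pack("=HH", 4 + len(prefix), RTA_DST) + prefix
# struct rtmsg: family, dst_len, src_len, tos, table, protocol, scope, type, flags
rtm = struct.pack("=BBBBBBBBI", AF_INET, prefix_len, 0, 0, 254, 0, 0, 1, 0)
# struct nlmsghdr: total length, message type, flags, sequence, pid
nlh = struct.pack("=IHHII", 16 + len(rtm) + len(rta), RTM_NEWROUTE, 0, 1, 0)
msg = nlh + rtm + rta

# A listener such as fpmsyncd would decode it roughly like this:
msg_len, msg_type, _, _, _ = struct.unpack("=IHHII", msg[:16])
family, dst_len = struct.unpack("=BB", msg[16:18])
rta_len, rta_type = struct.unpack("=HH", msg[28:32])
dst = msg[32:32 + rta_len - 4]

print(msg_type == RTM_NEWROUTE)              # True
print(".".join(map(str, dst)), "/", dst_len) # 192.0.2.0 / 24
```

A real message usually carries more attributes (gateway, output interface, etc.), but the header-plus-attributes layout is the same.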

@nazariig (Collaborator) commented:

@mrafatpanah please ask these kinds of questions in sonic-buildimage

@harikrishnamsys commented:

@mrafatpanah

Regarding your questions:

  • I don't clearly understand how a packet arrives in application containers such as BGP or LDP

In a typical Docker setup, containers are isolated in their own network namespaces, which means they don’t have direct access to the host's network interfaces. However, in SONiC, the network containers like BGP, LLDP, and others operate differently. These containers share the host’s network namespace, allowing them to access and manage all the host's network interfaces directly.

This shared namespace feature enables network containers to interact with the host’s interfaces (netdevs) in the same way as if the operations were being performed directly on the host itself. As a result, you can configure and manage network devices from within the container as easily as you would on the host machine, without the need for additional networking configurations like port mappings or virtual bridges.

In essence, SONiC removes the network isolation typically provided by Docker to ensure seamless access between the host and container for managing network functions, making network operations more straightforward and consistent across both the host and containers.
