English | 简体中文
Pods can have one or more underlay CNI interfaces, enabling them to connect to underlay networks. Please refer to the underlay network case for more details.
Please refer to the following examples for installation:
In VM and public cloud environments, Spiderpool can be used to directly access the underlay network of the VPC:
- Create a cluster on Alibaba Cloud with ipvlan-based networking
- For the VMware vSphere platform, run the ipvlan CNI without enabling promiscuous mode on the vSwitch, preserving forwarding performance. Refer to the installation guide for details.
Pods can own one overlay CNI interface and multiple underlay CNI interfaces provided by Spiderpool, enabling them to connect to both overlay and underlay networks. Please refer to the dual CNIs case for more details.
Please refer to the following examples for installation:
AI clusters typically use multi-path RDMA networks to provide communication for GPUs. Spiderpool can enable RDMA communication capabilities for containers.
During the Spiderpool installation, you can choose a method for TLS certificate generation. For more information, refer to the article.
For instructions on how to uninstall Spiderpool, please refer to the uninstall guide.
For instructions on how to upgrade Spiderpool, please refer to the upgrade guide.
- Applications can share an IP pool. See the example for reference.
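  A minimal sketch of such a shared pool, using Spiderpool's `spiderpool.spidernet.io/v2beta1` SpiderIPPool CRD (the pool name and addresses here are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderIPPool
  metadata:
    name: shared-pool
  spec:
    subnet: 172.18.41.0/24
    ips:
      - 172.18.41.40-172.18.41.60
  ```

  Any number of workloads can then select the same pool through the `ipam.spidernet.io/ippool` Pod annotation.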
- Stateless applications can have a dedicated IP pool with a fixed IP usage range covering all Pods. Refer to the example for more details.
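  A sketch of how a Deployment pins its Pods to a dedicated pool via the `ipam.spidernet.io/ippool` annotation (pool and image names are illustrative):

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demo-app
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: demo-app
    template:
      metadata:
        labels:
          app: demo-app
        annotations:
          # All replicas draw their IPs from this dedicated pool.
          ipam.spidernet.io/ippool: |-
            {"ipv4": ["demo-app-pool"]}
      spec:
        containers:
          - name: demo
            image: nginx
  ```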
- For stateful applications, each Pod can be allocated a persistent fixed IP address, and the IP range used by all Pods can be controlled during scaling operations. Refer to the example for details.
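  For StatefulSets the same annotation applies, and Spiderpool keeps each replica's IP stable across restarts. A sketch (names are illustrative):

  ```yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: demo-sts
  spec:
    serviceName: demo-sts
    replicas: 2
    selector:
      matchLabels:
        app: demo-sts
    template:
      metadata:
        labels:
          app: demo-sts
        annotations:
          # Each replica is bound to a fixed IP from this pool
          # and keeps it across Pod restarts.
          ipam.spidernet.io/ippool: |-
            {"ipv4": ["sts-pool"]}
      spec:
        containers:
          - name: demo
            image: nginx
  ```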
- Underlay networking is supported for KubeVirt, allowing fixed IP addresses for virtual machines. Refer to the example for details.
- Applications deployed across subnets can have each replica assigned an IP address from a different subnet. Refer to the example for details.
- The Subnet feature separates responsibilities between infrastructure administrators and application administrators. It automates IP pool management for applications with fixed IP requirements, enabling automatic creation, scaling, and deletion of fixed-IP pools, which greatly reduces operational burden. See the example for practical use cases. In addition to native Kubernetes application controllers, the Subnet feature also works with third-party application controllers implemented with operators. Refer to the example for more details.
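  A sketch of a SpiderSubnet from which fixed-IP pools are created automatically (the name and addresses are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderSubnet
  metadata:
    name: subnet-demo
  spec:
    subnet: 172.18.42.0/24
    ips:
      - 172.18.42.40-172.18.42.100
  ```

  A workload can then request an automatically managed pool from this subnet with the `ipam.spidernet.io/subnet: '{"ipv4": ["subnet-demo"]}'` annotation, optionally sizing it with `ipam.spidernet.io/ippool-ip-number` (for example `"+1"` for one spare IP beyond the replica count).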
- Default IP pools can be set at either the cluster level or the tenant level. IP pools can be shared across the entire cluster or restricted to specific tenants. Check out the example for details.
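  A sketch of a tenant-scoped default pool, assuming the `default` and `namespaceAffinity` fields of the v2beta1 SpiderIPPool CRD (names and addresses are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderIPPool
  metadata:
    name: tenant-default-pool
  spec:
    subnet: 172.18.43.0/24
    ips:
      - 172.18.43.2-172.18.43.254
    # Used when a Pod does not select a pool explicitly.
    default: true
    # Restrict the pool to namespaces carrying this label.
    namespaceAffinity:
      matchLabels:
        tenant: team-a
  ```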
- IP pools based on node topology cater to fine-grained subnet planning requirements for each node. Refer to the example for details.
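  A sketch of a node-scoped pool using `nodeAffinity` (the hostname and addresses are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderIPPool
  metadata:
    name: worker1-pool
  spec:
    subnet: 172.18.44.0/24
    ips:
      - 172.18.44.10-172.18.44.50
    # Only Pods scheduled onto the matching node allocate from this pool.
    nodeAffinity:
      matchLabels:
        kubernetes.io/hostname: worker1
  ```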
- Custom routes can be configured through IP pools, Pod annotations, and other methods. Refer to the example for details.
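  A sketch of pool-level routes via the `routes` field (destination and gateway are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderIPPool
  metadata:
    name: route-pool
  spec:
    subnet: 172.18.45.0/24
    ips:
      - 172.18.45.10-172.18.45.50
    gateway: 172.18.45.1
    # Extra routes installed in every Pod that allocates from this pool.
    routes:
      - dst: 10.10.0.0/16
        gw: 172.18.45.1
  ```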
- Applications can be configured with multiple IP pools to provide redundancy for IP resources. Refer to the example for details.
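  A sketch of the annotation listing fallback pools in order; when the first pool is exhausted, the next candidate is tried (pool names are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-pod
    annotations:
      # Ordered candidates: "backup-pool" is used once "primary-pool" runs out.
      ipam.spidernet.io/ippool: |-
        {"ipv4": ["primary-pool", "backup-pool"]}
  spec:
    containers:
      - name: demo
        image: nginx
  ```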
- Global reserved IP addresses can be specified to prevent IPAM from allocating those addresses, thereby avoiding conflicts with externally used IPs. Refer to the example for details.
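  A sketch of a SpiderReservedIP excluding a range from allocation (the range is hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderReservedIP
  metadata:
    name: reserved-demo
  spec:
    # IPAM will never hand out these addresses, even if they
    # fall inside an existing SpiderIPPool.
    ips:
      - 172.18.41.200-172.18.41.210
  ```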
- Efficient performance in IP address allocation and release is ensured. Refer to the report for details.
- A well-designed IP reclamation mechanism releases leaked IP addresses promptly, so they can be reallocated during cluster or application recovery. Refer to the example for details.
- Spiderpool can assign IP addresses from different subnets to the multiple network interfaces of a Pod. It coordinates policy routing among all interfaces to guarantee consistent data paths for outgoing and incoming requests, mitigating packet loss, and it allows the default route to be customized through a specific interface's gateway. See the sketch below.
  - For Pods with multiple underlay CNI network interfaces, refer to the example.
  - For Pods with one overlay interface and multiple underlay interfaces, refer to the example.
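  A sketch of the multi-NIC annotation `ipam.spidernet.io/ippools`, one entry per interface (interface and pool names are hypothetical):

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: multi-nic-pod
    annotations:
      # Each interface draws its IP from a pool in a different subnet;
      # Spiderpool tunes policy routing across the interfaces.
      ipam.spidernet.io/ippools: |-
        [{"interface": "eth0", "ipv4": ["subnet1-pool"]},
         {"interface": "net1", "ipv4": ["subnet2-pool"]}]
  spec:
    containers:
      - name: demo
        image: nginx
  ```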
- Shared and exclusive modes for RDMA network cards enable applications to use RDMA communication devices via the macvlan, ipvlan, and SR-IOV CNIs. For more details, see the SR-IOV example and the Macvlan example.
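  A sketch of enabling shared-mode RDMA for a macvlan-based configuration, assuming an `enableRdma` switch in the SpiderMultusConfig CRD (this field name is an assumption; consult the linked examples for the authoritative schema):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderMultusConfig
  metadata:
    name: macvlan-rdma
    namespace: kube-system
  spec:
    cniType: macvlan
    macvlan:
      master:
        - eth0
      # Assumption: exposes the master NIC's RDMA device to Pods
      # in shared mode.
      enableRdma: true
  ```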
- The coordinator plugin can reconfigure a Pod's MAC address based on the IP address of the network interface, ensuring a one-to-one correspondence between them. This removes the need to update ARP forwarding rules on network switches and routers, eliminating packet loss. Read the article for further information.
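  A sketch, assuming the coordinator's `podMACPrefix` option as exposed through the SpiderMultusConfig CRD (the prefix value is illustrative):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderMultusConfig
  metadata:
    name: macvlan-fixed-mac
    namespace: kube-system
  spec:
    cniType: macvlan
    macvlan:
      master:
        - eth0
    coordinator:
      # Pod MACs are derived from this prefix plus the interface's IP,
      # keeping the MAC/IP pairing stable.
      podMACPrefix: "0a:1b"
  ```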
- Spiderpool enables access to ClusterIP services through kube-proxy or an eBPF kube-proxy replacement for plugins such as the Macvlan, vlan, ipvlan, SR-IOV, and OVS CNIs. It also allows seamless communication between Pods and the host machine, resolving Pod health-check issues. Refer to the example for details.
- Spiderpool assists in IP address conflict detection and gateway reachability checks, ensuring uninterrupted Pod communication. Refer to the example for details.
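  A sketch, assuming the coordinator's `detectIPConflict` and `detectGateway` options in SpiderMultusConfig:

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderMultusConfig
  metadata:
    name: macvlan-checked
    namespace: kube-system
  spec:
    cniType: macvlan
    macvlan:
      master:
        - eth0
    coordinator:
      # Probe for duplicate IPs and gateway reachability before
      # the Pod network is reported ready.
      detectIPConflict: true
      detectGateway: true
  ```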
- Multi-cluster networks can be connected through a shared underlay network or through Submariner.
- Spiderpool dynamically creates BOND interfaces and VLAN sub-interfaces on the host machine during Pod startup, helping set up master interfaces for the Macvlan and ipvlan CNIs. Check out the example for implementation details.
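  A sketch, assuming the `bond` and `vlanID` sub-fields of SpiderMultusConfig's macvlan section (interface names are hypothetical):

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderMultusConfig
  metadata:
    name: macvlan-bond-vlan
    namespace: kube-system
  spec:
    cniType: macvlan
    macvlan:
      # Slave interfaces to aggregate; the bond and the VLAN
      # sub-interface are created on the host at Pod startup.
      master:
        - ens1
        - ens2
      vlanID: 100
      bond:
        name: bond0
        mode: 1
  ```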
- Multus NetworkAttachmentDefinition instances with well-tuned CNI configurations can be generated conveniently; Spiderpool ensures correct JSON formatting to improve the user experience. See the example for details.
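  A minimal sketch: from a CR like the one below, Spiderpool renders a Multus NetworkAttachmentDefinition of the same name and namespace, sparing users from hand-writing CNI JSON:

  ```yaml
  apiVersion: spiderpool.spidernet.io/v2beta1
  kind: SpiderMultusConfig
  metadata:
    name: ipvlan-conf
    namespace: kube-system
  spec:
    cniType: ipvlan
    ipvlan:
      master:
        - eth0
  ```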