
Address comments
Address comments and add more validation UT cases.

Signed-off-by: wgrayson <[email protected]>
GraysonWu committed Aug 2, 2022
1 parent 8320584 commit 090b3a6
Showing 9 changed files with 277 additions and 98 deletions.
8 changes: 4 additions & 4 deletions build/yamls/antrea.yml
@@ -2767,7 +2767,7 @@ data:
# Provide the address of Kubernetes apiserver, to override any value provided in kubeconfig or InClusterConfig.
# Defaults to "". It must be a host string, a host:port pair, or a URL to the base of the apiserver.
-kubeAPIServerOverride: ""
+kubeAPIServerOverride: "https://192.168.77.100:6443"
# Provide the address of DNS server, to override the kube-dns service. It's used to resolve hostname in FQDN policy.
# Defaults to "". It must be a host string or a host:port pair of the DNS server (e.g. 10.96.0.10, 10.96.0.10:53,
@@ -2817,7 +2817,7 @@ data:
# feature to be enabled.
# Note that this option is experimental. If kube-proxy is removed, option kubeAPIServerOverride must be used to access
# apiserver directly.
-proxyAll: false
+proxyAll: true
# A string array of values which specifies the host IPv4/IPv6 addresses for NodePort. Values can be valid IP blocks.
# (e.g. 1.2.3.0/24, 1.2.3.4/32). An empty string slice is meant to select all host IPv4/IPv6 addresses.
# Note that the option is only valid when proxyAll is true.
@@ -3712,7 +3712,7 @@ spec:
kubectl.kubernetes.io/default-container: antrea-agent
# Automatically restart Pods with a RollingUpdate if the ConfigMap changes
# See https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
-checksum/config: 0814cc9f3baa94e76e83a108b04d05200485610c7f5950c584503af7151a9e86
+checksum/config: 5204c8793a312441190994144e04cce286931362a4d92d3d692a945ad333fb65
labels:
app: antrea
component: antrea-agent
@@ -3952,7 +3952,7 @@ spec:
annotations:
# Automatically restart Pod if the ConfigMap changes
# See https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
-checksum/config: 0814cc9f3baa94e76e83a108b04d05200485610c7f5950c584503af7151a9e86
+checksum/config: 5204c8793a312441190994144e04cce286931362a4d92d3d692a945ad333fb65
labels:
app: antrea
component: antrea-controller
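The two agent options changed above interact: the config comments note that when kube-proxy is removed and `proxyAll` is enabled, `kubeAPIServerOverride` must point directly at an apiserver endpoint. A minimal excerpt of the resulting antrea-agent.conf, as a sketch (nesting simplified; the address is this diff's test value):

```yaml
# Sketch of the two settings changed in this diff (nesting simplified,
# see the full ConfigMap above; the address is this diff's test value).
kubeAPIServerOverride: "https://192.168.77.100:6443"
proxyAll: true
```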
31 changes: 15 additions & 16 deletions docs/antrea-network-policy.md
@@ -39,7 +39,7 @@
- [Node Selector](#node-selector)
- [toServices egress rules](#toservices-egress-rules)
- [ServiceAccount based selection](#serviceaccount-based-selection)
-- [ACNP appliedTo NodePort Service](#acnp-appliedto-nodeport-service)
+- [Apply to NodePort Service](#apply-to-nodeport-service)
- [ClusterGroup](#clustergroup)
- [ClusterGroup CRD](#clustergroup-crd)
- [kubectl commands for ClusterGroup](#kubectl-commands-for-clustergroup)
@@ -484,9 +484,9 @@ Specific Pods from specific Namespaces can be selected by providing both a
The `appliedTo` field can also reference a ClusterGroup resource by setting
the ClusterGroup's name in `group` field in place of the stand-alone selectors.
The `appliedTo` field can also reference a Service by setting the Service's name
-and namespace in `service` field in place of the stand-alone selectors. Only a
+and Namespace in `service` field in place of the stand-alone selectors. Only a
NodePort Service can be referred by this field. More details can be found in the
-[ACNPAppliedToNodePortService](#acnp-appliedto-nodeport-service) section.
+[ApplyToNodePortService](#apply-to-nodeport-service) section.
IPBlock cannot be set in the `appliedTo` field.
An IPBlock ClusterGroup referenced in an `appliedTo` field will be ignored,
and the policy will have no effect.
@@ -1293,30 +1293,29 @@ spec:

In this example, the policy will be applied to all Pods whose ServiceAccount is `sa-1` of `ns-1`.
Let's call those Pods "appliedToPods".
-The egress `to` section will select all Pods whose ServiceAccount is in `ns-2` namespace and name as `sa-2`.
+The egress `to` section will select all Pods whose ServiceAccount is in `ns-2` Namespace and name as `sa-2`.
Let's call those Pods "egressPods".
After this policy is applied, traffic from "appliedToPods" to "egressPods" will be dropped.

Note: Antrea will use a reserved label key for internal processing `serviceAccount`.
The reserved label looks like: `internal.antrea.io/service-account:[ServiceAccountName]`. Users should avoid using
this label key in any entities no matter if a policy with `serviceAccount` is applied in the cluster.

-### ACNP appliedTo NodePort Service
+### Apply to NodePort Service

-Antrea ClusterNetworkPolicy features a `service` field in `appliedTo` field to enable the ACNP could be enforced
-on the traffic from external client to a NodePort Service.
+Antrea ClusterNetworkPolicy features a `service` field in `appliedTo` field to enforce the ACNP rules on the
+traffic from external clients to a NodePort Service.

-`service` uses `namespace` and `name` to select the Service with a specific name under a specific namespace and
+`service` uses `namespace` and `name` to select the Service with a specific name under a specific Namespace;
only a NodePort Service can be referred by `service` field.

-`service` field cannot be used with any other fields and a policy or a rule can't be applied to NodePort Service
-and other peers at the same time.
+There are a few **restrictions** on configuring a policy/rule that applies to NodePort Services:

-Since `service` field is used to control the external access of a NodePort Service, then

-1. If a `appliedTo` with `service` is used at policy level, then this policy can only contain ingress rules.
-2. If a `appliedTo` with `service` is used at rule level, then this rule can only be an ingress rule.
-3. If an ingress rule is applied to a NodePort Service, then this ingress can only use `ipBlock` in its `from` field.
+1. `service` field cannot be used with any other fields in `appliedTo`.
+2. a policy or a rule can't be applied to both a NodePort Service and other entities at the same time.
+3. If a `appliedTo` with `service` is used at policy level, then this policy can only contain ingress rules.
+4. If a `appliedTo` with `service` is used at rule level, then this rule can only be an ingress rule.
+5. If an ingress rule is applied to a NodePort Service, then this rule can only use `ipBlock` in its `from` field.

An example policy using `service` in `appliedTo` could look like this:

@@ -1339,7 +1338,7 @@ spec:
cidr: 1.1.1.0/24
```

-In this example, the policy will be applied to the NodePort Service `svc-1` in Namespace `ns-1`
+In this example, the policy will be applied to the NodePort Service `svc-1` in Namespace `ns-1`,
and drop all packets from CIDR `1.1.1.0/24`.
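Since the diff shows only the tail of the example manifest, here is a sketch of the complete policy it describes, assuming the `crd.antrea.io/v1alpha1` ACNP schema; the policy name and priority are illustrative assumptions, while the Service reference and CIDR come from the text above:

```yaml
# Sketch only: metadata.name and priority are assumed for illustration.
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-drop-external-access-to-nodeport-svc
spec:
  priority: 5
  appliedTo:
    - service:        # applies the policy to a NodePort Service
        name: svc-1
        namespace: ns-1
  ingress:
    - action: Drop    # drops all packets from the CIDR below
      from:
        - ipBlock:
            cidr: 1.1.1.0/24
```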

## ClusterGroup
@@ -674,7 +674,7 @@ func TestValidate(t *testing.T) {
groups := v1beta2.GroupMemberSet{}
groupAddress1, groupAddress2 := "225.1.2.3", "225.1.2.4"

groups["ns1/pod1"] = newAppliedToGroupMemberPod("pod1", "ns1")
groups["Pod:ns1/pod1"] = newAppliedToGroupMemberPod("pod1", "ns1")
controller.ruleCache.appliedToSetByGroup["appliedToGroup01"] = groups
controller.ruleCache.rules.Add(rule1)
controller.ruleCache.rules.Add(rule2)
140 changes: 70 additions & 70 deletions pkg/agent/controller/networkpolicy/reconciler.go
@@ -162,24 +162,24 @@ type lastRealized struct {
// the fqdn selector of this policy rule. It must be empty for policy rule
// that is not egress and does not have toFQDN field.
fqdnIPAddresses sets.String
-// groupIDAddresses tracks the last realized set of groupIDs resolved for the
+// serviceGroupIDs tracks the last realized set of groupIDs resolved for the
// toServices of this policy rule or services of TargetMember of this policy rule.
// It must be empty for policy rule that is neither an egress rule with toServices
// field nor an ingress rule that is applied to Services.
-groupIDAddresses sets.Int64
+serviceGroupIDs sets.Int64
// groupAddresses track the latest realized set of multicast groups for the multicast traffic
groupAddresses sets.String
}

func newLastRealized(rule *CompletedRule) *lastRealized {
return &lastRealized{
-ofIDs: map[servicesKey]uint32{},
-CompletedRule: rule,
-podOFPorts: map[servicesKey]sets.Int32{},
-podIPs: nil,
-fqdnIPAddresses: nil,
-groupIDAddresses: nil,
-groupAddresses: nil,
+ofIDs: map[servicesKey]uint32{},
+CompletedRule: rule,
+podOFPorts: map[servicesKey]sets.Int32{},
+podIPs: nil,
+fqdnIPAddresses: nil,
+serviceGroupIDs: nil,
+groupAddresses: nil,
}
}

@@ -504,7 +504,7 @@ func (r *reconciler) add(rule *CompletedRule, ofPriority *uint16, table uint8) e
if r.fqdnController != nil {
lastRealized.fqdnIPAddresses = nil
}
-lastRealized.groupIDAddresses = nil
+lastRealized.serviceGroupIDs = nil
return err
}
// Record ofID only if its Openflow is installed successfully.
@@ -551,13 +551,9 @@ func (r *reconciler) computeOFRulesForAdd(rule *CompletedRule, ofPriority *uint1
for svcKey, members := range membersByServicesMap {
toAddresses := make([]types.Address, 0)
if isRuleAppliedToService {
-addressSet := sets.NewInt64()
-for _, member := range members.Items() {
-curAddresses, curAddressSet := r.svcRefToOFAddresses(*member.Service)
-toAddresses = append(toAddresses, curAddresses...)
-addressSet = addressSet.Union(curAddressSet)
-}
-lastRealized.groupIDAddresses = addressSet
+svcGroupIDs := r.getSvcGroupIDs(members)
+toAddresses = svcGroupIDsToOFAddresses(svcGroupIDs)
+lastRealized.serviceGroupIDs = svcGroupIDs
} else {
ofPorts := r.getOFPorts(members)
toAddresses = ofPortsToOFAddresses(ofPorts)
@@ -647,16 +643,11 @@ func (r *reconciler) computeOFRulesForAdd(rule *CompletedRule, ofPriority *uint1
lastRealized.fqdnIPAddresses = addressSet
}
if len(rule.To.ToServices) > 0 {
-var addresses []types.Address
-addressSet := sets.NewInt64()
-for _, svcRef := range rule.To.ToServices {
-curAddresses, curAddressSet := r.svcRefToOFAddresses(svcRef)
-addresses = append(addresses, curAddresses...)
-addressSet = addressSet.Union(curAddressSet)
-}
+svcGroupIDs := r.svcRefsToGroupIDs(rule.To.ToServices)
+addresses := svcGroupIDsToOFAddresses(svcGroupIDs)
ofRule.To = append(ofRule.To, addresses...)
// If the rule installation fails, this will be reset.
-lastRealized.groupIDAddresses = addressSet
+lastRealized.serviceGroupIDs = svcGroupIDs
}
}
}
@@ -762,18 +753,14 @@ func (r *reconciler) update(lastRealized *lastRealized, newRule *CompletedRule,
membersByServicesMap, servicesMap := groupMembersByServices(newRule.Services, newRule.TargetMembers)
for svcKey, members := range membersByServicesMap {
newOFPorts := r.getOFPorts(members)
-newGroupIDAddressSet := sets.NewInt64()
-ofID, exists := lastRealized.ofIDs[svcKey]
-toAddresses := make([]types.Address, 0)
+newGroupIDSet := r.getSvcGroupIDs(members)
+var toAddresses []types.Address
if isRuleAppliedToService {
-for _, member := range members.Items() {
-curAddresses, curAddressSet := r.svcRefToOFAddresses(*member.Service)
-toAddresses = append(toAddresses, curAddresses...)
-newGroupIDAddressSet = newGroupIDAddressSet.Union(curAddressSet)
-}
+toAddresses = svcGroupIDsToOFAddresses(newGroupIDSet)
} else {
toAddresses = ofPortsToOFAddresses(newOFPorts)
}
+ofID, exists := lastRealized.ofIDs[svcKey]
// Install a new Openflow rule if this group doesn't exist, otherwise do incremental update.
if !exists {
ofRule := &types.PolicyRule{
Expand All @@ -797,22 +784,21 @@ func (r *reconciler) update(lastRealized *lastRealized, newRule *CompletedRule,
}
lastRealized.ofIDs[svcKey] = ofRule.FlowID
} else {
-addedTo := ofPortsToOFAddresses(newOFPorts.Difference(lastRealized.podOFPorts[svcKey]))
-deletedTo := ofPortsToOFAddresses(lastRealized.podOFPorts[svcKey].Difference(newOFPorts))
+var addedTo, deletedTo []types.Address
if isRuleAppliedToService {
-originalGroupIDAddressSet := sets.NewInt64()
-if lastRealized.groupIDAddresses != nil {
-originalGroupIDAddressSet = lastRealized.groupIDAddresses
-}
-addedGroupIDAddress := newGroupIDAddressSet.Difference(originalGroupIDAddressSet)
-removedGroupIDAddress := originalGroupIDAddressSet.Difference(newGroupIDAddressSet)
-for a := range addedGroupIDAddress {
-addedTo = append(addedTo, openflow.NewServiceGroupIDAddress(binding.GroupIDType(a)))
-}
-for r := range removedGroupIDAddress {
-deletedTo = append(deletedTo, openflow.NewServiceGroupIDAddress(binding.GroupIDType(r)))
-}
-lastRealized.groupIDAddresses = newGroupIDAddressSet
+originalGroupIDSet := sets.NewInt64()
+if lastRealized.serviceGroupIDs != nil {
+originalGroupIDSet = lastRealized.serviceGroupIDs
+}
+addedTo = svcGroupIDsToOFAddresses(newGroupIDSet.Difference(originalGroupIDSet))
+deletedTo = svcGroupIDsToOFAddresses(originalGroupIDSet.Difference(newGroupIDSet))
+} else {
+originalOfPortsSet := sets.NewInt32()
+if lastRealized.podOFPorts[svcKey] != nil {
+originalOfPortsSet = lastRealized.podOFPorts[svcKey]
+}
+addedTo = ofPortsToOFAddresses(newOFPorts.Difference(originalOfPortsSet))
+deletedTo = ofPortsToOFAddresses(originalOfPortsSet.Difference(newOFPorts))
}
if err := r.updateOFRule(ofID, addedFrom, addedTo, deletedFrom, deletedTo, ofPriority); err != nil {
return err
@@ -821,6 +807,7 @@ func (r *reconciler) update(lastRealized *lastRealized, newRule *CompletedRule,
delete(staleOFIDs, svcKey)
}
lastRealized.podOFPorts[svcKey] = newOFPorts
+lastRealized.serviceGroupIDs = newGroupIDSet
}
} else {
if r.fqdnController != nil && len(newRule.To.FQDNs) > 0 {
@@ -894,23 +881,14 @@ func (r *reconciler) update(lastRealized *lastRealized, newRule *CompletedRule,
}
}
}
-originalGroupIDAddressSet, newGroupIDAddressSet := sets.NewInt64(), sets.NewInt64()
-if lastRealized.groupIDAddresses != nil {
-originalGroupIDAddressSet = lastRealized.groupIDAddresses
-}
+originalGroupIDSet, newGroupIDSet := sets.NewInt64(), sets.NewInt64()
if len(newRule.To.ToServices) > 0 {
-for _, svcRef := range newRule.To.ToServices {
-_, groupIDSets := r.svcRefToOFAddresses(svcRef)
-newGroupIDAddressSet = newGroupIDAddressSet.Union(groupIDSets)
-}
-addedGroupIDAddress := newGroupIDAddressSet.Difference(originalGroupIDAddressSet)
-removedGroupIDAddress := originalGroupIDAddressSet.Difference(newGroupIDAddressSet)
-for a := range addedGroupIDAddress {
-addedTo = append(addedTo, openflow.NewServiceGroupIDAddress(binding.GroupIDType(a)))
-}
-for r := range removedGroupIDAddress {
-deletedTo = append(deletedTo, openflow.NewServiceGroupIDAddress(binding.GroupIDType(r)))
-}
+if lastRealized.serviceGroupIDs != nil {
+originalGroupIDSet = lastRealized.serviceGroupIDs
+}
+newGroupIDSet = r.svcRefsToGroupIDs(newRule.To.ToServices)
+addedTo = svcGroupIDsToOFAddresses(newGroupIDSet.Difference(originalGroupIDSet))
+deletedTo = svcGroupIDsToOFAddresses(originalGroupIDSet.Difference(newGroupIDSet))
}
if err := r.updateOFRule(ofID, addedFrom, addedTo, deletedFrom, deletedTo, ofPriority); err != nil {
return err
@@ -920,7 +898,7 @@ func (r *reconciler) update(lastRealized *lastRealized, newRule *CompletedRule,
lastRealized.fqdnIPAddresses = newFQDNAddressSet
}
// Update the groupID address set if rule installation succeeds.
-lastRealized.groupIDAddresses = newGroupIDAddressSet
+lastRealized.serviceGroupIDs = newGroupIDSet
// Delete valid servicesKey from staleOFIDs.
delete(staleOFIDs, svcKey)
}
@@ -1094,6 +1072,19 @@ func (r *reconciler) getIPs(members v1beta2.GroupMemberSet) sets.String {
return ips
}

+func (r *reconciler) getSvcGroupIDs(members v1beta2.GroupMemberSet) sets.Int64 {
+var svcRefs []v1beta2.ServiceReference
+for _, m := range members {
+if m.Service != nil {
+svcRefs = append(svcRefs, v1beta2.ServiceReference{
+Name: m.Service.Name,
+Namespace: m.Service.Namespace,
+})
+}
+}
+return r.svcRefsToGroupIDs(svcRefs)
+}

// groupMembersByServices groups the provided groupMembers based on their services resolving result.
// A map of servicesHash to the grouped members and a map of servicesHash to the services resolving result will be returned.
func groupMembersByServices(services []v1beta2.Service, memberSet v1beta2.GroupMemberSet) (map[servicesKey]v1beta2.GroupMemberSet, map[servicesKey][]v1beta2.Service) {
@@ -1143,16 +1134,25 @@ func ofPortsToOFAddresses(ofPorts sets.Int32) []types.Address {
return addresses
}

-func (r *reconciler) svcRefToOFAddresses(svcRef v1beta2.ServiceReference) ([]types.Address, sets.Int64) {
-var addresses []types.Address
-addressSet := sets.NewInt64()
-for _, groupCounter := range r.groupCounters {
-for _, groupID := range groupCounter.GetAllGroupIDs(k8s.NamespacedName(svcRef.Namespace, svcRef.Name)) {
-addresses = append(addresses, openflow.NewServiceGroupIDAddress(groupID))
-addressSet.Insert(int64(groupID))
+func (r *reconciler) svcRefsToGroupIDs(svcRefs []v1beta2.ServiceReference) sets.Int64 {
+groupIDs := sets.NewInt64()
+for _, svcRef := range svcRefs {
+for _, groupCounter := range r.groupCounters {
+for _, groupID := range groupCounter.GetAllGroupIDs(k8s.NamespacedName(svcRef.Namespace, svcRef.Name)) {
+groupIDs.Insert(int64(groupID))
+}
}
}
-return addresses, addressSet
+return groupIDs
}

+func svcGroupIDsToOFAddresses(groupIDs sets.Int64) []types.Address {
+// Must not return nil as it means not restricted by addresses in Openflow implementation.
+addresses := make([]types.Address, 0, len(groupIDs))
+for _, groupID := range groupIDs.List() {
+addresses = append(addresses, openflow.NewServiceGroupIDAddress(binding.GroupIDType(groupID)))
+}
+return addresses
+}

func groupMembersToOFAddresses(groupMemberSet v1beta2.GroupMemberSet) []types.Address {
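Both paths that resolve Service group IDs in update() now share one incremental pattern: compute the new group-ID set, diff it against the last realized set, and convert only the delta to Openflow addresses via svcGroupIDsToOFAddresses. A standalone sketch of that set arithmetic, using the same apimachinery sets package (values are illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

func main() {
	// Last realized Service group IDs vs. the newly resolved ones.
	oldIDs := sets.NewInt64(100, 101)
	newIDs := sets.NewInt64(101, 102)

	// Only the delta is pushed to the Openflow rule: IDs that appeared
	// become added addresses, IDs that vanished become deleted ones.
	added := newIDs.Difference(oldIDs)   // contains 102
	deleted := oldIDs.Difference(newIDs) // contains 100

	fmt.Println(added.List(), deleted.List()) // [102] [100]
}
```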
2 changes: 1 addition & 1 deletion pkg/agent/controller/networkpolicy/reject.go
@@ -145,7 +145,7 @@ func (c *Controller) rejectRequest(pktIn *ofctrl.PacketIn) error {
return fmt.Errorf("error when generating reject response for the packet from: %s to %s: neither source nor destination are on this Node", dstIP, srcIP)
}
if packetOutType == RejectServiceRemoteToExternal {
dstMAC = "aa:bb:cc:dd:ee:ff"
dstMAC = openflow.GlobalVirtualMAC.String()
}
// When in AntreaIPAM mode, even though srcPod and dstPod are on the same Node, MAC
// will still be re-written in L3ForwardingTable. During rejection, the reject
9 changes: 8 additions & 1 deletion pkg/apis/controlplane/sets.go
@@ -30,21 +30,28 @@ type groupMemberKey string
type GroupMemberSet map[groupMemberKey]*GroupMember

// normalizeGroupMember calculates the groupMemberKey of the provided
-// GroupMember based on the Pod/ExternalEntity's namespaced name and IPs.
+// GroupMember based on the Pod/ExternalEntity/Service's namespaced name and IPs.
// For GroupMembers in appliedToGroups, the IPs are not set, so the
// generated key does not contain IP information.
func normalizeGroupMember(member *GroupMember) groupMemberKey {
// "/" is illegal in Namespace and name so is safe as the delimiter.
const delimiter = "/"
var b strings.Builder
if member.Pod != nil {
b.WriteString("Pod:")
b.WriteString(member.Pod.Namespace)
b.WriteString(delimiter)
b.WriteString(member.Pod.Name)
} else if member.ExternalEntity != nil {
b.WriteString("ExternalEntity:")
b.WriteString(member.ExternalEntity.Namespace)
b.WriteString(delimiter)
b.WriteString(member.ExternalEntity.Name)
+} else if member.Service != nil {
+b.WriteString("Service:")
+b.WriteString(member.Service.Namespace)
+b.WriteString(delimiter)
+b.WriteString(member.Service.Name)
}
for _, ip := range member.IPs {
b.Write(ip)
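The net effect of the sets.go change is that group-member keys are disambiguated by member type, which is why the unit-test fixture earlier in this commit moves from `ns1/pod1` to `Pod:ns1/pod1`. A simplified sketch of the resulting key layout (not the actual Antrea code, which also appends member IPs when present):

```go
package main

import "fmt"

// memberKey mimics normalizeGroupMember's key layout for members without
// IPs: "<Type>:<namespace>/<name>". "/" is a safe delimiter because it is
// illegal in Namespace and resource names.
func memberKey(memberType, namespace, name string) string {
	return memberType + ":" + namespace + "/" + name
}

func main() {
	fmt.Println(memberKey("Pod", "ns1", "pod1"))     // Pod:ns1/pod1
	fmt.Println(memberKey("Service", "ns1", "svc1")) // Service:ns1/svc1
}
```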
