Clarify certain NetworkPolicy behavior #10151
Conversation
Deploy preview for kubernetes-io-master-staging ready! Built with commit 370ad2d https://deploy-preview-10151--kubernetes-io-master-staging.netlify.com
Deploy preview for kubernetes-io-master-staging ready! Built with commit 3cd0001 https://deploy-preview-10151--kubernetes-io-master-staging.netlify.com
Force-pushed from 3cd0001 to 5ab0dc1.
Likewise, `ipBlock` `egress` rules generally select traffic based on the destination IP
provided by the source pod, and in particular, connections to a `Service` IP that then
get redirected to a cluster-external IP may not be handled the same way as connections
I think the ingress section above makes reasonable sense. This part is less obvious to me than the ingress case.
For egress podSelector / namespaceSelector, the enforcement is typically done post service DNAT, so it's perhaps intuitive that the same would be true for ipBlock as well. I'm not 100% convinced either way.
For both cases, I'm slightly concerned that we may be prescribing a bit too heavily; a general "Some plugins provide subtly different behaviors, so refer to each plugin's documentation for more information" type statement might be more appropriate.
I'm not saying it should work this way, but it seems like it will probably, inevitably, work this way for any plugins that don't have their own integrated kube-proxy. They can't enforce the rule before the connection passes through kube-proxy, because they don't know what the final destination IP will be; but they probably can't enforce the rule after the connection passes through kube-proxy either, because at that point they can no longer tell which pod the connection came from (and there may not be any hooks left to let the plugin grab the packet again after it has gone through POSTROUTING anyway).
I'm fine with trying to avoid prescribing. That's what the "in general" was about, but I can waffle even more than that.
I guess the other question is how ipBlock egress rules interact with the service CIDR. I was assuming that if you are allowed to egress to a pod directly, you are also allowed to egress to that pod via any service IP that points to it, without needing an ipBlock allowing egress to that service IP or to the service CIDR in general. In fact, I was assuming that you couldn't block access to service IPs even by doing
```yaml
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 172.30.0.0/16
```
but maybe that's either wrong or implementation-defined?
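For context, a complete policy containing that fragment would look roughly like the sketch below. The policy name, namespace, and pod selector are illustrative, not taken from the discussion, and whether the `except` clause actually blocks egress to the service CIDR is, per the discussion above, possibly implementation-defined.

```yaml
# Hypothetical complete policy around the fragment discussed above.
# 172.30.0.0/16 stands in for a cluster's service CIDR.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-egress   # illustrative name
  namespace: default            # illustrative namespace
spec:
  podSelector: {}               # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.30.0.0/16
```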
Force-pushed from 5ab0dc1 to 20e5943.
Updated the text:
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy.

__ipBlock__: This selects particular IP addresses to allow as ingress sources or egress destinations. Normally these would be cluster-external IPs, since pod IPs are ephemeral and unpredictable.
Can you highlight that this should be in the form of a CIDR?
We validate the CIDR here and leverage the net package's ParseCIDR.
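As an illustrative sketch of that constraint (the address below is an example, not from the PR), an `ipBlock` entry must use CIDR notation, so even a single host address needs a prefix length:

```yaml
# ipBlock values must parse as CIDRs (the API server validates them
# with Go's net.ParseCIDR); a bare IP such as 203.0.113.10 is rejected.
- ipBlock:
    cidr: 203.0.113.10/32   # a single host, written in CIDR form
```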
We probably need to talk more about syntax in general... the document is currently more focused on semantics...
I also noticed that we no longer have the canonical description of how connections are matched against policies anywhere. (E.g., I don't think any current document mentions the fact that health checks are always allowed through even when pods are isolated.)
/milestone 1.12
@zparnold The changes here are not 1.12 specific; the features discussed have existed (in beta) for several releases, and the clarifications here apply to the beta implementations. So according to https://kubernetes.io/docs/contribute/start#choose-which-git-branch-to-use, this should be against master, not release-1.12.
Ok
Force-pushed from 20e5943 to d2aea82.
OK, changed the description of ipBlock to say that it selects "particular IP CIDR ranges" rather than "particular IP addresses". Other than that, there isn't really a lot to say about syntax, since most of the other fields are more obviously constrained. I also rewrote the description of the sample NetworkPolicy to more clearly emphasize what the ipBlock section was doing (and to remove some redundancy that I think hurt readability).
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: zparnold. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I don't know if the formatting is ideal.
@kubernetes/sig-network-pr-reviews