
netpol: allowedIngressPorts and interNamespaceAccessLabels config added with defaults retaining 0.9.1 current behavior #1842

Conversation

@consideRatio (Member) commented Oct 16, 2020

PR changes summarized

Breaking change

  • proxy.networkPolicy is deprecated, and proxy.chp.networkPolicy and proxy.traefik.networkPolicy are introduced.

Restores access to proxy pod's http/https ports by default (bugfix)

Since #1670, the proxy pod no longer accepts incoming traffic from pods in other namespaces, due to a bugfix that turned out to be a breaking change. This can be observed in #1863, for example.

     - ports:
         - port: http
         {{- if or $manualHTTPS $manualHTTPSwithsecret }}
         - port: https
         {{- end }}
-    - from:
+      from:
         - podSelector:
             matchLabels:
               hub.jupyter.org/network-access-proxy-http: "true"

The bug fixed was that hub.jupyter.org/network-access-proxy-http: "true" was entirely ignored and had no meaning. Now that it has meaning, we ended up restricting access too much: for example, the proxy would reject incoming traffic from an nginx-ingress-controller pod that terminated TLS in another namespace.

In this PR I introduce the networkPolicy configuration allowedIngressPorts, as suggested by @manics, and configure it to [http, https] by default for proxy.chp.networkPolicy and proxy.traefik.networkPolicy, while the networkPolicy configuration for hub/singleuser allows none of these ports by default.

This gives us a configuration concept applicable to all networkPolicy resources we deploy, while retaining the ability to exercise even finer-grained control with network access labels.
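For concreteness, here is a sketch of what this could look like in a values.yaml. The option names and defaults are taken from this PR's description; the exact schema layout is illustrative.

proxy:
  chp:
    networkPolicy:
      allowedIngressPorts: [http, https]  # default, retains 0.9.1 behavior
  traefik:
    networkPolicy:
      allowedIngressPorts: [http, https]  # default, retains 0.9.1 behavior
hub:
  networkPolicy:
    allowedIngressPorts: []  # default: no unconditionally allowed ports
singleuser:
  networkPolicy:
    allowedIngressPorts: []  # default: no unconditionally allowed ports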

Conditional use of access labels across namespaces (enhancement)

This PR lets each respective networkPolicy configuration set interNamespaceAccessLabels. If this option is set to accept, the access labels relevant for that networkPolicy, such as hub.jupyter.org/network-access-proxy-api: "true", will be honored even on pods in other namespaces, and on entire namespaces, allowing those pods as sources of incoming network traffic.

+        # source 1 - labelled pods
         - podSelector:
             matchLabels:
               hub.jupyter.org/network-access-proxy-api: "true"
+        {{- if eq .Values.proxy.chp.networkPolicy.interNamespaceAccessLabels "accept" }}
+          namespaceSelector:
+            matchLabels: {}   # without this, the podSelector would only consider pods in the local namespace
+        # source 2 - pods in labeled namespaces
+        - namespaceSelector:
+            matchLabels:
+              hub.jupyter.org/network-access-proxy-api: "true"
+        {{- end }}
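As a usage sketch, assuming interNamespaceAccessLabels is set to accept, access could then be granted from another namespace as below. The namespace other-ns and pod my-pod are placeholders.

proxy:
  chp:
    networkPolicy:
      interNamespaceAccessLabels: accept

# source 1: label a specific pod in another namespace
kubectl label pod my-pod --namespace other-ns hub.jupyter.org/network-access-proxy-api="true"

# source 2: label an entire namespace
kubectl label namespace other-ns hub.jupyter.org/network-access-proxy-api="true"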

Motivation

Network policies are enabled by default as of #1271, and we need the ability to configure access in a way that won't force users and maintainers to debug native k8s network policy configuration, which is notoriously hard to get right. Closes #1863.

allowedIngressPorts config

We needed to come up with a solution after #1670, which, as described above, ended up restricting ingress towards the proxy pod, for example from an nginx-ingress-controller pod in another namespace. At the same time, we should have a flexible and cohesive configuration for our networkPolicies.

interNamespaceAccessLabels config

interNamespaceAccessLabels ensures that users are not forced to add fragile workarounds when they need access from another namespace in the future, and by avoiding fragile workarounds, we avoid the complicated issue reports that follow when those fragile configurations fail.

Related

NetworkPolicy documentation

This section of the k8s documentation provides very relevant details on the rules we can set in a NetworkPolicy, distinguishing between: a) podSelector, b) namespaceSelector, c) podSelector AND namespaceSelector, and d) ipBlock.

@consideRatio consideRatio force-pushed the pr/network-policies-across-namespaces branch from 2a36f98 to 818605c on October 16, 2020 at 11:59
@consideRatio consideRatio changed the title netpol: allow internamespace communication netpol: allow inter-namespace communication Oct 16, 2020
@manics (Member) commented Oct 16, 2020

If you're running multiple JupyterHubs segregated by namespace you wouldn't want cross-namespace traffic.
Let me think about this a bit more....

@consideRatio (Member, Author) commented Oct 16, 2020

If you're running multiple JupyterHubs segregated by namespace you wouldn't want cross-namespace traffic.
Let me think about this a bit more....

You still require a dedicated label on the pods in other namespaces, so right now it is secure assuming there isn't someone in another namespace who can create pods and actively wants to attack the JupyterHub pods somehow.

We would be more locked down if we actively asked users to do three things instead of one:

  1. Ask them to label the pods that are to be allowed access with hub.jupyter.org/network-access-proxy-http: "true".

  2. Ask them to add an ingress rule to the proxy netpol.

            - namespaceSelector:
                matchLabels:
                  my-namespace-label: something
              podSelector:
                matchLabels:
                  hub.jupyter.org/network-access-proxy-http: "true"
  3. Ask them to label the namespace where they have a pod residing that needs access.


I think we would add too much complexity for users if we ask them to do all three parts, so they may very well end up just adding a more basic rule, like one allowing the entire namespace.

        - namespaceSelector:
            matchLabels:
              my-namespace-label: something

I'm open to the idea of making this an opt-out, so that one can use the 3-step solution instead, but I'd like to avoid introducing additional configuration options if possible.

@manics (Member) commented Oct 16, 2020

The labels were originally added to make the definitions of policies between JupyterHub components easier: #534 (comment)

I don't think it's as useful to enforce those labels outside of the JupyterHub namespace. In general an application owner should have control over who can access a service, but this effectively hands over control to the owner of the other namespace. This is a potential problem when namespaces are run by people with different privileges, or where it's used to reduce the impact of an exploit in a service.

How about if we say the labels are only intended for use within the same namespace, and that for cross-namespace traffic labels are irrelevant, since they don't add much security? Then if you want to allow ingress from a namespace other-ns, it should be enough to add other-ns to the allowed list, without requiring labels to be added to the other app. This is a lot easier for those other developers: they simply ask for their namespace to be added to the Z2JH allowed list, with no changes required on their end.

If the other app owner wants to limit a pod to only connect to JupyterHub they can do that by defining their own egress network policy.
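A minimal sketch of such an egress policy, defined by the other app's owner in their own namespace; all names and labels here are hypothetical, and it assumes the JupyterHub namespace carries a name label.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-only-to-jupyterhub  # hypothetical name
  namespace: other-ns              # the other app's namespace
spec:
  podSelector:
    matchLabels:
      app: my-client               # hypothetical pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: jupyterhub     # assumes this label on the JupyterHub namespace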

@manics (Member) commented Oct 16, 2020

Do you have any examples of how other Helm charts configure network policies? For instance if all charts follow the Z2JH approach then if you had say 50 applications that need to interact with each other each of those 50 applications would need 49 annotations like <other-application-X>/network-access: "true". This sounds a bit wrong to me, but if that's what other charts do then we could go along with it.

What ingress did you have problems with? I've always used nginx-ingress in its own namespace without a specific exemption in the Z2JH network policy; I just assumed ingress had some sort of special access. Maybe something's changed recently?

@consideRatio (Member, Author) commented:

Do you have any examples of how other Helm charts configure network policies? For instance if all charts follow the Z2JH approach then if you had say 50 applications that need to interact with each other each of those 50 applications would need 49 annotations like /network-access: "true". This sounds a bit wrong to me, but if that's what other charts do then we could go along with it.

Hmmm, I mean right now all traffic from all other namespaces is banned, and this is a mechanism to open up communication. So, unless the policy is to allow communication from all other namespaces, it doesn't become much easier, other than by creating a namespace label that allows traffic from all pods within the namespace. That would make sense to add as well, because why not.

If we don't get this PR merged, then this is what I'd do to resolve it...

Option 1

proxy:
  networkPolicy:
    ingress:
      - from:
          - namespaceSelector:
              matchLabels: {}
            podSelector:
              matchLabels:
                hub.jupyter.org/network-access-proxy-http: "true"

Then update the nginx-ingress helm chart, like I'd do otherwise as well.

controller:
  podLabels:
    hub.jupyter.org/network-access-proxy-http: "true"

Option 2

proxy:
  networkPolicy:
    ingress:
      - from:
          - namespaceSelector:
              matchLabels:
                hub.jupyter.org/network-access-proxy-http: "true"

kubectl label ns <namespace> hub.jupyter.org/network-access-proxy-http="true"

Summary of current situation

We need to have a suggestion for how users should configure access to the various jupyterhub pods, which would otherwise block access from any pod outside the jupyterhub namespace. The concrete case that made me realize this is that I had an nginx-ingress-controller in another k8s namespace that directed traffic to the proxy-public service / the proxy pod, but that traffic was denied by the network policy rules. It was a surprise to me that an ingress rule without an explicit namespaceSelector implicitly restricts the allowed traffic to the local namespace.

The proposal of this PR is to simply let the various JupyterHub pods accept incoming traffic from pods in all namespaces that carry a dedicated label, such as hub.jupyter.org/network-access-proxy-http: "true".

Another option would be to ask users to add a custom ingress rule that they manage themselves, like the one below. This rule would require both the namespace being labeled correctly and a pod within it being labelled correctly, to send traffic to a specific port on the proxy pod.

proxy:
  networkPolicy:
    ingress:
      - from:
          - namespaceSelector:
              matchLabels:
                my-specific-namespace-label: my-specific-namespace-label-value
            podSelector:
              matchLabels:
                my-specific-pod-label: my-specific-pod-label-value
        ports:
          - port: http  # or https depending on other configuration

Learning how to specify networkPolicies is quite tricky; I'd say it is the most complicated resource to get right of all the resources we manage in this Helm chart.


I've always used nginx-ingress in its own namespace without a specific exemption in the Z2JH network policy; I just assumed ingress had some sort of special access. Maybe something's changed recently?

I'm not sure; to me it sounds weird that an in-cluster pod in another namespace should be allowed access on the merit that it can be considered to function as a loadbalancer, because how would it prove it was acting as a loadbalancer to the network policy controller that enforces the rules?

Perhaps I locked down the proxy policy in a recent update to the networkPolicy resources without thinking about it? Hmm... yes...


In the past, we had a rule saying that as long as anyone wanted to speak specifically to the proxy pod's HTTP or HTTPS port, that was fine, because specifying only a port doesn't block ingress from other namespaces, I assume.

I didn't realize that I changed the behaviour when I made the change in #1670 - a testament, in my mind, to how complicated networkPolicy resources are to manage.

@manics (Member) commented Oct 16, 2020

My suggestion for allowing ingress from other namespaces is for the Z2JH config to list those other namespaces it's allowing ingress from (which may include all) and not bother checking labels for pods outside the current namespace. i.e. the resulting network ingress policy is the union of:

  • current ingress policy (specific label for pod in the same namespace)
  • ingress from any pods in a list of allowed namespaces regardless of labels

I think this avoids reducing security when namespaces are used for segregation of users with different privileges, and avoids users having to add labels to their own pods to access jupyterhub, which can be tricky as each helm chart may do it differently.

@consideRatio (Member, Author) commented Oct 16, 2020

Ideas of options summarized

Allowlist of namespaces to accept traffic from

An upside of this is that the configuration for this option is entirely within our Helm chart's control to document. Another upside is that only an explicitly trusted namespace is allowed to send traffic here. A downside is that it will allow traffic from more pods than just the ones that actually need access within that namespace.

A predefined label on pods to accept traffic from, even if coming from outside the namespace

The main benefit, in my mind, is that it limits traffic very explicitly to a minimal scope, so that a virus in some other pod in this namespace won't risk sending malicious traffic without even needing to label itself correctly. But it is also a bit cumbersome, considering it requires knowledge of how to label the pods, which may be deployed with various mechanisms and configuration options. Further, this could allow a separate namespace with maliciously labeled pods to send traffic to the hub/proxy/singleuser servers.


I think both of these options are individually viable, and it would also be viable to allow both of them simultaneously by default, or to let them be toggled by a configuration flag. @yuvipanda and @minrk, who have also edited these files, what do you think?

Configuration to choose between single-namespace and inter-namespace access labels

This PR is currently the equivalent of this idea, but without exposing a configuration toggle for pods in other namespaces.

     # allowed pods (hub.jupyter.org/network-access-proxy-http) --> proxy (http/https port)
     - ports:
         - port: http
         {{- if or $manualHTTPS $manualHTTPSwithsecret }}
         - port: https
         {{- end }}
       from:
         - podSelector:
             matchLabels:
               hub.jupyter.org/network-access-proxy-http: "true"
+          {{- if .Values.someconfiguration }}
+          namespaceSelector:
+            matchLabels: {}   # allow the label above to be set on a pod in any namespace
+          {{- end }}

Addition of a namespace label to provide access for all pods

Note that we cannot target a namespace by name, only by labels set on the namespaces. I suggest that we make use of our labels that are already infused with meaning.

     # allowed pods (hub.jupyter.org/network-access-proxy-http) --> proxy (http/https port)
     - ports:
         - port: http
         {{- if or $manualHTTPS $manualHTTPSwithsecret }}
         - port: https
         {{- end }}
       from:
         - podSelector:
             matchLabels:
               hub.jupyter.org/network-access-proxy-http: "true"
+        - namespaceSelector:
+            matchLabels:
+              hub.jupyter.org/network-access-proxy-http: "true"
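With such a rule in place, granting access to every pod in another namespace would be a one-liner; the namespace name below is a placeholder.

kubectl label namespace other-ns hub.jupyter.org/network-access-proxy-http="true"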

@consideRatio (Member, Author) commented:

This section of the k8s documentation provides very relevant details on the rules we can set in a NetworkPolicy, distinguishing between: a) podSelector, b) namespaceSelector, c) podSelector AND namespaceSelector, and d) ipBlock.

@yuvipanda (Collaborator) commented:

I currently have the following lines to enable network policy and also allow ingress from a set of namespaces:

https://github.com/2i2c-org/pilot-hubs/blob/master/hub/values.yaml#L71
https://github.com/2i2c-org/pilot-hubs/blob/master/support/values.yaml#L6

I wrote #1380 a while ago to make this possible.

I think Option 1 you described is the way to go, @consideRatio.

@consideRatio (Member, Author) commented Oct 17, 2020

@yuvipanda thank you for your input! I just updated #1842 (comment) with two concrete examples of: a) making it possible to opt out of Option 1, and b) adding a new way to allow all pods in namespaces with a certain label.

Do you think we should expose such configuration, or just go with these behaviors by default, without exposing configuration that we would have to explain and maintain?


@yuvipanda as a separate note, in your pilot-hubs configuration, you have a from rule like this added with two separate sources.

      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              name: support
        - podSelector:
            matchLabels:
              hub.jupyter.org/network-access-proxy-http: "true"
        ports:
        - port: http

With the two separate sources of traffic that you allow in this rule, you permit either any pod from the support namespace, or a pod in the same namespace labelled with the access label. The second source is already allowed by default. I suspect that you wanted to use a "podSelector AND namespaceSelector" source rather than two separate sources. The related reference documentation is available here.
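To illustrate the difference with a sketch (not your actual config): the placement of the dash decides whether the selectors are OR'd as two sources or AND'd within one source.

ingress:
  # two separate sources (OR): any pod in the labeled namespace,
  # or any labeled pod in the local namespace
  - from:
      - namespaceSelector:
          matchLabels:
            name: support
      - podSelector:
          matchLabels:
            hub.jupyter.org/network-access-proxy-http: "true"

  # one combined source (AND): only labeled pods in the labeled namespace
  - from:
      - namespaceSelector:
          matchLabels:
            name: support
        podSelector:
          matchLabels:
            hub.jupyter.org/network-access-proxy-http: "true"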

@consideRatio consideRatio changed the title netpol: allow inter-namespace communication netpol: allow inter-namespace ingress with pod and namespace labels which currently are only applicable on local pods Oct 17, 2020
@consideRatio consideRatio changed the title netpol: allow inter-namespace ingress with pod and namespace labels which currently are only applicable on local pods netpol: allow inter-namespace ingress with pod and namespace labels Oct 17, 2020
@consideRatio (Member, Author) commented:

@manics @yuvipanda I updated to also allow labelled namespaces and not only labelled pods in other namespaces.

In my mind, whether to merge now depends on: do we see it as a problem that we expose the hub/singleuser-server/proxy to communication from pods in other namespaces, assuming they have the correct label?

I would argue that this is not an escalation compared to 0.9.1, where all http/https traffic to the proxy pod from other namespaces was allowed until #1670 was merged, and the proxy pod exposes the singleuser servers and the hub server.

I'm open to the idea that we make the inter-namespace ingress for hub/proxy http(s)/proxy api/singleuser-server configurable by a flag, set to false by default for everything except the proxy pod's http(s) ingress. The downside of this, though, is that it becomes quite a bit more complicated to correctly communicate this to users, who may assume that adding a label like hub.jupyter.org/network-access-hub: "true" would be enough.

@consideRatio consideRatio force-pushed the pr/network-policies-across-namespaces branch from 92c57b7 to 70e2f86 on October 17, 2020 at 15:41
@consideRatio consideRatio added this to the 0.10.0 milestone Oct 18, 2020
@consideRatio (Member, Author) commented:

@manics okay so I implemented interNamespaceIngressForAPIPort options for the networkPolicy resources.

But... Note that we have an autohttps pod which is not covered by any policy, and will route traffic to the proxy pod, and onwards to the hub pod and user pods.

Hmm... In general, if no policy targets a pod, it can accept and send whatever traffic it wants. But if a default network policy targets all pods, specifically regarding ingress, and has no rules allowing traffic, then all ingress to them is denied unless another policy allows it.
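As a minimal sketch of those default-deny semantics: a policy like the following selects all pods for ingress but allows nothing, so all ingress to pods in the namespace is denied unless another policy allows it.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}  # selects all pods in the namespace
  policyTypes:
    - Ingress      # no ingress rules are listed, so all ingress is denied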

Hmmmm hmmm hmmm... I think we should really cover the autohttps pod with these network policies as well, at least. Perhaps without creating a default deny rule that would influence pods in the namespace that we don't know about, though.

@manics (Member) commented Oct 19, 2020

Cheers!

After sleeping on it, I wonder if we could simplify the options to just one per component? Your intention was to have the defaults "just work" for the majority of people; my wish was to have an easy way to lock down the setup, with additional ingress done through <component>.netpol.ingress: [rules]. To me this suggests the fine-grained configuration offered by multiple options isn't needed. What do you think of:

hub:
  networkPolicy:
    interNamespaceLabeledIngress: allow|deny (or enabled|disabled, true|false, allow|block, ...)
    ingress: []

proxy:
  networkPolicy:
    # Covers both HTTP and API 
    interNamespaceLabeledIngress: allow|deny (or enabled|disabled, true|false)
    ingress: []

singleuser:
  networkPolicy:
    interNamespaceLabeledIngress: allow|deny (or enabled|disabled, true|false)

where interNamespaceLabeledIngress controls both sources 1 and 2, and the default is whatever you prefer.

@consideRatio (Member, Author) commented Oct 19, 2020

@manics thanks for iterating with me on this!

I'll go with your suggestion of having an interNamespaceLabeledIngress option which toggles the ability to add a label to any pod in any namespace, or to an entire namespace. In other words, a toggle between requireEitherPodOrNamespaceLabel and deny.

Currently aiming for...

  • Create a NetworkPolicy resource for autohttps (coupled with proxy.traefik.networkPolicy)
  • Deprecate proxy.networkPolicy in favor of proxy.chp.networkPolicy
  • Add interNamespaceLabeledIngress options with allow|block, where allow means requireEitherPodOrNamespaceLabel.
  • Update schema.yaml

Currently considering...

  • Decide whether or not to have a mechanism, truthy by default for the autohttps and proxy netpols, which allows all ingress to the HTTP(S) ports no matter what. This would make this PR less breaking, by not forcing users to configure any label at all, and it would not escalate permissions compared to 0.9.1 either, which permitted access to both the autohttps and proxy pods by default.
  • If we opted for such a mechanism, how do we name the option, and where do we locate it?

I added proxy.chp.allowHttpIngressUnconditionally defaulting to true and proxy.traefik.allowHttpIngressUnconditionally defaulting to true, which bypass the need for labels and simply open up access to the pods on their HTTP and HTTPS entrypoints.

Iteration on names

@manics I'm not sure about the naming. For example, block indicates blocking, but we are not actually explicitly blocking, only saying that access labels can only be used in the local namespace rather than in all namespaces.

I'm also not sure about the allowHttpIngressUnconditionally option for proxy.chp and proxy.traefik, but I'm confident we must locate these under proxy.chp and proxy.traefik respectively, to be able to retain some logical comprehension of the configuration.

@manics (Member) commented Oct 19, 2020

With #1852 it might be better to rename allowHttpIngressUnconditionally to something like allowAllIngressPorts?

@consideRatio (Member, Author) commented:

I'm unhappy about exposing the proxy API port by default; I think that should be locked down.

@manics (Member) commented Oct 19, 2020

Good point. Perhaps allowedIngressPorts, default [80, 443] (or 8443 or whatever?)? And if you've added ports using #1852 you also have to add them yourself to allowedIngressPorts? Or do you think that should be automatic?

@consideRatio (Member, Author) commented:

Good point. Perhaps allowedIngressPorts, default [80, 443] (or 8443 or whatever?)? And if you've added ports using #1852 you also have to add them yourself to allowedIngressPorts? Or do you think that should be automatic?

I think this is an excellent idea!

@consideRatio consideRatio force-pushed the pr/network-policies-across-namespaces branch 2 times, most recently from 11d20af to aa3b70a on October 20, 2020 at 09:36
@minrk (Member) commented Oct 23, 2020

The other use case that I can see coming up is folks deploying users in their own namespaces. These deployments definitely need cross-namespace access from singleuser pods -> hub API.

I wonder if our lack of using the release name in our resources is coming up here again. Would it be simpler or more confusing to have two labels, such as:

  1. hub.jupyter.org/network-access-hub - within namespace only (current behavior)
  2. hub.jupyter.org/network-access-hub-{{.Release.Name}} - across namespaces

or a more descriptive hub.jupyter.org/cross-namespace-network-access-hub

The idea being that rather than having network policies with our own customized configuration options, we have two simpler/more static ingress policies. Is that a good idea, or no? The main downside I can see of this being unconditional is that it provides a way for pods in another namespace who know a release name to grant themselves access. Maybe that's not ideal, and what we have here is the best option.
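A rough sketch of what those two static ingress sources might look like in a policy; the release name myhub is a hypothetical example.

- from:
    # label honored within the local namespace only (current behavior)
    - podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
    # release-name-scoped label honored across all namespaces
    - namespaceSelector:
        matchLabels: {}
      podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub-myhub: "true"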

Review thread on a changelog entry:

See the [security documentation](https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/security.html#kubernetes-network-policies) for more details on this.

- The Helm chart configuration `proxy.networkPolicy` has been removed,
@minrk (Member) commented Oct 23, 2020

This change is my only concern here. I don't love it because it seems to be baking too much of the implementation into the configuration. When we finally manage to land the traefik proxy implementation (#1162), this configuration structure will change again when nothing relevant to users should really need to. What do users need to know about the separation of autohttps and chp that we couldn't represent without separating them?

@consideRatio (Member, Author) replied:

We will still have a dedicated proxy for TLS termination, I think, given that traefik doesn't support HA with Let's Encrypt.

This Helm chart has a separate pod for TLS termination at the moment, and combining the config of two pods into one has caused confusion in general. I think I'm for doing this even if it were a standalone refactoring, with no purpose other than reducing that confusion.

If you feel strongly about it, I can revert it back, locating the chp netpol config under proxy.netpol again.

@minrk (Member) replied:

To me, it seems this change would increase confusion by having two network policies apply to what users should see as one resource (and indeed it will be one resource if we manage to land the traefik proxy). I am thinking of this example:

Q: I set proxy.chp.networkPolicy, why is it not having any effect?
A: that's because you have enabled autohttps, which means that proxy.traefik.networkPolicy governs proxy-public, not proxy.chp.networkPolicy. If you change how you configure https, your network policy will do what you expect, or you need to move the configuration to proxy.traefik.networkPolicy

Or is that not an accurate description of the behavior here?

@consideRatio (Member, Author) commented:

Thank you for reviewing this @minrk!

Hmmmmm, if we can get away with simpler logic in all cases, I'm for simpler logic. My guiding compass feels very strongly about avoiding the need for users to add their own custom netpol rule injections at this point, because that is just so hard for everybody involved to understand, and will become fragile with changes.

Whether we can get away with simpler logic partially relates to whether inter-namespace traffic can accept always going through the proxy pod, or whether it needs direct access, without TLS termination etc., to the hub pod etc.

The main downside I can see of this being unconditional is that it provides a way for pods in another namespace who know a release name to grant themselves access.

I had a very long discussion with @manics related to the security implications in a now-resolved thread. I think that access to the outer proxy (autohttps, or proxy chp) seems reasonable to allow by the active choice of someone powerful enough to label pods in some namespace.

Now we lock down inter-namespace communication by default, except for the autohttps and proxy pods' http/https ports, but leave the proxy API port blocked by default.


It is a complicated API to design, and I can make changes as wished, but I feel strongly about the following:

  • that we ensure we avoid making users specify k8s-native ingress rules
  • that we get a solution merged without too much further delay; in other words, clear decisions are what I want at this point, and I am willing to compromise on the perfect choice, so we fix a known issue and unblock 0.10.0-beta.1 for release.

@manics (Member) commented Oct 23, 2020

Other than the last discussion about whether to separate CHP and the autohttps pod or to have a single proxy config, I think this PR is a good solution. It provides:

  1. a clear way to enable inter-namespace ingress to the pods/ports we think are reasonable by adding labels to pods or namespaces (interNamespaceAccessLabels: accept)
  2. a way to disable default inter-namespace ingress (interNamespaceAccessLabels: ignore)
  3. a way to add your own NetworkPolicies independent of the previous two points, so you can set the above to allow no ingress and add your own rules, or you can have the default rules and add to them without needing to redefine them

(1) and (3) cover @consideRatio's wishes, (2) and (3) cover mine. My preference is to default to the more secure interNamespaceAccessLabels: ignore, but I'm not opposed to changing that to accept if the docs are clear.
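Putting those three points together, a rough sketch of the resulting configuration surface for one component; the option names are taken from this thread, and the exact layout is illustrative.

proxy:
  chp:
    networkPolicy:
      interNamespaceAccessLabels: ignore    # (2), or "accept" for (1)
      allowedIngressPorts: [http, https]
      ingress: []                           # (3) additional user-defined rules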

@minrk (Member) commented Oct 23, 2020

Thanks for your comments, and for thinking so carefully about this! What I was trying to get at is that I think defining network policies separately for traefik and chp means that users who need to touch this configuration need to know more about how our https handling works than they should have to.

  1. I do not think that cross-namespace access to the CHP API is something that's important to enable except for extremely advanced users, so separating those for traefik and CHP doesn't seem useful to me
  2. access to the public entrypoint is the main thing that folks are likely to want to control, if they want to control anything, and which pod that targets is different depending on https values (manual or no ssl goes directly to chp while autohttps goes to traefik). I don't think the user's network policy configuration should change based on whether SSL is terminated at CHP, traefik, or outside.

That's why I think we should have only a single configuration to set the accessibility of the "external entrypoint", and it should be up to us to make sure that's applied to the right target, not the user.

Other than that detail, I think this PR is a good solution, and I would just like to consolidate the two network policy configurations into one (or assign them more practical and less implementation-detail names if there's a reason they need to be separate that I haven't understood such as public and api to match the service names)

@consideRatio (Member, Author) commented:

I would just like to consolidate the two network policy configurations into one (or assign them more practical and less implementation-detail names if there's a reason they need to be separate that I haven't understood such as public and api to match the service names)

There are a lot of things to say, and when it comes down to it everything is possible, but I find consolidating them too complicated to document for users and maintainers to motivate a change. Here is a list of thoughts.

  • For a tight restriction, we need a network policy for the autohttps pod, because if not it will expose the proxy pod through the autohttps traffic.
  • For a tight restriction within the namespace, we need a network policy for the proxy pod as well so we can say that only the hub can access the proxy pod.
  • Since we need two network policies, configuring them in a single location would make for a configuration API that is very hard to understand, both for users and maintainers, in my mind.
  • Thankfully, the current configuration defaults, which are like 0.9.1, mean that you don't need to do anything, because access to both autohttps/proxy is open to normal HTTP and HTTPS traffic by default.
  • The advanced user that wants to lock down the 0.9.1-like config further can tweak allowedIngressPorts: [http, https] under proxy.chp|traefik.networkPolicy, which will still allow, for example, the hub pod to access the proxy pod's API and the autohttps pod to send traffic to the CHP proxy pod's HTTP port.

@minrk perhaps we can have a chat about this to arrive at a resolution to go with? I'd like to hash out a concrete action plan and validate that you prefer it to the current PR suggestion.

@consideRatio (Member, Author) commented:

@minrk thank you for the video chat! I wanted to summarize a relevant point that we can pursue in the future.

Idea to followup this PR

  1. We want to target all individual pods that accept network traffic with network policies, to tighten the accepted communication. We do that in networkPolicy resources associated with the pods rather than with the k8s services. This is how this PR is done.
  2. We also want to target JupyterHub as an installed application with a network policy. This PR accommodates that by still being pod-centered, but letting the autohttps pod and proxy pod allow ingress on their http and https ports by default, no matter what labels are used, while allowing this to be tightened.
  3. We can follow up this PR by adding a network policy dedicated to representing the proxy-public k8s service (see the sketch below)! After we have restricted access to a pod, additional network policies can only open up access by adding more ways to allow it. We can therefore make a new one dedicated to the proxy-public service, targeting the autohttps or proxy pod dynamically. With such a networkPolicy in place, we could set proxy.chp.networkPolicy.allowedIngressPorts=[] and proxy.traefik.networkPolicy.allowedIngressPorts=[] from their current defaults of [http, https], and instead rely on this proxy-public-representing networkPolicy to open up the http/https ports on the outermost pod.
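A rough sketch of what such a proxy-public-representing policy might look like; the name and labels are assumptions, and the chart would select the autohttps or proxy pod dynamically.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: proxy-public        # hypothetical, mirroring the k8s service name
spec:
  podSelector:
    matchLabels:
      component: autohttps  # or "proxy" when TLS terminates at chp
  ingress:
    - ports:
        - port: http
        - port: https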

@consideRatio (Member, Author) commented:

I'll merge this now, given agreement with @minrk in a video chat that it can be good enough for now, and given that @manics seems positive about the current state in #1842 (comment).

With it merged, I plan to update the changelog and cut 0.10.0-beta.1 finally!

@consideRatio consideRatio merged commit cffbe6c into jupyterhub:master Oct 23, 2020
@minrk (Member) commented Oct 23, 2020

👍 thanks for talking me through it!

Successfully merging this pull request may close these issues:

502 Bad Gateway in Hub after upgrading chart (0.9.0-alpha.1 -> 0.9.0-n4xx)