host-port-registry: Add HAProxy ports #568
Conversation
Force-pushed from 4b46253 to 37c0353.
* enhancements/network/host-port-registry.md: Add ports 80, 443, 1936, 10443, and 10444 for HAProxy.
Force-pushed from 37c0353 to d23eedd.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sjenning

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment
@@ -96,6 +99,8 @@ Ports are assumed to be used on all nodes in all clusters unless otherwise specified
| 10257 | kube-controller-manager | apiserver | | metrics, healthz, control plane only |
| 10259 | kube-scheduler | apiserver | | metrics, control plane only |
| 10357 | cluster-policy-controller | apiserver | | healthz, control plane only |
| 10443 | haproxy | net edge | 3.0 | HAProxy internal `fe_no_sni` frontend; localhost only; baremetal only; only on workers running router pod replicas |
s/workers/nodes/ throughout, because the recent conflicts came when the router was scheduled on control-plane nodes. Maybe too small a nit to be worth a follow-up PR?
Router pods select workers. Are you sure it wasn't a compact cluster?
I think the failure was on a compact cluster. My point is that router pods will use these node ports wherever they happen to be running, which is usually, but not always, a compute node. So "workers running router pod replicas" excludes compact clusters, but "nodes running router pod replicas" would always be correct.
How does "workers running router pod replicas" exclude compact clusters? On a compact cluster, control-plane nodes are also compute nodes (they are labeled as both `node-role.kubernetes.io/worker` and `node-role.kubernetes.io/master`). Unless the user explicitly sets `spec.nodePlacement.nodeSelector` on the ingresscontroller, router pod replicas have the following node selector:
spec:
  nodeSelector:
    kubernetes.io/os: linux
    node-role.kubernetes.io/worker: ""
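For reference, a minimal sketch of what an explicit `spec.nodePlacement.nodeSelector` override could look like on the default IngressController; the infra role label and toleration below are illustrative assumptions, not defaults:

```yaml
# Hypothetical override: pin the default router to nodes labeled for infra
# workloads instead of relying on the default worker selector.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
```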
No one is supposed to be selecting for node role worker; it's a meaningless label for end users.
I still prefer "nodes running router pod replicas" as sufficiently specific. I agree that the default `node-role.kubernetes.io/worker` selector makes "worker nodes"... not wrong. But I don't see a need to specify that (e.g. it might be wrong if the user has overridden the default selector, while "nodes running router pod replicas" is true regardless). And I think "ah, in your compact cluster, that same node is both a control-plane node and a worker node" feels like something of a gotcha (I would have described it as "a generically-schedulable control-plane node" or something). Still, I think most folks reading "worker nodes" will not be too confused by that wording, so probably not worth driving this to ground.
On the "it's a meaningless label" front, router pods are the only ones setting this in 4.7.0-fc.0 CI:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/1339884459649929216/artifacts/e2e-aws-upgrade/pods.json | jq -r '.items[] | .metadata.namespace + " " + .metadata.name + " " + ((.spec.nodeSelector // {}) | keys | sort | tostring)' | grep node-role.kubernetes.io/ | grep -v node-role.kubernetes.io/master
openshift-ingress router-default-655bf89f-8fcdg ["kubernetes.io/os","node-role.kubernetes.io/worker"]
openshift-ingress router-default-655bf89f-bhvnn ["kubernetes.io/os","node-role.kubernetes.io/worker"]
Although I understand that something that excluded masters made sense before kubernetes/kubernetes#65618 got fixed.
> I still prefer "nodes running router pod replicas" as sufficiently specific.
Fair enough. #573
> No one is supposed to be selecting for node role worker; it's a meaningless label for end users.
We've had that default since 4.0. We could add a check in 4.7 to block upgrades if the default node selector is used and the user has set a toleration on `node-role.kubernetes.io/master`, and then remove `node-role.kubernetes.io/worker` from the default node selector in 4.8.
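A rough sketch of the kind of configuration such a 4.7 check might flag; the toleration shown is an illustrative assumption, not an actual user report:

```yaml
# Hypothetical configuration the suggested check would flag: the default
# node selector is still in effect (no nodePlacement.nodeSelector), but the
# user has added a toleration for the control-plane taint.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
```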
enhancements/network/host-port-registry.md: Add ports 80, 443, 1936, 10443, and 10444 for HAProxy. (@wking)