
host-port-registry: Add HAProxy ports #568

Conversation

@Miciah (Contributor) commented Dec 16, 2020

  • enhancements/network/host-port-registry.md: Add ports 80, 443, 1936, 10443, and 10444 for HAProxy.

@wking

@Miciah force-pushed the host-port-registry-add-HAProxy-ports branch 2 times, most recently from 4b46253 to 37c0353 on December 16, 2020 04:28
* enhancements/network/host-port-registry.md: Add ports 80, 443, 1936,
10443, and 10444 for HAProxy.
@Miciah force-pushed the host-port-registry-add-HAProxy-ports branch from 37c0353 to d23eedd on December 16, 2020 04:29
@sjenning (Contributor) commented:

/lgtm
/approve

@openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged.) on Dec 16, 2020
@openshift-ci-robot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sjenning

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Dec 16, 2020
@openshift-merge-robot merged commit 3b1489c into openshift:master on Dec 16, 2020
@@ -96,6 +99,8 @@ Ports are assumed to be used on all nodes in all clusters unless otherwise speci
| 10257 | kube-controller-manager | apiserver || metrics, healthz, control plane only |
| 10259 | kube-scheduler | apiserver || metrics, control plane only |
| 10357 | cluster-policy-controller | apiserver || healthz, control plane only |
| 10443 | haproxy | net edge | 3.0 | HAProxy internal `fe_no_sni` frontend; localhost only; baremetal only; only on workers running router pod replicas |
Member:

s/workers/nodes/ throughout, because the recent conflicts came when the router was scheduled on control-plane nodes. Maybe too small a nit to be worth a follow-up PR?

Contributor Author (@Miciah):

Router pods select workers. Are you sure it wasn't a compact cluster?

Member:

I think the failure was on a compact cluster. My point is that router pods will use these node ports wherever they happen to be running, which is usually, but not always, a compute node. So "workers running router pod replicas" excludes compact clusters, but "nodes running router pod replicas" would always be correct.

Contributor Author (@Miciah):

How does "workers running router pod replicas" exclude compact clusters? On a compact cluster, control-plane nodes are also compute nodes (they are labeled as both node-role.kubernetes.io/worker and node-role.kubernetes.io/master). Unless the user explicitly sets spec.nodePlacement.nodeSelector on the ingresscontroller, router pod replicas have the following node selector:

spec:
  nodeSelector:
    kubernetes.io/os: linux
    node-role.kubernetes.io/worker: ""
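
For comparison, a minimal sketch of such an explicit override on the default ingresscontroller; the label key here is purely illustrative and not part of any default configuration:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    # Setting nodeSelector here replaces the default worker selector shown above.
    nodeSelector:
      matchLabels:
        example.com/run-router: ""   # illustrative label key, not a default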

Contributor:

No one is supposed to be selecting for node role worker, it’s a meaningless label for end users.

@wking (Member) commented Dec 19, 2020:

I still prefer "nodes running router pod replicas" as sufficiently specific. I agree that the default node-role.kubernetes.io/worker selector makes "worker nodes"... not wrong. But I don't see a need to specify that (e.g. it might be wrong if the user has overridden the default selector, while "nodes running router pod replicas" is true regardless). And I think "ah, in your compact cluster, that same node is both a control-plane node and a worker node" feels like something of a gotcha (I would have described it as "a generically-schedulable control-plane node" or something). Still, I think most folks reading "worker nodes" will not be too confused by that wording, so probably not worth driving this to ground.

On the "it's a meaningless label" front, router pods are the only ones setting this in 4.7.0-fc.0 CI:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/1339884459649929216/artifacts/e2e-aws-upgrade/pods.json | jq -r '.items[] | .metadata.namespace + " " + .metadata.name + " " + ((.spec.nodeSelector // {}) | keys | sort | tostring)' | grep node-role.kubernetes.io/ | grep -v node-role.kubernetes.io/master
openshift-ingress router-default-655bf89f-8fcdg ["kubernetes.io/os","node-role.kubernetes.io/worker"]
openshift-ingress router-default-655bf89f-bhvnn ["kubernetes.io/os","node-role.kubernetes.io/worker"]

Although I understand that something that excluded masters made sense before kubernetes/kubernetes#65618 got fixed.
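
For a live cluster rather than saved CI artifacts, a roughly equivalent check (assuming oc access to the cluster and a local jq) would be:

$ oc get pods -A -o json | jq -r '.items[] | .metadata.namespace + " " + .metadata.name + " " + ((.spec.nodeSelector // {}) | keys | sort | tostring)' | grep node-role.kubernetes.io/ | grep -v node-role.kubernetes.io/master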

Contributor Author (@Miciah):

> I still prefer "nodes running router pod replicas" as sufficiently specific.

Fair enough. #573

> No one is supposed to be selecting for node role worker, it’s a meaningless label for end users.

We've had that default since 4.0. We could add a check in 4.7 to block upgrades if the default node selector is used and the user has set a toleration on node-role.kubernetes.io/master, and then remove node-role.kubernetes.io/worker from the default node selector in 4.8.
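
To make that proposed check concrete, this is roughly the shape of configuration it would flag; a sketch only, with illustrative toleration values:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    # nodeSelector is left unset, so the default node-role.kubernetes.io/worker selector applies,
    # while this toleration allows router pods onto control-plane nodes.
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule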

Labels: approved, lgtm
6 participants