eda-api deployment healthchecks fail on v6-primary cluster #244

thehonker opened this issue Aug 23, 2024 · 0 comments
thehonker commented Aug 23, 2024

On a dual-stack, IPv6-primary Kubernetes cluster (one where the IPv6 cluster/service subnets are listed first to the kubelet), the health checks for eda-api fail, so the pod is never listed as a service backend and is eventually killed.

Our case is RKE2 with the following cluster/service CIDR configuration:

cluster-cidr: "fd10:ceff:1067::/56,172.20.0.0/16"
service-cidr: "fd12:ceff:1067::/112,172.22.0.0/16"

I believe this could be fixed by changing 0.0.0.0 to [::] for the gunicorn and daphne listeners in this template:

https://github.com/ansible/eda-server-operator/blob/main/roles/eda/templates/eda-api.deployment.yaml.j2
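
For illustration only, the change might look roughly like the excerpt below; the container names, ports, flags, and application module paths are assumptions rather than the template's actual contents:

    # Illustrative excerpt, not the real eda-api.deployment.yaml.j2
    containers:
      - name: eda-api
        # currently the listener binds the IPv4 wildcard (0.0.0.0); binding the
        # IPv6 wildcard instead lets the pod answer probes on its IPv6 address
        args: ["gunicorn", "--bind", "[::]:8000", "aap_eda.wsgi:application"]
      - name: eda-daphne
        # daphne takes host and port separately; "::" is the IPv6 wildcard
        args: ["daphne", "-b", "::", "-p", "8001", "aap_eda.asgi:application"]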

This would also improve IPv6 support for EDA overall.

However, the template may also need some logic to select IPv4 or IPv6, for example if the cluster is IPv4-only or IPv6-only, or dual-stack with IPv4 primary.
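
As a rough sketch of that logic (the bind_ipv6 variable is hypothetical, not an existing CR field or role default, and would need to be plumbed through the operator), the template could pick the bind addresses from an operator-level setting:

    {# Hypothetical option; "bind_ipv6" does not exist in the operator today #}
    {% if bind_ipv6 | default(false) | bool %}
    {%   set gunicorn_bind = '[::]:8000' %}
    {%   set daphne_host = '::' %}
    {% else %}
    {%   set gunicorn_bind = '0.0.0.0:8000' %}
    {%   set daphne_host = '0.0.0.0' %}
    {% endif %}

The gunicorn and daphne args would then reference {{ gunicorn_bind }} and {{ daphne_host }} instead of hard-coding 0.0.0.0. Note that on Linux, binding [::] typically also accepts IPv4 connections as IPv4-mapped addresses (unless net.ipv6.bindv6only is set), so the simple substitution may already cover dual-stack clusters; IPv4-only clusters with IPv6 disabled on the node would still need something like the conditional above.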

The failing health checks can be seen attempting to connect to the pod's IPv6 address.
[screenshot: probe failures against the pod's IPv6 address]
