Internal communication on multi replica headless statefulset not possible with transparent proxy enabled #4209

Open · justin-sto opened this issue Jul 25, 2024 · 0 comments
Labels: type/bug Something isn't working

justin-sto commented Jul 25, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Overview of the Issue

In order to increase security on our systems, we enabled transparent proxy. However, after doing so, some of our pods are no longer able to start up. In this specific case we are facing issues with Elasticsearch: it runs with multiple replicas and needs to elect a leader on startup, but with transparent proxy enabled the election no longer succeeds.

We have already enabled the dialedDirectly option, and I also tried configuring a service intention from the service to itself (so that es-master is allowed to communicate with es-master); a sketch of that intention is shown below.
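
For reference, the self-referencing intention I tried looks roughly like this (a sketch; es-master is the service name from our setup and may differ elsewhere):

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: es-master
spec:
  destination:
    name: es-master
  sources:
    - name: es-master       # allow the service to dial itself
      action: allow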

Reproduction Steps

1. Deploy Consul with transparent proxy enabled
2. Deploy Elasticsearch (a multi-replica StatefulSet with a headless service)
3. Enable the dialedDirectly option:

spec:
  transparentProxy:
    dialedDirectly: true
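
For completeness, this spec fragment lives in a ServiceDefaults resource; a minimal sketch, assuming it targets the es-master service:

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: es-master
spec:
  transparentProxy:
    dialedDirectly: true    # allow peers to dial the pod IP directly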

Expected behavior

The Elasticsearch pods should be able to communicate with their peer replicas (e.g. for leader election), as they did before transparent proxy was enabled.
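
To illustrate what peer communication means here: the replicas discover each other through a headless service and dial the individual pod addresses (e.g. es-master-0.es-master-headless) directly. A minimal sketch of such a headless service; the names and the app label are assumptions from our setup:

apiVersion: v1
kind: Service
metadata:
  name: es-master-headless
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs
  selector:
    app: es-master
  ports:
    - name: transport
      port: 9300           # Elasticsearch node-to-node transport port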

Environment details

If not already included, please provide the following:

  • consul-k8s version: 1.19.0
  • values.yaml used to deploy the helm chart:
global:
  name: consul
  gossipEncryption:
    autoGenerate: true
  tls:
    enabled: true
    enableAutoEncrypt: true
  acls:
    manageSystemACLs: true
dns:
  enabled: true
server:
  replicas: 3
ui:
  enabled: true
  service:
    type: "ClusterIP"
  metrics:
    enabled: true
    provider: "prometheus"
    baseURL: http://prometheus-prometheus:9090
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: true
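
With these values, the chart was installed in the usual way; the command below is illustrative (release name and namespace are assumptions):

helm install consul hashicorp/consul --namespace consul --create-namespace --values values.yaml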

Additionally, please provide details regarding the Kubernetes Infrastructure, as shown below:

  • Kubernetes version: v1.28.5
  • Cloud Provider: AKS

Any other information you can provide about the environment/deployment.

Additional Context

The issue is basically the same as in #1155.
