consul-connect-inject-init failing with transparent proxy #568
Comments
Hey @andriktr, could you also provide your Helm values so we can reproduce this issue?
Sorry about that! This is a regression introduced in that release that we're currently working on fixing.
Could you also share the status of your pods and the logs from the servers? We suspect it might be related to the health of the servers: if they are not in a consistent state, that could be causing this flaky ACL behavior.
Hello @ishustava, from which pods should I collect the logs? The Consul server cluster itself is deployed outside of K8s on 3 servers:
consul-test-vm-1
consul-test-vm-2
consul-test-vm-3
The only real warning I see on the Consul servers is:
Not sure if it has any impact on the issue.
Hey @ishustava, do you have any insights or suggestions regarding this issue?
@ishustava Thanks for the clarification, I will wait for the fix then. Hopefully it will come soon 😉.
Closing as the fix is now merged with #576.
@ishustava I have used the image “hashicorpdev/consul-k8s:2dfffed” and turned off the
@chenjinkai sorry you're having some issues. Would you be okay opening up a separate issue with the same information you commented above? That will make it easier to track your specific problem.
@lkysow ok
Hello,
We upgraded our Consul cluster and the K8s agents to 1.10.0.
We are now trying to use transparent proxy, which is enabled by default in 1.10 (roughly as in the values sketch below).
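For context, here is a hypothetical, trimmed set of Helm values along the lines of our setup; our servers run outside Kubernetes, so the server/externalServers settings are omitted, and this is not our full values file:

```yaml
# Illustrative values only, using keys from the consul-helm chart.
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: true   # the 1.10 default: injected pods use transparent proxy
```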
However, when we try to deploy an app, the consul-connect-inject-init container fails with the following errors. After ~7 restarts, the consul-connect-inject-init container's log output changes to the following:
Sure, I double-checked: a K8s Service, which is required for all Connect services since the transparent proxy feature was added, does exist for my app (a minimal example of such a Service is sketched below). Here is the full YAML for my deployment:
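To illustrate the requirement, this is a minimal sketch of the kind of Service transparent proxy expects; the name, labels, and ports are placeholders, not my actual manifest:

```yaml
# Illustrative only: a Service selecting the app's pods, as required
# for Connect services when transparent proxy is in use.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder name
spec:
  selector:
    app: my-app             # must match the deployment's pod labels
  ports:
    - port: 80              # the port other services dial
      targetPort: 8080      # the port the app actually listens on
```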
I also noticed that each container restart generates a new token for the pod:
The K8s node agents show the following in their logs:
The service status in Consul is:
I don't see any useful info on our Consul server agents (which are outside of K8s).
Also, if I disable transparent proxy by adding the annotation consul.hashicorp.com/transparent-proxy: "false" to my deployment (placed as in the snippet below), everything works fine. Obviously it is something related to the ACLs, but I currently can't catch what is wrong.
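For clarity, this is roughly where the annotation goes; the deployment name and labels are placeholders, and only the pod-template annotations matter for injection:

```yaml
# Illustrative fragment of a Deployment with transparent proxy disabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/transparent-proxy: "false"
    # ...container spec omitted
```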
Thanks in advance.