
NodeLocalDNS exposing prometheus metrics on 0.0.0.0 #7740

Closed
D3DeFi opened this issue Jun 24, 2021 · 4 comments · Fixed by #7748
Labels
kind/feature: Categorizes issue or PR as related to a new feature.

Comments

@D3DeFi

D3DeFi commented Jun 24, 2021

What would you like to be added:
The possibility to configure which IP address the NodeLocalDNS Prometheus metrics are exposed on.

Currently, nodelocaldns exposes metrics on all interfaces, as shown in the socket listing under "Current situation" below.

Since it is a DaemonSet with hostNetwork: true, the metrics backend can be reached from any interface present on the host.

Although I am not sure about this, my guess is that Prometheus would try to scrape such metrics at .spec.hostIP:9253.

Why is this needed:
It is a security problem to expose part of this information publicly if Kubernetes nodes are reachable on public IP addresses.

Current situation:

LISTEN    0         4096          169.254.25.10:53               0.0.0.0:*       users:(("node-cache",pid=19539,fd=7))                                          
LISTEN    0         4096          169.254.25.10:9254             0.0.0.0:*       users:(("node-cache",pid=19539,fd=5))                                          
LISTEN    0         4096                      *:9253                   *:*       users:(("node-cache",pid=19539,fd=6))                                          
LISTEN    0         4096                      *:9353                   *:*       users:(("node-cache",pid=19539,fd=3)) 

Would something like this be considered a good enough patch?

prometheus {{ nodelocaldns_metrics_ip | default("") }}:9253 
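
For illustration: with a hypothetical nodelocaldns_metrics_ip set to the node-local bind address, the rendered Corefile stanza would come out roughly as below, instead of a bare prometheus :9253 that listens on all interfaces (as the ss output above shows):

prometheus 169.254.25.10:9253
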
D3DeFi added the kind/feature label on Jun 24, 2021
@cristicalin
Contributor

@D3DeFi indeed, Prometheus uses .spec.hostIP when composing the address to scrape, so using {{ nodelocaldns_metrics_ip }} or anything else here would break the scraping.

You could bind the service to the host IP, so that if you have multiple interfaces on the host you don't unintentionally leak information on more than one interface. Then, depending on the CNI, you can set up a host-based firewall, or an EndpointPolicy if you are using Calico, to prevent access to the metrics port except from the Prometheus scraper.

Note also that the pod health check happens on this port, and that check is likewise done against the .spec.hostIP.
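
As a rough illustration of the firewalling approach (not something Kubespray sets up for you), a Calico policy on the host endpoints could look like the sketch below. The selector, the order value, and the scraper address 10.0.0.5/32 are placeholders to adapt to your environment, and it assumes HostEndpoint objects already exist for your nodes. Keep in mind that once a policy selects a host endpoint, Calico applies default-deny to its other ingress traffic, so in practice this would be paired with broader allow rules.

# Hypothetical sketch only: restrict the node-local-dns metrics ports on host endpoints.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-nodelocaldns-metrics
spec:
  order: 100
  selector: has(kubernetes-host)        # placeholder HostEndpoint selector
  ingress:
    # Allow only the Prometheus scraper (placeholder address) to reach the metrics ports.
    - action: Allow
      protocol: TCP
      source:
        nets: ["10.0.0.5/32"]
      destination:
        ports: [9253, 9353]
    # Drop any other traffic aimed at the metrics ports.
    - action: Deny
      protocol: TCP
      destination:
        ports: [9253, 9353]
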

@D3DeFi
Author

D3DeFi commented Jun 25, 2021

@cristicalin thank you for explaining.

You could bind the service to the host IP, so that if you have multiple interfaces on the host you don't unintentionally leak information on more than one interface.

Could you please elaborate a little more on this part? Do you mean restricting the actual service manifest via .spec.clusterIP? Is there some way to do this via Kubespray that I may have overlooked, or do I need to create my own manifests for nodelocaldns and simply ignore the ones provided by Kubespray?

@cristicalin
Contributor

Kubespray today does not allow you to do this with our current setup of nodelocaldns. A change would be needed to expose the .spec.clusterIP to the pod via an environment variable and to extend the nodelocaldns-config to use the environment variable in the prometheus :<port> stanza.
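
A rough sketch of that idea (not necessarily the change that was actually merged; the BIND_ADDRESS name and field path are illustrative assumptions): the DaemonSet could inject an address through the downward API, and the Corefile template could consume it via CoreDNS's {$ENV_VAR} substitution. A Service's .spec.clusterIP is not available through a pod fieldRef, so that particular address would have to be templated in by Ansible instead.

# Hypothetical excerpt from the node-local-dns container spec:
env:
  - name: BIND_ADDRESS              # illustrative variable name
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP    # the node IP; adjust if another address is wanted

# Corresponding line in the nodelocaldns Corefile template:
prometheus {$BIND_ADDRESS}:9253
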

@floryut @oomichi @champtar , any thoughts on this? I can submit a PR implementing this if you think it's the right way to go.

@champtar
Contributor

A breaking change plus a config variable to get back the old behavior is fine with me.
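
Purely to illustrate that suggestion (the flag name below is hypothetical, not necessarily what #7748 ended up using), the compatibility switch could be a group_vars boolean that falls back to binding on all interfaces, combined with the nodelocaldns_ip address Kubespray already uses for the node-local listener:

# Hypothetical group_vars entry; defaults to the new, restricted behavior.
nodelocaldns_bind_metrics_on_all_interfaces: false

# Corefile template line: an empty host part means listen on all interfaces.
prometheus {{ '' if nodelocaldns_bind_metrics_on_all_interfaces else nodelocaldns_ip }}:9253
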
