Prometheus metrics for service proxy bits aren't displayed #232
cc: @jdconti
Seems to be the same with "latest":
# curl -qs localhost:8080/metrics | egrep -v '^$|^#'
go_gc_duration_seconds{quantile="0"} 9.119e-05
go_gc_duration_seconds{quantile="0.25"} 0.000123861
go_gc_duration_seconds{quantile="0.5"} 0.0001371
go_gc_duration_seconds{quantile="0.75"} 0.000168882
go_gc_duration_seconds{quantile="1"} 0.000357096
go_gc_duration_seconds_sum 6.503632985
go_gc_duration_seconds_count 43308
go_goroutines 62
go_memstats_alloc_bytes 6.264448e+06
go_memstats_alloc_bytes_total 1.96433705136e+11
go_memstats_buck_hash_sys_bytes 1.937464e+06
go_memstats_frees_total 1.171639526e+09
go_memstats_gc_sys_bytes 894976
go_memstats_heap_alloc_bytes 6.264448e+06
go_memstats_heap_idle_bytes 4.374528e+06
go_memstats_heap_inuse_bytes 8.830976e+06
go_memstats_heap_objects 41441
go_memstats_heap_released_bytes_total 720896
go_memstats_heap_sys_bytes 1.3205504e+07
go_memstats_last_gc_time_seconds 1.512082124041794e+09
go_memstats_lookups_total 4.856946e+06
go_memstats_mallocs_total 1.171680967e+09
go_memstats_mcache_inuse_bytes 28800
go_memstats_mcache_sys_bytes 32768
go_memstats_mspan_inuse_bytes 171000
go_memstats_mspan_sys_bytes 229376
go_memstats_next_gc_bytes 1.0227824e+07
go_memstats_other_sys_bytes 5.701824e+06
go_memstats_stack_inuse_bytes 3.571712e+06
go_memstats_stack_sys_bytes 3.571712e+06
go_memstats_sys_bytes 2.5573624e+07
process_cpu_seconds_total 1477.21
process_max_fds 1.048576e+06
process_open_fds 13
process_resident_memory_bytes 4.3532288e+07
process_start_time_seconds 1.51191813098e+09
process_virtual_memory_bytes 5.9666432e+07
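Everything in that output is the default Go runtime and process instrumentation; no kube-router-specific series appear. As a point of comparison, here is a minimal sketch (assuming prometheus/client_golang, with a hypothetical metric name) of why that happens: promhttp.Handler() only serves collectors that have been registered, so if the service-proxy code never registers its own metrics, the default-only output above is exactly what curl returns.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical counter for illustration only; kube-router's real metric
// names (if/when metrics are re-enabled) may differ.
var servicesSynced = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "service_proxy_services_synced_total",
	Help: "Number of service proxy sync runs (hypothetical example).",
})

func main() {
	// Without this Register call, /metrics shows only the default Go
	// runtime and process collectors -- the output pasted above.
	prometheus.MustRegister(servicesSynced)

	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```

Running that sketch and curling :8080/metrics would show the default collectors plus the one registered counter.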
Yes, we had to take out whatever Prometheus metrics were published. If I recall correctly, it was due to incompatibilities with upgrading the docker/client-go library to v5. Either way, they should be enabled.
@murali-reddy ah gotcha. So I was going to look into implementing #68 since I have a cluster with a working prometheus-operator instance, but then ran into this. I'll wait until after the docker client bindings have been updated to give it a go.
For reference, here's the PR where we broke metrics: #101
@SEJeff You should see the prom metrics below:
For now this is all that can be exposed from the service proxy perspective, as docker/libnetwork supports only these metrics at the moment (of course, we can submit a PR upstream if anything else is relevant).
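For anyone curious what wiring such service-proxy metrics up might look like, below is a rough sketch of a custom prometheus.Collector. The metric name, labels, and the readIPVSConnections helper are all hypothetical placeholders, not kube-router's or libnetwork's actual API.

```go
// Package metrics sketches a custom Prometheus collector for
// service-proxy data; everything here is illustrative only.
package metrics

import "github.com/prometheus/client_golang/prometheus"

// ipvsCollector exposes per-service connection counts read from some
// external source (e.g. IPVS), as a hypothetical example.
type ipvsCollector struct {
	connections *prometheus.Desc
}

func newIPVSCollector() *ipvsCollector {
	return &ipvsCollector{
		connections: prometheus.NewDesc(
			"service_proxy_active_connections", // hypothetical name
			"Active connections per ClusterIP service (hypothetical).",
			[]string{"service"}, nil,
		),
	}
}

func (c *ipvsCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.connections
}

func (c *ipvsCollector) Collect(ch chan<- prometheus.Metric) {
	// readIPVSConnections stands in for whatever IPVS/libnetwork call
	// actually returns the counters.
	for svc, count := range readIPVSConnections() {
		ch <- prometheus.MustNewConstMetric(
			c.connections, prometheus.GaugeValue, count, svc)
	}
}

// readIPVSConnections is a placeholder data source so the sketch is
// self-contained.
func readIPVSConnections() map[string]float64 {
	return map[string]float64{"default/kubernetes": 3}
}
```

A collector like this would be registered once with prometheus.MustRegister(newIPVSCollector()) and scraped through the same /metrics endpoint shown earlier.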
Our kube-router pods are running cloudnativelabs/kube-router:v0.0.16. This release should include both #65 and #74, but pulling the Prometheus stats doesn't show it. We're running kube-router as such:
# docker exec -it $(docker ps | grep -v pause | awk '/kube-router/{print $1}') sh -c 'ps aux | grep kube-route[r]'
    1 root      736:19 /usr/local/bin/kube-router --run-router=true --run-firewall=false --run-service-proxy=true --cluster-cidr=10.129.128.0/18 --nodes-full-mesh=false --enable-overlay=false --kubeconfig=/etc/kubernetes/kubeconfig
I've also verified it wasn't a rogue process running on port 8080:
Lastly, this is the configuration we're using:
Am I doing something wrong, or is this not hooked up correctly?