Prometheus metrics for service proxy bits aren't displayed #232

Closed
SEJeff opened this issue Nov 30, 2017 · 6 comments

SEJeff (Contributor) commented Nov 30, 2017

Our kube-router pods are running cloudnativelabs/kube-router:v0.0.16. This release should include both #65 and #74, but pulling the Prometheus stats doesn't show those metrics.

# curl -qs localhost:8080/metrics | egrep -v '^$|^#'
go_gc_duration_seconds{quantile="0"} 8.9631e-05
go_gc_duration_seconds{quantile="0.25"} 0.000110529
go_gc_duration_seconds{quantile="0.5"} 0.000117136
go_gc_duration_seconds{quantile="0.75"} 0.000126899
go_gc_duration_seconds{quantile="1"} 0.002273716
go_gc_duration_seconds_sum 213.375479315
go_gc_duration_seconds_count 1.672008e+06
go_goroutines 98
go_memstats_alloc_bytes 8.104944e+06
go_memstats_alloc_bytes_total 1.0807626516616e+13
go_memstats_buck_hash_sys_bytes 2.252632e+06
go_memstats_frees_total 6.3603753104e+10
go_memstats_gc_sys_bytes 1.292288e+06
go_memstats_heap_alloc_bytes 8.104944e+06
go_memstats_heap_idle_bytes 1.0379264e+07
go_memstats_heap_inuse_bytes 1.232896e+07
go_memstats_heap_objects 57962
go_memstats_heap_released_bytes_total 4.857856e+06
go_memstats_heap_sys_bytes 2.2708224e+07
go_memstats_last_gc_time_seconds 1.5120808451498053e+09
go_memstats_lookups_total 9.729794e+06
go_memstats_mallocs_total 6.3603811066e+10
go_memstats_mcache_inuse_bytes 38400
go_memstats_mcache_sys_bytes 49152
go_memstats_mspan_inuse_bytes 264936
go_memstats_mspan_sys_bytes 409600
go_memstats_next_gc_bytes 1.4321696e+07
go_memstats_other_sys_bytes 7.754144e+06
go_memstats_stack_inuse_bytes 4.554752e+06
go_memstats_stack_sys_bytes 4.554752e+06
go_memstats_sys_bytes 3.9020792e+07
process_cpu_seconds_total 44177
process_max_fds 1.048576e+06
process_open_fds 20
process_resident_memory_bytes 3.9399424e+07
process_start_time_seconds 1.50713638247e+09
process_virtual_memory_bytes 7.8249984e+07
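
Nothing kube-router-specific shows up at all; filtering the same endpoint for the kube_router metric prefix (assuming the metrics added by #65 and #74 are exported under that prefix) returns nothing:

# curl -qs localhost:8080/metrics | grep kube_router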

We're running kube-router as follows:

# docker exec -it $(docker ps | grep -v pause | awk '/kube-router/{print $1}') sh -c 'ps aux | grep kube-route[r]'
    1 root     736:19 /usr/local/bin/kube-router --run-router=true --run-firewall=false --run-service-proxy=true --cluster-cidr=10.129.128.0/18 --nodes-full-mesh=false --enable-overlay=false --kubeconfig=/etc/kubernetes/kubeconfig

I've also verified it wasn't a rogue process running on port 8080:

# lsof -nPi tcp:8080 -s tcp:LISTEN
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-rout 7391 root    6u  IPv6  59238      0t0  TCP *:8080 (LISTEN)

Lastly, this is the configuration we're using:

    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }

Am I doing something wrong, or is this not hooked up correctly?

SEJeff (Contributor, Author) commented Nov 30, 2017

cc: @jdconti

SEJeff (Contributor, Author) commented Nov 30, 2017

Seems to be the same with "latest":

#  curl -qs localhost:8080/metrics | egrep -v '^$|^#'
go_gc_duration_seconds{quantile="0"} 9.119e-05
go_gc_duration_seconds{quantile="0.25"} 0.000123861
go_gc_duration_seconds{quantile="0.5"} 0.0001371
go_gc_duration_seconds{quantile="0.75"} 0.000168882
go_gc_duration_seconds{quantile="1"} 0.000357096
go_gc_duration_seconds_sum 6.503632985
go_gc_duration_seconds_count 43308
go_goroutines 62
go_memstats_alloc_bytes 6.264448e+06
go_memstats_alloc_bytes_total 1.96433705136e+11
go_memstats_buck_hash_sys_bytes 1.937464e+06
go_memstats_frees_total 1.171639526e+09
go_memstats_gc_sys_bytes 894976
go_memstats_heap_alloc_bytes 6.264448e+06
go_memstats_heap_idle_bytes 4.374528e+06
go_memstats_heap_inuse_bytes 8.830976e+06
go_memstats_heap_objects 41441
go_memstats_heap_released_bytes_total 720896
go_memstats_heap_sys_bytes 1.3205504e+07
go_memstats_last_gc_time_seconds 1.512082124041794e+09
go_memstats_lookups_total 4.856946e+06
go_memstats_mallocs_total 1.171680967e+09
go_memstats_mcache_inuse_bytes 28800
go_memstats_mcache_sys_bytes 32768
go_memstats_mspan_inuse_bytes 171000
go_memstats_mspan_sys_bytes 229376
go_memstats_next_gc_bytes 1.0227824e+07
go_memstats_other_sys_bytes 5.701824e+06
go_memstats_stack_inuse_bytes 3.571712e+06
go_memstats_stack_sys_bytes 3.571712e+06
go_memstats_sys_bytes 2.5573624e+07
process_cpu_seconds_total 1477.21
process_max_fds 1.048576e+06
process_open_fds 13
process_resident_memory_bytes 4.3532288e+07
process_start_time_seconds 1.51191813098e+09
process_virtual_memory_bytes 5.9666432e+07

aka:

    Image ID:		docker-pullable://cloudnativelabs/kube-router@sha256:0ad5b9d4a184886aa2c85913f8c2707ed197c99016893ca31f603e8feb44e8b4

murali-reddy (Member) commented Dec 1, 2017

Yes, we had to take out whatever Prometheus metrics were published. If I recall correctly, it was due to incompatibilities when upgrading the docker/client-go library to version 5. Either way, they should be re-enabled.

SEJeff (Contributor, Author) commented Dec 1, 2017

@murali-reddy ah gotcha. So I was going to look into implementing #68 since I have a cluster with a working prometheus-operator instance, but then ran into this. I'll wait until after the docker client bindings have been updated to give it a go.

bzub (Collaborator) commented Dec 11, 2017

For reference, here's the PR where we broke metrics: #101

Caveats
We lose the built-in Real Server (Destination) statistics for our metrics endpoint. These will need to be re-implemented later. Service (Virtual Server) stats are still available, though.
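
For context, IPVS keeps separate statistics for each Virtual Server and for each Destination (Real Server) behind it. A quick way to inspect both directly on a node, assuming the ipvsadm CLI is installed there, is:

# ipvsadm --list --numeric --stats

The per-service rows in that output are the Virtual Server stats that remain available; the indented per-destination rows are the Real Server stats that the metrics endpoint no longer exposes.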

bzub added the bug label Dec 11, 2017
bzub self-assigned this Dec 11, 2017

murali-reddy (Member) commented Dec 24, 2017

@SEJeff You should see the Prometheus metrics below:

# TYPE kube_router_service_active_connections gauge
kube_router_service_active_connections{namespace="default",service_name="frontend",service_vip="100.66.234.77"} 0
kube_router_service_active_connections{namespace="default",service_name="kubernetes",service_vip="100.64.0.1"} 90
kube_router_service_active_connections{namespace="default",service_name="redis-master",service_vip="100.69.35.19"} 6
kube_router_service_active_connections{namespace="default",service_name="redis-slave",service_vip="100.65.214.211"} 0
kube_router_service_active_connections{namespace="kube-system",service_name="kube-dns",service_vip="100.64.0.10"} 0
# HELP kube_router_service_pps_in Incoming packets per second
# TYPE kube_router_service_pps_in gauge
kube_router_service_pps_in{namespace="default",service_name="frontend",service_vip="100.66.234.77"} 0
kube_router_service_pps_in{namespace="default",service_name="kubernetes",service_vip="100.64.0.1"} 11128
kube_router_service_pps_in{namespace="default",service_name="redis-master",service_vip="100.69.35.19"} 10283
kube_router_service_pps_in{namespace="default",service_name="redis-slave",service_vip="100.65.214.211"} 0
kube_router_service_pps_in{namespace="kube-system",service_name="kube-dns",service_vip="100.64.0.10"} 0
# HELP kube_router_service_pps_out Outoging packets per second
# TYPE kube_router_service_pps_out gauge
kube_router_service_pps_out{namespace="default",service_name="frontend",service_vip="100.66.234.77"} 0
kube_router_service_pps_out{namespace="default",service_name="kubernetes",service_vip="100.64.0.1"} 11128
kube_router_service_pps_out{namespace="default",service_name="redis-master",service_vip="100.69.35.19"} 10283
kube_router_service_pps_out{namespace="default",service_name="redis-slave",service_vip="100.65.214.211"} 0
kube_router_service_pps_out{namespace="kube-system",service_name="kube-dns",service_vip="100.64.0.10"} 0

For now, this is all that can be exposed from the service-proxy perspective, as docker/libnetwork only supports these metrics at the moment (of course, we can submit a PR upstream if anything else is relevant).
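
To confirm these are exported on a given node, one can reuse the endpoint from the original report:

# curl -qs localhost:8080/metrics | grep kube_router_service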
