
[Feature request] Configure prometheus response time buckets #3898

Closed
JorritSalverda opened this issue Mar 15, 2019 · 29 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@JorritSalverda
Contributor

JorritSalverda commented Mar 15, 2019

FEATURE REQUEST

Currently the buckets in the metric time series nginx_ingress_controller_response_duration_seconds_bucket use the Prometheus default buckets, set as

Buckets: append([]float64{.001, .003}, prometheus.DefBuckets...),

The default buckets are

DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10}

This works for many APIs and websites; however, we're running a number of very slow APIs with response times well above 10 seconds. With these buckets they all appear to take around 10 seconds to respond, while the actual response times are much longer.

It would be nice to be able to configure the buckets that are actually used via the ConfigMap (or per Ingress if possible, but I think the current histogram is global).

For validation you might want to check that the list is strictly increasing and is limited to a maximum number of entries, to keep the cardinality of the le label within reasonable limits.
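
As a rough illustration only (the helper name parseBuckets, the 20-bucket cap, the fallback, and the label set below are made up for this example, not actual controller behavior), parsing and validating such a list with the Prometheus Go client could look something like this:

package main

import (
    "fmt"
    "strconv"
    "strings"

    "github.com/prometheus/client_golang/prometheus"
)

// parseBuckets turns a comma-separated string (e.g. a ConfigMap value) into
// a strictly increasing, size-limited bucket list.
func parseBuckets(raw string, max int) ([]float64, error) {
    parts := strings.Split(raw, ",")
    if len(parts) > max {
        return nil, fmt.Errorf("too many buckets: %d (max %d)", len(parts), max)
    }
    buckets := make([]float64, 0, len(parts))
    for _, p := range parts {
        v, err := strconv.ParseFloat(strings.TrimSpace(p), 64)
        if err != nil {
            return nil, fmt.Errorf("invalid bucket %q: %w", p, err)
        }
        // Require a strictly increasing list so the histogram stays valid.
        if len(buckets) > 0 && v <= buckets[len(buckets)-1] {
            return nil, fmt.Errorf("buckets must be strictly increasing: %v after %v", v, buckets[len(buckets)-1])
        }
        buckets = append(buckets, v)
    }
    return buckets, nil
}

func main() {
    buckets, err := parseBuckets("0.01,0.1,1,10,30,60", 20)
    if err != nil {
        // Fall back to the buckets the controller uses today.
        buckets = append([]float64{.001, .003}, prometheus.DefBuckets...)
    }
    responseTime := prometheus.NewHistogramVec(prometheus.HistogramOpts{
        Name:    "nginx_ingress_controller_response_duration_seconds",
        Help:    "The time spent on receiving the response from the upstream server",
        Buckets: buckets,
    }, []string{"namespace", "ingress", "service"})
    prometheus.MustRegister(responseTime)
}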

@aledbf aledbf added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 15, 2019
@aledbf
Member

aledbf commented Mar 15, 2019

@JorritSalverda before adding this feature we need to make histograms optional. I hope I can spend some time in the next few weeks to refactor the metrics to allow this.
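
As a rough illustration of what "optional" could mean (the flag name and the wiring below are made up, not the planned design):

package main

import (
    "flag"

    "github.com/prometheus/client_golang/prometheus"
)

func main() {
    // Hypothetical flag: skip registering the latency histogram entirely
    // when users want to keep the series count down.
    enableHistograms := flag.Bool("enable-latency-histograms", true,
        "register the response duration histogram")
    flag.Parse()

    reg := prometheus.NewRegistry()
    if *enableHistograms {
        responseTime := prometheus.NewHistogramVec(prometheus.HistogramOpts{
            Name:    "nginx_ingress_controller_response_duration_seconds",
            Help:    "The time spent on receiving the response from the upstream server",
            Buckets: append([]float64{.001, .003}, prometheus.DefBuckets...),
        }, []string{"namespace", "ingress", "service"})
        reg.MustRegister(responseTime)
    }
    // ...counters and gauges would keep being registered unconditionally.
}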

@JorritSalverda
Contributor Author

Cool stuff! Thx

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2019
@tklovett

This feature is critical in order to use NGINX Ingress request duration metrics to monitor one of our high-latency services. This service frequently takes >10s to respond, and that is normal and acceptable. Currently, the NGINX latency metrics for this service are unusable.

@tklovett

/remove-lifecycle stale

please

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 19, 2019
@Crevil

Crevil commented Aug 30, 2019

We are currently hitting this limit as well. Is there anything we can do to help move this forward? I can see some work has been done since @aledbf posted the initial response, but I'm unsure whether those changes are what he had in mind.

Anyway, if there's anything we can do, let me know. We'd love to help move this forward.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 28, 2019
@Crevil

Crevil commented Dec 2, 2019

/remove-lifecycle stale

As stated before, we are happy to provide the implementation for this if that would help move things forward.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2019
@zigmund

zigmund commented Dec 4, 2019

We have another case: the p99 response time of one of our services is 110 ms. Almost all requests end up in the 100-250 ms bucket, so we cannot see if it slows down to, say, 200 ms.

It would be nice to have configurable buckets to get more accuracy in our case.
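
As an illustration, a bucket layout with finer resolution around 100-250 ms (the values are made up for this example) could look like:

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/prometheus"
)

func main() {
    // Finer resolution between 50 ms and 300 ms, coarser buckets above
    // that for the long tail; illustrative values only.
    buckets := append(
        prometheus.LinearBuckets(0.05, 0.025, 11), // 0.050, 0.075, ..., 0.300
        0.5, 1, 2.5, 5, 10,
    )
    fmt.Println(buckets)
}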

@trnl
Contributor

trnl commented Jan 28, 2020

We would benefit from it as well. Right now each of our ingress controllers produces around 150k metrics!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2020
@yashbhutwala

ping! any movement on this? this would be an awesome feature, happy to help!! 😄

@yashbhutwala

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2020
@audip
Contributor

audip commented Jun 23, 2020

Hey @aledbf, I'm happy to work on this and submit a PR with the fix. Can you describe what you mean by making histograms optional? Are you referring to having a configuration option to enable/disable these histograms?

https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/metric/collectors/socket.go#L129

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 21, 2020
@yashbhutwala

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 21, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2020
@freeseacher

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2020
@sziegler-skyhook

+1, Our organization could also use this feature.

@trnl
Contributor

trnl commented Feb 1, 2021

In the end, we implemented a Prometheus Lua plugin and completely disabled the metrics provided by the controller itself. It's missing some things, like reload metrics, but we can live with it.

It uses https://github.com/knyar/nginx-lua-prometheus. Our plugin looks like this:

local ngx = ngx

local _M = {}

-- Collapse an HTTP status into its class (e.g. "404" -> "4xx"); "_" if missing.
local function convert_status(value)
    return value and value:sub(1,1) .. "xx" or "_"
end

-- Plugin hook: runs once per nginx worker; registers all metrics.
-- The argument to init() is the name of the lua_shared_dict to use.
function _M.init_worker()
    prometheus = require("plugins.prometheus.prometheus").init("prometheus_metrics")
    local buckets = {0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.75, 1, 1.5, 2.5, 5, 10, 20}

    http_requests = prometheus:counter("nginx_http_requests", "Number of HTTP requests", {"host", "namespace", "ingress", "status"})
    http_request_time = prometheus:histogram("nginx_http_request_time", "HTTP request time", {"host", "namespace", "ingress"}, buckets)
    http_request_bytes_received = prometheus:counter("nginx_http_request_bytes_received", "Number of HTTP request bytes received", {"host", "namespace", "ingress"})
    http_request_bytes_sent = prometheus:counter("nginx_http_request_bytes_sent", "Number of HTTP request bytes sent", {"host", "namespace", "ingress"})
    http_connections = prometheus:gauge("nginx_http_connections", "Number of HTTP connections", {"state"})
    http_upstream_requests = prometheus:counter("nginx_http_upstream_requests", "Number of HTTP upstream requests", {"namespace", "ingress", "service", "status"})
    http_upstream_response_time = prometheus:histogram("nginx_http_upstream_response_time", "HTTP upstream response time", {"namespace", "ingress", "service"}, buckets)
    http_upstream_header_time = prometheus:histogram("nginx_http_upstream_header_time", "HTTP upstream header time", {"namespace", "ingress", "service"}, buckets)
    http_upstream_bytes_received = prometheus:counter("nginx_http_upstream_bytes_received", "Number of HTTP upstream bytes received", {"namespace", "ingress", "service"})
    http_upstream_bytes_sent = prometheus:counter("nginx_http_upstream_bytes_sent", "Number of HTTP upstream bytes sent", {"namespace", "ingress", "service"})
    http_upstream_connect_time = prometheus:histogram("nginx_http_upstream_connect_time", "HTTP upstream connect time", {"namespace", "ingress", "service"}, {0.005, 0.01, 0.02, 0.1})
    http_upstream_up = prometheus:gauge("nginx_http_upstream_up", "Upstream peer status", {"namespace", "ingress", "service", "peer"})
end

-- Plugin hook: runs in the log phase for every request.
function _M.log()
    local host = ngx.var.trace_host ~= "off" and ngx.var.server_name or "_"
    local namespace = ngx.var.namespace or "_"
    local ingress = ngx.var.ingress_name or "_"
    local service = ngx.var.service_name or "_"

    http_requests:inc(1, {host, namespace, ingress, convert_status(ngx.var.status)})
    http_request_time:observe(ngx.now() - ngx.req.start_time(), {host, namespace, ingress})
    http_request_bytes_sent:inc(tonumber(ngx.var.bytes_sent) or 0, {host, namespace, ingress})
    http_request_bytes_received:inc(tonumber(ngx.var.bytes_received) or 0, {host, namespace, ingress})

    if (ngx.var.upstream_status) then
        http_upstream_requests:inc(1, {namespace, ingress, service, convert_status(ngx.var.upstream_status)})
        http_upstream_response_time:observe(tonumber(ngx.var.upstream_response_time) or 0, {namespace, ingress, service})
        http_upstream_connect_time:observe(tonumber(ngx.var.upstream_connect_time) or 0, {namespace, ingress, service})
        http_upstream_bytes_sent:inc(tonumber(ngx.var.upstream_bytes_sent) or 0, {namespace, ingress, service})
        http_upstream_bytes_received:inc(tonumber(ngx.var.upstream_bytes_received) or 0, {namespace, ingress, service})
    end
end

return _M

You can find more details about the plugins here:
https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/lua/plugins/README.md

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2021
@freeseacher

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2021
@strongjz
Member

/assign @longwuyuan

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2021
@zigmund

zigmund commented Nov 15, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2021
@iamNoah1
Contributor

/triage accepted
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Dec 15, 2021
@iamNoah1
Contributor

Completed with #7171.

@iamNoah1
Contributor

/close

@k8s-ci-robot
Contributor

@iamNoah1: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
