no data on Functions explorer dashboard with Rust app #66
Hello, thanks for the detailed bug report. I don't think I can take a deeper look at it before the end of the week, but a preliminary check I can do is:
Is there any specific reason you chose the …? Can you try to use the …?
Hi @gagbo! Thanks for checking back. I have used the …, with this exporter setup:

```rust
use metrics_exporter_prometheus::PrometheusBuilder;

let prom = PrometheusBuilder::new();
prom
    .install()
    .expect("failed to install recorder/exporter");
```

I was not sure if that feature exposes the Prometheus endpoint at a port and did not want any interference here. So switching to … After switching back to …
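As an aside on the port concern: with metrics_exporter_prometheus, install() also starts the HTTP scrape listener, and the builder lets you pick the listen address explicitly if you want to rule out port clashes. A minimal sketch; the address below is only an example:

```rust
use metrics_exporter_prometheus::PrometheusBuilder;

// Sketch: same recorder/exporter setup, but with an explicit listener
// address instead of the exporter's default, to avoid port interference.
PrometheusBuilder::new()
    .with_http_listener(([0, 0, 0, 0], 9091)) // example address/port only
    .install()
    .expect("failed to install recorder/exporter");
```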
That's weird too (that using the …). For the record, the … To go back on topic, can you instead try to change your prometheus exporter initialization:

```diff
  use metrics_exporter_prometheus::PrometheusBuilder;

  let prom = PrometheusBuilder::new();
  prom
+     .set_buckets(&[0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0])
+     .expect("Failed to set histogram buckets")
      .install()
      .expect("failed to install recorder/exporter");
```

and see if it solves at least the "function_calls_durations as …" part?
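Applying that diff, the full initialization would look roughly like this (same values, just without the diff markers):

```rust
use metrics_exporter_prometheus::PrometheusBuilder;

// Recorder/exporter with explicit histogram buckets (in seconds).
PrometheusBuilder::new()
    .set_buckets(&[
        0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
    ])
    .expect("Failed to set histogram buckets")
    .install()
    .expect("failed to install recorder/exporter");
```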
I have adapted your changes and the metric endpoint now gives: …
Yeah, that's better; at least the duration-related metrics should work. I still have 2 questions: …
With the code from #66 (comment) applied.
I see that the output of the scraping endpoint looks more like what I'm expecting, so this part is solved. I'm trying to understand what happened to the dashboard, but I think I'll have to test this and reproduce it to understand the issue.
Also, the sampling interval used to show the graph is dynamic and changes with the timespan shown. Do you still have the issue if you change the "Last 6 hours" in the top-right corner to something significantly shorter, like "Last 5 minutes"? I hope that it will give enough points (depending on your …).
Oh, actually I have now found some lines in the Latency (95th and 99th Percentile) panel, but still no data in the other ones. The other ones even show an error, but only when going back to 7 days.
The issue might be specifically triggered by your setup then; it will be hard for me to reproduce it. Can you try to: …
and see if that works? I don't understand how you can have …
Yes, this made it work! The legend says …
Thanks for bearing with me! We always assumed that …
It's me who needs to be thankful! Then I'll just adapt my dashboard and my teammates will be ready to enjoy the new dashboard :) Awesome!
Oh right, because your function calls probably always last less than the smallest bucket, which is 0.005 s (5 ms) here. So if you want to have more granularity there, you should change the buckets in your exporter config:

```diff
  use metrics_exporter_prometheus::PrometheusBuilder;

  let prom = PrometheusBuilder::new();
  prom
-     .set_buckets(&[0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0])
+     .set_buckets(&[0.0005, 0.001, 0.0025, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5])
      .expect("Failed to set histogram buckets")
      .install()
      .expect("failed to install recorder/exporter");
```

There's no hard requirement on the values in the buckets, just: …
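As a side note (not discussed above, just an option the builder exposes): if you would rather not change the buckets for every histogram in the process, metrics_exporter_prometheus can also scope buckets to specific metrics. A sketch, assuming the relevant histogram is named function_calls_duration (check your /metrics output for the exact name):

```rust
use metrics_exporter_prometheus::{Matcher, PrometheusBuilder};

// Sketch: finer buckets only for metrics whose name starts with
// "function_calls_duration"; other histograms keep the exporter's
// default behaviour.
PrometheusBuilder::new()
    .set_buckets_for_metric(
        Matcher::Prefix("function_calls_duration".to_string()),
        &[0.0005, 0.001, 0.0025, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5],
    )
    .expect("Failed to set histogram buckets")
    .install()
    .expect("failed to install recorder/exporter");
```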
Thanks for making this clear to me. In production I will have longer runtimes, so this should be fine :).
I stumbled over another issue. Metrics data is only shown when selecting "Last 7 days" in Grafana, even though there are data points for the last days/hours. For another fn call I could not get it to work at all.
What is the scrape interval of your Prometheus? I'm running into a similar issue just now (haven't had time to look into that before, sorry), and it might be the solution. It also looks like a commit tried to solve that kind of issue recently, but I don't know whether you tried running that version of the dashboard or whether it was indeed the problem.
My scrape interval is 1m :). Using https://github.com/autometrics-dev/autometrics-shared/blob/62f4dfd0a52e23874b3925039c17c879ca7aea49/dashboards/Autometrics%20Function%20Explorer.json directly makes my dashboard show no data at all.
That's too bad, especially if you get no data at all; there's something we didn't think of when making the queries, I'd guess. For the record, I just learnt that using … (source of the screenshot with the explanation: …). Other than that, I wasn't able to reproduce the issue locally, so I'll go watch on our staging to see if the issue shows up there, so I can debug it hands-on.
EDIT: the only way I was able to reproduce it was to have too short a timespan for the ranges in the queries. So I think that configuring the scrape timeout in your Grafana "Prometheus data source" is going to fix most of the issues. Gory details of my current root cause hypothesis below: …
All this to say: if your Prometheus data source already has …
I stand corrected: the issue with no data being shown seems to be fixed with the updated query mentioned earlier in this thread. Still, for one function I have a data point with count zero. Not sure if this is because I have forgotten to set the Prometheus buckets. The actual function call duration is around 12 s, so I thought the default settings would be fine.

metrics endpoint output: …

Prometheus exporter setup:

```rust
PrometheusBuilder::new()
    .install()
    .expect("failed to install recorder/exporter");
```
When going with the following, no data is shown, but the function appears in my dropdown.

```rust
PrometheusBuilder::new()
    .set_buckets(&[
        0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
    ])
    .expect("Failed to set histogram buckets")
    .install()
    .expect("failed to install recorder/exporter");
```
No, don't worry about this. To help with metrics discovery, the library initializes the counters of autometricized functions at startup in Debug builds. That allows the dashboards to show data even if nothing has happened yet.
Did you check the settings for the Prometheus source in Grafana? I still think that changing the scrape interval to …
Our Prometheus uses a scrape interval of 60s.
First of all, thanks for making life easier :) I just gave it a first try and annotated a fn in my Rust app with autometrics. But in the dashboard no data is shown. Also, all quantile metrics are set to 0 even though the fn got some calls.

Version:

```toml
autometrics = { version = "0.5.0", features = ["metrics"] }
```
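For reference, the annotation in question looks roughly like this (a minimal sketch; the function name and body are made up for illustration):

```rust
use autometrics::autometrics;

// Minimal sketch: the attribute macro records call counts and durations
// for this function automatically.
#[autometrics]
pub fn handle_request(input: &str) -> Result<String, String> {
    Ok(input.to_uppercase())
}
```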