fix(charts): prevent displaying stats before requests are made #1853
Conversation
Stat history would be charted by the client as a stream of consciousness from the runners until they are stopped. The client will no longer show unquantified stats (represented by the number of requests). Issue: locustio#1852
Thanks! This is, on the whole, a great improvement! (I don't really use the UI because I have an external reporting solution, and thus I haven't really tried to improve it.) Two things:
One possible way to indicate the time diff between test runs would be to listen for the test_stopped or test_started event and plot those somehow. If you do go down that route, it would be nice to plot spawning_complete as well :)
echarts does have a

```json
{
    "current_response_time_percentile_50": 3,
    "current_response_time_percentile_95": 12,
    "errors": [],
    "fail_ratio": 1.0,
    "state": "running",
    "stats": [
        {
            "avg_content_length": 0.0,
            "avg_response_time": 9.050847440507383,
            "current_fail_per_sec": 8.5,
            "current_rps": 8.5,
            "max_response_time": 44.0,
            "median_response_time": 6,
            "method": null,
            "min_response_time": 1.0,
            "name": "Aggregated",
            "ninetieth_response_time": 17,
            "num_failures": 186,
            "num_requests": 186,
            "safe_name": "Aggregated"
        }
    ],
    "total_rps": 8.5,
    "user_count": 10
}
```

One thought was to mark test runs based on the action of the user clicking "New test". This would be based entirely on the client side, but that might be fine because starting a new test run clears the old run's stats on the server anyway.
I think some fake plots could be inserted to keep the lines disconnected (dependent on being able to distinguish between runs above)
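A minimal sketch of that "fake plots" idea (data shapes assumed here, not Locust's actual internals): insert a dummy None sample at each run boundary so a line chart draws a visible break instead of connecting the runs into one continuous line.

```python
# Hedged sketch: distinguishing runs is assumed to be solved (each sample
# carries a run_id); the placeholder insertion is the point of interest.
def with_run_gaps(history):
    """history: list of (timestamp, value, run_id) tuples, sorted by time.

    Returns (timestamp, value) pairs with a None placeholder inserted
    between runs; most charting libraries (echarts included) break the
    line at a null data point.
    """
    out = []
    prev_run = None
    for ts, value, run_id in history:
        if prev_run is not None and run_id != prev_run:
            out.append((ts, None))  # dummy point keeps the runs disconnected
        out.append((ts, value))
        prev_run = run_id
    return out
```

Called on two runs' worth of samples, `with_run_gaps([(1, 5, "a"), (2, 6, "a"), (10, 4, "b")])` yields `[(1, 5), (2, 6), (10, None), (10, 4)]`.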
This might be related to getting very fast response times (i.e. from localhost). I'll investigate further.
Probably not (although I’m not sure). But the events should work.
Yes, that would also be acceptable imo.
👍
I don't think it is a real data point. Maybe it is added just to get the same timespan as the RPS graph.
The test case test_get_current_response_time_percentile_outside_cache_window verifies incorrect behaviour: returning None instead of 0 when the time is outside the window of cached times. Issue: locustio#1852
For very fast response times the percentile calculation would not find the time in the initial cached response times window and would return None instead of 0 (the default return was missing). I'm still tinkering with the separation of test runs on the client UI.
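A rough sketch of the fix described above (the cache layout here is an assumption for illustration, not Locust's actual cache structure): when the requested timestamp is not covered by the cached window, fall through to an explicit 0 rather than an implicit None.

```python
def response_time_percentile(cache, timestamp, percentile):
    """cache: {unix_ts: {response_time_ms: request_count}} (assumed shape);
    percentile: a fraction such as 0.95."""
    window = cache.get(timestamp)
    if window:
        total = sum(window.values())
        threshold = total * percentile
        seen = 0
        for rt in sorted(window):
            seen += window[rt]
            if seen >= threshold:
                return rt
    # The previously missing default return: outside the cached window we
    # report 0, not None, so the chart always receives a quantified value.
    return 0
```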
Hmm... I'm not sure the last commits were an improvement, because the gap in response times between test runs that was there before at least gave you the chance to infer that something strange happened at that time (test stop and then restart), but now it looks like one big continuous test run :) Also, response times now look like they start at zero, which is not correct. And maybe there is something wrong with the way RPS is grouped, because response times show up earlier than RPS (this is probably an old issue though, so feel free to ignore it).
Creates dummy placeholders in the stats history to space between the marker and the line data. Issue: locustio#1852
…window

This prevents the data from being displayed on the UI chart during the response time calculation's indeterminate state. Issue: locustio#1852
The issue was caused by initial runner stats of 0 being populated; my assumption was that response times outside the window of cached response times should match this rather than vanish (e.g. 0 => null => 1), because that is how the disjointed graph would end up being drawn. I also added a line marker to distinguish between test runs on the client.
There might be differing views on the correct representation of the {ready -> spawning -> running} states, that is, when there are a number of runners executing but they have not yet made requests, or their requests have not yet completed:
This PR implements option 2, but I believe it addresses the issue raised in #1702. Possibly fixes #1702
Looks amazing now! One tiny thing: Shouldn't it say
It depends on whether you consider the lines as end-of-run or start-of-run markers 😄 I wanted to avoid cluttering single-run graphs, so I excluded the Run# marker before the first test run. This meant the first line marker would start with
Here are some of the other ideas I considered:
I'm happy to implement either. In the future we might be able to label a run ("Warm up", "Scale-down test", etc.), so either a leading or trailing marker would be useful.
The line is shown at the time when the second test was started, so it should be 2, 3, ... imho. The user should be able to infer that it refers to the start of a run, and not the end of one, so a longer text is not necessary. The missing
I don't know if there is the option of showing the
Omitting the label entirely (your option 2) is acceptable, but I prefer doing it as above.
Mark start of first run on the chart once we start the second run. Hook stop/start markers of test runs into update loop of stats. Issue: locustio#1852
Agreed. Hooked run markers into the stats update loop to make the x-axis times manageable. |
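Locust does expose `test_start` and `test_stop` hooks on `locust.events` (listeners receive `environment` plus keyword arguments). A hedged sketch of the bookkeeping side, with the registration shown only as a comment since it needs a running Locust environment; the marker list and function name are hypothetical, not the PR's actual code:

```python
import time

run_markers = []  # wall-clock timestamps where a test run began


def on_test_start(environment, **kwargs):
    # Record the run boundary; a stats update loop could later emit this
    # as a chart marker aligned with the x-axis sample times.
    run_markers.append(time.time())


# In a locustfile this would be registered with:
#   from locust import events
#   events.test_start.add_listener(on_test_start)
```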
Great!
State checking logic on the client has also been flattened to simplify the flow.
Considerations:
`total` stats are used to determine the number of requests; this is to allow a transition from the `ready` state to the `running` state to be displayed.
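That consideration can be sketched as a small client-side guard (field names taken from the `/stats` payload pasted earlier in the thread; the function name is hypothetical): a snapshot is charted only once the aggregated row reports completed requests, so the `ready -> running` transition is still displayed without plotting unquantified zeros.

```python
def should_chart(snapshot):
    """snapshot: one decoded stats JSON payload (shape as in the thread)."""
    for row in snapshot.get("stats", []):
        # the "Aggregated" row plays the role of the total stats entry
        if row.get("name") == "Aggregated":
            return row.get("num_requests", 0) > 0
    return False
```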