Support Step Load Pattern #1002

Merged: 10 commits merged into locustio:master from stepload on Dec 9, 2019

Conversation

@delulu (Contributor) commented on Apr 19, 2019

This PR adds a new feature related to issue #1001.

The main idea is to let you run a load test in a step load pattern: you specify a Step Clients count and a Step Duration so that you can monitor your server's performance under different user loads.

For example, if you configure the swarm with a Total Clients of 50, a Hatch Rate of 1, a Step Clients of 5, and a Step Duration of 5m, clicking the button triggers a load test that increases the user load by 5 every 5 minutes.

After 1 hour of running, you will get metrics charts like the ones below:

[Screenshots: metrics charts from the 1-hour step load run]
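For a concrete sense of the schedule this plan produces, here is a minimal sketch (not code from this PR; the function and parameter names are just illustrative) that prints the target user count over time for the example above:

```python
# Hypothetical illustration of the step load plan described above
# (total clients 50, step clients 5, step duration 5 minutes).
# This is not code from the PR, just a sketch of the resulting schedule.

def step_load_schedule(total_clients, step_clients, step_duration_s):
    """Yield (elapsed_seconds, target_user_count) pairs for a step load plan."""
    target = 0
    elapsed = 0
    while target < total_clients:
        target = min(target + step_clients, total_clients)
        yield elapsed, target
        elapsed += step_duration_s

for t, users in step_load_schedule(50, 5, 5 * 60):
    print(f"{t // 60:3d} min -> {users} users")
# 0 min -> 5 users, 5 min -> 10 users, ..., 45 min -> 50 users
```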

@myzhan (Contributor) commented on Jun 17, 2019

I think hatch rate is already a step, with 1s as the interval.

@reedstrm commented:

How much of the desired feature could be had by allowing control of the length of the hatch step? Instead of "#/sec", label it "# spawned per step" and allow setting the hatch-step length. In the example case, a hatch rate of 5 with a hatch interval of 5m should yield most of the same behavior.

@reedstrm commented:

Seems this code does not work with a single local Locust instance:
[2019-06-17 16:15:59,368] sensei/ERROR/stderr: AttributeError: 'LocalLocustRunner' object has no attribute 'start_stepload'

@delulu (Contributor, Author) commented on Jun 24, 2019

> Seems this code does not work with a single local Locust instance:
> [2019-06-17 16:15:59,368] sensei/ERROR/stderr: AttributeError: 'LocalLocustRunner' object has no attribute 'start_stepload'

@reedstrm thanks for trying it out. Yes, currently start_stepload is only supported in distributed mode.

@delulu (Contributor, Author) commented on Jun 24, 2019

> How much of the desired feature could be had by allowing control of the length of the hatch step? Instead of "#/sec", label it "# spawned per step" and allow setting the hatch-step length. In the example case, a hatch rate of 5 with a hatch interval of 5m should yield most of the same behavior.

I want to make sure this change is backward compatible, so I didn't modify the old interface; instead I added two parameters for step load support.

@delulu (Contributor, Author) commented on Jun 24, 2019

> I think hatch rate is already a step, with 1s as the interval.

Actually, hatch rate is more of a ramp-up rate and the users line chart is roughly linear, while step load increases the user load step by step, so the users line chart follows a staircase curve.

For more details, please see proposal #1001.
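To make the distinction concrete, here is a small illustrative sketch (not Locust code; the function names are made up) comparing the target user count over time under a plain hatch-rate ramp and under the step load pattern:

```python
# Illustrative comparison (not Locust code): target user count at time t
# under a plain hatch-rate ramp vs. the step load pattern.

def ramp_users(t, total=50, hatch_rate=1):
    # Linear ramp: hatch_rate users are added every second until the total is reached.
    return min(total, int(t * hatch_rate))

def step_users(t, total=50, step_clients=5, step_duration=300):
    # Staircase: the target jumps by step_clients at the start of each step.
    return min(total, (int(t // step_duration) + 1) * step_clients)

for minute in (0, 1, 6, 11, 46):
    t = minute * 60
    print(f"t={minute:2d}m  ramp={ramp_users(t):3d}  step={step_users(t):3d}")
# The ramp reaches 50 users after 50 seconds; the step pattern only after 45 minutes.
```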

@max-rocket-internet (Contributor) commented:

I think what's proposed in this PR with the extra options is a little hard to understand?

For example, the graph shown doesn't look any different from something you can produce with the current implementation? Maybe at least update the docs to make it clear.

@delulu (Contributor, Author) commented on Oct 24, 2019

> I think what's proposed in this PR with the extra options is a little hard to understand?
>
> For example, the graph shown doesn't look any different from something you can produce with the current implementation? Maybe at least update the docs to make it clear.

Actually, it's a new way to run a load test step by step; for more details, please see proposal #1001.

@cyberw (Collaborator) commented on Oct 30, 2019

Hi @delulu!

I don't see the need for three command line flags (you could just remove --step-load).

Could you make another test case that validates the step pattern (the number of clients at a certain point in time, for example)?

@heyman What do you think of this change? I think it is a nice feature, but I'm unsure about the implementation, and I think we might need to rethink the web GUI as we add more parameters.

@cyberw (Collaborator) commented on Oct 30, 2019

Does LocustRunner.stop() really still work after this change? It specifically looks at hatching_greenlet on line 196. I think the correct way of doing this is to use only the hatching_greenlet variable rather than adding a new one.

@heyman (Member) commented on Oct 30, 2019

I added a comment in the related issue #1001: #1001 (comment)

(I would really like GitHub to add a feature for merging issues/PRs.)

@delulu (Contributor, Author) left a review comment:

> Does LocustRunner.stop() really still work after this change? It specifically looks at hatching_greenlet on line 196. I think the correct way of doing this is to use only the hatching_greenlet variable rather than adding a new one.

For this change, I prefer not to modify the current hatching logic; instead I added a new greenlet, "stepload_greenlet", which leverages "hatching_greenlet" to do the work.

As for your concern about the stop behavior: "stepload_greenlet" always checks the runner's state when triggering the next step load (on line 355), so stopping from the web UI will still work.
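A rough illustration of that control flow is sketched below. This is a hedged sketch only: the runner attributes, the state strings, and the start_hatching call are assumptions standing in for the PR's actual code.

```python
# Hedged sketch of a step load loop run in its own greenlet. It reuses the
# existing hatching mechanism for each step and re-checks the runner's state
# before every step, so a stop request still takes effect between steps.
# Names (runner.state, runner.start_hatching, the state strings) are illustrative.
import gevent

def stepload_worker(runner, locust_count, hatch_rate, step_clients, step_duration):
    current_num_clients = 0
    while True:
        if runner.state in ("stopped", "stopping"):       # illustrative state check
            return                                         # abort the step load plan
        current_num_clients = min(current_num_clients + step_clients, locust_count)
        # Delegate the actual ramp-up of this step to the normal hatching logic.
        runner.start_hatching(current_num_clients, hatch_rate)
        if current_num_clients >= locust_count:
            return                                         # final step reached
        gevent.sleep(step_duration)                        # hold this load for one step
```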

@delulu (Contributor, Author) commented on Nov 4, 2019

@cyberw I've updated the test case for step load validation, please take a look.
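For readers curious what such a validation could look like, here is a hypothetical, self-contained sketch (not the PR's actual test) that asserts the expected client count at a few points in time under the example plan:

```python
# Hypothetical illustration (not the PR's actual test) of validating the step
# pattern: assert the expected client count at given points in time.
import unittest

def expected_clients(elapsed_s, total=50, step_clients=5, step_duration_s=300):
    """Target client count at a given elapsed time under the step load plan."""
    steps_started = int(elapsed_s // step_duration_s) + 1
    return min(total, steps_started * step_clients)

class TestStepLoadPattern(unittest.TestCase):
    def test_client_count_at_points_in_time(self):
        # Just after start: one step's worth of clients.
        self.assertEqual(expected_clients(10), 5)
        # During the third step (between 10 and 15 minutes): 15 clients.
        self.assertEqual(expected_clients(12 * 60), 15)
        # After 45 minutes the plan has reached the total of 50 clients.
        self.assertEqual(expected_clients(50 * 60), 50)

if __name__ == "__main__":
    unittest.main()
```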

@delulu (Contributor, Author) commented on Nov 4, 2019

Merged with the latest master branch and resolved conflicts.

@delulu force-pushed the stepload branch 2 times, most recently from 66da62f to 8cecfee on November 4, 2019 at 10:22.
@codecov (bot) commented on Nov 4, 2019

Codecov Report

Merging #1002 into master will decrease coverage by 0.66%.
The diff coverage is 41.89%.

@@            Coverage Diff             @@
##           master    #1002      +/-   ##
==========================================
- Coverage   79.13%   78.46%   -0.67%     
==========================================
  Files          20       20              
  Lines        1912     1960      +48     
  Branches      299      307       +8     
==========================================
+ Hits         1513     1538      +25     
- Misses        323      342      +19     
- Partials       76       80       +4
| Impacted Files | Coverage Δ |
|---|---|
| locust/web.py | 89.05% <100%> (+0.67%) ⬆️ |
| locust/runners.py | 78.33% <66.66%> (-1.94%) ⬇️ |
| locust/main.py | 34.42% <8.33%> (-0.64%) ⬇️ |
| locust/contrib/fasthttp.py | 88.57% <0%> (-1.15%) ⬇️ |
| locust/core.py | 87.71% <0%> (+0.84%) ⬆️ |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3ea6822...14db39d.

@cyberw (Collaborator) commented on Nov 28, 2019

Personally, I still prefer always having the UI option present (but hidden, maybe under an "advanced" selection together with the host name) instead of explicitly passing --step-load to enable it in the GUI.

But I'm not a GUI person anyway, so I'm OK with merging if you resolve the conflicts.

@heyman, your opinion on this? I agree that we may want a more flexible format in the future, but this solves an important problem in a quite simple way that will be useful to a lot of people.

@delulu (Contributor, Author) commented on Dec 6, 2019

@heyman I've replied to your comment; please take a look at #1001 (comment).

@delulu (Contributor, Author) commented on Dec 6, 2019

@cyberw I'll resolve the conflicts, but as for the codecov/patch check failure, I don't know why it failed; could you show me a way to fix it?

@cyberw (Collaborator) commented on Dec 6, 2019

> @cyberw I'll resolve the conflicts, but as for the codecov/patch check failure, I don't know why it failed; could you show me a way to fix it?

Ignore it :)

@cyberw (Collaborator) commented on Dec 7, 2019

LGTM. I will merge soon if nobody objects (@heyman?). I would like to improve the UI, but we can do that later.

@cyberw merged commit 73382f4 into locustio:master on Dec 9, 2019
@cyberw (Collaborator) commented on Dec 9, 2019

Thanks for your contribution!

@cyberw (Collaborator) commented on Jan 9, 2020

@delulu I just realized this is not documented (outside the help function). Would you consider adding some documentation?

@delulu (Contributor, Author) commented on Jan 14, 2020

> @delulu I just realized this is not documented (outside the help function). Would you consider adding some documentation?

Sure, will add it :-)

@delulu (Contributor, Author) commented on Jan 19, 2020

Here is PR #1235 with the documentation.

Labels: none · Projects: none · 6 participants