
Introduce total iterations-per-second limiting timers #1241

Closed
wants to merge 10 commits into master from introduce-total-ips-limiting-timers

Conversation

cyberw
Collaborator

@cyberw cyberw commented Jan 23, 2020

No description provided.

@codecov

codecov bot commented Jan 23, 2020

Codecov Report

Merging #1241 into master will increase coverage by 0.04%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master    #1241      +/-   ##
==========================================
+ Coverage   78.49%   78.54%   +0.04%     
==========================================
  Files          20       20              
  Lines        2037     2037              
  Branches      321      321              
==========================================
+ Hits         1599     1600       +1     
  Misses        361      361              
+ Partials       77       76       -1
| Impacted Files | Coverage Δ |
| --- | --- |
| locust/core.py | 92.59% <0%> (+0.41%) ⬆️ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4a2da70...2486161.

@cyberw cyberw force-pushed the introduce-total-ips-limiting-timers branch from 7cfcbd8 to 0894a0f on January 23, 2020 14:06
@heyman
Member

heyman commented Jan 23, 2020

I'm currently not convinced it would be a good idea to incorporate this in Locust itself.

My main objection is that it doesn't fit with the mental model of Locust where one declares the behaviour of Users, and then swarm the system with X number of users. This change would make the simulated users' wait time depend on the execution time of other users. I'm also not a big fan of changes that work differently when running Locust distributed.

I can see that there could be use-cases for this feature, but I'm not convinced they're common enough. And since this should function well living in a separate package, I'm currently -1 on including this feature in Locust itself. Would like to hear others' thoughts though.

@cyberw
Collaborator Author

cyberw commented Jan 23, 2020

My main objection is that it doesn't fit with the mental model of Locust where one declares the behaviour of Users, and then swarm the system with X number of users. This change would make the simulated users' wait time depend on the execution time of other users.

I kind of see what you mean, but I don't think this is a major break from that model.

I'm also not a big fan of changes that work differently when running Locust distributed.

I think I can fix this somehow (achieving initial IPS distribution between slaves, but not IPS checking, as that would be way harder)

I can see that there could be use-cases for this feature, but I'm not convinced they're common enough. And since this should function well living in a separate package, I'm currently -1 on including this feature in Locust itself. Would like to hear others' thoughts though.

I would like to mention that every major load testing tool I have ever tried implements this as a core feature (LoadRunner, NeoLoad, Artillery, K6, JMeter, Tsung, Yandex.Tank, etc)

@cyberw cyberw force-pushed the introduce-total-ips-limiting-timers branch from 5ce3fc7 to 4a2da70 on January 24, 2020 14:12
@cyberw
Collaborator Author

cyberw commented Jan 24, 2020

The merge commit also moves MasterLocustRunner.target_user_count to be available in the base LocustRunner class.

Member

@heyman heyman left a comment


I'm still skeptical about having this as a part of Locust itself, mainly because of the objections raised in my previous comment: it doesn't fit the mental model of Locust, it'll work differently distributed/non-distributed, and it'll introduce an "awareness"/dependency between simulated locust users.

I'd much rather have it live as a separate package, since I really can't see any drawback from that. We could point to the package from the relevant parts of the documentation.

@cyberw
Collaborator Author

cyberw commented Feb 5, 2020

I can move it to contrib if that makes more sense? It still relies on a few changes in core (particularly the check to see if the target IPS has not been reached), so implementing it completely separate from locust would be hard. I can try to fix it so that distribution works predictably, but not the dependency/awareness part. Imho that is not a big issue though, and the simplicity of doing this kind of thing (communication between users/greenlets) is really one of the things that makes locust/gevent so good.
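
To illustrate the kind of greenlet communication this refers to: a shared limiter that every user in one Locust process consults before each iteration could look roughly like the sketch below. This is a hypothetical example, not the implementation in this PR; `GlobalRateLimiter`, `LIMITER`, and the 100 IPS figure are invented for illustration.

```python
import time

import gevent
from gevent.lock import Semaphore


class GlobalRateLimiter:
    """Shared pacing state consulted by all simulated users in one
    Locust process (hypothetical sketch, not this PR's API)."""

    def __init__(self, ips):
        self._interval = 1.0 / ips   # seconds between iterations, globally
        self._lock = Semaphore()     # gevent-aware lock
        self._next_slot = time.time()

    def acquire(self):
        # Reserve the next free time slot under the lock, then yield
        # to other greenlets until that slot arrives.
        with self._lock:
            slot = max(self._next_slot, time.time())
            self._next_slot = slot + self._interval
        delay = slot - time.time()
        if delay > 0:
            gevent.sleep(delay)


LIMITER = GlobalRateLimiter(ips=100)  # cap: 100 iterations/s in this process

# In a task, before the request:
#     LIMITER.acquire()
#     self.client.get("/")
```

As the distributed discussion above notes, this only limits a single process; each slave would need its share of the total target.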

@heyman
Copy link
Member

heyman commented Feb 5, 2020

It still relies on a few changes in core (particularly the check to see if the target IPS has not been reached)

Do you mean the code for emitting a warning message? Couldn't that be done in an event listener?

I don't mind the introduction of target_user_count, if it enables this feature to be implemented (and possibly other ones as well) in users' own code, or separate packages.

Imho that is not a big issue though, and the simplicity of doing this kind of thing (communication between users/greenlets) is really one of the things that makes locust/gevent so good.

In a distributed environment, communication/awareness between simulated users isn't possible. But I do agree that the simplicity/hackability of Locust is one of the main USPs, and I think the fact that one can implement this kind of feature in just a few lines of code is a testament to that. However, I also want to keep it that way, which is why I think it's important to say no to features a lot. Especially when the use-cases for them are rare (I don't believe I have seen any feature request for this before), they can be implemented in separate packages, and they don't align well with the mental model of simulated users.

I apologize for sounding so negative, but I'm sure that Locust would be in a much worse state today if we hadn't been pushing back on a lot of feature requests. This doesn't necessarily mean I'm right in this case though, and I'd still be interested to hear others' opinions.

@cyberw
Collaborator Author

cyberw commented Feb 5, 2020

It still relies on a few changes in core (particularly the check to see if the target IPS has not been reached)

Do you mean the code for emitting a warning message? Couldn't that be done in an event listener?

I'm mainly thinking about the addition to core (that checks if the IPS target was missed and fails the run if it did)
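
For context, the event-listener alternative heyman suggests could look roughly like this with the event API of later Locust releases (`TARGET_IPS` and the threshold check are illustrative, not code from this PR):

```python
from locust import events

TARGET_IPS = 100  # illustrative target, not from this PR


@events.quitting.add_listener
def check_ips_target(environment, **kwargs):
    # Fail the run with a non-zero exit code if the achieved
    # throughput stayed below the target.
    if environment.stats.total.total_rps < TARGET_IPS:
        environment.process_exit_code = 1
```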

I don't mind the introduction of target_user_count, if it enables this feature to be implemented (and possibly other ones as well) in users' own code, or separate packages.

This PR doesn't introduce target_user_count, it just moves it to the base runner class. But maybe that is what you meant?

Imho that is not a big issue though, and the simplicity of doing this kind of thing (communication between users/greenlets) is really one of the things that makes locust/gevent so good.

In a distributed environment, communication/awareness between simulated users isn't possible. But I do agree that the simplicity/hackability of Locust is one of the main USPs, and I think the fact that one can implement this kind of feature in just a few lines of code is a testament to that. However, I also want to keep it that way, which is why I think it's important to say no to features a lot. Especially when the use-cases for them are rare (I don't believe I have seen any feature request for this before), they can be implemented in separate packages, and they don't align well with the mental model of simulated users.

I apologize for sounding so negative, but I'm sure that Locust would be in a much worse state today if we hadn't been pushing back on a lot of feature requests. This doesn't necessarily mean I'm right in this case though, and I'd still be interested to hear others' opinions.

Ok, we'll wait for someone else to weigh in then. The original request for this was #646 (although you could argue that it is partially solved by introducing per-locust pacing). I think it was the most upvoted ticket for locust ever, and I bet some of those 33 upvotes were hoping for a global RPS limit :)

@heyman
Member

heyman commented Feb 5, 2020

The original request for this was #646 (although you could argue that it is partially solved by introducing per-locust pacing).

Yes, IIRC the example for that request was an API client that used something like setInterval to call some endpoint at a steady interval, which fits perfectly well with the mental model of simulated users, and - like you say - is solved by the pacing wait function.

I do think that #646 has a very misleading title, which is why I proposed that we change it.
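
For reference, the pacing wait function being referred to, in the 0.13-era API (the class names and URL here are illustrative):

```python
from locust import HttpLocust, TaskSet, constant_pacing, task


class ApiTasks(TaskSet):
    @task
    def poll_endpoint(self):
        # Hits the endpoint at a steady interval per user,
        # much like the setInterval example above.
        self.client.get("/status")


class ApiUser(HttpLocust):
    task_set = ApiTasks
    # Each user starts a task once per second, as long as the
    # task itself completes within that second.
    wait_time = constant_pacing(1)
```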

@efology
Contributor

efology commented Feb 26, 2020

Well, instead of doing a global limit, you could still do a per-user limit by introducing the concept of a "next tick" per user.
For example, if you are running 1 user at 1 RPS, the ideal tick period is 1 second. Assuming the response time was 0.2 seconds, you would sleep from time.time() to next_tick (1 second into the future from the previous tick). If the response time is larger than the distance to next_tick, just count the number of missed ticks as statistics.

That would give an adaptive request rate per user (and, by extension, globally), and it would also eliminate an effect I have seen with constant_pacing: if a response time is too long, the wait_time for the next request is 0, and you get a temporary spike in request rate, since a number of users may be experiencing the same issue and all send their next request immediately.

Also you wouldn't have to do global rate control, which is in general messy if you want to be able to scale freely with different slave/master approaches.
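
A minimal sketch of that next-tick idea as a custom wait_time function (the name `constant_tick` and the `_last_tick`/`_missed_ticks` attributes are invented for illustration; this is not part of Locust):

```python
import time


def constant_tick(tick_period):
    """Pace each user against a fixed tick grid: aim to fire once per
    tick_period, and when a response overruns, skip ahead to the next
    future tick and count the misses, instead of firing immediately
    (which is what causes the constant_pacing catch-up spike described
    above). Hypothetical sketch, not part of Locust."""

    def wait_time_func(self):
        now = time.time()
        last_fire = getattr(self, "_last_tick", None)
        if last_fire is None:
            # First call: anchor the grid one period from now.
            self._last_tick = now + tick_period
            return tick_period
        target = last_fire + tick_period
        missed = 0
        while target <= now:  # the response overran one or more ticks
            target += tick_period
            missed += 1
        self._missed_ticks = getattr(self, "_missed_ticks", 0) + missed
        self._last_tick = target
        return target - now  # sleep until the next grid point

    return wait_time_func


# Usage on a user class (0.13-era API):
#     wait_time = constant_tick(1)  # aim for 1 iteration/s per user
```

Because each user keeps its own grid, the rate stays adaptive per user without any global coordination, which is what makes it scale freely across slaves.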

@havenqi

havenqi commented Mar 23, 2020

To make Locust more powerful and easy to use, out-of-the-box functions are always good to have, even though plugins are easy to use too. From that point of view, I think we could prioritize some functions and put them into the core package; limiting RPS could be one of them. We could also add paragraphs to the docs explaining the best practices for using it.

@kurtmckee
Contributor

It looks like the introduce-total-ips-limiting-timers branch can be deleted.

@cyberw cyberw deleted the introduce-total-ips-limiting-timers branch March 23, 2022 17:17