Bug / Feature request: Time-intensive custom_messages functions trigger heartbeat timeout #2608
Comments
Hmm. Good issue. Maybe having a single "custom message handler greenlet" handle events is the best, but I'm unsure. With the new distributor pattern used by locust-plugins there can be a LOT of messages flying back and forth, so we probably don't want to start new greenlets every time. Or maybe your second approach is best. If you have time to try implementing it, I'm all ears.
Unfortunately I'm on vacation until mid-March, so I can't implement it until then. If this is still open when I return, I'll take a look at it and try to do a PR :)
What about using a gevent.Pool there? We could have a base setting of size one, but one could adjust this value if needed.
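A minimal sketch of what that could look like, assuming the dispatch happens inside the runner's message loop; the pool and its size setting are illustrative, not an existing Locust option:

```python
import gevent.pool

# Hypothetical pool for custom message handlers (not an existing Locust
# setting). Note that Pool.spawn blocks the caller while the pool is full,
# so with size=1 a slow handler still stalls the loop; users with slow
# handlers would raise the size.
custom_message_pool = gevent.pool.Pool(size=1)

# In the message loop, instead of calling the listener directly:
#   listener(environment=self.environment, msg=msg)
custom_message_pool.spawn(listener, environment=self.environment, msg=msg)
```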
I don't know what would be the best solution. It is kind of easy to work around by just starting a greenlet inside the event handler yourself (pooled or not), so I think there's no rush to fix it. With distributors the master is often handling >1000 custom messages/s, so I want to be sure we don't introduce any extra overhead for the "normal" case where the handler is fast.
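For reference, the workaround mentioned above might look roughly like this in a locustfile; the message type and handler names are made up for illustration:

```python
import gevent
from locust import events

def do_expensive_work(environment, msg):
    # The time-intensive part that would otherwise block the greenlet
    # reading from the master/worker connection.
    ...

def on_my_message(environment, msg, **kwargs):
    # Return immediately; the heavy lifting happens in its own greenlet,
    # so heartbeats keep flowing.
    gevent.spawn(do_expensive_work, environment, msg)

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    environment.runner.register_message("my_message", on_my_message)
```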
Description
Currently the functions registered for `custom_messages` block the greenlet responsible for reading from the connection between the master and its workers. When the function for a `custom_message` is time-intensive, it can happen that the `heartbeat_timeout_checker` is triggered, and therefore a worker is killed.

A solution for this issue would be to wrap the functions for `custom_messages` in a greenlet, to ensure that the heartbeat and the other message types Locust depends on can be received at all times.

MasterRunner line of code
WorkerRunner line of code
Instead of calling the listener function directly, I would propose spawning it in a greenlet to ensure that it can run concurrently, as sketched below.
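A minimal sketch of the proposed change, assuming the dispatch is a direct call into the `custom_messages` dict (the exact lines in 2.23.1 may differ slightly):

```python
import gevent

# Before (blocking): the greenlet that reads messages cannot process
# anything else, including heartbeats, until the listener returns.
# self.custom_messages[msg.type](environment=self.environment, msg=msg)

# After (concurrent): the listener runs in its own greenlet and the
# message loop immediately goes back to reading from the connection.
gevent.spawn(self.custom_messages[msg.type], environment=self.environment, msg=msg)
```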
Still, this solution would likely be breaking for some current users of Locust, since handlers would no longer be guaranteed to run one at a time and in order.
Another solution, which would probably take a bit more time to implement but would be a lot better, is to implement some kind of store for the `custom_messages` and their functions instead of using a plain dictionary. When adding a new listener, a keyword like `concurrent` or `non_blocking` could tell this store to run the function either as a greenlet or in a blocking way, making it fully backwards compatible for all users (a sketch of this idea follows below).

I'm looking forward to hearing from you! :)
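A hypothetical sketch of that opt-in approach; the `concurrent` keyword argument and the tuple-valued store are illustrative, not the actual Locust API at the time of writing:

```python
import gevent

# Registration keeps the listener together with an opt-in flag
# (illustrative signature, not the actual Locust API).
def register_message(self, msg_type, listener, concurrent=False):
    self.custom_messages[msg_type] = (listener, concurrent)

# Dispatch stays blocking by default, so existing users see no change;
# only listeners that explicitly opted in are spawned as greenlets.
listener, concurrent = self.custom_messages[msg.type]
if concurrent:
    gevent.spawn(listener, environment=self.environment, msg=msg)
else:
    listener(environment=self.environment, msg=msg)
```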
Command line
locust -f mylocustfile.py --headless
Locustfile contents
Python version
3.11
Locust version
2.23.1
Operating system
macOS