Ability to Stop Locust Client from within the test script #1192

Closed

uprichard opened this issue Dec 11, 2019 · 7 comments

@uprichard
We have tests in which each client needs to execute some POSTs, then perform X GETs, and then perform some more PUTs before stopping. So this is not the usual feature request for adding back the 'stop after a number of requests' option.

Currently, when running in standalone mode, I have a task with a loop that executes the GETs X times. It then performs the final set of POSTs and does a `raise StopLocust`, which correctly stops the client.

The issue is that this does not work in a master/slave setup: the `StopLocust` stops the client(s) correctly, but the slave process continues running until the RUN_TIME is reached.

So I am looking for a way to stop a given client that works both standalone and in master/slave mode. This approach could also be used by others who need to stop after X requests: they could keep a counter on each call and, when it hits their value, stop the client.
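
Roughly, the standalone version looks like this (a minimal sketch using the Locust 0.x API; the endpoints, the 100-iteration count, and the class names are only placeholders):

```python
from locust import HttpLocust, TaskSet, task
from locust.exception import StopLocust

class UserFlow(TaskSet):
    @task
    def full_flow(self):
        # initial POSTs
        self.client.post("/setup", json={"example": True})
        # perform the GETs X times (100 here as a placeholder)
        for _ in range(100):
            self.client.get("/items")
        # final requests before stopping this client
        self.client.put("/teardown", json={"done": True})
        # stops this simulated client only; in a master/slave setup the
        # slave process itself keeps running until RUN_TIME is reached
        raise StopLocust()

class WebsiteUser(HttpLocust):
    task_set = UserFlow
    min_wait = 1000
    max_wait = 2000
```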

Thanks for the great tool

@TanmayNakhate

TanmayNakhate commented Dec 24, 2019

I support this as well. We are also trying to implement the same behavior.
I understand that to handle this I would have to call MasterLocustRunner.quit(), but its object has to be constructed with certain parameters, which is difficult to get hold of right now. I am trying to find out whether I can solve it, but I have not been successful yet.
My Stack Overflow question is linked below, but very few people have viewed or answered it. :(

Stack Overflow question

@cyberw
Collaborator

cyberw commented Jan 4, 2020

Hmm... I'm not sure there is a way to do this atm. @heyman do you know a way to do this?

@TanmayNakhate

TanmayNakhate commented Jan 10, 2020

> Hmm... I'm not sure there is a way to do this atm. @heyman do you know a way to do this?

I attempted to print the init params of the Locust class, which takes two params: locust_classes and options.
If we could get hold of the same object that was used when the first Locust object was initialized, it might be possible. I attempted to create a Locust class object manually, but I was not successful.

If it helps, this is the output of the print statement in the Locust init:
locust classes: [<class 'Locust_ncso_multiuser_task.NcsoLoad'>] # NcsoLoad is my class, Locust_ncso_multiuser_task is the test file name

options: Namespace(csvfilebase=None, exit_code_on_error=1, expect_slaves=1, hatch_rate=1, heartbeat_interval=1, heartbeat_liveness=3, host=None, list_commands=False, locust_classes=[], locustfile='testsuite/host/Locust_ncso_ics_multiuser_task.py', logfile=None, loglevel='INFO', master=False, master_bind_host='*', master_bind_port=5557, master_host='127.0.0.1', master_port=5557, no_reset_stats=False, no_web=False, num_clients=1, only_summary=False, port=8089, print_stats=False, reset_stats=False, run_time=None, show_task_ratio=False, show_task_ratio_json=False, skip_log_setup=False, slave=True, stop_timeout=None, web_host='')

@TanmayNakhate

Being able to stop Locust runners in a better way would help as well.
Let's say I am using 10 clients and test_iterations=100, so I assume 1000 requests will be sent.
After the for loop runs 100 times (with the 9 other clients running the same loop 100 times in the background), I explicitly call runners.locust_runner.quit(). What happens is that it kills all the other runners as well, breaking the flow for all the other clients (roughly the pattern sketched below).
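
A minimal sketch of the pattern described above (Locust 0.x; it assumes the module-level runners.locust_runner global, and the endpoint and class names are placeholders):

```python
from locust import HttpLocust, TaskSet, task
from locust import runners

TEST_ITERATIONS = 100

class UserFlow(TaskSet):
    @task
    def run_iterations(self):
        for _ in range(TEST_ITERATIONS):
            self.client.get("/items")
        # quits the whole runner, which also kills the loops that
        # the other simulated clients are still executing
        runners.locust_runner.quit()

class WebsiteUser(HttpLocust):
    task_set = UserFlow
    min_wait = 1000
    max_wait = 2000
```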

@cyberw
Collaborator

cyberw commented Jun 6, 2020

Would it be possible/appropriate to stop locust if all Users have terminated in master-worker mode, @heyman?

@cyberw
Collaborator

cyberw commented Jun 24, 2020

I had forgotten about this ticket but actually ended up writing a PR for this :) #1448

@cyberw
Collaborator

cyberw commented Jun 25, 2020

Fixed by #1448

See https://docs.locust.io/en/latest/writing-a-locustfile.html#environment-attribute
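
With that change a task can stop the whole run through the environment attribute. A minimal sketch (Locust 1.x; the endpoint and the shared counter are only illustrative):

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 2)
    # simple counter shared by all users in this process, for illustration
    request_count = 0

    @task
    def get_item(self):
        self.client.get("/items")
        WebsiteUser.request_count += 1
        if WebsiteUser.request_count >= 100:
            # stops the whole test run, also in distributed mode
            # (see the environment attribute docs linked above)
            self.environment.runner.quit()
```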

cyberw closed this as completed Jun 25, 2020