
Re-introduce -n option to specify the number of requests #1085

Closed
gmhegde86 opened this issue Sep 18, 2019 · 41 comments

@gmhegde86

gmhegde86 commented Sep 18, 2019

Description of issue

Until locust v0.8, there was a -n option to specify the number of requests. However, this was replaced by the -t option in v0.9. We have a use case where we need to run a batch of requests a certain number of times and then take the median. To achieve this, we want to limit the number of requests to a fixed value (using the -n option). The time taken for each task varies, so we cannot depend on time (the -t option). Hence, I am requesting that the -n option be re-introduced.

Expected behavior

TBD

Actual behavior

TBD

Environment settings

  • OS:
  • Python version:
  • Locust version:

Steps to reproduce (for bug reports)

TBD - please provide example code

@cgoldberg
Member

I think that's a less common use case... I'm -1 on supporting both options.

@cyberw
Collaborator

cyberw commented Sep 26, 2019

I think it was quite useful (I was actually looking for it the other day, and had to go through the git history to find out that it had been removed). It is only a few lines of code...

@genev

genev commented Sep 26, 2019

I am also in favor of bringing this back. My use case is setting and comparing against a baseline, which in my case is easier to do with a fixed # of requests than with just runtime.

@cyberw cyberw added the hacktoberfest See https://hacktoberfest.digitalocean.com for more info label Oct 19, 2019
@cyberw cyberw added feature request and removed hacktoberfest See https://hacktoberfest.digitalocean.com for more info labels Oct 31, 2019
@cyberw
Collaborator

cyberw commented Oct 31, 2019

Hmm.. I think a max number of requests makes some sense, but I think max iterations is more useful. I was in the process of resurrecting this functionality, but now I am unsure...

@cyberw
Collaborator

cyberw commented Nov 5, 2019

@heyman If I reimplemented this to count the number of task iterations instead of number of requests, does that sound ok to you?

As per our discussion earlier, I think it would make sense to calculate and distribute the desired number of requests to each slave at the beginning of the test (so we can ensure the right number of iterations are run, at the expense of getting a "ramp down" at the end of the test).

I'm thinking the new parameter name could be -i/--iterations.
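
Purely as an illustration of the up-front split described above (not actual Locust code; the function name is made up), distributing an iteration budget across slaves could look like this:

```python
def split_iterations(total, num_slaves):
    """Divide a total iteration budget across slaves as evenly as possible;
    the first (total % num_slaves) slaves absorb the remainder."""
    base, remainder = divmod(total, num_slaves)
    return [base + (1 if i < remainder else 0) for i in range(num_slaves)]

# For example, split_iterations(10, 3) returns [4, 3, 3].
```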

@hennr

hennr commented Nov 26, 2019

Hello @cyberw,

I stumbled upon the -n option coming from a different use case: writing / testing new scripts.
It would really help to have an option which runs all tasks plus the on_start and on_stop methods once. --iterations 1 sounds like a good way to get this.
I currently do so by increasing the wait_time values and running locust with --no-web -c 1 -r 1 --run-time=1s, which may or may not succeed depending on the server's response time.

This is a different use case for sure but it may be taken into account as well.
Thanks in advance!

@manishubana

@heyman If I reimplemented this to count the number of task iterations instead of number of requests, does that sound ok to you?

As per our discussion earlier, I think it would make sense to calculate and distribute the desired number of requests to each slave at the beginning of the test (so we can ensure the right number of iterations are run, at the expense of getting a "ramp down" at the end of the test).

I'm thinking the new parameter name could be -i/--iterations.

Iteration count makes more sense. Please provide this feature.

@TanmayNakhate

We are using locust (partially) in our python framework.
We use @seq_task to handle business requests with run_time. The only drawback I see is that there is no way to send requests a specific number of times. This option will surely help.

@TanmayNakhate

TanmayNakhate commented Jan 8, 2020

We are using locust (partially) in our python framework.
We use @seq_task to handle business requests with run_time. The only drawback I see is that there is no way to send requests a specific number of times. This option will surely help.

This is how I handled the problem of sending a fixed number of requests:

```python
@seq_task(1)
def user_workflow(self):
    # test_iterations is a parameter I define at runtime
    for i in range(1, int(UserSingleton.test_iterations) + 1):
        self.create_ics()
        self.get_service()
    Singleton.logger().info("Test Number reached.")
    runners.locust_runner.quit()  # stops the locust runners
    Singleton.logger().info("Stopping Locust.")
    # My implementation of shutting down the master node. Unable to execute this,
    # because it is locust library code and needs a locust object where many
    # params have to be initialized (MasterLocust.quit())
    LocustCommon.induced_shutdown()
```

@rk4n3

rk4n3 commented Feb 19, 2020

I'd consider this a critical/must-have feature for any test framework (and incidentally it is present in every other testing framework I use or have seen).

The particular use-case I have for it at the moment is what is known as a "synthetic health check", which resembles a "ping" in that it's a single test iteration, but is done with a testing framework in order to leverage the sophisticated techniques available for generating a specific sequence of virtual-user actions to reach the required state in the application.

Bottom line: we need to execute exactly 1 iteration and have no way of knowing how long it will take, so we can't rely on time-to-execute, nor allow time-to-execute to interfere/disrupt.

While it seems there may be ways to "coerce" termination, it would certainly be preferable to have this feature easily accessible as an execution option.

@cyberw
Collaborator

cyberw commented Feb 19, 2020

I have created a branch that reintroduces -n, similar to how it was before. But the old feature was weird, counting requests instead of iterations. If someone were to create a PR that introduces the flag but counts iterations, I'd be happy to merge it.

@heyman
Member

heyman commented Feb 24, 2020

I'm still against reintroducing this feature (for either HTTP requests or task iterations). The reason is that it would be hard to implement in a good way when running Locust distributed (IIRC, in the previous implementation, this feature didn't work when running distributed).

  • Either we could make it so that the master sends out a stop message once it knows that more than N requests have been sent. This would result in more than exactly N requests being sent in total, which feels weird when someone specifies a max number of requests.
  • Or we could have the master direct the slaves to only perform N/num_slaves requests each. This would have to be done continuously to prevent some slaves from stopping earlier than others. It would be complicated implementation-wise, especially when you consider scenarios where new slave nodes connect during a test.

However, if we were to reintroduce the feature, I think alternative 1 would be the preferred solution. We would then have to make it very clear that the max requests limit isn't a hard limit and that the test could result in more than exactly N requests.

@cyberw
Collaborator

cyberw commented Feb 24, 2020

Maybe it is two different features?

Your "alternative 1" makes sense for tests where you just want constant load, but specify the stop time in terms of iterations instead of seconds. This fits well with the locust model, so I have no issues with someone adding it, although it must be clearly documented that it only guarantees a lower bound on the iteration count.

But "alternative 2" would also be very useful, it just needs to be documented that the load will drop off at the end and and new slaves cannot connect during the test (but fall back to just not giving them any work in case they do). I think what most people in this ticked have asked for is an actual hard limit on the number of iterations. and alternative 2 is the only one that provides that.

@heyman
Member

heyman commented Feb 24, 2020

it just needs to be documented that the load will drop off at the end and that new slaves cannot connect during the test

I don't think that's an acceptable level of quality for us to include it in Locust itself.

In that case I think it would be better to make it so that our 1.0 changes (#1266) make it fairly easy for users to implement this themselves (by providing a hook where they can retrieve the Environment instance and add a listener to the request_success/request_failure event that calls environment.runner.stop()).
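
A rough sketch of that listener approach (a minimal example assuming the locust 1.x event API; MAX_REQUESTS, the user class, and the target path are made up for illustration, failed requests would need a similar request_failure listener, and requests already in flight can push the final total slightly past the limit):

```python
from locust import HttpUser, task, between, events

MAX_REQUESTS = 100  # illustrative budget, not a hard limit
_request_count = 0
_environment = None

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    # Grab the Environment instance once Locust has started up
    global _environment
    _environment = environment

@events.request_success.add_listener
def on_request_success(**kwargs):
    # Count each successful request and stop the runner once the budget is spent
    global _request_count
    _request_count += 1
    if _request_count >= MAX_REQUESTS and _environment is not None:
        _environment.runner.stop()

class ExampleUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def index(self):
        self.client.get("/")
```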

@cyberw
Collaborator

cyberw commented Feb 24, 2020

I would argue that it is not a lower level of quality at all; it is just a different compromise (accurate on the number of iterations, but not accurate on having constant load at the very end of the test).

But I don't need this feature myself so I won't argue further :)

@heyman
Member

heyman commented Feb 24, 2020

I would argue that it is not a lower level of quality at all; it is just a different compromise (accurate on the number of iterations, but not accurate on having constant load at the very end of the test).

But I don't need this feature myself so I won't argue further :)

Ok :). For the record, I think the fact that new slaves can't connect during a test is the worse of the two mentioned issues. Currently, it's possible to deploy an autoscaling Locust cluster on Kubernetes. Accepting this compromise would change that.

@rk4n3

rk4n3 commented Feb 24, 2020

My $0.02 on this aspect of the conversation: surely it must be considered that the option is actually an "option" - i.e. if the end user decides to supply it, the caveats are accepted. Eliminating the option to prevent such caveats is what I suspect a lot of us desiring this feature would find "unfair".

Anecdotally, I can attest that I require the feature and have no concern about the caveats mentioned. In particular, we don't actually use Locust's distributed mode, but instead have our own execution implementation that provides that capability across multiple technologies (JMeter, Gatling, Locust, etc ...). It's also worth mentioning that the "overlap" of the feature with a scenario like adding slaves to a test is unlikely, due to the intrinsic nature of the feature itself (if I'm bottling a test up in a specific number of iterations, it doesn't make much sense to also want to add slaves mid-test).

So, for example: I would be perfectly elated to accept the feature with caveats like "not supported in distributed mode, nor with adding slaves mid-test".

@TaurusEight

How about a compromise: a feature allowing only a single execution before termination. This would eliminate the concerns about complexity in distributed execution while facilitating the use of Locust for synthetic health checks.

@heyman
Member

heyman commented Feb 25, 2020

A feature allowing only a single execution before termination. This would eliminate the concerns about complexity in distributed execution while facilitating the use of Locust for synthetic health checks.

This should be discussed in a separate issue I think, but my take on it is that Locust is made for load testing and not synthetic health checks; focusing on two different use-cases will result in software that isn't good at either use-case.

@cyberw
Collaborator

cyberw commented Feb 25, 2020

A feature allowing only a single execution before termination. This would eliminate the concerns about complexity in distributed execution while facilitating the use of Locust for synthetic health checks.

This should be discussed in a separate issue I think, but my take on it is that Locust is made for load testing and not synthetic health checks; focusing on two different use-cases will result in software that isn't good at either use-case.

If a lot of people want that feature, and it doesn't make Locust significantly more complicated, then I think it makes sense to include it.

I think it is great that we have a vision for what Locust should be, but if a significant proportion of our users want a feature, and it is not in direct conflict with something else, then I think it deserves to be included. Ignoring the community and saying "use something else then" is not a good idea (unless implementing the feature would require too many compromises or too much complexity, of course).

That being said, I think the feature should be "run X iterations", not just "run one iteration".

@heyman
Member

heyman commented Feb 25, 2020

In order to not go too much off topic, let's open a new issue if you want to continue discussing synthetic health checks :).

Maybe it would be a good idea to have a separate issue for "run X iterations" as well, since it's different from max number of requests? I'm not even 100% sure what the exact definition of an "iteration" is. I think you mean number of tasks executed?

@cyberw
Collaborator

cyberw commented Feb 25, 2020

In order to not go too much off topic, let's open a new issue if you want to continue discussing synthetic health checks :).

Maybe it would be a good idea to have a separate issue for "run X iterations" as well, since it's different from max number of requests? I'm not even 100% sure what the exact definition of an "iteration" is. I think you mean number of tasks executed?

Good question. I hadn't thought about exactly what makes the most sense. I think number of tasks executed is the best unit of execution (because I guess you can't really count "TaskSet iterations" in a meaningful way, right?)

I think we can keep it as the same issue though. I think limiting the number of task executions is a more meaningful feature than limiting the number of requests (terminating in the middle of task executions is not very useful in most cases).

@heyman
Member

heyman commented Feb 25, 2020

I think number of tasks executed is the best unit of execution (because I guess you can't really count "TaskSet iterations" in a meaningful way, right?)

I actually think that (if we were to re-introduce the feature) it would make more sense to put the limit on the number of requests. The reasons for this are:

  • The number of performed requests is a clear concept that already exists in Locust. Obviously tasks and TaskSets are also Locust concepts, but we don't currently count them or expose that number anywhere. Instead they are more of a black box that generates load and spits out results in the form of request statistics.
  • One recurring argument for the -n option is that most other load testing tools have it. While I don't think we should care too much about what features other (request-centric as opposed to user-behaviour-centric) tools have, a request limit would probably be easier to understand for people new to Locust and unfamiliar with the tasks concept.

terminating in the middle of task executions is not very useful in most cases

One could make it so that the --stop-timeout parameter is still respected.

@cyberw
Collaborator

cyberw commented Feb 25, 2020

I think number of tasks executed is the best unit of execution (because I guess you can't really count "TaskSet iterations" in a meaningful way, right?)

I actually think that (if we were to re-introduce the feature) it would make more sense to put the limit on the number of requests. The reasons for this are:

  • The number of performed requests is a clear concept that already exists in Locust. Obviously tasks and TaskSets are also Locust concepts, but we don't currently count them or expose that number anywhere. Instead they are more of a black box that generates load and spits out results in the form of request statistics.

I personally prefer counting iterations (because in a user-behaviour-centric tool, tasks are what you want to run, not individual requests). But I'll take what I can get :)

If we do reintroduce -n as it was before, we shouldn't implement it as naively as it was previously done (stopping the test after a certain number of request_success/request_failure events), because it will tend to overshoot if there are a number of requests "in transit".

  • One recurring argument for the -n option is that most other load testing tools have it. While I don't think we should care too much about what features other (request-centric as opposed to user-behaviour-centric) tools have, a request limit would probably be easier to understand for people new to Locust and unfamiliar with the tasks concept.

Tasks shouldn't be unfamiliar to users of other tools; in fact, a lot of the time I see questions about how to do things at the task level (e.g. https://stackoverflow.com/questions/58962517/how-to-interpret-locustios-output-simulate-short-user-visits/) rather than at the request level.

terminating in the middle of task executions is not very useful in most cases

One could make it so that the --stop-timeout parameter is still respected.

Sure, but then -n becomes a very "strange" parameter. "Do N requests + the ones that were already in transit + finish all the task runs that were in progress" is not at all as precise as "Do N task runs" :)

@rk4n3

rk4n3 commented Feb 25, 2020

I believe iterations are the most relevant/salient construct ... "requests", meaning round-trip invocations of an app-under-test endpoint, are something that is visible/managed in the test's own code and easily scoped under an iteration.

My vote would be that iterations at the task level are the desired construct for this feature, and if a test wants to micro-manage its own requests, that's its purview in its own task implementation.

I'd also note that it's likely some people use the terms "request" and "iteration" interchangeably, so we'll have to try to retain clarity around that.

@heyman
Member

heyman commented Feb 25, 2020

If we do reintroduce -n as it was before, we shouldn't implement it as naively as it was previously done (stopping the test after a certain number of request_success/request_failure events), because it will tend to overshoot if there are a number of requests "in transit".

I think that - due to all the reasons previously stated on why it would be very hard to implement in a good way when running distributed if we are not allowed to "overshoot" - it would be much better to clearly document that -n is used to trigger a stop condition, and that it will overshoot. I think that people who are using a load testing tool in a way where they can't handle some extra requests are probably doing it wrong :).

@bugsnub

bugsnub commented Feb 26, 2020

My simple question is: I have a system that cannot take more than 100 requests, and if we fire more than 100 requests at it, all subsequent calls will fail. So how can I make locust stop when it has hit 100 requests? Otherwise my report is a complete mess, as it also includes the 101+ requests, which are useless to the test. If we have a workaround for this, I am happy to say no to -n. But please provide me something so that I can use locust in such a scenario. Please-please-please help!!

@heyman
Member

heyman commented Feb 26, 2020

@anshumangoyal Do you run Locust distributed? In that case a work-around is complicated (though still possible). If not, you can just wrap the requests in a function where you increase a counter for every request, and when you reach 100 you simply turn all subsequent calls into no-ops. If and when we merge #1266 you should also be able to grab a reference to the runner and call its stop() method to end the test immediately.
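
A minimal sketch of that wrapper idea (the names MAX_REQUESTS and limited_get are made up for illustration; this assumes a single, non-distributed Locust process):

```python
MAX_REQUESTS = 100
_sent = 0

def limited_get(client, path, **kwargs):
    """Issue a GET only while the request budget lasts; afterwards every call is a no-op."""
    global _sent
    if _sent >= MAX_REQUESTS:
        return None  # budget exhausted: skip the call entirely
    _sent += 1
    return client.get(path, **kwargs)
```

A task would then call limited_get(self.client, "/some-endpoint") instead of self.client.get("/some-endpoint").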

@rk4n3

rk4n3 commented Mar 1, 2020

I think that - due to all the reasons previously stated on why it would be very hard to implement in a good way when running distributed if we are not allowed to "overshoot" - it would be much better to clearly document that -n is used to trigger a stop condition, and that it will overshoot. I think that people who are using a load testing tool in a way where they can't handle some extra requests are probably doing it wrong :).

That's why I advocate implementing the feature without distributed support. The feature doesn't even make that much sense in the scenarios where distributed makes the most sense, to wit: if one is using distributed, one is trying to scale beyond what a single instance delivers, which in turn doesn't really match the "I want to run only N iterations" concept.

As far as functionality being tightly confined to only what some definition of "load testing" would prescribe, I would rather observe that the core functionality could also be described as "mechanisms to invoke application behavior, for the purpose of testing" ... whether that's used to drive load or to simply hit the app once doesn't seem like any compromise of product purpose.

@bugsnub

bugsnub commented Mar 3, 2020

```python
import json
from locust import HttpLocust, TaskSequence, seq_task, between
from locust.exception import StopLocust

class ReservationApp(TaskSequence):
    def __init__(self, parent):
        super().__init__(parent)
        self.host = self.parent.host

    @seq_task(1)
    def signup(self):
        body = json.dumps({
            "contact_number": '12345',
            "birthDate": "2002-09-10",
            "email": "[email protected]",
            "firstName": "FirstName",
            "lastName": "LastName",
            "password": "Password"
        })
        headers = {
            "Authorization": "Basic xkjasfnklnlnfslksa",
            "Content-Type": "application/json"
        }
        response = self.client.post('account-service/signup', name='TC-01 Signup', data=body, headers=headers)

    @seq_task(2)
    def booking(self):
        headers = {
            "Authorization": "Basic xkjasfnklnlnfslksa",
            "Content-Type": "application/json"
        }
        response = self.client.get('account-service/booking', name='TC-02 Startup booking', headers=headers)

    @seq_task(3)
    def login(self):
        headers = {
            "Authorization": "Basic xkjasfnklnlnfslksa",
            "Content-Type": "application/json"
        }
        body = json.dumps({
            "userName": "UserName",
            "password": "PassWord"
        })
        response = self.client.post('account-service/signin', name='TC-03 Login booking', data=body, headers=headers)

class ReservatonLocust(HttpLocust):
    task_set = ReservationApp
    host = 'https://dummy-server.dummy.com'
    wait_time = between(5, 10)

if __name__ == "__main__":
    ReservatonLocust().run()
```
@heyman can you help me with the change I have to put in here? This is sample code, but I am not clear where exactly the code goes. There are three tasks and I want to stop when they have all been executed n times. I don't want to hit the API after the n'th call.

@wasimansari661

Can we have both -t and -n to end the test? And can I fix the RPS to a specific value during the execution?

I am running Locust in distributed mode in a Kubernetes cluster and face a lot of challenges controlling the end of the test: the Pods (both Master and Slaves) restart once the --run-time is reached, which makes the whole test run infinite.

I have come up with an additional Pod called "Locust-Monitor" to read the logs written by Locust-Master. When we get the "teardown" message in the Locust-Master logs, we end the test by removing the Locust Master and Slaves using Kubernetes commands.

Any other suggestions would also be welcome.

cyberw added a commit to SvenskaSpel/locust-plugins that referenced this issue Aug 15, 2020
Also move the checks parameters into __init__.py
@cyberw
Collaborator

cyberw commented Aug 15, 2020

I have added the -i parameter to locust-plugins to solve what I think is the main use case for this (see https://github.com/SvenskaSpel/locust-plugins#command-line-options)

It requires setting the parameter for worker processes and makes no effort to "distribute" the number of iterations across workers, so there is definitely room for improvement, but it works well enough for me.

@cyberw
Collaborator

cyberw commented Sep 20, 2020

I know this ticket is very old now, but is the implementation in locust-plugins good enough for you? I intend to close this ticket soon...

@rk4n3

rk4n3 commented Sep 20, 2020

Looks good to me - thanks :)

@cyberw cyberw added the invalid label Sep 20, 2020
@cyberw cyberw closed this as completed Sep 20, 2020
@cyberw
Collaborator

cyberw commented Sep 20, 2020

(Marking as invalid, because there is no fix made in locust itself. But I don't want to say "wontfix", because that would sound like there is no solution.)

@thejusdutt

Are you guys introducing this option anytime soon?

@cyberw
Collaborator

cyberw commented Mar 7, 2022

It already has a solution in locust-plugins that works for most cases, as mentioned above. If someone were to take the time to make a PR introducing that feature + adding tests, then I could be convinced to add it to locust core...

@thejusdutt

What is the name of the plug-in exactly? I don't see any use of -n anywhere. I installed locust-plugins using pip.

@cyberw
Collaborator

cyberw commented Mar 7, 2022

It's called -i/--iterations. It doesn't actually limit the number of requests, but the number of task iterations, so it is not exactly the same.

@thejusdutt

Locust is not recognizing it: locust: error: unrecognized arguments: -i10

@cyberw
Collaborator

cyberw commented Mar 7, 2022

Your locustfile must import locust_plugins to get the added options. See https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/cmd_line_examples.sh
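
For example, a minimal locustfile along these lines (the user class and endpoint are made up) picks up the plugin's extra options:

```python
# locustfile.py
import locust_plugins  # noqa: F401 -- the import itself registers the extra CLI options, including -i/--iterations
from locust import HttpUser, task, between

class MinimalUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def index(self):
        self.client.get("/")
```

It can then be run headless with something like: locust -f locustfile.py --headless -u 1 -i 10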
