Allow a fixed RPS rate #646

Closed
ghost opened this issue Aug 28, 2017 · 56 comments

Comments

@ghost
Copy link

ghost commented Aug 28, 2017

Description of issue / feature request

Please allow specifying a fixed RPS rate.

While the current model is nice for modelling human users, it is not very useful for modelling more complex modern web applications, which often exhibit an exact, known waiting behaviour rather than something controlled by an unpredictable distribution.

Currently I see no good way of modelling this with locust, and in our current project we are having a lot of trouble working around it by guessing settings to roughly approximate the actual RPS the web app is known to produce.

Expected behavior

An RPS setting is available as an alternative, including a way to guarantee e.g. exactly 1 request per second even if the requests themselves have varying durations (e.g. a 200 ms response and a 500 ms response won't lead to strong variations of the interval).

Actual behavior

I can't find an RPS setting, and requests that take longer seem to make users wait longer instead of allowing some sort of predictable fixed-interval behaviour (which is unrealistic for real users, of course, but not unrealistic for many automated web clients).

@aldenpeterson-wf
Copy link
Contributor

An RPS setting is available as an alternative, including a way to guarantee e.g. exactly 1 request per second even if the requests themselves have varying durations (e.g. a 200 ms response and a 500 ms response won't lead to strong variations of the interval).

Have you looked at min and max wait? Setting them both to 1000 (ms) would result in locust making 1 request/sec per client, which seems to be exactly what you are looking to do.
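
For reference, a minimal sketch of that suggestion against the Locust API of that era (class names and the task body are illustrative):

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    # wait exactly 1000 ms between tasks; note that the request time
    # itself is added on top of this, which is the crux of this issue
    min_wait = 1000
    max_wait = 1000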

@ghost
Copy link
Author

ghost commented Aug 29, 2017

@aldenpeterson-wf are you sure it won't result in 1000/(1000 ms + request time in ms) requests per second? Because that is what we seem to be seeing.

@mambusskruj
Copy link

@Jonast you're right! It is exactly 1000/(1000 ms + request time in ms).

@ghost
Copy link
Author

ghost commented Aug 30, 2017

@mambusskruj yes, and that's not 1 RPS but something lower, so it's not what we're looking for.

@mambusskruj
Copy link

@Jonast the problem is calculating a different wait time based on the response time. The right calculation is 1000/(x ms wait time + request time in ms), where x would have to be recalculated continuously during the test.

@ghost
Copy link
Author

ghost commented Aug 30, 2017

@mambusskruj

It's not really hard to do:

import time

request_ts = time.monotonic()  # next scheduled request time
RPS = 1.0
while True:
    do_request()                  # placeholder for the actual request
    request_ts += (1.0 / RPS)     # schedule the next slot, independent of request duration
    now = time.monotonic()
    if now < request_ts:          # only sleep if we are ahead of schedule
        time.sleep(request_ts - now)

The main problem is really that locust appears to have no option to request such a behavior, hence this ticket.

@ghost
Copy link
Author

ghost commented Sep 17, 2017

@aldenpeterson-wf any chance such an option would be considered? It would be somewhat useful for simulating certain kinds of automated clients.

@heyman
Copy link
Member

heyman commented Sep 18, 2017

I don't think this is a common enough use-case to warrant a feature in Locust.

However, it wouldn't be hard to implement this in the test scripts yourself. Something like this should work (note: not tested):

from time import time
from locust import TaskSet

class ConstantWaitTimeTaskSet(TaskSet):
    wait_time = 1000

    def __init__(self, *args, **kwargs):
        super(ConstantWaitTimeTaskSet, self).__init__(*args, **kwargs)
        self._time_waited = 0

    def wait(self):
        # subtract the time already spent in requests from the fixed wait time
        t = max(0, self.wait_time - self._time_waited) / 1000.0
        self._sleep(t)
        self._time_waited = 0

    def get(self, *args, **kwargs):
        # wrap GET requests so their duration counts towards the wait time
        start = time()
        response = self.locust.client.get(*args, **kwargs)
        self._time_waited += (time() - start) * 1000
        return response

(You'd then use self.get() in the tasks to make GET requests. It should also be possible to override HttpLocust.client so that one doesn't have to change how HTTP requests are made)
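
A minimal usage sketch of the above (class names and the URL are illustrative; assumes the class is defined in the locustfile):

from locust import HttpLocust, task

class UserBehavior(ConstantWaitTimeTaskSet):
    wait_time = 1000  # aim for one task start per second

    @task
    def index(self):
        # use the wrapped get() so request time counts towards the wait
        self.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior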

@ghost
Copy link
Author

ghost commented Sep 18, 2017

@heyman is it not common to test API clients? Since REST APIs are so common, this seems a bit surprising to me.

While I agree it's not that hard to implement, the length of your suggested code already makes me wonder whether it wouldn't be better to add this to locust itself instead of having everyone who tests automated clients reimplement it.

@heyman
Copy link
Member

heyman commented Sep 18, 2017

@Jonast There are a lot of API clients that don't make requests at a fixed interval. And among those that do, I believe it's at least as common for them to have a fixed wait time between calls, without taking the duration of the previous request(s) into consideration.

This is the first time this feature has been requested (I believe). That tells me it can't be extremely common. If you think it's something a lot of people would want, or if you want this in multiple projects, I think it would be a good idea to release it as a separate Python package.

Doing this:

from locust_fixed_interval_taskset import FixedIntervalTaskSet

should be (almost) as easy as doing this:

from locust import FixedIntervalTaskSet

@ghost
Copy link
Author

ghost commented Sep 18, 2017

What web app would use a blocking request? Most of them would use setInterval in the background with an asynchronous Ajax request, which would exhibit the behaviour I described. I think the more likely explanation is that most users didn't care enough about the inaccuracy of their simulation to file a ticket, not that this isn't a common requirement.

Edit: anyway, of course I can use a separate Python package. I just find it a bit odd that such an entire class of clients cannot be simulated accurately.

@heyman
Copy link
Member

heyman commented Sep 18, 2017

@Jonast Ah, for JavaScript that's probably true in quite a few cases (though it's probably not very uncommon that the next API call is scheduled once the previous response comes in).

I'm also wondering, for such apps - where periodic API-requests are being made at a set interval - does the model of tasks and TaskSets really make sense?

Does anyone else have thoughts on this?

@ghost
Copy link
Author

ghost commented Sep 18, 2017

I can only say that we tested a web app with locust at a really large scale and we essentially 1) ignored the RPS problem (we ignored that requests take time and accepted the resulting inaccuracy), and 2) used a single task instead of multiple tasks with probabilities, so we could set the intervals exactly to how the web app behaves.

As far as our use case is concerned, the best thing would be something entirely separate from the task set that can be used in addition to it: we still have user-prompted tasks, for which the existing TaskSet infrastructure is perfect, in combination with background tasks of the web app, for which we needed a fixed RPS model.

@byoda
Copy link

byoda commented Nov 9, 2017

+1 for this feature

In my use case I need to be able to figure out what RPS a server can support for a certain API. Instead of simulating user behaviour, I want to increase the RPS against a certain API in steps, from 0 up to the desired level.

@kainoaseto
Copy link

+1 As well for the reasons listed above, Locust is fairly flexible but it would be nice to have this feature built into the framework.

@myzhan
Copy link
Contributor

myzhan commented Dec 26, 2017

Guys, you can have a look at boomer or locust4j, they have built-in Max RPS support.

See https://docs.locust.io/en/latest/third-party-tools.html

@dterei
Copy link

dterei commented Jan 24, 2018

+1, there are a lot of services where you can't control the load on the system: if your system gets slower, the incoming request rate doesn't slow down to match.

@heyman
Copy link
Member

heyman commented Jan 24, 2018

I think the title of this issue is a little bit misleading. The OP seems to want the option to make the wait time between tasks dynamic based on the execution time of the previous task. This can be achieved quite easily by overriding the wait() and execute_next_task() methods of the TaskSet (see my example code here: #646 (comment)). One would have to track the task execution time in execute_next_task() and then subtract that execution time from the sleep time in wait().
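
A rough sketch of that idea, assuming the pre-1.0 TaskSet API where wait() and execute_next_task() can be overridden and _sleep() takes seconds (untested, names are illustrative):

from time import time
from locust import TaskSet

class PacedTaskSet(TaskSet):
    pacing = 1.0  # target interval between task starts, in seconds

    def execute_next_task(self):
        # measure how long the task itself takes
        start = time()
        super(PacedTaskSet, self).execute_next_task()
        self._last_task_duration = time() - start

    def wait(self):
        # sleep only for whatever is left of the pacing interval
        remaining = self.pacing - getattr(self, "_last_task_duration", 0)
        self._sleep(max(0, remaining))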

Although implementing the above feature might in some cases (depending on the test script) make Locust perform a "fixed RPS", actually making Locust perform a fixed RPS for all test scripts would require something totally different (since Locust tasks are just python methods that can do anything, and it's impossible for us to control the RPS by controlling the wait time between tasks).

Currently I'm not convinced that the use-case is common enough to warrant having the above feature (wait time based on previous task's execution time) implemented in Locust itself, when it can be implemented in the test scripts with a couple of lines of code.

I propose that we rename this issue and that from here on we discuss the wait-time feature. (If someone wants to propose something else for somehow achieving constant RPS, I think it's better to open a new ticket.)

@dterei
Copy link

dterei commented Jan 25, 2018

@heyman Sorry, just to check: am I right in my understanding that out-of-the-box Locust doesn't support a constant-rate scheduler? That is, an open control loop where the same number of requests is guaranteed to be sent every second, regardless of the behaviour of the system under test? If the scheduler used a fixed inter-arrival time, it would simply send a request every 1/RPS seconds. This is what I am interested in.

@heyman
Copy link
Member

heyman commented Jan 25, 2018

@dterei Yes, that is correct. We're user centric rather than RPS centric. Locust provides you with a framework to define user behaviour using code, and then you select the number of these simulated users that you want to run against the system you're load testing.

In many cases it's much more relevant to be able to say "our system can handle X number of simultaneous users" than "our system can handle Y requests/second". Even though it's often quite easy to determine Y as well, by just simulating more users until the system being tested can no longer handle the load.

@ghost
Copy link
Author

ghost commented Jan 25, 2018

I'd like to reiterate that while your code wouldn't be hard to implement for someone familiar with the internals, it's still more than 10 lines long and definitely more work to use than a simple decorator would be.

Also, I can only speak for us, but we already modelled users on paper with detailed real-world data and calculated the RPS before we went looking for a test framework. Going back to a different kind of modelling and hoping it hits roughly the same RPS we already calculated is just more work, and a bit of an annoying enforced paradigm that may mean well but isn't necessarily useful in all situations (even those with more natural users and not REST APIs, depending on what data the user has before they get set up with your test framework).

Therefore I think it'd really be the best to just offer both approaches out of the box.

Edit: also, RPS is easier to read off real-world test data. Even if it's less accurate than a detailed user model (which I'll readily admit), approximating that RPS inaccurately with locust, which tries to enforce such a model, makes it even more inaccurate. Just to reiterate my point: it's not necessarily helpful to require this paradigm, even in cases where it would provide better results.

@aldenpeterson-wf
Copy link
Contributor

The concept just doesn't fit what Locust is designed to accomplish.

Locust creates "locusts" (ie users) that do various activities. As those activities block, the RPS rates change - if each user op takes 10 seconds, each user will only make 1 op every 10 seconds at best.

A fixed RPS approach effectively means that in Locust, the number of users varies based on the overall RPS rate.

It's valuable, but a very different paradigm than what Locust does.

@ghost
Copy link
Author

ghost commented Jan 25, 2018

As those activities block, the RPS rates change - if each user op takes 10 seconds, each user will only make 1 op every 10 seconds at best.

Sure, but that's a non-issue with properly written activities and easy to see when it becomes a problem. Anyone doing serious tests will have an idea of what they expect to hit, and at least in our tests we easily noticed when there was a bottleneck somewhere thanks to Locust's very helpful graphs - be it in what the script does, or simply the machine being too slow.

It's valuable, but a very different paradigm than what Locust does.

Yes, but I just explained why I think you should have both, even if it's a limited implementation that requires taking care of having no long-blocking tasks. It's still less work to be careful about that than to implement the delay handling manually.

This approach reminds me a bit of GNOME 3: sometimes it's good to nudge your users to do the right thing, but often it also means that people will just try to do what they had initially wanted to do anyway and get worse results (which is pretty much what we did when using locust: we basically ignored this whole thing, including proper delay handling, and got worse data), or get frustrated and use something else. I'm not sure anyone gains anything from that.

Edit: clarified a few things

@heyman
Copy link
Member

heyman commented Jan 25, 2018

@Jonast I definitely get that it makes sense in some cases, I'm just not sure it's a common enough use-case to warrant making the product more complex.

Locust has gotten a ton of feature requests over the years. Many made perfect sense for the requester's use case, but if we had implemented them all, I believe the API and product would have been worse off due to complexity and clutter.

Every new feature is a trade-off. The value it adds VS the complexity it adds. In this case I'm not convinced (yet) that the first outweighs the latter. I'm open to discuss it though.

What would we name such a feature (API-wise), so that users could understand what it is, or know what to look for if they need that functionality?

How would such a feature relate to min_wait and max_wait? Would the task execution time of the previous task be subtracted from the random number that is picked between min_wait and max_wait?

What would happen if the task execution time of the previous task were longer than the wait time? I'm guessing the wait would just be zero, but some might argue that the new task should start even before the previous one finished, in which case we would have to spawn a new greenlet for the task, which would open up a whole new set of potential problems.

@heyman
Copy link
Member

heyman commented Jan 25, 2018

Writing my questions in the previous post got me thinking, and I might have an idea for an API change that could allow us to implement this feature while keeping the API fairly clean. I'm writing this off the top of my head, and there may well be things I haven't thought about, so I would love others' input.

API proposal:

# Current wait behaviour
class WebsiteUser(HttpLocust):
    wait_function = between(1, 45)

# Making tasks execute at a constant interval (assuming the task execution time < wait time)
class WebsiteUser(HttpLocust):
    wait_function = constant_execution_interval(10)

I'll try to answer my own questions for this proposal.

What would we name such a feature (API-wise)

There might be a better name than constant_execution_interval.

How would such a feature relate to min_wait and max_wait?

It wouldn't support having a min and max value. One would only be able to specify a single argument which would be the execution interval.

What would happen if the task execution time of the previous task were longer than the wait time?

The wait time would be 0. If task execution time is longer than the wait time, the execution interval wouldn't be constant, but I think that's an acceptable tradeoff.
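
For illustration, a minimal, framework-agnostic sketch of the pacing logic this implies (the wait is simply clamped at zero when the previous task overran the interval):

def constant_execution_interval(interval_seconds):
    """Return a function mapping the previous task's duration to the time to sleep."""
    def wait_time(last_task_duration):
        # if the task took longer than the interval, don't sleep at all
        return max(0.0, interval_seconds - last_task_duration)
    return wait_time

# example: 10 s interval, previous task took 2.5 s -> sleep for 7.5 s
sleep_for = constant_execution_interval(10)(2.5)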

Thoughts?

@ghost
Copy link
Author

ghost commented Jan 25, 2018

The wait time would be 0. If task execution time is longer than the wait time, the execution interval wouldn't be constant, but I think that's an acceptable tradeoff.

That looks exactly like what we were looking for, and it looks very easy to use as well. As for the trade-off, that kind of thing already happens now (when there's a CPU or bandwidth bottleneck and user activity slows down as a result), so watching out for it is inevitable anyway.

@cgoldberg
Copy link
Member

What would we name such a feature (API-wise)

"constant pacing"?

@ukclivecox
Copy link

One reason I liked iago was exactly its ability to send traffic at a constant request rate, at which you could then observe your app's behaviour. I'm no expert, but I think it's quite hard to achieve if you want a distributed load tester to keep its request rate constant irrespective of the delays of the system being tested.

@contactparthshah
Copy link

contactparthshah commented Mar 19, 2018

Could you please tell me when this feature is going to be released, and in which version of locust?

@arturshark
Copy link

Hi,
+1 for this feature.
Any updates?

@cgoldberg
Copy link
Member

Updates would be posted here.

@phuctran96
Copy link

Hi,
+1 for this feature. It'd be very helpful! Thanks!

@Reifier
Copy link

Reifier commented Aug 28, 2018

This is how good OSS projects die, I guess. I'll have to go use vegeta now.

@thedeeno
Copy link

@Reifier unless you have a rejected PR, those types of comments are just plain rude. OSS contributors are not some people in the cloud doing your bidding for free.

Get involved or move on. That attitude is extreme.

@cgoldberg
Copy link
Member

If vegeta suits your needs better, that's great. Drilling HTTP services with constant request rates is not really a strength of Locust. Locust's strength is modelling more complex user behaviour in Python.

I don't see how lack of interest in generating constant RPS workloads would mean the project is dying?

@Reifier
Copy link

Reifier commented Aug 28, 2018

@thedeeno I never said anyone has to do my bidding; I'll just find another solution. So yes, I am moving on.

@Oryon
Copy link

Oryon commented Aug 28, 2018

Coming back to this: it was not that difficult to implement within the locust file. Sure, it won't give you a fancy GUI with a big red button letting you control the RPS, but it's enough.

The key part of the solution is the self._sleep(t) function, which waits the specified time without blocking the other simulated users.

time_to_wait = 10

# Get starting time
start = time.time()

# Do some HTTP stuff
self.client.request("GET", url="http://example.org/")

# Get ending time
end = time.time()

# Wait for the remainder of the interval if the request finished early
if end - start < time_to_wait:
    self._sleep(start + time_to_wait - end)

@ukclivecox
Copy link

The problem is coordinated omission. You don't want your test tool being slowed down by a slow test subject. Solving this on a single node, and even more so in a distributed setup, is difficult if you want to maintain a constant request rate.

BTW: we use and like locust - maybe the project owners could create a "Projects using Locust" list?

@contactparthshah
Copy link

@Oryon ,

The above code won't give you a fixed RPS, based on my experiments and observations.

Scenario:
Let's say the server takes a long time to respond while hitting the URL; then I don't think we will achieve a fixed RPS, and that is what I have seen.

@Oryon
Copy link

Oryon commented Aug 28, 2018

@contactparthshah,

Indeed, this code won't provide a fixed RPS in all circumstances, particularly if the server is too slow or has a high response-time jitter.

There is no such thing as instantaneous RPS anyway. If what you want is something that provides a fixed RPS over a longer time range, then you can still use self._sleep, but by averaging the expected time over multiple requests instead of just one.
This approach also only looks at one single simulated user. Something more sophisticated could use a shared structure (with locking) in order to synchronize the users within a given slave. Doing the same thing across slaves is also possible, but far more complex.
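
A minimal sketch of that averaging idea, pacing against a schedule counted from the start of the test rather than per request (a hypothetical helper, not part of locust):

import time

class Pacer(object):
    """Pace calls so that, on average, `rps` calls are made per second."""
    def __init__(self, rps):
        self.interval = 1.0 / rps
        self.start = time.time()
        self.count = 0

    def time_until_next(self):
        # compare the schedule (count * interval) with the elapsed wall-clock time
        self.count += 1
        scheduled = self.start + self.count * self.interval
        return max(0.0, scheduled - time.time())

# inside a task, after the request: self._sleep(pacer.time_until_next())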

The previous code snippet is enough for what I want to do though. It works like a charm, and I suspect it might be the same for others.

@ghost
Copy link
Author

ghost commented Aug 28, 2018

In my opinion, the above snippet is sufficient for most scenarios, because once you hit the server limit you unavoidably lose connections or slow down (even with "perfect" client code). This is observable in the graphed request numbers (you'll see them suddenly dip or hit a ceiling), just as you'd see failing requests in the graphs, so it's simply another indicator that you've hit the server limit - finding which is the usual reason for using locustio in the first place. So I don't see this as a problem if you know what to look out for during the test.

Without the snippet - i.e. with the current locustio behaviour - you get a much more gradual slowdown: as requests get slower, the clients gradually reduce their request rate, which is harder to observe and is also unrealistic for a certain kind of fixed-interval client. That's why I made the feature request (just to come full circle with my explanation).

Edit: okay, thinking about it, if your server has very sporadic, extremely slow requests in between, this actually is a problem. We were lucky that our server didn't behave like that until you truly reached the performance ceiling - but I guess that's a scenario where the above code wouldn't really be sufficient, unless the actual client being modelled would also wait before sending additional requests. For our case the snippet would have worked well enough. Maybe, if the rate can't be kept up, that should simply be graphed as yet another metric, so it would be more apparent when testing?

@savvagen
Copy link

savvagen commented Jul 2, 2019

I found and implemented 2 ways to generate a fixed RPS:

1. Adjust the RPS by changing the wait time in wait_function (recommended way):

This wait function generates a stable 100 RPS within the desired range with 100 users.
The starting wait time should be 1000 ms.

# assumed imports for this snippet (pre-1.0 locust API):
# from locust import HttpLocust
# from locust import runners
class LoadTests(HttpLocust):
    host = base_uri
    task_set = UserScenario
    wait_function = lambda self: self.fixed_rps_wait_function(100)

    def __init__(self):
        super(LoadTests, self).__init__()
        self.my_wait = 1000

    def fixed_rps_wait_function(self, desired_rps):
        # Increases and decreases the task wait time to stay in the range of 99.8 - 100.7 rps
        current_rps = runners.global_stats.total.current_rps
        if current_rps < desired_rps - 0.2:
            # the minimum wait is 10 ms
            if self.my_wait > 10:
                self.my_wait -= 4
        elif current_rps > desired_rps + 0.7:
            self.my_wait += 4
        print("Current RPS: {}".format(current_rps))
        print("Current wait is: {}".format(self.my_wait))
        return self.my_wait

2. Adjust the user count during the test run from hooks (only for master mode, and hard to adapt to the desired number of slaves):

Hatch and kill users to reach the desired RPS during the load test (here: 100 RPS using 100 users).
Hooks:


# Every slave will spin up 2 users.
# The user count and desired_rps should be scaled according to the number of slaves.
def on_report_to_master(client_id, data, **kw):
    # Executes before on_slave_report
    # Validate data statistics on slave
    clients_number = runners.locust_runner.num_clients
    hatch_rate = runners.locust_runner.hatch_rate
    print("Clients number: {}".format(clients_number))
    rps_mid = data['stats_total']['num_reqs_per_sec'].values()
    if len(rps_mid) >= 1:
        rpss = list(rps_mid)
        rpss.sort()
        if max(rpss) < 100:
            clients_number += 2
            runners.locust_runner.start_hatching(clients_number, hatch_rate)
            events.hatch_complete.fire(user_count=clients_number)
        if max(rpss) >= 103:
            clients_number -= 1
            runners.locust_runner.start_hatching(clients_number, hatch_rate)
            events.hatch_complete.fire(user_count=clients_number)


def on_slave_report(client_id, data, **kw):
    # Executes after on_report_to_master
    # Print data statistics on master
    rps_number = runners.global_stats.total.current_rps
    clients_number = runners.locust_runner.num_clients
    hatch_rate = runners.locust_runner.hatch_rate
    print("Users number: {}".format(data['user_count']))

Add the hooks to the locustfile:

# RPS listeners: WORKING ONLY WITH ONE NODE and in MASTER mode
events.report_to_master += on_report_to_master
events.slave_report += on_slave_report

@cyberw
Copy link
Collaborator

cyberw commented Jul 9, 2019

Hi! I have implemented a different solution, inspired by JMeter's constant throughput timer. I think it will be more accurate than looking at locust's current_rps tracker, especially at low load.

# imports assumed by this snippet (pre-1.0 locust API)
import logging
import os
import time

import gevent
from locust import TaskSet, task
from locust import runners

rps = float(os.environ["LOCUST_RPS"])


class TaskSetRPS(TaskSet):
    def __init__(self, parent):
        super().__init__(parent)
        self.previous_time = 0.0

    def rps_sleep(self, rps):
        current_time = float(time.time())
        next_time = self.previous_time + runners.locust_runner.num_clients / rps
        if current_time > next_time:
            if runners.locust_runner.state == runners.STATE_RUNNING:
                logging.warning("Failed to reach target rps, even after rampup has finished")
            self.previous_time = current_time
            return

        self.previous_time = next_time
        gevent.sleep(next_time - current_time)


class UserBehavior(TaskSetRPS):
    @task
    def my_task(self):
        self.rps_sleep(rps)
        # ... the rest of your task ...

@algattik
Copy link

algattik commented Jul 9, 2019

I've implemented it as a separate Python module and a Docker image.

# pip install locust-fixed-interval
from locust_fixed_interval import FixedIntervalTaskSet

class MyTaskSet(FixedIntervalTaskSet):

    def setup(self):
        self.interval = 2.5

    @task
    def task1(self):
        # ...
        pass

https://pypi.org/project/locust-fixed-interval/
https://hub.docker.com/r/algattik/locust-fixed-interval
https://github.com/algattik/locust_fixed_interval

@cyberw
Copy link
Collaborator

cyberw commented Sep 20, 2019

I have released my code as part of locust-plugins: https://github.com/SvenskaSpel/locust-plugins/blob/master/locust_plugins/tasksets.py

I have also built support for this into my tool for automated distributed locust runs: https://github.com/SvenskaSpel/locust-plugins (basically it just divides the RPS rate equally between all locust processes)

@cyberw
Copy link
Collaborator

cyberw commented Nov 14, 2019

Solved by #1118. For global RPS control (as opposed to per-locust control) you still need to have some custom code (like the one provided by locust-plugins), but maybe I can add that to locust itself now that I'm a maintainer...
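
For later readers, a minimal sketch of per-user pacing with the wait_time API that came out of that work, assuming a locust version that ships constant_pacing (class and task names are illustrative):

from locust import HttpLocust, TaskSet, task, constant_pacing

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    # each user starts this task once per second, regardless of how long
    # the task itself took (the remaining wait is clamped at zero)
    wait_time = constant_pacing(1)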

@DavideRossi
Copy link

@dterei Yes, that is correct. We're user centric rather than RPS centric. Locust provides you with a framework to define user behaviour using code, and then you select the number of these simulated users that you want to run against the system you're load testing.

In many cases it's much more relevant to be able to say "our system can handle X number of simultaneous users" than "our system can handle Y requests/second". Even though it's often quite easy to determine Y as well, by just simulating more users until the system being tested can no longer handle the load.

I am late to this, but let me say that this is true for websites. Different kinds of platforms can be load-tested, though. In my case I deal with IoT systems, where a request does not usually depend on the response time of the previous one.
There are (several) scenarios in which a constant rate makes complete sense.

@Jasnoor1
Copy link

I found and implemented 2 ways to generate a fixed RPS: […]

@savvagen
Can you please explain the wait_function in more detail? How did you calculate it?

@guwenyu1996
Copy link

@cyberw I'm trying to use your solution with locust version 1.2.3. It seems the runners module no longer has the attribute runners.locust_runner. Can you specify the locust version your code was written for?

@cyberw
Copy link
Collaborator

cyberw commented Sep 8, 2020

Hi @guwenyu1996! Unfortunately I haven't had time to keep that up to date (and I had some weird issues with the RPS rate). You're on your own for now...
