
Setup/teardown hooks #59

Closed
klausbrunner opened this issue Mar 12, 2013 · 21 comments

@klausbrunner

It'd be extremely useful to have dedicated setup and teardown functionality in Locust (or if there is something like this already, to have it documented).

My rough idea would be:

  • Setup is called exactly once, and must complete, before the test locusts are spawned. Teardown is called once after all of them have stopped.
  • Setup should be fail-fast (if anything in setup fails, abort the test).
  • Setup/teardown is realised using a Locust class like the others, so that state can be kept from setup to teardown if need be. Tasks are marked with setup and teardown annotations as appropriate (a rough sketch follows this list).
  • Setup/teardown is not included in any statistics.
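A purely hypothetical sketch of what I have in mind (the @setup/@teardown decorators don't exist in Locust; they're defined below only as inert markers for illustration):

from locust import Locust

# Hypothetical marker decorators, for illustration only; no such API
# exists in Locust today.
def setup(func):
    func.is_setup = True
    return func

def teardown(func):
    func.is_teardown = True
    return func

class TestFixture(Locust):
    @setup
    def ingest_test_data(self):
        # would run exactly once, before any test locusts are spawned;
        # any failure here should abort the whole run
        self.generated_ids = [1, 2, 3]  # placeholder for real ingestion

    @teardown
    def remove_test_data(self):
        # would run once after all locusts have stopped; state kept on
        # the instance (generated_ids) is still available here
        print("deleting %d records" % len(self.generated_ids))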

Thoughts? (Have I missed something that already exists?)

@heyman
Member

heyman commented Dec 2, 2013

Hi!

I'm really sorry that you haven't gotten a reply until now.

There's currently no functionality that does exactly what you describe. When I've been in need of some setup code for my locust tests, I've been putting it at the module level of my test scripts. But that's not run every time a test is started/stopped, and there's no teardown.
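For illustration, a minimal sketch of that module-level pattern (the seeding endpoint is made up):

# locustfile.py
# Module-level code like this runs once per locust process, at import
# time. It acts as crude setup, but it is not re-run when a test is
# started or stopped, and there is no teardown counterpart.
import requests  # plain requests; self.client isn't available here

requests.post("http://example.com/api/seed", data={"users": 10})  # hypothetical endpoint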

I'm curious if you have, or have had, a specific use-case where this was needed (I'm sure there are such cases; I would just like to hear about them).

@klausbrunner
Author

In my case it's an application that needs to ingest test data (some of which are randomly selected) before running meaningful load tests, and then make sure they're gone afterwards to avoid bloating the data store. Setup can be a fairly lengthy process.

Sure, you can call external scripts before and after test runs, but that's quite inconvenient, especially if the teardown phase needs to know things from the setup phase (e.g. generated IDs). And it just makes sense to have a self-contained test package instead of a bunch of different things glued together.

(I'm no longer using locust, but the lack of a good setup/teardown facility is one reason that made me switch to another solution.)

@GeoSpark

I too need this functionality: I am creating and reading files through a RESTful interface, and some clean-up between runs is needed. I figured it needed a companion to the on_start() function, rather than an event, so I have overridden TaskSet.run() and captured the GreenletExit exception:

from gevent import GreenletExit

from locust import TaskSet

class MyTaskSet(TaskSet):
    def run(self, *args, **kwargs):
        try:
            # run the TaskSet as normal...
            super(MyTaskSet, self).run(*args, **kwargs)
        except GreenletExit:
            # ...and invoke an optional on_stop() hook when the greenlet
            # is killed, i.e. when the test stops
            if hasattr(self, "on_stop"):
                self.on_stop()
            raise

Of course, the ideal solution would be to put my exception-handling code in the relevant place in core.py. I could put together a patch or a pull request if you want.

@sfitts

sfitts commented Apr 25, 2014

I'd also like this functionality -- in my case in order to create and then delete user accounts (using a single, dummy account isn't possible in my case). The user creation I'm doing in the __init__ of my Locust class, and that works fine; what I don't have is the corresponding cleanup. For that, what I think I'd like is an EventHook called on test stop, but other solutions could work as well.

@daubman
Contributor

daubman commented Apr 26, 2014

For cleanup we've used the quitting event, which might also work for you if you don't mind cleaning up on quit rather than on stop (depending on how much cleanup you really need; if you just persistently track created things for the entire runtime, then cleanup on quit might be fine). We do something like:

import functools
import threading

from locust import Locust, events

QUIT_HANDLED = False
quit_lock = threading.Lock()

def _quit(client, delete_sessions):
    # every APIUser instance registers this handler, so use a
    # double-checked lock to make sure cleanup only runs once
    global QUIT_HANDLED
    if not QUIT_HANDLED:
        with quit_lock:
            if not QUIT_HANDLED:
                QUIT_HANDLED = True
                #...cleanup code here

#...actual code

class APIUser(Locust):
    task_set = APILikeTaskDistribution  # our TaskSet, defined elsewhere

    min_wait = 30 * 1000       # 30 seconds, in milliseconds
    avg_wait = 2 * 60 * 1000   # 2 minutes
    max_wait = 5 * 60 * 1000   # 5 minutes

    def __init__(self):
        super(APIUser, self).__init__()
        # fire our cleanup when the locust process quits
        events.quitting += functools.partial(_quit, self.client, True)

But I agree, a more uniform/easy approach to teardown (that works on stop and not just quit) would be a nice feature.

@sfitts

sfitts commented Apr 26, 2014

Thanks -- I looked at the quitting event and will likely use it as you suggest. Nice to know that it works for someone else.

That will work fine for the actual deployed version of the tests, since we'll shut down after the run. For development, having something at the test level would be more convenient (and there may be other cases where quitting won't work).

@sfitts

sfitts commented Apr 28, 2014

On a somewhat related note, does anyone have a technique for performing per-user work (aka per-locust) that must be done before the locust should be considered fully hatched? I tried putting this in the __init__ of the locust (as described above), but that doesn't work since the locusts are constructed asynchronously. I need this for a couple of reasons:

  • I don't want requests made during this time to appear in the test stats (currently they do, since the call to reset the stats occurs asynchronously from the running of the inits).
  • I need to throttle the startup activity so that it doesn't overwhelm the server. Currently this is all just one big stampede and isn't controlled by the spawn rate (which I'd like it to be).

Ideally I'd like to set the locusts off in groups of N, with an N second pause between groups. The client count and spawn rate sound like they can do this, but they don't really. Instead, each client is created and starts running (with no real difference between init work and task-running work), and there is a fixed pause between starting each client (one over the spawn rate, so the full ramp-up takes clients/spawn-rate seconds).

@heyman
Member

heyman commented Apr 28, 2014

@sfitts: One slightly hacky solution to that would be to acquire a semaphore that you release at the locust.events.hatch_complete event, and wait for that semaphore when the locusts/tasksets start.

Here's a working example:

from locust import HttpLocust, TaskSet, task, events

# note: in newer gevent releases this import lives in gevent.lock
from gevent.coros import Semaphore

all_locusts_spawned = Semaphore()
all_locusts_spawned.acquire()

def on_hatch_complete(**kw):
    # fired once every locust has been spawned
    all_locusts_spawned.release()

events.hatch_complete += on_hatch_complete

class UserTasks(TaskSet):
    def on_start(self):
        # block each locust until the whole swarm has hatched
        all_locusts_spawned.wait()

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    host = "http://127.0.0.1:8089"
    min_wait = 2000
    max_wait = 5000
    task_set = UserTasks

One caveat though: if you're running Locust distributed, there's still a possibility for some requests to happen before all locusts have hatched. That's because there's no synchronisation of the hatch_complete events between the slaves, so if one machine is much slower for some reason, it might lag behind in spawning its locust instances.

Also, since there is no event to listen for when the test stops, there's no easy way of re-acquiring the semaphore once the test has stopped. Since there's clearly a need for it, we should add starting and stopping events in the next release of Locust.

@sfitts

sfitts commented Apr 29, 2014

@heyman: Thanks for the suggestion and the time putting together the example. I'm not expecting any kind of distributed coordination, just need to throttle things on a local basis. So something along these lines should work well.

@mwildehahn

I'm also looking to support ingesting test data that can be referenced when executing a task.

I have a Django app with various models/factories. I'm planning on writing a script that will generate the models I need for the load test within the Django app. My plan is then to adjust the locust runner to take an "initial_data" argument which can be referenced within the task. If a master was passed this information, it could also send it along to the slaves when sending the hatch event.

Is there some other way that I can do that currently? Does that seem like a reasonable extension to the current architecture?
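For comparison, a sketch of the module-level alternative mentioned earlier in the thread (the file name, data shape, and endpoint are all made up):

import json
import random

from locust import HttpLocust, TaskSet, task

# hypothetical fixture file written by the Django data-generation script
with open("initial_data.json") as f:
    INITIAL_DATA = json.load(f)  # e.g. a list of model IDs

class ModelTasks(TaskSet):
    @task
    def fetch_model(self):
        # each task execution picks one of the pre-generated models
        model_id = random.choice(INITIAL_DATA)
        self.client.get("/api/models/%s" % model_id)

class AppUser(HttpLocust):
    task_set = ModelTasks
    min_wait = 1000
    max_wait = 3000

On a master/slave setup every slave would need access to the same file, which is the gap the hatch-message idea above would close.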

@shawngustaw

This issue is pretty old and I'm looking for something along the lines of what's been discussed. Has there been any progress?

@samjiks

samjiks commented Sep 19, 2016

Any progress on this would be great.

@mohanraj-r

mohanraj-r commented Sep 23, 2016

on_start() can be used as a setup, I guess? And I'm not sure if events.quitting can be used to create a hook that acts as a teardown. It would be nice to have an on_stop() that can be defined similarly to on_start().
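Roughly, combining those two existing hooks would look like this (the login request and the print are placeholders):

from locust import HttpLocust, TaskSet, task, events

def on_quitting(**kwargs):
    # crude teardown: this only fires when the locust process quits,
    # not when a test run merely stops
    print("cleaning up...")  # placeholder cleanup

events.quitting += on_quitting

class UserTasks(TaskSet):
    def on_start(self):
        # per-locust setup
        self.client.get("/login")  # hypothetical setup request

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000
    max_wait = 3000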

@swordmaster2k

+1 for an on_stop() feature. I have some custom websockets started on their own greenlet, and having an on_stop handler would enable me to tear them down gracefully.

@rmandar16

+1 for the on_stop() feature. We have some common tear-down tasks to be executed.

@josh-cain

Another +1 for on_stop! It would be immensely helpful.

@Jim-Lambert-Bose

Another +1 for on_stop()

@jdabello

+1000

@ad34

ad34 commented Mar 21, 2018

+1

@aldenpeterson-wf
Contributor

This was addressed in #658 and will ship in the next release of Locust!
🎉
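Based on that PR, the new hooks should look roughly like this in use (details may still shift before the release):

from locust import HttpLocust, TaskSet, task

class UserTasks(TaskSet):
    def setup(self):
        # runs once per process, before any tasks start
        pass

    def teardown(self):
        # runs once when the test is stopped or the process quits
        pass

    def on_start(self):
        # unchanged: runs when a hatched locust starts this TaskSet
        pass

    def on_stop(self):
        # new: runs when a locust stops executing this TaskSet
        pass

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000
    max_wait = 3000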

@ad34

ad34 commented Mar 22, 2018

Awesome :) I am currently testing a websocket-based title, and it will help a lot because stopping the test doesn't close the websockets.

pancaprima pushed a commit to pancaprima/locust that referenced this issue May 14, 2018
…t file (locustio#59)

* expose client index to locust, so user can access it in the test file

* update logger info