Add worker_count = 1 to LocalRunner for parity with MasterRunner #2900

Merged: 3 commits into locustio:master from the fix/local-runner-worker-info branch on Sep 11, 2024

Conversation

@tarkatronic (Contributor)

This prevents the custom_messages example from failing when run locally, where environment.runner.worker_count and environment.runner.clients raise AttributeError.

Ref (the custom_messages example):

worker_count = environment.runner.worker_count
chunk_size = int(len(users) / worker_count)
for i, worker in enumerate(environment.runner.clients):
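
In essence, the change just gives LocalRunner the same attribute up front. A simplified sketch of the relevant part of locust/runners.py (illustrative, not the exact diff):

class LocalRunner(Runner):
    def __init__(self, environment) -> None:
        super().__init__(environment)
        # These attributes dont make a lot of sense for LocalRunner
        # but it makes it easier to write tests that work for both local
        # and distributed runs
        self.worker_count = 1
        self._local_worker_node = WorkerNode(id="local")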

@cyberw (Collaborator) commented Sep 10, 2024

Hmm... I understand the need in this example, but if I ask a LocalRunner how many workers/clients it has connected I'd expect it to be zero, not one, so this might add as much confusion as it solves...

@tarkatronic (Contributor, Author)

> Hmm... I understand the need in this example, but if I ask a LocalRunner how many workers/clients it has connected I'd expect it to be zero, not one, so this might add as much confusion as it solves...

I can see that perspective. The way I view it, though, when running locally there is one worker: the runner itself. Nothing happens without a worker, right? In fact, it actually does have a worker:

self._local_worker_node = WorkerNode(id="local")

The real goal here is to simplify load testing code that I'm writing. Currently, my only option is:

try:
    worker_count = environment.runner.worker_count
    workers = environment.runner.clients
except AttributeError:  # For running locally
    worker_count = 1
    workers = [environment.runner]

Which feels like a hacky workaround for a bug.
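
With worker_count present on LocalRunner, the first half of that workaround goes away; a sketch of the reduced version (the clients fallback stays, since LocalRunner has no connected workers):

worker_count = environment.runner.worker_count  # 1 when running locally
try:
    workers = environment.runner.clients
except AttributeError:  # LocalRunner: the runner itself stands in for a worker
    workers = [environment.runner]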

@tarkatronic (Contributor, Author)

This also makes for a more complete picture, in light of this comment directly above the change:

locust/locust/runners.py, lines 436 to 437 in 7c85b79:

# These attributes dont make a lot of sense for LocalRunner
# but it makes it easier to write tests that work for both local and distributed runs

@cyberw (Collaborator) commented Sep 11, 2024

Hmm... Yea, I guess worker_count = 1 makes sense in a way (but someone might just as well expect it to be zero!), but storing something in self.clients (which I think is what you were after, not self.workers, right?) is just too confusing for a runner that doesn't actually have any clients.

I don't think there's a clean way to handle this; it's probably best if you just check the type of the runner object and give LocalRunner special treatment as necessary.

@tarkatronic (Contributor, Author)

> Hmm... Yea, I guess worker_count = 1 makes sense in a way (but someone might just as well expect it to be zero!), but storing something in self.clients (which I think is what you were after, not self.workers, right?) is just too confusing for a runner that doesn't actually have any clients.
>
> I don't think there's a clean way to handle this; it's probably best if you just check the type of the runner object and give LocalRunner special treatment as necessary.

🤦🏻 Yes, self.clients is what I meant. That's what I get for a super quick vim patch.

So, that special code is exactly what I'm trying to avoid. I ran into this because I was refactoring code that had special handling for LocalRunner, meaning that I would have to test large pieces of code in production. I've managed it with this try/except, but as I mentioned, it feels like a workaround for a bug. Especially so after reworking my code to work exactly like the provided examples, only to find that it was broken.

All that said, this much debate over 2 small lines with no real impact on the code makes it seem not worth it. 🤷🏻

@cyberw (Collaborator) commented Sep 11, 2024

I get why this would make it easier for this particular use case, but I don't want LocalRunner to "lie" about its state.

LocalRunner has no workers (so saying it has one is questionable), and setting self.workers to a list containing itself could have very strange consequences down the line if it were ever used by someone not aware of this feature. It's not even the same type (WorkerNode) as in MasterRunner...

@tarkatronic (Contributor, Author)

@cyberw I've updated to remove self.workers (especially since that was the wrong name anyhow) and also fixed the example to work properly with LocalRunner.
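
With that, the merged change boils down to the single worker_count attribute. A quick way to see the resulting parity (a sketch; assumes the merged change, and constructs a bare Environment purely for illustration):

from locust.env import Environment

env = Environment(user_classes=[])  # no User classes needed just to inspect this
runner = env.create_local_runner()
print(runner.worker_count)  # 1, the same attribute MasterRunner exposes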

@cyberw merged commit b3e85ae into locustio:master on Sep 11, 2024 (16 checks passed).
@tarkatronic deleted the fix/local-runner-worker-info branch on September 11, 2024 at 16:11.
@cyberw changed the title from "fix: Add more worker info to LocalRunner for parity with others" to "fix: Add worker_count = 1 to LocalRunner for parity with MasterRunner" on Sep 15, 2024.
@cyberw changed the title to "Add worker_count = 1 to LocalRunner for parity with MasterRunner" on Sep 16, 2024.