
balance/recover the load distribution when new slave joins #970

Merged
2 commits merged into locustio:master on Mar 13, 2019

Conversation

@delulu (Contributor) commented Mar 1, 2019

With the Locust master and slave agents running in Kubernetes, Kubernetes guarantees that the agents stay available.

But when a slave agent crashes and restarts, it comes back with a different client id and has no knowledge of the user load the master previously assigned to it. As a result, the total number of running locusts ends up lower than expected.

So it seems better to rebalance the user load whenever a new client joins, so that the total number of running locusts stays at the count specified in the swarm request.
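A minimal sketch of the idea, assuming the master runner keeps a clients map and a start_hatching(locust_count, hatch_rate) method the way Locust's MasterLocustRunner did at the time; the on_client_ready hook and the exact names below are illustrative, not the code in this PR:

# Illustrative sketch only, not the actual change in this PR.
# When a slave (re)connects while a test is running, re-dispatch the
# originally requested totals so the load is split evenly across the
# slaves that are connected right now.
from locust.runners import SlaveNode, STATE_HATCHING, STATE_RUNNING  # names as in locust/runners.py of that era (assumption)

def on_client_ready(self, client_id):                # hypothetical hook name
    self.clients[client_id] = SlaveNode(client_id)   # register the (re)joined slave
    if self.state in (STATE_HATCHING, STATE_RUNNING):
        # start_hatching recomputes the per-slave share from the totals in the
        # swarm request, so the overall number of simulated users is restored.
        self.start_hatching(self.num_clients, self.hatch_rate)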

This PR also fixes an issue I noticed when running in web mode under Python 3; it turns out to be an inconsistency introduced in recv_from_client and send_to_client.
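For context, this is a bytes-versus-str mismatch: under Python 3 the zmq ROUTER socket hands the routing id over as bytes, so recv_from_client and send_to_client have to agree on whether the client id gets decoded. A minimal sketch of a consistent pair, assuming the multipart, ROUTER-based rpc server Locust used at the time; it is an illustration, not the actual diff:

# Illustrative sketch, not the actual fix: keep the client id as str on both paths.
from locust.rpc.protocol import Message  # Message lives here in that era (assumption)

def recv_from_client(self):
    data = self.socket.recv_multipart()
    addr = data[0].decode()                      # bytes from zmq -> str, decoded once
    msg = Message.unserialize(data[1])
    return addr, msg

def send_to_client(self, msg):
    # encode the same str id back to bytes for the ROUTER socket
    self.socket.send_multipart([msg.node_id.encode(), msg.serialize()])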

Any thoughts or comments?

@delulu (Contributor, Author) commented Mar 1, 2019

@Jonnymcc for awareness

@cgoldberg (Member) commented:

> also this PR fixes some issue I noticed when running in Python 3 with web mode

Can you move those to a separate PR?

@Jonnymcc (Contributor) left a comment:

Looks good, I was thinking this would be a nice improvement to have.

self.assertEqual(msg.type, 'test')
self.assertEqual(msg.data, 'message')

def test_client_recv(self):
-    sleep(0.01)
+    sleep(0.1)
Contributor commented:

Was the sleep not long enough?

@delulu (Contributor, Author) replied:

No, it wasn't long enough on my side, and besides, there's no harm in setting a longer time here.
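Presumably the message has to make it through the zmq sockets (and any gevent scheduling) before the assertion can pass, hence the fixed wait. A rough, hypothetical illustration of the pattern (server, Message, and received are stand-in names, not the actual test body):

from gevent import sleep
# Hypothetical shape of such a test: a message is sent through the zmq pair,
# and the test yields for a fixed time before checking that it arrived.
def test_client_recv(self):
    server.send_to_client(Message('test', 'message', 'client-1'))
    sleep(0.1)   # 0.01 s was occasionally too short on some machines
    self.assertEqual(received[-1].type, 'test')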

@delulu (Contributor, Author) commented Mar 4, 2019

> also this PR fixes some issue I noticed when running in Python 3 with web mode
>
> can you move those to a separate PR?

Sure, here is the separate PR: #972

@delulu (Contributor, Author) commented Mar 13, 2019

@cgoldberg please review this PR and merge it into master; let me know if you have any concerns, thanks!

@cgoldberg (Member) left a comment:

LGTM.. thanks

@cgoldberg cgoldberg merged commit f467cf8 into locustio:master Mar 13, 2019