
Distributed load test k8s and openshift #1100

Closed
emilorol opened this issue Sep 30, 2019 · 7 comments

@emilorol

Description of issue

When running a distributed test on k8s or OpenShift with autoscaling enabled, the autoscale feature that brings up more slaves based on load resets the running test.

Expected behavior

Locust master will send load to the new slaves without resetting the existing slaves.

Actual behavior

The test is reset whenever the master adds a new slave to the pool.

Environment settings

  • OS: Alpine 3.9
  • Python version: 3.6
  • Locust version: 0.11.0

Steps to reproduce (for bug reports)

openshift: https://github.com/emilorol/locust-openshift

k8s: https://github.com/karol-brejna-i/locust-experiments

@cgoldberg
Member

is there a question or issue here?

@emilorol
Author

An issue. I am using Locust in OpenShift with autoscaling on. Every time a new slave is added to a running test, the test resets. Also, after the slaves are destroyed, they are reported as missing instead of simply being removed. The idea behind this is to scale up at the start of the test and scale down when done, all automatically.

@cgoldberg
Member

This functions as designed.

@max-rocket-internet
Contributor

@emilorol I also tried to start a discussion about autoscaling slaves: #1066

Issue was also abruptly closed.

I think the way the master hands out work to the slaves would need to change fundamentally. Currently, the number of clients and the hatch rate are simply divided by the number of slaves, and then the slaves start. To enable even rudimentary autoscaling, this process would need to be more synchronised: for example, the master would need to adjust the number of clients running on each slave whenever a slave joins or leaves.
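To illustrate, here is a minimal sketch (hypothetical code, not Locust's actual implementation) of the static split described above, and what a rebalancing step would have to do instead. The function names and signatures are my own for illustration:

```python
# Hypothetical sketch of the work-distribution scheme described in the
# comment above: total clients and hatch rate are divided once among the
# slaves known at start time. A slave that joins later gets no share
# unless the whole split is recomputed (today, that means a test reset).

def split_work(total_clients, hatch_rate, num_slaves):
    """Divide clients and hatch rate evenly across slaves at test start."""
    base = total_clients // num_slaves
    remainder = total_clients % num_slaves
    # Each slave gets the base share; spread the remainder over the first few.
    shares = [base + (1 if i < remainder else 0) for i in range(num_slaves)]
    per_slave_hatch = hatch_rate / num_slaves
    return shares, per_slave_hatch

def rebalance(total_clients, hatch_rate, num_slaves):
    """What autoscaling-aware distribution would need: recompute the
    shares whenever a slave joins or leaves, and tell each running slave
    its new target, instead of restarting the whole test."""
    return split_work(total_clients, hatch_rate, num_slaves)

if __name__ == "__main__":
    # 100 clients at hatch rate 10 across 3 slaves...
    print(split_work(100, 10, 3))
    # ...then a 4th slave joins: recompute rather than reset.
    print(rebalance(100, 10, 4))
```

The hard part, which the sketch glosses over, is the second half of `rebalance`: the master would have to instruct already-running slaves to shed or add clients in place, which is exactly the synchronisation that does not exist today.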

@emilorol
Author

emilorol commented Oct 1, 2019

@max-rocket-internet

Just by looking at the project's main page, I noticed that there is no financial support, not even a "donation" button, and that might be the real reason behind the feature freeze. It is a shame, given the potential this project has to become a real company and offer paid services, that it does nothing about it.

@max-rocket-internet
Contributor

that might be the real reason behind the feature freeze

I don't think that's it. There's plenty of open-source projects that are actively developed without donations.

the potential this project has to become a real company and offer pay services

We are on different pages here 😅 I really don't want locust to become a company with paid services!

If you want that, you can check out Load Impact and their tool k6.

@emilorol
Author

emilorol commented Oct 2, 2019

I agree with you, but the reality is that new features are not even on the back burner. I really want to be wrong here.
