Description of issue
I tried increasing the number of concurrent users to 100,000 and decreasing the min wait time to 1 ms and the max wait time to 2 ms. Even after doing so, I couldn't exceed 200 requests/second.
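For reference, in Locust 0.8 the wait times are set as class attributes (in milliseconds) on the locust class. A minimal sketch of the configuration described above — the endpoint path and class names are placeholders, not from the original report:

```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        # Placeholder endpoint; substitute the path under test
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    # Locust 0.8 wait times are expressed in milliseconds
    min_wait = 1
    max_wait = 2
```

Note that even with near-zero wait times, a single Python process is ultimately bounded by CPU, so lowering the waits alone will not push throughput past what one load generator can drive.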
Expected behavior
I could achieve a hundred thousand requests/second.
Actual behavior
I couldn't exceed 200 requests/second.
Environment settings (for bug reports)
OS: Ubuntu 14.04
Python version: 2.7
Locust version: 0.8.1
What happens when you do this? Do you get errors? Have you increased the OS limit on the max number of open files? How much CPU does the locust/python process use?
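On Linux, the per-process open-file limit can be inspected and raised from within Python via the standard-library `resource` module (the target of 65535 below is an assumption; pick a value that covers your expected connection count):

```python
import resource

# Read the current soft and hard limits on open file descriptors
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Raise the soft limit toward the hard limit. 65535 is an arbitrary target;
# a non-root process cannot raise the soft limit above the hard limit.
target = 65535
new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print("open-file soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

The same effect can be had from the shell with `ulimit -n` before launching Locust; each simulated user holding an open connection consumes a file descriptor, so the default limit (often 1024) is quickly exhausted at high user counts.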
Without knowing the specifics of your environment and tests, it's impossible to say whether this is feasible. It depends on your machine specs, network topology, configuration, number of slaves running, and many other factors. You need to monitor all resources to figure out where your bottleneck is.
btw, I hope you are running in distributed mode with many slave machines. You'll never get even close to 100,000 concurrent users from a single load generator.
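As a rough sketch, distributed mode in Locust 0.8 is started with one master process and several slave processes; the locustfile name and the master's address below are placeholders:

```shell
# On the master machine (placeholder locustfile name)
locust -f locustfile.py --master

# On each slave machine, pointing at the master's address
locust -f locustfile.py --slave --master-host=192.168.0.10
```

The slaves generate the actual load and report their statistics back to the master, so aggregate throughput scales with the number of slave machines rather than with settings on a single node.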
I don't really see anything actionable here. Any expectation that a single Locust node can produce hundreds of thousands of requests per second really isn't realistic.