threadpool: spawn new tasks onto a random worker #683
Conversation
Interesting... This might be trickier in
Yep, that's why I'm leaning towards always spawning a new task onto a random worker - just in case.
I'm not as experienced with the real-world use of Tokio, but I wonder: is the cost of spawning something we should be particularly worried about? My gut feeling is that tasks are spawned relatively rarely and that the cost of spawning tends to be negligible in the grand scheme of things. On the other hand, I'd wager tasks are yielded much more often than they're spawned, so we should strive to optimize task switching as much as possible.

Yeah, you could be right that spawning is relatively rare on the critical path, so this may just be a non-issue :)
Looks fine to me, I just had a question about the dependency version bump in `tokio-reactor/Cargo.toml`, but not a blocker.
tokio-reactor/Cargo.toml
```diff
@@ -27,9 +27,9 @@
 mio = "0.6.14"
 num_cpus = "1.8.0"
 parking_lot = "0.6.3"
 slab = "0.4.0"
-tokio-executor = { version = "0.1.1", path = "../tokio-executor" }
+tokio-executor = { version = "0.1.5", path = "../tokio-executor" }
 tokio-io = { version = "0.1.6", path = "../tokio-io" }
```
Why was this bumped? Were the minimal versions wrong?
That was just a desperate attempt while I was struggling to patch the dependencies. :) In the end, moving `[patch.crates-io]` to the root `Cargo.toml` fixed the issue.

Thanks for reporting this to rust-lang/cargo!
```diff
@@ -89,3 +89,20 @@
 serde = "1.0"
 serde_derive = "1.0"
 serde_json = "1.0"
 time = "0.1"
+
+[patch.crates-io]
```
I filed rust-lang/cargo#6126 to address this.
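For context: Cargo only honors a `[patch]` section in the workspace root manifest; the same section in a member crate's `Cargo.toml` is ignored, which is why moving it to the root fixed things. A minimal sketch of such a root-level section (the crate entries below are hypothetical, not the PR's actual list):

```toml
# Root Cargo.toml of the workspace; [patch.crates-io] here applies to
# every member crate. The entries below are illustrative only.
[patch.crates-io]
tokio-executor = { path = "tokio-executor" }
tokio-io = { path = "tokio-io" }
tokio-reactor = { path = "tokio-reactor" }
```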
I'm using this patch to bench the hyper hello example.

With this patch:
Requests/sec: 74679.24

Against:
Hyper is doing basic spawning here, so in theory it should be distributed across the runtime. It could be that multi threading provides no added benefit in this benchmark, but I am not 100% sure what is going on.
Here's the same hyper benchmark:

Baseline:
PR #660:
This PR:
@stjepang Interesting. What is the system that you are running the benchmarks on?

@stjepang either way, I have approved the PR, so feel free to merge 👍

My machine:
Motivation
An IO handle is assigned to a reactor the first time it is polled, and once assigned, it stays with that reactor forever.
Now that every worker thread in `tokio-threadpool` drives its own reactor, we'd like to distribute the IO work among reactors as equally as possible. This is only possible if each newly spawned task is polled for the first time on a random worker.

Currently, when a new task is spawned inside the threadpool, it always goes into the local worker's queue. If a task spawns a bunch of child IO tasks, a disproportionate number of them will be polled for the first time by the current worker thread. Work stealing doesn't help much in this case: it turns out many tasks are polled before other worker threads get a chance to steal them, skewing the distribution of IO handles onto reactors, as the sketch below illustrates.
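For illustration (this is not code from the PR), a typical tokio 0.1-style accept loop where one parent task spawns a child task per connection:

```rust
extern crate tokio;

use tokio::io;
use tokio::net::TcpListener;
use tokio::prelude::*;

fn main() {
    let addr = "127.0.0.1:8080".parse().unwrap();
    let listener = TcpListener::bind(&addr).unwrap();

    let server = listener
        .incoming()
        .map_err(|e| eprintln!("accept error: {}", e))
        .for_each(|socket| {
            // Each accepted connection becomes a child task. With
            // local-queue spawning, these children tend to be polled
            // first by the accepting worker, so their IO handles all
            // bind to that single worker's reactor.
            let (reader, writer) = socket.split();
            let echo = io::copy(reader, writer)
                .map(|_| ())
                .map_err(|e| eprintln!("copy error: {}", e));
            tokio::spawn(echo);
            Ok(())
        });

    tokio::run(server);
}
```

Here the accepting worker is the one that first polls every echo task, so all of those connections' IO handles end up on its reactor.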
Solution
When spawning a new task, always pick a random worker and push the task into that worker's inbound queue.
Note: this heuristic is not perfect, but seems to be an improvement over the current task spawning strategy. We might tweak it further in the future.
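A minimal sketch of this strategy, assuming a simplified single-threaded pool (the `Pool`/`Worker` types and the `rand` usage are illustrative, not tokio-threadpool's actual internals):

```rust
extern crate rand; // assumed dependency for this sketch

use rand::Rng;
use std::collections::VecDeque;

struct Task; // stand-in for a spawned future

// Stand-in for a worker's "inbound" queue of newly submitted tasks;
// the real queue is concurrent, this sketch is single-threaded.
struct Worker {
    inbound: VecDeque<Task>,
}

struct Pool {
    workers: Vec<Worker>,
}

impl Pool {
    fn spawn(&mut self, task: Task) {
        // Instead of pushing onto the current worker's local queue,
        // pick a worker uniformly at random and push the task into
        // that worker's inbound queue, so first polls (and therefore
        // reactor assignments) spread across all workers.
        let idx = rand::thread_rng().gen_range(0, self.workers.len());
        self.workers[idx].inbound.push_back(task);
    }
}

fn main() {
    let mut pool = Pool {
        workers: (0..4).map(|_| Worker { inbound: VecDeque::new() }).collect(),
    };
    pool.spawn(Task);
}
```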
I've expanded the `tokio-reactor` benchmark.

Here's our baseline using `tokio-io-pool`:

This is `tokio-threadpool` before PR #660 (at commit 886511c), with a single global reactor:

After PR #660 (commit d35d051), with a reactor per worker thread:
Finally, this PR, with randomized task submission:
Still not as fast as `tokio-io-pool`, but we're getting there.

cc @jonhoo