
"Next task" optimization #64

Open
notgull opened this issue Oct 17, 2023 · 2 comments
@notgull
Member

notgull commented Oct 17, 2023

In many actor systems (most notably Erlang), executors are optimized by giving each executor queue a "next task" slot. Whenever a new Runnable is scheduled, it is first pushed to this slot, separate from the normal queue; if a Runnable is already in the slot, the new one is pushed to the back of the queue instead. When it comes time to read from the queue, the slot is checked and popped before the normal queue is.

This is a worthwhile optimization because when a task wakes another task that should run immediately, the woken task is queued up to run next, which emulates sequential computation very well.
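The scheme described above can be sketched as follows. This is a minimal illustration with hypothetical names (`LocalQueue`, `push`, `pop`), not the actual smol-rs or Erlang implementation:

```rust
use std::collections::VecDeque;

// A worker-local queue with a "next task" slot in front of a FIFO queue.
struct LocalQueue<T> {
    next: Option<T>,    // checked and popped before the normal queue
    queue: VecDeque<T>, // the normal FIFO queue
}

impl<T> LocalQueue<T> {
    fn new() -> Self {
        Self { next: None, queue: VecDeque::new() }
    }

    // Scheduling tries the slot first; if a task already occupies it,
    // the new task falls back to the rear of the normal queue.
    fn push(&mut self, task: T) {
        if self.next.is_none() {
            self.next = Some(task);
        } else {
            self.queue.push_back(task);
        }
    }

    // Popping drains the slot before touching the normal queue.
    fn pop(&mut self) -> Option<T> {
        self.next.take().or_else(|| self.queue.pop_front())
    }
}

fn main() {
    let mut q = LocalQueue::new();
    q.queue.extend(["old1", "old2"]); // a backlog already in the queue
    q.push("woken"); // a freshly woken task lands in the empty slot
    assert_eq!(q.pop(), Some("woken")); // it jumps ahead of the backlog
    assert_eq!(q.pop(), Some("old1"));
    println!("ok");
}
```

The effect is that a task woken while the slot is empty runs before any backlog in the queue, which is exactly the "wakes another task to be immediately executed" case.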

@nullchinchilla

An important caveat of this slot is that it cannot be stolen by other workers, which causes problems in scenarios where correctness depends on stealing. The most obvious example is something like this:

```rust
block_on(spawn(async { block_on(spawn(async { 1 })) }))
```

Here, if spawn schedules onto the current thread's local queue, both tasks start on the same thread. The inner task cannot run until the outer task yields the thread, but the outer task never yields: it blocks inside block_on, which cannot return until the inner task completes.

This admittedly pathological scenario only makes progress if some other thread can come in and steal away the inner task.
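To make the hazard concrete, here is a sketch (hypothetical `Worker` and `steal` names, not smolscale's actual API) of why a slot that is excluded from stealing hides a runnable task from would-be thieves:

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Hypothetical worker-local state: the "next task" slot plus a stealable queue.
struct Worker {
    slot: Mutex<Option<&'static str>>,      // NOT visible to thieves
    queue: Mutex<VecDeque<&'static str>>,   // stealable FIFO queue
}

impl Worker {
    // A stealing worker only inspects the FIFO queue; whatever sits in
    // the slot is invisible to it.
    fn steal(&self) -> Option<&'static str> {
        self.queue.lock().unwrap().pop_front()
    }
}

fn main() {
    let w = Worker {
        slot: Mutex::new(Some("inner task")), // woken task parked in the slot
        queue: Mutex::new(VecDeque::new()),
    };
    // Another thread finds nothing to steal, even though a task is runnable:
    assert_eq!(w.steal(), None);
    assert!(w.slot.lock().unwrap().is_some());
}
```

If the worker that owns the slot is itself blocked (as in the nested block_on example), the task in the slot can never run anywhere, which is the deadlock described above.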

@nullchinchilla

See geph-official/smolscale#2

notgull added a commit that referenced this issue Apr 25, 2024
This commit attempts to re-introduce the thread-local optimization. It
stores the local queues in a multiplex hash map keyed by the thread ID
that it started in. It also sets it up so the thread can be woken up by
a unique runner ID.

cc #64

Signed-off-by: John Nunley <[email protected]>
notgull added a commit that referenced this issue May 14, 2024
This commit attempts to re-introduce the thread-local optimization. It
stores the local queues in a multiplex hash map keyed by the thread ID
that it started in. It also sets it up so the thread can be woken up by
a unique runner ID.

cc #64

Signed-off-by: John Nunley <[email protected]>