async/await/suspend/resume #6025
Comments
I'm willing to try implementing async/await/suspend/resume for stage2, as I require them for a project I'm working on. The issue is that I don't really know where to start. The AIR only has async_call, async_call_alloc, suspend_begin, and suspend_end instructions. By looking at stage1 it seems like the await/suspend/resume instructions are missing. Should I try to just add the instructions and replace calls to …? Furthermore, is async implemented in a similar way in stage2 as in stage1? (Basically, is stage1 a good representation of how stage2 implements and uses frames, async calls, suspends, resumes, etc.?) Edit: I've been using the stage2-async branch, assuming that's where the async development is being done.
This commit removes async/await/suspend/resume from the language reference, as that feature does not yet work in the self-hosted compiler. We will be regressing this feature temporarily. Users of these language features should stick with 0.10.x with the `-fstage1` flag until they are restored. See tracking issue #6025.
I've been following the WASI development and it seems to be going great! That being said, I am currently working on a new project and I am using some specific stage2 features. I am not using async yet, but I'd love to introduce it soon. Can you provide a very rough estimate of when this is planned to be merged into master? It is just for general planning (no pressure). Cheers!
*Async JS* For now only callback style is handled (Promise support is planned later). We use a persistent handle on the v8 JS callback to call it after retrieving the event from the kernel, as the parent JS function has finished and therefore local handles have already been garbage collected by v8. *Event Loop* We do not use the event loop provided in the Zig stdlib but instead Tigerbeetle IO (https://github.com/tigerbeetledb/tigerbeetle/tree/main/src/io). The main reason is to have a strictly single-threaded event loop, see ziglang/zig#1908. In addition, the design of Tigerbeetle IO, based on io_uring (for Linux, with a wrapper around kqueue for macOS), seems to be the right direction for IO. Our loop provides callback-style native APIs. Async/await-style native APIs are not planned until the Zig self-hosted compiler (stage2) supports concurrency features (see ziglang/zig#6025). Signed-off-by: Francis Bouvier <[email protected]>
There are now very few stage1 cases remaining:
* `cases/compile_errors/stage1/obj/*` currently don't work correctly on stage2. There are 6 of these, and most of them are probably fairly simple to fix.
* `cases/compile_errors/async/*` and all remaining `safety/*` depend on async; see ziglang#6025.
Resolves: ziglang#14849
Please, don't go down the horrendous path of async. It'll be a massive time sink, the ABI for function calls will never be the same, resulting in function colors, and the entire ecosystem will either be split or all async. Could you look into stackful continuations and effects instead?
Do you mean something like setjmp/longjmp? I think that in general this is a good discussion to have. I did plenty of Rust, and I hate their async/await. To me, it comes down to use cases. We must find what problem we (the developers that use the language) want to solve. In #6025 (comment) I explain a use case that would be covered by a non-local jump. But there are more uses of async than that. As a note, in practice, JavaScript is where I do most of my async work. I think Apple's APIs, such as dispatch and run loop, can be of inspiration when thinking about concurrent API design.
Why does a function's ABI introduce function colours in any practical way? The only thing it really means is that you can't get a default-callconv function pointer to an async function; if you need to support async functions somewhere that you're using function pointers, then you can use `@asyncCall`. Per my understanding, the goal of Zig's colourless async can be framed as: if I have a non-async project, I can throw an async call somewhere into it - potentially causing hundreds or thousands of functions to in turn become async - and it'll basically Just Work.
A huge benefit of colourless async is that it should help avoid this problem. If Alice writes an async version of a package, and Bob a synchronous version, they should both be able to be plugged straight into any project - regardless of whether or not it is already async - and everything should work with only minor changes.
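As an illustration of the claim above, here is a minimal sketch in the old stage1 syntax (the feature this issue tracks re-implementing; it does not build with the current self-hosted compiler, and `ticker` is a made-up example function): the callee is written once, and a caller can invoke it either as a plain call or through async/await.

```zig
const std = @import("std");

fn ticker(step: u32) u32 {
    // In evented I/O mode this body could suspend internally;
    // in blocking mode it simply runs to completion. Either way,
    // the function is written exactly once.
    return step + 1;
}

pub fn main() void {
    // Plain call: fine whether or not ticker ever suspends.
    const a = ticker(1);

    // Explicit async call and await: also fine even though ticker
    // happens to never suspend, because awaiting a completed frame
    // just returns its result.
    var frame = async ticker(2);
    const b = await frame;

    std.debug.print("{} {}\n", .{ a, b });
}
```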
It seems you have answered your own question
Again, you've proved my point: the ecosystem would be divided into async and normal code.
Continuations can be implemented by using setjmp and longjmp, yes. The Wikipedia page for Continuations does a good job of explaining them, as well as providing a list of languages that support them. It's important to understand what async is and what problems it solves.
Delimiting computation means splitting up a function into multiple parts, providing the ability for each part to be executed in multiple ways, and allowing for control flow to be more flexible. Async/await is what we usually call an implementation of a subset of stackless coroutines. Coroutines are an abstraction that helps with delimiting computation; however, compared to just async/await they have the additional benefit of being able to yield multiple times. These are also usually stackless. Continuations are a much lower-level abstraction for delimiting computation; however, they are much more powerful, providing the ability to capture the control state of the current computation as a first-class value. This can be used to implement coroutines, generators, and even effects. Continuations are usually implemented by capturing the stack, or by having a split-stack system. This means that there is just one ABI for calling functions, the same one everyone else uses, which allows calling into C, or any other language, without any issue.
There is also a good Wikipedia page for effect systems. Effects are in essence a control-flow method. They can be used to implement cooperative scheduling (as with async + Future/Promise), but also much more than that, like function purity, because they can be used to abstract over "colors". Here are some examples of effects: concurrency, I/O, error handling, allocation. Continuations and effects complement each other very well, and both have been very well studied for a long time. Continuations are also implemented in quite a few mainstream languages, while effects are just now gaining more attention in languages like OCaml; JavaScript has a proposal for algebraic effects, and there are new languages that experiment with them, like Koka, Eff, and Unison.
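The "yield multiple times" property of coroutines described above maps directly onto Zig's old stage1 suspend/resume, which is what this issue tracks re-implementing. A minimal sketch in stage1 syntax (it does not compile with the current self-hosted compiler; the generator and its out-pointer "yield channel" are made up for the example):

```zig
const std = @import("std");

fn fibGenerator(out: *u64) void {
    var a: u64 = 0;
    var b: u64 = 1;
    while (true) {
        out.* = a; // "yield" the current value through the out pointer
        suspend {} // hand control back to whoever resumes this frame
        const next = a + b;
        a = b;
        b = next;
    }
}

pub fn main() void {
    var value: u64 = undefined;
    var frame = async fibGenerator(&value); // runs until the first suspend
    var i: usize = 0;
    while (i < 5) : (i += 1) {
        std.debug.print("{}\n", .{value}); // prints 0, 1, 1, 2, 3
        resume frame; // advance the generator to its next suspend point
    }
    // The frame is simply left suspended and never resumed again.
}
```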
Is not the whole point of Zig's async/await implementation and its focus on
You would still have to write basically two versions (usually interspersed into a single file/function) to account for the different behavior between async and non-async code. As an example, running a producer/consumer-type async function in blocking mode would result in a deadlock if you don't have if/else statements to handle this case. It would be really nice to have continuations built into the language, as they are a simple but very powerful primitive. I've played around with call/cc before in Scheme, and you can build all sorts of effects on it, as @noonien points out.
It would be nice to see what is being proposed explicitly w.r.t. effects and continuations, rather than links to Wikipedia articles and such. If it's anything that would show up in a function signature, that would clearly constitute coloring. It seems that effects, for instance, would have to show up in the return type of a function.
The question is whether this shows up in practice. It seems this is the one area where async might introduce coloring, unless taking a function pointer to async code isn't all that common anyway.
Yeah, a concrete proposal would be nice. If you (anyone here!) have a specific idea which you believe to be superior to async, please open a separate issue with a proposal (use the "blank issue" button), which should detail:
Without a proposal, a plea to eliminate async will not be taken seriously, since it is a suggestion to remove a useful language feature with no good alternative. If you think there are fundamental problems with async but do not have a specific alternative in mind, you can open a proposal to remove async, but it will require heavy justification. Regardless, let's cease discussion of this here - if a proposal is made, its merits and problems can be talked about there. This is a tracking issue for the re-implementation of async, and hence not really the right place for this conversation.
If you don't have an open source project written in Zig with 100+ commits, don't bother
I'm not smart enough to write a proposal, I'm too old-school in that I much prefer green/virtual threads over async where it makes sense, and I certainly don't have the influencer levels that @andrewrk demands on projects ;) However, I do value the vision that Zig holds so far and want to mention one thing that has been alluded to already and that I think is really important in the 'real' world: a language should never be async all the way down. One reason I moved away from a certain other 'modern' language is that the majority of 3rd-party libraries became async-only. They required you to use async, as the authors only cared about that and "why wouldn't you write async". I didn't want to use async code, and didn't have a need to, but suddenly I was having to make changes to use a popular library. Sure, it was possible to work around, but then the code became more complex to maintain. Not great for something that promised 'zero cost'. I trust Andrew to make the right decisions - it's his language after all, and that's why Zig is so easy to read whilst being as powerful as it is. I just think it's also important to think about the 'average' user in all of this, not just the compiler or language experts, or the college theorists... but the people that may want to use the language and not find that their code becomes less maintainable over time because one feature now means multiple ways to write code. One day, when I've fully gotten up to speed with Zig, I'd hope to contribute on a more formal level, but for now I just want to see it become the language I'm growing to love. There is so much potential, and doing things 'right' is so important. I learned about
I also don't have enough knowledge of Zig (or of levels this low) to write a proposal, but I saw a parallel between task scheduling and memory allocation. To allocate you first need an allocator of some kind, like an arena or bump allocator. Then async becomes a method of a scheduler, or a subtype, that returns a handle to the task that can be used to await() the result or cancel() the task, or something else depending on the scheduler (maybe you can yield to another handle like a coroutine). Here is an extremely rough, short, and not well-thought-out example:
const std = @import("std");
pub fn main() void {
var scheduler = std.concurrent.Async.init(std.concurrent.event_loop);
defer scheduler.deinit(); // waits for all tasks to be terminated or canceled, maybe can take a timeout param
var task_runner = scheduler.runner(); // can run async functions but not, for example, deinit(), so it is safer to pass around
var task_handle = task_runner.async(do_things());
// do other things
var result = task_handle.wait();
// could be task_handle.cancel()
// maybe different methods depending on the type of scheduler
}
fn do_things() !Result {
// do all the concurrent things
}
I am not sure if mirroring the allocation paradigm/pattern will result in function coloring by explicitly passing "schedulers" as is done for allocators. At the beginning I thought this was too wild and did not post it, but then I stumbled upon this post about nurseries as a concept for structured concurrency that somewhat, I feel, validated this idea; if you squint, the "scheduler" in the example above looks like a nursery in the linked post. I like how deterministic the nursery concept is and I think it would be a good fit for Zig. I am sure there are thousands of small devilish details to consider that I am not even aware of, but maybe this path is worth investigating, unless it has already been discarded.
I really like the postfix `.await` style:
var some_value = (await (await async_func()).some_other_thing()).finally()
var some_value = async_func().await.some_other_thing().await.finally()
Would it be possible to adopt this kind of syntax?
The original async syntax Zig had would not be affected by this, as explained in this blog post from 2020: https://kristoff.it/blog/zig-colorblind-async-await/ I don't know if the syntax idea has changed, but I really liked that Zig just flipped the async/await function call usage, so that for the common case, calling a non-async function and an async function would have the same syntax:
const some_value = async_func().some_other_thing().finally()
// Equivalent to:
const frame = async async_func()
const other_frame = async (await frame).some_other_thing()
const some_value = (await other_frame).finally()
// Equivalent to:
const some_value = (await async (await async async_func()).some_other_thing()).finally()
Though, in this reversed case where you want to grab the async frame, maybe then it could be a field named `async`:
const frame = async_func.async()
const other_frame = frame.await().some_other_thing.async()
const some_value = other_frame.await().finally()
// Equivalent to:
const some_value = async_func.async().await().some_other_thing.async().await().finally()
That syntax has not changed, and (if async is re-implemented) will not change, because it's required for colorless async. So, yes, we don't need a postfix `.await`.
I think you could make:
const asyncfn = std.async(syncfn);
const result = asyncfn()
Good day, any ETA for this?
Please don't add noise like this to the issue tracker.
If you can provide a particular reason you think Zig should retain async functionality (or not) -- especially a concrete use case -- then feel free to give it. Otherwise, rest assured that the core team will get to this issue with time.
Hello everyone, I have a problem; I don't know if it is suitable for this topic.
@JiaJiaJiang I am not sure if this will fix your issue, but I hacked something that can turn async JS calls into sync Zig calls. On the Zig side, have something like this:
extern fn send_recv(
buf: [*]const u8,
buf_len: usize,
result: [*]u8,
result_len: *usize,
) u8;
Then, on the JS side (in your WASM thread that you spawn in a web worker), bind a function like this one:
return function (buf_ptr, buf_len, result_ptr, result_len_ptr) {
// instance is created with something like this WebAssembly.instantiate(...).instance
const mem = get_memory_buffer() // return instance.exports.memory.buffer
const view = get_memory_view() // return new DataView(instance.exports.memory.buffer)
const ctx = get_shared_context() //see below
const data = new Uint8Array(mem, buf_ptr, buf_len)
ctx.lock()
ctx.write(data)
ctx.client_notify()
ctx.unlock()
ctx.wait_for_server()
ctx.lock()
const result = ctx.read()
ctx.unlock()
const result_len = view.getUint32(result_len_ptr, true)
if (result.length === 0) {
return 1// error codes for zig
}
if (result.length > result_len) {
return 2 // error codes for zig
}
view.setUint32(result_len_ptr, result.length, true)
const dest = new Uint8Array(mem, result_ptr, result.length)
dest.set(result)
return 0// error codes for zig
}
In another web worker, do something like this:
const step = async function () {
const ctx = get_shared_context() // send the same context to both workers
if (ctx.wait_for_client(10) !== true) {
step()
return
}
ctx.lock()
const request = ctx.read()
// process request.buffer, you can pass JSON commands, function names... encode it the way you like
const response = await whatever_process_request(request) // this is where the magic happens as it turns an async call to a sync call
ctx.write(new Uint8Array(response))
ctx.server_notify()
ctx.unlock()
step()
}
step() // this starts an infinite loop
A shared context is something I threw together to allow syncing two threads:
export default function SharedContext(buffer) {
if (!buffer) {
throw new Error("Buffer must be a shared buffer")
}
const META = new Int32Array(buffer, 0, 4)
const LOCK = 0
const CLIENT_NOTIFY = 1
const SERVER_NOTIFY = 2
const BUF_LEN = 3
// LOCK values
const UNLOCKED = 0
const LOCKED = 1
// NOTIFY values
const OFF = 0
const ON = 1
const DATA = new Uint8Array(buffer, 16) // start at offset 16
function write(buf) {
if (buf.length > DATA.length) {
return 1
}
DATA.set(buf, 0)
Atomics.store(META, BUF_LEN, buf.length)
return 0
}
function writeU32(n) {
const buf = new Uint8Array(4)
new DataView(buf.buffer).setUint32(0, n, true)
return write(buf)
}
function lock() {
while (true) {
Atomics.wait(META, LOCK, LOCKED)
if (
Atomics.compareExchange(META, LOCK, UNLOCKED, LOCKED) ===
UNLOCKED
) {
Atomics.notify(META, LOCK)
break
}
}
}
function unlock() {
Atomics.store(META, LOCK, UNLOCKED)
Atomics.notify(META, LOCK)
}
function read() {
const len = Atomics.load(META, BUF_LEN)
return DATA.slice(0, len)
}
function readU32() {
const buf = read()
return new DataView(buf.buffer).getUint32(0, true)
}
function client_notify() {
Atomics.store(META, CLIENT_NOTIFY, ON)
Atomics.notify(META, CLIENT_NOTIFY)
}
function server_notify() {
Atomics.store(META, SERVER_NOTIFY, ON)
Atomics.notify(META, SERVER_NOTIFY)
}
function wait_for_client(timeout) {
if (Atomics.wait(META, CLIENT_NOTIFY, OFF, timeout) === "timed-out") {
return false
}
Atomics.store(META, CLIENT_NOTIFY, OFF)
return true
}
function wait_for_server(timeout) {
if (Atomics.wait(META, SERVER_NOTIFY, OFF, timeout) === "timed-out") {
return false
}
Atomics.store(META, SERVER_NOTIFY, OFF)
return true
}
return {
buffer,
lock,
unlock,
write,
read,
client_notify,
server_notify,
wait_for_client,
wait_for_server,
}
}
Create it like this:
This is something I threw together to unblock my project; I didn't analyze the performance, but it works well enough.
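For completeness, here is a hypothetical sketch (not part of the comment above) of what the Zig-side caller of that `send_recv` import could look like. Only the extern signature is taken from the comment; the `blockingRequest` wrapper, buffer handling, and error names are assumptions.

```zig
const std = @import("std");

// Same import as declared in the comment above.
extern fn send_recv(
    buf: [*]const u8,
    buf_len: usize,
    result: [*]u8,
    result_len: *usize,
) u8;

/// Hypothetical wrapper: sends `request` and returns the slice of
/// `result_buf` that the JS side filled in.
pub fn blockingRequest(request: []const u8, result_buf: []u8) ![]u8 {
    // The JS shim reads this as the capacity, then overwrites it
    // with the actual response length.
    var result_len: usize = result_buf.len;
    const rc = send_recv(request.ptr, request.len, result_buf.ptr, &result_len);
    return switch (rc) {
        0 => result_buf[0..result_len],
        1 => error.EmptyResponse, // the `return 1` path in the JS shim
        2 => error.ResponseTooLarge, // the `return 2` path in the JS shim
        else => error.UnexpectedStatus,
    };
}
```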
@kuon Thank you for your reply. |
Please have this discussion in a Zig community instead. This issue exists to track the implementation of Zig's async/await feature in the self-hosted compiler. The issue tracker isn't for questions. |
@mlugg I disagree that this discussion is not relevant to this thread. I think it provides good insights on real-world usage and can help prioritize this issue and decide how it should be implemented. I use Zig in a fairly large and popular app through WASM, and I was able to work around the missing async support. With that said, I agree that the issue tracker should not be used for a ping/pong kind of discussion, as the essence of the issue can be highly diluted, and I am sorry if my participation made it go that way.
I'm sorry, I'm not an expert in asynchronous programming, but tell me why it's impossible to add a runtime like Go's, with green threads, when, say, …
I intended to propose this a while back but was discouraged because it seemed fundamentally incompatible with what everyone else wants. I think any sensible proposal would need to have a good answer for how this works with JavaScript. The current answer is that LLVM coroutines work, so that's just the path of least resistance.
This is a sub-task of #89.