The Coroutine Rewrite Issue #2377
Once this is complete, it will be time to revisit this work-in-progress code: #1848 |
Your detailed write-ups are always greatly appreciated!
|
Since this issue touches on the topic of generators, I'll share my implementation of generators in Zig 0.4.0: https://gist.github.com/adrusi/0dfebca7ff79a8e9ff0c94259d89146d

The gist is that generators can be defined like:

```zig
const Gen = Generator(struct {
    slice: []i32,

    pub async fn generator(ctx: *@This(), item: *i32) !void {
        for (ctx.slice) |x| {
            item.* = x;
            suspend;
        }
    }
});
```

And used like:

```zig
var gen = Iterator.init(allocator, Gen.Args {
    .slice = []i32{ 1, 2, 3, 4, 5 },
});
defer gen.deinit();

while (try gen.next()) |item| {
    warn("{}\n", item);
}
```

I think that unless prettier generator support comes along for free with the coroutine rewrite, we should try a library approach like this first, and only move toward language support for generators if it ends up being insufficient. (Currently I have an issue that forces |
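As a rough illustration of the library approach (this is a toy Python model, not the Zig API from the gist; the class and method names are invented), a pull-based generator can be built from a suspendable body that writes each item into an out-slot:

```python
# Toy model of a pull-based generator library: the body writes each item
# into an out-slot, then suspends; next() resumes it and reads the slot.
# This mirrors the shape of the Zig gist above, not its actual API.
class SliceGenerator:
    def __init__(self, slice_):
        self.item = None          # the out-slot (`item: *i32` in the gist)
        self._gen = self._body(slice_)

    def _body(self, slice_):
        for x in slice_:
            self.item = x
            yield                 # suspend until the consumer asks again

    def next(self):
        try:
            next(self._gen)       # resume the body
            return self.item
        except StopIteration:
            return None           # generator finished

gen = SliceGenerator([1, 2, 3, 4, 5])
collected = []
while (item := gen.next()) is not None:
    collected.append(item)
```

The consumer drives the generator one item at a time, which is the same control-flow shape the library approach would give without any dedicated language support.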
Does that mean that the whole frame stays allocated as long as the returned value is alive? I believe the two should have different lifetimes. |
The coroutine frame's lifetime is manually managed by the caller. If you keep reading you can see the example of how to give them different lifetimes by putting the frame on the heap. For an event-based I/O function calling another one (and using the result), this will work nicely because the coroutine frame of the callee will exist inside the coroutine frame of the caller. |
Regarding the cancellation question: I did a writeup on what the impact of always-cancellable async functions is for Rust: https://gist.github.com/Matthias247/ffc0f189742abf6aa41a226fe07398a8 TL;DR:
|
Rather than fixing regressions with deprecated coroutines, I'm going to let them regress more until #2377 is solved.
"Why Continuations are Coming to Java" - https://www.infoq.com/presentations/continuations-java/ |
how about |
I've started working on this in the rewrite-coroutines branch, and making good progress. Here's a simple example:

```zig
var x: i32 = 1;

export fn entry() void {
    const p = async simpleAsyncFn();
}

fn simpleAsyncFn() void {
    x +%= 1;
    suspend;
    x +%= 1;
}
```

Notice that no

```llvm
; Function Attrs: nobuiltin nounwind
define void @entry() #2 !dbg !35 {
Entry:
  %p = alloca %"@Frame(simpleAsyncFn)", align 8
  %0 = getelementptr inbounds %"@Frame(simpleAsyncFn)", %"@Frame(simpleAsyncFn)"* %p, i32 0, i32 0, !dbg !44
  store i64 0, i64* %0, !dbg !44
  %1 = call fastcc i64 @simpleAsyncFn(%"@Frame(simpleAsyncFn)"* %p), !dbg !44
  call void @llvm.dbg.declare(metadata %"@Frame(simpleAsyncFn)"* %p, metadata !39, metadata !DIExpression()), !dbg !45
  ret void, !dbg !46
}

; Function Attrs: nobuiltin nounwind
define internal fastcc i64 @simpleAsyncFn(%"@Frame(simpleAsyncFn)"* nonnull) unnamed_addr #2 !dbg !47 {
AsyncSwitch:
  %1 = getelementptr inbounds %"@Frame(simpleAsyncFn)", %"@Frame(simpleAsyncFn)"* %0, i32 0, i32 0, !dbg !51
  %2 = load i64, i64* %1, !dbg !51
  switch i64 %2, label %BadResume [
    i64 0, label %Entry
    i64 1, label %GetSize
    i64 2, label %Resume
  ], !dbg !51

Entry:                                            ; preds = %AsyncSwitch
  %3 = load i32, i32* @x, align 4, !dbg !52
  %4 = add i32 %3, 1, !dbg !54
  store i32 %4, i32* @x, align 4, !dbg !54
  %5 = getelementptr inbounds %"@Frame(simpleAsyncFn)", %"@Frame(simpleAsyncFn)"* %0, i32 0, i32 0, !dbg !55
  store i64 2, i64* %5, !dbg !55
  ret i64 undef, !dbg !55

Resume:                                           ; preds = %AsyncSwitch
  %6 = load i32, i32* @x, align 4, !dbg !56
  %7 = add i32 %6, 1, !dbg !57
  store i32 %7, i32* @x, align 4, !dbg !57
  %8 = getelementptr inbounds %"@Frame(simpleAsyncFn)", %"@Frame(simpleAsyncFn)"* %0, i32 0, i32 0, !dbg !58
  store i64 3, i64* %8, !dbg !58
  ret i64 undef, !dbg !58

BadResume:                                        ; preds = %AsyncSwitch
  tail call fastcc void @panic(%"[]u8"* @1, %builtin.StackTrace* null), !dbg !51
  unreachable, !dbg !51

GetSize:                                          ; preds = %AsyncSwitch
  ret i64 8, !dbg !51
}
```

If you look at this gist you can compare the above code example with the equivalent from master branch and see how much simpler this is.

One thing I figured out is that we can support async function pointers (when the function is not comptime-known). How it works is that the frame size is not comptime-known, so you wouldn't be able to put the memory on your stack. Something like:

```zig
const size = @asyncFrameSize(asyncFnPtr); // runtime known asyncFnPtr; runtime known result
var bytes = try some_allocator.alloc(u8, size);
const frame_ptr = @asyncCall(bytes, asyncFnPtr, arg1, arg2, ...);
```

If you look at the above LLVM IR code, you can see the GetSize basic block. Every async function which could potentially be called virtually (e.g. as a function pointer) would have this, and this is how the frame size would be obtained at runtime.

You can see that calling async function pointers is a lot less convenient syntactically than calling them when the function is comptime-known. I think this is actually a good thing. There are plenty of reasons to prefer calling comptime-known functions, and the inconvenience here merely reveals the actual problems of virtual async functions. However, the fact that it is still possible allows interfaces such as an asynchronous output stream API to be made.

One design question to answer regarding async functions is: "do we want async function frames to be movable?" If we make the answer "no" then that allows result locations to work for async functions which are called directly:

```zig
const result = myAsyncFunction(); // initializes result directly even though this is an async function call
```

But it means that copying an async function frame to somewhere else and then resuming it is not allowed. Semantically, even functions called with async syntax could have a result location, but I don't know how it would be expressed syntactically:

```zig
var frame, const result = async someFunction();
// result is undefined here
await frame;
// now `result` has the return value of someFunction()
```

The alternative is that the result is always inside the async function frame, and |
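The switch-based lowering shown in the LLVM IR above can be sketched in Python (a hand-written toy model, not generated code; the dict frame and the names are invented for illustration):

```python
# Toy Python model of the switch-based lowering shown in the LLVM IR above.
# The frame stores only a resume index; every call dispatches on it, runs one
# "basic block", records where to resume next, and returns.
FRAME_SIZE = 8  # what the GetSize block answers (the IR's `ret i64 8`)

x = 1

def simple_async_fn(frame):
    global x
    state = frame["resume_index"]
    if state == 0:        # Entry: run up to the first suspend
        x += 1
        frame["resume_index"] = 2
        return None
    elif state == 1:      # GetSize: report the frame size for virtual calls
        return FRAME_SIZE
    elif state == 2:      # Resume: continue after the suspend
        x += 1
        frame["resume_index"] = 3
        return None
    else:                 # BadResume: resuming a completed frame panics
        raise RuntimeError("bad resume")

frame = {"resume_index": 0}
simple_async_fn(frame)    # like `async simpleAsyncFn()`: runs until `suspend`
simple_async_fn(frame)    # like a resume: runs the code after `suspend`
```

The GetSize branch models how a caller of a runtime-known function pointer could learn the frame size before allocating memory for the call.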
Without #2761, here's a sketch of what async function semantics would look like, with result location support for the return value:

```zig
fn parseFile(allocator: *Allocator, path: []const u8) anyerror!BigStruct {
    // ...
    const bytes = allocator.allocate(u8, 100);
    errdefer allocator.free(bytes);
    // ...
    return BigStruct {
        // ...
        .bytes = bytes,
        // ...
    };
}

pub fn readFoo() anyerror!BigStruct {
    // frame is stack allocated and provided explicitly
    // result location is implicitly stack allocated
    var frame1 = async parseFile(allocator, "foo.txt");
    errdefer cancel frame1;

    // frame is heap allocated and provided explicitly
    const frame2 = try allocator.create(@Frame(parseFile));
    errdefer allocator.destroy(frame2);

    // result location heap allocated and provided explicitly
    const result2 = try allocator.create(anyerror!BigStruct);
    errdefer allocator.destroy(result2);

    frame2.* = async<result2> parseFile(allocator, "bar.txt");
    errdefer cancel frame2;

    // result2 is undefined here
    const payload1 = try await frame1;
    // `cancel frame1` is now a no-op
    errdefer allocator.free(payload1.bytes);

    _ = await frame2;
    // `cancel frame2` is now a no-op
    // result2 is available now
    const payload2 = try result2;
    // alternately, we could have done: const payload2 = try await frame2;
    defer allocator.free(payload2.bytes);

    compare(payload1, payload2);

    // The `cancel` control flow here from the errdefers checks the `completed`
    // flag of each frame and becomes a no-op in this case. Canceling an already
    // awaited async function is harmless. When canceling a frame that has not
    // already been canceled or awaited, it runs the errdefers that are in scope
    // at the return statement.
    return payload1;
}
```

With this, I can answer all the questions from the Next Steps section above:
I am considering two options, both viable. One is that every suspend point in an async function is a cancellation point. It would cascade. With this option, The other option is that there are no cancellation points. I am leaning towards the second option because it is simpler, and offers better performance and less bloated binaries.
When the async function gets to a
I think we do, but it might be a separate concept, in addition to async functions. I'm keeping the use case in mind as I work through this issue. The main things we need for generators are:
The answer here is pretty clearly "no". Async function frames are already not movable if they get casted to

The zig code above explores result location support for the return value. However that is probably too complicated for a first pass, and it does not solve how to have something like

Here is the zig code example simplified in this way:

```zig
fn parseFile(allocator: *Allocator, path: []const u8) anyerror!BigStruct {
    // ...
    const bytes = allocator.allocate(u8, 100);
    errdefer allocator.free(bytes);
    // ...
    return BigStruct {
        // ...
        .bytes = bytes,
        // ...
    };
}

pub fn readFoo() anyerror!BigStruct {
    // frame is stack allocated and provided explicitly
    // result location is inside frame1
    var frame1 = async parseFile(allocator, "foo.txt");
    errdefer cancel frame1;

    // frame is heap allocated and provided explicitly
    // result location is inside frame2
    const frame2 = try allocator.create(@Frame(parseFile));
    errdefer allocator.destroy(frame2);
    frame2.* = async parseFile(allocator, "bar.txt");
    errdefer cancel frame2;

    const payload1 = try await frame1;
    errdefer allocator.free(payload1.bytes);

    const payload2 = try await frame2;
    defer allocator.free(payload2.bytes);

    compare(payload1, payload2);
    return payload1;
}
```
|
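The async/await flow sketched above can be modeled loosely with Python generators (a generator object stands in for a `@Frame`; the helper names `async_call` and `await_frame` are invented for this sketch):

```python
# Loose Python model of the readFoo flow: a generator object stands in for a
# @Frame, `async_call` runs the body to its first suspend point, and
# `await_frame` resumes it to completion and extracts the return value.
def parse_file(path, log):
    log.append(("start", path))
    yield                          # a suspend point inside parseFile
    return {"bytes": f"contents of {path}"}

def async_call(gen):
    next(gen)                      # run until the first suspend
    return gen                     # the "frame"

def await_frame(gen):
    try:
        next(gen)                  # resume past the suspend
    except StopIteration as done:
        return done.value          # the function's return value

def read_foo(log):
    frame1 = async_call(parse_file("foo.txt", log))
    frame2 = async_call(parse_file("bar.txt", log))
    payload1 = await_frame(frame1)
    payload2 = await_frame(frame2)
    return payload1, payload2

log = []
p1, p2 = read_foo(log)
```

Both bodies run up to their suspend point before either is awaited, which is what makes the two file reads overlap in the Zig example.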
I started implementing this as a switch, and I think that was indeed the cleaner option, except for one thing which I just realized: generating async functions as separate functions requires one less piece of state in the frame.

When implementing async functions with a switch, there is a

When implementing async functions with function splitting, there is a

One of the main use cases of async functions is an event loop, which has a set of suspended async functions and wants to resume them when appropriate. It has pointers to their frames (the

Function splitting should be better for performance in this situation as well, because a virtual function call is basically equivalent to the big switch statement, but in the resuming

So I'm pretty sure the better approach here is function splitting. However for implementation simplicity's sake I will continue with the switch implementation in the stage1 compiler. We can consider changing the implementation to a function splitting approach once the rest of the concepts are proven, or we can leave it alone and be better informed for the self-hosted compiler. |
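The contrast between the two lowerings can be sketched in Python (all names invented): with function splitting, the frame's resume state is itself a function pointer, so an event loop resumes any frame with one indirect call instead of dispatching through a per-function switch:

```python
# Toy contrast: function splitting stores a pointer to the "next half" of the
# function in the frame, so resuming is a single indirect call.
x = 0

def bad_resume(frame):
    raise RuntimeError("resumed a completed frame")

def simple_fn_part2(frame):
    global x
    x += 1                                 # code after the suspend point
    frame["resume_fn"] = bad_resume        # completed; further resumes are an error

def simple_fn_part1(frame):
    global x
    x += 1                                 # code before the suspend point
    frame["resume_fn"] = simple_fn_part2   # suspend: record the continuation

def resume(frame):
    # An event loop holding only frame pointers can resume any async
    # function uniformly, with no per-function switch dispatch.
    frame["resume_fn"](frame)

frame = {"resume_fn": simple_fn_part1}
resume(frame)   # runs up to the suspend
resume(frame)   # runs the rest
```

In the switch lowering, `resume` would instead call the one function with an integer state, and that function would branch to the right block.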
Regarding function splitting vs switch: There is one thing I did not consider above, which is the implementation of

With function splitting, a different strategy is required. I believe there is a good way to do this: with Prefix Data. Each async function has Prefix Data which is the frame size. This means that

This prefix data implementation is so clean that it's a better option to use even for a switch implementation of async functions. |
Regarding

What this coroutine rewrite brings to the table is this new type

The

One thing I'm trying to figure out is how to represent this in the type system, e.g.

But what about

Another option is to allow the

Finally, one reasonable option might be simply having the Frame and AnyFrame types be different types. An unsolved problem is how to represent |
will |
It's definitely related. I'm purposefully avoiding the recursion issue with these coroutine changes to keep the complexity under control, but indeed this is a step towards safe recursion. |
A couple updates:

**Function splitting vs big switch statement**

I went ahead and rewrote the rewrite-coroutines branch using function splitting rather than using a big switch statement. This resulted in a bit more implementation complexity but gained simpler generated code: smaller, faster binaries, with lower memory requirements (the reasons for this are described above in this issue).

**Cancellation**

I'm confident about how to do cancellation now. This is the second option described above, with some of the details ironed out.

Cancellation points can be added manually by using `@isCancelled()`:

```zig
if (@isCancelled()) {
    return error.Cancelled; // this is not a special error code, this is just an example
}
```
|
How does one cancel? Is it e.g. a method on the frame type? |
It's currently a keyword. You can see an example in #2377 (comment) |
In that example |
Once a function has been awaited, the resources and side-effects that it allocated/performed are now the caller's responsibility. When an async function returns, if the cancel bit is set, it runs all the errdefers, which means that the caller never has to take responsibility of the resources and side-effects that it allocated/performed. This makes e.g.
|
This design seems great! Is it correct to say that

```zig
fn foo() void {
    // ...
    suspend {
        // only executes if @isCancelled() == true
    }
    // ...
}
```

This way we implicitly make it clear when it's a good time to check if the coroutine got cancelled or not, so that people don't get confused and check at random points / forget to refactor after moving a |
Good question. In a single-threaded build, this is true, because only while suspended is it possible for other code to do anything, such as perform a

However, in a multi-threaded build, here's an example of how this can work:
There is one possible improvement that can be made, which would be to try to avoid doing the atomic rmw more than once in the event of a cancel.

```zig
cancelpoint {
    // any code can go here. it runs if the cancel bit is set.
    // `return` and `try` within this scope are generated without atomic ops
}
```

And to change your mind about whether to return, or just to learn if the cancel bit is set, you could label the cancel point:

```zig
var is_cancelled = false;
label: cancelpoint {
    break :label true;
}
```

Maybe that could have an `else`:

```zig
const is_cancelled = label: cancelpoint break :label true else false;
```

This would be uncommon though; most code would want to do something like this:

```zig
cancelpoint return error.Cancelled;
```
|
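The cancel-bit mechanics behind these cancellation points can be modeled with a shared flag (a toy sketch; a `threading.Event` stands in for the atomic rmw, and all names are invented):

```python
# Toy model of manual cancellation points: the frame carries a cancel bit set
# by the canceller; each cancelpoint checks it and decides whether to bail.
# In the real design the check would be an atomic read-modify-write.
import threading

class Frame:
    def __init__(self):
        self.cancel_bit = threading.Event()

class Cancelled(Exception):
    pass

def cancelpoint(frame):
    # analogue of `cancelpoint return error.Cancelled;`
    if frame.cancel_bit.is_set():
        raise Cancelled()

def work(frame, steps):
    done = 0
    for _ in range(steps):
        cancelpoint(frame)   # a manually placed cancellation point
        done += 1
    return done

frame = Frame()
completed = work(frame, 3)      # no cancel requested: runs to completion

frame.cancel_bit.set()          # a canceller sets the bit...
try:
    work(frame, 3)
    cancelled = False
except Cancelled:
    cancelled = True            # ...and the next cancelpoint observes it
```

Because the check is explicit, code with no cancellation points pays no cost, matching the "no implicit cancellation points" option chosen above.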
@kristoff-it the

```zig
fn foo() void {
    set_things_up();
    suspend;
}

fn bar() !void {
    foo();
    if (@isCancelled()) {
        return error.Cancelled;
    }
    expensiveOperation();
}
```
|
I just realized a neat idea: if the async function frames have the following header:

```zig
extern struct {
    prev_frame_ptr: usize, // pointer to the caller's frame (whether it be stack or async)
    return_address: usize, // the address of the next instruction at the callsite
}
```

Then the exact same stack trace walking code will actually be able to print a stack trace of async function calls in addition to normal function calls. Async functions sort of have 2 stack traces that can be printed:

The former is more generally useful when developing an application; the latter is useful when working on the event loop code. One could "select" which stack trace to get depending on whether

When this is paired with error return traces, zig's debugging facilities regarding async/await are going to be unbeatable. Imagine that

So if your |
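The frame-header idea reduces to a walk over `prev_frame_ptr` links. In this toy model (invented names and made-up addresses), async frames and stack frames share the same two-word header, so one walker handles both:

```python
# Toy model of walking frames via the shared two-word header: each frame
# records its caller's frame and a return address, so the same loop prints a
# trace whether the frames live on a stack or in async frame storage.
class Frame:
    def __init__(self, name, prev_frame, return_address):
        self.name = name
        self.prev_frame = prev_frame          # prev_frame_ptr in the header
        self.return_address = return_address  # return_address in the header

def stack_trace(frame):
    trace = []
    while frame is not None:
        trace.append((frame.name, hex(frame.return_address)))
        frame = frame.prev_frame              # identical step for stack or async frames
    return trace

main_frame = Frame("main", None, 0x1000)
async_frame = Frame("fetchUrl", main_frame, 0x2040)   # an async frame linked to its caller
trace = stack_trace(async_frame)
```

A debugger or panic handler only needs this one loop, regardless of whether the frames it traverses were stack-allocated or heap-allocated.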
If we want any function to be coroutineable, we might have potential resource leaks:

```zig
fn foo() !*u8 {
    const result = try std.heap.direct_allocator.create(u8);
    result.* = 0;
    return result;
}
```

The naive analysis shows no memory leak because the data is immediately returned, but in async mode there's a second error at the implicit cancel point. An even more subtle variant is one where there's no error in the body, but we need to release:

```zig
fn bar() Resource {
    return Resource.acquire();
}
```

This function shouldn't even fail, so errdefer doesn't really make sense... |
I'm glad you brought this up. Here's how I see this playing out. Firstly, these functions can be modified to be cancelable:

```zig
fn foo() !*u8 {
    const result = try std.heap.direct_allocator.create(u8);
    errdefer std.heap.direct_allocator.destroy(result);
    result.* = 0;
    return result;
}

fn bar() Resource {
    const result = Resource.acquire();
    errdefer result.release();
    return result;
}
```

I'll be the first to admit, it's a bit counter-intuitive that this would be necessary. On the other hand, writing code this way does better document how resources work for the caller, and makes it easier to make modifications to the functions without forgetting to add resource cleanup. Here's a related proposal that I rejected a while ago: #782

Another thing we can do is add a nocancel attribute:

```zig
nocancel fn foo() !*u8 {
    const result = try std.heap.direct_allocator.create(u8);
    result.* = 0;
    return result;
}

nocancel fn bar() Resource {
    return Resource.acquire();
}
```

Then if you tried to

It would be a compile error to implicitly cast

With these examples, however, I do think the first suggestion would be best: make them both properly cancelable. In practice, I foresee some resource leakage happening, just like we occasionally see with memory leaks in manually memory managed programs. But these are fixable bugs, with plenty of tooling to help identify the leaks, and the flip side of it is that it allows us to employ advanced resource management techniques. For example, maybe for a big section of code, |
How about cancel being an expression with type

```zig
fn bar() Resource {
    return Resource.acquire();
}

var f = async bar();
if (cancel f) |r| {
    r.release();
}
```
|
I like where you're going with this, but I think it can get more complicated. Consider this example:

```zig
const std = @import("std");
const Allocator = std.mem.Allocator;

const simulate_fail_download = false;
const simulate_fail_file = false;

pub fn main() void {
    _ = async amainWrap();
    resume global_file_frame;
    resume global_download_frame;
}

fn amainWrap() void {
    amain() catch |e| {
        std.debug.warn("{}\n", e);
        std.process.exit(1);
    };
}

fn amain() !void {
    const allocator = std.heap.direct_allocator;
    var download_frame = async fetchUrl(allocator, "https://example.com/");
    errdefer cancel download_frame;
    var file_frame = async readFile(allocator, "something.txt");
    errdefer cancel file_frame;

    const file_text = try await file_frame;
    defer allocator.free(file_text);
    const download_text = try await download_frame;
    defer allocator.free(download_text);

    std.debug.warn("download_text: {}\n", download_text);
    std.debug.warn("file_text: {}\n", file_text);
}

var global_download_frame: anyframe = undefined;

fn fetchUrl(allocator: *Allocator, url: []const u8) ![]u8 {
    global_download_frame = @frame();
    const result = try std.mem.dupe(allocator, u8, "this is the downloaded url contents");
    errdefer allocator.free(result);
    suspend;
    if (simulate_fail_download) return error.NoResponse;
    std.debug.warn("fetchUrl returning\n");
    return result;
}

var global_file_frame: anyframe = undefined;

fn readFile(allocator: *Allocator, filename: []const u8) ![]u8 {
    global_file_frame = @frame();
    const result = try std.mem.dupe(allocator, u8, "this is the file contents");
    errdefer allocator.free(result);
    suspend;
    if (simulate_fail_file) return error.FileNotFound;
    std.debug.warn("readFile returning\n");
    return result;
}
```
|
The design challenge here, specifically, is to make it so that |
With @mikdusan's proposal above, I think it would look like this:

```zig
var download_frame = async fetchUrl(allocator, "https://example.com/");
errdefer cancel (download_frame) |result| if (result) |r| allocator.free(r) else |_| {}

var file_frame = async readFile(allocator, "something.txt");
errdefer cancel (file_frame) |result| if (result) |r| allocator.free(r) else |_| {}
```

Not amazing looking. And at this point, why do we even need `cancel`?

```zig
fn amain() !void {
    const allocator = std.heap.direct_allocator;

    var download_frame = async fetchUrl(allocator, "https://example.com/");
    var awaited_download_frame = false;
    errdefer if (!awaited_download_frame) {
        if (await download_frame) |r| allocator.free(r) else |_| {}
    }

    var file_frame = async readFile(allocator, "something.txt");
    var awaited_file_frame = false;
    errdefer if (!awaited_file_frame) {
        if (await file_frame) |r| allocator.free(r) else |_| {}
    }

    const file_text = try await file_frame;
    awaited_file_frame = true;
    defer allocator.free(file_text);

    const download_text = try await download_frame;
    awaited_download_frame = true;
    defer allocator.free(download_text);

    std.debug.warn("download_text: {}\n", download_text);
    std.debug.warn("file_text: {}\n", file_text);
}
```

Even less amazing looking... but it does remove the need for an entire feature. |
It seems to me that the problem lies mainly in the interaction of two issues

I also understand that restricting people by default is not something Zig likes to do, so here's a second idea to complement the first, based on the concept of considering

```zig
fn bar() Resource {
    const result = Resource.acquire();
    errdefer result.release();
    return result;
}
```

The previous function supports cancellation but, as stated already by Andrew, it's counter-intuitive, since no error can happen after

```zig
fn bar() Resource {
    const result = Resource.acquire();
    canceldefer result.release();
    return result;
}
```

A

Now, the straightforward way of naming the explicit keyword would be

```zig
noleaks fn bar(allocator: *Allocator) void {
    var in_file = try os.File.openRead(allocator, source_path);
    defer in_file.close();
}
```

A

Adding

Please note how by making

A few more considerations:
|
Thank you @kristoff-it for this detailed proposal. I've thought long and hard about this problem, and consulted with @thejoshwolfe, and I feel confident with the path forward. And... it's the "even less amazing looking" solution above. That is:

That's it. The rules become very simple: If you

So that means the way to write the example would be:

```zig
fn amain() !void {
    const allocator = std.heap.direct_allocator;

    var download_frame = async fetchUrl(allocator, "https://example.com/");
    var awaited_download_frame = false;
    errdefer if (!awaited_download_frame) {
        if (await download_frame) |r| allocator.free(r) else |_| {}
    }

    var file_frame = async readFile(allocator, "something.txt");
    var awaited_file_frame = false;
    errdefer if (!awaited_file_frame) {
        if (await file_frame) |r| allocator.free(r) else |_| {}
    }

    awaited_file_frame = true;
    const file_text = try await file_frame;
    defer allocator.free(file_text);

    awaited_download_frame = true;
    const download_text = try await download_frame;
    defer allocator.free(download_text);

    std.debug.warn("download_text: {}\n", download_text);
    std.debug.warn("file_text: {}\n", file_text);
}
```

It's verbose, but

Here is another way to write this example:

```zig
fn amain() !void {
    const allocator = std.heap.direct_allocator;
    var download_frame = async fetchUrl(allocator, "https://example.com/");
    var file_frame = async readFile(allocator, "something.txt");

    const file_text_res = await file_frame;
    defer if (file_text_res) |x| allocator.free(x) else |_| {}

    const download_text_res = await download_frame;
    defer if (download_text_res) |x| allocator.free(x) else |_| {}

    const file_text = try file_text_res;
    const download_text = try download_text_res;

    std.debug.warn("download_text: {}\n", download_text);
    std.debug.warn("file_text: {}\n", file_text);
}
```

I think this would actually be a less recommended way to do it, because between the first

Anyway, removing
These semantics are certainly simple, and while requiring verbosity, will be easy to understand, debug, and maintain. Let's move forward with this plan, and collect data on usage patterns to inform what, if anything, needs to be modified. |
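The "await every frame exactly once, flag what the happy path has already awaited" pattern can be modeled in Python, with generators standing in for frames and try/finally standing in for errdefer (all names invented for this sketch):

```python
# Toy model of the final plan: every frame spawned with `async` must be
# awaited exactly once; errdefer-style cleanup awaits any frame the happy
# path hasn't gotten to yet.
def fetch(tag, fail, log):
    log.append(f"start {tag}")
    yield                                  # suspend point
    if fail:
        raise RuntimeError(tag)
    return f"{tag} contents"

def async_call(gen):
    next(gen)                              # run to the first suspend
    return gen

def await_frame(gen):
    try:
        next(gen)                          # resume to completion
    except StopIteration as done:
        return done.value

def amain(log, fail_file=False):
    download_frame = async_call(fetch("download", False, log))
    file_frame = async_call(fetch("file", fail_file, log))
    awaited_download = False
    try:
        file_text = await_frame(file_frame)          # raises if the file read fails
        download_text = await_frame(download_frame)
        awaited_download = True
        return file_text, download_text
    finally:
        if not awaited_download:
            # errdefer path: the in-flight frame must still be awaited once
            await_frame(download_frame)
            log.append("cleaned up download")

log_ok = []
ok = amain(log_ok)

log_fail = []
try:
    amain(log_fail, fail_file=True)
    failed = False
except RuntimeError:
    failed = True
```

On the error path, the cleanup still awaits the download frame exactly once before the error propagates, which is the invariant the plan above enforces.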
@andrewrk, nice. I like how clean it is. Cancelling an async function seemed like it was going to be a tar pit. Not just for resource leaks, but for maintaining data invariants. For instance, what if you had an async function that did something to a linked list. If I could cancel it in the middle of unlinking a node, I might be left with a corrupt data structure. I wonder if there is a clean way to do something like
That's totally made up and I am overloading the error handling forms. The idea is that I do not want to block on the |
Hmm, that example could be easier:
|
I reread through all the comments on this issue as well as many of the related issues. I feel a bit confused. In no particular order:

**Marking functions async at the call site but not in the definition.** Can I just slap "async" on any function call? I like that idea, but it means that the function in question would need to be compiled twice: once for the non-async case and once for the async case. I think I am missing something here :-(

**Using suspend outside of an async function.** At least one of the examples above shows suspend being called in a function that is not the async function but one that it calls. How does that get implemented? If I have:

```zig
fn notAsync(x: i32) i32 {
    var y = 42;
    suspend; // <-- how does this work??
    return x + y;
}

fn asyncFunc(a: i32) i32 {
    var b = notAsync(a) + 10;
    return b;
}

fn main(...) i32 {
    var aFuture = async asyncFunc(13);
    var aVal = await aFuture;
    return aVal;
}
```

How does the suspend in

**Where's the stack?** Assuming that the calling pattern I have above is possible, where is the stack on which the activation record for

Perhaps I am getting confused with protothreads. I have used those before, and the C version is kind of a pain because a protothread cannot suspend in a called normal function. I also had a C++ version, and I got around that by making a protothread class for each function with an overloaded

Sorry if these are stupid questions. I must be really missing something... Note that Gabriel Kerneis implemented continuation passing in C using a transform which was a lot like creating protothread (or Zig-style) continuations in C. It is an interesting read. Their results are quite nice and it was fairly clean: Continuation Passing C. |
(answers are based on my understanding; not necessarily correct)
Yes
Not exactly: an async call of a function that never uses |
Thanks for the response, @daurnimator!
OK... But won't you have to recompile the whole call chain from the tip of the async call all the way down to where the suspend is called? That is not necessarily a problem, but it seems like it could lead to some surprises when your code size increases suddenly. And now I am wondering what

Are there two things being conflated here? One is having functions that execute asynchronously, but are otherwise just functions. They get called. They return. In this case, they get called asynchronously, and then when we want to, we wait for them to finish. |
Done with the merge of #3033 |
wow, congratulations! this is huge. watching the video now |
This issue is one of the main goals of the 0.5.0 release cycle. It is blocking stable event-based I/O, which is blocking networking in the standard library, which is blocking the Zig Package Manager (#943).
Note: Zig's coroutines are and will continue to be "stackless" coroutines. They are nothing more than a language-supported way to do Continuation Passing Style.
Background:
Status quo coroutines have some crippling problems:

- One of the dominant costs in compilation time, according to `-ftime-report`, is the coroutine splitting pass. Putting the pass in zig frontend code will speed it up. I examined the implementation of LLVM's coroutine support, and it does not appear to provide any advantages over doing the splitting in the frontend.

**The Plan**

**Step 1. Result Location Mechanism**

Implement the "result location" mechanism from the Copy Elision proposal (#287). This makes it so that every expression has a "result location" where the result will go. If you were to for example do this:

What actually ends up happening is that `hi` gets a secret pointer parameter which is the address of `w` and initializes it directly.

**Step 2. New Calling Syntax**

Next, instead of coroutines being generic across an allocator parameter, they will use the result location as the coroutine frame. So the callsite can choose where the memory for the coroutine frame goes. In the above example, the coroutine frame goes into the variable `x`, which in this example is in the stack frame (or coroutine frame) of `main`.

The type of `x` is `@Frame(myAsyncFunction)`. Every function will have a unique type associated with its coroutine frame. This means the memory can be manually managed; for example, it can be put into the heap like this:

`@Frame` could also be used to put a coroutine frame into a struct, or in static memory. It also means that, for now, it won't be possible to call a function that isn't comptime-known. E.g. function pointers don't work unless the function pointer parameter is `comptime`.

The `@Frame` type will also represent the "promise"/"future" (#858) of the return value. The `await` syntax on this type will suspend the current coroutine, putting a pointer to its own handle into the awaited coroutine, which will tail-call resume the awaiter when the value is ready.

**Next Steps**

From this point the current plan is to start going down the path of #1778 and try it out. There are some problems to solve:

How do `defer` and `errdefer` interact with suspension and cancellation? How does resource management work if the cleanup is after a suspend/cancellation point?

This proposal for coroutines in Zig gets us closer to a final iteration of how things will be, but you can see it may require some more design iterations as we learn more through experimentation.
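The result-location mechanism of Step 1 can be sketched as an explicit out-pointer transform (a toy model with invented names; a mutable dict stands in for the secret pointer parameter):

```python
# Toy model of the result-location transform: instead of returning a value
# that the caller copies, the callee receives the caller's destination and
# initializes it in place.
def hi_transformed(result_slot):
    # the "secret pointer parameter": a mutable slot owned by the caller
    result_slot["value"] = [1, 2, 3]   # initializes the destination directly, no copy

w = {"value": None}   # the caller's variable, passed by reference
hi_transformed(w)
```

Step 2 then reuses the same mechanism for async calls: the coroutine frame itself goes wherever the callsite's result location points, so the caller chooses stack, heap, struct field, or static memory.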