Add blocking isolate communication. #44804
We get questions about this type of behavior when the VM shuts down without running some code the user expects it to run. |
I think the ergonomics of this solution are going to be rather lacking because it involves isolates. If I want to kick off some sort of computation and process results in the main isolate in a synchronous manner, I will have to deal with buffering and serialization. Similarly if I want to kick off some computation that needs to ask "main world" questions (e.g. query the state of some model). Maybe we should instead consider being a bit more daring and go for something like

You do violate the invariant that synchronous Dart code runs to completion, but it is unclear to me whether this invariant is worth it - or maybe the expressive power of something like |
@lrhn Could you elaborate more on what the concrete motivation for this change is? To get rid of |
Yes. The goal is to have a solution which we are comfortable encouraging more than 10 users to use. |
Would performance concerns be mitigated if we keep pushing on |
Given the very low usage, as well as the fact that this API comes with hard-to-understand / unintended consequences (and can be considered to break our event-loop-based programming model), we should not encourage more than 10 users but rather get rid of it. Do we have a good understanding of existing use cases and why it was chosen to use

Whether we would like to introduce synchronous isolate-to-isolate primitives to make concurrency in the Dart VM better is an orthogonal topic IMHO.
Should we then also implement a deadlock detector (like in Go) to help debug such bugs by detecting cycles of sync-waiting on ports? ... |
The

So, rewriting the code to be async was what this feature was introduced to avoid. Various ideas were considered, and as I remember it, this was one which the VM team at the time accepted as an experiment which would avoid breaking Sass until the feature either proved itself to be a viable solution, or something better could be introduced. It was not intended as the final solution. The Sass API has not changed, so the issue is still there. Providing an alternative makes it more likely that we can remove the |
It doesn't sound like the concerns with async code were performance related. It sounds like folks tried to avoid cascading async API changes in their product. |
The original concerns were not performance related; however, the sass authors did raise performance concerns when we suggested using isolates for this. #39390 (comment) |
Can you clarify the ways in which |
The model is hard to reason about correctly. Continued discussion about the |
@lrhn Have you also considered limiting this to only the return result from the isolate's main function? E.g.:

// Assume we have tuples, for brevity
// NOTE: Could also add a "bool sendAndExit = false" named parameter to use sendAndExit for the result instead of send internally.
static Future<(Isolate, R)> spawnWithResult<T, R>(FutureOr<R> Function(T) computation, T argument);
static (Isolate, R) spawnWithResultSync<T, R>(FutureOr<R> Function(T) computation, T argument);
// And also, as a shortcut when isolate spawning is blazingly fast (avoids the need for Futures):
static Isolate spawnSync<T>(void Function(T) computation, T argument);

I feel like exposing a blocking version of ReceivePort may lead us towards bugs that hang entire isolates, and may not be desired for things like Flutter. This assumes, though, that you want to run a single computation, which in a lightweight-isolate world isn't that bad because spawning isolates is fast(er). But exposing a result can help with Isolate complexities in general, especially when writing abstractions on top of them. Currently you have to do:

// Again, assume tuples for brevity
void isolateMain(SendPort replyPort) {
replyPort.send(RawReceivePort(_handlerHere).sendPort);
}
Future<(Isolate, RawReceivePort, SendPort)> initIsolate(Function handler) async {
final completer = Completer<SendPort>();
final port = RawReceivePort(completer.complete);
final isolate = await Isolate.spawn(isolateMain, port.sendPort);
final isolatePort = await completer.future;
port.handler = handler;
return (isolate, port, isolatePort);
}

when really, something like this could be possible:

// Again, assume tuples AND destructuring for brevity :)
SendPort isolateMain(SendPort replyPort) {
return RawReceivePort(_handlerHere).sendPort;
}
Future<(Isolate, RawReceivePort, SendPort)> initIsolate(Function handler) async {
final port = RawReceivePort(handler);
  final (isolate, isolatePort) = await Isolate.spawnWithResult(isolateMain, port.sendPort);
return (isolate, port, isolatePort);
}

Your isolate example could use

For implementation details, I'm thinking this could reuse the isolate's existing controlPort mechanism, which is currently used for OOB control messages and completely ignores values sent/received from other isolates. Or it could just be an abstraction around spawn + a temporary RawReceivePort. |
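As a point of reference, a minimal sketch of that last option (spawnWithResult written as a plain abstraction over Isolate.spawn plus a temporary RawReceivePort) might look like the following. It uses Dart 3 records in place of the tuples assumed above; the name and shape are illustrative, not an existing dart:isolate API.

import 'dart:async';
import 'dart:isolate';

// Hypothetical helper: runs computation(argument) in a new isolate and
// completes with the spawned isolate and the computation's result.
Future<(Isolate, R)> spawnWithResult<T, R>(
    FutureOr<R> Function(T) computation, T argument) async {
  final resultPort = RawReceivePort();
  final completer = Completer<R>();
  resultPort.handler = (Object? result) {
    resultPort.close();
    completer.complete(result as R);
  };
  // The entry-point closure and computation are sendable because both
  // isolates share the same isolate group (lightweight isolates).
  final isolate = await Isolate.spawn(
    ((SendPort, FutureOr<R> Function(T), T) message) async {
      final (reply, fn, arg) = message;
      reply.send(await fn(arg));
    },
    (resultPort.sendPort, computation, argument),
  );
  return (isolate, await completer.future);
}

The spawnWithResultSync and spawnSync variants would additionally need some way to block on the result, which is exactly the primitive this issue proposes.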
Any news or updates? I want to use ffi.Pointer.fromFunction to implement a C/C++ function that sends HTTP requests synchronously. |
Why does it have to be sync? If it is in Flutter in the UI isolate then it is a very bad idea to do anything in a synchronous manner because it is going to lock up the whole UI. |
I want to use ffi.Pointer.fromFunction to delegate JavaScript XMLHttpRequest open/send:
final JSObject http = JSObject.make(globalContext);
http.setProperty(
'send',
JSObject.makeFunctionWithCallback(
globalContext,
name: 'send',
callAsFunction: Pointer.fromFunction(_setupHttp),
).value,
);

static JSValueRef _setupHttp(
JSContextRef ctx,
JSObjectRef function,
JSObjectRef thisObject,
int argumentCount,
Pointer<JSValueRef> arguments,
Pointer<JSValueRef> exception,
) {
return convertJSObjectCallAsFunctionCallback(
ctx,
function,
thisObject,
argumentCount,
arguments,
exception,
(JSContext context, JSObject function, JSObject thisObject, List<JSValue> arguments, JSException exception) {
final String name = function.getProperty('name').string!;
if (JSVmInject.hasVmId(context)) {
if (name == 'send') {
final XMLHttpRequestAdapter adapter = _globalCache[JSVmInject.getVmId(context)]!;
// FIXME: sync or async http req
return JSValue.makeUndefined(context);
}
}
exception.fill(JSObject.makeError(context, arguments: <JSValue>[
JSValue.makeString(context, string: '_flutter_jsc_jsvm_inject_http.$name not supported'),
]).value);
return JSValue.makeNull(context);
},
);
} |
@v7lin I see you want to implement synchronous XHR |
I'm using

My understanding of this issue is that this feature would allow me to block the current isolate, and cut the "Future-chain", so to speak, by moving any given Future into a separate isolate and turning it into a blocking call? If that is the case, then I think this feature would be great for Dart. I'd like to point out a subtle, but very needed, use case for this feature in the Flutter ecosystem, which I don't think is obvious, and which is not possible today. mraleph said:
mraleph refers to blocking the current isolate as being a bad idea, and it is (if it can be avoided). However, oftentimes it can't. The issue with dealing with Futures in Flutter is that once you have a Future, you have to rearchitect your application with the constraint that you can't have data on the first frame. That means that you can't easily move from a synchronous operation to an asynchronous one. Many Futures are quick to complete, and it might be more practical to block the application for a frame or two than to spend a week on testing, approval, QA and so on, just to switch to a Future. There are two important articles that are highly relevant to this issue: What Color is Your Function by Bob from the language team, and the xi-editor retrospective by Raph, who was on the Fuchsia team and is now on the Google Fonts team, and who describes async as a "complexity multiplier". This feature sounds like a good solution (even if not optimal; I personally don't know of any better solutions) to the points that Bob and Raph raise in their articles. Also, I feel like if Fuchsia FIDL calls are going to be asynchronous, then this feature will become even more needed (for all the reasons mentioned above). |
We need this for dart-sass-embedded in order to get rid of

Here is a summary of how we got here:
Therefore, native blocking communication between isolates seems to be the only proper way to solve our issue. |
The Dart VM offers an FFI that could be utilised to communicate data in memory, synchronously and with low overhead, between isolates. The data being communicated between the isolates may need to be encoded to bytes (or other C data structures). So if the data being exchanged is protos / JSON / ... that would work, though it does require custom C code. (The limit on the maximum number of isolates running in parallel can be avoided by making the native code exit the isolate before doing its blocking call and re-enter the isolate after it. We are discussing how to make this more automatic when using our FFI.) |
Yes it would work, but:
In our design the main isolate multiplexes stdio for communication with the host process; we have to decode each inbound message on the main isolate to determine which child isolate to forward the message to. With FFI, if we pass the original bytes to child isolates, we would need to decode the same message a second time in the child isolate. Thus it will be slower than passing an already-decoded Dart object via a Port. Not to mention that the need to write C code just for this purpose is kind of awkward. |
If the communication protocol and message types are appropriately designed, this is rather low overhead. Let's say one multiplexes incoming messages to other isolates based on a
Decoding such a proto is really fast, as it will not have to decode the contents of
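The comment above is truncated, but the general idea can be illustrated with a small hypothetical sketch: the dispatcher decodes only a tiny routing header and forwards the payload as still-encoded bytes, so only the target isolate pays the full decoding cost. The 4-byte id header below is an invented framing for illustration, not the actual embedded Sass protocol.

import 'dart:isolate';
import 'dart:typed_data';

// Hypothetical framing: [4-byte target isolate id][payload bytes].
// The dispatcher reads only the id; the payload stays opaque here.
void dispatch(Uint8List frame, Map<int, SendPort> workers) {
  final isolateId = ByteData.sublistView(frame, 0, 4).getUint32(0);
  workers[isolateId]!.send(Uint8List.sublistView(frame, 4));
}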
The purpose would be to provide functionality that (intentionally **) isn't available in Dart core libraries. It could be a self-contained, re-usable abstraction that provides synchronous communication primitives. IMHO not awkward at all - we have many users building on top of FFI + C code. (**) As it would be problematic for our wider ecosystem, especially Flutter. |
I don't think FFI is a viable solution for us. Compiling custom C code for each operating system we target would require a massive amount of additional infrastructure overhead. |
A synchronous I/O API in an event-driven system can always cause problems if used incorrectly. Another potential safeguard could be to limit the use of synchronous ports to non-main isolates, so that it at least prevents users from locking up the main isolate. |
Preventing blocking the main isolate only makes sense for something like Flutter. Using a blocking operation is always dangerous in latency-critical contexts, but we have them all over |
Let us take a step back. I understand that asking you to include C code in a project which is otherwise a pure Dart project is a tall order (especially because we don't even have proper integration with a C build system yet). Let's forget about the C code for a bit and just talk about synchronous communication between isolates. The solution that @mkustermann is proposing boils down to the following: using

Here is an example of a dispatcher-worker communication channel which should work on Linux and Mac OS X. We could also port the code to Windows. Would something like this cover your needs? If so, I would be happy to collaborate with you to help you migrate off

On the |
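To make the shape of that approach concrete, here is a heavily simplified sketch of binding a pthread mutex through dart:ffi (Linux/macOS only). It is not the code from the linked gist; the 64-byte allocation for pthread_mutex_t is a deliberate over-allocation for illustration, and a real channel would add a condition variable and a shared data buffer on top.

import 'dart:ffi';
import 'package:ffi/ffi.dart';

typedef _InitC = Int32 Function(Pointer<Void>, Pointer<Void>);
typedef _InitDart = int Function(Pointer<Void>, Pointer<Void>);
typedef _CallC = Int32 Function(Pointer<Void>);
typedef _CallDart = int Function(Pointer<Void>);

final _libc = DynamicLibrary.process();
final _mutexInit =
    _libc.lookupFunction<_InitC, _InitDart>('pthread_mutex_init');
final _mutexLock =
    _libc.lookupFunction<_CallC, _CallDart>('pthread_mutex_lock');
final _mutexUnlock =
    _libc.lookupFunction<_CallC, _CallDart>('pthread_mutex_unlock');

// Illustrative wrapper: the native handle lives in C memory, so its address
// can be sent over a SendPort and re-wrapped in another isolate.
class NativeMutex {
  final Pointer<Void> _handle;
  NativeMutex() : _handle = calloc<Uint8>(64).cast<Void>() {
    _mutexInit(_handle, nullptr);
  }
  NativeMutex.fromAddress(int address) : _handle = Pointer.fromAddress(address);
  void lock() => _mutexLock(_handle);
  void unlock() => _mutexUnlock(_handle);
  int get address => _handle.address;
}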
The problem of adding FFI to a pure Dart project is not really about whether we need to write C code or not. The real problem is that now we have to deal with cross-platform support ourselves rather than depending on the Dart VM to handle platform differences. For example, as you already pointed out, pthread only works out of the box on Unix-like OSes, so we have to port to Windows. Today, we test cross-platform support with CI/CD as we don't have any complex platform-specific logic, but once we get into the territory of FFI, that changes the whole landscape - we need local access to multiple platforms for development work and testing. On the other hand, you can argue that we can separate the FFI logic and ship it as a separate package and at least keep our main package pure Dart… However, it does not make much difference when developers run into issues and need to debug FFI-related code. |
If the Dart team provided and maintained a package that supported blocking cross-isolate communication, whether it was implemented with That said, I don't think that would really be a better solution than providing Even a synchronous, non-blocking API would be helpful here—a way to poll a |
I propose we strike a middle ground here: I think we can certainly release and own a package which will provide portable low-level synchronisation primitives (mutexes and condition variables) and which will work and be tested across all of Dart's officially supported platforms. However, SASS will have to build a suitable channel on top of this package, in the same way as you use

The reason I am proposing this is that I believe there are many different ways to use such a channel (e.g. 1-1, 1-n, n-n relationships between sending and receiving sides) and consequently many different ways to build it. However, building an implementation for a concrete use-case (e.g. SASS probably needs a 1-1 channel) is much simpler. I am happy to help you build the implementation - but it had better live in the SASS code base. Based on the discussions here I now believe that we have a clear migration path for SASS, which means we are unblocked to move forward with |
@mraleph Will it be possible to use the work on Lightweight Isolates & Faster isolate communication to make the solution that you have provided in this gist more efficient? E.g. to simulate a synchronous Isolate.run:

// dart:isolate
/// ...
/// The result is sent using [exit], which means it's sent to this
/// isolate without copying.
/// ...
static Future<R> run<R>(FutureOr<R> computation(), {String? debugName}) ...
/// ...
/// The result is sent using [exit], which means it's sent to this
/// isolate without copying.
/// ...
static R runSync<R>(FutureOr<R> computation(), {String? debugName}) ...

From a usability point of view, I think that a way to unwrap Futures into a synchronous value, even if it is blocking the current isolate, is very useful. I've tried to capture some of my reasoning for why I believe that here. |
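For illustration, a hypothetical call site for such an API (runSync as declared above, which does not exist in dart:isolate today; the file name is just an example) could look like:

import 'dart:io';
import 'dart:isolate';

// Blocks the calling isolate until the file has been read in a worker isolate.
String loadConfigSync() =>
    Isolate.runSync(() => File('config.json').readAsString());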
Mutexes and condition variables aren't enough—we also need a way to transfer data. But if the package includes that and we don't need to use

I'll emphasize again, though, that I think having built-in blocking Isolate API support is likely to be much more user-friendly and a better fit with existing Dart metaphors.
I'm not comfortable with declaring Sass unblocked on removing |
Dart provides isolates as a method of concurrency. Currently all cross-isolate communication is asynchronous, and Dart doesn't have any official way to turn an asynchronous computation into a synchronous result. The waitFor function of dart:cli tries to get around that, but in a way which is not expected to scale, and which can break abstractions and invariants inside a single isolate.

This is a proposal for a blocking receive-port feature which can block one isolate until it receives data from another isolate.
Blocking Receive-Port
A blocking receive-port is a low-level receive-port, like RawReceivePort, that you can ask for the next event synchronously, and it will block the entire isolate until such data becomes available. The receive-port will have to buffer events until they are asked for, so it's not as trivial as a RawReceivePort, but also not as complicated as the stream behavior of ReceivePort.

The class interface is defined as:
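The interface listing itself was not captured in this copy of the thread. Purely as a sketch, inferred from the behavior described below rather than taken from the proposal, it could look roughly like this:

import 'dart:isolate';

// Sketch only; not the proposal's actual definition.
abstract class BlockingReceivePort<T> {
  /// The send-port whose messages this port receives and buffers.
  SendPort get sendPort;

  /// Returns the oldest buffered object if one is available; otherwise
  /// blocks the entire isolate until an object arrives, then returns it.
  T next();

  /// Closes the port; no further objects can be received.
  void close();
}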
When a call to next() occurs, the entire isolate blocks. Incoming port events and timer events might be enqueued, but they won't trigger Dart code in the isolate. A control port event can potentially be handled unless it explicitly states that it won't happen until control returns to the event loop. This blocking lasts until an object is received on the blocking receive-port, at which point that object is returned from the call to next(). As such, the call to next() appears to be synchronous.

If objects have been buffered in the blocking receive-port prior to calling next(), it can continue immediately by dequeuing and returning the oldest object.

This blocking receive-port makes it possible to synchronously wait for asynchronous computations which happen in a different isolate, while not breaking the asynchronous model of the blocked isolate.
Isolate keep-alive
Currently any open receive-port (ReceivePort or RawReceivePort) keeps an isolate alive, because the isolate can potentially be made to do more work by sending it a port message.

A BlockingReceivePort does not keep an isolate alive except while calling next(). If no-one has called next and the isolate is otherwise not being kept alive by anything, then an incoming event will not trigger any code to be run, so the isolate will still never do any further computation, and it can be shut down.

While an isolate is calling next on a blocking receive-port, the run-time system may be able to detect that no other live isolate has access to the send-port which can awaken the blocked isolate. In that case, the blocked isolate is effectively dead and can be disposed.

Example Use
We can build a synchronous run function from a blocking port, perhaps as a static on Isolate:
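The example code was also not captured here. A rough sketch of the idea, reusing the hypothetical BlockingReceivePort interface sketched earlier and assuming that spawning a worker does not require the blocked isolate's event loop to make progress, might be:

import 'dart:async';
import 'dart:isolate';

// Sketch only: a synchronous run built on the proposed blocking port.
R runSync<R>(FutureOr<R> Function() computation) {
  final port = BlockingReceivePort<R>(); // hypothetical constructor
  // The Future returned by spawn is deliberately ignored; this assumes the
  // VM can complete the spawn while this isolate is blocked in next().
  Isolate.spawn((SendPort reply) async {
    // Deliver the result and shut the worker down without copying.
    Isolate.exit(reply, await computation());
  }, port.sendPort);
  final result = port.next(); // Blocks this isolate until the result arrives.
  port.close();
  return result;
}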
If spinning up a new isolate per request is too much, we can also create a reusable runner (using the same helper classes).