Adding in timer Futures (e.g. setTimeout and setInterval) #121
Conversation
Yes. After the first call to
I agree. As for the names, well... I guess first we should look at other promise libraries and see how they named similar functionality? If there's an existing convention we should probably stick to it.
Great!
Well, this problem isn't really encountered by Promise libraries (Promises don't support streaming). And most stream implementations in JavaScript are push-based, not pull-based, so they also don't have this problem. In any case, this isn't about copying a JavaScript API, it's about providing a Rust
2109fab to ee59ba4 (Compare)
Requesting code review on this. The non-buffered and throttled versions of Also, I think
We should think carefully about how we go about this. JavaScript promises are eager, unlike Rust's futures, and are not cancellable, so the Some more concrete goals:
The first goal is most important: to achieve that, either we have

The second goal is a bit trickier. First, the MPSC stream really is overkill in this case: all we need is an integer counter indicating how many outstanding times the stream should return an item. The poll method then just checks if the counter is more than zero and, if so, decrements it and returns

Lastly, we can optimise task wake-up by checking if we are already running asynchronously: for example, the callbacks invoked by
Yep, this seems like the default
This is easier to implement if you get rid of the MPSC queue - if you use a counter as I described above, you can just replace the counter with a boolean flag.
This probably cannot be implemented well on top of
@Diggsey Did you actually look at the code? It's not using Promises at all.
That is a good point, I'll make that change.
But it already does that: because Promises run on the microtask queue, and the microtask queue is immediately run after macrotasks (such as setTimeout and setInterval), they will run synchronously. I'm fine with changing the Executor to guarantee that it will run synchronously, but that's a change that should be made in a different PR.
Yes, and enough with the passive aggressive comments. Please stop and think before you reply.
https://github.com/koute/stdweb/pull/121/files#diff-a044f9ff5e011566c96f4c8b9be09879R17 You are using promises for

On another note, it would be helpful to split out cancellation from futures-compatibility: have a public wrapper around
Yes, this is not a correctness thing, purely an optimisation to avoid extra interop between wasm and javascript which is currently a huge bottleneck, and is always going to be relatively slow, even with improvements to stdweb. Each time a micro-task is scheduled it involves an extra 4 passes through the interop layer, so we should at least start thinking about how to deal with that.
@Pauan @Diggsey Apologies for the late reply. I very much appreciate your contributions and great insight from the comments you make, but I'd like to ask you both one extra favor - let's get along, okay? (: We all have strong opinions, but it's usually more productive for all involved to limit the passive-aggressiveness to a minimum. Thanks!
Just curious - are there going to be any updates to this? Right now I really wanted to use a

Actually I got around it by making a recursive function with set_timeout. If anyone else gets here and wants a quick and dirty solution, here it is:

fn recurse() {
    console!(log, "ddd");
    stdweb::web::set_timeout(recurse, 500);
}
@robert-j-webb I'll see if I can get this done soon.
Sorry for the delay, I got this all fixed, so it should be ready for review. I don't like that people searching for
@koute Ping.
@koute Ping. This is ready for review.
match sender.send( () ) {
    Ok( _ ) => {},
    Err( _ ) => {},
};
It doesn't really matter, but I guess let _ = sender.send(()) would be prettier than a match.
impl Future for Wait {
    type Item = ();
    // TODO use Void instead
    type Error = Error;
Is there any reason you can't use Void now?
@Pauan I'm terribly sorry about the delay! This PR slipped by me somehow. Anyhow, it'd be nice to have some tests, although since it's not feasible yet it looks mostly good to me! Thanks!
@koute @CryZe This is not ready to merge yet. It still needs an implementation of setInterval, and also documentation.

I want to get feedback on two things:

1. The code for WaitFuture. In particular, is it okay to call .drop() on a Once callback multiple times?

2. I want to implement setInterval, however there are actually many different ways to implement it. Let's say you use interval(1000) to create a Stream; it might behave in any of these ways:

- Every 1 second it sends a () message, regardless of whether the consumer is ready or not. These messages will be buffered, and when the consumer is ready it might pull multiple () messages at once.
- Same behavior as above, except it only buffers a single message, so when the consumer is ready it will never pull more than 1 message at once.
- After the consumer pulls a message, it will wait 1 second before it sends another message. In other words, it's guaranteed that the consumer will not receive a message more than once every 1 second (no matter how fast or slowly it pulls).

It seems to me that all three of these behaviors are useful (in different situations), so we should probably have three separate functions. What should these functions be called?