looking for async.eachLimit equivalent? #79
Just some background on this issue and mapLimit equivalents in Highland: in Highland, .map has no opinion on parallelism. This was one of the main headaches in async, where you could not compose execution order with the thing you wanted to do, so we ended up with separate functions like mapSeries and mapLimit. As a rough guide:
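For example, an async.mapLimit-style call can be composed from map plus parallel, roughly like this (a minimal sketch; `fetchItem` is a hypothetical node-style async function, not part of the thread):

```js
var h = require('highland');

// A hypothetical node-style async function.
function fetchItem(id, callback) {
  setTimeout(function () { callback(null, id * 2); }, 100);
}

var ids = [1, 2, 3, 4, 5];

// Roughly equivalent to async.mapLimit(ids, 2, fetchItem, callback):
h(ids)
  .map(h.wrapCallback(fetchItem)) // each id becomes a one-element stream
  .parallel(2)                    // consume at most 2 of those streams at a time
  .toArray(function (results) {
    console.log(results);         // [2, 4, 6, 8, 10], order preserved
  });
```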
Hope that helps :)
So is this expected behavior with `parallel`? I mean, it's a silly example, but intuitively it seemed like it should work?
@caolan so am I just doing it wrong?
@dweinstein parallel works with a stream of streams:

```js
h([1, 2, 3, 4])
  .map(function (x) { return x + 2; })
  // here I map each number to a Highland stream containing that single value
  .map(function (x) { return h([x]); })
  .parallel(2)
  .each(h.log);
```
What @greelgorke said :) - though we should probably make that a nicer error message! (pull requests welcome for that)
I could work on the error messages after I've finished the other topics.
@greelgorke that would be great! :)
@caolan a nicer error message would mean we throw if a value pulled in is not a stream?
@greelgorke good point - errors that happen at flow time should be passed down the pipeline. We currently don't wrap iterator calls (e.g. map, filter...) in try/catch to pass those sync errors down the pipeline, but we probably should!
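A version of map that traps synchronous throws and turns them into stream errors might look like this (a sketch built on Highland's public consume API, not the library's actual internals):

```js
var h = require('highland');

// Map that converts a synchronous throw in the iterator into a
// stream error instead of crashing the process (sketch only).
function safeMap(f, source) {
  return source.consume(function (err, x, push, next) {
    if (err) {
      push(err);            // pass existing errors along
      next();
    } else if (x === h.nil) {
      push(null, h.nil);    // end of stream
    } else {
      try {
        push(null, f(x));   // normal case: emit the mapped value
      } catch (e) {
        push(e);            // sync throw becomes a stream error
      }
      next();
    }
  });
}

// Usage: the bad input surfaces in .errors instead of throwing.
safeMap(JSON.parse, h(['{"a":1}', 'not json']))
  .errors(function (e) { console.error('bad input:', e.message); })
  .each(h.log);
```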
Well, my attempt is to check whether the value is a Highland stream (or to feature-detect, but that's less reliable in our case) and to produce an error if it doesn't pass the check.
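Something along these lines, presumably (a sketch using Highland's isStream utility; the function name and error wording are illustrative, not the eventual implementation):

```js
var h = require('highland');

// Sketch of the check: a value pulled by parallel must itself be a
// stream; if not, emit a descriptive error downstream instead of
// failing with a confusing TypeError.
function ensureStream(x, push) {
  if (h.isStream(x)) return true;
  push(new Error('parallel expects a stream of streams, but got: ' + x));
  return false;
}
```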
@greelgorke yes, that makes sense +1. I was talking about the more general case of 'flow-time' errors.
I'm taking this discussion over to #94 :)
What's the proper way to protect a limited resource (like the number of open fds/sockets) in a map? The use case here is that for each key in an S3 bucket I'd like to perform an operation. I'm treating the key list as a stream of values, and then I perform some request, like retrieving the headers stored for each object in the S3 bucket.
Is `parallel` the appropriate way to prevent the following error? I think with async I would use `async.eachLimit`, since the stuff inside the map returns right away and allows for the next socket to connect before the request is finished.
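For reference, the pattern the answers above converge on, applied to this use case, looks roughly like this (a sketch; `headObject` is a hypothetical node-style wrapper around the S3 HEAD request, and `keys` stands in for the bucket's key list):

```js
var h = require('highland');

// headObject(key, callback) is a hypothetical node-style function
// wrapping the S3 HEAD request.
var headStream = h.wrapCallback(headObject);

// Roughly equivalent to async.eachLimit(keys, 10, headObject, done):
h(keys)              // stream of S3 key names
  .map(headStream)   // each key becomes a one-element stream of headers
  .parallel(10)      // never more than 10 requests (sockets) in flight
  .errors(function (err) { console.error(err); }) // log and keep going
  .done(function () {
    console.log('all keys processed');
  });
```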