No optimization for Scala parallelized collections #25
From @fdietze on November 10, 2011 16:21

It seems like there is no optimization for the parallelized collections in Scala.

This is optimized:

`(0 until 1000).map`

While this is not:

`(0 until 1000).par.map`

What's the easiest way to get the parallelized collections optimized? The CL-Collections?

Thanks for this great compiler plugin. It helped me a lot in speeding up my existing project.

Copied from original issue: nativelibs4java/nativelibs4java#199
From @ochafik:

Hi fdietze,

Thanks for your feedback! The plugin optimizes code that leaves room for optimization. This is the case for Range, where a rewrite into while loops can speed things up a lot. With parallel collections, though, it is not clear how to make the code run faster, since rewriting the calls into while loops is no longer an (easy) option. What kind of optimization do you have in mind?

Cheers
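For concreteness, here is a hand-written sketch of the kind of Range-to-while rewrite described above. It is illustrative only, not the plugin's actual output; the object name and loop body are made up:

```scala
// Illustrative sketch of rewriting a Range traversal into a while loop.
object WhileLoopSketch extends App {
  val n = 1000

  // Before: goes through Range.foreach and allocates a closure.
  var sum1 = 0
  (0 until n).foreach(i => sum1 += i)

  // After: a plain while loop, no closure allocation, no Range machinery.
  var sum2 = 0
  var i = 0
  while (i < n) {
    sum2 += i
    i += 1
  }

  assert(sum1 == sum2)
  println(sum2)
}
```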
From @fdietze on November 11, 2011 0:6

Hi ochafik,

thanks for your answer. I'm thinking about something similar to what OpenMP does. Because we have loops with a fixed number of iterations, we can split the range into chunks of size (iterations / #cpus) and run them independently on different threads, with while loops. But I don't know if that's as trivial as the other transformations are...
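A rough hand-written sketch of that chunking idea, using plain threads (purely illustrative; the chunk arithmetic and the `ChunkedLoopSketch` name are assumptions, not anything the plugin generates):

```scala
// Illustrative sketch: split a fixed-size index range into one contiguous
// chunk per CPU and run each chunk as a while loop on its own thread.
object ChunkedLoopSketch extends App {
  val n = 1000000
  val cpus = Runtime.getRuntime.availableProcessors()
  val out = new Array[Int](n)

  val threads = (0 until cpus).map { t =>
    // Chunk boundaries: thread t handles indices [t*n/cpus, (t+1)*n/cpus).
    val start = t * n / cpus
    val end   = (t + 1) * n / cpus
    new Thread(new Runnable {
      def run(): Unit = {
        var i = start
        while (i < end) { // the inner loop is a plain while loop
          out(i) = i * 2
          i += 1
        }
      }
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())

  println(out(n - 1)) // 1999998
}
```

Each thread owns a disjoint slice of the index range, so no synchronization is needed beyond the final `join()`.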
From @ochafik:

Hi fdietze,

This does indeed seem far from trivial, especially without hints to the compiler (and my guess is that the overall gain, if any, would not justify the work).

Cheers
From @ochafik:

For the record, here's a document that explains how OpenMP parallel loops work and what they look like: