
pmap slowdown since 0.4 #17301

Closed
andreasnoack opened this issue Jul 6, 2016 · 6 comments
Labels
parallelism (Parallel or distributed computation), performance (Must go faster), regression (Regression in behavior compared to a previous version)
Milestone
0.5.x

Comments

@andreasnoack (Member) commented Jul 6, 2016

Inspired by a julia-users question, I've timed pmap, and it has become quite a bit slower. It has gone through various changes during the development of 0.5, and the test case is not of the kind that pmap is meant to do well on, but since the slowdown is quite large, I'll report it anyway.

Test code

julia> addprocs(10)
10-element Array{Int64,1}:
  2
  3
  4
  5
  6
  7
  8
  9
 10
 11

julia> x = [bitrand(4,2) for i = 1:100000];

julia> @time pmap(identity, x);

The slowdown didn't start right after 0.4 came out, so the first commit below is from January, after we switched to LLVM 3.7.1.

d4749d2 (Old)

Takes 10.619339 seconds.

853317d (New)

Takes 18.537377 seconds.

UPDATE:

This might be a general issue with our parallel infrastructure. With the same version of DistributedArrays.jl, I get:

d4749d2 (Old)

julia> @time distribute(x);
  2.828939 seconds (4.56 M allocations: 270.816 MB, 2.46% gc time)

julia> @time distribute(x);
  0.560839 seconds (2.54 M allocations: 184.086 MB, 9.37% gc time)

853317d (New)

julia> @time distribute(x);
 14.178851 seconds (44.49 M allocations: 1.438 GB, 17.24% gc time)

julia> @time distribute(x);
 12.930582 seconds (43.28 M allocations: 1.368 GB, 22.75% gc time)

cc: @amitmurthy

@amitmurthy (Contributor)

The second issue is tracked at JuliaParallel/DistributedArrays.jl#72.

I will look into the first one.

@JeffBezanson added the performance, parallelism, and regression labels on Jul 7, 2016
@amitmurthy (Contributor)

Just to note that pmap supports batching in 0.5 (see the sketch after the timings below).

For the test case above on my machine:
@time pmap(identity, x); takes 15.3 seconds on master.
@time pmap(identity, x; batch_size=1000); takes 0.9 seconds on master.
@time pmap(identity, x); takes 6.6 seconds on 0.4.
@time pmap(identity, x); takes 8.8 seconds with #17331.
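
A minimal sketch of the batched call, assuming the same ten workers and the x from the test case above (batch_size=1000 is just the value used in the timings here; the numbers themselves are machine-dependent):

julia> addprocs(10);

julia> x = [bitrand(4,2) for i = 1:100000];

julia> @time pmap(identity, x);                   # default: one remote call per element

julia> @time pmap(identity, x; batch_size=1000);  # batched: elements shipped to workers in groups of 1000

Batching amortizes the per-element scheduling and serialization overhead, which dominates when the mapped function is as cheap as identity.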

@multidis

I am observing a factor of 18 to 20 slowdown with pmap on Julia 0.5. The more cores are made available to pmap, the slower it gets. Please see:
JuliaNLSolvers/Optim.jl#290 (comment)

@StefanKarpinski added this to the 0.5.x milestone on Oct 11, 2016
@StefanKarpinski (Member)

Does using a batch size help at all, or is there some other issue?

@amitmurthy (Contributor)

#17331 addressed a large part of the regression. @multidis's issue was a little different (see the referenced issue in Optim).

@StefanKarpinski added and then removed the help wanted label on Oct 27, 2016
@KristofferC (Member)

Seems to be around a 15-20% difference now. Perhaps enough to close?

@yuyichao removed the help wanted label on May 25, 2017