
precompiling cfunction calls #12256

Closed
stevengj opened this issue Jul 21, 2015 · 7 comments · Fixed by #26486
Labels
compiler:precompilation Precompilation of modules performance Must go faster

Comments

stevengj (Member)

I noticed in JuliaPy/PyCall.jl#167 that, even after running Base.compile, some cfunction calls in my __init__ function were taking a significant amount of time (0.8 seconds for 6 functions).

I could work around this by adding some explicit precompile lines to ensure that these functions were compiled, but this raises three problems:

  • Calls to cfunction in __init__ should really cause the corresponding functions to be precompiled automatically.
  • Even after precompiling, cfunction was quite expensive: 0.25 seconds for 6 functions.
  • Calling precompile causes the functions to be compiled at the wrong time — before __init__ is called rather than after. Since my __init__ initializes some global const variables that are referenced in the precompiled functions, this means that type inference does not work properly when precompiling (since Julia does not know the types of those globals yet).
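
A minimal sketch of the pattern being described (hypothetical names, not the actual PyCall code; this uses the Julia 0.4-era `cfunction` call from this issue, which modern Julia has since replaced with the `@cfunction` macro):

```julia
module Example

# Stand-in for a real C-callable callback (hypothetical).
my_callback(x::Cint) = x + Cint(1)

function __init__()
    # Workaround from the issue: force compilation explicitly, because the
    # cfunction call below does not cause precompilation on its own. Note
    # this also runs at the wrong time relative to __init__'s own global
    # initialization, which is the third problem listed above.
    precompile(my_callback, (Cint,))

    # Runtime cfunction call (0.4-era syntax); this is where the
    # 0.8 seconds for 6 functions was being spent:
    ptr = cfunction(my_callback, Cint, (Cint,))
    # ...pass ptr to a C library as a callback...
end

end
```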

cc: @vtjnash

@stevengj stevengj added the performance Must go faster label Jul 21, 2015
vtjnash (Member) commented Jul 21, 2015

Eventually, we need to disable the speed boost provided by calling global const from __init__, since it causes potentially serious bugs in type inference / constant propagation (e2d842a#diff-c3408de56388e517a8975797c90d9448R385), although we'll likely need to implement some alternative around the same time.

JeffBezanson (Member)

One thing that would be neat, and should be entirely safe, is to automatically do precompile(__init__, ()) as part of compile.
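
From the package side, the suggestion amounts to something like the following during precompilation (a sketch only, not the actual Base.compile implementation; `M` is a hypothetical module being precompiled):

```julia
# Sketch: after evaluating module M's top-level code during precompilation,
# automatically queue its __init__ for compilation as well.
if isdefined(M, :__init__)
    precompile(M.__init__, ())
end
```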

vtjnash (Member) commented Jul 21, 2015

I'm pretty sure you already thought of that before, and implemented it:

```c
if (f) jl_get_specialization(f, (jl_tupletype_t*)jl_typeof(jl_emptytuple));
```

stevengj (Member, Author)

Yes, precompile(__init__, ()) made no difference, so it seems that jl_get_specialization does not specialize the functions in the cfunction calls within __init__ (or any other function) unless you precompile them explicitly.

vtjnash added a commit that referenced this issue Aug 18, 2015
stevengj (Member, Author)

Unfortunately, this doesn't seem to be completely fixed. When I remove the manual precompilation from PyCall/src/PyCall.jl, the load time goes from 1.35s to 1.68s on my machine.

@stevengj stevengj reopened this Aug 18, 2015
vtjnash (Member) commented Aug 18, 2015

We pre-infer __init__, but I'm not sure if we pre-compile it.

stevengj (Member, Author)

@vtjnash, adding precompile(__init__, ()) makes no difference, so that's not the problem.

@stevengj stevengj added the compiler:precompilation Precompilation of modules label Aug 19, 2015