Put inference parameters in a dedicated object #18496
Conversation
Good tests of inference performance are the sys.so build, and some of the longer-running test suites like subarray.
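For instance, a rough, hedged way to compare before and after (assuming a source checkout; whether this is the most representative workload is an assumption, not something stated above):

```julia
# Rough comparison only, not a precise benchmark: time the sysimg (sys.so)
# build from the shell (e.g. `make` in a source checkout), and time a
# longer-running test suite from the REPL.
@time Base.runtests(["subarray"])   # assumes the test suite is available in this build
```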
```julia
cached::Bool

# parameters limiting potentially-infinite types
MAX_TYPEUNION_LEN::Int
```
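For context, a minimal sketch of how such a parameter object could group the entry-point flags and the type-growth limits; the extra field names and the example values are assumptions, not the actual base/inference.jl definition:

```julia
# Hypothetical, simplified parameter object; only `cached` and
# MAX_TYPEUNION_LEN appear in the snippet above, the rest is illustrative.
struct InferenceParamsSketch
    # entry-point behaviour
    cached::Bool              # store results in the code cache?
    optimize::Bool            # run the optimization passes?
    # parameters limiting potentially-infinite types
    MAX_TYPEUNION_LEN::Int
    MAX_TYPE_DEPTH::Int
end

# example construction with illustrative values
params = InferenceParamsSketch(true, true, 3, 7)
```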
I would just keep all of these as global constants unless you have a use case for changing them on some calls.
I don't really like that this makes a whole bunch of simple pure functions depend on an extra parameter that they usually don't use.
The only reason we currently pass …
I agree, but I felt like keeping some …
Right, didn't realize this, that explains the hang.
OK, so I've changed that behavior in this PR, propagating all parameters but not caching the result instead. The reason I wanted to propagate parameters for an entire inference "cycle" is because of the hooks I'm testing in #18338. Replacing a call applies to every function in a cycle, not only the top one. By propagating all parameters, including …
Force-pushed from 673194b to f0a32b1
Force-pushed from 59c365c to 6f45396
Updated and rebased. EDIT: also, no performance difference running …

I would still like to change the caching behavior, where inferring with …
```diff
@@ -3951,10 +3980,13 @@ function reindex_labels!(sv::InferenceState)
 end
 end
 
-function return_type(f::ANY, t::ANY)
+function return_type(f::ANY, t::ANY, params::InferenceParams=InferenceParams())
 # TODO: are all local calls to return_type processed by pure_eval_call?
```
no
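For illustration, a hedged example of how the new optional argument might be used; the `Core.Inference` module path and the `inlining` keyword are assumptions based on the snippets elsewhere in this review:

```julia
# Hypothetical usage of the signature shown in the diff above.
params = Core.Inference.InferenceParams(inlining = false)
Core.Inference.return_type(+, Tuple{Int, Float64}, params)  # would infer Float64
```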
```diff
@@ -1590,7 +1618,7 @@ function typeinf_edge(method::Method, atypes::ANY, sparams::SimpleVector, caller
 end
 end
 end
-frame = typeinf_frame(code, true, true, caller)
+frame = typeinf_frame(code, caller, InferenceParams(caller.params; cached=true))
```
This is a hot path; it would be nice to avoid the kwargs dispatch and allocation here.
OK, removed.
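As an aside, a self-contained sketch of the general technique (hypothetical names and modern syntax, not the PR's actual fix): keep the keyword constructor for entry points, and give hot paths a positional "copy with one field changed" helper so they skip keyword handling entirely:

```julia
# Standalone illustration; ParamsSketch and with_cached are made-up names.
struct ParamsSketch
    cached::Bool
    inlining::Bool
    tupletype_len::Int
end

# Keyword constructor: convenient at entry points, but each call goes through
# keyword argument handling.
ParamsSketch(; cached = true, inlining = true, tupletype_len = 15) =
    ParamsSketch(cached, inlining, tupletype_len)

# Positional "copy with cached overridden": cheap for an immutable struct and
# suitable for a hot path.
with_cached(p::ParamsSketch, cached::Bool) =
    ParamsSketch(cached, p.inlining, p.tupletype_len)

p  = ParamsSketch()           # defaults via keywords
p2 = with_cached(p, false)    # hot-path override without keywords
```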
Can we leave out …
Yeah sure, I almost did already but left it in not knowing what a better solution would look like. I can also maintain this PR until after #17057, if you like.
the rest of it shouldn't conflict (other than superficially, and I already know I'll have to deal with rebasing other stuff anyways)
Force-pushed from 7048bc4 to a19f415
I've also left out …
Force-pushed from a19f415 to c32870c
```julia
# reasonable defaults
InferenceParams(;inlining::Bool=inlining_enabled(),
                tupletype_len::Int=15, tuple_depth::Int=16,
                tuple_splat::Int=4, union_splitting::Int=4) =
```
this is swapping the defaults for depth and splat
That's bad. Nicely spotted, fixed.
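For reference, a hedged sketch of the corrected version, assuming the pre-PR globals this constructor mirrors were MAX_TUPLE_DEPTH = 4 and MAX_TUPLE_SPLAT = 16; the positional inner call and its argument order are also assumptions:

```julia
# Sketch of the fixed defaults only, not the actual committed code.
InferenceParams(; inlining::Bool = inlining_enabled(),
                  tupletype_len::Int = 15, tuple_depth::Int = 4,
                  tuple_splat::Int = 16, union_splitting::Int = 4) =
    InferenceParams(inlining, tupletype_len, tuple_depth, tuple_splat, union_splitting)
```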
Force-pushed from 73f3de3 to 56e8b60
This allows overriding parameters, while reducing global state and simplifying the API.
Force-pushed from 56e8b60 to dc2f727
Is there a particular reason those 4 globals were moved into the struct, and the other 2 were kept global?
I initially put them all in the struct, but that caused too much churn (as you mentioned above, making a whole bunch of simple pure functions depend on an extra parameter). So I took the ones causing the largest diff out.
Out of curiosity, when we say "configurable", is e.g. …
@andyferris: no, apart from changing the default value in …
Ok - thanks @maleadt |
Part of #18338, based on #18396 for now because it uses kwargs in inference.

Entry-point function parameters to type inference (`needtree`, `optimize`, `cached`) are now saved in the `InferenceState`, which makes them available in more places. However, using `cached` that way breaks `abstract_call_gf_by_type`. Any reason this should always be saving its results?

Performance difference in loading `base/inference.jl` seems undetectable. Suggestions for better testing of type inference performance?

cc @JeffBezanson @vtjnash
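To make the first point concrete, a minimal hedged sketch of entry-point flags travelling with the inference state; names other than the three flags are hypothetical, and the struct is heavily simplified relative to base/inference.jl:

```julia
# Simplified illustration only; the real InferenceState carries far more state.
mutable struct InferenceStateSketch
    optimize::Bool   # run optimization passes on the result
    cached::Bool     # store the result in the code cache
    needtree::Bool   # keep the inferred AST, not just the return type
end

# An entry point records the flags once; downstream code reads them from the
# state instead of receiving them as extra arguments.
function typeinf_entry(optimize::Bool, cached::Bool, needtree::Bool)
    sv = InferenceStateSketch(optimize, cached, needtree)
    # ... inference itself would run here, consulting sv.cached and friends ...
    return sv
end

typeinf_entry(true, true, false)
```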