[backport] Tracking fix #4093
Conversation
Force-pushed from eb99114 to 949f4b6
Looks like some unrelated tests are failing. We may need to dig in to see whether those failures relate to the substance of these changes, or are incidental to the way this has been backported.
Force-pushed from 949f4b6 to 000217b
Apologies—I was too quick to approve this last night. I decided to give this one more careful step-through with the reproduction case outlined in #4038.
I noticed two issues:
- We don't correctly populate `exp_sample_config._config_call_dict` with `_get_config_call_dict()` in the sampling case. This ends up looking like a mismatched config, when it really isn't: the configs are present in `experimentally_parsed`, they're just never populated. I believe this issue is also present in the final code from Fix tracking bug for jinja sampling #4048, so we'll need to fix this in `main`, too.
- I also think we're going to skip populating `result` (and therefore firing a tracking event) when the experimental parser fails, because of an extra conditional check. Thankfully, I don't believe this was an issue in Fix tracking bug for jinja sampling #4048.
```python
super().render_update(node, config)

if sample and isinstance(experimentally_parsed, dict):
```
We want to populate `result` any time we're sampling, regardless of whether `experimentally_parsed` is a `dict` (i.e. regardless of whether it succeeded).
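The control flow being asked for can be sketched as follows. This is a hypothetical, simplified sketch rather than dbt's actual code: `sample` and `experimentally_parsed` mirror the names in the thread, and the result labels are placeholders.

```python
def sample_result(sample, experimentally_parsed):
    # Populate `result` whenever we're sampling, even when the experimental
    # parser failed (i.e. `experimentally_parsed` is not a dict).
    result = []
    if sample:
        if isinstance(experimentally_parsed, dict):
            result.append("experimental_parser_success")  # placeholder label
        else:
            # Without this branch, a failed experimental parse would skip
            # `result` entirely and no tracking event would fire.
            result.append("experimental_parser_cannot_parse")  # placeholder label
    return result
```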
For simplicity, I think it would be okay to consolidate this logic a bit more. In v0.21 we're only sampling in the specific case that `not flags.USE_EXPERIMENTAL_PARSER and sample`. This is different from the logic going forward, where we have multiple conditional branches / types of sampling.
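A minimal sketch of the consolidated check, assuming the flag name from the comment; this is not the actual dbt implementation.

```python
class Flags:
    # Stand-in for dbt's global flags module; name taken from the comment.
    USE_EXPERIMENTAL_PARSER = False

def should_sample(sample: bool) -> bool:
    # In v0.21, sampling only happens when the experimental parser is off,
    # so the two conditions collapse into a single branch.
    return not Flags.USE_EXPERIMENTAL_PARSER and sample
```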
```python
model_parser_copy = self.partial_deepcopy()
exp_sample_node = deepcopy(node)
exp_sample_config = deepcopy(config)
```
I think we're missing this line:

```python
exp_sample_config._config_call_dict = _get_config_call_dict(experimentally_parsed)
```
Is there a reason that shouldn't just be part of `populate`?
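One way that suggestion could look, as a hypothetical sketch; `_get_config_call_dict` and `populate` here are simplified stand-ins mirroring the names in the thread, not dbt's real signatures.

```python
def _get_config_call_dict(experimentally_parsed):
    # Stand-in: pull the config() call arguments out of the parsed result.
    return dict(experimentally_parsed.get("configs", []))

def populate(node, config, experimentally_parsed):
    # Fully populate the sample copies, including the config-call dict
    # that the backport was missing.
    config._config_call_dict = _get_config_call_dict(experimentally_parsed)
    return node, config
```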
Tested locally as in #4109 (review). As soon as the tests are all passing, LGTM!
resolves #4038 for 0.21.latest, backport of #4048
Description
This is a manual backport: the commits on `main` rely on the static parser already being on by default, so an actual review of this is necessary. I'm simply trying to implement the same logic of fully populating a node for a more accurate comparison.
Checklist
- I have updated `CHANGELOG.md` and added information about my change