Slightly improve MinOpts JIT TP #105250

Merged: 10 commits into dotnet:main, Jul 24, 2024

Conversation

EgorBo (Member) commented Jul 22, 2024

A few tricks to mitigate the regressions that raised concerns in #105190.

PS: I still want to rewrite all of the OptimizationEnabled/Disabled usages into an OptLevel, as Andy once suggested, but I didn't want to do it here.


The dotnet-issue-labeler bot added the area-CodeGen-coreclr label (CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI) on Jul 22, 2024.

Contributor:

Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch
See info in area-owners.md if you want to be subscribed.

assert(compMinOptsIsSet);
return canUseAllOpts;
}
bool Tier0OptimizationEnabled() const

Member:
A minor nit is that having both OptimizationEnabled and Tier0OptimizationEnabled may be a bit confusing. Is there any idea what we would call the levels if this were some kind of OptimizationLevel enum? Just looking for alternative naming ideas.

I think this is fine for now, because getting better names is a more involved change; I just wanted to check whether there are any other ideas/suggestions for the interim.

EgorBo (Member Author):

Yep, I agree that it doesn't look pretty, but a proper refactoring would be quite big and hard to review.

EgorBo (Member Author):

> Is there any idea what we would call the levels if this were some kind of OptimizationLevel enum?

Not yet. Andy suggested using OptLevel: NoOpts, Size, Speed, where Size implies "light-weight". The problem is that we already have optLevel in the JIT, but we never use it (it's always Blended), so it all needs some rethinking.
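
For illustration only, here is one possible shape such an enum could take. The names follow Andy's NoOpts/Size/Speed suggestion; nothing below is part of this PR, and folding in the existing (always-Blended) optLevel is exactly the open question.

// Hypothetical sketch, not part of this PR: one way the suggested
// optimization levels could be expressed as a single enum.
enum class OptLevel
{
    NoOpts, // MinOpts / Tier0: no optimization, best JIT throughput
    Size,   // "light-weight" optimization, favoring code size
    Speed   // full optimization, favoring code speed
};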

}

unsigned level = 1;
if (tree->OperIsSimple())

Member:

nit: Might be worth a small comment saying that we are intentionally not handling hwintrinsics (they otherwise normally support GTF_REVERSE_OPS and other considerations).

EgorBo (Member Author):

I'll add a comment, but we skip a lot of trees here; the set was driven by asmdiff + tpdiff results.

Member:

> but we skip a lot of trees here

👍. I mostly thought it might be worth a callout because hwintrinsic handling normally mirrors a lot of the SimpleOp handling (unlike any other node), so a special note seemed warranted.

tannergooding (Member) left a comment:

This LGTM and is a very nice improvement overall.

A couple of questions, and one missed place where the new logic could also be used.

EgorBo (Member Author) commented Jul 23, 2024

@jakobbotsch @tannergooding cc @dotnet/jit-contrib Could you please take another look? I've pushed more changes since your last approval.

TL;DR: I've introduced a stripped-down version of gtSetEvalOrder for MinOpts, gtSetEvalOrderMinOpts. It doesn't calculate costs at all, so only operand swapping is preserved for certain operations. It contains a few quirks to minimize the size-regression impact.

As we discussed on Discord, it might make sense to introduce a single IR walk, e.g. before Rationalization, to do the swap where profitable, but I am leaving that for a future PR.

Diffs show noticeable improvements for MinOpts, as expected.
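
To make the TL;DR concrete, here is a standalone sketch of the idea. This is not the actual gtSetEvalOrderMinOpts code; the Node type and every name below are invented for the example. The point is that only a rough per-operand depth is computed, and operands are flagged for reversed evaluation where that looks profitable, with no cost calculation at all.

#include <algorithm>
#include <cstdio>

// Minimal stand-in for a GenTree node; real JIT nodes carry far more state.
struct Node
{
    Node* op1;
    Node* op2;
    bool  reverseOps; // evaluate op2 before op1 at codegen time
    bool  hasStore;   // conservative "this subtree contains a store" summary
};

// Returns a rough depth ("level") for the subtree and flags binary nodes
// whose second operand is deeper for reversed evaluation, provided the
// swap cannot change behavior. No execution costs are computed.
unsigned SetEvalOrderMinimal(Node* tree)
{
    if (tree == nullptr)
    {
        return 0;
    }

    unsigned level1 = SetEvalOrderMinimal(tree->op1);
    unsigned level2 = SetEvalOrderMinimal(tree->op2);

    if ((tree->op1 != nullptr) && (tree->op2 != nullptr) && (level2 > level1) && !tree->op2->hasStore)
    {
        tree->reverseOps = true;
    }

    return 1 + std::max(level1, level2);
}

int main()
{
    Node leafA{nullptr, nullptr, false, false};
    Node leafB{nullptr, nullptr, false, false};
    Node leafC{nullptr, nullptr, false, false};
    Node inner{&leafA, &leafB, false, false};
    Node root{&leafC, &inner, false, false};

    SetEvalOrderMinimal(&root);
    printf("root reverses operands: %s\n", root.reverseOps ? "yes" : "no");
    return 0;
}

The real implementation additionally carries the quirks mentioned above to keep the code-size impact small.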

Comment on lines +6249 to +6250
// We don't re-order operands in LIR anyway.
return 0;

Member:

Does it ever get called?

EgorBo (Member Author):

> Does it ever get called?

Not sure, but the original function gtSetEvalOrder contained this check

// Check for a nilary operator
if (op1 == nullptr)
{
    assert(op2 == nullptr);

Member:

In what cases do we get here?

EgorBo (Member Author), Jul 23, 2024:

It's also copied from the original function. It looks like GT_RETURN for void is one such case (and GT_RETFILT).

unsigned levelOp2 = gtSetEvalOrderMinOpts(op2);

bool allowSwap = true;
// TODO: Introduce a function to check whether we can swap the order of its operands or not.

Member:

Should this TODO be removed? I'm not sure I understand what it refers to. If it refers to extracting the logic below into a function, why not just do that in this PR?

EgorBo (Member Author):

I'd rather just remove the comment for now then; I don't mind. The idea is to share the quirks with the Tier1 version (especially for STOREIND/STORE_BLK).

Member:

It might be useful to leave a comment that this should be kept in sync with the quirks in the Tier1 version (and likewise in the Tier1 version).

EgorBo (Member Author):

Moved both to a single helper, to keep the quirks in one place

}

// In case op2 assigns to a local var that is used in op1, we have to evaluate op1 first.
if (gtMayHaveStoreInterference(op2, op1))

Member:

You probably don't need this expensive version in MinOpts. It can be the conservative (op2->gtFlags & GTF_ASG) != 0 that existed before #97409.

EgorBo (Member Author):

> You probably don't need this expensive version in MinOpts. It can be the conservative (op2->gtFlags & GTF_ASG) != 0 that existed before #97409.

Just tried it; it adds a +28k bytes size regression for the benchmarks.run_pgo collection.

Member:

Ah ok, fine to leave it then. (Actually, it looks like I explicitly called out doing it in MinOpts in that PR as well.)
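
To make the tradeoff in this thread concrete, here is a standalone sketch of the two kinds of checks being compared. The Expr type and all names below are invented for the example; the real code uses GTF_ASG summary flags and gtMayHaveStoreInterference on GenTree nodes.

#include <unordered_set>

// Minimal stand-in for an expression's summary information.
struct Expr
{
    std::unordered_set<int> localsRead;   // local variable numbers read
    std::unordered_set<int> localsStored; // local variable numbers stored to
    bool                    hasAnyStore;  // GTF_ASG-style "contains a store" flag
};

// Conservative check: refuse to reorder whenever op2 contains any store.
// Cheap, but it blocks swaps that are actually safe; that gap is what the
// +28k bytes size regression measured above comes from.
bool MayInterfereConservative(const Expr& op2, const Expr& /* op1 */)
{
    return op2.hasAnyStore;
}

// Precise check: only refuse when op2 stores to a local that op1 reads.
bool MayInterferePrecise(const Expr& op2, const Expr& op1)
{
    for (int lclNum : op2.localsStored)
    {
        if (op1.localsRead.count(lclNum) != 0)
        {
            return true;
        }
    }
    return false;
}

int main()
{
    Expr op1{{5}, {}, false}; // op1 reads local V05
    Expr op2{{}, {6}, true};  // op2 stores only to the unrelated local V06

    // The conservative check blocks the swap; the precise one allows it.
    bool blockedConservatively = MayInterfereConservative(op2, op1); // true
    bool blockedPrecisely      = MayInterferePrecise(op2, op1);      // false
    return (blockedConservatively && !blockedPrecisely) ? 0 : 1;
}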

EgorBo merged commit 5e1081f into dotnet:main on Jul 24, 2024 (108 checks passed).
EgorBo deleted the fix-tp-regressions branch on July 24, 2024 at 11:51.
The github-actions bot locked and limited conversation to collaborators on Aug 24, 2024.