[NativeAOT] Adding CET support #102680
Changes from 17 commits
@@ -536,6 +536,61 @@ int32_t __stdcall RhpVectoredExceptionHandler(PEXCEPTION_POINTERS pExPtrs)
        return EXCEPTION_CONTINUE_SEARCH;
    }

    // The following would work on ARM64 as well, but there is no way to test right now.
#ifdef TARGET_AMD64

#ifndef STATUS_RETURN_ADDRESS_HIJACK_ATTEMPT
#define STATUS_RETURN_ADDRESS_HIJACK_ATTEMPT ((uintptr_t)0x80000033L)
#endif

    if (faultCode == STATUS_RETURN_ADDRESS_HIJACK_ATTEMPT)
    {
        Thread * pThread = ThreadStore::GetCurrentThreadIfAvailable();
        if (pThread == NULL || !pThread->IsCurrentThreadInCooperativeMode())
        {
            // If we are not in coop mode, this cannot be our hijack.
            // Perhaps some other runtime is responsible.
            return EXCEPTION_CONTINUE_SEARCH;
        }

        // Sanity check.
        if (!pThread->IsHijacked())
        {
            _ASSERTE(!"The thread should be hijacked by us.");
            RhFailFast();
        }

        PCONTEXT interruptedContext = pExPtrs->ContextRecord;
        bool areShadowStacksEnabled = PalAreShadowStacksEnabled();
        if (areShadowStacksEnabled)
        {
            // The OS should have fixed the SP value to the same one we've stashed for the hijacked thread.
            _ASSERTE(*(size_t *)interruptedContext->GetSp() == (uintptr_t)pThread->GetHijackedReturnAddress());

            // When CET is enabled, the interruption happens on the ret instruction in the callee.
            // We need to "pop" rsp to the caller, as if the ret has consumed it.
            interruptedContext->SetSp(interruptedContext->GetSp() + 8);
        }

        // Change the IP to be at the original return site, as if we have returned to the caller.
        // That IP is an interruptible safe point, so we can suspend right there.
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. @jakobbotsch this style of hijacking can hit some asserts if the caller is missing safepoint for the call site. (i.e. MinOpt). The code will still work, but asserts will need to be adjusted. Re: #103950 There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Does this work the same way in regular CoreCLR? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. It could. I think all hijacks could work this way and it will be simpler. I.E no need for special frames, interrogating the containing method info for kinds of GC returns, capturing that somewhere,… There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. That is assuming we bumped r2r version There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more.
It is important that it does. Our native AOT testing strategy depends on minimizing differences between regular CoreCLR and native AOT. There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Looks like r2r version was incremented. 👍 There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Since CET compat will be the default, this code path will be exercised on everything win10 and newer, so this will be reasonably tested. I can move CoreCLR on the same plan, so we have more confidence. I was planning to do that anyways. There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Here a PR that uses this pattern in CoreCLR CET hijacks and in GC stress. |
        interruptedContext->SetIp((uintptr_t)pThread->GetHijackedReturnAddress());

        pThread->InlineSuspend(interruptedContext);

        if (areShadowStacksEnabled)
        {
            // Undo the "pop", so that the ret can now succeed.
            interruptedContext->SetSp(interruptedContext->GetSp() - 8);
            interruptedContext->SetIp(origIp);
        }

        ASSERT(!pThread->IsHijacked());
        return EXCEPTION_CONTINUE_EXECUTION;
    }
#endif // TARGET_AMD64 (support for STATUS_RETURN_ADDRESS_HIJACK_ATTEMPT)

    uintptr_t faultingIP = pExPtrs->ContextRecord->GetIp();

    ICodeManager * pCodeManager = GetRuntimeInstance()->GetCodeManagerForAddress((PTR_VOID)faultingIP);
@@ -277,28 +277,8 @@ void StackFrameIterator::InternalInit(Thread * pThreadToWalk, PInvokeTransitionF

#endif // defined(USE_PORTABLE_HELPERS)

    // This function guarantees that the final initialized context will refer to a managed
    // frame. In the rare case where the PC does not refer to managed code (and refers to an
    // assembly thunk instead), unwind through the thunk sequence to find the nearest managed
    // frame.
    // NOTE: When thunks are present, the thunk sequence may report a conservative GC reporting
    // lower bound that must be applied when processing the managed frame.

    ReturnAddressCategory category = CategorizeUnadjustedReturnAddress(m_ControlPC);

    if (category == InManagedCode)
    {
        ASSERT(m_pInstance->IsManaged(m_ControlPC));
    }
    else if (IsNonEHThunk(category))
    {
        UnwindNonEHThunkSequence();
        ASSERT(m_pInstance->IsManaged(m_ControlPC));
    }
    else
    {
        FAILFAST_OR_DAC_FAIL_UNCONDITIONALLY("PInvokeTransitionFrame PC points to an unexpected assembly thunk kind.");
    }
    // adjust for thunks, if needed
    EnsureInitializedToManagedFrame();

    STRESS_LOG1(LF_STACKWALK, LL_INFO10000, "   %p\n", m_ControlPC);
}
@@ -484,7 +464,13 @@ void StackFrameIterator::InternalInit(Thread * pThreadToWalk, PTR_PAL_LIMITED_CO
}

// Prepare to start a stack walk from the context listed in the supplied NATIVE_CONTEXT.
// The supplied context can describe a location in managed code.
// NOTE: When a return address hijack is executed, the PC in the NATIVE_CONTEXT
// matches the hijacked return address. This PC is not guaranteed to be in managed code
// since the hijacked return address may refer to a location where an assembly thunk called
// into managed code.
// NOTE: When the PC is in an assembly thunk, this function will unwind to the next managed
// frame and may publish a conservative stack range (if and only if any of the unwound
// thunks report a conservative range).
void StackFrameIterator::InternalInit(Thread * pThreadToWalk, NATIVE_CONTEXT* pCtx, uint32_t dwFlags)
{
    ASSERT((dwFlags & MethodStateCalculated) == 0);

@@ -498,8 +484,9 @@ void StackFrameIterator::InternalInit(Thread * pThreadToWalk, NATIVE_CONTEXT* pC
    // properly walk it in parallel.
    ResetNextExInfoForSP(pCtx->GetSp());

    // This codepath is used by the hijack stackwalk. The IP must be in managed code.
    ASSERT(m_pInstance->IsManaged(dac_cast<PTR_VOID>(pCtx->GetIp())));
    // This codepath is used by the hijack stackwalk. The IP must be in managed code
    // or in a conservatively reported assembly thunk.
    ASSERT(IsValidReturnAddress((void*)pCtx->GetIp()));

    //
    // control state
@@ -616,6 +603,35 @@ void StackFrameIterator::InternalInit(Thread * pThreadToWalk, NATIVE_CONTEXT* pC
#endif // TARGET_ARM

#undef PTR_TO_REG

    // adjust for thunks, if needed
    EnsureInitializedToManagedFrame();
}

void StackFrameIterator::EnsureInitializedToManagedFrame()
{
    // This function guarantees that the final initialized context will refer to a managed
    // frame. In the rare case where the PC does not refer to managed code (and refers to an
    // assembly thunk instead), unwind through the thunk sequence to find the nearest managed
    // frame.
    // NOTE: When thunks are present, the thunk sequence may report a conservative GC reporting
    // lower bound that must be applied when processing the managed frame.

    ReturnAddressCategory category = CategorizeUnadjustedReturnAddress(m_ControlPC);
Review comments on this line:

- I am not sure how this is related to the CET stuff.
- InternalInit is supposed to leave the iterator in managed code. Until this change the InternalInit that takes … Now, when we receive a hijack exception, we pop to the caller and perform suspension in the caller's context. Perhaps I should unify this code into a helper to not duplicate it in both InternalInit methods.
- I've unified the adjustment for asm thunks.
    if (category == InManagedCode)
    {
        ASSERT(m_pInstance->IsManaged(m_ControlPC));
    }
    else if (IsNonEHThunk(category))
    {
        UnwindNonEHThunkSequence();
        ASSERT(m_pInstance->IsManaged(m_ControlPC));
    }
    else
    {
        FAILFAST_OR_DAC_FAIL_UNCONDITIONALLY("Unadjusted initial PC points to an unexpected assembly thunk kind.");
    }
}

PTR_VOID StackFrameIterator::HandleExCollide(PTR_ExInfo pExInfo)
Review comments:

- Is this future proofing in case CETCOMPAT is enabled by default by the linker? (I do not think it is needed, but it does not hurt.)
- Just wanted to be explicit about this. But, yes, it will also help if something defaults everything to "/CETCOMPAT".