Test Failed: Assert failure: 'sz == _idCodeSize' #12840
Comments
Looks like Windows/x64, JitStress=2, JitStressRegs=1/4/8
@dotnet/jit-contrib
Also seen in corefx tests, System.Data.Common.Tests, Windows/x64, JitStressRegs=4.
The problem is in The good solution would be to clean
Here's the general issue for refactoring of the emitter, and the specific comment relating to instruction size: https://github.com/dotnet/coreclr/issues/23006#issuecomment-471114107
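For context, the assert compares two numbers computed in different phases: the size the emitter estimated when it built the instruction descriptor, and the number of bytes actually written when the instruction was encoded. A minimal sketch of that invariant, with hypothetical names (`insDesc`, `makeNop`, `encodeNop` are illustrations, not the real emitter API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct insDesc
{
    uint8_t _idCodeSize; // size estimated when the instruction descriptor is built
};

// Phase 1: estimate the size up front; code memory is allocated from the sum
// of these estimates, so they must never be too small.
insDesc makeNop()
{
    return insDesc{ /* _idCodeSize */ 1 };
}

// Phase 2: encode the real bytes and verify they match the earlier estimate.
uint8_t* encodeNop(const insDesc& id, uint8_t* dst)
{
    uint8_t* start = dst;
    *dst++ = 0x90;                // x86/x64 NOP
    size_t sz = (size_t)(dst - start);
    assert(sz == id._idCodeSize); // the invariant this issue is about
    return dst;
}

int main()
{
    uint8_t buf[16];
    encodeNop(makeNop(), buf);
    return 0;
}
```

If the estimate drifts out of sync with the actual encoding, the assert above is what fires.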
This is a temporary workaround to push the milestone for this issue to 3.next.
I have published a PR with the first solution; meanwhile, I am working on a better one. It allows us to delete all the functions that do estimates now and have very unclear code that we have to support every time we change instruction encodings. There are a few problems so far:
2 and 3 are tricky because the code that does them is spread out across different places, but I am trying to extract it (for now I have a template argument that I use to guard such places). I think 2 weeks should be enough to get it into a decent state.
That's interesting and scary at the same time - any idea what perf impact such an approach has?
In terms of throughput, the first iteration would be a bit slower. The second iteration could keep the generated bytes in JIT arena-allocated memory, then memcpy them to VM memory and apply fixups; that could make TP better than it is now, but it is a far goal. I do not expect any significant steady-state perf impact: we will allocate less memory for each method, so crossgen images could probably be smaller and we could have more short jumps, but it is hard to guess right now.
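To make the "second iteration" concrete, here is a hedged sketch (not CoreCLR code; `Fixup`, `encode`, and `place` are made-up names) of encoding into temporary arena-style storage, then copying into the final buffer and patching relative offsets once the code address is known:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A pending patch: a 32-bit PC-relative slot that cannot be filled until the
// final code address is known.
struct Fixup
{
    size_t  offset; // position of the rel32 slot inside the buffer
    int64_t target; // absolute address the slot should end up pointing at
};

// Phase 1: encode into temporary (arena-style) storage, recording fixups
// instead of final addresses.
void encode(std::vector<uint8_t>& tmp, std::vector<Fixup>& fixups)
{
    tmp.push_back(0x90);                      // some instruction bytes...
    fixups.push_back({ tmp.size(), 0x1000 }); // a branch target not yet placeable
    tmp.insert(tmp.end(), { 0, 0, 0, 0 });    // placeholder rel32
}

// Phase 2: memcpy into the buffer the VM handed back and apply the fixups at
// the final addresses.
void place(const std::vector<uint8_t>& tmp, const std::vector<Fixup>& fixups,
           uint8_t* codePtr)
{
    std::memcpy(codePtr, tmp.data(), tmp.size());
    for (const Fixup& f : fixups)
    {
        // rel32 is relative to the end of the 4-byte slot.
        int32_t rel = (int32_t)(f.target - (int64_t)(codePtr + f.offset + 4));
        std::memcpy(codePtr + f.offset, &rel, sizeof(rel));
    }
}
```

The main design point is that nothing written into the temporary buffer depends on the final code address; everything address-dependent is deferred to the fixup pass.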
Right. And if the mechanism that skips GC updates, relocation & instruction byte emission is efficient (e.g., some template wizardry), the overhead could be very low, perhaps even zero. This sounds very tempting.
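If I read the "template wizardry" idea correctly, a size-only instantiation of the same encoding routine could look like this (purely illustrative, not the real emitter):

```cpp
#include <cstddef>
#include <cstdint>

enum class EmitMode { SizeOnly, WriteBytes };

// One encoding routine, two instantiations: the SizeOnly mode skips the memory
// writes (and, in a real emitter, would skip GC info and relocation recording),
// so the size pass reuses the exact logic of the byte-emitting pass.
template <EmitMode Mode>
size_t emitAddRegImm32(uint8_t* dst, uint8_t reg, int32_t imm)
{
    size_t sz = 0;
    auto put = [&](uint8_t b) {
        if constexpr (Mode == EmitMode::WriteBytes)
        {
            dst[sz] = b; // only the real pass touches memory
        }
        sz++;
    };

    put(0x81);                          // add r/m32, imm32
    put((uint8_t)(0xC0 | (reg & 7)));   // ModRM: register-direct, /0
    for (int i = 0; i < 4; i++)
    {
        put((uint8_t)(imm >> (8 * i))); // 4-byte little-endian immediate
    }
    return sz; // identical in both modes by construction
}

// Usage (hypothetical):
//   size_t sz = emitAddRegImm32<EmitMode::SizeOnly>(nullptr, 0, 42);  // sz == 6
//   emitAddRegImm32<EmitMode::WriteBytes>(buffer, 0, 42);
```

Because both modes run the same byte-producing logic, the size cannot drift from the encoding, which is exactly the mismatch this assert catches.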
Yeah, I was wondering why we don't simply encode instructions directly; there should be another way to deal with short/long jumps than what the emitter does now.
Because we do not have memory to put them in and we do not know hot/cold block sizes for jumps. I think these estimates became a real problem when we started adding many new instructions (mostly for HW intrinsics), and because the estimating code was allowed to over-estimate, it did not complain when the new predictions were very inaccurate. Note: on arm32/arm64 we do not want to use a temporary JIT memory block to put instructions in and then copy them to VM memory; there we can estimate more simply and accurately than on XARCH, avoiding the alloc/memcpy.
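On the jump part specifically: on x86/x64 a jump whose target lies within a signed-byte displacement can use the 2-byte short form (EB rel8), while anything farther needs the 5-byte near form (E9 rel32), and the distance is only known once block placement and hot/cold layout are decided. A small illustrative helper:

```cpp
#include <cstddef>
#include <cstdint>

// Size of an unconditional jmp given the byte distance from the start of the
// jump instruction to its target (illustrative only, not emitter code).
size_t jmpEncodingSize(int64_t distanceFromInsStart)
{
    // The short form is 2 bytes (EB rel8); rel8 is measured from the end of
    // that 2-byte instruction.
    int64_t rel8 = distanceFromInsStart - 2;
    if (rel8 >= -128 && rel8 <= 127)
    {
        return 2; // EB cb (short jump)
    }
    return 5;     // E9 cd (near jump, 32-bit displacement)
}
```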
* WorkAround for #25050. A temporary workaround to push the milestone for this issue to 3.next.
* Respond to review.
@sandreenko do you think we should try and fix this for 5.0? How much overestimating is going on?
I don't remember the exact number, but it was very small, like 0.1%.
There is a follow-up conversation at #8748 (comment) related to over-estimation of certain instructions.
* Add assert
* Remove the assert in emitInsSizeCV
* Add a check for includeRexPrefixSize
* Remove the codeSize() capping code added to fix #12840
* Make the immediate only 4 bytes long for non-mov instructions (see the sketch below)
* Delete commented code
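On the "immediate only 4 bytes long for non-mov instructions" item: on x64 only `mov reg64, imm64` carries a full 8-byte immediate; every other instruction's immediate is at most 4 bytes (sign-extended), so sizing them as if they could carry 8 bytes over-estimates. A hypothetical helper along those lines (not the actual change in that PR):

```cpp
#include <cstddef>
#include <cstdint>

// Rough immediate sizing (illustrative, not the emitter's logic): only
// mov reg64, imm64 (REX.W B8+rd) has an 8-byte immediate; other instructions
// carry at most a 4-byte immediate, and many have a sign-extended imm8 form.
size_t immediateSizeBytes(bool isMov64, int64_t imm)
{
    if (isMov64 && (imm < INT32_MIN || imm > INT32_MAX))
    {
        return 8; // the only 8-byte-immediate encoding on x64
    }
    if (imm >= INT8_MIN && imm <= INT8_MAX)
    {
        return 1; // imm8, sign-extended, where the opcode supports it
    }
    return 4;     // otherwise imm32
}
```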
Job:
https://mc.dot.net/#/user/coreclr-outerloop-jitstress2-jitstressregs/ci~2Fdotnet~2Fcoreclr~2Frefs~2Fheads~2Fmaster/test~2Ffunctional~2Fcli~2F/20190609.1/workItem/JIT.Regression.CLR-x86-JIT.V1-M09-M11/analysis/xunit/JIT_Regression._CLR_x86_JIT_V1_M09_5_PDC_b25463_b25463_b25463_~2F_CLR_x86_JIT_V1_M09_5_PDC_b25463_b25463_b25463_cmd
Failed tests:
JIT_Regression.CLR_x86_JIT_V1_M09_5_PDC_b25463_b25463_b25463._CLR_x86_JIT_V1_M09_5_PDC_b25463_b25463_b25463_cmd
Log:
category:implementation
theme:ir
skill-level:intermediate
cost:large