Make slice iterators carry only a single provenance #122971
base: master
Conversation
Today they carry two, which makes certain optimizations illegal at the LLVM-IR level. In particular, it makes it matter whether an operation is done from the start pointer or the end pointer, since as far as LLVM knows those might have different provenance.

For example, this code

```rust
pub unsafe fn first_via_nth_back(mut it: std::slice::Iter<'_, i8>) -> &i8 {
    // CHECK: ret ptr %0
    let len = it.len();
    it.nth_back(len - 1).unwrap_unchecked()
}
```

is <https://rust.godbolt.org/z/8e61vqzhP>

```llvm
%2 = ptrtoint ptr %1 to i64
%3 = ptrtoint ptr %0 to i64
%.neg = add i64 %3, 1
%_6.neg = sub i64 %.neg, %2
%_15.i6.i = getelementptr inbounds i8, ptr %1, i64 %_6.neg
%_15.i.i = getelementptr inbounds i8, ptr %_15.i6.i, i64 -1
ret ptr %_15.i.i
```

whereas after this PR it's just

```llvm
ret ptr %0
```

(some `assume`s removed in both cases)
@bors try
@rust-timer queue
…=<try> Make slice iterators carry only a single provenance
☀️ Try build successful - checks-actions
Finished benchmarking commit (fbc6c4f): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count: This is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage): This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size: This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Bootstrap: 669.644s -> 671.582s (0.29%)
Looks like this might need to wait for llvm/llvm-project#86417
```rust
// SAFETY: By type invariant `end >= ptr`, and thus the subtraction
// cannot overflow, and the iter represents a single allocated
// object so the `add` will also be in-range.
let $end = unsafe {
    let ptr_addr = addr_usize($this.ptr);
    // Need to load as `NonZero` to get `!range` metadata
    let end_addr: NonZero<usize> = *ptr::addr_of!($this.end_addr_or_len).cast();
    // Not using `with_addr` because we have ordering information that
    // we can take advantage of here that `with_addr` cannot.
    let byte_diff = intrinsics::unchecked_sub(end_addr.get(), ptr_addr);
    $this.ptr.byte_add(byte_diff)
};
```
That's unfortunate. I think this will make #120682 impossible, since there my approach for forward iteration is to only operate off `end` and never read `ptr`, only write to it. That makes the writes easy to hoist. If re-acquiring the provenance requires reading from `ptr` in each `next`, that no longer works.
It'd be great if instead we could somehow tell LLVM that two pointers have the same provenance.
Could we keep it a pointer so it can be used as such when it's useful, and do ptr-addr-arithmetic-ptr dances otherwise? Or would that lose the intended benefits?
I think the problem with "when it's useful" is that it has to mean potentially-two-provenances, so `start + len` and `end` are not equivalent, and thus anything that uses both isn't fixed.
My goal with that PR is to make unchecked indexing less hazardous and also applicable to `vec::IntoIter` with `T: Drop`, which would mean `zip(vec::IntoIter, slice::Iter)` would now get the unchecked indexing too in those cases, which it didn't previously. So there would also be perf benefits from that approach.
Also, have you checked how this affects looping over `next_back` or `rev`? It seems that it would have to recalculate the length / pointer provenance on each iteration. In the end it might optimize away, but the IR probably becomes more gnarly.
> I think the problem with "when it's useful" is that it has to mean potentially-two-provenances

For those methods. But the other methods could treat the field as a `usize`, no?
r? the8472
```rust
// CHECK-LABEL: @first_via_nth_back
#[no_mangle]
pub unsafe fn first_via_nth_back(mut it: std::slice::Iter<'_, i8>) -> &i8 {
```
How realistic is this test, though? Often the iter construction gets inlined too, which means LLVM sees they come from the same pointer.
That particular test is here because it actually works already, not because it's particularly likely. The better tests are those mentioned in llvm/llvm-project#86417.
Basically, when it matters most is when you're storing an iterator in your own type, rather than just doing `myslice.iter().bunch().of().stuff()`.
For example, I was talking with @saethlin a while ago about rustc's `MemDecoder`. The reason it currently uses unsafe code is because we couldn't find a way to do things optimally with normal slices or slice iterators.
The exemplar of the problem is basically this function:
```rust
fn read_u32(d: &mut MemDecoder) -> Option<u32>;
```
That's trivially written when `MemDecoder` stores a slice (https://rust.godbolt.org/z/vW95ebW7j), but storing a slice works poorly for exactly the same reason the slice iterators do: it has to update both the pointer and the length when you step forward.
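To illustrate the "two writes" problem (this is a minimal sketch, not rustc's actual `MemDecoder`): when the decoder stores a slice, every successful read must write back both the data pointer and the length.

```rust
// Hypothetical decoder storing a slice; not the real MemDecoder.
struct SliceDecoder<'a> {
    data: &'a [u8],
}

impl<'a> SliceDecoder<'a> {
    fn read_u32(&mut self) -> Option<u32> {
        // `split_first_chunk` yields the first 4 bytes and the rest, or None.
        let (bytes, rest) = self.data.split_first_chunk::<4>()?;
        // This single assignment is two stores at the ABI level:
        // the new pointer AND the new (shorter) length.
        self.data = rest;
        Some(u32::from_le_bytes(*bytes))
    }
}
```

An address-based representation would only need to bump one field here, which is exactly the kind of difference the godbolt links above are showing.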
So ok, what if you change it to store an iterator, since those have already fixed this problem?
Well, if you do the obvious change (to https://rust.godbolt.org/z/aWcbGvnzK), it still writes twice: it ends up writing back both the start and end pointers. Because it goes through the slice helper -- after all, there's no equivalent iterator helper -- it actually changes the provenance of the end pointer, as far as the optimizer knows (proof that LLVM is definitely not allowed to optimize it: https://alive2.llvm.org/ce/z/jifMAC).
But by giving the iterator only a single provenance, LLVM becomes allowed to optimize out things like that (Alive proof: https://alive2.llvm.org/ce/z/R327wi).
And unfortunately, nikic says (https://rust-lang.zulipchat.com/#narrow/stream/187780-t-compiler.2Fwg-llvm/topic/Communicating.20same-provenance.20to.20LLVM/near/425757728) that there's no good way to tell LLVM they have the same provenance.
So if we want iter <-> slice to be actually zero-cost, we either need to do something like this so there's only one provenance, or find a way to tell LLVM about it.
Note that `p + (q - p)` does actually optimize out in the assembly-generation part of LLVM (https://llvm.godbolt.org/z/e3Yrd7WzK), just not in the middle-end.
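The `p + (q - p)` pattern can be sketched in Rust like this (the `rebuild_from` name is illustrative, not from the PR): compute `q`'s address as an offset from `p`, then rebuild a pointer through `p` so the result unambiguously carries `p`'s provenance.

```rust
// Rebuild `q` from `p`: same address as `q`, but `p`'s provenance.
fn rebuild_from<T>(p: *const T, q: *const T) -> *const T {
    // Address-only subtraction; `q`'s provenance is deliberately dropped.
    let byte_diff = (q as usize).wrapping_sub(p as usize);
    // `wrapping_byte_add` keeps `p`'s provenance while moving to `q`'s address.
    p.wrapping_byte_add(byte_diff)
}
```

As the comment above notes, LLVM's backend folds this to just `q`, but the middle-end cannot, precisely because the two sides may differ in provenance.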
Hrm, yes, I can see how this is bad for single-access functions like that. Otoh I'm concerned about how things optimize in loops where even if the provenance of the pointers is potentially-different it only gets accessed once (to calculate the length) and then it operates off only one pointer.
Sounds like we need to ask LLVM for the strict form of `ptrsub` that's UB (or I guess probably poison) if the pointers have different provenance, so that we can use it when computing lengths and thus let LLVM optimize out more of these things.
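For reference, Rust already exposes essentially that contract at the language level: `pointer::offset_from` is UB unless both pointers are derived from the same allocated object. The missing piece is a corresponding LLVM-level instruction. A sketch of a "strict" length computation using what Rust has today (the `strict_len` name is illustrative):

```rust
/// Length between two same-allocation pointers.
///
/// SAFETY (caller): `start` and `end` must point into (or one past the end
/// of) the same allocated object, with `end >= start` -- this is exactly
/// `offset_from`'s contract.
unsafe fn strict_len<T>(start: *const T, end: *const T) -> usize {
    unsafe { end.offset_from(start) as usize }
}
```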
```rust
/// Same as `p.addr().get()`, but faster to compile by avoiding a bunch of
/// intermediate steps and unneeded UB checks, which also inlines better.
#[inline]
fn addr_usize<T>(p: NonNull<T>) -> usize {
```
Is there a case where the existence of this method can be made unnecessary? It feels weird that the alternative would perform so poorly, or involve UB checks at all.
The runtime performance of `.addr().get()` is completely fine. The problem is just in the sheer volume of MIR that it ends up producing -- just transmuting directly is literally an order of magnitude less stuff: https://rust.godbolt.org/z/cnzTW51oh

And with how often these are used, that makes a difference.
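For context, a hedged sketch of what such a transmute-based shortcut could look like (illustrative, not necessarily the PR's exact code):

```rust
use std::ptr::NonNull;

/// Read the address out of a `NonNull<T>` with a direct transmute, skipping
/// the chain of calls behind `.addr().get()`.
fn addr_usize<T>(p: NonNull<T>) -> usize {
    // SAFETY: `NonNull<T>` is a `repr(transparent)` wrapper over a thin
    // pointer for `T: Sized`, which is the same size as `usize` on the
    // targets rustc supports; the transmute discards provenance, which is
    // fine since only the address is wanted.
    unsafe { std::mem::transmute(p) }
}
```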
Oh, I can remove a bit of that, at least: #123139
Hmm, I'm mostly just wondering if this is sufficient motivation to add this method on `NonNull` directly (it can be `pub(crate)` for now), whether this hack is in fact specific to this module, or whether this function should be marked as FIXME and addressed later.