
Make slice iterators carry only a single provenance #122971

Draft · wants to merge 1 commit into base: master
Conversation

scottmcm (Member)

Today they carry two potentially-different provenances, which makes certain optimizations illegal at the LLVM-IR level.

In particular, it makes it matter whether an operation is done from the start pointer or the end pointer, since as far as LLVM knows those might have different provenance.

For example, this code
```rust
pub unsafe fn first_via_nth_back(mut it: std::slice::Iter<'_, i8>) -> &i8 {
    // CHECK: ret ptr %0
    let len = it.len();
    it.nth_back(len - 1).unwrap_unchecked()
}
```
is <https://rust.godbolt.org/z/8e61vqzhP>
```llvm
  %2 = ptrtoint ptr %1 to i64
  %3 = ptrtoint ptr %0 to i64
  %.neg = add i64 %3, 1
  %_6.neg = sub i64 %.neg, %2
  %_15.i6.i = getelementptr inbounds i8, ptr %1, i64 %_6.neg
  %_15.i.i = getelementptr inbounds i8, ptr %_15.i6.i, i64 -1
  ret ptr %_15.i.i
```
whereas after this PR it's just
```llvm
  ret ptr %0
```
(some `assume`s removed in both cases)

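The shape of the fix can be sketched in isolation. This is a hypothetical, simplified model, not the PR's actual code: the real field is named `end_addr_or_len` (it stores a length instead when `T` is zero-sized), and the real code uses `unchecked_sub` plus a `NonZero` load. This sketch assumes a sized, non-zero-sized `T`:

```rust
use std::marker::PhantomData;
use std::mem::size_of;
use std::ptr::NonNull;

// Store one pointer plus a bare end *address*, so every derived pointer,
// including the end pointer, is computed from `ptr` and therefore
// provably shares its provenance.
struct Iter<'a, T> {
    ptr: NonNull<T>,
    end_addr: usize,
    _marker: PhantomData<&'a T>,
}

impl<'a, T> Iter<'a, T> {
    fn new(s: &'a [T]) -> Self {
        let ptr = NonNull::from(s).cast::<T>();
        let end_addr = ptr.addr().get() + s.len() * size_of::<T>();
        Self { ptr, end_addr, _marker: PhantomData }
    }

    fn len(&self) -> usize {
        // Assumes non-ZST `T`; the real iterator stores a length for ZSTs.
        (self.end_addr - self.ptr.addr().get()) / size_of::<T>()
    }

    // Re-derive the end pointer from `ptr`: same address the old `end`
    // field had, but only one provenance is involved.
    fn end(&self) -> NonNull<T> {
        let byte_diff = self.end_addr - self.ptr.addr().get();
        // SAFETY: `end_addr` is at least `ptr`'s address by construction,
        // and both lie within the same allocated object (the slice).
        unsafe { self.ptr.byte_add(byte_diff) }
    }
}
```

Because `end()` is computed from `ptr`, LLVM no longer has to treat operations done from the start and the end pointer as potentially having different provenance.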
@rustbot (Collaborator)

rustbot commented Mar 24, 2024

r? @Amanieu

rustbot has assigned @Amanieu.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-libs Relevant to the library team, which will review and decide on the PR/issue. labels Mar 24, 2024
@scottmcm (Member Author)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Mar 24, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Mar 24, 2024
Make slice iterators carry only a single provenance
@bors (Contributor)

bors commented Mar 24, 2024

⌛ Trying commit 9075d4c with merge fbc6c4f...

@bors (Contributor)

bors commented Mar 24, 2024

☀️ Try build successful - checks-actions
Build commit: fbc6c4f (fbc6c4f6f8a11c0aaf2f4e2a1dbad782ea20a025)


@rust-timer (Collaborator)

Finished benchmarking commit (fbc6c4f): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.2%, 1.7%] | 27 |
| Regressions ❌ (secondary) | 0.6% | [0.2%, 2.1%] | 6 |
| Improvements ✅ (primary) | -1.2% | [-1.7%, -0.7%] | 2 |
| Improvements ✅ (secondary) | -1.1% | [-2.0%, -0.3%] | 2 |
| All ❌✅ (primary) | 0.6% | [-1.7%, 1.7%] | 29 |

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.4% | [1.0%, 8.9%] | 7 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -4.5% | [-5.2%, -3.9%] | 4 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.5% | [-5.2%, 8.9%] | 11 |

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.4% | [0.8%, 2.2%] | 7 |
| Regressions ❌ (secondary) | 2.2% | [2.2%, 2.2%] | 1 |
| Improvements ✅ (primary) | -1.3% | [-1.3%, -1.3%] | 1 |
| Improvements ✅ (secondary) | -2.0% | [-2.0%, -2.0%] | 1 |
| All ❌✅ (primary) | 1.1% | [-1.3%, 2.2%] | 8 |

Binary size

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.1% | [0.1%, 2.8%] | 71 |
| Regressions ❌ (secondary) | 2.2% | [0.2%, 14.9%] | 12 |
| Improvements ✅ (primary) | -0.1% | [-0.2%, -0.0%] | 7 |
| Improvements ✅ (secondary) | -2.2% | [-2.2%, -2.2%] | 1 |
| All ❌✅ (primary) | 1.0% | [-0.2%, 2.8%] | 78 |

Bootstrap: 669.644s -> 671.582s (0.29%)
Artifact size: 314.97 MiB -> 312.92 MiB (-0.65%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Mar 24, 2024
@scottmcm (Member Author)

Looks like this might need to wait for llvm/llvm-project#86417

Comment on lines +24 to +35
```rust
// SAFETY: By type invariant `end >= ptr`, and thus the subtraction
// cannot overflow, and the iter represents a single allocated
// object so the `add` will also be in-range.
let $end = unsafe {
    let ptr_addr = addr_usize($this.ptr);
    // Need to load as `NonZero` to get `!range` metadata
    let end_addr: NonZero<usize> = *ptr::addr_of!($this.end_addr_or_len).cast();
    // Not using `with_addr` because we have ordering information that
    // we can take advantage of here that `with_addr` cannot.
    let byte_diff = intrinsics::unchecked_sub(end_addr.get(), ptr_addr);
    $this.ptr.byte_add(byte_diff)
};
```
Member

That's unfortunate. I think this will make #120682 impossible, since there my approach for forward iteration is to only operate off `end` and never read `ptr`, only write to it. That makes the writes easy to hoist. If re-acquiring the provenance requires reading from `ptr` in each `next`, that no longer works.

It'd be great if instead we could somehow tell LLVM that two pointers have the same provenance.

Member

Could we keep it a pointer so it can be used as such when that's useful, and do ptr-addr-arithmetic-ptr dances otherwise? Or would that lose the intended benefits?

Member Author

I think the problem with "when it's useful" is that it has to mean potentially-two-provenances, so `start + len` and `end` are not equivalent, and thus anything that uses both isn't fixed.

Member

My goal with that PR is to make unchecked indexing less hazardous and also applicable to `vec::IntoIter` with `T: Drop`, which would mean `zip(vec::IntoIter, slice::Iter)` would now get the unchecked indexing too in cases where it previously didn't. So there would also be perf benefits from that approach.

Also, have you checked how this affects looping over `next_back` or `rev`? It seems that it would have to recalculate the length / pointer provenance on each iteration. In the end it might optimize away, but the IR probably becomes more gnarly.

> I think the problem with "when it's useful" is that it has to mean potentially-two-provenances

For those methods. But the other methods could treat the field as a `usize`, no?

@Amanieu (Member)

Amanieu commented Mar 24, 2024

r? the8472

@rustbot rustbot assigned the8472 and unassigned Amanieu Mar 24, 2024

```rust
// CHECK-LABEL: @first_via_nth_back
#[no_mangle]
pub unsafe fn first_via_nth_back(mut it: std::slice::Iter<'_, i8>) -> &i8 {
```
Member

How realistic is this test, though? Often the iterator construction gets inlined too, which means LLVM sees they come from the same pointer:

https://rust.godbolt.org/z/Pj44sTbYf

Member Author

That particular test is here because it actually works already, not because it's particularly likely. The better tests are those mentioned in llvm/llvm-project#86417

Basically, it matters most when you're storing an iterator in your own type, rather than just doing `myslice.iter().bunch().of().stuff()`.

For example, I was talking with @saethlin a while ago about rustc's `MemDecoder`. The reason it currently uses unsafe code is that we couldn't find a way to do things optimally with normal slices or slice iterators.

The exemplar of the problem is basically this function:

```rust
fn read_u32(d: &mut MemDecoder) -> Option<u32>;
```

That's trivially written with it storing a slice (https://rust.godbolt.org/z/vW95ebW7j), but it works poorly to store a slice in it, for exactly the same reason the slice iterators don't store one: a slice has to update both the pointer and the length when you step forward.
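The double write is easy to see in a minimal sketch. This is a hypothetical stand-in, not rustc's actual `MemDecoder`; the name and shape are illustrative:

```rust
// Hypothetical, simplified decoder that stores a plain slice.
struct SliceDecoder<'a> {
    data: &'a [u8],
}

fn read_u32<'a>(d: &mut SliceDecoder<'a>) -> Option<u32> {
    // Copy the `&'a [u8]` out so `rest` keeps the full lifetime.
    let data = d.data;
    let (bytes, rest) = data.split_first_chunk::<4>()?;
    // A `&[u8]` is (pointer, length), so this writes back *two* fields
    // of the struct, which is exactly the extra store being discussed.
    d.data = rest;
    Some(u32::from_le_bytes(*bytes))
}
```

Every successful read shrinks the slice from the front, and shrinking a slice means storing both a new pointer and a new length.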

So ok, what if you change it to store an iterator, since those have already fixed this problem?

Well, if you do the obvious change (to https://rust.godbolt.org/z/aWcbGvnzK), then it still writes twice: it ends up writing back both the start and end pointers, because by using the slice helper -- after all, there's no equivalent iterator helper -- it actually changes the provenance of the end pointer, as far as the optimizer knows (proof that LLVM is definitely not allowed to optimize it: https://alive2.llvm.org/ce/z/jifMAC).

But by giving the iterator only a single provenance, then LLVM becomes allowed to optimize out things like that (Alive proof https://alive2.llvm.org/ce/z/R327wi).

And unfortunately nikic says https://rust-lang.zulipchat.com/#narrow/stream/187780-t-compiler.2Fwg-llvm/topic/Communicating.20same-provenance.20to.20LLVM/near/425757728 there's no good way to tell LLVM they have the same provenance.

So if we want iter <-> slice to be actually zero-cost, we either need to do something like this so there's only one provenance, or find a way to tell LLVM about it.


Note that p + (q - p) does actually optimize out in the assembly generation part of LLVM (https://llvm.godbolt.org/z/e3Yrd7WzK), just not in the middle-end.
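At the Rust level, that `p + (q - p)` re-derivation dance looks roughly like this (a sketch; the function name is made up and the safety conditions are the stated assumptions):

```rust
// Give `q`'s address `p`'s provenance: the `p + (q - p)` pattern. The
// result compares equal to `q`, but as far as the optimizer knows it is
// derived only from `p`.
//
// SAFETY (assumed): `q >= p`, and both point into the same allocation.
unsafe fn rederive<T>(p: *const T, q: *const T) -> *const T {
    unsafe { p.byte_add(q.addr() - p.addr()) }
}
```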

Member

Hrm, yes, I can see how this is bad for single-access functions like that. On the other hand, I'm concerned about how things optimize in loops, where even if the provenance of the pointers is potentially-different, it only gets accessed once (to calculate the length) and then the loop operates off only one pointer.

Member Author

Sounds like we need to ask LLVM for the strict form of ptrsub that's UB (or I guess probably poison) if the pointers have different provenance, so that we can use it when computing lengths and thus let LLVM optimize out more of these things.

```rust
/// Same as `p.addr().get()`, but faster to compile by avoiding a bunch of
/// intermediate steps and unneeded UB checks, which also inlines better.
#[inline]
fn addr_usize<T>(p: NonNull<T>) -> usize {
```
Contributor

Is there a case where the existence of this method can be made unnecessary? It feels weird that the alternative would perform so poorly, or involve UB checks at all.

Member Author

The runtime performance of `.addr().get()` is completely fine.

The problem is just in the sheer volume of MIR that it ends up producing -- just transmuting directly is literally an order of magnitude less stuff: https://rust.godbolt.org/z/cnzTW51oh

And with how often these are used, that makes a difference.
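For concreteness, the two spellings being compared, sketched side by side (same address at runtime; the difference is only in how much MIR the first one expands to before optimization):

```rust
use std::ptr::NonNull;

// The normal API route: several inlined layers of MIR pre-optimization.
fn addr_via_api<T>(p: NonNull<T>) -> usize {
    p.addr().get()
}

// The transmute shortcut, like the `addr_usize` helper above: a single
// cast. (Assumes `T: Sized`, so `NonNull<T>` is exactly pointer-sized.)
fn addr_via_transmute<T>(p: NonNull<T>) -> usize {
    // SAFETY: `NonNull<T>` is a repr(transparent) non-null pointer,
    // the same size as `usize`.
    unsafe { std::mem::transmute(p) }
}
```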

Member Author

Oh, I can remove a bit of that, at least: #123139

Contributor

Hmm, I'm mostly just wondering whether this is sufficient motivation to add this method on `NonNull` directly (it can be `pub(crate)` for now), whether this hack is in fact specific to this module, or whether this function should be marked with a FIXME and addressed later.

@scottmcm scottmcm marked this pull request as draft May 4, 2024 06:50
@Dylan-DPC Dylan-DPC added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Aug 13, 2024