Rollup of 12 pull requests #41098
Conversation
```rust
struct S;

impl S {
    pub hello_method(&self) {
        println!("Hello");
    }
}

fn main() { S.hello_method(); }
```

```rust
error: can't qualify macro invocation with `pub`
 --> file.rs:3:4
  |
3 |     pub hello_method(&self) {
  |     ^^^-            - expected `!` here for a macro invocation
  |     |
  |     did you mean to write `fn` here for a method declaration?
  |
  = help: try adjusting the macro to put `pub` inside the invocation
```
The place_back method was likely put into the block with the `T: Clone` bound by mistake.
LLVM has a bug - PR32488 - where it fails to deduplicate allocas in some circumstances. The function `start_new_block` has allocas totalling 1216 bytes, and when LLVM inlines several copies of that function into the recursive function `expr::into`, that function's stack space usage goes into tens of kiBs, causing stack overflows. Mark `start_new_block` as inline(never) to keep it from being inlined, getting stack usage under control. Fixes rust-lang#40493. Fixes rust-lang#40573.
- Prefer simpler constructs instead of going through &mut I's Iterator implementation.
SIG_ERR is defined as 'pub const SIG_ERR: sighandler_t = !0 as sighandler_t;'
* Store capacity_mask instead of capacity
* Move bucket index into RawBucket
* Bucket index is now always within [0..table_capacity)
* Clone RawTable using RawBucket
* Simplify iterators by moving logic into RawBuckets
* Make retain aware of the number of elements
When the user selects more than one target to generate rustlibs for, rustbuild will only install the host one. This patch fixes that; more info in rust-lang#39235 (comment)
This commit shrinks the size of the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by the observation that most u16 entries share a common upper byte. Specifically:

- SINGLETONS now uses two tables, one for (upper byte, lower count) pairs and another for a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- NORMAL now uses a variable-length format for the counts of "true" codepoints and "false" codepoints (one byte with the MSB unset, or two big-endian bytes with the first MSB set).

The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the older ones.
Fixes other targets rustlibs installation

When the user selects more than one target to generate rustlibs for, rustbuild will only install the host one. This patch fixes that; more info in rust-lang#39235 (comment)
Simplify HashMap Bucket interface

> Simplify HashMap Bucket interface
>
> * Store capacity_mask instead of capacity
> * Move bucket index into RawBucket
> * Valid bucket index is now always within [0..table_capacity)
> * Simplify iterators by moving logic into RawBuckets
> * Clone RawTable using RawBucket
> * Make retain aware of the number of elements

The idea was to put idx in RawBucket instead of the other Bucket types and simplify next() and prev() as much as possible. The rest was a side-effect of that change, except maybe the last 2.

This change makes iteration and other next/prev() heavy operations noticeably faster. Clone is way faster.

```
➜  hashmap2 git:(adapt) ✗ cargo benchcmp pre:: adp:: bench.txt
name                        pre:: ns/iter  adp:: ns/iter  diff ns/iter  diff %
clone_10_000                74,364         39,736         -34,628       -46.57%
grow_100_000                8,343,553      8,233,785      -109,768      -1.32%
grow_10_000                 817,825        723,958        -93,867       -11.48%
grow_big_value_100_000      18,418,979     17,906,186     -512,793      -2.78%
grow_big_value_10_000       1,219,242      1,103,334      -115,908      -9.51%
insert_1000                 74,546         58,343         -16,203       -21.74%
insert_100_000              6,743,770      6,238,017      -505,753      -7.50%
insert_10_000               798,079        719,123        -78,956       -9.89%
insert_1_000_000            275,215,605    266,975,875    -8,239,730    -2.99%
insert_int_bigvalue_10_000  1,517,387      1,419,838      -97,549       -6.43%
insert_str_10_000           316,179        278,896        -37,283       -11.79%
insert_string_10_000        770,927        747,449        -23,478       -3.05%
iter_keys_100_000           386,099        333,104        -52,995       -13.73%
iterate_100_000             387,320        355,707        -31,613       -8.16%
lookup_100_000              206,757        193,063        -13,694       -6.62%
lookup_100_000_unif         219,366        193,180        -26,186       -11.94%
lookup_1_000_000            206,456        205,716        -740          -0.36%
lookup_1_000_000_unif       659,934        629,659        -30,275       -4.59%
lru_sim                     20,194,334     18,442,149     -1,752,185    -8.68%
merge_shuffle               1,168,044      1,063,055      -104,989      -8.99%
```

Note 2: I may have messed up porting the diff, let's see what CI says.
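The capacity-mask change is the usual power-of-two table idiom; a minimal sketch with hypothetical `RawBucket`/`RawTable` types (not the real `std::collections` internals) showing how keeping the index in the bucket plus a mask makes advancing a bucket a single add-and-AND:

```rust
// Hypothetical types for illustration only; the real RawTable/RawBucket carry
// pointers and hashes as well. With a power-of-two capacity, capacity_mask is
// capacity - 1, so wrap-around needs no comparison or modulo.
struct RawBucket {
    idx: usize, // bucket index lives in RawBucket, always in [0, capacity)
}

struct RawTable {
    capacity_mask: usize, // capacity - 1, where capacity is a power of two
}

impl RawTable {
    fn next_bucket(&self, bucket: RawBucket) -> RawBucket {
        RawBucket { idx: (bucket.idx + 1) & self.capacity_mask }
    }
}

fn main() {
    let table = RawTable { capacity_mask: 7 }; // capacity 8
    let last = RawBucket { idx: 7 };
    assert_eq!(table.next_bucket(last).idx, 0); // wraps around without a branch
}
```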
…r, r=alexcrichton

Reduce a table used for `Debug` impl of `str`.

This commit shrinks the size of the aforementioned table from 2,102 bytes to 1,197 bytes. This is achieved by the observation that most `u16` entries share a common upper byte. Specifically:

- `SINGLETONS` now uses two tables, one for (upper byte, lower count) pairs and another for a series of lower bytes. For each upper byte, the given number of lower bytes is read and compared.
- `NORMAL` now uses a variable-length format for the counts of "true" codepoints and "false" codepoints (one byte with the MSB unset, or two big-endian bytes with the first MSB set).

The code size and relative performance remain roughly the same, as this commit tries to optimize for both. The new table and algorithm have been verified to be equivalent to the older ones.

On my x86-64 macOS laptop with `rustc 1.17.0-nightly (0aeb9c1 2017-03-15)`, `-C opt-level=3 -C lto` gives the following:

* The old routine compiles to 2,102 bytes of data and 416 bytes of code.
* The new routine compiles to 1,197 bytes of data and 448 bytes of code.

Counting the number of all printable Unicode scalar values (128,003, if you wonder) by filtering `0..0x110000` with `std::char::from_u32` and `is_printable` took 50±7ms for both. This can be surprising as the new routine *has* to do more calculations; this is partly explained by the fact that a linear search of `SINGLETONS` has been replaced by *two* linear searches for upper and lower bytes, which greatly reduces the iteration count.
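As an illustration of that variable-length count format, here is a minimal decoding sketch with a hypothetical `decode_len` helper (not the actual `core` implementation):

```rust
// Hypothetical helper: a count below 0x80 is a single byte; otherwise the
// first byte has its MSB set and the low 15 bits are stored big-endian
// across two bytes.
fn decode_len(bytes: &mut impl Iterator<Item = u8>) -> Option<u16> {
    let first = bytes.next()? as u16;
    if first & 0x80 == 0 {
        Some(first)
    } else {
        let second = bytes.next()? as u16;
        Some(((first & 0x7f) << 8) | second)
    }
}

fn main() {
    let table = [0x05u8, 0x81, 0x02]; // encodes the counts 5 and 0x0102
    let mut iter = table.iter().copied();
    assert_eq!(decode_len(&mut iter), Some(5));
    assert_eq!(decode_len(&mut iter), Some(0x0102));
    assert_eq!(decode_len(&mut iter), None); // table exhausted
}
```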
Identify missing item category in `impl`s

```rust
struct S;

impl S {
    pub hello_method(&self) {
        println!("Hello");
    }
}

fn main() { S.hello_method(); }
```

```rust
error: missing `fn` for method declaration
 --> file.rs:3:4
  |
3 |     pub hello_method(&self) {
  |        ^ missing `fn`
```

Fix rust-lang#40006.

r? @pnkfelix

CC @jonathandturner @GuillaumeGomez
…chton

Allow using Vec::<T>::place_back for T: !Clone

The place_back method was likely put into the block with the `T: Clone` bound by mistake.
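The fix amounts to moving the method out of the `Clone`-bounded `impl` block. A minimal sketch of that pattern with a hypothetical `MyVec` type (not the real `Vec` source, and without the unstable placement syntax itself):

```rust
// Hypothetical MyVec type for illustration: a method that never clones belongs
// in the unbounded impl block, not the one with the `T: Clone` bound.
struct MyVec<T> {
    items: Vec<T>,
}

impl<T> MyVec<T> {
    // Needs no Clone bound, so it lives in the unbounded impl block.
    fn place_back_slot(&mut self) -> &mut Vec<T> {
        &mut self.items
    }
}

impl<T: Clone> MyVec<T> {
    // Methods that genuinely clone stay behind the `T: Clone` bound.
    #[allow(dead_code)]
    fn extend_from_slice(&mut self, other: &[T]) {
        self.items.extend_from_slice(other);
    }
}

fn main() {
    struct NotClone(u32); // deliberately does not implement Clone
    let mut v = MyVec { items: Vec::new() };
    v.place_back_slot().push(NotClone(7));
    assert_eq!(v.items[0].0, 7);
}
```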
… r=alexcrichton

Add a note about overflow for fetch_add/fetch_sub

Fixes rust-lang#40916
Fixes rust-lang#34618

r? @steveklabnik
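For reference, a small runnable example of the wrapping behaviour the new note documents:

```rust
// fetch_add and fetch_sub wrap around on overflow instead of panicking,
// even in debug builds.
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(usize::MAX);
    let previous = counter.fetch_add(1, Ordering::SeqCst);
    assert_eq!(previous, usize::MAX);              // the old value is returned
    assert_eq!(counter.load(Ordering::SeqCst), 0); // the new value wrapped to 0
}
```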
Add ptr::offset_to

This PR adds a method to calculate the signed distance (in number of elements) between two pointers. The resulting value can then be passed to `offset` to get one pointer from the other. This is similar to pointer subtraction in C/C++. There are 2 special cases:

- If the distance is not a multiple of the element size then the result is rounded towards zero. (In C/C++ this is UB.)
- ZSTs return `None`, while normal types return `Some(isize)`. This forces the user to handle the ZST case in unsafe code. (C/C++ doesn't have ZSTs.)
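A minimal sketch of what the proposed method computes, written with stable pointer casts rather than the unstable `offset_to` itself:

```rust
// Computes the signed element distance by hand and feeds it back into offset().
use std::mem::size_of;

fn main() {
    let xs = [1u32, 2, 3, 4];
    let a: *const u32 = &xs[0];
    let b: *const u32 = &xs[3];

    // Signed distance in elements, rounded towards zero.
    let distance = (b as isize - a as isize) / size_of::<u32>() as isize;
    assert_eq!(distance, 3);

    // The distance can be fed back into offset() to recover the other pointer.
    assert_eq!(unsafe { a.offset(distance) }, b);
}
```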
…hton

mark build::cfg::start_new_block as inline(never)

LLVM has a bug - [PR32488](https://bugs.llvm.org//show_bug.cgi?id=32488) - where it fails to deduplicate allocas in some circumstances. The function `start_new_block` has allocas totalling 1216 bytes, and when LLVM inlines several copies of that function into the recursive function `expr::into`, that function's stack space usage goes into tens of kiBs, causing stack overflows. Mark `start_new_block` as inline(never) to keep it from being inlined, getting stack usage under control.

Fixes rust-lang#40493.
Fixes rust-lang#40573.

r? @eddyb
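A minimal sketch of the attribute in use, with a hypothetical stand-in function rather than the actual `build::cfg` code:

```rust
// #[inline(never)] keeps a helper with large local temporaries from being
// inlined into a (possibly recursive) caller, so the caller's stack frame
// stays small.
#[inline(never)]
fn start_new_block_like(out: &mut Vec<u8>) {
    let scratch = [0u8; 1216]; // stands in for ~1216 bytes of allocas
    out.extend_from_slice(&scratch);
}

fn main() {
    let mut out = Vec::new();
    start_new_block_like(&mut out);
    assert_eq!(out.len(), 1216);
}
```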
Let .rev()'s find use the underlying rfind and vice versa

- Connect the plumbing in an obvious way from Rev's find → underlying rfind and vice versa
- A style change in the provided implementation for Iterator::rfind, using simple next_back when it is enough
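A small example of the equivalence this plumbing relies on: searching a reversed iterator from the front matches the same element as searching the original iterator from the back.

```rust
fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    // After this change, Rev::find forwards to the inner iterator's rfind.
    let via_rev = xs.iter().rev().find(|&&x| x % 2 == 0);
    let via_rfind = xs.iter().rfind(|&&x| x % 2 == 0);
    assert_eq!(via_rev, via_rfind);
    assert_eq!(via_rev, Some(&6)); // first match from the back
}
```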
…pls, r=estebank

Make 'overlapping_inherent_impls' lint a hard error

This ought to have been implemented in PR rust-lang#40728. Unfortunately, when I rebased the PR to resolve a merge conflict, the "hard error" code disappeared. This PR complements the initial PR.

Now the following Rust code gives the following error:

```rust
struct Foo;

impl Foo {
    fn id() {}
}

impl Foo {
    fn id() {}
}

fn main() {}
```

```
error[E0592]: duplicate definitions with name `id`
 --> /home/topecongiro/test.rs:4:5
  |
4 |     fn id() {}
  |     ^^^^^^^^^^ duplicate definitions for `id`
...
8 |     fn id() {}
  |     ---------- other definition for `id`

error: aborting due to previous error
```
Replace magic number with readable sig constant

SIG_ERR is defined as `pub const SIG_ERR: sighandler_t = !0 as sighandler_t;`
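A standalone sketch (assumed definitions, not the actual `libc`/`std` source) of why the named constant reads better than the bare magic number:

```rust
// Compare a signal-handler return value against SIG_ERR rather than !0.
#[allow(non_camel_case_types)]
type sighandler_t = usize;

pub const SIG_ERR: sighandler_t = !0 as sighandler_t;

fn main() {
    let ret: sighandler_t = !0; // pretend this came back from signal(2)
    if ret == SIG_ERR {
        println!("signal() reported an error");
    }
}
```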
…excrichton [T]::rsplit() and rsplit_mut(), rust-lang#41020
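A small usage example of the new methods: `rsplit` yields the same sub-slices as `split`, but starting from the end of the slice.

```rust
fn main() {
    let xs = [1, 2, 0, 3, 4, 0, 5];
    let forward: Vec<&[i32]> = xs.split(|&x| x == 0).collect();
    let backward: Vec<&[i32]> = xs.rsplit(|&x| x == 0).collect();
    assert_eq!(forward, [&[1, 2][..], &[3, 4][..], &[5][..]]);
    assert_eq!(backward, [&[5][..], &[3, 4][..], &[1, 2][..]]);
}
```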
@bors r+
📌 Commit d8b6109 has been approved by
r? @BurntSushi (rust_highfive has picked a reviewer for you, use r? to override)
@bors r+
💡 This pull request was already approved, no need to approve it again.
📌 Commit d8b6109 has been approved by
@bors p=10
⌛ Testing commit d8b6109 with merge 0a587b4...
💔 Test failed - status-travis
@bors retry
Edit: actually, probably this
☀️ Test successful - status-appveyor, status-travis |
- Reduce a table used for `Debug` impl of `str`. #40709
- Identify missing item category in `impl`s #40815
- Allow using Vec::<T>::place_back for T: !Clone #40909
- Add a note about overflow for fetch_add/fetch_sub #40927
- Add ptr::offset_to #40943
- mark build::cfg::start_new_block as inline(never) #41015
- Let .rev()'s find use the underlying rfind and vice versa #41028
- Make 'overlapping_inherent_impls' lint a hard error #41052
- Replace magic number with readable sig constant #41054
- [T]::rsplit() and rsplit_mut(), #41020 #41065