const_fn usage #4
I sure am, IIRC.
Status update: in progress. We'll keep this issue open to track this. Also see the discussion starting here.
I just bumped into this myself.
rust-lang/rust#63810 has landed; with that we should be able to allow const-fn usage on nightly. :)
Should we feature-gate it further than just on nightly to avoid future breakages?
That's probably a good idea.
No.
A'right, I'll try working on it tomorrow.
Lol, I was going to do the same. ;)
I would be interested in using this. I'm happy to ask on the individual tracking bugs for the features and do some work to get them stabilized, I just figured I should ask here first.
I am not quite sure, @oli-obk might know better. We currently need the following features: const_ptr_offset_from, const_maybe_uninit_as_ptr, const_raw_ptr_deref, const_refs_to_cell (the latter only if the struct you use with the macro contains an `UnsafeCell`).
This is all still blocked on me finishing rust-lang/nomicon#221, after which we can finally start stabilizing lots of unsafe const things.
I think I have a way to calculate this in stable CTFE that doesn't require pointer offset; it's just more expensive at compile time: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=2910d9be2d2b7a1c9e584bde946306ee. The key here is that you can create a buffer that holds the bytes of the offset. What are your thoughts?
What happens if the offset does not fit into a `u8`?
It's calculated one byte at a time, see the current playground link:

```rust
const LEN: usize = size_of::<$t>();

// Ensure our offsets buffer is aligned to `$t`
#[repr(C)]
union Offsets {
    uninit: ManuallyDrop<MaybeUninit<$t>>,
    buffer: [u8; LEN],
}

// Calculate the least-significant to most-significant byte.
// There are many areas for optimization here.
// This is the simplest code for demo.

// Which byte of the output are we calculating?
let mut byte_index = 0;
// How many bytes we need to consider. [1, 2^8) = 1, [2^8, 2^16) = 2, [2^16, 2^24) = 3, etc.
let num_bytes = (usize::BITS - LEN.leading_zeros() - 1) / 8 + 1;
// Create a buffer long enough to store a `$t`
let mut offsets = Offsets { buffer: [0u8; LEN] };
// These are the raw bytes of the output offset, ordered from least to most significant byte.
let mut field_offset_bytes = [0u8; size_of::<usize>()];
// Loop through each byte of output to consider
while byte_index < num_bytes {
    // Fill the buffer with the considered byte for each index.
    // For example, an index of 2093 is the little-endian bytes [45, 8, 0, 0, 0, 0, 0, 0].
    // - When byte_index == 0, offsets.buffer[2093] = 45.
    // - When byte_index == 1, offsets.buffer[2093] = 8.
    // - For every other byte_index, offsets.buffer[2093] = 0.
    let mut i = 1;
    while i < LEN {
        let byte = i.to_le_bytes()[byte_index];
        unsafe { offsets.buffer[i] = byte };
        i += 1;
    }
    // Treat our buffer as a `*const $t`, and offset into the field.
    let invalid = unsafe { addr_of!(offsets.uninit) } as *const $t;
    let field = unsafe { addr_of!((*invalid).$field) };
    // Retrieve the byte at the position of the field in the buffer.
    // That value will be the little-endian `byte_index`th byte of the output.
    let field_offset_byte = unsafe { *(field as *const u8) };
    field_offset_bytes[byte_index] = field_offset_byte;
    byte_index += 1;
}
// Build the full field_offset from the little-endian bytes we just determined.
let field_offset = usize::from_le_bytes(field_offset_bytes);
let invalid = unsafe { addr_of!(offsets.uninit) } as *const $t;
let field = unsafe { addr_of!((*invalid).$field) };
let field_length = size_of_ptr(field);
field_offset..(field_offset + field_length)
```
Okay... I'm sorry, I have no idea how this code works, and it is quite hard to decipher. An English description of the algorithm and some comments would be really helpful. :)
It just writes the offsets ([0,1,2,...]) into a byte array that overlaps with the struct, and then casts a pointer to the field to a u8 pointer and reads that byte to get the offset. But that breaks down for offsets >255, so it repeats that a few times to get the higher bits of the offset. So the second time the array holds 256 zeroes, followed by 256 ones, etc., which gets the second byte of the offset. And so on. (With a small optimization to not repeat more than necessary, by using the 2-log of the length of the struct, computed from the number of leading zeros.)
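To make this concrete, here is a minimal, self-contained sketch of the single-byte variant for one hypothetical `repr(C)` struct (`Sample`, `Overlay`, and `offset_of_b` are illustrative names, not from the crate). The buffer is filled before the field pointer is derived, and the whole thing evaluates in stable CTFE:

```rust
use core::mem::{size_of, ManuallyDrop, MaybeUninit};
use core::ptr::addr_of;

// Hypothetical demo struct; any repr(C) struct under 256 bytes works here.
#[allow(dead_code)]
#[repr(C)]
struct Sample {
    a: u8,
    b: u32,
    c: u16,
}

const LEN: usize = size_of::<Sample>();

// Overlap the struct with a byte buffer so that writes to the buffer are
// visible through a pointer derived from the struct variant.
#[repr(C)]
union Overlay {
    uninit: ManuallyDrop<MaybeUninit<Sample>>,
    buffer: [u8; LEN],
}

const fn offset_of_b() -> usize {
    // Fill the buffer with each byte's own index: buffer[i] = i.
    let mut overlay = Overlay { buffer: [0u8; LEN] };
    let mut i = 0;
    while i < LEN {
        unsafe { overlay.buffer[i] = i as u8 };
        i += 1;
    }
    // Derive the field pointer only after all writes, then read its first
    // byte: that byte is the field's offset (valid while LEN <= 255).
    unsafe {
        let base = addr_of!(overlay.uninit) as *const Sample;
        let field = addr_of!((*base).b);
        *(field as *const u8) as usize
    }
}

// Evaluated entirely at compile time.
const OFF_B: usize = offset_of_b();

fn main() {
    // repr(C): `b: u32` follows `a: u8` plus 3 padding bytes, so offset 4.
    assert_eq!(OFF_B, 4);
    println!("offset of b = {}", OFF_B);
}
```

The multi-pass extension in the code below repeats this read once per byte of the offset rather than once in total.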
Ah, apologies, the multi-byte version was written in 10 minutes from the original single-byte version, so names and structure are non-ideal and intended for demo only 😅 @m-ou-se's reading is correct; regardless, I added some comments in the sample. The algorithm is O(N) (N = size_of) as opposed to the pointer_offset_from version that is O(1). My thought is that we could switch to this expensive-at-compile-time version.
(It might be a bit confusing and easy to miss that 'bytes' here refers to the bytes of the offset value, not the bytes of the struct.)
Ah, I see. Wow! That's clever. How does this approach handle the situation where the
We would, however, be required to keep that version around even after const_ptr_offset_from is stabilized, due to backwards compatibility. I am not sure we want to maintain this code for so long, so I'd rather spend my energy on getting const_ptr_offset_from stabilized quickly. I think we could totally make it for the May 19 release (that requires getting the stabilization PR landed by April 7).
Thanks, that means a lot ❤️
You're correct - I've updated the playground link. The final algorithm will be more rigorously tested and will likely need further adjustments for edge cases I've not yet considered.
Why? There's no change in behavior between this and a pointer-offset version (excepting compile time). My proposal:
My team's not on a strict timeline for the const-time offset-of, so we can definitely wait or fork until May, if the maintainers would rather not spend the energy on reviewing/maintaining a temporary workaround.
I don't see us doing that though. memoffset is kind of a polyfill crate, so we want to be extremely conservative here. There's a reason our MSRV is still Rust 1.19 (and our code is littered with version checks).

If @Gilnaa is fine landing this I won't object, but I think in the time it takes me to thoroughly review this, I can resolve rust-lang/miri#1950, and then const_ptr_offset_from is just a stabilization PR away from being done.
Hey @kupiakos, I admit I'm finding it a bit hard to accept it, though.
Hi @Gilnaa, thanks for the response. I understand if you'd rather not upstream and maintain this; after all, the solution only starts working in Rust 1.58. I'd still like to offer this as an interim solution for those stable versions until const_ptr_offset_from is stabilized.
@kupiakos, that's a very interesting idea! I might be missing something obvious, but I think your code can be simplified to this:

```rust
use core::mem::{size_of, ManuallyDrop, MaybeUninit};
use core::ptr::addr_of;

const LEN: usize = size_of::<$t>();

// Ensure our offsets buffer is aligned to `$t`
#[repr(C)]
union Offsets {
    uninit: ManuallyDrop<MaybeUninit<$t>>,
    buffer: [u8; LEN + 1],
}

let mut offsets = Offsets { buffer: [0u8; LEN + 1] };
let invalid = unsafe { addr_of!(offsets.uninit) as *const $t };
let field = unsafe { addr_of!((*invalid).$field) };

// Set all bytes sequentially until we bump into the field pointer
let mut field_offset = 0;
while field_offset < LEN {
    unsafe { offsets.buffer[field_offset] = 1 };
    if unsafe { *(field as *const u8) } == 1 {
        break;
    }
    field_offset += 1;
}
field_offset..(field_offset + size_of_ptr(field))
```

EDIT: This can be enhanced further to do 8 bytes at once (at the cost of using up to 7 bytes more and doing some unnecessary work for offsets that are smaller than 8), but it doesn't seem to have any significant effect, at least for simple projects.

```rust
use core::mem::{size_of, ManuallyDrop, MaybeUninit};
use core::ptr::addr_of;

const LEN: usize = size_of::<$t>() / size_of::<u64>();

// Ensure our offsets buffer is aligned to `$t`
#[repr(C)]
union Offsets {
    uninit: ManuallyDrop<MaybeUninit<$t>>,
    buffer: [u64; LEN + 1],
}

let mut offsets = Offsets {
    buffer: [0u64; LEN + 1],
};
let field = unsafe {
    let invalid = addr_of!(offsets.uninit) as *const $t;
    addr_of!((*invalid).$field)
};

// Construct a native representation of the sequence [1,2,3,...] that
// fits in a u64
let v = {
    let mut bytes = [0u8; size_of::<u64>()];
    let mut i = 1;
    while i <= bytes.len() {
        bytes[i - 1] = i as u8;
        i += 1;
    }
    u64::from_ne_bytes(bytes)
};

// Set 8-byte chunks sequentially until we bump into the field pointer
let mut buf_offset = 0;
while buf_offset <= LEN {
    unsafe { offsets.buffer[buf_offset] = v };
    if unsafe { *(field as *const u8) } != 0 {
        break;
    }
    buf_offset += 1;
}
// Calculate the final field offset by multiplying by the chunk size and
// adding the in-chunk offset of the field
let in_chunk_offset = unsafe { *(field as *const u8) as usize } - 1;
let field_offset = buf_offset * size_of::<u64>() + in_chunk_offset;
field_offset..(field_offset + size_of_ptr(field))
```
At least in terms of the Rust aliasing rules this is invalid -- when you write to `offsets.buffer`, that invalidates the previously derived `field` pointer.
Would getting the pointer after the assignment make it valid? I.e.:

```rust
// Set 8-byte chunks sequentially until we bump into the field pointer
let mut buf_offset = 0;
while buf_offset <= LEN {
    unsafe { offsets.buffer[buf_offset] = v };
    let field = unsafe {
        let invalid = addr_of!(offsets.uninit) as *const $t;
        addr_of!((*invalid).$field)
    };
    if unsafe { *(field as *const u8) } != 0 {
        break;
    }
    buf_offset += 1;
}
```
Yes, that would work.
@iscgar I like that idea better! It should have far fewer reads/writes on average. I think it can be simplified even further:

```rust
use core::mem::{size_of, ManuallyDrop, MaybeUninit};
use core::ptr::addr_of;

const LEN: usize = size_of::<$t>() / size_of::<u64>() + 1;

// Ensure our offsets buffer is aligned to `$t`
#[repr(C)]
union Offsets {
    uninit: ManuallyDrop<MaybeUninit<$t>>,
    buffer: [u64; LEN],
}

let mut offsets = Offsets {
    buffer: [0u64; LEN],
};

// Construct a native representation of the sequence [1,2,3,...] that
// fits in a u64
let v = u64::from_ne_bytes(0x0807060504030201u64.to_le_bytes());

// Set 8-byte chunks sequentially until we bump into the field pointer
let mut buf_offset = 0;
loop {
    unsafe { offsets.buffer[buf_offset] = v };
    let field = unsafe {
        let invalid = addr_of!(offsets.uninit) as *const $t;
        addr_of!((*invalid).$field)
    };
    let in_chunk_position = unsafe { *(field as *const u8) as usize };
    if in_chunk_position != 0 {
        // We have hit the target chunk.
        // Calculate the final field offset by multiplying by the chunk size
        // and adding the in-chunk position of the field, subtracting 1
        // because in_chunk_position is 1-indexed.
        let field_offset = buf_offset * size_of::<u64>() + in_chunk_position - 1;
        break field_offset..(field_offset + size_of_pointee(field));
    }
    buf_offset += 1;
    if buf_offset >= LEN {
        panic!("could not locate field offset");
    }
}
```

Another nice advantage of this is that it doesn't need a special case.
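For reference, here is a self-contained instantiation of this chunked loop for one concrete, hypothetical struct (`Sample` and `offset_of_b` are illustrative names), with the field pointer re-derived after each write per the aliasing discussion above:

```rust
use core::mem::{size_of, ManuallyDrop, MaybeUninit};
use core::ptr::addr_of;

// Hypothetical demo struct; `b` sits at offset 8 under repr(C).
#[allow(dead_code)]
#[repr(C)]
struct Sample {
    a: u64,
    b: u16,
}

const LEN: usize = size_of::<Sample>() / size_of::<u64>() + 1;

#[repr(C)]
union Offsets {
    uninit: ManuallyDrop<MaybeUninit<Sample>>,
    buffer: [u64; LEN],
}

const fn offset_of_b() -> usize {
    let mut offsets = Offsets { buffer: [0u64; LEN] };
    // One chunk whose bytes, in memory order, are [1, 2, ..., 8]: each byte
    // stores its own 1-indexed position within the chunk.
    let v = u64::from_ne_bytes(0x0807060504030201u64.to_le_bytes());
    let mut buf_offset = 0;
    loop {
        unsafe { offsets.buffer[buf_offset] = v };
        // Re-derive the field pointer after every write, so the write to
        // the buffer does not invalidate it.
        let field = unsafe {
            let base = addr_of!(offsets.uninit) as *const Sample;
            addr_of!((*base).b)
        };
        let in_chunk_position = unsafe { *(field as *const u8) as usize };
        if in_chunk_position != 0 {
            // in_chunk_position is 1-indexed, hence the -1.
            break buf_offset * size_of::<u64>() + in_chunk_position - 1;
        }
        buf_offset += 1;
    }
}

// Evaluated entirely at compile time.
const OFF_B: usize = offset_of_b();

fn main() {
    // repr(C): `b: u16` follows the 8-byte `a: u64`, so offset 8.
    assert_eq!(OFF_B, 8);
    println!("offset of b = {}", OFF_B);
}
```

Returning only the offset (rather than a range) keeps the sketch independent of the thread's `size_of_pointee` helper.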
Note: once this is usable in `const`, there is still a problem with interior mutability. This fails because taking a reference to a value whose type contains an `UnsafeCell` requires `const_refs_to_cell`:

```rust
struct Foo {
    x: UnsafeCell<u32>,
}

const fn offset_of_x() -> usize {
    offset_of!(Foo, x)
}
```

This also means that there are significant limitations for generic parameters on the input type, since the type could be interior mutable:

```rust
struct Bar<T> {
    x: u32,
    y: T,
}

const fn offset_of_y<T>() -> usize {
    offset_of!(Bar<T>, y)
}

// `y: PhantomData<T>` has no such issue
```

Perhaps this could be addressed if there were a way to rule out interior mutability.
Yes, indeed. (The offset_from-avoiding alternatives that have been proposed have the same problem as the implementation used in this crate.) Interior mutability is strictly more complicated than regular mutability, so I would not expect any of that to stabilize before rust-lang/rust#57349.
Here is a simple version. It works well for me.

```rust
/// Calculates the offset of the specified field from the start of the named struct.
#[macro_export]
macro_rules! offset_of {
    ($ty: path, $field: tt) => {{
        use ::core::mem::MaybeUninit;
        use ::core::primitive::{u8, usize};
        use ::core::ptr;

        #[allow(
            unused_unsafe,
            clippy::as_conversions,
            clippy::unneeded_field_pattern,
            clippy::undocumented_unsafe_blocks
        )]
        const OFFSET: usize = unsafe {
            // ensure the type is a named struct
            // ensure the field exists and is accessible
            let $ty { $field: _, .. };
            // const since 1.36
            let uninit: MaybeUninit<$ty> = MaybeUninit::uninit();
            // const since 1.59
            // `UnsafeCell` needs feature(const_refs_to_cell)
            let base_ptr: *const $ty = uninit.as_ptr();
            // stable since 1.51
            let field_ptr: *const _ = ptr::addr_of!((*base_ptr).$field);
            // const since 1.38
            let base_addr = base_ptr.cast::<u8>();
            let field_addr = field_ptr.cast::<u8>();
            // const since 1.65
            field_addr.offset_from(base_addr) as usize
        };
        OFFSET
    }};
}
```
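Expanded for a single concrete struct (the `Packet` type and field names here are hypothetical, chosen only for illustration), the macro body boils down to this, which compiles on stable Rust 1.65+:

```rust
use core::mem::MaybeUninit;
use core::ptr;

// Hypothetical demo struct; `len` sits at offset 4 under repr(C).
#[allow(dead_code)]
#[repr(C)]
struct Packet {
    tag: u8,
    len: u32,
}

// The same steps as the macro, for one concrete field: take the address of
// an uninitialized value, derive a field pointer with `addr_of!`, and
// measure the distance with `offset_from` (const-stable since Rust 1.65).
const LEN_OFFSET: usize = unsafe {
    let uninit: MaybeUninit<Packet> = MaybeUninit::uninit();
    let base_ptr: *const Packet = uninit.as_ptr();
    let field_ptr: *const u32 = ptr::addr_of!((*base_ptr).len);
    field_ptr.cast::<u8>().offset_from(base_ptr.cast::<u8>()) as usize
};

fn main() {
    // repr(C): `len: u32` follows the 1-byte tag plus 3 padding bytes.
    assert_eq!(LEN_OFFSET, 4);
    println!("len offset = {}", LEN_OFFSET);
}
```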
Coming back to this idea, I don't think it's necessary to have a full
The obvious issue with this approach is that statically selecting the alignment cannot always be correct. However, defining the local structure as follows requires
It is not clear to me whether this error is a bug or not, and I know of no other way to specify that one struct should have the same alignment as another. If this is a bug and it were to be fixed, then this would be sufficient for a const implementation.

Edit: The Rust reference seems to suggest that the maximum possible alignment value is 2^29. I'm a little suspicious of this statement, but if this is true then a const implementation could match on every possible alignment. While I think such an implementation is somewhat distasteful, with the 30x cut/paste code, it does work and it would allow a const `offset_of!` on stable.
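The "one arm per possible alignment" idea could be sketched roughly like this (hypothetical macro name `aligned_like`; only four arms shown, where a real version would need one per power of two up to 2^29, and would declare the buffer for later use rather than just report its alignment):

```rust
use core::mem::align_of;

// Sketch: `repr(align(N))` only accepts integer literals, so matching the
// alignment of an arbitrary `$t` requires a hand-written arm per power of
// two. Each arm declares a buffer type with that literal alignment.
macro_rules! aligned_like {
    ($t:ty) => {{
        match align_of::<$t>() {
            1 => { #[repr(C, align(1))] struct Buf([u8; 1]); align_of::<Buf>() }
            2 => { #[repr(C, align(2))] struct Buf([u8; 2]); align_of::<Buf>() }
            4 => { #[repr(C, align(4))] struct Buf([u8; 4]); align_of::<Buf>() }
            8 => { #[repr(C, align(8))] struct Buf([u8; 8]); align_of::<Buf>() }
            // ...and so on, one hand-written arm per power of two up to 2^29
            _ => panic!("unsupported alignment"),
        }
    }};
}

fn main() {
    #[repr(align(8))]
    struct A8;
    // The selected buffer's alignment matches the input type's alignment.
    assert_eq!(aligned_like!(u32), 4);
    assert_eq!(aligned_like!(A8), 8);
    println!("ok");
}
```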
Hi there! It seems that `offset_from` is now const-stable as of Rust 1.65. Am I mistaken, or could we merge that in without issue? Thanks!
Hey, yeah, now that this feature is stable we can auto-use it when available.
When running on a compiler newer than 1.65, automatically enable usage of the macro in const contexts. Closes #4
I know it's hard/impossible right now, but are you tracking the ongoing `const_fn` changes to ensure that `offset_of!` can be used in constants at some point? It's usually very useful in tandem with `size_of()`, which is already `const fn`.