
Fix spelling
* additional
* addresses
* aggregates
* always
* around
* beginning
* behaviours
* borrows
* called
* canary
* deallocated
* determine
* division
* documentation
* empty
* endianness
* ensures
* existing
* github
* hygiene
* individual
* initialize
* instantiate
* library
* location
* miscellaneous
* mitigates
* needs
* nonexistent
* occurred
* occurring
* overridden
* parameter
* performable
* previous
* referential
* requires
* resolved
* scenarios
* semantics
* spurious
* structure
* subtracting
* suppress
* synchronization
* this
* timestamp
* to
* transferring
* unknown
* variable
* windows

Signed-off-by: Josh Soref <[email protected]>
jsoref committed Apr 14, 2023
1 parent f81c76a commit 282840b
Showing 50 changed files with 77 additions and 77 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -403,7 +403,7 @@ to Miri failing to detect cases of undefined behavior in a program.
* `-Zmiri-retag-fields=<all|none|scalar>` controls when Stacked Borrows retagging recurses into
fields. `all` means it always recurses (like `-Zmiri-retag-fields`), `none` means it never
recurses, `scalar` (the default) means it only recurses for types where we would also emit
-`noalias` annotations in the generated LLVM IR (types passed as indivudal scalars or pairs of
+`noalias` annotations in the generated LLVM IR (types passed as individual scalars or pairs of
scalars). Setting this to `none` is **unsound**.
* `-Zmiri-tag-gc=<blocks>` configures how often the pointer tag garbage collector runs. The default
is to search for and remove unreachable tags once every `10000` basic blocks. Setting this to
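
As a rough illustration of the distinction the `scalar` setting draws (the types and functions below are invented; only the flag itself comes from the README), a reference wrapped in a one- or two-field newtype is typically passed as an individual scalar or a scalar pair and therefore gets retagged, while a larger aggregate does not:

```rust
// Hypothetical illustration for `-Zmiri-retag-fields=scalar`; only the flag
// name comes from the README above, the types below are invented.
// Run with e.g.: MIRIFLAGS="-Zmiri-retag-fields=scalar" cargo miri run

struct One<'a>(&'a mut i32);                     // typically passed as one scalar
struct Two<'a>(&'a mut i32, &'a i32);            // typically passed as a scalar pair
struct Three<'a>(&'a mut i32, &'a i32, &'a i32); // a larger aggregate

fn take_one(_: One<'_>) {}
fn take_two(_: Two<'_>) {}
fn take_three(_: Three<'_>) {}

fn main() {
    let (mut a, b, c) = (1, 2, 3);
    take_one(One(&mut a));             // `scalar` mode would recurse into this field
    take_two(Two(&mut a, &b));         // ...and into these (scalar pair)
    take_three(Three(&mut a, &b, &c)); // but not into this larger aggregate
}
```
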
2 changes: 1 addition & 1 deletion cargo-miri/src/main.rs
@@ -81,7 +81,7 @@ fn main() {
"miri" => phase_cargo_miri(args),
"runner" => phase_runner(args, RunnerPhase::Cargo),
arg if arg == env::var("RUSTC").unwrap() => {
-// If the first arg is equal to the RUSTC env ariable (which should be set at this
+// If the first arg is equal to the RUSTC env variable (which should be set at this
// point), then we need to behave as rustc. This is the somewhat counter-intuitive
// behavior of having both RUSTC and RUSTC_WRAPPER set
// (see https://github.com/rust-lang/cargo/issues/10886).
2 changes: 1 addition & 1 deletion src/bin/miri.rs
@@ -120,7 +120,7 @@ impl rustc_driver::Callbacks for MiriBeRustCompilerCalls {
#[allow(rustc::potential_query_instability)] // rustc_codegen_ssa (where this code is copied from) also allows this lint
fn config(&mut self, config: &mut Config) {
if config.opts.prints.is_empty() && self.target_crate {
-// Queries overriden here affect the data stored in `rmeta` files of dependencies,
+// Queries overridden here affect the data stored in `rmeta` files of dependencies,
// which will be used later in non-`MIRI_BE_RUSTC` mode.
config.override_queries = Some(|_, local_providers, _| {
// `exported_symbols` and `reachable_non_generics` provided by rustc always returns
2 changes: 1 addition & 1 deletion src/borrow_tracker/mod.rs
@@ -238,7 +238,7 @@ pub enum BorrowTrackerMethod {
}

impl BorrowTrackerMethod {
-pub fn instanciate_global_state(self, config: &MiriConfig) -> GlobalState {
+pub fn instantiate_global_state(self, config: &MiriConfig) -> GlobalState {
RefCell::new(GlobalStateInner::new(
self,
config.tracked_pointer_tags.clone(),
2 changes: 1 addition & 1 deletion src/borrow_tracker/stacked_borrows/diagnostics.rs
@@ -292,7 +292,7 @@ impl<'history, 'ecx, 'mir, 'tcx> DiagnosticCx<'history, 'ecx, 'mir, 'tcx> {
.rev()
.find_map(|event| {
// First, look for a Creation event where the tag and the offset matches. This
-// ensrues that we pick the right Creation event when a retag isn't uniform due to
+// ensures that we pick the right Creation event when a retag isn't uniform due to
// Freeze.
let range = event.retag.range;
if event.retag.new_tag == tag
4 changes: 2 additions & 2 deletions src/borrow_tracker/stacked_borrows/mod.rs
@@ -433,7 +433,7 @@ impl<'tcx> Stack {
let (Some(granting_idx), ProvenanceExtra::Concrete(_)) = (granting_idx, derived_from) else {
// The parent is a wildcard pointer or matched the unknown bottom.
// This is approximate. Nobody knows what happened, so forget everything.
-// The new thing is SRW anyway, so we cannot push it "on top of the unkown part"
+// The new thing is SRW anyway, so we cannot push it "on top of the unknown part"
// (for all we know, it might join an SRW group inside the unknown).
trace!("reborrow: forgetting stack entirely due to SharedReadWrite reborrow from wildcard or unknown");
self.set_unknown_bottom(global.next_ptr_tag);
@@ -825,7 +825,7 @@ trait EvalContextPrivExt<'mir: 'ecx, 'tcx: 'mir, 'ecx>: crate::MiriInterpCxExt<'
Ok(Some(alloc_id))
}

-/// Retags an indidual pointer, returning the retagged version.
+/// Retags an individual pointer, returning the retagged version.
/// `kind` indicates what kind of reference is being created.
fn sb_retag_reference(
&mut self,
2 changes: 1 addition & 1 deletion src/borrow_tracker/stacked_borrows/stack.rs
@@ -51,7 +51,7 @@ impl Stack {
// Note that the algorithm below is based on considering the tag at read_idx - 1,
// so precisely considering the tag at index 0 for removal when we have an unknown
// bottom would complicate the implementation. The simplification of not considering
-// it does not have a significant impact on the degree to which the GC mititages
+// it does not have a significant impact on the degree to which the GC mitigates
// memory growth.
let mut read_idx = 1;
let mut write_idx = read_idx;
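
The `read_idx`/`write_idx` walk this hunk refers to is the usual in-place compaction pattern; a standalone sketch (not Miri's actual GC, which additionally consults the tag at `read_idx - 1`) looks like this:

```rust
// A standalone sketch (not Miri's actual GC) of the two-index, in-place
// compaction pattern described above: index 0 is always kept, and
// `read_idx`/`write_idx` walk the rest of the vector, keeping only the
// elements the predicate marks as live.
fn compact<T: Copy>(v: &mut Vec<T>, keep: impl Fn(&T) -> bool) {
    let mut read_idx = 1;
    let mut write_idx = read_idx;
    while read_idx < v.len() {
        if keep(&v[read_idx]) {
            v[write_idx] = v[read_idx];
            write_idx += 1;
        }
        read_idx += 1;
    }
    v.truncate(write_idx);
}

fn main() {
    let mut stack = vec![0, 1, 2, 3, 4, 5];
    compact(&mut stack, |&tag| tag % 2 == 0); // keeps index 0 plus the "even tags"
    assert_eq!(stack, vec![0, 2, 4]);
}
```
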
2 changes: 1 addition & 1 deletion src/borrow_tracker/tree_borrows/mod.rs
@@ -283,7 +283,7 @@ trait EvalContextPrivExt<'mir: 'ecx, 'tcx: 'mir, 'ecx>: crate::MiriInterpCxExt<'
Ok(Some((alloc_id, new_tag)))
}

-/// Retags an indidual pointer, returning the retagged version.
+/// Retags an individual pointer, returning the retagged version.
fn tb_retag_reference(
&mut self,
val: &ImmTy<'tcx, Provenance>,
2 changes: 1 addition & 1 deletion src/borrow_tracker/tree_borrows/perms.rs
@@ -113,7 +113,7 @@ mod transition {
}

impl PermissionPriv {
-/// Determines whether a transition that occured is compatible with the presence
+/// Determines whether a transition that occurred is compatible with the presence
/// of a Protector. This is not included in the `transition` functions because
/// it would distract from the few places where the transition is modified
/// because of a protector, but not forbidden.
2 changes: 1 addition & 1 deletion src/borrow_tracker/tree_borrows/tree.rs
@@ -34,7 +34,7 @@ pub(super) struct LocationState {
/// Before initialization we still apply some preemptive transitions on
/// `permission` to know what to do in case it ever gets initialized,
/// but these can never cause any immediate UB. There can however be UB
-/// the moment we attempt to initalize (i.e. child-access) because some
+/// the moment we attempt to initialize (i.e. child-access) because some
/// foreign access done between the creation and the initialization is
/// incompatible with child accesses.
initialized: bool,
2 changes: 1 addition & 1 deletion src/concurrency/data_race.rs
@@ -1199,7 +1199,7 @@ pub struct GlobalState {

/// A flag to mark we are currently performing
/// a data race free action (such as atomic access)
-/// to supress the race detector
+/// to suppress the race detector
ongoing_action_data_race_free: Cell<bool>,

/// Mapping of a vector index to a known set of thread
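
A minimal sketch of the suppression pattern this flag enables (the `Detector` type and its methods are invented; Miri's real race detector is far more involved):

```rust
use std::cell::Cell;

// Invented type, not Miri's API: temporarily suppress race reports while
// performing an action that is known to be data-race free, then always
// restore the flag afterwards.
struct Detector {
    ongoing_action_data_race_free: Cell<bool>,
}

impl Detector {
    fn race_free_action<R>(&self, f: impl FnOnce() -> R) -> R {
        self.ongoing_action_data_race_free.set(true);
        let result = f();
        self.ongoing_action_data_race_free.set(false);
        result
    }

    fn report_race(&self) {
        if self.ongoing_action_data_race_free.get() {
            return; // suppressed: we are inside a known race-free action
        }
        eprintln!("data race detected");
    }
}

fn main() {
    let d = Detector { ongoing_action_data_race_free: Cell::new(false) };
    d.race_free_action(|| d.report_race()); // prints nothing
    d.report_race();                        // would report
}
```
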
2 changes: 1 addition & 1 deletion src/concurrency/init_once.rs
@@ -151,7 +151,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
assert_eq!(
init_once.status,
InitOnceStatus::Uninitialized,
"begining already begun or complete init once"
"beginning already begun or complete init once"
);
init_once.status = InitOnceStatus::Begun;
}
6 changes: 3 additions & 3 deletions src/concurrency/range_object_map.rs
@@ -25,9 +25,9 @@ pub struct RangeObjectMap<T> {

#[derive(Clone, Debug, PartialEq)]
pub enum AccessType {
-/// The access perfectly overlaps (same offset and range) with the exsiting allocation
+/// The access perfectly overlaps (same offset and range) with the existing allocation
PerfectlyOverlapping(Position),
-/// The access does not touch any exising allocation
+/// The access does not touch any existing allocation
Empty(Position),
/// The access overlaps with one or more existing allocations
ImperfectlyOverlapping(Range<Position>),
@@ -115,7 +115,7 @@ impl<T> RangeObjectMap<T> {
// want to repeat the binary search on each time, so we ask the caller to supply Position
pub fn insert_at_pos(&mut self, pos: Position, range: AllocRange, data: T) {
self.v.insert(pos, Elem { range, data });
-// If we aren't the first element, then our start must be greater than the preivous element's end
+// If we aren't the first element, then our start must be greater than the previous element's end
if pos > 0 {
assert!(self.v[pos - 1].range.end() <= range.start);
}
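
The invariant being asserted is easier to see in a stripped-down sketch (invented types, not Miri's `RangeObjectMap`): the vector stays sorted by start offset and the ranges stay disjoint, so a freshly inserted element must begin at or after the end of its left neighbour:

```rust
use std::ops::Range;

// Invented type: `v` is kept sorted by start offset with disjoint ranges,
// and the caller supplies the insertion position it already computed.
struct RangeMap<T> {
    v: Vec<(Range<u64>, T)>,
}

impl<T> RangeMap<T> {
    fn insert_at_pos(&mut self, pos: usize, range: Range<u64>, data: T) {
        self.v.insert(pos, (range.clone(), data));
        if pos > 0 {
            // Our start must not be below the previous element's end.
            assert!(self.v[pos - 1].0.end <= range.start);
        }
        if pos + 1 < self.v.len() {
            // Symmetrically, we must end before the next element starts.
            assert!(range.end <= self.v[pos + 1].0.start);
        }
    }
}

fn main() {
    let mut m = RangeMap { v: vec![(0..4, 'a'), (8..12, 'c')] };
    m.insert_at_pos(1, 4..8, 'b'); // fits exactly between its neighbours
    assert_eq!(m.v.len(), 3);
}
```
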
4 changes: 2 additions & 2 deletions src/concurrency/sync.rs
@@ -143,7 +143,7 @@ struct Condvar {
waiters: VecDeque<CondvarWaiter>,
/// Tracks the happens-before relationship
/// between a cond-var signal and a cond-var
-/// wait during a non-suprious signal event.
+/// wait during a non-spurious signal event.
/// Contains the clock of the last thread to
/// perform a futex-signal.
data_race: VClock,
@@ -373,7 +373,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
.expect("invariant violation: lock_count == 0 iff the thread is unlocked");
if mutex.lock_count == 0 {
mutex.owner = None;
-// The mutex is completely unlocked. Try transfering ownership
+// The mutex is completely unlocked. Try transferring ownership
// to another thread.
if let Some(data_race) = &this.machine.data_race {
data_race.validate_lock_release(
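
A simplified sketch of the unlock path this comment describes (invented type; Miri's version additionally updates the data-race clocks on release): the recursive lock count is decremented, and only when it reaches zero is ownership cleared and handed to a waiter:

```rust
use std::collections::VecDeque;

// Invented type, not Miri's machinery.
struct Mutex {
    owner: Option<u32>,     // owning thread id, if any
    lock_count: usize,      // recursive acquisitions by the owner
    waiters: VecDeque<u32>, // threads blocked on this mutex
}

impl Mutex {
    fn unlock(&mut self) {
        self.lock_count =
            self.lock_count.checked_sub(1).expect("unlocking a mutex that is not locked");
        if self.lock_count == 0 {
            self.owner = None;
            // The mutex is completely unlocked. Try transferring ownership
            // to another (waiting) thread.
            if let Some(next) = self.waiters.pop_front() {
                self.owner = Some(next);
                self.lock_count = 1;
            }
        }
    }
}

fn main() {
    let mut m = Mutex { owner: Some(1), lock_count: 2, waiters: VecDeque::from([2]) };
    m.unlock(); // still held by thread 1 (recursive lock)
    m.unlock(); // fully released and handed over to thread 2
    assert_eq!(m.owner, Some(2));
}
```
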
4 changes: 2 additions & 2 deletions src/concurrency/thread.rs
@@ -821,7 +821,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
}

// Write the current thread-id, switch to the next thread later
-// to treat this write operation as occuring on the current thread.
+// to treat this write operation as occurring on the current thread.
if let Some(thread_info_place) = thread {
this.write_scalar(
Scalar::from_uint(new_thread_id.to_u32(), thread_info_place.layout.size),
@@ -830,7 +830,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
}

// Finally switch to new thread so that we can push the first stackframe.
-// After this all accesses will be treated as occuring in the new thread.
+// After this all accesses will be treated as occurring in the new thread.
let old_thread_id = this.set_active_thread(new_thread_id);

// Perform the function pointer load in the new thread frame.
14 changes: 7 additions & 7 deletions src/concurrency/weak_memory.rs
@@ -24,16 +24,16 @@
//! However, this model lacks SC accesses and is therefore unusable by Miri (SC accesses are everywhere in library code).
//!
//! If you find anything that proposes a relaxed memory model that is C++20-consistent, supports all orderings Rust's atomic accesses
-//! and fences accept, and is implementable (with operational semanitcs), please open a GitHub issue!
+//! and fences accept, and is implementable (with operational semantics), please open a GitHub issue!
//!
//! One characteristic of this implementation, in contrast to some other notable operational models such as ones proposed in
//! Taming Release-Acquire Consistency by Ori Lahav et al. (<https://plv.mpi-sws.org/sra/paper.pdf>) or Promising Semantics noted above,
//! is that this implementation does not require each thread to hold an isolated view of the entire memory. Here, store buffers are per-location
//! and shared across all threads. This is more memory efficient but does require store elements (representing writes to a location) to record
//! information about reads, whereas in the other two models it is the other way round: reads points to the write it got its value from.
//! Additionally, writes in our implementation do not have globally unique timestamps attached. In the other two models this timestamp is
-//! used to make sure a value in a thread's view is not overwritten by a write that occured earlier than the one in the existing view.
-//! In our implementation, this is detected using read information attached to store elements, as there is no data strucutre representing reads.
+//! used to make sure a value in a thread's view is not overwritten by a write that occurred earlier than the one in the existing view.
+//! In our implementation, this is detected using read information attached to store elements, as there is no data structure representing reads.
//!
//! The C++ memory model is built around the notion of an 'atomic object', so it would be natural
//! to attach store buffers to atomic objects. However, Rust follows LLVM in that it only has
@@ -48,7 +48,7 @@
//! One consequence of this difference is that safe/sound Rust allows for more operations on atomic locations
//! than the C++20 atomic API was intended to allow, such as non-atomically accessing
//! a previously atomically accessed location, or accessing previously atomically accessed locations with a differently sized operation
-//! (such as accessing the top 16 bits of an AtomicU32). These senarios are generally undiscussed in formalisations of C++ memory model.
+//! (such as accessing the top 16 bits of an AtomicU32). These scenarios are generally undiscussed in formalisations of C++ memory model.
//! In Rust, these operations can only be done through a `&mut AtomicFoo` reference or one derived from it, therefore these operations
//! can only happen after all previous accesses on the same locations. This implementation is adapted to allow these operations.
//! A mixed atomicity read that races with writes, or a write that races with reads or writes will still cause UBs to be thrown.
@@ -61,7 +61,7 @@
//
// 2. In the operational semantics, each store element keeps the timestamp of a thread when it loads from the store.
// If the same thread loads from the same store element multiple times, then the timestamps at all loads are saved in a list of load elements.
-// This is not necessary as later loads by the same thread will always have greater timetstamp values, so we only need to record the timestamp of the first
+// This is not necessary as later loads by the same thread will always have greater timestamp values, so we only need to record the timestamp of the first
// load by each thread. This optimisation is done in tsan11
// (https://github.com/ChrisLidbury/tsan11/blob/ecbd6b81e9b9454e01cba78eb9d88684168132c7/lib/tsan/rtl/tsan_relaxed.h#L35-L37)
// and here.
@@ -193,7 +193,7 @@ impl StoreBufferAlloc {
buffers.remove_pos_range(pos_range);
}
AccessType::Empty(_) => {
-// The range had no weak behaivours attached, do nothing
+// The range had no weak behaviours attached, do nothing
}
}
}
@@ -336,7 +336,7 @@ impl<'mir, 'tcx: 'mir> StoreBuffer {
let mut found_sc = false;
// FIXME: we want an inclusive take_while (stops after a false predicate, but
// includes the element that gave the false), but such function doesn't yet
-// exist in the standard libary https://github.com/rust-lang/rust/issues/62208
+// exist in the standard library https://github.com/rust-lang/rust/issues/62208
// so we have to hack around it with keep_searching
let mut keep_searching = true;
let candidates = self
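
The `keep_searching` workaround named in that FIXME is easiest to see on plain data; the sketch below (not Miri's store-buffer code) yields every element up to and including the first one that fails the predicate, which `take_while` alone cannot do:

```rust
// Sketch of the "inclusive take_while" hack on plain integers rather than
// Miri's store elements.
fn main() {
    let items = [1, 3, 5, 4, 7, 9];
    let mut keep_searching = true;
    let candidates: Vec<i32> = items
        .iter()
        .copied()
        .take_while(move |&x| {
            if !keep_searching {
                return false;
            }
            if x % 2 == 0 {
                // The predicate "is odd" fails here, but we still want this
                // element, so stop only from the *next* element onwards.
                keep_searching = false;
            }
            true
        })
        .collect();
    assert_eq!(candidates, vec![1, 3, 5, 4]); // includes the failing `4`
}
```
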
4 changes: 2 additions & 2 deletions src/eval.rs
@@ -372,7 +372,7 @@ pub fn create_ecx<'mir, 'tcx: 'mir>(

// Inlining of `DEFAULT` from
// https://github.com/rust-lang/rust/blob/master/compiler/rustc_session/src/config/sigpipe.rs.
-// Alaways using DEFAULT is okay since we don't support signals in Miri anyway.
+// Always using DEFAULT is okay since we don't support signals in Miri anyway.
let sigpipe = 2;

ecx.call_function(
@@ -456,7 +456,7 @@ pub fn eval_entry<'tcx>(
return None;
}
// Check for memory leaks.
info!("Additonal static roots: {:?}", ecx.machine.static_roots);
info!("Additional static roots: {:?}", ecx.machine.static_roots);
let leaks = ecx.leak_report(&ecx.machine.static_roots);
if leaks != 0 {
tcx.sess.err("the evaluated program leaked memory");
2 changes: 1 addition & 1 deletion src/helpers.rs
@@ -524,7 +524,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
}
}

-// Make sure we visit aggregrates in increasing offset order.
+// Make sure we visit aggregates in increasing offset order.
fn visit_aggregate(
&mut self,
place: &MPlaceTy<'tcx, Provenance>,
2 changes: 1 addition & 1 deletion src/intptrcast.rs
@@ -77,7 +77,7 @@ impl<'mir, 'tcx> GlobalStateInner {
Ok(pos) => Some(global_state.int_to_ptr_map[pos].1),
Err(0) => None,
Err(pos) => {
-// This is the largest of the adresses smaller than `int`,
+// This is the largest of the addresses smaller than `int`,
// i.e. the greatest lower bound (glb)
let (glb, alloc_id) = global_state.int_to_ptr_map[pos - 1];
// This never overflows because `addr >= glb`
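
A self-contained sketch of that lookup (with a stand-in `AllocId` type and without the bounds check the real code still performs): a binary search over the sorted base addresses either hits an allocation's base exactly or reports where the greatest lower bound sits:

```rust
// Stand-in type; Miri's real `AllocId` lives in rustc.
type AllocId = u64;

fn alloc_for_addr(int_to_ptr_map: &[(u64, AllocId)], int: u64) -> Option<AllocId> {
    match int_to_ptr_map.binary_search_by_key(&int, |&(addr, _)| addr) {
        Ok(pos) => Some(int_to_ptr_map[pos].1),
        Err(0) => None, // `int` is below every known base address
        Err(pos) => {
            // This is the largest of the addresses smaller than `int`,
            // i.e. the greatest lower bound (glb). The real code also checks
            // that `int` is in bounds of that allocation; skipped here.
            let (_glb, alloc_id) = int_to_ptr_map[pos - 1];
            Some(alloc_id)
        }
    }
}

fn main() {
    let map: [(u64, AllocId); 2] = [(0x1000, 1), (0x2000, 2)];
    assert_eq!(alloc_for_addr(&map, 0x2000), Some(2)); // exact base address
    assert_eq!(alloc_for_addr(&map, 0x1800), Some(1)); // inside the first allocation
    assert_eq!(alloc_for_addr(&map, 0x10), None);      // below every allocation
}
```
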
4 changes: 2 additions & 2 deletions src/machine.rs
@@ -491,9 +491,9 @@ impl<'mir, 'tcx> MiriMachine<'mir, 'tcx> {
measureme::Profiler::new(out).expect("Couldn't create `measureme` profiler")
});
let rng = StdRng::seed_from_u64(config.seed.unwrap_or(0));
-let borrow_tracker = config.borrow_tracker.map(|bt| bt.instanciate_global_state(config));
+let borrow_tracker = config.borrow_tracker.map(|bt| bt.instantiate_global_state(config));
let data_race = config.data_race_detector.then(|| data_race::GlobalState::new(config));
-// Determinine page size, stack address, and stack size.
+// Determine page size, stack address, and stack size.
// These values are mostly meaningless, but the stack address is also where we start
// allocating physical integer addresses for all allocations.
let page_size = if let Some(page_size) = config.page_size {
4 changes: 2 additions & 2 deletions src/shims/intrinsics/simd.rs
@@ -585,9 +585,9 @@ fn simd_element_to_bool(elem: ImmTy<'_, Provenance>) -> InterpResult<'_, bool> {
})
}

-fn simd_bitmask_index(idx: u32, vec_len: u32, endianess: Endian) -> u32 {
+fn simd_bitmask_index(idx: u32, vec_len: u32, endianness: Endian) -> u32 {
assert!(idx < vec_len);
-match endianess {
+match endianness {
Endian::Little => idx,
#[allow(clippy::integer_arithmetic)] // idx < vec_len
Endian::Big => vec_len - 1 - idx, // reverse order of bits
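
The renamed helper is small enough to reproduce standalone; the version below swaps rustc's `Endian` enum for a plain `bool` so it can run outside Miri, with a worked example of the big-endian bit reversal:

```rust
// Same logic as the hunk above, with `bool` standing in for rustc's `Endian`.
fn simd_bitmask_index(idx: u32, vec_len: u32, little_endian: bool) -> u32 {
    assert!(idx < vec_len);
    if little_endian {
        idx
    } else {
        vec_len - 1 - idx // reverse order of bits
    }
}

fn main() {
    // For an 8-lane vector, lane 2 maps to bit 2 on little-endian targets,
    // but to bit 5 (= 8 - 1 - 2) on big-endian targets.
    assert_eq!(simd_bitmask_index(2, 8, true), 2);
    assert_eq!(simd_bitmask_index(2, 8, false), 5);
}
```
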
2 changes: 1 addition & 1 deletion src/shims/os_str.rs
@@ -329,7 +329,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
match direction {
PathConversion::HostToTarget => {
// If this start withs a `\`, we add `\\?` so it starts with `\\?\` which is
-// some magic path on Windos that *is* considered absolute.
+// some magic path on Windows that *is* considered absolute.
if converted.get(0).copied() == Some(b'\\') {
converted.splice(0..0, b"\\\\?".iter().copied());
}
2 changes: 1 addition & 1 deletion src/shims/time.rs
@@ -40,7 +40,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: crate::MiriInterpCxExt<'mir, 'tcx> {
this.eval_libc_i32("CLOCK_REALTIME_COARSE"),
];
// The second kind is MONOTONIC clocks for which 0 is an arbitrary time point, but they are
-// never allowed to go backwards. We don't need to do any additonal monotonicity
+// never allowed to go backwards. We don't need to do any additional monotonicity
// enforcement because std::time::Instant already guarantees that it is monotonic.
relative_clocks = vec![
this.eval_libc_i32("CLOCK_MONOTONIC"),
