Merged
36 commits
258d553
Prepare for merging from rust-lang/rust
Oct 16, 2025
0784632
Merge ref '402ce0ef07d5' from rust-lang/rust
Oct 16, 2025
936dda5
Merge pull request #4637 from rust-lang/rustup-2025-10-16
RalfJung Oct 16, 2025
736a276
Prepare for merging from rust-lang/rust
RalfJung Oct 22, 2025
4dff7b8
Merge ref '96fe3c31c2ec' from rust-lang/rust
RalfJung Oct 22, 2025
1d00266
make genmc tests less dependent on std internals
RalfJung Oct 22, 2025
86bb8eb
Merge pull request #4640 from RalfJung/rustup
RalfJung Oct 22, 2025
70aa534
Avoid panicking when `Cargo.toml` is not present
FranciscoTGouveia Oct 22, 2025
424f7bf
native_call: we only support thin pointers as return type
RalfJung Oct 22, 2025
23e3645
Merge pull request #4644 from RalfJung/native-call-return-ptr
RalfJung Oct 22, 2025
d521a79
Prepare for merging from rust-lang/rust
Oct 23, 2025
d60d4a5
Merge ref '6244effd0372' from rust-lang/rust
Oct 23, 2025
c13da7b
Merge pull request #4645 from rust-lang/rustup-2025-10-23
oli-obk Oct 23, 2025
ce35d4c
SB wildcard handling: extend comments
RalfJung Oct 23, 2025
c8c2e17
Merge pull request #4646 from RalfJung/sb-wildcard-comments
RalfJung Oct 23, 2025
b6c7138
Merge pull request #4643 from FranciscoTGouveia/avoid-panic-cargo-toml
RalfJung Oct 23, 2025
420096d
Prepare for merging from rust-lang/rust
Oct 24, 2025
77e7651
Merge ref '27050c0d15af' from rust-lang/rust
Oct 24, 2025
3c92cf1
Merge pull request #4649 from rust-lang/rustup-2025-10-24
oli-obk Oct 24, 2025
0c7ff00
add -Zmiri-user-relevant-crates
RalfJung Oct 27, 2025
80bd9cf
Merge pull request #4651 from RalfJung/user-relevant-crates
RalfJung Oct 27, 2025
8f70d2d
add miri magic function to configure allocation tracking at runtime
RalfJung Oct 27, 2025
9d60a23
Merge pull request #4652 from RalfJung/track-alloc-runtime
RalfJung Oct 27, 2025
20e0014
Support f32/f64 in native function calls
FranciscoTGouveia Oct 23, 2025
8fef16e
Merge pull request #4650 from FranciscoTGouveia/f32/64-native-func-calls
RalfJung Oct 28, 2025
651076a
Prepare for merging from rust-lang/rust
Oct 30, 2025
c351d06
Merge ref '292be5c7c051' from rust-lang/rust
Oct 30, 2025
7e0783f
alloc access tracking: print accessed range
RalfJung Oct 30, 2025
8c678cc
update comment regarding disabled jobs
RalfJung Oct 30, 2025
3048ef2
unconditionally use Duration::from_nanos_u128
RalfJung Oct 30, 2025
9cb215a
Merge pull request #4656 from rust-lang/rustup-2025-10-30
RalfJung Oct 30, 2025
04eacd0
document -Zmiri-backtrace; dont print backtrace when there's only one…
RalfJung Oct 30, 2025
250ebfa
Merge pull request #4657 from RalfJung/tracking-and-backtrace
RalfJung Oct 30, 2025
20bddb1
weak memory: fix non-atomic read clearing store buffer
RalfJung Oct 30, 2025
851b10f
add store buffer initialization sanity check that would have caught t…
RalfJung Oct 30, 2025
63a18d4
Merge pull request #4658 from RalfJung/weak-mem-na-reads
RalfJung Nov 1, 2025
5 changes: 3 additions & 2 deletions src/tools/miri/.github/workflows/ci.yml
@@ -31,7 +31,8 @@ jobs:
os: ubuntu-24.04-arm
multiarch: armhf
gcc_cross: arm-linux-gnueabihf
# Disabled due to Ubuntu repo trouble
# Ubuntu mirrors are not reliable enough for these architectures
# (see <https://bugs.launchpad.net/ubuntu/+bug/2130309>).
# - host_target: riscv64gc-unknown-linux-gnu
# os: ubuntu-latest
# multiarch: riscv64
@@ -68,7 +69,7 @@ jobs:
- name: install multiarch
if: ${{ matrix.multiarch != '' }}
run: |
# s390x, ppc64el, riscv64 need Ubuntu Ports to be in the mirror list
# armhf, s390x, ppc64el, riscv64 need Ubuntu Ports to be in the mirror list
sudo bash -c "echo 'https://ports.ubuntu.com/ priority:4' >> /etc/apt/apt-mirrors.txt"
# Add architecture
sudo dpkg --add-architecture ${{ matrix.multiarch }}
12 changes: 11 additions & 1 deletion src/tools/miri/README.md
@@ -292,6 +292,9 @@ Try running `cargo miri clean`.
Miri adds its own set of `-Z` flags, which are usually set via the `MIRIFLAGS`
environment variable. We first document the most relevant and most commonly used flags:

* `-Zmiri-backtrace=<0|1|full>` configures how Miri prints backtraces: `1` is the default,
where backtraces are printed in pruned form; `full` prints backtraces without pruning, and `0`
disables backtraces entirely.
* `-Zmiri-deterministic-concurrency` makes Miri's concurrency-related behavior fully deterministic.
Strictly speaking, Miri is always fully deterministic when isolation is enabled (the default
mode), but this determinism is achieved by using an RNG with a fixed seed. Seemingly harmless
@@ -373,6 +376,12 @@ environment variable. We first document the most relevant and most commonly used
ensure alignment. (The standard library `align_to` method works fine in both modes; under
symbolic alignment it only fills the middle slice when the allocation guarantees sufficient
alignment.)
* `-Zmiri-user-relevant-crates=<crate>,<crate>,...` extends the list of crates that Miri considers
"user-relevant". This affects the rendering of backtraces (for user-relevant crates, Miri shows
not just the function name but the actual code) and it affects the spans collected for data races
and aliasing violations (where Miri will show the span of the topmost non-`#[track_caller]` frame
in a user-relevant crate). When using `cargo miri`, the crates in the local workspace are always
considered user-relevant.

The remaining flags are for advanced use only, and more likely to change or be removed.
Some of these are **unsound**, which means they can lead
@@ -474,7 +483,8 @@ to Miri failing to detect cases of undefined behavior in a program.
* `-Zmiri-track-alloc-id=<id1>,<id2>,...` shows a backtrace when the given allocations are
being allocated or freed. This helps in debugging memory leaks and
use after free bugs. Specifying this argument multiple times does not overwrite the previous
values, instead it appends its values to the list. Listing an id multiple times has no effect.
values, instead it appends its values to the list. Listing an ID multiple times has no effect.
You can also add IDs at runtime using `miri_track_alloc`.
* `-Zmiri-track-pointer-tag=<tag1>,<tag2>,...` shows a backtrace when a given pointer tag
is created and when (if ever) it is popped from a borrow stack (which is where the tag becomes invalid
and any future use of it will error). This helps you in finding out why UB is
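The flags documented in this README are passed through the `MIRIFLAGS` environment variable. A hypothetical configuration combining the two flags this PR touches might look like the following (the crate names `mylib` and `myutil` are placeholders, not part of the PR):

```shell
# Hypothetical configuration; the crate names are placeholders.
# Print unpruned backtraces, and treat two extra crates as user-relevant:
export MIRIFLAGS="-Zmiri-backtrace=full -Zmiri-user-relevant-crates=mylib,myutil"
```

With this in the environment, a subsequent `cargo miri test` would render frames from `mylib` and `myutil` with their actual source code and print backtraces without pruning.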
6 changes: 5 additions & 1 deletion src/tools/miri/cargo-miri/src/util.rs
@@ -213,7 +213,11 @@ fn cargo_extra_flags() -> Vec<String> {

pub fn get_cargo_metadata() -> Metadata {
// This will honor the `CARGO` env var the same way our `cargo()` does.
MetadataCommand::new().no_deps().other_options(cargo_extra_flags()).exec().unwrap()
MetadataCommand::new()
.no_deps()
.other_options(cargo_extra_flags())
.exec()
.unwrap_or_else(|err| show_error!("{}", err))
}

/// Pulls all the crates in this workspace from the cargo metadata.
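The `util.rs` change swaps `unwrap()` for `unwrap_or_else` so a missing `Cargo.toml` yields a clean error instead of a panic with an internal backtrace. A minimal sketch of the pattern, using a plain string in place of Miri's `show_error!` macro (which prints the message and exits):

```rust
// Sketch: turn a Result failure into a user-facing message rather than
// a panic. In the real code, `show_error!` prints and exits the process;
// here we model it by returning the formatted message.
fn metadata_or_error(res: Result<String, String>) -> String {
    res.unwrap_or_else(|err| format!("error: {err}"))
}

fn main() {
    assert_eq!(
        metadata_or_error(Err("could not find `Cargo.toml`".into())),
        "error: could not find `Cargo.toml`"
    );
    assert_eq!(metadata_or_error(Ok("metadata".into())), "metadata");
    println!("ok");
}
```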
2 changes: 1 addition & 1 deletion src/tools/miri/rust-version
@@ -1 +1 @@
28d0a4a205f9e511ad2f51ee79a4aa19a704a455
292be5c7c05138d753bbd4b30db7a3f1a5c914f7
2 changes: 2 additions & 0 deletions src/tools/miri/src/bin/miri.rs
@@ -710,6 +710,8 @@ fn main() {
fatal_error!("-Zmiri-force-page-size requires a power of 2: {page_size}");
};
miri_config.page_size = Some(page_size);
} else if let Some(param) = arg.strip_prefix("-Zmiri-user-relevant-crates=") {
miri_config.user_relevant_crates.extend(param.split(',').map(|s| s.to_owned()));
} else {
// Forward to rustc.
rustc_args.push(arg);
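The new `-Zmiri-user-relevant-crates` arm follows the same flag-parsing pattern as the surrounding code: strip the prefix, split on commas, and *append* to the list (so repeating the flag accumulates values rather than overwriting them). A self-contained sketch of that behavior, with a hypothetical helper name:

```rust
// Sketch of the comma-separated flag parsing the diff adds. Returns true
// if the argument matched; each match appends values to `crates`.
fn parse_user_relevant_crates(arg: &str, crates: &mut Vec<String>) -> bool {
    if let Some(param) = arg.strip_prefix("-Zmiri-user-relevant-crates=") {
        crates.extend(param.split(',').map(|s| s.to_owned()));
        true
    } else {
        false
    }
}

fn main() {
    let mut crates = Vec::new();
    assert!(parse_user_relevant_crates("-Zmiri-user-relevant-crates=a,b", &mut crates));
    assert!(parse_user_relevant_crates("-Zmiri-user-relevant-crates=c", &mut crates));
    // Non-matching arguments would be forwarded to rustc instead.
    assert!(!parse_user_relevant_crates("-Zmiri-seed=1", &mut crates));
    assert_eq!(crates, ["a", "b", "c"]);
    println!("ok");
}
```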
14 changes: 10 additions & 4 deletions src/tools/miri/src/borrow_tracker/stacked_borrows/mod.rs
@@ -675,16 +675,22 @@ trait EvalContextPrivExt<'tcx, 'ecx>: crate::MiriInterpCxExt<'tcx> {
if let Ok((alloc_id, base_offset, orig_tag)) = this.ptr_try_get_alloc_id(place.ptr(), 0)
{
log_creation(this, Some((alloc_id, base_offset, orig_tag)))?;
// Still give it the new provenance, it got retagged after all.
// Still give it the new provenance, it got retagged after all. If this was a
// wildcard pointer, this will fix the AllocId and make future accesses with this
// reference to other allocations UB, but that's fine: due to subobject provenance,
// *all* future accesses with this reference should be UB!
return interp_ok(Some(Provenance::Concrete { alloc_id, tag: new_tag }));
} else {
// This pointer doesn't come with an AllocId. :shrug:
log_creation(this, None)?;
// Provenance unchanged.
// Provenance unchanged. Ideally we'd make this pointer UB to use like above,
// but there's no easy way to do that.
return interp_ok(place.ptr().provenance);
}
}

// The pointer *must* have a valid AllocId to continue, so we want to resolve this to
// a concrete ID even for wildcard pointers.
let (alloc_id, base_offset, orig_tag) = this.ptr_get_alloc_id(place.ptr(), 0)?;
log_creation(this, Some((alloc_id, base_offset, orig_tag)))?;

@@ -743,7 +749,7 @@ trait EvalContextPrivExt<'tcx, 'ecx>: crate::MiriInterpCxExt<'tcx> {
// Make sure the data race model also knows about this.
// FIXME(genmc): Ensure this is still done in GenMC mode. Check for other places where GenMC may need to be informed.
if let Some(data_race) = alloc_extra.data_race.as_vclocks_mut() {
data_race.write(
data_race.write_non_atomic(
alloc_id,
range,
NaWriteType::Retag,
@@ -792,7 +798,7 @@ trait EvalContextPrivExt<'tcx, 'ecx>: crate::MiriInterpCxExt<'tcx> {
assert_eq!(access, AccessKind::Read);
// Make sure the data race model also knows about this.
if let Some(data_race) = alloc_extra.data_race.as_vclocks_ref() {
data_race.read(
data_race.read_non_atomic(
alloc_id,
range,
NaReadType::Retag,
2 changes: 1 addition & 1 deletion src/tools/miri/src/borrow_tracker/tree_borrows/mod.rs
@@ -366,7 +366,7 @@ trait EvalContextPrivExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
// Also inform the data race model (but only if any bytes are actually affected).
if range_in_alloc.size.bytes() > 0 {
if let Some(data_race) = alloc_extra.data_race.as_vclocks_ref() {
data_race.read(
data_race.read_non_atomic(
alloc_id,
range_in_alloc,
NaReadType::Retag,
16 changes: 1 addition & 15 deletions src/tools/miri/src/clock.rs
@@ -46,21 +46,7 @@ impl Instant {
InstantKind::Virtual { nanoseconds: earlier },
) => {
let duration = nanoseconds.saturating_sub(earlier);
cfg_select! {
bootstrap => {
// `Duration` does not provide a nice constructor from a `u128` of nanoseconds,
// so we have to implement this ourselves.
// It is possible for second to overflow because u64::MAX < (u128::MAX / 1e9).
// It will be saturated to u64::MAX seconds if the value after division exceeds u64::MAX.
let seconds = u64::try_from(duration / 1_000_000_000).unwrap_or(u64::MAX);
// It is impossible for nanosecond to overflow because u32::MAX > 1e9.
let nanosecond = u32::try_from(duration.wrapping_rem(1_000_000_000)).unwrap();
Duration::new(seconds, nanosecond)
}
_ => {
Duration::from_nanos_u128(duration)
}
}
Duration::from_nanos_u128(duration)
}
_ => panic!("all `Instant` must be of the same kind"),
}
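The removed `bootstrap` branch in `clock.rs` built a `Duration` from a `u128` nanosecond count by hand, which `Duration::from_nanos_u128` now does directly. A sketch of that manual conversion, reconstructed from the deleted code:

```rust
use std::time::Duration;

// Manual u128-nanoseconds -> Duration conversion, as in the removed
// bootstrap fallback. Seconds can overflow u64 (u64::MAX < u128::MAX / 1e9),
// so they saturate at u64::MAX; sub-second nanoseconds always fit in u32.
fn duration_from_nanos_u128(nanos: u128) -> Duration {
    let seconds = u64::try_from(nanos / 1_000_000_000).unwrap_or(u64::MAX);
    let subsec = u32::try_from(nanos % 1_000_000_000).unwrap();
    Duration::new(seconds, subsec)
}

fn main() {
    assert_eq!(
        duration_from_nanos_u128(1_500_000_000),
        Duration::new(1, 500_000_000)
    );
    // Saturating behavior for values whose second count exceeds u64::MAX.
    assert_eq!(duration_from_nanos_u128(u128::MAX).as_secs(), u64::MAX);
    println!("ok");
}
```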
97 changes: 61 additions & 36 deletions src/tools/miri/src/concurrency/data_race.rs
@@ -648,18 +648,17 @@ impl MemoryCellClocks {
thread_clocks.clock.index_mut(index).span = current_span;
}
thread_clocks.clock.index_mut(index).set_read_type(read_type);
if self.write_was_before(&thread_clocks.clock) {
// We must be ordered-after all atomic writes.
let race_free = if let Some(atomic) = self.atomic() {
atomic.write_vector <= thread_clocks.clock
} else {
true
};
self.read.set_at_index(&thread_clocks.clock, index);
if race_free { Ok(()) } else { Err(DataRace) }
} else {
Err(DataRace)
// Check synchronization with non-atomic writes.
if !self.write_was_before(&thread_clocks.clock) {
return Err(DataRace);
}
// Check synchronization with atomic writes.
if !self.atomic().is_none_or(|atomic| atomic.write_vector <= thread_clocks.clock) {
return Err(DataRace);
}
// Record this access.
self.read.set_at_index(&thread_clocks.clock, index);
Ok(())
}

/// Detect races for non-atomic write operations at the current memory cell
@@ -675,24 +674,23 @@ impl MemoryCellClocks {
if !current_span.is_dummy() {
thread_clocks.clock.index_mut(index).span = current_span;
}
if self.write_was_before(&thread_clocks.clock) && self.read <= thread_clocks.clock {
let race_free = if let Some(atomic) = self.atomic() {
atomic.write_vector <= thread_clocks.clock
&& atomic.read_vector <= thread_clocks.clock
} else {
true
};
self.write = (index, thread_clocks.clock[index]);
self.write_type = write_type;
if race_free {
self.read.set_zero_vector();
Ok(())
} else {
Err(DataRace)
}
} else {
Err(DataRace)
// Check synchronization with non-atomic accesses.
if !(self.write_was_before(&thread_clocks.clock) && self.read <= thread_clocks.clock) {
return Err(DataRace);
}
// Check synchronization with atomic accesses.
if !self.atomic().is_none_or(|atomic| {
atomic.write_vector <= thread_clocks.clock && atomic.read_vector <= thread_clocks.clock
}) {
return Err(DataRace);
}
// Record this access.
self.write = (index, thread_clocks.clock[index]);
self.write_type = write_type;
self.read.set_zero_vector();
// This is not an atomic location any more.
self.atomic_ops = None;
Ok(())
}
}

@@ -758,14 +756,9 @@ pub trait EvalContextExt<'tcx>: MiriInterpCxExt<'tcx> {
let this = self.eval_context_mut();
this.atomic_access_check(dest, AtomicAccessType::Store)?;

// Read the previous value so we can put it in the store buffer later.
// The program didn't actually do a read, so suppress the memory access hooks.
// This is also a very special exception where we just ignore an error -- if this read
// was UB e.g. because the memory is uninitialized, we don't want to know!
let old_val = this.run_for_validation_ref(|this| this.read_scalar(dest)).discard_err();

// Inform GenMC about the atomic store.
if let Some(genmc_ctx) = this.machine.data_race.as_genmc_ref() {
let old_val = this.run_for_validation_ref(|this| this.read_scalar(dest)).discard_err();
if genmc_ctx.atomic_store(
this,
dest.ptr().addr(),
Expand All @@ -780,6 +773,9 @@ pub trait EvalContextExt<'tcx>: MiriInterpCxExt<'tcx> {
}
return interp_ok(());
}

// Read the previous value so we can put it in the store buffer later.
let old_val = this.get_latest_nonatomic_val(dest);
this.allow_data_races_mut(move |this| this.write_scalar(val, dest))?;
this.validate_atomic_store(dest, atomic)?;
this.buffered_atomic_write(val, dest, atomic, old_val)
@@ -1201,7 +1197,7 @@ impl VClockAlloc {
/// operation for which data-race detection is handled separately, for example
/// atomic read operations. The `ty` parameter is used for diagnostics, letting
/// the user know which type was read.
pub fn read<'tcx>(
pub fn read_non_atomic<'tcx>(
&self,
alloc_id: AllocId,
access_range: AllocRange,
@@ -1243,7 +1239,7 @@ impl VClockAlloc {
/// being created or if it is temporarily disabled during a racy read or write
/// operation. The `ty` parameter is used for diagnostics, letting
/// the user know which type was written.
pub fn write<'tcx>(
pub fn write_non_atomic<'tcx>(
&mut self,
alloc_id: AllocId,
access_range: AllocRange,
@@ -1540,6 +1536,35 @@ trait EvalContextPrivExt<'tcx>: MiriInterpCxExt<'tcx> {
)
}

/// Returns the most recent *non-atomic* value stored in the given place.
/// Errors if we don't need that (because we don't do store buffering) or if
/// the most recent value is in fact atomic.
fn get_latest_nonatomic_val(&self, place: &MPlaceTy<'tcx>) -> Result<Option<Scalar>, ()> {
let this = self.eval_context_ref();
// These cannot fail because `atomic_access_check` was done first.
let (alloc_id, offset, _prov) = this.ptr_get_alloc_id(place.ptr(), 0).unwrap();
let alloc_meta = &this.get_alloc_extra(alloc_id).unwrap().data_race;
if alloc_meta.as_weak_memory_ref().is_none() {
// No reason to read old value if we don't track store buffers.
return Err(());
}
let data_race = alloc_meta.as_vclocks_ref().unwrap();
// Only read old value if this is currently a non-atomic location.
for (_range, clocks) in data_race.alloc_ranges.borrow_mut().iter(offset, place.layout.size)
{
// If this had an atomic write that's not before the non-atomic write, that should
// already be in the store buffer. Initializing the store buffer now would use the
// wrong `sync_clock` so we better make sure that does not happen.
if clocks.atomic().is_some_and(|atomic| !(atomic.write_vector <= clocks.write())) {
return Err(());
}
}
// The program didn't actually do a read, so suppress the memory access hooks.
// This is also a very special exception where we just ignore an error -- if this read
// was UB e.g. because the memory is uninitialized, we don't want to know!
Ok(this.run_for_validation_ref(|this| this.read_scalar(place)).discard_err())
}

/// Generic atomic operation implementation
fn validate_atomic_op<A: Debug + Copy>(
&self,
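The `data_race.rs` refactor replaces nested `if`/`else` race checks with early returns: each synchronization check bails out with `Err(DataRace)` on its own, and the access is recorded only once all checks pass. A toy model of that control flow (not Miri's actual types; vector clocks are fixed-size arrays compared pointwise):

```rust
// Pointwise vector-clock comparison: a <= b iff every component is <=.
type VClock = [u64; 2];

fn leq(a: &VClock, b: &VClock) -> bool {
    a.iter().zip(b).all(|(x, y)| x <= y)
}

struct Cell {
    last_write: VClock,           // clock of the last non-atomic write
    atomic_write: Option<VClock>, // join of atomic write clocks, if any
}

// Early-return style of the refactored checks; Err(()) stands in for
// Miri's Err(DataRace).
fn non_atomic_read(cell: &Cell, thread_clock: &VClock) -> Result<(), ()> {
    // Check synchronization with non-atomic writes.
    if !leq(&cell.last_write, thread_clock) {
        return Err(());
    }
    // Check synchronization with atomic writes.
    if !cell.atomic_write.as_ref().map_or(true, |w| leq(w, thread_clock)) {
        return Err(());
    }
    // All checks passed; a real implementation would record the read here.
    Ok(())
}

fn main() {
    let cell = Cell { last_write: [1, 0], atomic_write: Some([0, 2]) };
    // Reader is ordered after the non-atomic write but not the atomic one: race.
    assert_eq!(non_atomic_read(&cell, &[2, 0]), Err(()));
    // Reader is ordered after both writes: race-free.
    assert_eq!(non_atomic_read(&cell, &[2, 3]), Ok(()));
    println!("ok");
}
```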
22 changes: 14 additions & 8 deletions src/tools/miri/src/concurrency/weak_memory.rs
@@ -177,11 +177,11 @@ impl StoreBufferAlloc {
Self { store_buffers: RefCell::new(RangeObjectMap::new()) }
}

/// When a non-atomic access happens on a location that has been atomically accessed
/// before without data race, we can determine that the non-atomic access fully happens
/// When a non-atomic write happens on a location that has been atomically accessed
/// before without data race, we can determine that the non-atomic write fully happens
/// after all the prior atomic writes so the location no longer needs to exhibit
/// any weak memory behaviours until further atomic accesses.
pub fn memory_accessed(&self, range: AllocRange, global: &DataRaceState) {
/// any weak memory behaviours until further atomic writes.
pub fn non_atomic_write(&self, range: AllocRange, global: &DataRaceState) {
if !global.ongoing_action_data_race_free() {
let mut buffers = self.store_buffers.borrow_mut();
let access_type = buffers.access_type(range);
@@ -223,18 +223,23 @@
fn get_or_create_store_buffer_mut<'tcx>(
&mut self,
range: AllocRange,
init: Option<Scalar>,
init: Result<Option<Scalar>, ()>,
) -> InterpResult<'tcx, &mut StoreBuffer> {
let buffers = self.store_buffers.get_mut();
let access_type = buffers.access_type(range);
let pos = match access_type {
AccessType::PerfectlyOverlapping(pos) => pos,
AccessType::Empty(pos) => {
let init =
init.expect("cannot have empty store buffer when previous write was atomic");
buffers.insert_at_pos(pos, range, StoreBuffer::new(init));
pos
}
AccessType::ImperfectlyOverlapping(pos_range) => {
// Once we reach here we would've already checked that this access is not racy.
let init = init.expect(
"cannot have partially overlapping store buffer when previous write was atomic",
);
buffers.remove_pos_range(pos_range.clone());
buffers.insert_at_pos(pos_range.start, range, StoreBuffer::new(init));
pos_range.start
@@ -490,7 +495,7 @@ pub(super) trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
}
let range = alloc_range(base_offset, place.layout.size);
let sync_clock = data_race_clocks.sync_clock(range);
let buffer = alloc_buffers.get_or_create_store_buffer_mut(range, Some(init))?;
let buffer = alloc_buffers.get_or_create_store_buffer_mut(range, Ok(Some(init)))?;
// The RMW always reads from the most recent store.
buffer.read_from_last_store(global, threads, atomic == AtomicRwOrd::SeqCst);
buffer.buffered_write(
@@ -556,15 +561,16 @@ pub(super) trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
/// Add the given write to the store buffer. (Does not change machine memory.)
///
/// `init` says with which value to initialize the store buffer in case there wasn't a store
/// buffer for this memory range before.
/// buffer for this memory range before. `Err(())` means the value is not available;
/// `Ok(None)` means the memory does not contain a valid scalar.
///
/// Must be called *after* `validate_atomic_store` to ensure that `sync_clock` is up-to-date.
fn buffered_atomic_write(
&mut self,
val: Scalar,
dest: &MPlaceTy<'tcx>,
atomic: AtomicWriteOrd,
init: Option<Scalar>,
init: Result<Option<Scalar>, ()>,
) -> InterpResult<'tcx> {
let this = self.eval_context_mut();
let (alloc_id, base_offset, ..) = this.ptr_get_alloc_id(dest.ptr(), 0)?;
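The renamed `non_atomic_write` comment describes the key weak-memory rule this PR tightens: buffered atomic writes may be observed by weak reads, and a *race-free non-atomic write* happens-after all of them, so older values become unobservable. A toy model of that rule (deliberately far simpler than Miri's real store buffers):

```rust
// Toy store buffer: each atomic write is buffered, and a weak-memory read
// may observe any buffered value.
struct StoreBuffer {
    vals: Vec<u64>,
}

impl StoreBuffer {
    fn new(init: u64) -> Self {
        StoreBuffer { vals: vec![init] }
    }
    fn atomic_write(&mut self, v: u64) {
        self.vals.push(v);
    }
    // A data-race-free non-atomic write happens-after every buffered write,
    // so only the new value remains observable afterwards.
    fn non_atomic_write(&mut self, v: u64) {
        self.vals = vec![v];
    }
    fn observable(&self) -> &[u64] {
        &self.vals
    }
}

fn main() {
    let mut buf = StoreBuffer::new(0);
    buf.atomic_write(1);
    buf.atomic_write(2);
    // Before the non-atomic write, a weak read may see any of these.
    assert_eq!(buf.observable(), &[0, 1, 2]);
    buf.non_atomic_write(3);
    // Afterwards, only the newest value can be observed.
    assert_eq!(buf.observable(), &[3]);
    println!("ok");
}
```

Note that, per the PR title "weak memory: fix non-atomic read clearing store buffer", only non-atomic *writes* should collapse the buffer this way; the bug was that non-atomic reads did so too.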