
Conversation

sayantn
Contributor

@sayantn sayantn commented Oct 4, 2025

This PR adds an alignment parameter to simd_masked_load and simd_masked_store, in the form of a const-generic enum core::intrinsics::simd::SimdAlign. It describes the alignment of the ptr argument in these intrinsics as follows:

  • SimdAlign::Unaligned - ptr is unaligned/1-byte aligned
  • SimdAlign::Element - ptr is aligned to the element type of the SIMD vector (default behavior in the old signature)
  • SimdAlign::Vector - ptr is aligned to the SIMD vector type
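To make the mapping concrete, the variants can be modeled like this (a sketch only: `required_align` is a hypothetical helper for illustration, and the real `SimdAlign` definition and intrinsic signatures in this PR may differ in detail):

```rust
// Model of the const-generic alignment selector proposed in this PR.
// Names mirror the PR description; the real definition lives in
// core::intrinsics::simd.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SimdAlign {
    Unaligned, // ptr may be 1-byte aligned
    Element,   // ptr is aligned to the element type (old default)
    Vector,    // ptr is aligned to the whole SIMD vector
}

// Hypothetical helper: the alignment each variant demands for a vector
// of `LANES` elements of type `T`. Assumes the vector type is aligned
// to its size, which is a simplification for illustration.
pub const fn required_align<T, const LANES: usize>(mode: SimdAlign) -> usize {
    match mode {
        SimdAlign::Unaligned => 1,
        SimdAlign::Element => std::mem::align_of::<T>(),
        SimdAlign::Vector => std::mem::size_of::<[T; LANES]>(),
    }
}

fn main() {
    // For a 4 x f32 vector (e.g. __m128-sized):
    assert_eq!(required_align::<f32, 4>(SimdAlign::Unaligned), 1);
    assert_eq!(required_align::<f32, 4>(SimdAlign::Element), 4);
    assert_eq!(required_align::<f32, 4>(SimdAlign::Vector), 16);
}
```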

The main motivation for this is stdarch: most vector loads there are either fully aligned (to the vector size) or unaligned (byte-aligned), so the previous signature doesn't cut it.

Now, stdarch will mostly use SimdAlign::Unaligned and SimdAlign::Vector, whereas portable-simd will use SimdAlign::Element.

Alternatives

Using a const-generic/"const" u32 parameter as the alignment (erroring during codegen if the argument is not a power of two). Although more flexible than the enum approach, this has a few drawbacks:

  • If we use a const-generic argument, then portable-simd somehow needs to pass align_of::<T>() as the alignment, which isn't possible without generic const expressions (GCE)
  • "const" function parameters are just an ugly hack, and a pain to deal with in non-LLVM backends

We could remedy the problem with the const-generic u32 parameter by adding a special rule for the element-alignment case (e.g. 0 can mean "use the alignment of the element type"), but I feel this is not as expressive as the enum approach, although I am open to suggestions.
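For comparison, the 0-as-sentinel rule under the const-generic u32 alternative could be modeled like this (illustrative only: `resolve_align` is a hypothetical helper, not part of the PR):

```rust
use std::mem::align_of;

// Hypothetical resolution rule for a const-generic `u32` alignment:
// 0 means "use the element type's alignment"; any other value must be
// a power of two and is used as-is (codegen would error otherwise).
fn resolve_align<T>(align: u32) -> Result<usize, &'static str> {
    match align {
        0 => Ok(align_of::<T>()),
        a if a.is_power_of_two() => Ok(a as usize),
        _ => Err("alignment must be a power of two"),
    }
}

fn main() {
    assert_eq!(resolve_align::<f64>(0), Ok(8)); // element alignment (f64)
    assert_eq!(resolve_align::<f64>(1), Ok(1)); // fully unaligned
    assert_eq!(resolve_align::<f64>(32), Ok(32)); // vector alignment, e.g. 4 x f64
    assert!(resolve_align::<f64>(3).is_err()); // not a power of two
}
```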

cc @workingjubilee @RalfJung @BoxyUwU

@rustbot
Collaborator

rustbot commented Oct 4, 2025

The Miri subtree was changed

cc @rust-lang/miri

Portable SIMD is developed in its own repository. If possible, consider making this change to rust-lang/portable-simd instead.

cc @calebzulawski, @programmerjake

Some changes occurred to the platform-builtins intrinsics. Make sure the
LLVM backend as well as portable-simd gets adapted for the changes.

cc @antoyo, @GuillaumeGomez, @bjorn3, @calebzulawski, @programmerjake

Some changes occurred in compiler/rustc_codegen_ssa

cc @WaffleLapkin

Some changes occurred to the intrinsics. Make sure the CTFE / Miri interpreter
gets adapted for the changes, if necessary.

cc @rust-lang/miri, @RalfJung, @oli-obk, @lcnr

@rustbot rustbot added A-LLVM Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. T-libs Relevant to the library team, which will review and decide on the PR/issue. labels Oct 4, 2025
@rustbot
Collaborator

rustbot commented Oct 4, 2025

r? @lcnr

rustbot has assigned @lcnr.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer


@RalfJung
Member

RalfJung commented Oct 5, 2025

If we only need normally-aligned and unaligned loads, IMO it'd be better to just have a const generic boolean indicating which of them we want for any particular operation. That avoids ad-hoc hacks such as const parameters in intrinsics.

@programmerjake
Member

If we only need normally-aligned and unaligned loads, IMO it'd be better to just have a const generic boolean indicating which of them we want for any particular operation. That avoids ad-hoc hacks such as const parameters in intrinsics.

for portable-simd I think we should default to element-level-alignment since I expect that to be more efficient than unaligned ops on some targets (GPUs? maybe RISC-V V?)

@RalfJung
Member

RalfJung commented Oct 5, 2025

If we only need normally-aligned and unaligned loads, IMO it'd be better to just have a const generic boolean indicating which of them we want for any particular operation. That avoids ad-hoc hacks such as const parameters in intrinsics.

for portable-simd I think we should default to element-level-alignment since I expect that to be more efficient than unaligned ops on some targets (GPUs? maybe RISC-V V?)

Yes, so...?

IIUC we either want element-level alignment or no alignment, so we can just have a const bool generic controlling that.

@programmerjake
Member

If we only need normally-aligned and unaligned loads, IMO it'd be better to just have a const generic boolean indicating which of them we want for any particular operation. That avoids ad-hoc hacks such as const parameters in intrinsics.

for portable-simd I think we should default to element-level-alignment since I expect that to be more efficient than unaligned ops on some targets (GPUs? maybe RISC-V V?)

Yes, so...?

I thought you meant full-simd-type alignment or unaligned, since that's what x86 uses for simd instructions it calls aligned.

@RalfJung
Member

RalfJung commented Oct 5, 2025

This is about simd_masked_load/store which are currently documented as

/// Unmasked values in `T` must be readable as if by `<ptr>::read` (e.g. aligned to the element
/// type).
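The per-lane behavior that contract describes can be modeled in plain Rust as follows (a sketch of the documented semantics, with `masked_load_model` as a hypothetical stand-in for the real intrinsic):

```rust
// Model of simd_masked_load semantics: for each lane, read from `ptr`
// if the mask is set, otherwise take the lane from `default`. Only
// mask-enabled lanes are required to be readable.
unsafe fn masked_load_model<T: Copy, const N: usize>(
    mask: [bool; N],
    ptr: *const T,
    default: [T; N],
) -> [T; N] {
    let mut out = default;
    for i in 0..N {
        if mask[i] {
            // Each enabled lane must be readable "as if by <ptr>::read";
            // disabled lanes are never dereferenced.
            out[i] = unsafe { ptr.add(i).read() };
        }
    }
    out
}

fn main() {
    let data = [10i32, 20, 30, 40];
    let r = unsafe { masked_load_model([true, false, true, false], data.as_ptr(), [0; 4]) };
    assert_eq!(r, [10, 0, 30, 0]);

    // Tail of a short buffer: lanes 2 and 3 are masked out, so a buffer
    // holding only 2 elements is fine even for a 4-lane load.
    let tail = [1i32, 2];
    let t = unsafe { masked_load_model([true, true, false, false], tail.as_ptr(), [0; 4]) };
    assert_eq!(t, [1, 2, 0, 0]);
}
```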

@sayantn
Contributor Author

sayantn commented Oct 5, 2025

To summarize, we need 3 types of alignment:

  • element alignment (used in portable simd)
  • simd type alignment (x86 aligned)
  • fully unaligned (x86 unaligned)

So a bool flag won't cut it; at best we can use a const-generic parameter, with 0 meaning element-aligned (because that is the most common case, and it can't be specified using const generics without GCE).

@RalfJung
Member

RalfJung commented Oct 5, 2025

As a summary, we need 3 types of alignments

Now you are expanding the scope of the PR. So far the motivation has been, we'd like an unaligned version of the existing intrinsics. If you also want SIMD type aligned variants, the PR description needs to be expanded to argue for this.

IIRC, last time this was looked into, the SIMD type alignment option wasn't necessary -- LLVM was more than able to use surrounding info on reference types to deduce the right alignment for the desired codegen. So please show some concrete undesirable codegen if you want to motivate a form of this intrinsic that requires SIMD type alignment.

@sayantn
Contributor Author

sayantn commented Oct 5, 2025

I apologise if I was unclear, but the motivation was always adding these 3 types of loads. LLVM will always generate an unaligned (byte-aligned) load/store if we pass any alignment less than the vector type size (because that is guaranteed to be safe). But for the _mm_mask_load_ intrinsics, the pointer needs to be aligned to the vector size, so LLVM won't generate aligned loads unless we pass the vector size as the alignment.

@RalfJung
Member

RalfJung commented Oct 5, 2025

LLVM will always generate an unaligned (byte-aligned) load/store if we pass any alignment less that the vector type size

I don't think the "always" here is correct. If we are loading from an &__m128 LLVM should be able to use that fact to generate the aligned intrinsics.

However, I guess stdarch uses raw pointers in its API. So yeah this definitely needs to be explained properly in the PR description, currently it is at best confusing.

@RalfJung
Member

RalfJung commented Oct 5, 2025

If we need 3 different alignment modes (Cc @Amanieu for the stdarch part here), that can still be done using const generics with a new 3-variant enum (similar to the enum we have for atomic memory access orderings).

@workingjubilee
Member

workingjubilee commented Oct 5, 2025

Yes, std::arch tries to reflect the type signatures used by the C vendor functions... it's not exactly just "bindgen for vendor functions", but it kinda is bindgen for vendor functions.

@Amanieu
Member

Amanieu commented Oct 5, 2025

I'm happy with the current API that takes a constant (either as an argument or a const generic). An enum doesn't really provide much of an advantage when the desired alignment can just be explicitly provided.

@sayantn
Contributor Author

sayantn commented Oct 6, 2025

If everyone agrees, I can substitute the const argument with a const-generic u32 alignment. But then portable-simd would face problems, because it has to pass align_of::<T>() to the const-generic argument somehow, so I propose adding a special meaning to 0: if 0 is passed as the alignment, it is interpreted as the element type's alignment. This doesn't affect stdarch; all invocations of this intrinsic there will use literals (for x86 at least, I don't know much about other archs).

@RalfJung
Member

RalfJung commented Oct 6, 2025

I'm happy with the current API that takes a constant (either as an argument or a const generic). An enum doesn't really provide much of an advantage when the desired alignment can just be explicitly provided.

The enum provides the big advantage that we don't need more ad-hoc "constant argument" hacks.

What I was hoping to get from you is confirmation on which forms of the intrinsic are needed for stdarch.

@RalfJung
Member

RalfJung commented Oct 6, 2025

If everyone agrees, I can substitute the const argument for a const-generic u32 alignment. But then portable-simd would face problems because it has to pass align_of::<T> to the const-generic argument somehow, so I propose to add a special meaning to 0 - if 0 is passed as the alignment it is interpreted as the element type's alignment. This doesn't affect stdarch, all invocations of this intrinsic there will use literal (for x86 at least, I don't know much about other archs)

I would prefer that over the "constant argument" hack. Not sure if it's better than a 3-value enum but 🤷 .

@lcnr
Contributor

lcnr commented Oct 6, 2025

r? @RalfJung though feel free to reassign

@rustbot rustbot unassigned lcnr Oct 6, 2025
@sayantn
Contributor Author

sayantn commented Oct 7, 2025

If we are loading from an &__m128 LLVM should be able to use that fact to generate the aligned intrinsics.

Also worth mentioning that we are working with masked loads/stores here, so we can't really have a reference - if it were a reference we could just do simd_select_bitmask(k, *ptr, default), but some elements that have been masked out might be inaccessible.

@Amanieu
Member

Amanieu commented Oct 7, 2025

I'm not too concerned about the const argument hack since this is an internal intrinsic and we are well within our rights to ICE if it is not passed proper constant arguments. In stdarch we directly invoke plenty of LLVM intrinsics which require a constant argument and will trigger an LLVM assert if given a non-constant argument.

@bjorn3
Member

bjorn3 commented Oct 7, 2025

Evaluating those const arguments at compile time is a pain in cg_clif. cg_llvm only needs to know their value after optimizations, but cg_clif needs them right at the point where the MIR-level intrinsic call is lowered to clif IR, resulting in code like this, which is effectively a buggy reimplementation of const eval that depends on MIR optimizations not reusing the same local for two different const values:

/// Used only for intrinsic implementations that need a compile-time constant
///
/// All uses of this function are a bug inside stdarch. [`eval_mir_constant`]
/// should be used everywhere, but for some vendor intrinsics stdarch forgets
/// to wrap the immediate argument in `const {}`, necessitating this hack to get
/// the correct value at compile time instead.
pub(crate) fn mir_operand_get_const_val<'tcx>(
    fx: &FunctionCx<'_, '_, 'tcx>,
    operand: &Operand<'tcx>,
) -> Option<ScalarInt> {
    match operand {
        Operand::Constant(const_) => eval_mir_constant(fx, const_).0.try_to_scalar_int(),
        // FIXME(rust-lang/rust#85105): Casts like `IMM8 as u32` result in the const being stored
        // inside a temporary before being passed to the intrinsic requiring the const argument.
        // This code tries to find a single constant defining definition of the referenced local.
        Operand::Copy(place) | Operand::Move(place) => {
            if !place.projection.is_empty() {
                return None;
            }
            let mut computed_scalar_int = None;
            for bb_data in fx.mir.basic_blocks.iter() {
                for stmt in &bb_data.statements {
                    match &stmt.kind {
                        StatementKind::Assign(local_and_rvalue) if &local_and_rvalue.0 == place => {
                            match &local_and_rvalue.1 {
                                Rvalue::Cast(
                                    CastKind::IntToInt
                                    | CastKind::FloatToFloat
                                    | CastKind::FloatToInt
                                    | CastKind::IntToFloat
                                    | CastKind::FnPtrToPtr
                                    | CastKind::PtrToPtr,
                                    operand,
                                    ty,
                                ) => {
                                    if computed_scalar_int.is_some() {
                                        return None; // local assigned twice
                                    }
                                    if !matches!(ty.kind(), ty::Uint(_) | ty::Int(_)) {
                                        return None;
                                    }
                                    let scalar_int = mir_operand_get_const_val(fx, operand)?;
                                    let scalar_int =
                                        match fx.layout_of(*ty).size.cmp(&scalar_int.size()) {
                                            Ordering::Equal => scalar_int,
                                            Ordering::Less => match ty.kind() {
                                                ty::Uint(_) => ScalarInt::try_from_uint(
                                                    scalar_int.to_uint(scalar_int.size()),
                                                    fx.layout_of(*ty).size,
                                                )
                                                .unwrap(),
                                                ty::Int(_) => ScalarInt::try_from_int(
                                                    scalar_int.to_int(scalar_int.size()),
                                                    fx.layout_of(*ty).size,
                                                )
                                                .unwrap(),
                                                _ => unreachable!(),
                                            },
                                            Ordering::Greater => return None,
                                        };
                                    computed_scalar_int = Some(scalar_int);
                                }
                                Rvalue::Use(operand) => {
                                    computed_scalar_int = mir_operand_get_const_val(fx, operand)
                                }
                                _ => return None,
                            }
                        }
                        StatementKind::SetDiscriminant { place: stmt_place, variant_index: _ }
                            if &**stmt_place == place =>
                        {
                            return None;
                        }
                        StatementKind::Intrinsic(ref intrinsic) => match **intrinsic {
                            NonDivergingIntrinsic::CopyNonOverlapping(..) => return None,
                            NonDivergingIntrinsic::Assume(..) => {}
                        },
                        // conservative handling
                        StatementKind::Assign(_)
                        | StatementKind::FakeRead(_)
                        | StatementKind::SetDiscriminant { .. }
                        | StatementKind::Deinit(_)
                        | StatementKind::StorageLive(_)
                        | StatementKind::StorageDead(_)
                        | StatementKind::Retag(_, _)
                        | StatementKind::AscribeUserType(_, _)
                        | StatementKind::PlaceMention(..)
                        | StatementKind::Coverage(_)
                        | StatementKind::ConstEvalCounter
                        | StatementKind::BackwardIncompatibleDropHint { .. }
                        | StatementKind::Nop => {}
                    }
                }
                match &bb_data.terminator().kind {
                    TerminatorKind::Goto { .. }
                    | TerminatorKind::SwitchInt { .. }
                    | TerminatorKind::UnwindResume
                    | TerminatorKind::UnwindTerminate(_)
                    | TerminatorKind::Return
                    | TerminatorKind::Unreachable
                    | TerminatorKind::Drop { .. }
                    | TerminatorKind::Assert { .. } => {}
                    TerminatorKind::Yield { .. }
                    | TerminatorKind::CoroutineDrop
                    | TerminatorKind::FalseEdge { .. }
                    | TerminatorKind::FalseUnwind { .. } => unreachable!(),
                    TerminatorKind::InlineAsm { .. } => return None,
                    TerminatorKind::Call { destination, target: Some(_), .. }
                        if destination == place =>
                    {
                        return None;
                    }
                    TerminatorKind::TailCall { .. } => return None,
                    TerminatorKind::Call { .. } => {}
                }
            }
            computed_scalar_int
        }
    }
}

@Amanieu
Member

Amanieu commented Oct 7, 2025

Since this is a new intrinsic, we could just emit an error if the operand is not Operand::Constant and require callers to pass a proper MIR constant (via a const block).

@bjorn3
Member

bjorn3 commented Oct 7, 2025

What is the advantage of that over const generics?

@RalfJung
Member

RalfJung commented Oct 7, 2025 via email

@sayantn
Contributor Author

sayantn commented Oct 7, 2025

As @RalfJung and @bjorn3 both agree on using a const-generic parameter instead of this "hacky" const parameter, I am updating this PR to use a const-generic 3-variant enum: Unaligned (1-byte aligned), Element (aligned to the SIMD element type) and Vector (aligned to the SIMD vector type). These are the most common in SIMD code (stdarch will mostly use Unaligned and Vector, and portable-simd uses Element). My only (tiny) complaint is that because it has 3 generic parameters, calls look a bit ugly, but sometimes we need to sacrifice a bit of readability for a better design.

@rustbot
Collaborator

rustbot commented Oct 7, 2025

This PR modifies tests/auxiliary/minicore.rs.

cc @jieyouxu

@rustbot rustbot added the A-test-infra-minicore Area: `minicore` test auxiliary and `//@ add-core-stubs` label Oct 7, 2025
@rustbot
Collaborator

rustbot commented Oct 7, 2025

This PR was rebased onto a different master commit. Here's a range-diff highlighting what actually changed.

Rebasing is a normal part of keeping PRs up to date, so no action is needed—this note is just to help reviewers.
