All patterns in populateVectorNarrowTypeEmulationPatterns currently assume a 1-D vector load/store rather than an n-D vector load/store. This assumption is evident in ConvertVectorTransferRead, for example, here:
auto newRead = rewriter.create<vector::TransferReadOp>(
    loc, VectorType::get(numElements, newElementType), adaptor.getSource(),
    getValueOrCreateConstantIndexOp(rewriter, loc, linearizedIndices),
    newPadding);
auto bitCast = rewriter.create<vector::BitCastOp>(
    loc, VectorType::get(numElements * scale, oldElementType), newRead);
Both invocations of VectorType::get() here generate a 1-D vector type.
Attempts to use these patterns in more general cases, such as 2-D vectors, fail. For example, running the following 2-D i8 case through the i32 emulation:
func.func @vector_maskedload_2d_i8_negative(
    %idx1: index,
    %idx2: index,
    %num_elems: index,
    %passthru: vector<2x4xi8>) -> vector<2x4xi8> {
  %0 = memref.alloc() : memref<3x4xi8>
  %mask = vector.create_mask %num_elems, %num_elems : vector<2x4xi1>
  %1 = vector.maskedload %0[%idx1, %idx2], %mask, %passthru :
    memref<3x4xi8>, vector<2x4xi1>, vector<2x4xi8> into vector<2x4xi8>
  return %1 : vector<2x4xi8>
}
leads to the following error, because the pattern mixes the newly created 1-D types with the original 2-D ones in the generated vector.bitcast:
error: 'vector.bitcast' op failed to verify that all of {source, result} have same rank
%1 = vector.maskedload %0[%idx1, %idx2], %mask, %passthru :
^
Here’s the mlir-opt invocation used:
mlir-opt --test-emulate-narrow-int="arith-compute-bitwidth=1 memref-load-bitwidth=32"
For further context, I’ve included a full list of reproductions as tests in this PR:
As a temporary workaround, I suggest restricting these patterns to 1-D vectors.
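A minimal sketch of that restriction, assuming the guard is added at the top of each pattern's matchAndRewrite (the pattern name, accessors, and diagnostic text below are illustrative rather than the exact upstream code):

// Illustrative sketch: add an early rank check so n-D vectors are left alone.
struct ConvertVectorMaskedLoad final
    : OpConversionPattern<vector::MaskedLoadOp> {
  using OpConversionPattern::OpConversionPattern;

  LogicalResult
  matchAndRewrite(vector::MaskedLoadOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    // Reject anything that is not a 1-D vector until n-D support is added.
    if (op.getVectorType().getRank() != 1)
      return rewriter.notifyMatchFailure(op, "only 1-D vectors are supported");
    // ... existing 1-D emulation logic unchanged ...
    return success();
  }
};

The same guard would presumably be needed in the other patterns populated by populateVectorNarrowTypeEmulationPatterns (the load, store, maskedstore, and transfer_read variants), so cases like the 2-D reproducer above simply fail to legalize instead of producing invalid IR.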