@@ -66,8 +66,7 @@ vectorizeConvolution(RewriterBase &rewriter, LinalgOp convOp,
 ///   * inferred from the static dims in the input and output tensors.
 /// Bails out if:
 ///   * vector sizes are not user-provided, and
-///   * at least one dim is dynamic (in both the input and output tensors),
-///   bails out.
+///   * at least one dim is dynamic (in both the input and output tensors).
 ///
 /// Before:
 ///   !t_in_type = tensor<1x2x3xf32>
@@ -1918,15 +1917,15 @@ vectorizeInsertSliceOpPrecondition(tensor::InsertSliceOp sliceOp,
     return failure();
 
   // Get the pad value.
-  // TransferReadOp (which is used to vectorize InsertSliceOp, requires a scalar
-  // padding value. Note that:
-  //    * for in-bounds access, the value is actually irrelevant.
-  //  There are 2 cases in which xfer.read accesses are known to be in-bounds:
+  // TransferReadOp (which is used to vectorize InsertSliceOp), requires a
+  // scalar padding value. Note that:
+  //    * for in-bounds accesses,
+  // the value is actually irrelevant. There are 2 cases in which xfer.read
+  // accesses are known to be in-bounds:
   //  1. The source shape is static (output vector sizes would be based on
   //     the source shape and hence all memory accesses would be in-bounds),
-  //  2. Masking is used (output vector sizes would be user-provided, in which
-  //     case it is assumed that all memory accesses are in-bounds). This
-  //     remains a TODO.
+  //  2. Masking is used, i.e. the output vector sizes are user-provided. In
+  //     this case it is safe to assume that all memory accesses are in-bounds.
   //
   // When the value is not known and not needed, use 0. Otherwise, bail out.
   Value padValue = getStaticPadVal(sliceOp);