Commit e0db2f1

Remove redundant comment

1 parent 585300d commit e0db2f1

File tree

1 file changed: +0 −2 lines

fbgemm_gpu/codegen/training/pt2/embedding_split_host_pt2_autograd_template.cpp

Lines changed: 0 additions & 2 deletions

@@ -1061,8 +1061,6 @@ static torch::autograd::variable_list backward(
 #ifdef USE_ROCM
     constexpr int32_t BT_block_size = 64;
     int32_t max_segment_length_per_warp = 64;
-    // Workaround. Should not be upstreamed in any way.
-    // Redistribute all cta_per_row work to warp_per_row.
     int32_t total_L = indices.numel();
     {%- if (not nobag) and
         (optimizer == "rowwise_adagrad") and
