
Commit 990df49

removed jinja is_rocm on total_L as USE_ROCM is already applied

1 parent 570f148 commit 990df49

File tree: 1 file changed (+0 additions, −2 deletions)


fbgemm_gpu/codegen/training/pt2/embedding_split_host_pt2_autograd_template.cpp

Lines changed: 0 additions & 2 deletions
@@ -1009,9 +1009,7 @@ static torch::autograd::variable_list backward(
   int32_t max_segment_length_per_warp = 64;
   // Workaround. Should not be upstreamed in any way.
   // Redistribute all cta_per_row work to warp_per_row.
-  {% if is_rocm %}
   int32_t total_L = indices.numel();
-  {%- endif %}
   {%- if (not nobag) and
       (optimizer == "rowwise_adagrad") and
       (not vbe) and

0 commit comments
