Conversation

studyingeugene
Contributor

Suggestion

This PR removes a redundant one-line Tensor allocation in
GaussianConditional.update() within entropy_models.py.

    # Before
    quantized_cdf = torch.Tensor(len(pmf_length), max_length + 2)
    quantized_cdf = self._pmf_to_cdf(pmf, tail_mass, pmf_length, max_length)

    # After
    quantized_cdf = self._pmf_to_cdf(pmf, tail_mass, pmf_length, max_length)

Testing

  • Ran unit tests that exercise GaussianConditional.update() and dependent
    compression/decompression paths; no regressions observed.

  • Verified that self._quantized_cdf, self._offset, and self._cdf_length
    match previous values bit-for-bit for a fixed scale_table.

Why this trivial change matters

TorchDynamo or FX tracing can sometimes capture redundant allocations as live operations, bloating the traced graph.

Removing this line avoids that risk while keeping _pmf_to_cdf() solely responsible for dtype and device consistency.

No behavioral change is expected; this is a safe cleanup.

…pdate()

In GaussianConditional.update(), a temporary Tensor is allocated:

    quantized_cdf = torch.Tensor(len(pmf_length), max_length + 2)
    quantized_cdf = self._pmf_to_cdf(pmf, tail_mass, pmf_length, max_length)

The first line is a dead store: the variable is bound to an uninitialized
Tensor and then immediately rebound to the result of _pmf_to_cdf(), so the
allocated buffer is never read. This commit removes the unnecessary
allocation.

- No functional changes
- Slightly reduces heap traffic and avoids creating an uninitialized Tensor
- Keeps dtype/device fully defined by _pmf_to_cdf()
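The dead-store pattern described above can be sketched without PyTorch. In the sketch below, make_uninitialized and pmf_to_cdf are hypothetical stand-ins for torch.Tensor(rows, cols) and self._pmf_to_cdf(...); they only illustrate that the first binding's value is never read, so removing it cannot change the result.

```python
def make_uninitialized(rows, cols):
    # Stand-in for torch.Tensor(rows, cols): allocates a buffer whose
    # contents are never meaningfully used by the caller.
    return [[0.0] * cols for _ in range(rows)]

def pmf_to_cdf(pmf):
    # Stand-in for _pmf_to_cdf(): builds a cumulative distribution from
    # scratch, ignoring any previously allocated buffer.
    cdf = [0.0]
    for p in pmf:
        cdf.append(cdf[-1] + p)
    return cdf

def update_before(pmf):
    # Before: the first binding is a dead store -- quantized_cdf is
    # rebound before the allocated buffer is ever read.
    quantized_cdf = make_uninitialized(1, len(pmf) + 2)
    quantized_cdf = pmf_to_cdf(pmf)
    return quantized_cdf

def update_after(pmf):
    # After: the redundant allocation is removed; the output is identical.
    return pmf_to_cdf(pmf)

# Both variants produce the same CDF for any pmf.
assert update_before([0.5, 0.3, 0.2]) == update_after([0.5, 0.3, 0.2])
```

Because the first binding is never observed, the change is behavior-preserving by construction, which matches the bit-for-bit verification reported in the Testing section.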
@fracape fracape self-assigned this Oct 22, 2025
@fracape fracape merged commit ff16d32 into InterDigitalInc:master Oct 22, 2025
6 checks passed
@studyingeugene
Contributor Author

Thanks for merging.
