Conversation

@NickLucche (Collaborator) commented Jan 31, 2025

There's a test suite, currently disabled in #12240 due to a flaky test, in which the output tokens end up slightly different from the baseline (here's the test: https://github.com/vllm-project/vllm/blob/main/tests/spec_decode/e2e/test_integration_dist_tp2.py#L104).
The issue is only reproducible with tp>1.

Since the difference is only a slight change of phrasing in the output sentence, I suspect it might be a numerical issue (as also noted in the original PR that introduced the test, #6050 (comment)). Tests are now passing with both fp32 and fp16 accuracy, but I'll be on the lookout in case it errors out again.
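
For reference, the check in question is a greedy-equality comparison along these lines (a minimal sketch, not the actual test code; the model name, prompt, and speculative settings below are placeholders, using the `speculative_model`/`num_speculative_tokens` engine args of the vLLM version at the time):

```python
# Minimal sketch of the greedy-equality check involved; NOT the actual test
# code (see the linked test file). Model names, sizes, and the prompt are
# illustrative placeholders.
from vllm import LLM, SamplingParams

prompts = ["The future of AI is"]
greedy = SamplingParams(temperature=0.0, max_tokens=64)

# Baseline: plain decoding at tp=2.
baseline = LLM(model="JackFram/llama-68m", tensor_parallel_size=2)
baseline_ids = [o.outputs[0].token_ids for o in baseline.generate(prompts, greedy)]

# Speculative decoding with a draft model at the same tp. With greedy sampling,
# the output token ids should match the baseline exactly; this is what flakes.
spec = LLM(
    model="JackFram/llama-68m",
    speculative_model="JackFram/llama-68m",
    num_speculative_tokens=3,
    tensor_parallel_size=2,
)
spec_ids = [o.outputs[0].token_ids for o in spec.generate(prompts, greedy)]

# (The real test runs the two configurations in separate processes.)
assert baseline_ids == spec_ids
```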

Signed-off-by: NickLucche <[email protected]>
@github-actions (bot)

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 (Member)

It looks like we now have an OOM error; can you fix it?

@NickLucche (Collaborator, Author) commented Feb 4, 2025

@DarkLight1337 I've looked into it, but I'm not sure what's causing the OOM at load time on L40 GPUs. It appears we're at the limit of what can fit on that GPU, but the math doesn't add up for me.

We should be loading, approximately: 2 × 6.5 GiB of model weights + 2 × 0.5 GiB of activations => only ~5 GiB of KV cache left to split between scorer and drafter.

Yet profiling reports ~12 GiB of KV cache, because it wasn't accounting for the draft model:

INFO 02-04 10:21:00 worker.py:266] the current vLLM instance can use total_gpu_memory (21.95GiB) x gpu_memory_utilization (0.90) = 19.76GiB
INFO 02-04 10:21:00 worker.py:266] model weights take 6.50GiB; non_torch_memory takes 0.06GiB; PyTorch activation peak memory takes 0.45GiB; the rest of the memory reserved for KV Cache is 12.73GiB. <===
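
Back-of-the-envelope, using the numbers from the log above (a sketch; the 2× factors assume both scorer and drafter load full weights and activations):

```python
# Back-of-the-envelope accounting with the numbers from the log above.
total_gpu_memory = 21.95            # GiB, from the log
gpu_memory_utilization = 0.90
budget = total_gpu_memory * gpu_memory_utilization   # ~19.76 GiB

weights = 2 * 6.5                   # GiB: target + draft model weights
activations = 2 * 0.5               # GiB: peak activations for both models
expected_kv_cache = budget - weights - activations   # ~5.76 GiB

reported_kv_cache = 12.73           # GiB, from the log
# The ~7 GiB gap matches one model's weights + activations (6.5 + 0.5):
# the profiler charged only the scorer, not the drafter.
print(f"expected ~{expected_kv_cache:.2f} GiB vs reported {reported_kv_cache} GiB")
```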


The issue can be reproduced on an L40 even with tp=1.

@DarkLight1337 (Member)

cc @youkaichao @joerunde

@youkaichao (Member)

cc @LiuXiaoxuanPKU for memory profiling in spec decode.

@LiuXiaoxuanPKU self-assigned this Feb 4, 2025
@mergify bot commented May 5, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @NickLucche.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label May 5, 2025
@github-actions bot commented Aug 5, 2025

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions bot added the stale (Over 90 days of inactivity) label Aug 5, 2025
@github-actions bot commented Sep 6, 2025

This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!

@github-actions bot closed this Sep 6, 2025