
Conversation

@JohannesGaessler
Collaborator

This PR adds CUDA support for matrix multiplications with ne03 != ne13. I added support to ggml_cuda_op_mul_mat, which provides the kernels with pointers on which they can perform matrix multiplications as if dimensions 2 and 3 had size 1. However, because this adds a lot of overhead, I also extended mul_mat_vec to support batched matrix-vector multiplication in dimension 3.
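For readers unfamiliar with ggml's broadcasting convention: when ne03 != ne13, each slice of src0 in dimension 3 is reused for several slices of src1, with the mapping i03 = i13 / (ne13 / ne03). Below is a minimal standalone sketch of that index mapping only; it is not the PR's actual kernel or launch code, and the shapes are made up for illustration.

```cpp
// Hypothetical sketch of dim-3 broadcast index mapping (not the llama.cpp code).
// Each src1 slice i13 is paired with src0 slice i13 / r3, where r3 = ne13/ne03
// (ne13 is assumed to be a multiple of ne03). The kernel itself then only sees
// a single 2D matrix multiplication per (i13) slice.
#include <cstdint>
#include <cstdio>

int main() {
    const int64_t ne03 = 2;          // src0 batch size in dimension 3
    const int64_t ne13 = 8;          // src1 batch size in dimension 3
    const int64_t r3   = ne13 / ne03; // each src0 slice is broadcast over r3 src1 slices

    for (int64_t i13 = 0; i13 < ne13; ++i13) {
        const int64_t i03 = i13 / r3; // src0 index reused across r3 consecutive src1 slices
        printf("src1 slice %2lld -> src0 slice %lld\n", (long long) i13, (long long) i03);
    }
    return 0;
}
```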

@github-actions github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) and ggml (changes relating to the ggml tensor library for machine learning) labels on Feb 4, 2025
@JohannesGaessler JohannesGaessler force-pushed the cuda-mm-dim3-broadcast branch 2 times, most recently from a33f0f5 to 9887021 on February 4, 2025 at 15:52
@JohannesGaessler
Collaborator Author

I don't understand why the server CI jobs are failing; for some reason the server isn't online after 12 seconds. Can I assume it's unrelated to my changes?

@slaren
Member

slaren commented Feb 4, 2025

It's probably failing to download the test model from HF, which happens occasionally.

@JohannesGaessler JohannesGaessler merged commit fa62da9 into ggml-org:master Feb 5, 2025
91 of 92 checks passed
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
orca-zhang pushed a commit to orca-zhang/llama.cpp that referenced this pull request Feb 26, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025