Conversation

@tejom (Contributor) commented Nov 7, 2023

llava-cli was loading models with default params and ignoring settings from the CLI. This switches to a generic function to load the params from the CLI options.

@tejom tejom marked this pull request as ready for review November 7, 2023 07:29
@tejom (Contributor, Author) commented Nov 7, 2023

Hey, small PR here. I wrote a quick fix when I noticed that the model wasn't offloading layers to my GPU even though I had the setting on the CLI.

@monatis (Collaborator) commented Nov 7, 2023

Thanks, this was a regression introduced in #3613.

@monatis monatis merged commit 54b4df8 into ggml-org:master Nov 7, 2023
@tejom (Contributor, Author) commented Nov 7, 2023

Np, appreciate the quick turnaround!

olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023