Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
I am trying to get llama.cpp to run on Windows 10; the expected behavior is that the model loads and inference starts.
Current Behavior
Generally everything seems to work, but the model does not load. After it starts loading the model, the process simply exits after a few seconds. No error, no message, nothing. RAM usage climbs for a few seconds, and the same goes for CPU and GPU.
I have tried every available binary, always with the same outcome. The two models I have tried are ggml-vicuna-7b-1.1 and ggml-vicuna-13b-1.1.
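As a sanity check on the downloads (I assume a truncated model file could plausibly cause a silent exit), I listed the file sizes. The models folder below is an assumption about where the files live, not my exact path:

```powershell
# Assumption: the model files live under .\models\ (adjust the path as needed).
# Both files should be several GB in size; a much smaller number would
# indicate a truncated download.
Get-Item .\models\ggml-vicuna-*.bin | Select-Object Name, Length
```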
If I run, for example, ".\main.exe --help", the output is correct, so the binary works in principle; my GPU is also detected when I use the CLBlast build.
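For reference, here is a minimal sketch of the kind of invocation that fails for me, together with a way to at least see the exit code in PowerShell. The model path and prompt are placeholders, not my exact command:

```powershell
# Minimal run; -m is the model path and -p the prompt flag of main.exe.
.\main.exe -m .\models\ggml-vicuna-7b-1.1.bin -p "Hello"
# PowerShell stores the last native process's exit code here; a non-zero
# value would at least confirm the process is crashing rather than
# exiting cleanly.
echo $LASTEXITCODE
```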
Below I have attached the console input and output. I am really at a loss; hours of googling did not yield anything, because apparently nobody else has this problem.

Environment and Context
- 32 GB of RAM
- AMD Ryzen 9 5950x
- Nvidia RTX 3090
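The specs above were confirmed with systeminfo; a sketch of the command I used (I only summarized the relevant lines rather than pasting the full output):

```powershell
# Pull the OS and memory lines out of the full systeminfo report.
systeminfo | Select-String "OS Name", "OS Version", "Total Physical Memory"
```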