Commit 2bf9af9

jhen0409 authored and vonstring committed
talk-llama : fix n_gpu_layers usage again (ggml-org#1442)
1 parent 924b394 commit 2bf9af9

File tree

1 file changed (+1, -1)


examples/talk-llama/talk-llama.cpp (1 addition, 1 deletion)

@@ -267,7 +267,7 @@ int main(int argc, char ** argv) {
 
     auto lmparams = llama_model_default_params();
     if (!params.use_gpu) {
-        lcparams.lmparams = 0;
+        lmparams.n_gpu_layers = 0;
     }
 
     struct llama_model * model_llama = llama_load_model_from_file(params.model_llama.c_str(), lmparams);
