Commit 75dc800

talk-llama : fix n_gpu_layers usage again (#1442)
1 parent 0c91aef commit 75dc800

File tree

1 file changed

+1
-1
lines changed

examples/talk-llama/talk-llama.cpp

Lines changed: 1 addition & 1 deletion

@@ -267,7 +267,7 @@ int main(int argc, char ** argv) {
 
     auto lmparams = llama_model_default_params();
     if (!params.use_gpu) {
-        lcparams.lmparams = 0;
+        lmparams.n_gpu_layers = 0;
     }
 
     struct llama_model * model_llama = llama_load_model_from_file(params.model_llama.c_str(), lmparams);

0 commit comments
