This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit ee12d64

jan-service-account, github-actions[bot], and sangjanai authored
Update llama.cpp submodule to latest release b3534 (#178)
* Update submodule to latest release b3534
* fix: API changes
* fix: assign model and context

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: vansangpfiev <[email protected]>
1 parent 4bf9c4e commit ee12d64

File tree

2 files changed: +4 −2 lines

llama.cpp

src/llama_server_context.cc

Lines changed: 3 additions & 1 deletion

@@ -177,7 +177,9 @@ bool LlamaServerContext::LoadModel(const gpt_params& params_) {
      }
    }

-  std::tie(model, ctx) = llama_init_from_gpt_params(params);
+  auto res = llama_init_from_gpt_params(params);
+  model = res.model;
+  ctx = res.context;
   if (model == nullptr) {
     LOG_ERROR_LLAMA("llama.cpp unable to load model",
                     {{"model", params.model}});
