Conversation


@ddh0 ddh0 commented Apr 30, 2025

Fixed a typo in `src/llama-context.cpp:117`: `n_ctx_pre_seq` -> `n_ctx_per_seq`

@ericcurtin ericcurtin merged commit 16a457f into ggml-org:master Apr 30, 2025
47 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request May 1, 2025
* origin/master:
sync : ggml
whisper : add check that target name exists (whisper/3103)
ggml : suppress Windows compiler warnings (whisper/3075)
mtmd : add vision support for Mistral Small 3.1 (ggml-org#13231)
arg : remove CURLINFO_EFFECTIVE_METHOD (ggml-org#13228)
llama-model : fix the reported size class for nomic-embed-text-v2-moe (ggml-org#13223)
sync : ggml
ggml : fix ggml_gallocr_ptr type (ggml/1205)
cuda : fix unused variable compile warning (whisper/0)
CUDA: batched+noncont MMQ, refactor bs>1 MoE code (ggml-org#13199)
arg : -hf do not fail if url mismatch (ggml-org#13219)
fix typo: `n_ctx_pre_seq` -> `n_ctx_per_seq` (ggml-org#13221)
convert : improve model arch handling (ggml-org#13122)
llava : remove duplicate include (ggml-org#13207)
common : add -jf / --json-schema-file flag (ggml-org#12011)


2 participants