Conversation

@ggerganov
Member

No description provided.

@slaren
Member

slaren commented Mar 11, 2024

I found it a bit confusing that this parameter is called n_parallel in llama_context_params, but n_seq_max everywhere else.
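For reference, a minimal usage sketch of how the two names line up once the rename in this PR is applied. Field and function names here are assumed from llama.h around this change, not taken verbatim from the diff:

    // Sketch: after the rename, the context parameter and the batch helper
    // describe the same concept with the same name, n_seq_max.
    #include "llama.h"

    int main(void) {
        struct llama_context_params cparams = llama_context_default_params();
        cparams.n_seq_max = 4;   // previously cparams.n_parallel

        // per the comment above, the rest of the API already used n_seq_max
        struct llama_batch batch = llama_batch_init(/*n_tokens =*/ 512,
                                                    /*embd     =*/ 0,
                                                    /*n_seq_max=*/ 4);
        llama_batch_free(batch);
        return 0;
    }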

@ggerganov ggerganov merged commit 05b0621 into master Mar 11, 2024
@ggerganov ggerganov deleted the gg/counts-consistency branch March 11, 2024 15:49
NeoZhangJianyu pushed a commit to NeoZhangJianyu/llama.cpp that referenced this pull request Mar 12, 2024
* llama : more consistent names of count variables

ggml-ci

* llama : n_parallel -> n_seq_max

* common : fix param name

* examples : fix param name
jordankanter pushed a commit to jordankanter/llama.cpp that referenced this pull request Mar 13, 2024
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024