Conversation

ggerganov (Member):

ref #9418 (comment)

This is useful for printing on the same line as the previous log message, which may have a prefix (e.g. timestamp, log level, etc.). For example:

            LOG_INF("%s: static prompt based on n_keep: '", __func__);
            for (int i = 0; i < params.n_keep; i++) {
                LOG_CNT("%s", llama_token_to_piece(ctx, embd_inp[i]).c_str());
            }
            LOG_CNT("'\n");
0.00.815.053 I    198 -> '
'
0.00.815.053 I main: static prompt based on n_keep: '<|im_start|>system
You are'
0.00.815.060 I 
0.00.815.062 I main: interactive mode on.

@github-actions bot added the `examples` and `ggml` (changes relating to the ggml tensor library for machine learning) labels on Sep 23, 2024
@ggerganov ggerganov merged commit cea1486 into master Sep 24, 2024
54 checks passed
@ggerganov ggerganov deleted the gg/log-cnt branch September 24, 2024 07:15
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024