
Segmentation fault when converting my llama2c models to ggml #2574

@saltyduckegg


Hello!
I am trying to convert my llama2c models to ggml, but it looks like the converter needs a vocab file. How can I get one?

Or, how can I convert my tokenizer.model to a GGML file? I only have tokenizer.model and tokenizer.bin.

$ ./bin/convert-llama2c-to-ggml --vocab-model ../../llama2.c.xs/tokenizer.model   --llama2c-model  ../../llama2.c.xs/out/model.bin   --llama2c-output-model ./xs
[malloc_weights:AK] Allocating [8000] x [288] = [2304000] float space for w->token_embedding_table
[malloc_weights:AK] Allocating [6] x [288] = [1728] float space for w->rms_att_weight
[malloc_weights:AK] Allocating [6] x [288] = [1728] float space for w->rms_ffn_weight
[malloc_weights:AK] Allocating [6] x [288] x [288] = [497664] float space for w->wq
[malloc_weights:AK] Allocating [6] x [288] x [288] = [497664] float space for w->wk
[malloc_weights:AK] Allocating [6] x [288] x [288] = [497664] float space for w->wv
[malloc_weights:AK] Allocating [6] x [288] x [288] = [497664] float space for w->wo
[malloc_weights:AK] Allocating [6] x [768] x [288] = [1327104] float space for w->w1
[malloc_weights:AK] Allocating [6] x [288] x [768] = [1327104] float space for w->w2
[malloc_weights:AK] Allocating [6] x [768] x [288] = [1327104] float space for w->w3
[malloc_weights:AK] Allocating [288] float space for w->rms_final_weight
llama.cpp: loading model from ../../llama2.c.xs/tokenizer.model
error loading model: unknown (magic, version) combination: 050a0e0a, 6b6e753c; is this really a GGML file?
llama_load_model_from_file: failed to load model
Segmentation fault (core dumped)
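The error line above hints at the cause: `--vocab-model` is being given the raw SentencePiece `tokenizer.model`, which is a protobuf, while the loader expects a GGML-format file and rejects it at the magic check before crashing. A minimal sketch of that check (the set of GGML-family magics below is an assumption based on the loaders of that era, and `identify_magic` is a hypothetical helper, not part of llama.cpp):

```python
import struct

# Known ggml-family file magics, read as little-endian uint32.
# Assumption: this matches the magics llama.cpp accepted at the time.
GGML_MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf",
    0x67676A74: "ggjt",
}

def identify_magic(path: str) -> str:
    """Read the first 4 bytes of a file and report whether it looks like GGML."""
    with open(path, "rb") as f:
        raw = f.read(4)
    if len(raw) < 4:
        return "file too short"
    (magic,) = struct.unpack("<I", raw)
    if magic in GGML_MAGICS:
        return f"GGML file: {GGML_MAGICS[magic]}"
    # A SentencePiece tokenizer.model is a protobuf and typically begins
    # with a 0x0A field tag, which is why the log shows a "magic" like
    # 050a0e0a instead of a ggml magic.
    return f"not a GGML file (magic 0x{magic:08x})"
```

If that diagnosis is right, the fix would be to pass a GGML-format vocab file instead of the raw tokenizer.model; one common route (an assumption, not verified against this exact tool version) is running llama.cpp's convert.py with its `--vocab-only` option against a directory containing tokenizer.model, then pointing `--vocab-model` at the resulting file.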
