
Eval bug: GGML_ASSERT(ggml_can_mul_mat(a, b)) failed #15806

@armin976

Description


Name and Version

I'm using the XCFramework on version b6379

Operating systems

Other (iOS; see description)

GGML backends

Metal

Hardware

A16 Bionic chip on iPhone 14 Pro.

Models

I'm using the nomic-ai/nomic-embed-text-v1.5-GGUF at Q4_K_M quantization.

Problem description & steps to reproduce

I've created a .h and a .mm file to use the XCFramework from my Swift app. When I run local RAG with an embedding model that vectorizes datasets, I hit the GGML assert while loading the embedding model. The relevant parameters from my .mm file:

    - (instancetype)initWithModelPath:(NSString *)modelPath threads:(int)threads {
        self = [super init];
        if (!self) return nil;

        llama_backend_init();

        struct llama_model_params mp = llama_model_default_params();
        mp.n_gpu_layers = 0; // CPU-only for embeddings

        _model = llama_load_model_from_file(modelPath.UTF8String, mp);
        if (!_model) return self; // note: returns a partially initialized object on load failure

        struct llama_context_params cp = llama_context_default_params();
        cp.embeddings   = true;
        cp.n_threads    = threads > 0 ? threads : 2;
        cp.n_ctx        = 2048;
        cp.n_ubatch     = cp.n_ctx;
        cp.pooling_type = LLAMA_POOLING_TYPE_MEAN; // llama.h enumerator; no cast needed

        _ctx = llama_new_context_with_model(_model, cp);
        _dim = _model ? llama_n_embd(_model) : 0;
        return self;
    }
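The `n_ubatch = n_ctx` line matters here: non-causal embedding models such as nomic-bert must process each sequence within a single micro-batch (llama.cpp rejects batches where a sequence has more tokens than `n_ubatch`), so longer texts have to be chunked before embedding. A minimal sizing sketch in C (my own illustration, not a llama.cpp API):

```c
#include <stdint.h>

// Illustrative helper (not part of llama.cpp): given a tokenized text of
// n_tokens and a context configured with n_ubatch, compute how many chunks
// the text must be split into so that each chunk fits one micro-batch.
static int32_t chunks_needed(int32_t n_tokens, int32_t n_ubatch) {
    if (n_tokens <= 0 || n_ubatch <= 0) return 0;
    return (n_tokens + n_ubatch - 1) / n_ubatch; // ceiling division
}
```

With `n_ubatch = 2048` as above, a 2048-token chunk fits exactly; a 2049-token one would need two chunks.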

Log:
...
ggml_metal_init: loaded kernel_cpy_q8_0_f32 0x10b55be00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cpy_q8_0_f16 0x10b55bf00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_concat 0x135f44000 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sqr 0x135f44100 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sqrt 0x135f44200 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sin 0x135f44300 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_cos 0x135f44400 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_neg 0x135f44500 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_reglu 0x135f44600 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_geglu 0x135f44700 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_swiglu 0x135f44800 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_swiglu_oai 0x135f44900 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_geglu_erf 0x135f44a00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_geglu_quick 0x135f44b00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_sum_rows 0x135f44c00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_mean 0x135f44d00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_argmax 0x135f44e00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pool_2d_avg_f32 0x135f44f00 | th_max = 1024 | th_width = 32
ggml_metal_init: loaded kernel_pool_2d_max_f32 0x135f45000 | th_max = 1024 | th_width = 32
set_abort_callback: call
llama_context: CPU output buffer size = 0.12 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 3
llama_context: max_nodes = 1024
llama_context: worst-case: n_tokens = 2048, n_seqs = 1, n_outputs = 0
graph_reserve: reserving a graph for ubatch with n_tokens = 1, n_seqs = 1, n_outputs = 0
/Users/runner/work/llama.cpp/llama.cpp/ggml/src/ggml.c:3023: GGML_ASSERT(ggml_can_mul_mat(a, b)) failed
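For context, the failing check is ggml's shape-compatibility test for matrix multiplication: the two operands' first (fastest-varying) dimensions must match, and the higher dimensions must broadcast. A self-contained sketch of the condition, mirroring `ggml_can_mul_mat` in ggml.c with the tensor struct reduced to its shape:

```c
#include <stdbool.h>
#include <stdint.h>

// Simplified stand-in for ggml_tensor: ne[] holds the four dimension sizes.
typedef struct { int64_t ne[4]; } tensor_shape;

// Sketch of the check behind GGML_ASSERT(ggml_can_mul_mat(a, b)):
// shared inner dimension must match, higher dims of b must be
// divisible by those of a (broadcasting).
bool can_mul_mat(const tensor_shape *a, const tensor_shape *b) {
    return a->ne[0] == b->ne[0] &&
           b->ne[2] % a->ne[2] == 0 &&
           b->ne[3] % a->ne[3] == 0;
}
```

So the assert firing during `graph_reserve` means some node in the reserved graph was built with mismatched operand shapes, which points at how the context/graph is being set up rather than at the multiply itself.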

First Bad Commit

I cannot figure out where the issue started. I do know it worked with older llama.cpp versions, and this is almost certainly not a llama.cpp issue but a problem in my own implementation.
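When the first bad commit is unknown but an older version is known to work, `git bisect` between the two release tags is the standard way to narrow it down. A sketch (the known-good tag is a placeholder you would have to supply):

```shell
# Sketch: bisect llama.cpp between a known-good release tag and b6379.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git bisect start
git bisect bad b6379        # first version known to fail
git bisect good <good-tag>  # last version where embeddings worked
# At each step: rebuild the XCFramework, run the repro, then mark the result:
#   git bisect good    or    git bisect bad
# Repeat until git prints the first bad commit, then: git bisect reset
```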

Relevant log output

[DatasetManager] startEmbeddingForID called for: OTL/188
[RAG] prepare.start dataset=OTL/188
[RAG][UI] OTL/188 Embedding / Warming Up 0% ETA … – Preparing chunks
[RAG][UI] OTL/188 Embedding / Warming Up 0% ETA … – Preparing chunks
[RAG] Using maxTokensPerChunk=1200 for chunking
[Embed] load_model path=/var/mobile/Containers/Data/Application/00D5801E-4CF1-4380-9E04-CD5281473BE5/Documents/LocalLLMModels/Embeddings/nomic-ai/nomic-embed-text-v1.5/nomic-embed-text-v1.5.Q4_K_M.gguf
[Embed] Using 3 threads
llama_model_load_from_file_impl: using device Metal (Apple A16 GPU) - 4088 MiB free
llama_model_loader: loaded meta data with 23 key-value pairs and 112 tensors from /var/mobile/Containers/Data/Application/00D5801E-4CF1-4380-9E04-CD5281473BE5/Documents/LocalLLMModels/Embeddings/nomic-ai/nomic-embed-text-v1.5/nomic-embed-text-v1.5.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = nomic-bert
llama_model_loader: - kv   1:                               general.name str              = nomic-embed-text-v1.5
llama_model_loader: - kv   2:                     nomic-bert.block_count u32              = 12
llama_model_loader: - kv   3:                  nomic-bert.context_length u32              = 2048
llama_model_loader: - kv   4:                nomic-bert.embedding_length u32              = 768
llama_model_loader: - kv   5:             nomic-bert.feed_forward_length u32              = 3072
llama_model_loader: - kv   6:            nomic-bert.attention.head_count u32              = 12
llama_model_loader: - kv   7:    nomic-bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv   8:                          general.file_type u32              = 15
llama_model_loader: - kv   9:                nomic-bert.attention.causal bool             = false
llama_model_loader: - kv  10:                    nomic-bert.pooling_type u32              = 1
llama_model_loader: - kv  11:                  nomic-bert.rope.freq_base f32              = 1000.000000
llama_model_loader: - kv  12:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  13:                tokenizer.ggml.bos_token_id u32              = 101
llama_model_loader: - kv  14:                tokenizer.ggml.eos_token_id u32              = 102
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,30522]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  20:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   51 tensors
llama_model_loader: - type q4_K:   43 tensors
llama_model_loader: - type q5_K:   12 tensors
llama_model_loader: - type q6_K:    6 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 79.49 MiB (4.88 BPW) 
init_tokenizer: initializing tokenizer for type 3
load: control token:    100 '[UNK]' is not marked as EOG
load: control token:    101 '[CLS]' is not marked as EOG
load: control token:      0 '[PAD]' is not marked as EOG
load: control token:    102 '[SEP]' is not marked as EOG
load: control token:    103 '[MASK]' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load:   - 102 ('[SEP]')
load: special tokens cache size = 5
load: token to piece cache size = 0.2032 MB
print_info: arch             = nomic-bert
print_info: vocab_only       = 0
print_info: n_ctx_train      = 2048
print_info: n_embd           = 768
print_info: n_layer          = 12
print_info: n_head           = 12
print_info: n_head_kv        = 12
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 768
print_info: n_embd_v_gqa     = 768
print_info: f_norm_eps       = 1.0e-12
print_info: f_norm_rms_eps   = 0.0e+00
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 3072
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 0
print_info: pooling type     = 1
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 2048
print_info: rope_finetuned   = unknown
print_info: model type       = 137M
print_info: model params     = 136.73 M
print_info: general.name     = nomic-embed-text-v1.5
print_info: vocab type       = WPM
print_info: n_vocab          = 30522
print_info: n_merges         = 0
print_info: BOS token        = 101 '[CLS]'
print_info: EOS token        = 102 '[SEP]'
print_info: UNK token        = 100 '[UNK]'
print_info: SEP token        = 102 '[SEP]'
print_info: PAD token        = 0 '[PAD]'
print_info: MASK token       = 103 '[MASK]'
print_info: LF token         = 0 '[PAD]'
print_info: EOG token        = 102 '[SEP]'
print_info: max token length = 21
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CPU, is_swa = 0
load_tensors: layer   1 assigned to device CPU, is_swa = 0
load_tensors: layer   2 assigned to device CPU, is_swa = 0
load_tensors: layer   3 assigned to device CPU, is_swa = 0
load_tensors: layer   4 assigned to device CPU, is_swa = 0
load_tensors: layer   5 assigned to device CPU, is_swa = 0
load_tensors: layer   6 assigned to device CPU, is_swa = 0
load_tensors: layer   7 assigned to device CPU, is_swa = 0
load_tensors: layer   8 assigned to device CPU, is_swa = 0
load_tensors: layer   9 assigned to device CPU, is_swa = 0
load_tensors: layer  10 assigned to device CPU, is_swa = 0
load_tensors: layer  11 assigned to device CPU, is_swa = 0
load_tensors: layer  12 assigned to device CPU, is_swa = 0
create_tensor: loading tensor token_embd.weight
create_tensor: loading tensor token_types.weight
create_tensor: loading tensor token_embd_norm.weight
create_tensor: loading tensor token_embd_norm.bias
create_tensor: loading tensor blk.0.attn_qkv.weight
create_tensor: loading tensor blk.0.attn_output.weight
create_tensor: loading tensor blk.0.attn_output_norm.weight
create_tensor: loading tensor blk.0.attn_output_norm.bias
create_tensor: loading tensor blk.0.ffn_up.weight
create_tensor: loading tensor blk.0.ffn_down.weight
create_tensor: loading tensor blk.0.ffn_gate.weight
create_tensor: loading tensor blk.0.layer_output_norm.weight
create_tensor: loading tensor blk.0.layer_output_norm.bias
create_tensor: loading tensor blk.1.attn_qkv.weight
create_tensor: loading tensor blk.1.attn_output.weight
create_tensor: loading tensor blk.1.attn_output_norm.weight
create_tensor: loading tensor blk.1.attn_output_norm.bias
create_tensor: loading tensor blk.1.ffn_up.weight
create_tensor: loading tensor blk.1.ffn_down.weight
create_tensor: loading tensor blk.1.ffn_gate.weight
create_tensor: loading tensor blk.1.layer_output_norm.weight
create_tensor: loading tensor blk.1.layer_output_norm.bias
create_tensor: loading tensor blk.2.attn_qkv.weight
create_tensor: loading tensor blk.2.attn_output.weight
create_tensor: loading tensor blk.2.attn_output_norm.weight
create_tensor: loading tensor blk.2.attn_output_norm.bias
create_tensor: loading tensor blk.2.ffn_up.weight
create_tensor: loading tensor blk.2.ffn_down.weight
create_tensor: loading tensor blk.2.ffn_gate.weight
create_tensor: loading tensor blk.2.layer_output_norm.weight
create_tensor: loading tensor blk.2.layer_output_norm.bias
create_tensor: loading tensor blk.3.attn_qkv.weight
create_tensor: loading tensor blk.3.attn_output.weight
create_tensor: loading tensor blk.3.attn_output_norm.weight
create_tensor: loading tensor blk.3.attn_output_norm.bias
create_tensor: loading tensor blk.3.ffn_up.weight
create_tensor: loading tensor blk.3.ffn_down.weight
create_tensor: loading tensor blk.3.ffn_gate.weight
create_tensor: loading tensor blk.3.layer_output_norm.weight
create_tensor: loading tensor blk.3.layer_output_norm.bias
create_tensor: loading tensor blk.4.attn_qkv.weight
create_tensor: loading tensor blk.4.attn_output.weight
create_tensor: loading tensor blk.4.attn_output_norm.weight
create_tensor: loading tensor blk.4.attn_output_norm.bias
create_tensor: loading tensor blk.4.ffn_up.weight
create_tensor: loading tensor blk.4.ffn_down.weight
create_tensor: loading tensor blk.4.ffn_gate.weight
create_tensor: loading tensor blk.4.layer_output_norm.weight
create_tensor: loading tensor blk.4.layer_output_norm.bias
create_tensor: loading tensor blk.5.attn_qkv.weight
create_tensor: loading tensor blk.5.attn_output.weight
create_tensor: loading tensor blk.5.attn_output_norm.weight
create_tensor: loading tensor blk.5.attn_output_norm.bias
create_tensor: loading tensor blk.5.ffn_up.weight
create_tensor: loading tensor blk.5.ffn_down.weight
create_tensor: loading tensor blk.5.ffn_gate.weight
create_tensor: loading tensor blk.5.layer_output_norm.weight
create_tensor: loading tensor blk.5.layer_output_norm.bias
create_tensor: loading tensor blk.6.attn_qkv.weight
create_tensor: loading tensor blk.6.attn_output.weight
create_tensor: loading tensor blk.6.attn_output_norm.weight
create_tensor: loading tensor blk.6.attn_output_norm.bias
create_tensor: loading tensor blk.6.ffn_up.weight
create_tensor: loading tensor blk.6.ffn_down.weight
create_tensor: loading tensor blk.6.ffn_gate.weight
create_tensor: loading tensor blk.6.layer_output_norm.weight
create_tensor: loading tensor blk.6.layer_output_norm.bias
create_tensor: loading tensor blk.7.attn_qkv.weight
create_tensor: loading tensor blk.7.attn_output.weight
create_tensor: loading tensor blk.7.attn_output_norm.weight
create_tensor: loading tensor blk.7.attn_output_norm.bias
create_tensor: loading tensor blk.7.ffn_up.weight
create_tensor: loading tensor blk.7.ffn_down.weight
create_tensor: loading tensor blk.7.ffn_gate.weight
create_tensor: loading tensor blk.7.layer_output_norm.weight
create_tensor: loading tensor blk.7.layer_output_norm.bias
create_tensor: loading tensor blk.8.attn_qkv.weight
create_tensor: loading tensor blk.8.attn_output.weight
create_tensor: loading tensor blk.8.attn_output_norm.weight
create_tensor: loading tensor blk.8.attn_output_norm.bias
create_tensor: loading tensor blk.8.ffn_up.weight
create_tensor: loading tensor blk.8.ffn_down.weight
create_tensor: loading tensor blk.8.ffn_gate.weight
create_tensor: loading tensor blk.8.layer_output_norm.weight
create_tensor: loading tensor blk.8.layer_output_norm.bias
create_tensor: loading tensor blk.9.attn_qkv.weight
create_tensor: loading tensor blk.9.attn_output.weight
create_tensor: loading tensor blk.9.attn_output_norm.weight
create_tensor: loading tensor blk.9.attn_output_norm.bias
create_tensor: loading tensor blk.9.ffn_up.weight
create_tensor: loading tensor blk.9.ffn_down.weight
create_tensor: loading tensor blk.9.ffn_gate.weight
create_tensor: loading tensor blk.9.layer_output_norm.weight
create_tensor: loading tensor blk.9.layer_output_norm.bias
create_tensor: loading tensor blk.10.attn_qkv.weight
create_tensor: loading tensor blk.10.attn_output.weight
create_tensor: loading tensor blk.10.attn_output_norm.weight
create_tensor: loading tensor blk.10.attn_output_norm.bias
create_tensor: loading tensor blk.10.ffn_up.weight
create_tensor: loading tensor blk.10.ffn_down.weight
create_tensor: loading tensor blk.10.ffn_gate.weight
create_tensor: loading tensor blk.10.layer_output_norm.weight
create_tensor: loading tensor blk.10.layer_output_norm.bias
create_tensor: loading tensor blk.11.attn_qkv.weight
create_tensor: loading tensor blk.11.attn_output.weight
create_tensor: loading tensor blk.11.attn_output_norm.weight
create_tensor: loading tensor blk.11.attn_output_norm.bias
create_tensor: loading tensor blk.11.ffn_up.weight
create_tensor: loading tensor blk.11.ffn_down.weight
create_tensor: loading tensor blk.11.ffn_gate.weight
create_tensor: loading tensor blk.11.layer_output_norm.weight
create_tensor: loading tensor blk.11.layer_output_norm.bias
load_tensors: tensor 'token_embd.weight' (q4_K) (and 111 others) cannot be used with preferred buffer type CPU_REPACK, using CPU instead
load_tensors: offloading 0 repeating layers to GPU
load_tensors: offloaded 0/13 layers to GPU
load_tensors:   CPU_Mapped model buffer size =    79.49 MiB
........................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 2048
llama_context: n_ctx_per_seq = 2048
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 2048
llama_context: causal_attn   = 0
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 1000.0
llama_context: freq_scale    = 1
ggml_metal_init: allocating
ggml_metal_init: picking default device: Apple A16 GPU
ggml_metal_load_library: using embedded metal library
ggml_metal_init: GPU name:   Apple A16 GPU
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_init: simdgroup reduction   = true
ggml_metal_init: simdgroup matrix mul. = true
ggml_metal_init: has residency sets    = true
ggml_metal_init: has bfloat            = true
ggml_metal_init: use bfloat            = true
ggml_metal_init: hasUnifiedMemory      = true
ggml_metal_init: recommendedMaxWorkingSetSize  =  4294.98 MB
ggml_metal_init: loaded kernel_add                                    0x1055db980 | th_max = 1024 | th_width =   32
ggml_metal_init: ALL kernels loaded (shortened to fit in box)
set_abort_callback: call
llama_context:        CPU  output buffer size =     0.12 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 3
llama_context: max_nodes = 1024
llama_context: worst-case: n_tokens = 2048, n_seqs = 1, n_outputs = 0
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    0
/Users/runner/work/llama.cpp/llama.cpp/ggml/src/ggml.c:3023: GGML_ASSERT(ggml_can_mul_mat(a, b)) failed
