Compile bug: shaderc v2025.2 causes vulkan compilation failure #15344

@AsbjornOlling

Description

Git commit

7aeee88

Operating systems

Linux

GGML backends

Vulkan

Problem description & steps to reproduce

Compiling llama.cpp for Vulkan with shaderc v2024.0 (the version currently pinned in the nix flake) works fine.
Compiling llama.cpp for Vulkan with newer versions of shaderc (e.g. the latest release, v2025.2) fails.

It has something to do with bfloat16 support: every shader that fails to compile is a bf16 matmul variant, and the error comes from shaderc's optimizer (see the log output below).

I generally reproduce this issue with the nix build system, but it also affects other build systems.
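For example, a plain CMake build should hit the same failure whenever CMake picks up glslc from shaderc v2025.2 (a minimal sketch, assuming that glslc is first on PATH):

cmake -B build -DGGML_VULKAN=ON # standard Vulkan build configuration
cmake --build build -j # should fail at the "Generate vulkan shaders" step, as in the log below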

The current flake.lock file references a commit in nixpkgs from 2024-11-19, which still includes shaderc 2024.0.

To reproduce:

nix flake update # this bumps the lockfile to the latest nixos-unstable, which contains shaderc 2025.2
nix build '.#vulkan' # this builds for vulkan - or, actually, it fails to build. error logs are included below

If I bump the nixpkgs pin and use the latest version of everything, but override the shaderc version to v2024.0 (by pulling it from nixos-25.05), the build succeeds. This demonstrates that it is specifically an issue with newer versions of shaderc.

Here is a workaround diff to flake.nix that makes the build work with the latest nixpkgs-unstable:
diff --git a/flake.nix b/flake.nix
index bb02c8e52..18bc24160 100644
--- a/flake.nix
+++ b/flake.nix
@@ -17,6 +17,7 @@
 
   inputs = {
     nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+    stable-nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
     flake-parts.url = "github:hercules-ci/flake-parts";
   };
 
@@ -152,7 +153,10 @@
             packages =
               {
                 default = config.legacyPackages.llamaPackages.llama-cpp;
-                vulkan = config.packages.default.override { useVulkan = true; };
+                vulkan = config.packages.default.override {
+                  useVulkan = true;
+                  shaderc = (import inputs.stable-nixpkgs { inherit system; }).shaderc;
+                };
                 windows = config.legacyPackages.llamaPackagesWindows.llama-cpp;
                 python-scripts = config.legacyPackages.llamaPackages.python-scripts;
               }

It seems like what's happening is that the ggml/src/ggml-vulkan/CMakeLists.txt build script tests for GL_EXT_bfloat16 support, and the test yields a false positive: it succeeds at configure time, but later, when generating the actual Vulkan shaders, compilation fails (see the compile log output below).
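The discrepancy can plausibly be checked by hand; here is a minimal sketch, assuming a hypothetical test shader (test_bf16.comp is not the actual file the CMake test compiles, and the exact GLSL may need adjusting):

cat > test_bf16.comp <<'EOF' # hypothetical minimal GL_EXT_bfloat16 shader
#version 450
#extension GL_EXT_bfloat16 : require
layout(local_size_x = 1) in;
void main() { bfloat16_t x = bfloat16_t(1.0); }
EOF

# unoptimized compile, presumably what the configure-time feature test checks - succeeds:
glslc -fshader-stage=compute --target-env=vulkan1.3 test_bf16.comp -o /dev/null

# optimized compile, with the same -O flag vulkan-shaders-gen passes (see the failing
# commands in the log) - should fail with "Invalid capability operand: 5116":
glslc -fshader-stage=compute --target-env=vulkan1.3 -O test_bf16.comp -o /dev/null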

The kind folks over at nixpkgs have also encountered this issue: NixOS/nixpkgs#409284
...and fixed it in this PR: NixOS/nixpkgs#432350
...by applying this patch, which just removes the bfloat16 shader extension support test: https://github.com/peterhoeg/nixpkgs/blob/58d9978d685190d592bd835cd2c7941a85978128/pkgs/by-name/ll/llama-cpp/disable_bfloat16.patch

I suppose we could solve it here by removing bfloat16 support completely, but that seems like a bad solution for obvious reasons.

First Bad Commit

It isn't caused by a change in llama.cpp per se, but by a change in one of llama.cpp's build-time dependencies (shaderc).

Compile command

nix flake update
nix build '.#vulkan'

Relevant log output

[I] asbjorn@furnace ~/D/llama.cpp (master)> nix flake update
warning: updating lock file '"/home/asbjorn/Development/llama.cpp/flake.lock"':
• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/506278e768c2a08bec68eb62932193e341f55c90?narHash=sha256-hgmguH29K2fvs9szpq2r3pz2/8cJd2LPS%2Bb4tfNFCwE%3D' (2024-11-01)
  → 'github:hercules-ci/flake-parts/af66ad14b28a127c5c0f3bbb298218fc63528a18?narHash=sha256-pHYj8gUBapuUzKV/kN/tR3Zvqc7o6gdFB9XKXIp1SQ8%3D' (2025-08-06)
• Updated input 'flake-parts/nixpkgs-lib':
    'https://github.com/NixOS/nixpkgs/archive/cc2f28000298e1269cea6612cd06ec9979dd5d7f.tar.gz?narHash=sha256-lXvH/vOfb4aGYyvFmZK/HlsNsr/0CVWlwYvo2rxJk3s%3D' (2024-11-01)
  → 'github:nix-community/nixpkgs.lib/0f36c44e01a6129be94e3ade315a5883f0228a6e?narHash=sha256-zvaMGVn14/Zz8hnp4VWT9xVnhc8vuL3TStRqwk22biA%3D' (2025-07-27)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/23e89b7da85c3640bbc2173fe04f4bd114342367?narHash=sha256-y/MEyuJ5oBWrWAic/14LaIr/u5E0wRVzyYsouYY3W6w%3D' (2024-11-19)
  → 'github:NixOS/nixpkgs/fbcf476f790d8a217c3eab4e12033dc4a0f6d23c?narHash=sha256-wNO3%2BKs2jZJ4nTHMuks%2BcxAiVBGNuEBXsT29Bz6HASo%3D' (2025-08-14)
[I] asbjorn@furnace ~/D/llama.cpp (master)> nix build '.#vulkan' -L
warning: Git tree '/home/asbjorn/Development/llama.cpp' is dirty
llama-cpp-vulkan> Running phase: unpackPhase
llama-cpp-vulkan> unpacking source archive /nix/store/nwxj8zg42fvyqr5lj7cjxdiprbjikz4l-source
llama-cpp-vulkan> source root is source
llama-cpp-vulkan> Running phase: patchPhase
llama-cpp-vulkan> substituteStream() in derivation llama-cpp-vulkan-0.0.0: WARNING: '--replace' is deprecated, use --replace-{fail,warn,quiet}. (file './ggml/src/ggml-metal/ggml-metal.m')
llama-cpp-vulkan> Running phase: updateAutotoolsGnuConfigScriptsPhase
llama-cpp-vulkan> Running phase: configurePhase
llama-cpp-vulkan> fixing cmake files...
llama-cpp-vulkan> cmake flags: -GNinja -DCMAKE_FIND_USE_SYSTEM_PACKAGE_REGISTRY=OFF -DCMAKE_FIND_USE_PACKAGE_REGISTRY=OFF -DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF -DCMAKE_INSTALL_LOCALEDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/share/locale -DCMAKE_INSTALL_LIBEXECDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/libexec -DCMAKE_INSTALL_LIBDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/lib -DCMAKE_INSTALL_DOCDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/share/doc/llama-cpp-vulkan -DCMAKE_INSTALL_INFODIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/share/info -DCMAKE_INSTALL_MANDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/share/man -DCMAKE_INSTALL_INCLUDEDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/include -DCMAKE_INSTALL_SBINDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/sbin -DCMAKE_INSTALL_BINDIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/bin -DCMAKE_INSTALL_NAME_DIR=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0/lib -DCMAKE_POLICY_DEFAULT_CMP0025=NEW -DCMAKE_FIND_FRAMEWORK=LAST -DCMAKE_STRIP=/nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/strip -DCMAKE_RANLIB=/nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/ranlib -DCMAKE_AR=/nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/ar -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_INSTALL_PREFIX=/nix/store/4qi3kx605ir8zmm20rf0929hydlhn53v-llama-cpp-vulkan-0.0.0 -DLLAMA_BUILD_SERVER:BOOL=TRUE -DBUILD_SHARED_LIBS:BOOL=TRUE -DCMAKE_SKIP_BUILD_RPATH:BOOL=TRUE -DLLAMA_CURL:BOOL=TRUE -DGGML_NATIVE:BOOL=FALSE -DGGML_BLAS:BOOL=FALSE -DGGML_CUDA:BOOL=FALSE -DGGML_HIP:BOOL=FALSE -DGGML_METAL:BOOL=FALSE -DGGML_VULKAN:BOOL=TRUE -DGGML_STATIC:BOOL=FALSE
llama-cpp-vulkan> -- The C compiler identification is GNU 14.3.0
llama-cpp-vulkan> -- The CXX compiler identification is GNU 14.3.0
llama-cpp-vulkan> -- Detecting C compiler ABI info
llama-cpp-vulkan> -- Detecting C compiler ABI info - done
llama-cpp-vulkan> -- Check for working C compiler: /nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/gcc - skipped
llama-cpp-vulkan> -- Detecting C compile features
llama-cpp-vulkan> -- Detecting C compile features - done
llama-cpp-vulkan> -- Detecting CXX compiler ABI info
llama-cpp-vulkan> -- Detecting CXX compiler ABI info - done
llama-cpp-vulkan> -- Check for working CXX compiler: /nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/g++ - skipped
llama-cpp-vulkan> -- Detecting CXX compile features
llama-cpp-vulkan> -- Detecting CXX compile features - done
llama-cpp-vulkan> CMAKE_BUILD_TYPE=Release
llama-cpp-vulkan> -- Found Git: /nix/store/5i8zvall945kypmwgqd0y47f02pldwp4-git-2.50.1/bin/git (found version "2.50.1")
llama-cpp-vulkan> fatal: not a git repository (or any parent up to mount point /)
llama-cpp-vulkan> Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
llama-cpp-vulkan> fatal: not a git repository (or any parent up to mount point /)
llama-cpp-vulkan> Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
llama-cpp-vulkan> -- Setting GGML_NATIVE_DEFAULT to OFF
llama-cpp-vulkan> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
llama-cpp-vulkan> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
llama-cpp-vulkan> -- Found Threads: TRUE
llama-cpp-vulkan> -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
llama-cpp-vulkan> -- CMAKE_SYSTEM_PROCESSOR: x86_64
llama-cpp-vulkan> -- GGML_SYSTEM_ARCH: x86
llama-cpp-vulkan> -- Including CPU backend
llama-cpp-vulkan> -- Found OpenMP_C: -fopenmp (found version "4.5")
llama-cpp-vulkan> -- Found OpenMP_CXX: -fopenmp (found version "4.5")
llama-cpp-vulkan> -- Found OpenMP: TRUE (found version "4.5")
llama-cpp-vulkan> -- x86 detected
llama-cpp-vulkan> -- Adding CPU backend variant ggml-cpu:
llama-cpp-vulkan> -- Found Vulkan: /nix/store/mcc5zsprwwrkxbvi2j1s2kvhfx9pk23a-vulkan-loader-1.4.313.0/lib/libvulkan.so (found version "1.4.313") found components: glslc missing components: glslangValidator
llama-cpp-vulkan> -- Vulkan found
llama-cpp-vulkan> -- GL_KHR_cooperative_matrix supported by glslc
llama-cpp-vulkan> -- GL_NV_cooperative_matrix2 supported by glslc
llama-cpp-vulkan> -- GL_EXT_integer_dot_product supported by glslc
llama-cpp-vulkan> -- GL_EXT_bfloat16 supported by glslc
llama-cpp-vulkan> -- Including Vulkan backend
llama-cpp-vulkan> -- ggml version: 0.0.0
llama-cpp-vulkan> -- ggml commit:  unknown
llama-cpp-vulkan> CMake Warning at common/CMakeLists.txt:32 (message):
llama-cpp-vulkan>   Git repository not found; to enable automatic generation of build info,
llama-cpp-vulkan>   make sure Git is installed and the project is a Git repository.
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> -- Found CURL: /nix/store/95iy4kvy1xllg2fpb14dhw4m3ff3l0f0-curl-8.14.1/lib/libcurl.so (found version "8.14.1")
llama-cpp-vulkan> -- Configuring done (1.3s)
llama-cpp-vulkan> -- Generating done (0.1s)
llama-cpp-vulkan> CMake Warning:
llama-cpp-vulkan>   Manually-specified variables were not used by the project:
llama-cpp-vulkan> 
llama-cpp-vulkan>     CMAKE_EXPORT_NO_PACKAGE_REGISTRY
llama-cpp-vulkan>     CMAKE_POLICY_DEFAULT_CMP0025
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> -- Build files have been written to: /build/source/build
llama-cpp-vulkan> cmake: enabled parallel building
llama-cpp-vulkan> cmake: enabled parallel installing
llama-cpp-vulkan> Running phase: buildPhase
llama-cpp-vulkan> build flags: -j16
llama-cpp-vulkan> [0/2] Re-checking globbed directories...
llama-cpp-vulkan> [1/260] Creating directories for 'vulkan-shaders-gen'
llama-cpp-vulkan> [2/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/hbm.cpp.o
llama-cpp-vulkan> [3/260] No download step for 'vulkan-shaders-gen'
llama-cpp-vulkan> [4/260] No update step for 'vulkan-shaders-gen'
llama-cpp-vulkan> [5/260] No patch step for 'vulkan-shaders-gen'
llama-cpp-vulkan> [6/260] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
llama-cpp-vulkan> [7/260] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
llama-cpp-vulkan> [8/260] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
llama-cpp-vulkan> [9/260] Performing configure step for 'vulkan-shaders-gen'
llama-cpp-vulkan> -- The C compiler identification is GNU 14.3.0
llama-cpp-vulkan> -- The CXX compiler identification is GNU 14.3.0
llama-cpp-vulkan> -- Detecting C compiler ABI info
llama-cpp-vulkan> -- Detecting C compiler ABI info - done
llama-cpp-vulkan> -- Check for working C compiler: /nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/gcc - skipped
llama-cpp-vulkan> -- Detecting C compile features
llama-cpp-vulkan> -- Detecting C compile features - done
llama-cpp-vulkan> -- Detecting CXX compiler ABI info
llama-cpp-vulkan> -- Detecting CXX compiler ABI info - done
llama-cpp-vulkan> -- Check for working CXX compiler: /nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/g++ - skipped
llama-cpp-vulkan> -- Detecting CXX compile features
llama-cpp-vulkan> -- Detecting CXX compile features - done
llama-cpp-vulkan> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
llama-cpp-vulkan> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
llama-cpp-vulkan> -- Found Threads: TRUE
llama-cpp-vulkan> -- Enabling coopmat glslc support
llama-cpp-vulkan> -- Enabling coopmat2 glslc support
llama-cpp-vulkan> -- Enabling dot glslc support
llama-cpp-vulkan> -- Enabling bfloat16 glslc support
llama-cpp-vulkan> -- Configuring done (1.2s)
llama-cpp-vulkan> -- Generating done (0.0s)
llama-cpp-vulkan> -- Build files have been written to: /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
llama-cpp-vulkan> [10/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/traits.cpp.o
llama-cpp-vulkan> [11/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o
llama-cpp-vulkan> [12/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o
llama-cpp-vulkan> [13/260] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
llama-cpp-vulkan> [14/260] Building C object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o
llama-cpp-vulkan> [15/260] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
llama-cpp-vulkan> [16/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.cpp.o
llama-cpp-vulkan> [17/260] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
llama-cpp-vulkan> [18/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/vec.cpp.o
llama-cpp-vulkan> [19/260] Building C object examples/gguf-hash/CMakeFiles/sha1.dir/deps/sha1/sha1.c.o
llama-cpp-vulkan> [20/260] Building C object examples/gguf-hash/CMakeFiles/sha256.dir/deps/sha256/sha256.c.o
llama-cpp-vulkan> [21/260] Building CXX object tools/mtmd/CMakeFiles/llama-llava-cli.dir/deprecation-warning.cpp.o
llama-cpp-vulkan> [22/260] Building CXX object tools/mtmd/CMakeFiles/llama-gemma3-cli.dir/deprecation-warning.cpp.o
llama-cpp-vulkan> [23/260] Building C object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/quants.c.o
llama-cpp-vulkan> [24/260] Building C object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/x86/quants.c.o
llama-cpp-vulkan> [25/260] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
llama-cpp-vulkan> [26/260] Linking CXX executable bin/llama-llava-cli
llama-cpp-vulkan> [27/260] Linking CXX executable bin/llama-gemma3-cli
llama-cpp-vulkan> [28/260] Building CXX object tools/mtmd/CMakeFiles/llama-minicpmv-cli.dir/deprecation-warning.cpp.o
llama-cpp-vulkan> [29/260] Linking CXX executable bin/llama-minicpmv-cli
llama-cpp-vulkan> [30/260] Building CXX object tools/mtmd/CMakeFiles/llama-qwen2vl-cli.dir/deprecation-warning.cpp.o
llama-cpp-vulkan> [31/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/llamafile/sgemm.cpp.o
llama-cpp-vulkan> [32/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/x86/repack.cpp.o
llama-cpp-vulkan> [33/260] Linking CXX executable bin/llama-qwen2vl-cli
llama-cpp-vulkan> [34/260] Building C object examples/gguf-hash/CMakeFiles/xxhash.dir/deps/xxhash/xxhash.c.o
llama-cpp-vulkan> [35/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/binary-ops.cpp.o
llama-cpp-vulkan> [36/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/unary-ops.cpp.o
llama-cpp-vulkan> [37/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/repack.cpp.o
llama-cpp-vulkan> [38/260] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
llama-cpp-vulkan> [39/260] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
llama-cpp-vulkan> [40/260] Linking CXX shared library bin/libggml-base.so
llama-cpp-vulkan> [41/260] Building CXX object ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ops.cpp.o
llama-cpp-vulkan> [42/260] Linking CXX shared library bin/libggml-cpu.so
llama-cpp-vulkan> [43/260] Performing build step for 'vulkan-shaders-gen'
llama-cpp-vulkan> [1/2] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
llama-cpp-vulkan> [2/2] Linking CXX executable vulkan-shaders-gen
llama-cpp-vulkan> [44/260] Performing install step for 'vulkan-shaders-gen'
llama-cpp-vulkan> -- Installing: /build/source/build/Release/./vulkan-shaders-gen
llama-cpp-vulkan> [45/260] Completed 'vulkan-shaders-gen'
llama-cpp-vulkan> [46/260] Generate vulkan shaders
llama-cpp-vulkan> ggml_vulkan: Generating and compiling shaders to SPIR-V
llama-cpp-vulkan> cannot compile matmul_bf16_aligned_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_bf16_aligned_cm2.spv -DACC_TYPE=float -DALIGNED=1 -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DLOAD_VEC_B=4 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_bf16_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_bf16_cm2.spv -DACC_TYPE=float -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_bf16_aligned_f16acc_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_bf16_aligned_f16acc_cm2.spv -DACC_TYPE=float16_t -DALIGNED=1 -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DLOAD_VEC_B=4 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_bf16_f16acc_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_bf16_f16acc_cm2.spv -DACC_TYPE=float16_t -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_id_bf16_aligned_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_id_bf16_aligned_cm2.spv -DACC_TYPE=float -DALIGNED=1 -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DLOAD_VEC_B=4 -DMUL_MAT_ID=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_id_bf16_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_id_bf16_cm2.spv -DACC_TYPE=float -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DMUL_MAT_ID=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_id_bf16_aligned_f16acc_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_id_bf16_aligned_f16acc_cm2.spv -DACC_TYPE=float16_t -DALIGNED=1 -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DLOAD_VEC_B=4 -DMUL_MAT_ID=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> cannot compile matmul_id_bf16_f16acc_cm2
llama-cpp-vulkan> 
llama-cpp-vulkan> /nix/store/9hlqwvrlrlxsnj002xxdcvp7siy6l0w5-shaderc-2025.2-bin/bin/glslc -fshader-stage=compute --target-env=vulkan1.3 -O /build/source/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm_cm2.comp -o /build/source/build/ggml/src/ggml-vulkan/vulkan-shaders.spv/matmul_id_bf16_f16acc_cm2.spv -DACC_TYPE=float16_t -DB_IS_FLOAT=1 -DB_TYPE=bfloat16_t -DDATA_A_BF16=1 -DD_TYPE=float -DFLOAT16=1 -DFLOAT_TYPE=bfloat16_t -DFLOAT_TYPE_VEC2=f16vec2 -DLOAD_VEC_A=1 -DMUL_MAT_ID=1 -DTO_FLOAT_TYPE=uintBitsToBFloat16EXT
llama-cpp-vulkan> 
llama-cpp-vulkan> shaderc: internal error: compilation succeeded but failed to optimize: Invalid capability operand: 5116
llama-cpp-vulkan> 
llama-cpp-vulkan> 
llama-cpp-vulkan> [47/260] Building CXX object src/CMakeFiles/llama.dir/llama-cparams.cpp.o
llama-cpp-vulkan> [48/260] Building CXX object src/CMakeFiles/llama.dir/llama-hparams.cpp.o
llama-cpp-vulkan> [49/260] Building CXX object src/CMakeFiles/llama.dir/llama-io.cpp.o
llama-cpp-vulkan> [50/260] Building CXX object src/CMakeFiles/llama.dir/llama-memory.cpp.o
llama-cpp-vulkan> [51/260] Building CXX object src/CMakeFiles/llama.dir/llama-impl.cpp.o
llama-cpp-vulkan> [52/260] Building CXX object src/CMakeFiles/llama.dir/llama.cpp.o
llama-cpp-vulkan> [53/260] Building CXX object src/CMakeFiles/llama.dir/llama-mmap.cpp.o
llama-cpp-vulkan> [54/260] Building CXX object src/CMakeFiles/llama.dir/llama-arch.cpp.o
llama-cpp-vulkan> [55/260] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o
llama-cpp-vulkan> [56/260] Building CXX object src/CMakeFiles/llama.dir/llama-adapter.cpp.o
llama-cpp-vulkan> [57/260] Building CXX object src/CMakeFiles/llama.dir/llama-memory-hybrid.cpp.o
llama-cpp-vulkan> [58/260] Building CXX object src/CMakeFiles/llama.dir/llama-kv-cache-unified-iswa.cpp.o
llama-cpp-vulkan> [59/260] Building CXX object src/CMakeFiles/llama.dir/llama-graph.cpp.o
llama-cpp-vulkan> [60/260] Building CXX object src/CMakeFiles/llama.dir/llama-batch.cpp.o
llama-cpp-vulkan> [61/260] Building CXX object src/CMakeFiles/llama.dir/llama-memory-recurrent.cpp.o
llama-cpp-vulkan> [62/260] Building CXX object src/CMakeFiles/llama.dir/llama-chat.cpp.o
llama-cpp-vulkan> [63/260] Building CXX object src/CMakeFiles/llama.dir/llama-model-saver.cpp.o
llama-cpp-vulkan> [64/260] Building CXX object src/CMakeFiles/llama.dir/llama-context.cpp.o
llama-cpp-vulkan> [65/260] Building CXX object src/CMakeFiles/llama.dir/unicode-data.cpp.o
llama-cpp-vulkan> [66/260] Building CXX object common/CMakeFiles/common.dir/console.cpp.o
llama-cpp-vulkan> [67/260] Building CXX object src/CMakeFiles/llama.dir/llama-kv-cache-unified.cpp.o
llama-cpp-vulkan> [68/260] Building CXX object src/CMakeFiles/llama.dir/llama-model-loader.cpp.o
llama-cpp-vulkan> [69/260] Building CXX object common/CMakeFiles/common.dir/llguidance.cpp.o
llama-cpp-vulkan> [70/260] Building CXX object common/CMakeFiles/common.dir/log.cpp.o
llama-cpp-vulkan> [71/260] Building CXX object src/CMakeFiles/llama.dir/llama-grammar.cpp.o
llama-cpp-vulkan> [72/260] Building CXX object common/CMakeFiles/common.dir/ngram-cache.cpp.o
llama-cpp-vulkan> [73/260] Building CXX object common/CMakeFiles/common.dir/chat-parser.cpp.o
llama-cpp-vulkan> [74/260] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o
llama-cpp-vulkan> FAILED: ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o
llama-cpp-vulkan> /nix/store/67x7pknz0qa2j16x02idf0x98lpcspah-gcc-wrapper-14.3.0/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_VULKAN_BFLOAT16_GLSLC_SUPPORT -DGGML_VULKAN_COOPMAT2_GLSLC_SUPPORT -DGGML_VULKAN_COOPMAT_GLSLC_SUPPORT -DGGML_VULKAN_INTEGER_DOT_GLSLC_SUPPORT -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_vulkan_EXPORTS -I/build/source/ggml/src/ggml-vulkan/.. -I/build/source/build/ggml/src/ggml-vulkan -I/build/source/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o -MF ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o.d -o ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o -c /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function 'void ggml_vk_load_shaders(vk_device&)':
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2360:45: error: 'matmul_bf16_cm2_len' was not declared in this scope; did you mean 'matmul_f16_cm2_len'?
llama-cpp-vulkan>  2360 |             CREATE_MM(pipeline_matmul_bf16, matmul_bf16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3)
llama-cpp-vulkan>       |                                             ^~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2345:91: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2345 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _cm2_len, NAMELC ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1);   \
llama-cpp-vulkan>       |                                                                                           ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2360:45: error: 'matmul_bf16_cm2_data' was not declared in this scope; did you mean 'matmul_f16_cm2_data'?
llama-cpp-vulkan>  2360 |             CREATE_MM(pipeline_matmul_bf16, matmul_bf16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3)
llama-cpp-vulkan>       |                                             ^~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2345:121: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2345 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _cm2_len, NAMELC ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1);   \
llama-cpp-vulkan>       |                                                                                                                         ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2360:45: error: 'matmul_bf16_aligned_cm2_len' was not declared in this scope; did you mean 'matmul_f16_aligned_cm2_len'?
llama-cpp-vulkan>  2360 |             CREATE_MM(pipeline_matmul_bf16, matmul_bf16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3)
llama-cpp-vulkan>       |                                             ^~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2348:101: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2348 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_l, #NAMELC #F16ACC "_aligned_l", NAMELC ## _aligned ## F16ACC ## _cm2_len, NAMELC ## _aligned ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, l_align);   \
llama-cpp-vulkan>       |                                                                                                     ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2360:45: error: 'matmul_bf16_aligned_cm2_data' was not declared in this scope; did you mean 'matmul_f16_aligned_cm2_data'?
llama-cpp-vulkan>  2360 |             CREATE_MM(pipeline_matmul_bf16, matmul_bf16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3)
llama-cpp-vulkan>       |                                             ^~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2348:143: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2348 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_l, #NAMELC #F16ACC "_aligned_l", NAMELC ## _aligned ## F16ACC ## _cm2_len, NAMELC ## _aligned ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, l_align);   \
llama-cpp-vulkan>       |                                                                                                                                               ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2387:48: error: 'matmul_id_bf16_cm2_len' was not declared in this scope; did you mean 'matmul_id_f16_cm2_len'?
llama-cpp-vulkan>  2387 |             CREATE_MM(pipeline_matmul_id_bf16, matmul_id_bf16, , wg_denoms, warptile, vk_mat_mat_id_push_constants, 4)
llama-cpp-vulkan>       |                                                ^~~~~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2345:91: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2345 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _cm2_len, NAMELC ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1);   \
llama-cpp-vulkan>       |                                                                                           ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2387:48: error: 'matmul_id_bf16_cm2_data' was not declared in this scope; did you mean 'matmul_id_f16_cm2_data'?
llama-cpp-vulkan>  2387 |             CREATE_MM(pipeline_matmul_id_bf16, matmul_id_bf16, , wg_denoms, warptile, vk_mat_mat_id_push_constants, 4)
llama-cpp-vulkan>       |                                                ^~~~~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2345:121: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2345 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _cm2_len, NAMELC ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1);   \
llama-cpp-vulkan>       |                                                                                                                         ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2387:48: error: 'matmul_id_bf16_aligned_cm2_len' was not declared in this scope; did you mean 'matmul_id_f16_aligned_cm2_len'?
llama-cpp-vulkan>  2387 |             CREATE_MM(pipeline_matmul_id_bf16, matmul_id_bf16, , wg_denoms, warptile, vk_mat_mat_id_push_constants, 4)
llama-cpp-vulkan>       |                                                ^~~~~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2348:101: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2348 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_l, #NAMELC #F16ACC "_aligned_l", NAMELC ## _aligned ## F16ACC ## _cm2_len, NAMELC ## _aligned ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, l_align);   \
llama-cpp-vulkan>       |                                                                                                     ^~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2387:48: error: 'matmul_id_bf16_aligned_cm2_data' was not declared in this scope; did you mean 'matmul_id_f16_aligned_cm2_data'?
llama-cpp-vulkan>  2387 |             CREATE_MM(pipeline_matmul_id_bf16, matmul_id_bf16, , wg_denoms, warptile, vk_mat_mat_id_push_constants, 4)
llama-cpp-vulkan>       |                                                ^~~~~~~~~~~~~~
llama-cpp-vulkan> /build/source/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2348:143: note: in definition of macro 'CREATE_MM'
llama-cpp-vulkan>  2348 |         ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_l, #NAMELC #F16ACC "_aligned_l", NAMELC ## _aligned ## F16ACC ## _cm2_len, NAMELC ## _aligned ## F16ACC ## _cm2_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, l_align);   \
llama-cpp-vulkan>       |                                                                                                                                               ^~~~~~
llama-cpp-vulkan> [75/260] Building CXX object src/CMakeFiles/llama.dir/llama-quant.cpp.o
llama-cpp-vulkan> [76/260] Building CXX object common/CMakeFiles/common.dir/speculative.cpp.o
llama-cpp-vulkan> [77/260] Building CXX object common/CMakeFiles/common.dir/sampling.cpp.o
llama-cpp-vulkan> [78/260] Building CXX object common/CMakeFiles/common.dir/json-partial.cpp.o
llama-cpp-vulkan> [79/260] Building CXX object src/CMakeFiles/llama.dir/llama-vocab.cpp.o
llama-cpp-vulkan> [80/260] Building CXX object src/CMakeFiles/llama.dir/llama-sampling.cpp.o
llama-cpp-vulkan> [81/260] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
llama-cpp-vulkan> [82/260] Building CXX object tools/mtmd/CMakeFiles/mtmd.dir/mtmd.cpp.o
llama-cpp-vulkan> [83/260] Building CXX object src/CMakeFiles/llama.dir/unicode.cpp.o
llama-cpp-vulkan> [84/260] Building CXX object common/CMakeFiles/common.dir/regex-partial.cpp.o
llama-cpp-vulkan> [85/260] Building CXX object common/CMakeFiles/common.dir/json-schema-to-grammar.cpp.o
llama-cpp-vulkan> [86/260] Building CXX object src/CMakeFiles/llama.dir/llama-model.cpp.o
llama-cpp-vulkan> [87/260] Building CXX object common/CMakeFiles/common.dir/arg.cpp.o
llama-cpp-vulkan> [88/260] Building CXX object common/CMakeFiles/common.dir/chat.cpp.o
llama-cpp-vulkan> [89/260] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan-shaders.cpp.o
llama-cpp-vulkan> ninja: build stopped: subcommand failed.
