Build talk-llama error: no member named 'n_gpu_layers' in 'llama_context_params' #1436

@mattlinares

Description

Hi all, I'm following the build instructions for my MacBook Pro M2 here: https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk-llama

and I'm getting the error: no member named 'n_gpu_layers' in 'llama_context_params'.

Any ideas? Thanks

technical@Matts-MacBook-Pro ~/c/whisper.cpp (master)> make talk-llama                       (base)
I whisper.cpp build info:
I UNAME_S:  Darwin
I UNAME_P:  arm
I UNAME_M:  arm64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
I CC:       Apple clang version 15.0.0 (clang-1500.0.40.1)
I CXX:      Apple clang version 15.0.0 (clang-1500.0.40.1)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml.c -o ggml.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-alloc.c -o ggml-alloc.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-backend.c -o ggml-backend.o
cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL   -c ggml-quants.c -o ggml-quants.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL -c whisper.cpp -o whisper.o
cc -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL -c ggml-metal.m -o ggml-metal.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL examples/talk-llama/talk-llama.cpp examples/talk-llama/llama.cpp examples/common.cpp examples/common-ggml.cpp examples/common-sdl.cpp ggml.o ggml-alloc.o ggml-backend.o ggml-quants.o whisper.o ggml-metal.o -o talk-llama `sdl2-config --cflags --libs`  -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
examples/talk-llama/talk-llama.cpp:280:18: error: no member named 'n_gpu_layers' in 'llama_context_params'
        lcparams.n_gpu_layers = 0;
        ~~~~~~~~ ^
examples/talk-llama/talk-llama.cpp:401:9: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
    if (llama_eval(ctx_llama, embd_inp.data(), embd_inp.size(), 0)) {
        ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
examples/talk-llama/talk-llama.cpp:584:29: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
                        if (llama_eval(ctx_llama, embd.data(), embd.size(), n_past)) {
                            ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
2 warnings and 1 error generated.
make: *** [talk-llama] Error 1
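
This looks like the example lagging behind the bundled header: upstream llama.cpp split the old llama_context_params into llama_model_params and llama_context_params, and n_gpu_layers moved to the model params, so the `lcparams.n_gpu_layers = 0;` line in talk-llama.cpp no longer compiles against the newer llama.h. The two warnings are the same story: llama_eval was deprecated in favor of llama_decode. Below is a minimal sketch of what the setup looks like against that newer header, assuming the signatures from llama.h of that era (llama_backend_init still taking a NUMA flag, the four-argument llama_batch_get_one); the model path, context size, and token values are placeholders, not the example's real ones:

```cpp
// Sketch only: mirrors what talk-llama.cpp's init would look like against
// the post-split llama.h. Paths and values below are placeholders.
#include "llama.h"

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    llama_backend_init(false); // this era's signature takes a NUMA flag

    // n_gpu_layers now lives in llama_model_params, not llama_context_params
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 0;

    llama_model * model = llama_load_model_from_file("models/llama-13b.gguf", mparams);
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    llama_context_params lcparams = llama_context_default_params();
    lcparams.n_ctx = 2048; // no n_gpu_layers here any more
    llama_context * ctx_llama = llama_new_context_with_model(model, lcparams);

    // deprecated: llama_eval(ctx_llama, embd_inp.data(), embd_inp.size(), 0)
    // replacement: llama_decode over a llama_batch
    std::vector<llama_token> embd_inp = { 1 }; // placeholder; real code tokenizes the prompt
    llama_batch batch = llama_batch_get_one(embd_inp.data(), (int32_t) embd_inp.size(), 0, 0);
    if (llama_decode(ctx_llama, batch) != 0) {
        fprintf(stderr, "llama_decode failed\n");
        return 1;
    }

    llama_free(ctx_llama);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

Since the example's llama.cpp copy is synced from upstream periodically, pulling the latest master, where talk-llama.cpp and llama.h agree again, is likely simpler than patching by hand.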

Metadata

Labels: bug (Something isn't working)
