
Build with hipBLAS failed after recent changes  #4525

@sorasoras

Description


Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I had compiled successfully many times before, but after the changes in b1658 I was no longer able to build.

Current Behavior

Commands used:

```
cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_CUDA_DMMV_X=64 -DLLAMA_CUDA_MMV_Y=4 -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"

cmake --build . -j 16
```

See the attached build_log.txt for the full output.

Windows 11, full ROCm SDK 5.5

cmake version 3.26.4

GNU Make 4.4.1
Built for x86_64-w64-mingw32

g++.exe (MinGW-W64 x86_64-msvcrt-posix-seh, built by Brecht Sanders) 13.1.0
Copyright (C) 2023 Free Software Foundation, Inc.


# Failure Information (for bugs)

Please help provide information about the failure / bug.


# Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

1. Configure:
   ```
   cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_CUDA_DMMV_X=64 -DLLAMA_CUDA_MMV_Y=4 -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"
   ```
2. Build:
   ```
   cmake --build . --config Release
   ```

# Failure Logs
```
FAILED: CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj
C:\PROGRA~1\AMD\ROCm\5.5\bin\CLANG_~1.EXE -DGGML_CUDA_DMMV_X=128 -DGGML_CUDA_MMV_Y=4 -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -DK_QUANTS_PER_ITERATION=2 -D_CRT_SECURE_NO_WARNINGS -D_XOPEN_SOURCE=600 -D__HIP_PLATFORM_AMD__=1 -D__HIP_PLATFORM_HCC__=1 -isystem "C:/Program Files/AMD/ROCm/5.5/include" -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -std=gnu++14 -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -x hip --offload-arch=gfx1100 -MD -MT CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj -MF CMakeFiles\ggml-rocm.dir\ggml-cuda.cu.obj.d -o CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj -c "W:/git/New folder/llama.cpp/ggml-cuda.cu"
W:/git/New folder/llama.cpp/ggml-cuda.cu:8394:5: error: unknown type name 'cublasComputeType_t'
    cublasComputeType_t cu_compute_type = CUBLAS_COMPUTE_16F;
    ^
W:/git/New folder/llama.cpp/ggml-cuda.cu:8395:5: error: unknown type name 'cudaDataType_t'
    cudaDataType_t      cu_data_type    = CUDA_R_16F;
    ^
2 errors generated when compiling for gfx1100.
```
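For context: `cublasComputeType_t` and `cudaDataType_t` are cuBLAS/CUDA names, and the HIP build of `ggml-cuda.cu` compiles by aliasing CUDA identifiers to hipBLAS equivalents with a block of `#define`s guarded by `GGML_USE_HIPBLAS`. The new code at line 8394 apparently introduced these two types without corresponding aliases. A possible workaround, sketched below, is to extend that alias block; the exact mappings are my assumption, modeled on the aliases already present for other cuBLAS names (ROCm 5.5's hipBLAS has no separate compute-type enum, so both types would map to `hipblasDatatype_t`):

```cpp
// Hedged sketch: candidate additions to the existing GGML_USE_HIPBLAS
// alias block near the top of ggml-cuda.cu. These mappings are assumptions,
// not the project's confirmed fix.
#if defined(GGML_USE_HIPBLAS)
#define cublasComputeType_t hipblasDatatype_t // hipBLAS reuses its data-type enum for compute type
#define cudaDataType_t      hipblasDatatype_t
#define CUBLAS_COMPUTE_16F  HIPBLAS_R_16F
#define CUBLAS_COMPUTE_32F  HIPBLAS_R_32F
#define CUDA_R_16F          HIPBLAS_R_16F
#endif
```

With aliases like these in place, the declarations at lines 8394–8395 would compile unchanged under the HIP path, the same way the rest of the file handles cuBLAS names.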


[build_log.txt](https://github.com/ggerganov/llama.cpp/files/13707336/build_log.txt)
