
[CUDA] SYCL sees only one nvidia gpu #1167

@mfbalin

Description

commit: 80b0306

The toolchain was compiled with g++ 9.2 with CUDA support, following the instructions in the Getting Started guide, using the following configuration:

cmake -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_TARGETS_TO_BUILD="X86" \
      -DLLVM_EXTERNAL_PROJECTS="llvm-spirv;sycl" \
      -DLLVM_ENABLE_PROJECTS="clang;llvm-spirv;sycl" \
      -DLLVM_EXTERNAL_SYCL_SOURCE_DIR=$SYCL_HOME/llvm/sycl \
      -DLLVM_EXTERNAL_LLVM_SPIRV_SOURCE_DIR=$SYCL_HOME/llvm/llvm-spirv \
      -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ \
      -DLLVM_ENABLE_PROJECTS="clang;llvm-spirv;sycl;libclc" \
      -DSYCL_BUILD_PI_CUDA=ON \
      -DLLVM_TARGETS_TO_BUILD="X86;NVPTX" \
      -DLIBCLC_TARGETS_TO_BUILD="nvptx64--;nvptx64--nvidiacl" \
      $SYCL_HOME/llvm/llvm

make -j sycl-toolchain

The following code snippet was compiled with clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda-sycldevice (the full invocation is shown after the output below):

#include <CL/sycl.hpp>
#include <iostream>

int main() {
	// Enumerate every platform and list the devices it exposes.
	auto plats = cl::sycl::platform::get_platforms();
	for (std::size_t i = 0; i < plats.size(); i++) {
		std::cerr << plats[i].get_info<cl::sycl::info::platform::name>() << ":\n";
		auto devs = plats[i].get_devices();
		for (std::size_t j = 0; j < devs.size(); j++)
			std::cerr << '\t' << devs[j].get_info<cl::sycl::info::device::name>() << '\n';
	}
}

and the output was:

NVIDIA CUDA:
        Tesla V100-SXM2-32GB
SYCL host platform:
        SYCL host device
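
For reference, the snippet above was built and run with something like the following; the source file name devices.cpp is only illustrative:

clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda-sycldevice devices.cpp -o devices
./devices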

The application runs inside a Docker container, and nvidia-smi there lists 8 Tesla V100 GPUs. However, when I enumerate all devices as above, only one GPU and the SYCL host device are reported.
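
As a cross-check (a minimal sketch, assuming the CUDA toolkit headers are available inside the container), querying the CUDA runtime directly shows how many devices the driver exposes to the container:

#include <cuda_runtime.h>
#include <iostream>

int main() {
	int count = 0;
	// Ask the CUDA runtime how many devices it can see; nvidia-smi reports 8 here.
	cudaError_t err = cudaGetDeviceCount(&count);
	if (err != cudaSuccess) {
		std::cerr << "cudaGetDeviceCount failed: " << cudaGetErrorString(err) << '\n';
		return 1;
	}
	std::cerr << "CUDA runtime sees " << count << " device(s)\n";
	return 0;
}

If this prints 8 while the SYCL query above reports a single GPU, that would suggest the devices are visible inside the container and the discrepancy lies in the SYCL device enumeration rather than the Docker setup.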

I can provide further information if needed.
