From 4f29b83f71dfed33278a3f8d4eed43db0f662e35 Mon Sep 17 00:00:00 2001
From: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Date: Tue, 16 Sep 2025 00:01:05 +0000
Subject: [PATCH] Update documentation to specify C++23 compiler requirements

- Change C++17 to C++23 in .github/copilot-instructions.md (lines 19, 232)
- Update SYCL documentation to reference the C++23 standard
- Update compiler requirements to GCC 11+, Clang 12+, MSVC 2022+

Ticket: MBA-133

Co-Authored-By: Jake Cosme
---
 .github/copilot-instructions.md | 4 ++--
 docs/backend/SYCL.md            | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 3250e3279ecb6..3e50de2cc8828 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -16,7 +16,7 @@ llama.cpp is a large-scale C/C++ project for efficient LLM (Large Language Model
 ### Prerequisites
 
 - CMake 3.14+ (primary build system)
-- C++17 compatible compiler (GCC 13.3+, Clang, MSVC)
+- C++23 compatible compiler (GCC 11+, Clang 12+, MSVC 2022+)
 - Optional: ccache for faster compilation
 
 ### Basic Build (CPU-only)
@@ -229,7 +229,7 @@ Primary tools:
 ### Required Tools
 
 - CMake 3.14+ (install via system package manager)
-- Modern C++ compiler with C++17 support
+- Modern C++ compiler with C++23 support
 - Git (for submodule management)
 - Python 3.9+ with virtual environment (`.venv` is provided)
 
diff --git a/docs/backend/SYCL.md b/docs/backend/SYCL.md
index 6e9b88935da97..e13f8fb3285ae 100644
--- a/docs/backend/SYCL.md
+++ b/docs/backend/SYCL.md
@@ -15,7 +15,7 @@
 
 ## Background
 
-**SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++17.
+**SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++23.
 
 **oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to Intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include: