Change GPU plugin's behavior as Level Zero's default hierarchy mode changed from composite to flat #1601

Merged 4 commits on Dec 8, 2023

234 changes: 44 additions & 190 deletions cmd/gpu_plugin/README.md

Large diffs are not rendered by default.

24 changes: 24 additions & 0 deletions cmd/gpu_plugin/advanced-install.md
@@ -0,0 +1,24 @@
# Alternative installation methods for Intel GPU plugin

## Install to all nodes

If the target cluster does not have NFD (or you don't want to install it), the Intel GPU plugin can be installed on all nodes. This installation method consumes a small amount of unnecessary CPU resources on nodes without Intel GPUs.

```bash
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref=<RELEASE_VERSION>'
```

## Install to nodes via NFD, with Monitoring and Shared-dev

The Intel GPU plugin is installed using NFD's labels and node selector, and is configured with monitoring and shared devices enabled. This option is useful when you want to retrieve GPU metrics from nodes, for example with [XPU-Manager](https://github.com/intel/xpumanager/) or [collectd](https://github.com/collectd/collectd/tree/collectd-6.0).

```bash
# Start NFD - if your cluster doesn't have NFD installed yet
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'

# Create NodeFeatureRules for detecting GPUs on nodes
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'

# Create GPU plugin daemonset
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/monitoring_shared-dev_nfd/?ref=<RELEASE_VERSION>'
```
80 changes: 80 additions & 0 deletions cmd/gpu_plugin/driver-firmware.md
@@ -0,0 +1,80 @@
# Driver and firmware for Intel GPUs

Access to a GPU device requires firmware, kernel and user-space
drivers that support it. Firmware and the kernel driver need to be on
the host; user-space drivers go in the GPU workload containers.

Intel GPU devices supported by the current kernel can be listed with:
```
$ grep i915 /sys/class/drm/card?/device/uevent
/sys/class/drm/card0/device/uevent:DRIVER=i915
/sys/class/drm/card1/device/uevent:DRIVER=i915
```

## Drivers for discrete GPUs

> **Note**: Kernel (on host) and user-space drivers (in containers)
> should be installed from the same repository as there are some
> differences between DKMS and upstream GPU driver uAPI.

### Kernel driver

#### Intel DKMS packages

The `i915` GPU driver DKMS[^dkms] package is recommended for Intel
discrete GPUs until their upstream support is complete. DKMS
package(s) can be installed from Intel package repositories for a
subset of older kernel versions used in enterprise / LTS
distributions:
https://dgpu-docs.intel.com/installation-guides/index.html

[^dkms]: [intel-gpu-i915-backports](https://github.com/intel-gpu/intel-gpu-i915-backports).

#### Upstream kernel

Support for the first Intel discrete GPUs was added to the upstream Linux kernel in v6.2
and expanded in later versions. For now, the upstream kernel still lacks support
for a few of the features available in the DKMS kernels, listed here:
https://dgpu-docs.intel.com/driver/kernel-driver-types.html

### GPU Version

PCI IDs for the Intel GPUs on a given host can be listed with:
```
$ lspci | grep -e VGA -e Display | grep Intel
88:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
8d:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
```

(`lspci` lists GPUs with display support as "VGA compatible controller",
and server GPUs without display support as "Display controller".)

A mapping between GPU PCI IDs and their Intel brand names is available here:
https://dgpu-docs.intel.com/devices/hardware-table.html

#### GPU Firmware

If your kernel build does not find the correct firmware version for
a given GPU on the host (see the `dmesg | grep i915` output), the latest
firmware versions are available upstream:
https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915

### User-space drivers

Until new enough user-space drivers (ones that also support discrete
GPUs) are available directly from distribution package repositories,
they can be installed into containers from Intel package repositories. See:
https://dgpu-docs.intel.com/installation-guides/index.html

An example container is listed in [Testing and demos](#testing-and-demos).

Validation status against *upstream* kernel is listed in the user-space drivers release notes:
* Media driver: https://github.com/intel/media-driver/releases
* Compute driver: https://github.com/intel/compute-runtime/releases

## Drivers for older (integrated) GPUs

For the older (integrated) GPUs, new enough firmware and kernel driver
are typically already included with the host OS, and new enough
user-space drivers (for the GPU containers) are available in the host
OS repositories.
64 changes: 64 additions & 0 deletions cmd/gpu_plugin/fractional.md
@@ -0,0 +1,64 @@
# GPU plugin with GPU Aware Scheduling

This is an experimental feature.

Installing the GPU plugin with [GPU Aware Scheduling](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling) (GAS) enables containers to request partial (fractional) GPU resources. For example, a Pod's container can request a share of a GPU's millicores or memory and use only a fraction of the GPU; the remaining resources can be leveraged by another container.

> *NOTE*: For this use case to work properly, all GPUs in a given node should provide an equal amount of resources,
i.e. heterogeneous GPU nodes are not supported.

> *NOTE*: Resource values are used only for scheduling workloads to nodes, not for limiting their GPU usage on the nodes. A container requesting 50% of a GPU's resources is not restricted by the kernel driver or firmware from using more than 50% of those resources, and a container requesting 1% of the GPU could use 100% of it.

## Install GPU Aware Scheduling

GAS' installation is described in its [README](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling#usage-with-nfd-and-the-gpu-plugin).

## Install GPU plugin with fractional resources

### With yaml deployments

The GPU plugin DaemonSet needs additional RBAC permissions and access to the kubelet podresources
gRPC service in order to function. All the required changes are gathered in the `fractional_resources`
overlay. Install the GPU plugin by running:

```bash
# Start NFD - if your cluster doesn't have NFD installed yet
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'

# Create NodeFeatureRules for detecting GPUs on nodes
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'

# Create GPU plugin daemonset
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/fractional_resources?ref=<RELEASE_VERSION>'
```

### With Device Plugin Operator

Install the Device Plugin Operator according to the [install](../operator/README.md#installation) instructions. When applying the [GPU plugin Custom Resource](../../deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml) (CR), set the `resourceManager` option to `true`. The Operator will install all the required RBAC objects and service accounts.

```yaml
spec:
resourceManager: true
```

## Details about fractional resources

Use of fractional GPU resources requires that the cluster has node extended resources with the name prefix `gpu.intel.com/`. Those are created automatically by the GPU plugin with the help of NFD. When fractional resources are enabled, the plugin lets GAS make card selection decisions based on resource availability and the amount of extended resources requested in the [pod spec](https://github.com/intel/platform-aware-scheduling/blob/master/gpu-aware-scheduling/docs/usage.md#pods).
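
As a sketch of how such a request can look, the fragment below asks for one whole GPU device plus fractional extended resources. The fractional resource names used here (`gpu.intel.com/millicores`, `gpu.intel.com/memory.max`) are assumptions based on GAS usage documentation; check the pod spec link above for the names your GAS version expects.

```yaml
# Hypothetical container resource request for fractional GPU scheduling.
# Fractional resource names are assumptions; verify them against the GAS docs.
resources:
  limits:
    gpu.intel.com/i915: 1          # one GPU device file exposed to the container
    gpu.intel.com/millicores: 500  # schedule onto a card with at least 50% capacity free
    gpu.intel.com/memory.max: 4G   # schedule onto a card with at least 4G of GPU memory free
```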

GAS then annotates the pod objects with unique, increasing numeric timestamps in the `gas-ts` annotation and with the container card selections in the `gas-container-cards` annotation. The latter uses '`|`' as the container separator and '`,`' as the card separator. For example, for a pod with two containers where each container gets two cards: `gas-container-cards:card0,card1|card2,card3`.
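
On the Pod object, the GAS-written annotations for the example above would appear roughly as follows (the timestamp value is purely illustrative):

```yaml
metadata:
  annotations:
    gas-ts: "1701993600000000000"                   # illustrative scheduling timestamp
    gas-container-cards: "card0,card1|card2,card3"  # first container | second container
```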

Enabling fractional resource support in the plugin without running GAS in the cluster will only slow down GPU deployments, so do not enable this feature unnecessarily.

## Tile level access and Level Zero workloads

The Level Zero library supports targeting different tiles on a GPU. If the host is equipped with multi-tile GPU devices and the container requests both `gpu.intel.com/i915` and `gpu.intel.com/tiles` resources, the GPU plugin (with GAS) adds an [affinity mask](https://spec.oneapi.io/level-zero/latest/core/PROG.html#affinity-mask) to the container. By default the mask is in the "FLAT" [device hierarchy](https://spec.oneapi.io/level-zero/latest/core/PROG.html#device-hierarchy) format. With the affinity mask, two Level Zero workloads can share a two-tile GPU so that each workload uses one tile.
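
For example, a container meant to take one tile of a shared two-tile GPU could request resources along these lines (a sketch using the resource names mentioned above):

```yaml
# Sketch: request one GPU device and a single tile on it.
resources:
  limits:
    gpu.intel.com/i915: 1
    gpu.intel.com/tiles: 1
```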

If a multi-tile workload is intended to work in "COMPOSITE" hierarchy mode, the container spec's environment should include the hierarchy mode variable (`ZE_FLAT_DEVICE_HIERARCHY`) with the value "COMPOSITE". The GPU plugin will then adapt the affinity mask from the default "FLAT" format to the "COMPOSITE" format.
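
A minimal sketch of the corresponding container environment entry:

```yaml
# Tells Level Zero (and the GPU plugin) that the workload expects COMPOSITE hierarchy.
env:
- name: ZE_FLAT_DEVICE_HIERARCHY
  value: "COMPOSITE"
```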

If the GPU is a single-tile device, the GPU plugin does not set the affinity mask; exposing the GPU devices is enough in that case.

### Details about tile resources

GAS makes the GPU and tile selection based on the Pod's resource specification. The selection is passed to the GPU plugin via the Pod's annotations.

Tiles targeted for containers are specified to the Pod via the `gas-container-tiles` annotation, where the annotation value describes a set of card and tile combinations. For example, in a two-container pod the annotation could be `gas-container-tiles:card0:gt0+gt1|card1:gt1,card2:gt0`. Similarly to `gas-container-cards`, the container details are separated by '`|`'. In the example above, the first container gets tiles 0 and 1 from card 0, and the second container gets tile 1 from card 1 and tile 0 from card 2.
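
To tie the two annotations together, a hypothetical two-container pod matching the tile example above could carry both annotations like this (values are illustrative):

```yaml
metadata:
  annotations:
    # Container 1: card0 with tiles gt0 and gt1. Container 2: card1 (tile gt1) and card2 (tile gt0).
    gas-container-cards: "card0|card1,card2"
    gas-container-tiles: "card0:gt0+gt1|card1:gt1,card2:gt0"
```
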
42 changes: 28 additions & 14 deletions cmd/gpu_plugin/gpu_plugin.go
@@ -403,6 +403,29 @@ func (dp *devicePlugin) isCompatibleDevice(name string) bool {
return true
}

func (dp *devicePlugin) devSpecForDrmFile(drmFile string) (devSpec pluginapi.DeviceSpec, devPath string, err error) {
if dp.controlDeviceReg.MatchString(drmFile) {
//Skipping possible drm control node
err = os.ErrInvalid

return
}

devPath = path.Join(dp.devfsDir, drmFile)
if _, err = os.Stat(devPath); err != nil {
return
}

// even querying metrics requires device to be writable
devSpec = pluginapi.DeviceSpec{
HostPath: devPath,
ContainerPath: devPath,
Permissions: "rw",
}

return
}

func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
files, err := os.ReadDir(dp.sysfsDir)
if err != nil {
@@ -413,6 +436,7 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {

devTree := dpapi.NewDeviceTree()
rmDevInfos := rm.NewDeviceInfoMap()
tileCounts := []uint64{}

for _, f := range files {
var nodes []pluginapi.DeviceSpec
@@ -429,25 +453,14 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {
}

isPFwithVFs := pluginutils.IsSriovPFwithVFs(path.Join(dp.sysfsDir, f.Name()))
tileCounts = append(tileCounts, labeler.GetTileCount(dp.sysfsDir, f.Name()))

for _, drmFile := range drmFiles {
if dp.controlDeviceReg.MatchString(drmFile.Name()) {
//Skipping possible drm control node
devSpec, devPath, devSpecErr := dp.devSpecForDrmFile(drmFile.Name())
if devSpecErr != nil {
continue
}

devPath := path.Join(dp.devfsDir, drmFile.Name())
if _, err := os.Stat(devPath); err != nil {
continue
}

// even querying metrics requires device to be writable
devSpec := pluginapi.DeviceSpec{
HostPath: devPath,
ContainerPath: devPath,
Permissions: "rw",
}

if !isPFwithVFs {
klog.V(4).Infof("Adding %s to GPU %s", devPath, f.Name())

@@ -487,6 +500,7 @@ func (dp *devicePlugin) scan() (dpapi.DeviceTree, error) {

if dp.resMan != nil {
dp.resMan.SetDevInfos(rmDevInfos)
dp.resMan.SetTileCountPerCard(tileCounts)
}

return devTree, nil
3 changes: 3 additions & 0 deletions cmd/gpu_plugin/gpu_plugin_test.go
@@ -61,6 +61,9 @@ func (m *mockResourceManager) GetPreferredFractionalAllocation(*v1beta1.Preferre
return &v1beta1.PreferredAllocationResponse{}, &dpapi.UseDefaultMethodError{}
}

func (m *mockResourceManager) SetTileCountPerCard(counts []uint64) {
}

func createTestFiles(root string, devfsdirs, sysfsdirs []string, sysfsfiles map[string][]byte) (string, string, error) {
sysfs := path.Join(root, "sys")
devfs := path.Join(root, "dev")