44 changes: 44 additions & 0 deletions content/patterns/azure-rag-llm-gitops/_index.adoc
@@ -0,0 +1,44 @@
---
title: RAG-LLM pattern on Microsoft Azure
date:
validated: false
tier: tested
summary: The RAG-LLM pattern offers a robust and scalable solution for deploying LLM-based applications with integrated retrieval capabilities on Microsoft Azure.
rh_products:
- Red Hat OpenShift Container Platform
- Red Hat OpenShift AI
partners:
- Microsoft
industries:
- General
aliases: /azure-rag-llm-gitops/
#pattern_logo:
links:
github: https://github.com/validatedpatterns/rag-llm-gitops
install: getting-started
bugs: https://github.com/validatedpatterns/rag-llm-gitops/issues
feedback: https://docs.google.com/forms/d/e/1FAIpQLScI76b6tD1WyPu2-d_9CCVDr3Fu5jYERthqLKJDUGwqBg7Vcg/viewform
---

:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

[id="about-azure-rag-llm-gitops-pattern"]
== About the RAG-LLM pattern on Microsoft Azure

The RAG-LLM GitOps Pattern offers a robust and scalable solution for deploying LLM-based applications with integrated retrieval capabilities on Microsoft Azure. By embracing GitOps principles, this pattern ensures automated, consistent, and auditable deployments. It streamlines the setup of complex LLM architectures, allowing users to focus on application development rather than intricate infrastructure provisioning.

[id="solution-elements-and-technologies"]
== Solution elements and technologies

The RAG-LLM pattern on Microsoft Azure leverages the following key technologies and components:

* **{rh-ocp} on Microsoft Azure**: The foundation for container orchestration and application deployment.
* **Microsoft SQL Server**: The default relational database backend for storing vector embeddings.
* **Hugging Face Models**: Used for both embedding generation and large language model inference.
* **{rh-gitops}**: The primary driver for automated deployment and continuous synchronization of the pattern's components.
* **{rhoai}**: Provides the platform for serving the large language model through an optimized vLLM inference engine deployed on GPU-enabled nodes.
* **Node Feature Discovery (NFD) Operator**: A Kubernetes add-on for detecting hardware features and system configuration.
* **NVIDIA GPU Operator**: Uses the Operator Framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs.
124 changes: 124 additions & 0 deletions content/patterns/azure-rag-llm-gitops/az-ragllm-getting-started.adoc
@@ -0,0 +1,124 @@
---
title: Getting Started
weight: 10
aliases: /getting-started/
---

:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

[id="installing-rag-llm-azure-pattern"]
== Installing the RAG-LLM GitOps Pattern on Microsoft Azure

.Prerequisites

* You are logged in to an existing Red Hat OpenShift cluster on Microsoft Azure with administrative privileges.
* Your Azure subscription has the GPU quota required to provision compute resources for the vLLM inference service. The default VM size is Standard_NC8as_T4_v3, which requires a quota of at least 8 vCPUs.
* A Hugging Face token.
* A database server:
** Microsoft SQL Server: the default vector database for the RAG-LLM pattern on Azure.
** (Optional) Local databases: you can instead deploy Redis, PostgreSQL (EDB), or Elasticsearch (ELASTIC) directly within your cluster. If you choose a local database, ensure that it is provisioned and accessible before deployment.

[IMPORTANT]
====
* To select your database type, edit the `overrides/values-Azure.yaml` file:
+
[source,yaml]
----
global:
  db:
    type: "MSSQL" # Options: MSSQL, AZURESQL, REDIS, EDB, ELASTIC
----


* When choosing local database instances such as Redis, PostgreSQL, or Elasticsearch, ensure that your cluster has sufficient resources available.
====

[id="overview-of-the-installation-workflow_{context}"]
== Overview of the installation workflow
To install the RAG-LLM GitOps Pattern on Microsoft Azure, you must complete the following setup and configurations:

* xref:creating-huggingface-token[Create a Hugging Face token]
* xref:creating-secret-credentials[Create required secrets]
* xref:provisioning-gpu-nodes[Create GPU nodes]
* xref:deploy-rag-llm-azure-pattern[Install the RAG-LLM GitOps Pattern on Microsoft Azure]

[id="creating-huggingface-token_{context}"]
=== Creating a Hugging Face token
.Procedure

. To obtain a Hugging Face token, navigate to the link:https://huggingface.co/settings/tokens[Hugging Face] site.
. Log in to your account.
. Go to your *Settings* -> *Access Tokens*.
. Create a new token with the appropriate permissions. Ensure that you accept the terms of the specific model that you plan to use, as required by Hugging Face, for example, Mistral-7B-Instruct-v0.3-AWQ.

[id="creating-secret-credentials_{context}"]
=== Creating secret credentials

To securely store your sensitive credentials, create a YAML file named `~/values-secret-rag-llm-gitops.yaml`. This file is used during the pattern deployment; however, you must not commit it to your Git repository.

[source,yaml]
----
# ~/values-secret-rag-llm-gitops.yaml
# Replace placeholders with your actual credentials
version: "2.0"

secrets:
  - name: hfmodel
    fields:
      - name: hftoken <1>
        value: <hf_your_huggingface_token>
  - name: mssql
    fields:
      - name: sa-pass <2>
        value: <password_for_sa_user>
----
<1> Specify your Hugging Face token.
<2> Specify the system administrator password for the MS SQL Server instance.
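Before installing, it can help to confirm that no placeholder values remain in the secrets file. The following standalone Python sketch is a hypothetical helper, not part of the pattern tooling; it flags any `value:` entries still wrapped in angle brackets:

[source,python]
----
# Minimal pre-install sanity check (hypothetical helper, not part of the
# pattern tooling): flag any "value:" entries in the secrets file that still
# contain an unreplaced <placeholder>.
import os
import re

def unreplaced_placeholders(path):
    """Return (line_number, line) pairs whose value is still a <placeholder>."""
    hits = []
    with open(path) as f:
        for n, line in enumerate(f, start=1):
            if re.search(r"value:\s*<[^>]*>", line):
                hits.append((n, line.strip()))
    return hits

if __name__ == "__main__":
    secret_file = os.path.expanduser("~/values-secret-rag-llm-gitops.yaml")
    if os.path.exists(secret_file):
        for n, line in unreplaced_placeholders(secret_file):
            print(f"line {n}: placeholder not replaced -> {line}")
----

Any output indicates a placeholder that still needs a real credential before you run the installation.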

[id="provisioning-gpu-nodes_{context}"]
=== Provisioning GPU nodes

The vLLM inference service requires dedicated GPU nodes with a specific taint. You can provision these nodes by using one of the following methods:

Automatic Provisioning:: The pattern includes capabilities to automatically provision GPU-enabled `MachineSet` resources.
+
Run the following command to create a single `Standard_NC8as_T4_v3` GPU node:
+
[source,terminal]
----
./pattern.sh make create-gpu-machineset-azure
----

Customizable Method:: For environments requiring more granular control, you can manually create a `MachineSet` with the necessary GPU instance types and apply the required taint.
+
To control GPU node specifics, provide additional parameters:
+
[source,terminal]
----
./pattern.sh make create-gpu-machineset-azure GPU_REPLICAS=3 OVERRIDE_ZONE=2 GPU_VM_SIZE=Standard_NC16as_T4_v3
----
+
where:
+
- `GPU_REPLICAS` is the number of GPU nodes to provision.
+
- `OVERRIDE_ZONE` (optional) is the Azure availability zone in which to create the nodes.
+
- `GPU_VM_SIZE` is the Azure VM SKU to use for the GPU nodes.
+
The script automatically applies the required taint. The NVIDIA GPU Operator that is installed by the pattern manages the CUDA driver installation on GPU nodes.
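For the customizable method, a minimal sketch of the GPU-relevant fields of an Azure `MachineSet` follows. This is illustrative only: the `nvidia.com/gpu` taint key is an assumption, so compare against a `MachineSet` generated by the automatic method for the exact taint and provider settings that the pattern expects.

[source,yaml]
----
# Illustrative fragment, not a complete MachineSet manifest.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  replicas: 1
  template:
    spec:
      taints:
        - key: nvidia.com/gpu   # Assumed taint key; verify against a generated MachineSet
          value: ""
          effect: NoSchedule
      providerSpec:
        value:
          vmSize: Standard_NC8as_T4_v3
----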

[id="deploy-rag-llm-azure-pattern_{context}"]
=== Deploying the RAG-LLM GitOps Pattern

To deploy the RAG-LLM GitOps Pattern to your Azure Red Hat OpenShift (ARO) cluster, run the following command:

[source,terminal]
----
./pattern.sh make install
----

This command initiates the GitOps-driven deployment process, which installs and configures all RAG-LLM components on your ARO cluster based on the provided values and secrets.
2 changes: 0 additions & 2 deletions content/patterns/medical-diagnosis/getting-started.adoc
@@ -14,8 +14,6 @@ include::modules/comm-attributes.adoc[]

.Prerequisites

.Prerequisites

* An OpenShift cluster
** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*.
34 changes: 34 additions & 0 deletions modules/rag-llm-gitops/az-rag-llm-config-options.adoc
@@ -0,0 +1,34 @@
:_content-type: CONCEPT
:imagesdir: ../../images

[id="configuration-options_{context}"]
= Configuration options

To tailor the deployment to your specific use case, data sources, and model requirements, the RAG-LLM GitOps pattern offers extensive configuration options through its Helm chart values.

[id="document-sources-for-rag-db-population_{context}"]
== Document sources for RAG DB population

To populate your vector database with relevant documents, you can specify various sources within the pattern's configuration. This is typically managed under the `populateDbJob` section of the Helm values.

* Git Repository Sources (`populateDbJob.repoSources`): Specify documents from Git repositories. You can use glob patterns to include or exclude specific file types from these repositories.
+
[TIP]
====
To optimize retrieval quality and performance, restrict Git repository sources to file types that are suitable for semantic search, such as `.txt`, `.md`, `.pdf`, and `.json`. Avoid including binary files or irrelevant content that could degrade search accuracy.
====

* Web Page Sources (`populateDbJob.webSources`): Include content directly from specified web pages.
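A hedged sketch of these values might look like the following. The sub-keys (`repo`, `globs`, `url`) and the repository URL are illustrative assumptions; confirm the exact structure against the chart's default values file.

[source,yaml]
----
# Illustrative shape only -- verify key names against the chart defaults.
populateDbJob:
  repoSources:
    - repo: https://github.com/example-org/product-docs  # hypothetical repository
      globs:
        - "**/*.md"
        - "**/*.pdf"
  webSources:
    - url: https://example.com/faq.html  # hypothetical page
----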

[id="embedding-and-llm-inference-models_{context}"]
== Embedding and LLM inference models

The models used for generating embeddings and performing LLM inference are defined in the `values-global.yaml` file:

* **LLM inference model**: Configured under `global.model.vllm`. This specifies the Hugging Face model identifier for the large language model.
* **Embedding model**: Configured under `global.model.embedding`. This specifies the Hugging Face model identifier for the text embedding model.

Both models should be compatible with the Hugging Face ecosystem. When deploying in cloud environments such as Azure, carefully consider the VRAM requirements of your chosen models to ensure that your provisioned GPU nodes have sufficient memory for optimal performance and to avoid resource contention.
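A minimal sketch of this section of `values-global.yaml` follows; the model identifiers shown are examples only, not the pattern defaults:

[source,yaml]
----
# Example model configuration -- substitute the Hugging Face model
# identifiers that match your use case and GPU memory budget.
global:
  model:
    vllm: mistralai/Mistral-7B-Instruct-v0.3
    embedding: sentence-transformers/all-mpnet-base-v2
----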

.Additional resource
* link:https://validatedpatterns.io/blog/2025-06-10-rag-llm-gitops-configuration/[How to Configure the RAG-LLM GitOps Pattern for Your Use Case]