
Commit 51d1cc1

Merge pull request #583 from abhatt-rh/ms-rag
RAG-LLM GitOps Pattern on MS Azure
2 parents 812d93f + cc2a7a5

File tree: 4 files changed, +202 -2 lines changed
Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
---
title: RAG-LLM pattern on Microsoft Azure
date:
validated: false
tier: tested
summary: The RAG-LLM pattern offers a robust and scalable solution for deploying LLM-based applications with integrated retrieval capabilities on Microsoft Azure.
rh_products:
- Red Hat OpenShift Container Platform
- Red Hat OpenShift AI
partners:
- Microsoft
industries:
- General
aliases: /azure-rag-llm-gitops/
#pattern_logo:
links:
  github: https://github.com/validatedpatterns/rag-llm-gitops
  install: getting-started
  bugs: https://github.com/validatedpatterns/rag-llm-gitops/issues
  feedback: https://docs.google.com/forms/d/e/1FAIpQLScI76b6tD1WyPu2-d_9CCVDr3Fu5jYERthqLKJDUGwqBg7Vcg/viewform
---

:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

[id="about-azure-rag-llm-gitops-pattern"]
== About the RAG-LLM pattern on Microsoft Azure

The RAG-LLM GitOps Pattern offers a robust and scalable solution for deploying LLM-based applications with integrated retrieval capabilities on Microsoft Azure. By embracing GitOps principles, this pattern ensures automated, consistent, and auditable deployments. It streamlines the setup of complex LLM architectures, allowing users to focus on application development rather than intricate infrastructure provisioning.

[id="solution-elements-and-technologies"]
== Solution elements and technologies

The RAG-LLM pattern on Microsoft Azure leverages the following key technologies and components:

* **{rh-ocp} on Microsoft Azure**: The foundation for container orchestration and application deployment.
* **Microsoft SQL Server**: The default relational database backend for storing vector embeddings.
* **Hugging Face Models**: Used for both embedding generation and large language model inference.
* **{rh-gitops}**: The primary driver for automated deployment and continuous synchronization of the pattern's components.
* **{rhoai}**: Provides the optimized inference engine for large language models, deployed on GPU-enabled nodes.
* **Node Feature Discovery (NFD) Operator**: A Kubernetes add-on for detecting hardware features and system configuration.
* **NVIDIA GPU Operator**: Uses the Operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision GPUs.
Lines changed: 124 additions & 0 deletions
@@ -0,0 +1,124 @@
---
title: Getting Started
weight: 10
aliases: /getting-started/
---

:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

[id="installing-rag-llm-azure-pattern"]
== Installing the RAG-LLM GitOps Pattern on Microsoft Azure

.Prerequisites

* You are logged in to an existing Red Hat OpenShift cluster on Microsoft Azure with administrative privileges.
* Your Azure subscription has the required GPU quota to provision the necessary compute resources for the vLLM inference service. The default VM size is `Standard_NC8as_T4_v3`, which requires at least 8 vCPUs. See the example quota check after this list.
* You have a Hugging Face token.
* You have a database server:
** Microsoft SQL Server: the default vector database for deploying the RAG-LLM pattern on Azure.
** (Optional) Local databases: you can instead deploy Redis, PostgreSQL (EDB), or Elasticsearch (ELASTIC) directly within your cluster. If you choose a local database, ensure that it is provisioned and accessible before deployment.
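
To verify the quota before you start, you can query your region's compute usage with the Azure CLI. A minimal sketch, assuming the `eastus` region and the NCASv3_T4 VM family that backs `Standard_NC8as_T4_v3`; substitute your own region and VM family:

[source,terminal]
----
# List vCPU usage and limits for the region, then filter for the T4 family
az vm list-usage --location eastus --output table | grep -i "NCASv3_T4"
----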

[IMPORTANT]
====
* To select your database type, edit the `overrides/values-Azure.yaml` file:
+
[source,yaml]
----
global:
  db:
    type: "MSSQL" # Options: MSSQL, AZURESQL, REDIS, EDB, ELASTIC
----

* When choosing local database instances such as Redis, PostgreSQL, or Elasticsearch, ensure that your cluster has sufficient resources available.
====

[id="overview-of-the-installation-workflow_{context}"]
== Overview of the installation workflow

To install the RAG-LLM GitOps Pattern on Microsoft Azure, you must complete the following setup and configuration tasks:

* xref:creating-huggingface-token[Create a Hugging Face token]
* xref:creating-secret-credentials[Create required secrets]
* xref:provisioning-gpu-nodes[Create GPU nodes]
* xref:deploy-rag-llm-azure-pattern[Install the RAG-LLM GitOps Pattern on Microsoft Azure]
[id="creating-huggingface-token_{context}"]
49+
=== Creating a Hugging Face token
50+
.Procedure
51+
52+
. To obtain a Hugging Face token, navigate to the link:https://huggingface.co/settings/tokens[Hugging Face] site.
53+
. Log in to your account.
54+
. Go to your *Settings* -> *Access Tokens*.
55+
. Create a new token with appropriate permissions. Ensure you accept the terms of the specific model you plan to use, as required by Hugging Face. For example, Mistral-7B-Instruct-v0.3-AWQ
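
To confirm that the token is valid before you deploy, you can query the Hugging Face `whoami` endpoint. A quick sketch, assuming `curl` is installed and `<hf_your_huggingface_token>` is replaced with your token:

[source,terminal]
----
# Returns your account details if the token is valid; an error otherwise
curl -s -H "Authorization: Bearer <hf_your_huggingface_token>" \
  https://huggingface.co/api/whoami-v2
----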

[id="creating-secret-credentials_{context}"]
=== Creating secret credentials

To securely store your sensitive credentials, create a YAML file named `~/values-secret-rag-llm-gitops.yaml`. This file is used during the pattern deployment; however, you must not commit it to your Git repository.

[source,yaml]
----
# ~/values-secret-rag-llm-gitops.yaml
# Replace placeholders with your actual credentials
version: "2.0"

secrets:
  - name: hfmodel
    fields:
    - name: hftoken <1>
      value: <hf_your_huggingface_token>
  - name: mssql
    fields:
    - name: sa-pass <2>
      value: <password_for_sa_user>
----
<1> Specify your Hugging Face token.
<2> Specify the system administrator password for the MS SQL Server instance.
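
The install target reads this file automatically during deployment. If you update the secrets later, you can reload them on their own; a sketch, assuming the standard validated patterns `load-secrets` target is available in this pattern's version:

[source,terminal]
----
./pattern.sh make load-secrets
----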

[id="provisioning-gpu-nodes_{context}"]
=== Provisioning GPU nodes

The vLLM inference service requires dedicated GPU nodes with a specific taint. You can provision these nodes by using one of the following methods:

Automatic provisioning:: The pattern includes capabilities to automatically provision GPU-enabled `MachineSet` resources.
+
Run the following command to create a single `Standard_NC8as_T4_v3` GPU node:
+
[source,terminal]
----
./pattern.sh make create-gpu-machineset-azure
----

Customizable method:: For environments requiring more granular control, you can manually create a `MachineSet` with the necessary GPU instance types and apply the required taint.
+
To control GPU node specifics, provide additional parameters:
+
[source,terminal]
----
./pattern.sh make create-gpu-machineset-azure GPU_REPLICAS=3 OVERRIDE_ZONE=2 GPU_VM_SIZE=Standard_NC16as_T4_v3
----
+
where:
+
- `GPU_REPLICAS` is the number of GPU nodes to provision.
- `OVERRIDE_ZONE` (optional) is the availability zone in which to create the nodes.
- `GPU_VM_SIZE` is the Azure VM SKU for the GPU nodes.
+
The script automatically applies the required taint. The NVIDIA GPU Operator that is installed by the pattern manages the CUDA driver installation on GPU nodes.
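
After provisioning, you can confirm that the new `MachineSet` scaled up and that the nodes joined the cluster. A sketch of the checks; the `nvidia.com/gpu.present` label is applied by GPU feature discovery after the operator starts, so allow a few minutes for it to appear:

[source,terminal]
----
# Check that the GPU MachineSet reports the desired and ready replica counts
oc get machinesets -n openshift-machine-api

# List the GPU nodes once feature discovery has labeled them
oc get nodes -l nvidia.com/gpu.present=true

# Inspect the taint on a GPU node
oc describe node <gpu_node_name> | grep -i taint
----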

[id="deploy-rag-llm-azure-pattern_{context}"]
=== Deploying the RAG-LLM GitOps Pattern

To deploy the RAG-LLM GitOps Pattern to your Azure Red Hat OpenShift (ARO) cluster, run the following command:

[source,terminal]
----
./pattern.sh make install
----

This command initiates the GitOps-driven deployment process, which installs and configures all RAG-LLM components on your ARO cluster based on the provided values and secrets.
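
You can watch the deployment converge by listing the Argo CD applications that the pattern creates. A sketch, assuming the default {rh-gitops} namespaces; all applications should eventually report `Synced` and `Healthy`:

[source,terminal]
----
oc get applications.argoproj.io -A
----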

content/patterns/medical-diagnosis/getting-started.adoc

Lines changed: 0 additions & 2 deletions
@@ -14,8 +14,6 @@ include::modules/comm-attributes.adoc[]
 
 .Prerequisites
 
-.Prerequisites
-
 * An OpenShift cluster
 ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
 ** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*.
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
:_content-type: CONCEPT
:imagesdir: ../../images

[id="configuration-options_{context}"]
= Configuration options

To tailor the deployment to your specific use case, data sources, and model requirements, the RAG-LLM GitOps pattern offers extensive configuration options through its Helm chart values.

[id="document-sources-for-rag-db-population_{context}"]
== Document sources for RAG DB population

To populate your vector database with relevant documents, you can specify various sources within the pattern's configuration. This is typically managed under the `populateDbJob` section of the Helm values, as shown in the sketch after this list.

* Git repository sources (`populateDbJob.repoSources`): Specify documents from Git repositories. You can use glob patterns to include or exclude specific file types from these repositories.
+
[TIP]
====
To optimize retrieval quality and performance, restrict Git repository sources to file types that are suitable for semantic search (for example, `.txt`, `.md`, `.pdf`, `.json`). Avoid including binary files or irrelevant content that could degrade search accuracy.
====

* Web page sources (`populateDbJob.webSources`): Include content directly from specified web pages.
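
A minimal sketch of what such a configuration might look like in the Helm values. The `populateDbJob.repoSources` and `populateDbJob.webSources` keys come from this pattern; the inner field names and example URLs are illustrative assumptions, so check your pattern version's chart for the exact schema:

[source,yaml]
----
populateDbJob:
  repoSources:
    # Hypothetical example: index Markdown and PDF files from a docs repository
    - repo: https://github.com/example-org/product-docs
      globs:
        - "**/*.md"
        - "**/*.pdf"
  webSources:
    # Hypothetical example: index a single web page
    - https://www.example.com/release-notes.html
----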

[id="embedding-and-llm-inference-models_{context}"]
== Embedding and LLM inference models

The models used for generating embeddings and performing LLM inference are defined in the `values-global.yaml` file:

* LLM inference model: Configured under `global.model.vllm`. This specifies the Hugging Face model identifier for the large language model.
* Embedding model: Configured under `global.model.embedding`. This specifies the Hugging Face model identifier for the text embedding model.

Both models must be compatible with the Hugging Face ecosystem. When deploying in cloud environments such as Azure, carefully consider the VRAM requirements of your chosen models to ensure that your provisioned GPU nodes have sufficient memory for optimal performance and to avoid resource contention.
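
For reference, a sketch of the corresponding `values-global.yaml` entries. The `global.model.vllm` and `global.model.embedding` keys come from this pattern; the model identifiers are illustrative, with the LLM taken from the example earlier in this documentation (supply the publishing organization prefix required by Hugging Face) and the embedding model a common Hugging Face choice:

[source,yaml]
----
global:
  model:
    # LLM served by the vLLM inference service
    vllm: <org>/Mistral-7B-Instruct-v0.3-AWQ
    # Model used to generate vector embeddings for the RAG database
    embedding: sentence-transformers/all-mpnet-base-v2
----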

.Additional resources
* link:https://validatedpatterns.io/blog/2025-06-10-rag-llm-gitops-configuration/[How to Configure the RAG-LLM GitOps Pattern for Your Use Case]
