diff --git a/docs/.nav.yml b/docs/.nav.yml index 63976bc..30ede6f 100644 --- a/docs/.nav.yml +++ b/docs/.nav.yml @@ -15,7 +15,10 @@ nav: - Lineage Tracking: "register-and-refine/lineage-tracking.md" - Prompt Optimization: "register-and-refine/prompt-optimization.md" - Corridor Sync: "register-and-refine/corridor_sync.md" - + - Asset Registration Examples: + - Model Registration: "register-and-refine/example/model.md" + - Prompt Registration: "register-and-refine/example/prompt.md" + - Evaluate and Approve: - Overview: "evaluate-and-approve/index.md" - Simulation: "evaluate-and-approve/simulation.md" diff --git a/docs/register-and-refine/example/image-1.png b/docs/register-and-refine/example/image-1.png new file mode 100644 index 0000000..7495928 Binary files /dev/null and b/docs/register-and-refine/example/image-1.png differ diff --git a/docs/register-and-refine/example/image-19.png b/docs/register-and-refine/example/image-19.png new file mode 100644 index 0000000..3122e58 Binary files /dev/null and b/docs/register-and-refine/example/image-19.png differ diff --git a/docs/register-and-refine/example/image-2.png b/docs/register-and-refine/example/image-2.png new file mode 100644 index 0000000..895d07f Binary files /dev/null and b/docs/register-and-refine/example/image-2.png differ diff --git a/docs/register-and-refine/example/image-21.png b/docs/register-and-refine/example/image-21.png new file mode 100644 index 0000000..681dcf8 Binary files /dev/null and b/docs/register-and-refine/example/image-21.png differ diff --git a/docs/register-and-refine/example/image-22.png b/docs/register-and-refine/example/image-22.png new file mode 100644 index 0000000..76900e2 Binary files /dev/null and b/docs/register-and-refine/example/image-22.png differ diff --git a/docs/register-and-refine/example/image-23.png b/docs/register-and-refine/example/image-23.png new file mode 100644 index 0000000..4d0a8dd Binary files /dev/null and b/docs/register-and-refine/example/image-23.png differ diff --git a/docs/register-and-refine/example/image.png b/docs/register-and-refine/example/image.png new file mode 100644 index 0000000..3b1163d Binary files /dev/null and b/docs/register-and-refine/example/image.png differ diff --git a/docs/register-and-refine/example/model.md b/docs/register-and-refine/example/model.md new file mode 100644 index 0000000..f0b0648 --- /dev/null +++ b/docs/register-and-refine/example/model.md @@ -0,0 +1,155 @@ +# Model Registration: Gemini 2.0 Flash + +This guide covers registering the Gemini 2.0 Flash model on the platform. + +**Gemini 2.0 Flash** is Google's language model for classification and structured output tasks. + +--- + +## Registration Steps + +### Step 1. Navigate to Model Catalog + +Go to **GenAI Studio → Model Catalog** and click the **Create** button. + +### Step 2. Fill in Basic Information + +![alt text](image-21.png) + +**Basic Information** fields help organize and identify your model: + +- **Name:** Human-readable identifier for the model (e.g., "Gemini 2.0 Flash") +- **Description:** Brief explanation of the model's purpose and capabilities +- **Group:** Category for organizing similar models together (e.g., "Foundation LLMs") +- **Permissible Purpose:** Approved use cases and business scenarios for this model +- **Ownership Type:** License type - Proprietary, Open Source, or Internal +- **Model Type:** Classification of the model (e.g., "LLM" for language models) + +### Step 3. 
Configure Inferencing Logic
+
+#### Choose Input Type
+
+**Input Type:** You have two options:
+
+- **API Based** - Use this when working with models through API providers (OpenAI, Anthropic, Google Vertex AI, etc.)
+
+- **Python Function** - Use this for custom Python implementations or local models
+
+For this guide, we'll use **API Based**.
+
+#### Select Model Provider
+
+**Model Provider:** Select `Google Vertex AI` from the dropdown.
+
+Once you select a provider, additional fields will appear to configure how the model is called:
+
+![alt text](image-22.png)
+
+- **Alias:** Variable name to reference this model in pipeline code (e.g., `gemini_2_0_flash`)
+- **Output Type:** Data type returned by the model (e.g., `dict[str, str]`)
+- **Input Type:** Choose between API-based (for external providers) or Python Function (for custom code)
+- **Model Provider:** Select the API provider hosting the model (Google Vertex AI)
+- **Model:** Specific model version from the provider's catalog (Gemini 2.0 Flash)
+
+#### Define Arguments
+
+Arguments are the inputs passed to the model at inference time, such as the message text, system instruction, and temperature.
+
+Click **+ Add Argument** to add each argument:
+
+| Alias | Type | Is Optional | Default Value |
+|-------|------|-------------|---------------|
+| `text` | String | ☐ | - |
+| `temperature` | Numerical | ☑ | 0 |
+| `system_instruction` | String | ☑ | None |
+
+**Argument Descriptions:**
+
+- `text`: The input prompt to send to the model
+
+- `temperature`: Controls randomness (0 = deterministic, 1 = creative)
+
+- `system_instruction`: Optional system-level instructions for the model
+
+You can add additional arguments based on your model's requirements.
+
+#### Write Scoring Logic
+
+![alt text](image-23.png)
+
+Provide logic to initialize and score the model. The arguments defined above (`text`, `temperature`, and `system_instruction`) are available as variables in this code:
+
+```python
+import os
+from google import genai
+from google.genai import types
+
+# Authenticate with the API key configured in Platform Integrations
+client = genai.Client(api_key=os.getenv("GOOGLE_API_TOKEN"))
+
+# Build the generation config from the registered arguments
+config = types.GenerateContentConfig(
+    temperature=temperature,
+    seed=2025,
+    system_instruction=system_instruction
+)
+
+response = client.models.generate_content(
+    model="gemini-2.0-flash",
+    contents=text,
+    config=config
+)
+
+return {
+    "response": response.text,
+}
+```
+
+**What This Code Does:**
+
+- Authenticates using the `GOOGLE_API_TOKEN` environment variable (configured in Platform Integrations)
+- Sets up generation config with temperature and system instruction
+- Calls the Gemini 2.0 Flash model with the input text
+- Returns the generated response
+
+### Step 4. Save the Model
+
+Add any notes or additional information in the **Additional Information** section, then click **Create** to complete registration.
+
+### Step 5. Quick Example Run
+
+Click **Test Code** to run a sample query.
+
+![alt text](image-19.png)
+
+Use the platform's test interface to:
+
+- Verify API authentication is working
+- Test with sample inputs before using in production
+- Debug any configuration issues
+- Validate that the output format matches expectations
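+
+If you want to sanity-check the same call outside the platform first, a minimal standalone script that mirrors the scoring logic above might look like the sketch below. It assumes the `google-genai` package is installed locally and `GOOGLE_API_TOKEN` is set in your environment; the sample argument values are illustrative only.
+
+```python
+import os
+
+from google import genai
+from google.genai import types
+
+# Same authentication pattern as the scoring logic above
+client = genai.Client(api_key=os.getenv("GOOGLE_API_TOKEN"))
+
+# Hypothetical sample values for the arguments defined in Step 3
+text = "Classify this request: 'I lost my debit card, please block it.'"
+temperature = 0
+system_instruction = "You are a banking assistant that classifies customer intents."
+
+config = types.GenerateContentConfig(
+    temperature=temperature,
+    seed=2025,
+    system_instruction=system_instruction
+)
+
+response = client.models.generate_content(
+    model="gemini-2.0-flash",
+    contents=text,
+    config=config
+)
+
+# The registered model wraps this value as {"response": ...}
+print({"response": response.text})
+```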
+
+## Usage in Pipelines
+
+Once registered, the model appears in your Resources library and can be selected for any downstream use.
+
+**Reference in pipeline code:**
+```python
+# Call the registered model
+response = gemini_2_0_flash(
+    text=user_prompt,
+    temperature=0.7,
+    system_instruction="You are a helpful assistant."
+)
+
+# Access the response
+output_text = response["response"]
+```
+
+---
+
+## Related Documentation
+
+- [Prompt Registration Guide](../prompt/) - Create reusable prompts
+- [Google Gemini API Docs](https://ai.google.dev/gemini-api/docs) - Official Google documentation
+
+---
diff --git a/docs/register-and-refine/example/prompt.md b/docs/register-and-refine/example/prompt.md
new file mode 100644
index 0000000..e3ab80a
--- /dev/null
+++ b/docs/register-and-refine/example/prompt.md
@@ -0,0 +1,312 @@
+# Prompt Registration Guide
+
+This guide covers how to register prompts on the Corridor platform, using an **Intent Classification Prompt** as a working example.
+
+If you are new to Prompts, see [Prompts](/docs/register-and-refine/inventory-management/prompts/index.md) for an overview of what they are and how they work.
+
+---
+
+## Registration Steps
+
+### Step 1. Navigate to Prompt Registry
+
+Go to **GenAI Studio → Prompt Registry** and click the **Create** button.
+
+### Step 2. Fill in Basic Information
+
+**Example for Intent Classification:**
+
+![alt text](image.png)
+
+**Basic Information** fields help organize and identify your prompt:
+
+- **Description:** Clear explanation of what the prompt does and its purpose
+- **Group:** Category for organizing similar prompts (e.g., "Existing Customer Credit Card Related Prompts")
+- **Permissible Purpose:** Approved use cases and business scenarios for this prompt
+- **Task Type:** Classification of the prompt's function (e.g., "Classification" for intent detection)
+- **Prompt Type:** Format of the prompt (e.g., "System Instruction" for system-level prompts)
+- **Prompt Elements:** Optional tags or metadata for additional categorization
+
+### Step 3. Configure Prompt Template
+
+![alt text](image-1.png)
+
+**Alias:** `customer_intent_classification_prompt`
+
+- A Python variable name to reference this prompt in pipelines
+
+#### Example Prompt Template
+
+The **Prompt Template** is where you write the actual instructions for the LLM:
+
+- Use `{}` placeholders for dynamic variables (e.g., `{customer_utterance}`)
+- Write clear, structured instructions for the model to follow
+- Include examples to guide the model's behavior
+- Define the expected output format (e.g., JSON schema)
+
+**Example Prompt Template for Intent Classification:**
+
+````markdown
+# PERSONA & TONE
+
+You are a trusted, efficient, and security-conscious digital assistant,
+specialized in handling banking-related queries for existing customers
+of BankX.
+
+Maintain a tone that is:
+
+- Professional: Clear, formal, and polite
+- Concise: Direct answers without filler
+- Data-driven: Never guess; respond only based on verified data
+- English only
+
+# GOAL
+
+Accurately predict customer intent from a predefined list of possible intents.
+
+# TASK INSTRUCTIONS:
+
+### Step 1: Review Intent Definitions
+
+Thoroughly understand the predefined list of intents.
+
+### Step 2: Pre-Defined List of Intents
+
+#### ACTIVATE CARD
+
+- Definition: Request to activate a newly issued card
+- Examples:
+  • "How do I activate my new debit card?"
+  • "Activate my credit card now."
+
+#### BLOCK CARD
+
+- Definition: Request to block a lost, stolen, or compromised card
+- Examples:
+  • "Block my credit card immediately."
+  • "I lost my debit card, can you block it?"
+
+#### CARD DETAILS
+
+- Definition: Inquiry about card information
+- Examples:
+  • "How many cards do I have?"
+  • "What is the name on my card?"
+ +#### CHECK CARD ANNUAL FEE + +- Definition: Inquiry about annual fees +- Examples: + • "What's the annual fee for my credit card?" + • "How much is my card's yearly charge?" + +#### CHECK CURRENT BALANCE ON CARD + +- Definition: Inquiry about available balance +- Examples: + • "What's my credit card balance?" + • "How much money is on my debit card?" + +### Step 3: Disambiguate and Summarize Customer Utterance + +- Overlook grammatical/spelling errors +- Ignore PII (name, age, gender, personal data) +- Focus on main intention in long sentences + +### Step 4: Mapping Query to Intent + +- Map to most suitable intent from predefined list +- Ensure only one intent is chosen +- Recheck classification is in predefined list + +### Step 5: Schema Compliance + +OUTPUT FORMAT: + +```json +{{"classified_intent": "str"}} +``` + +# EXAMPLE SCENARIOS: + +Example 1: +Input: "I need to activate my new credit card." + + REASONING STEPS: + - Review intent definitions + - Understand all available intents + - No disambiguation needed (clear query) + - Maps to "ACTIVATE CARD" intent + - Output in JSON format + + Output: + +```json + {{"classified_intent": "ACTIVATE CARD"}} +``` + +# Customer Query + +Query: {customer_utterance} +```` + +#### Define Arguments + +Arguments are inputs that get passed into the prompt template. + +Click **+ Add Argument** to add: + +| Alias | Type | Is Optional | Default Value | +| -------------- | ------ | ----------- | ------------- | +| `user_message` | String | ☐ No | - | + +**Note:** Use `{customer_utterance}` in the template and map it from `user_message` in Prompt Creation Logic. + +### Step 4. Write Prompt Creation Logic + +**Prompt Creation Logic** allows you to programmatically process arguments before they're inserted into the template. This is useful for: + +- Formatting complex data structures +- Generating dynamic content (like the intent list) +- Applying conditional logic based on inputs +- Validating or transforming user inputs + +**Example - Formatting Intent Definitions:** + +![alt text](image-2.png) + +```python +intent_definitions = [ + { + "Intent": "ACTIVATE CARD", + "Definition": "Request to activate a newly issued card", + "Examples": [ + "How do I activate my new debit card?", + "Activate my credit card now.", + ], + }, + { + "Intent": "BLOCK CARD", + "Definition": "Request to block a lost, stolen, or compromised card", + "Examples": [ + "Block my credit card immediately.", + "I lost my debit card, can you block it?", + ], + }, + { + "Intent": "CARD DETAILS", + "Definition": "Inquiry about card information", + "Examples": [ + "How many cards do I have?", + "What is the name on my card?", + ], + }, + { + "Intent": "CHECK CARD ANNUAL FEE", + "Definition": "Inquiry about annual fees", + "Examples": [ + "What's the annual fee for my credit card?", + "How much is my card's yearly charge?", + ], + }, + { + "Intent": "CHECK CURRENT BALANCE ON CARD", + "Definition": "Inquiry about available balance", + "Examples": [ + "What's my credit card balance?", + "How much money is on my debit card?", + ], + }, +] + +def get_intent_info(data_list): + """Format intent definitions into readable text""" + formatted_list = [] + intent_number = 1 + + for item in data_list: + formatted_list.append(f"#### {intent_number}. 
{item['Intent'].upper()}")
+        formatted_list.append(f"- Definition: {item['Definition']}")
+        formatted_list.append(f"- Examples:")
+        for example in item["Examples"]:
+            formatted_list.append(f"  • {example}")
+        formatted_list.append("")  # Empty line between intents
+        intent_number += 1
+
+    return "\n".join(formatted_list)
+
+# Fill in the prompt template
+return prompt.format(
+    customer_utterance=user_message,
+    list_of_intents=get_intent_info(intent_definitions)
+)
+```
+
+**What This Does:**
+
+1. Defines 5 card-related intent definitions with examples
+2. Formats them into a structured, numbered list
+3. Fills in the `{customer_utterance}` placeholder (the formatted intent list is passed as `list_of_intents`, which is used only if the template contains a `{list_of_intents}` placeholder)
+
+### Step 5. Save the Prompt
+
+Click **Create** to register the prompt.
+
+The prompt is now:
+
+- Available in the Prompt Registry
+- Usable in pipelines and other objects
+
+### Analyze and Improve the Prompt Using the GGX Capability
+
+After saving the prompt, you can test and refine it directly within **GenAI Studio**:
+
+- **🔍 Analyze Prompt:**
+  Click the **Analyze Prompt** button to evaluate how your prompt behaves with different inputs.
+  This helps you confirm that argument mappings, placeholders, and output formats are working correctly.
+
+- **✨ Improve with AI:**
+  Use the **Improve with AI** button to automatically optimize your prompt.
+  This provides AI-generated suggestions to enhance clarity, tone, and structure, helping improve prompt performance and consistency.
+
+---
+
+## Using Prompts in Pipelines
+
+Once registered, the prompt can be used in downstream applications. Calling it returns the formatted prompt text (the result of the Prompt Creation Logic above), which you then pass to a registered model, for example the Gemini 2.0 Flash model from the [Model Registration Guide](../model/):
+
+```python
+import json
+
+# Build the prompt text from the registered prompt
+prompt_text = customer_intent_classification_prompt(
+    user_message=user_input
+)
+
+# Send the prompt to a registered model
+llm_output = gemini_2_0_flash(text=prompt_text)
+
+# The prompt instructs the model to return JSON such as {"classified_intent": "ACTIVATE CARD"}
+# (if the model wraps the JSON in extra text, add parsing/cleanup as needed)
+classified_intent = json.loads(llm_output["response"])["classified_intent"]
+
+# Use in downstream logic
+if classified_intent == "ACTIVATE CARD":
+    # Handle card activation
+    pass
+elif classified_intent == "BLOCK CARD":
+    # Handle card blocking
+    pass
+```
+
+---
+
+## Next Steps
+
+After registering your prompt:
+
+1. **Register a model** - If you haven't already, register the LLM to use with this prompt
+2. **Build a pipeline** - Combine your prompt with a model and other resources to create a use-case-specific pipeline
+
+---
+
+## Related Documentation
+
+- [Model Registration Guide](../model/) - Register LLM models to use with prompts
+
+---