Let AI build multi-agent workflows for you in minutes
+
Let AI build you coworkers
@@ -35,23 +35,19 @@
+
+β‘ Build AI agents instantly with natural language | π Connect tools with one-click integrations | π Power with knowledge by adding documents for RAG | π Automate workflows by setting up triggers and actions | π Deploy anywhere via API or SDK
+βοΈ Prefer a hosted version? Use our cloud to start building agents right away!
+
-- β¨ **Start from an idea -> copilot builds your multi-agent workflows**
- - E.g. "Build me an assistant for a food delivery company to handle delivery status and missing items. Include the necessary tools."
-- π **Connect MCP servers**
- - Add the MCP servers in settings -> import the tools into Rowboat.
-- π **Integrate into your app using the HTTP API or Python SDK**
- - Grab the project ID and generated API key from settings and use the API.
-
-Powered by OpenAI's Agents SDK, Rowboat is the fastest way to build multi-agents!
## Quick start
1. Set your OpenAI key
- ```bash
- export OPENAI_API_KEY=your-openai-api-key
+ ```bash
+ export OPENAI_API_KEY=your-openai-api-key
```
-2. Clone the repository and start Rowboat
+2. Clone the repository and start Rowboat (requires Docker)
```bash
git clone git@github.com:rowboatlabs/rowboat.git
cd rowboat
@@ -60,71 +56,30 @@ Powered by OpenAI's Agents SDK, Rowboat is the fastest way to build multi-agents
3. Access the app at [http://localhost:3000](http://localhost:3000).
-Note: We have added native RAG support including file-uploads and URL scraping. See the [RAG](https://docs.rowboatlabs.com/using_rag) section of our docs for this.
-
-Note: See the [Using custom LLM providers](https://docs.rowboatlabs.com/setup/#using-custom-llm-providers) section of our docs for using custom providers like OpenRouter and LiteLLM.
+To add tools, RAG, more LLMs, and triggers, check out the [Advanced](#advanced) section below.
-## Demo
+## Demos
+#### Meeting-prep assistant
+Chat with the copilot to build a meeting-prep workflow, then add a calendar invite as a trigger. Watch the full demo [here](https://youtu.be/KZTP4xZM2DY).
+[](https://youtu.be/KZTP4xZM2DY)
-#### Create a multi-agent assistant with MCP tools by chatting with Rowboat
-[](https://youtu.be/YRTCw9UHRbU)
+#### Customer support assistant
+Chat with the copilot to build a customer support assistant, then connect your MCP server and add data for RAG. Watch the full demo [here](https://youtu.be/Xfo-OfgOl8w).
+[](https://youtu.be/Xfo-OfgOl8w)
-## Integrate with Rowboat agents
+#### Personal assistant
+Chat with the copilot to build a personal assistant. Watch the full demo [here](https://youtu.be/6r7P4Vlcn2g).
+[](https://youtu.be/6r7P4Vlcn2g)
-There are 2 ways to integrate with the agents you create in Rowboat
+## Advanced
+1. Native RAG Support: Enable file uploads and URL scraping with Rowboat's built-in RAG capabilities - see [RAG Guide](https://docs.rowboatlabs.com/docs/using-rowboat/rag).
-1. HTTP API
- - You can use the API directly at [http://localhost:3000/api/v1/](http://localhost:3000/api/v1/)
- - See [API Docs](https://docs.rowboatlabs.com/using_the_api/) for details
- ```bash
- curl --location 'http://localhost:3000/api/v1//chat' \
- --header 'Content-Type: application/json' \
- --header 'Authorization: Bearer ' \
- --data '{
- "messages": [
- {
- "role": "user",
- "content": "tell me the weather in london in metric units"
- }
- ],
- "state": null
- }'
- ```
-
+2. Custom LLM Providers: Use any LLM provider, including aggregators like OpenRouter and LiteLLM - see [Using more LLM providers](https://docs.rowboatlabs.com/docs/using-rowboat/customise/custom-llms).
-2. Python SDK
- You can use the included Python SDK to interact with the Agents
- ```
- pip install rowboat
- ```
+3. Tools & Triggers: Add tools and event triggers (e.g., Gmail, Slack) for automation - see [Tools](https://docs.rowboatlabs.com/docs/using-rowboat/tools) & [Triggers](https://docs.rowboatlabs.com/docs/using-rowboat/triggers).
- See [SDK Docs](https://docs.rowboatlabs.com/using_the_sdk/) for details. Here is a quick example:
- ```python
- from rowboat import Client, StatefulChat
- from rowboat.schema import UserMessage, SystemMessage
-
- # Initialize the client
- client = Client(
- host="http://localhost:3000",
- project_id="",
- api_key=""
- )
-
- # Create a stateful chat session (recommended)
- chat = StatefulChat(client)
- response = chat.run("What's the weather in London?")
- print(response)
-
- # Or use the low-level client API
- messages = [
- SystemMessage(role='system', content="You are a helpful assistant"),
- UserMessage(role='user', content="Hello, how are you?")
- ]
-
- # Get response
- response = client.chat(messages=messages)
- print(response.messages[-1].content)
- ```
+4. API & SDK: Integrate Rowboat agents directly into your app - see [API](https://docs.rowboatlabs.com/docs/api-sdk/using_the_api) & [SDK](https://docs.rowboatlabs.com/docs/api-sdk/using_the_sdk) docs.
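+For a quick sense of the API's shape, here is a minimal Python sketch that assembles (but does not send) a chat request. The `/api/v1/<project_id>/chat` path and Bearer auth follow the API docs; the host, project ID, and key values are placeholders:
+
+```python
+import json
+
+# Sketch only: builds the URL, headers, and JSON body for a v1 chat call.
+# The host, project ID, and API key arguments are placeholder values.
+def build_chat_request(host, project_id, api_key, user_content):
+    url = f"{host}/api/v1/{project_id}/chat"
+    headers = {
+        "Content-Type": "application/json",
+        "Authorization": f"Bearer {api_key}",
+    }
+    body = json.dumps({
+        "messages": [{"role": "user", "content": user_content}]
+    })
+    return url, headers, body
+
+url, headers, body = build_chat_request(
+    "http://localhost:3000", "<PROJECT_ID>", "<API_KEY>",
+    "What's the weather in London?",
+)
+print(url)
+```
+
+You can hand the resulting URL, headers, and body to any HTTP client (curl, `requests`, etc.); see the API docs linked above for the response format.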
Refer to [Docs](https://docs.rowboatlabs.com/) to learn how to start building agents with Rowboat.
diff --git a/apps/docs/Dockerfile b/apps/docs/Dockerfile
deleted file mode 100644
index f17732115..000000000
--- a/apps/docs/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM python:3.12
-
-WORKDIR /app
-
-COPY requirements.txt .
-
-RUN pip install -r requirements.txt
-
-COPY . .
-
-CMD ["mkdocs", "serve", "--dev-addr", "0.0.0.0:8000"]
\ No newline at end of file
diff --git a/apps/docs/docs.json b/apps/docs/docs.json
new file mode 100644
index 000000000..3c381b703
--- /dev/null
+++ b/apps/docs/docs.json
@@ -0,0 +1,84 @@
+{
+ "$schema": "https://mintlify.com/docs.json",
+ "theme": "maple",
+ "name": "Rowboat",
+ "description": "Rowboat is an open-source platform for building multi-agent systems. It helps you orchestrate tools, RAG, memory, and deployable agents with ease.",
+ "favicon": "/favicon.ico",
+ "colors": {
+ "primary": "#6366F1",
+ "light": "#6366F1",
+ "dark": "#6366F1"
+ },
+ "icons": {
+ "library": "fontawesome"
+ },
+ "navigation": {
+ "groups": [
+ {
+ "group": "Getting Started",
+ "pages": [
+ "docs/getting-started/introduction",
+ "docs/getting-started/quickstart",
+ "docs/getting-started/license"
+ ]
+ },
+ {
+ "group": "Using Rowboat",
+ "pages": [
+ "docs/using-rowboat/rowboat-studio",
+ "docs/using-rowboat/agents",
+ "docs/using-rowboat/tools",
+ "docs/using-rowboat/rag",
+ "docs/using-rowboat/triggers",
+ "docs/using-rowboat/jobs",
+ "docs/using-rowboat/conversations",
+ {
+ "group": "Customise",
+ "icon": "sliders",
+ "pages": [
+ "docs/using-rowboat/customise/custom-llms"
+ ]
+ }
+ ]
+ },
+ {
+ "group": "API & SDK",
+ "pages": ["docs/api-sdk/using_the_api", "docs/api-sdk/using_the_sdk"]
+ },
+ {
+ "group": "Development",
+ "pages": ["docs/development/contribution-guide", "docs/development/roadmap"]
+ }
+ ]
+ },
+ "background": {
+ "decoration": "gradient",
+ "color": {
+ "light": "#FFFFFF",
+ "dark": "#0D0A09"
+ }
+ },
+ "navbar": {
+ "primary": {
+ "type": "button",
+ "label": "Try Rowboat",
+ "href": "https://app.rowboatlabs.com"
+ }
+ },
+ "footer": {
+ "socials": {
+ "github": "https://github.com/rowboatlabs/rowboat",
+ "linkedin": "https://www.linkedin.com/company/rowboat-labs",
+ "discord": "https://discord.gg/rxB8pzHxaS"
+ }
+ },
+ "contextual": {
+ "options": [
+ "copy",
+ "view",
+ "chatgpt",
+ "claude"
+ ]
+ }
+ }
+
\ No newline at end of file
diff --git a/apps/docs/docs/CNAME b/apps/docs/docs/CNAME
deleted file mode 100644
index 5ece9e384..000000000
--- a/apps/docs/docs/CNAME
+++ /dev/null
@@ -1 +0,0 @@
-docs.rowboatlabs.com
\ No newline at end of file
diff --git a/apps/docs/docs/add_tools.md b/apps/docs/docs/add_tools.md
deleted file mode 100644
index f67792959..000000000
--- a/apps/docs/docs/add_tools.md
+++ /dev/null
@@ -1,56 +0,0 @@
-## Add tools to agents
-In Rowboat, you can add tools to your agents by (a) selecting from a in-built library of MCP tools (b) adding your own customer MCP servers (c) integrating your APIs through a webhook (e) mocking tool calls to test the system.
-
-### Adding MCP tools
-
-#### Hosted MCP Library
-
-Rowboat has partnered with  to provide a growing library of hosted MCP servers. You can obtain a Klavis API key and set it using:
-```bash
-export KLAVIS_API_KEY=your-klavis-api-key
-```
-
-
-
-Enable any of the hosted MCP servers by clicking on the enable button. The server will take approximately 10 seconds to spin up.
-
-
-
-For most servers, you will need to authorize it by clicking on the 'Auth' button and connecting to your account e.g. connecting to you github or slack account
-
-The servers you have enabled will show up under tools section in the build view and can be added to any of the agents.
-
-
-
-Note: For GSuite tools, you need to get a Google Client ID and set it using:
-```bash
-export KLAVIS_GOOGLE_CLIENT_ID=your-google-client-id
-```
-
-To obtain a Google Client ID:
-1. Go to the [Google Cloud Console](https://console.cloud.google.com)
-2. Create a new project or select an existing one
-3. Go to "APIs & Services" β "OAuth consent screen" and complete the setup
-4. Go to "APIs & Services" β "Credentials" β "OAuth client ID" and create credentials for web application.
-
-#### Custom MCP Server
-You can add any running MCP server in Settings -> Tools.
-
-
-
-You can use [supergateway](https://github.com/supercorp-ai/supergateway) to expose any MCP stdio server as an SSE server.
-
-Now, you can import the tools from the MCP server in the Build view.
-
-
-
-### Adding mock tools
-You can mock any tool you have created by checking the 'Mock tool responses' option.
-
-
-
-
-### Debug tool calls in the playground
-When agents call tools during a chat in the playground, the tool call parameters and response are available for debugging real-time. For testing purposes, the platform can produce mock tool responses in the playground, without integrating actual tools.
-
-
\ No newline at end of file
diff --git a/apps/docs/docs/agents.md b/apps/docs/docs/agents.md
deleted file mode 100644
index 6ffb91a39..000000000
--- a/apps/docs/docs/agents.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Agents
-
-## Overview
-- Agents carry out a specific part of the conversation and / or perform tasks like orchestrating between other agents, triggering internal processes and fetching information.
-- Agents carry out tasks through tools provided to them.
-- Agents can be connected to other agents through a mention in the agent's instruction.
-
-## Agent Configurations
-
-### Description
-The description conveys the agent's role in the multi-agent system. Writing a good description is important for other agents to know when to pass control of the conversation to an agent.
-
-### Instructions
-Agent instructions are the backbone of an agent, defining its behavior. RowBoat Studio's copilot produces a good framework for agent instructions, involving Role, Steps to Follow, Scope and Guidelines. Since agents are powered by LLMs, general best practices while writing prompts apply.
-
-### Examples
-The agent uses examples as a reference for behavior in different scenarios. While there are no prescribed formats to provide examples in, examples should include what the user might say, what the agent should respond with as well as indications of any tool calls to be made.
-
-### Prompts
-Prompts attached to an agent will be used by the agent in addition to instructions.
-
-### Tools
-Tools attached to an agent will be put out as tool calls. The behavior of when to invoke tools can be fine-tuned by specifying corresponding instructions or prompts. Adding examples to agents can also be useful in controlling tool call behavior.
-
-### Connected Agents
-In the agent instructions, the connected agents are shown with an '@mention'. If the agent mentioned in an instruction (connected agent) does not actually exist, the connected agent's name would show up with an '!' to call to attention.
-
-### Model
-RowBoat currently supports OpenAI LLMs. Agents can be configured to use GPT-4o or GPT-4o-mini.
\ No newline at end of file
diff --git a/apps/docs/docs/using_the_api.md b/apps/docs/docs/api-sdk/using_the_api.mdx
similarity index 91%
rename from apps/docs/docs/using_the_api.md
rename to apps/docs/docs/api-sdk/using_the_api.mdx
index 413540ab2..60125b909 100644
--- a/apps/docs/docs/using_the_api.md
+++ b/apps/docs/docs/api-sdk/using_the_api.mdx
@@ -1,14 +1,21 @@
-# Using the API
+---
+title: "Using the API"
+description: "This is a guide on using the HTTP API to power conversations with the assistant created in Studio."
+icon: "code"
+---
-This is a guide on using the HTTP API to power conversations with the assistant created in Studio.
## Deploy your assistant to production on Studio
-
+
+
+
## Obtain API key and Project ID
Generate API keys via the developer configs in your project. Copy the Project ID from the same page.
-
+
+
+
## API Endpoint
diff --git a/apps/docs/docs/api-sdk/using_the_sdk.mdx b/apps/docs/docs/api-sdk/using_the_sdk.mdx
new file mode 100644
index 000000000..0909224ff
--- /dev/null
+++ b/apps/docs/docs/api-sdk/using_the_sdk.mdx
@@ -0,0 +1,91 @@
+---
+title: "Using the SDK"
+description: "This is a guide on using the RowBoat Python SDK as an alternative to the [RowBoat HTTP API](/docs/api-sdk/using_the_api) to power conversations with the assistant created in Studio."
+icon: "toolbox"
+---
+
+## Prerequisites
+- `pip install rowboat`
+- [Deploy your assistant to production](/docs/api-sdk/using_the_api#deploy-your-assistant-to-production-on-studio)
+- [Obtain your `<PROJECT_ID>` and `<API_KEY>`](/docs/api-sdk/using_the_api#obtain-api-key-and-project-id)
+
+### API Host
+- For the open-source installation, the `<HOST>` is [http://localhost:3000](http://localhost:3000)
+- When using the hosted app, the `<HOST>` is [https://app.rowboatlabs.com](https://app.rowboatlabs.com)
+
+## Usage
+
+### Basic Usage
+
+The main way to interact with Rowboat is using the `Client` class, which provides a stateless chat API. You can manage conversation state using the `conversationId` returned in each response.
+
+```python
+from rowboat.client import Client
+from rowboat.schema import UserMessage
+
+# Initialize the client
+client = Client(
+ host="",
+    host="<HOST>",
+    projectId="<PROJECT_ID>",
+    apiKey="<API_KEY>"
+
+# Start a new conversation
+result = client.run_turn(
+ messages=[
+ UserMessage(role='user', content="What is the capital of France?")
+ ]
+)
+print(result.turn.output[-1].content)
+# The capital of France is Paris.
+
+print("Conversation ID:", result.conversationId)
+
+# Continue the conversation by passing the conversationId
+result = client.run_turn(
+ messages=[
+ UserMessage(role='user', content="What other major cities are in that country?")
+ ],
+ conversationId=result.conversationId
+)
+print(result.turn.output[-1].content)
+# Other major cities in France include Lyon, Marseille, Toulouse, and Nice.
+
+result = client.run_turn(
+ messages=[
+ UserMessage(role='user', content="What's the population of the first city you mentioned?")
+ ],
+ conversationId=result.conversationId
+)
+print(result.turn.output[-1].content)
+# Lyon has a population of approximately 513,000 in the city proper.
+```
+
+### Using Tool Overrides (Mock Tools)
+
+You can provide tool override instructions to test a specific configuration using the `mockTools` argument:
+
+```python
+result = client.run_turn(
+ messages=[
+ UserMessage(role='user', content="What's the weather?")
+ ],
+ mockTools={
+ "weather_lookup": "The weather in any city is sunny and 25Β°C.",
+ "calculator": "The result of any calculation is 42."
+ }
+)
+print(result.turn.output[-1].content)
+```
+
+### Message Types
+
+You can use different message types as defined in `rowboat.schema`, such as `UserMessage`, `SystemMessage`, `AssistantMessage`, etc. See `schema.py` for all available message types.
+
+### Error Handling
+
+If the API returns a non-200 status code, a `ValueError` will be raised with the error details.
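+As a minimal sketch of that contract (using a hypothetical `fake_run_turn` stand-in rather than the real client, so it runs without a server):
+
+```python
+# Hypothetical stand-in that mimics the SDK behavior described above:
+# any non-200 status surfaces as a ValueError carrying the error details.
+def fake_run_turn(status_code, payload):
+    if status_code != 200:
+        raise ValueError(f"API returned {status_code}: {payload.get('error', 'unknown error')}")
+    return payload
+
+try:
+    fake_run_turn(401, {"error": "invalid API key"})
+except ValueError as err:
+    print(f"Request failed: {err}")  # log the failure and decide whether to retry
+```
+
+The same try/except pattern applies around real `client.run_turn(...)` calls.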
+
+---
+
+For more advanced usage, see the docstrings in `client.py` and the message schemas in `schema.py`.
\ No newline at end of file
diff --git a/apps/docs/docs/create_agents.md b/apps/docs/docs/create_agents.md
deleted file mode 100644
index 72ee7853d..000000000
--- a/apps/docs/docs/create_agents.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Create the set of initial agents
-Copilot can set up agents for you from scratch.
-
-### Instruct copilot
-First, tell it about the initial set of agents that make up your assistant.
-
-
-
-Using copilot to create your initial set of agents helps you leverage best practices in formatting agent instructions and connecting agents to each other as a graph, all of which have been baked into copilot.
-
-### Inspect the agents
-Once you apply changes, inspect the agents to see how copilot has built them. Specifically, note the Instructions, and Examples in each agent.
-
-
diff --git a/apps/docs/docs/data_sources.md b/apps/docs/docs/data_sources.md
deleted file mode 100644
index fcc39fe6d..000000000
--- a/apps/docs/docs/data_sources.md
+++ /dev/null
@@ -1 +0,0 @@
-Coming soon.
\ No newline at end of file
diff --git a/apps/docs/docs/development/contribution-guide.mdx b/apps/docs/docs/development/contribution-guide.mdx
new file mode 100644
index 000000000..94d0f46c9
--- /dev/null
+++ b/apps/docs/docs/development/contribution-guide.mdx
@@ -0,0 +1,59 @@
+---
+title: "Contribution Guide"
+description: "Learn how to contribute to the Rowboat project and help improve our platform."
+icon: "github"
+---
+
+# Join the Rowboat Voyage
+
+We're building Rowboat as an open-source, community-powered platform - and we'd love for you to hop aboard! Whether you're fixing typos, suggesting a new tool integration, or designing your dream multi-agent workflow, your contributions are valuable and welcome.
+
+
+
+
+
+---
+
+## How You Can Contribute
+
+
+- **Tackle Open Issues**
+ Browse our [GitHub Issues](https://github.com/rowboatlabs/rowboat/issues) for tags like `good first issue`, `help wanted`, or `bug` to find a spot that fits your skillset.
+
+- **Join the Community**
+ Our [Discord](https://discord.gg/rxB8pzHxaS) is the go-to hub for brainstorming, feedback, and finding contributors for bigger efforts.
+
+- **Propose Something New**
+  Have a new tool integration idea or found a bug? Open an issue and let's discuss it!
+
+
+---
+
+## Contribution Workflow & Best Practices
+
+Whether it's your first contribution or your fiftieth, here's what a great contribution looks like:
+
+| Step / Tip | Description |
+|-------------------------------|-----------------------------------------------------------------------------------------------|
+| **1. Fork the Repository** | Create a personal copy of [rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat) to start contributing. |
+| **2. Create a New Branch** | Use a descriptive name like `fix-tool-crash` or `feature-new-mcp`. Avoid committing to `main`. |
+| **3. Make Your Changes** | Focus your PR on a single issue or feature to keep things clean and reviewable. |
+| **4. Write Tests if Needed** | If you change logic, add relevant tests so your contribution is future-proof. |
+| **5. Run Tests & Lint Locally**| Make sure your branch passes all tests and code quality checks before pushing. |
+| **6. Document or Demo It** | Add helpful context: screenshots, example scripts, or a short video/gif to demonstrate your changes. |
+| **7. Submit a Pull Request** | Open a PR with a clear description of your changes and any context reviewers should know. |
+| **8. Collaborate on Feedback** | Our maintainers may leave comments - it's all part of the process. Let's improve it together. |
+| **9. Don't Be Shy to Follow Up** | Feel free to ping the PR/issue if you're waiting on feedback. We appreciate polite nudges. |
+| **10. Celebrate the Merge!** | You just made Rowboat better. Thanks for contributing! |
+
+If you're fixing typos, spacing, or small tweaks - try bundling those into a related PR instead of sending them standalone. It helps keep reviews focused.
+
+
+
+---
+
+## Come Build With Us
+
+We believe great ideas come from the community - and that means **you**. Whether you're an engineer, designer, AI tinkerer, or curious beginner, there's room on this boat for everyone.
+
+Let's build the future of AI workflows - together.
diff --git a/apps/docs/docs/development/roadmap.mdx b/apps/docs/docs/development/roadmap.mdx
new file mode 100644
index 000000000..6248e5673
--- /dev/null
+++ b/apps/docs/docs/development/roadmap.mdx
@@ -0,0 +1,7 @@
+---
+icon: "road"
+---
+
+# Roadmap
+
+Explore the future development plans and upcoming features for Rowboat.
\ No newline at end of file
diff --git a/apps/docs/docs/getting-started/introduction.mdx b/apps/docs/docs/getting-started/introduction.mdx
new file mode 100644
index 000000000..8fde54db3
--- /dev/null
+++ b/apps/docs/docs/getting-started/introduction.mdx
@@ -0,0 +1,78 @@
+---
+title: "Introduction"
+description: "Welcome to the official Rowboat documentation! Rowboat is a low-code AI IDE for building tool-connected multi-agent assistants. The Rowboat copilot builds the agents for you based on your requirements, with the option to do everything manually as well."
+icon: "book-open"
+---
+
+
+
+
+
+## What is RowBoat?
+**RowBoat is a state-of-the-art platform to build multi-agent AI systems in a visual interface, with the help of a copilot.**
+
+RowBoat enables you to build, manage and deploy user-facing assistants. An assistant is made up of multiple agents, each having access to a set of tools and working together to interact with the user as a single assistant. You can connect any tool to the agents.
+
+For example, you can build a *meeting prep assistant* that helps you prepare for upcoming meetings. One agent can access your Google Calendar to see your scheduled meetings, another agent can research the meeting attendees (such as finding their LinkedIn profiles or recent news), and a third agent can compile this research and send it to your email before the meeting. This way, you get automated, personalized meeting prep without manual effort.
+
+---
+
+## How RowBoat works
+
+### RowBoat Studio
+RowBoat Studio lets you create AI agents in minutes, using a visual interface and plain language.
+There are three key components you will work with:
+- Agents
+- Playground
+- Copilot
+
+
+
+ Learn about Rowboat Studio and key concepts used in building assistants
+
+
+
+### RowBoat Chat API & SDK
+- [RowBoat Chat API](/docs/api-sdk/using_the_api) is a stateless HTTP API to interface with the assistant created on RowBoat Studio. You can use the API to drive end-user facing conversations in your app or website.
+- [RowBoat Chat SDK](/docs/api-sdk/using_the_sdk) is a simple Python SDK which wraps the HTTP API under the hood. It provides a clean interface for managing conversations using conversation IDs for state management.
+
+---
+
+## Why RowBoat?
+Rowboat is the fastest way to build and deploy multi-agent assistants.
+
+
+
+ Use plain language and a powerful visual interface to design and orchestrate multi-agent assistants with ease.
+
+
+
+  Add tools and connect to MCP servers in just minutes - no complex setup required.
+
+
+
+ Accelerate development with battle-tested tooling tailored for building production-ready, multi-agent AI systems.
+
+
+
+---
+
+## Contributing
+Want to contribute to Rowboat? Please consider checking out our [Contribution Guide](/docs/development/contribution-guide).
+
+Star us on GitHub!
+
+
+## Community
+Need help using Rowboat? Join our community!
+
+Join our growing Discord community and interact with hundreds of developers using Rowboat!
+
diff --git a/apps/docs/docs/license.md b/apps/docs/docs/getting-started/license.mdx
similarity index 98%
rename from apps/docs/docs/license.md
rename to apps/docs/docs/getting-started/license.mdx
index a38ae9b91..3e21a84e2 100644
--- a/apps/docs/docs/license.md
+++ b/apps/docs/docs/getting-started/license.mdx
@@ -1,3 +1,10 @@
+---
+title: "License"
+icon: "file"
+mode: "center"
+# url: "https://github.com/rowboatlabs/rowboat/blob/main/LICENSE" ## An alternate display we could use
+---
+
RowBoat is available under the [Apache 2.0 License](https://github.com/rowboatlabs/rowboat/blob/main/LICENSE):
----
diff --git a/apps/docs/docs/getting-started/quickstart.mdx b/apps/docs/docs/getting-started/quickstart.mdx
new file mode 100644
index 000000000..04fc761b7
--- /dev/null
+++ b/apps/docs/docs/getting-started/quickstart.mdx
@@ -0,0 +1,94 @@
+---
+title: "Quickstart"
+description: "Guide to getting started with Rowboat"
+icon: "rocket"
+---
+# Cloud Setup
+
+Using the open-source version of Rowboat requires more technical skill to set up and navigate. For the smoothest experience, we recommend using our [hosted version](https://dev.rowboatlabs.com/).
+
+---
+
+# Local Setup
+
+Prerequisites: Ensure Docker is installed on your machine. You'll also need an OpenAI account and API key to use the Copilot and agents.
+
+
+
+
+ Export your OpenAI API key in your terminal:
+
+ ```bash
+ export OPENAI_API_KEY=your-openai-api-key
+ ```
+
+
+
+ To use external tools and triggers, export your Composio API key:
+
+ ```bash
+ export COMPOSIO_API_KEY=your-composio-api-key
+ export COMPOSIO_TRIGGERS_WEBHOOK_SECRET=your-webhook-secret
+ ```
+
+ For more detailed setup instructions, see the [Triggers](/docs/using-rowboat/triggers#local-setup) page.
+
+
+
+ Clone the Rowboat repository and start the app using Docker:
+
+ ```bash
+ git clone git@github.com:rowboatlabs/rowboat.git
+ cd rowboat
+ ./start.sh
+ ```
+
+
+
+ Once Docker is running, open your browser and go to:
+
+ [http://localhost:3000](http://localhost:3000)
+
+
+
+
+See the [Using custom LLM providers](#using-custom-llm-providers) section below for using custom providers like OpenRouter and LiteLLM.
+
+---
+
+## Demo
+{/* (would be better to change this to a Getting Started Tutorial) */}
+
+#### Create a multi-agent assistant with MCP tools by chatting with Rowboat
+[](https://youtu.be/YRTCw9UHRbU)
+
+---
+
+## Integrate with Rowboat agents
+
+There are two ways to integrate with the agents you create in Rowboat:
+
+
+
+
+ Guide on using the HTTP API
+
+
+
+ Guide on using the Python SDK
+
+
+
+
+---
+
+## Using custom LLM providers
+By default, Rowboat uses OpenAI LLMs (gpt-4o, gpt-4.1, etc.) for both agents and the copilot when you export your OPENAI_API_KEY.
+
+However, you can also configure custom LLM providers (e.g. LiteLLM, OpenRouter) to use any of the hundreds of available LLMs beyond OpenAI, such as Claude, DeepSeek, Ollama LLMs and so on.
+
+Check out our page on customisation:
+
+ Learn more about customising your Rowboat experience here
+
diff --git a/apps/docs/docs/graph.md b/apps/docs/docs/graph.md
deleted file mode 100644
index c5d47e951..000000000
--- a/apps/docs/docs/graph.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Graph-based Framework
-
-## Overview
-
-- Multi-agent systems are popularly represented as graphs, where each agent is a node in the graph.
-- In RowBoat, agents are connected to each other as Directed Acyclic Graphs (DAG).
-- The graph is also called a workflow, which defines agents, tools, and their connections.
-- Since the graph is directed, the control of conversation flows from "parent" agents to their "children" agents
-- Every agent is responsible for carrying out a specific part of the workflow, which can involve conversing with the user and / or carrying out tasks such as directing the conversation to other agents.
-- [Langgraph](https://www.langchain.com/langgraph) and [Swarm](https://github.com/openai/swarm) are examples of open-source frameworks used to define multi-agent graphs. RowBoat currently supports a Swarm implementation and will extend to Langgraph too in the future.
-
-## Control Passing
-
-- While defining the workflow, an agent is designated as the Start agent, to which the first turn of chat will be directed. Typically the Start agent is responsible for triaging the user's query at a high-level and passing control to relevant specific agents which can address the user's needs.
-- In any turn of chat, the agent currently in control of the chat has one of 3 options: a) respond to the user (or put out tool calls), b) transfer the chat to any of its children agents or c) transfer the chat back to its parent agent.
-- Agents use internal tool calls to transfer the chat to other agents.
-- Thus, control passing is achieved by allowing agents to decide flow of control autonomously.
-- To the user, the assistant will appear as a unified system, while agents work under the hood.
-
-## Pipelines
-
-- RowBoat also has the concept of pipelines - specialized agents invoked sequentially after an agent in the graph has produced a user-facing response.
-- E.g. a pipeline with a post processing agent and a guardrail agent will ensure that every response is post processed and guardrailed for appropriateness before presenting it to the user.
diff --git a/apps/docs/docs/hosted_setup.md b/apps/docs/docs/hosted_setup.md
deleted file mode 100644
index 27f9eabe3..000000000
--- a/apps/docs/docs/hosted_setup.md
+++ /dev/null
@@ -1,156 +0,0 @@
-# Using the Hosted App
-
-- This is the developers guide to self-hosting the open-source version of RowBoat.
-- Please see our [Introduction](/) page before referring to this guide.
-- For direct installation steps, please head to the README of RowBoat's Github repo: [@rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat/). This page provides more context about the installation process and the different components involved.
-
-## Overview
-
-RowBoat's codebase has three main components:
-
-| Component | Description |
-|--------------|---------------|
-| **Agents** | Python framework responsible for carrying out multi-agent conversations |
-| **Copilot** | Python framework powering the copilot in RowBoat Studio |
-| **RowBoat** | Frontend and backend services to power RowBoat Studio and Chat APIs |
-
-These components are structured as separate services, each containerized with Docker. Running `docker-compose up --build` enables you to use the Studio in your browser, as well as stands up the APIs and SDK.
-
-## Prerequisites
-All of these prerequistes have open-source or free versions.
-
-| Prerequisite | Description |
-|--------------|---------------|
-| **Docker** | Bundles and builds all services |
-| **OpenAI API Key** | Agents and Copilot services are powered by OpenAI LLMs |
-| **MongoDB** | Stores workflow versions, chats and RAG embeddings |
-| **Auth0 Account** | Handles user authentication and identity management for Studio |
-
-Refer to our [Github Readme for Prerequisites](https://github.com/rowboatlabs/rowboat/?tab=readme-ov-file#prerequisites) to set up prerequisites.
-
-## Setting up
-
-Refer to our [Github Readme for Local Development](https://github.com/rowboatlabs/rowboat/?tab=readme-ov-file#local-development-setup) to set up Studio, Chat API and SDK via `docker-compose`.
-
-### Testing Studio
-
-1. Once you are set up, you should be able to login to the Studio (default local URL: [http://localhost:3000](http://localhost:8000)) via Auth0's login options (Gmail, Github etc.)
-
-
-2. Once in Studio, create a new blank project or use one of the example templates:
-
-
-
-3. Use the copilot to help you build agents:
-
-
-
-4. Ensure that the correct agent is set as the "start agent":
-
-
-
-5. Test out a chat in the playground to verify the agents' behavior:
-
-
-
-### Testing the Chat API
-
-You can use the API directly at [http://localhost:3000/api/v1/](http://localhost:3000/api/v1/)
-- Project ID is available in the URL of the project page
-- API Key can be generated from the project config page at `/projects//config`
-
-Below is an example request and response. Modify the user message in the request, based on your example project.
-
-**Request:**
-
-```bash
-curl --location 'http://localhost:3000/api/v1//chat' \
---header 'Content-Type: application/json' \
---header 'Authorization: Bearer ' \
---data '{
- "messages": [
- {
- "role": "user",
- "content": "What is my pending payment amount?"
- }
- ]
-}'
-```
-**Response:**
-The last message in `messages` is either a user-facing response or a tool call by the assistant.
-
-```json
-{
- "messages": [
- {
- "sender": "Credit Card Hub",
- "role": "assistant",
- "response_type": "internal",
- "content": null,
- "created_at": "2025-02-01T06:55:47.843909",
- "current_turn": true,
- "tool_calls": [
- {
- "function": {
- "arguments": "{\"args\":\"\",\"kwargs\":\"\"}",
- // Internal tool calls are used to transfer between agents
- "name": "transfer_to_outstanding_payment"
- },
- "id": "call_7jGpwpVvzhZFOyRgxHFkdOdU",
- "type": "function"
- }
- ]
- },
- {
- "tool_name": "transfer_to_outstanding_payment",
- "role": "tool",
- "content": "{\"assistant\": \"Outstanding Payment\"}",
- "tool_call_id": "call_7jGpwpVvzhZFOyRgxHFkdOdU"
- },
- {
- "sender": "Outstanding Payment",
- "role": "assistant",
- // Response is not user-facing, to enable further post processing
- "response_type": "internal",
- "content": "Sure, could you provide the last four digits of your card or your registered mobile number so I can look up your pending payment amount?",
- "created_at": "2025-02-01T06:55:49.648008",
- "current_turn": true
- },
- {
- "sender": "Outstanding Payment >> Post process",
- "role": "assistant",
- // Response is user-facing
- "response_type": "external",
- "content": "Sure, please provide the last four digits of your card or your registered mobile number so I can check your pending payment amount.",
- "created_at": "2025-02-01T06:55:49.648008",
- "current_turn": true
- }
- ],
- "state": {
- // .. state data
- }
-}
-```
-
-### Testing the Python Chat SDK
-
-```bash
-pip install rowboat
-```
-
-Modify the user message in `messages`, based on your example project.
-
-```python
-from rowboat import Client
-
-client = Client(
- host="http://localhost:3000",
- project_id="",
- api_key="" # Generate this from /projects//config
-)
-
-# Simple chat interaction
-messages = [{"role": "user", "content": "What is my pending payment amount?"}]
-response_messages, state = client.chat(messages=messages)
-```
-The last message in `response_messages` is either a user-facing response or a tool call by the assistant.
\ No newline at end of file
diff --git a/apps/docs/docs/img/Intro-Video.gif b/apps/docs/docs/img/Intro-Video.gif
new file mode 100644
index 000000000..d4eb387ef
Binary files /dev/null and b/apps/docs/docs/img/Intro-Video.gif differ
diff --git a/apps/docs/docs/img/add-mcp-server.png b/apps/docs/docs/img/add-mcp-server.png
deleted file mode 100644
index 18b629dfb..000000000
Binary files a/apps/docs/docs/img/add-mcp-server.png and /dev/null differ
diff --git a/apps/docs/docs/img/add-tool.png b/apps/docs/docs/img/add-tool.png
deleted file mode 100644
index 8e85f40f4..000000000
Binary files a/apps/docs/docs/img/add-tool.png and /dev/null differ
diff --git a/apps/docs/docs/img/agent-config.png b/apps/docs/docs/img/agent-config.png
deleted file mode 100644
index 3887294bd..000000000
Binary files a/apps/docs/docs/img/agent-config.png and /dev/null differ
diff --git a/apps/docs/docs/img/chat-delivery.png b/apps/docs/docs/img/chat-delivery.png
deleted file mode 100644
index b20dd3694..000000000
Binary files a/apps/docs/docs/img/chat-delivery.png and /dev/null differ
diff --git a/apps/docs/docs/img/contribution-guide-hero.png b/apps/docs/docs/img/contribution-guide-hero.png
new file mode 100644
index 000000000..705f1a1d7
Binary files /dev/null and b/apps/docs/docs/img/contribution-guide-hero.png differ
diff --git a/apps/docs/docs/img/conversations-inside-run.png b/apps/docs/docs/img/conversations-inside-run.png
new file mode 100644
index 000000000..540474a6b
Binary files /dev/null and b/apps/docs/docs/img/conversations-inside-run.png differ
diff --git a/apps/docs/docs/img/conversations-ui.png b/apps/docs/docs/img/conversations-ui.png
new file mode 100644
index 000000000..1ab34a855
Binary files /dev/null and b/apps/docs/docs/img/conversations-ui.png differ
diff --git a/apps/docs/docs/img/copilot-clarifications.png b/apps/docs/docs/img/copilot-clarifications.png
deleted file mode 100644
index 4611015d4..000000000
Binary files a/apps/docs/docs/img/copilot-clarifications.png and /dev/null differ
diff --git a/apps/docs/docs/img/copilot-create.png b/apps/docs/docs/img/copilot-create.png
deleted file mode 100644
index cf373956a..000000000
Binary files a/apps/docs/docs/img/copilot-create.png and /dev/null differ
diff --git a/apps/docs/docs/img/debug-tool-calls.png b/apps/docs/docs/img/debug-tool-calls.png
deleted file mode 100644
index aa0e5dd90..000000000
Binary files a/apps/docs/docs/img/debug-tool-calls.png and /dev/null differ
diff --git a/apps/docs/docs/img/edit-agent-manually.png b/apps/docs/docs/img/edit-agent-manually.png
deleted file mode 100644
index 9c1c679c5..000000000
Binary files a/apps/docs/docs/img/edit-agent-manually.png and /dev/null differ
diff --git a/apps/docs/docs/img/enable-mcp-server.png b/apps/docs/docs/img/enable-mcp-server.png
deleted file mode 100644
index b7138ebc4..000000000
Binary files a/apps/docs/docs/img/enable-mcp-server.png and /dev/null differ
diff --git a/apps/docs/docs/img/example-tool.png b/apps/docs/docs/img/example-tool.png
deleted file mode 100644
index 851bbb040..000000000
Binary files a/apps/docs/docs/img/example-tool.png and /dev/null differ
diff --git a/apps/docs/docs/img/hub-config.png b/apps/docs/docs/img/hub-config.png
deleted file mode 100644
index 0d2c64675..000000000
Binary files a/apps/docs/docs/img/hub-config.png and /dev/null differ
diff --git a/apps/docs/docs/img/import-mcp-tools.png b/apps/docs/docs/img/import-mcp-tools.png
deleted file mode 100644
index e2c2efff3..000000000
Binary files a/apps/docs/docs/img/import-mcp-tools.png and /dev/null differ
diff --git a/apps/docs/docs/img/inspect-agent-tool-connections.png b/apps/docs/docs/img/inspect-agent-tool-connections.png
deleted file mode 100644
index 330466b0b..000000000
Binary files a/apps/docs/docs/img/inspect-agent-tool-connections.png and /dev/null differ
diff --git a/apps/docs/docs/img/inspect-agent-tools.png b/apps/docs/docs/img/inspect-agent-tools.png
deleted file mode 100644
index d5b7cd408..000000000
Binary files a/apps/docs/docs/img/inspect-agent-tools.png and /dev/null differ
diff --git a/apps/docs/docs/img/jobs-inside-run.png b/apps/docs/docs/img/jobs-inside-run.png
new file mode 100644
index 000000000..ed7d39a81
Binary files /dev/null and b/apps/docs/docs/img/jobs-inside-run.png differ
diff --git a/apps/docs/docs/img/jobs-ui.png b/apps/docs/docs/img/jobs-ui.png
new file mode 100644
index 000000000..9dd2662fa
Binary files /dev/null and b/apps/docs/docs/img/jobs-ui.png differ
diff --git a/apps/docs/docs/img/mcp-library.png b/apps/docs/docs/img/mcp-library.png
deleted file mode 100644
index 970193f60..000000000
Binary files a/apps/docs/docs/img/mcp-library.png and /dev/null differ
diff --git a/apps/docs/docs/img/mcp-tools-build-view.png b/apps/docs/docs/img/mcp-tools-build-view.png
deleted file mode 100644
index 30d1f2cdb..000000000
Binary files a/apps/docs/docs/img/mcp-tools-build-view.png and /dev/null differ
diff --git a/apps/docs/docs/img/mock-response.png b/apps/docs/docs/img/mock-response.png
deleted file mode 100644
index 3f3b68c72..000000000
Binary files a/apps/docs/docs/img/mock-response.png and /dev/null differ
diff --git a/apps/docs/docs/img/mock-tool-responses.png b/apps/docs/docs/img/mock-tool-responses.png
deleted file mode 100644
index 6ac3d4e27..000000000
Binary files a/apps/docs/docs/img/mock-tool-responses.png and /dev/null differ
diff --git a/apps/docs/docs/img/mock-tool.png b/apps/docs/docs/img/mock-tool.png
deleted file mode 100644
index ab3c492fb..000000000
Binary files a/apps/docs/docs/img/mock-tool.png and /dev/null differ
diff --git a/apps/docs/docs/img/project-page.png b/apps/docs/docs/img/project-page.png
deleted file mode 100644
index 61ce2c9e1..000000000
Binary files a/apps/docs/docs/img/project-page.png and /dev/null differ
diff --git a/apps/docs/docs/img/rag-adding-data.png b/apps/docs/docs/img/rag-adding-data.png
new file mode 100644
index 000000000..675d77a6c
Binary files /dev/null and b/apps/docs/docs/img/rag-adding-data.png differ
diff --git a/apps/docs/docs/img/re-test-chat.png b/apps/docs/docs/img/re-test-chat.png
deleted file mode 100644
index 9d887a658..000000000
Binary files a/apps/docs/docs/img/re-test-chat.png and /dev/null differ
diff --git a/apps/docs/docs/img/scenarios.png b/apps/docs/docs/img/scenarios.png
deleted file mode 100644
index 828b408b2..000000000
Binary files a/apps/docs/docs/img/scenarios.png and /dev/null differ
diff --git a/apps/docs/docs/img/simulate.png b/apps/docs/docs/img/simulate.png
deleted file mode 100644
index e6aca1493..000000000
Binary files a/apps/docs/docs/img/simulate.png and /dev/null differ
diff --git a/apps/docs/docs/img/start-agent.png b/apps/docs/docs/img/start-agent.png
deleted file mode 100644
index 9100fab77..000000000
Binary files a/apps/docs/docs/img/start-agent.png and /dev/null differ
diff --git a/apps/docs/docs/img/sys-msg.png b/apps/docs/docs/img/sys-msg.png
deleted file mode 100644
index 2a2abfbaa..000000000
Binary files a/apps/docs/docs/img/sys-msg.png and /dev/null differ
diff --git a/apps/docs/docs/img/test-updated-agent.png b/apps/docs/docs/img/test-updated-agent.png
deleted file mode 100644
index 4bc09d809..000000000
Binary files a/apps/docs/docs/img/test-updated-agent.png and /dev/null differ
diff --git a/apps/docs/docs/img/testing-chat.png b/apps/docs/docs/img/testing-chat.png
deleted file mode 100644
index 2f2eef549..000000000
Binary files a/apps/docs/docs/img/testing-chat.png and /dev/null differ
diff --git a/apps/docs/docs/img/tools-ui.png b/apps/docs/docs/img/tools-ui.png
new file mode 100644
index 000000000..b9283d22b
Binary files /dev/null and b/apps/docs/docs/img/tools-ui.png differ
diff --git a/apps/docs/docs/img/triggers-external-ui.png b/apps/docs/docs/img/triggers-external-ui.png
new file mode 100644
index 000000000..e3057ae1d
Binary files /dev/null and b/apps/docs/docs/img/triggers-external-ui.png differ
diff --git a/apps/docs/docs/img/triggers-onetime-ui.png b/apps/docs/docs/img/triggers-onetime-ui.png
new file mode 100644
index 000000000..d38ca558f
Binary files /dev/null and b/apps/docs/docs/img/triggers-onetime-ui.png differ
diff --git a/apps/docs/docs/img/triggers-recurring-ui.png b/apps/docs/docs/img/triggers-recurring-ui.png
new file mode 100644
index 000000000..6fd5ee26c
Binary files /dev/null and b/apps/docs/docs/img/triggers-recurring-ui.png differ
diff --git a/apps/docs/docs/img/try-chat.png b/apps/docs/docs/img/try-chat.png
deleted file mode 100644
index e48520b5f..000000000
Binary files a/apps/docs/docs/img/try-chat.png and /dev/null differ
diff --git a/apps/docs/docs/img/update-agent-generate.png b/apps/docs/docs/img/update-agent-generate.png
deleted file mode 100644
index 6e076405b..000000000
Binary files a/apps/docs/docs/img/update-agent-generate.png and /dev/null differ
diff --git a/apps/docs/docs/img/update-agent-with-copilot.png b/apps/docs/docs/img/update-agent-with-copilot.png
deleted file mode 100644
index cfa9b514c..000000000
Binary files a/apps/docs/docs/img/update-agent-with-copilot.png and /dev/null differ
diff --git a/apps/docs/docs/img/update-agent.png b/apps/docs/docs/img/update-agent.png
deleted file mode 100644
index 05af2ec6b..000000000
Binary files a/apps/docs/docs/img/update-agent.png and /dev/null differ
diff --git a/apps/docs/docs/img/use-copilot.png b/apps/docs/docs/img/use-copilot.png
deleted file mode 100644
index b48c60405..000000000
Binary files a/apps/docs/docs/img/use-copilot.png and /dev/null differ
diff --git a/apps/docs/docs/index.md b/apps/docs/docs/index.md
deleted file mode 100644
index 89626228f..000000000
--- a/apps/docs/docs/index.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# Welcome to Rowboat
-
-Rowboat is a low-code AI IDE for building multi-agent assistants connected to MCP tools. The Rowboat copilot builds the agents for you based on your requirements, with the option to do everything manually as well.
-
-**Note:** These docs are intended for developers who would like to use our [open-source code](https://github.com/rowboatlabs/rowboat/).
-
-- Our source code is on GitHub at [@rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat/)
-- Join us on [discord](https://discord.gg/jHhUKkKHn8)
-- Email us at [founders@rowboatlabs.com](mailto:founders@rowboatlabs.com)
-- Visit our [website](https://www.rowboatlabs.com/)
-
-## What is RowBoat?
-**RowBoat is a state-of-art platform to build multi-agent AI systems in a visual interface, with the help of a copilot.**
-
-RowBoat enables you to build, manage and deploy user-facing assistants. An assistant is made up of multiple agents, each having access to a set of tools and working together to interact with the user as a single assistant. You can connect any MCP tools to the agents.
-
-For example, you can build a *credit card assistant*, where each agent handles a workflow such as *outstanding payments*, *balance inquiries* and *transaction disputes*. You can equip agents with tools to carry out tasks such as *fetching payment options*, *checking outstanding balance* and *updating user information*. The assistant would help your end-users with their credit card-related needs without having to talk to a human agent on your end.
-
-## How RowBoat works
-
-### RowBoat Studio
-RowBoat Studio lets you create AI agents in minutes, using a visual interface and plain language. Here are key components that you will work with:
-
-| Component | Description | Highlights |
-|------------|-------------|------------|
-| Agent | Handles a specific part of the conversation and performs tasks using tools, based on instructions |• Configurable using plain language instructions • Orchestrate between agents connected as a graph • Can access tools and knowledge sources (RAG)|
-| Playground | Interactive environment to test assistants conversationally as you build them |• Real-time testing and debugging • Inspect parameters and results of tool calls in-line • Converse with individual agents or the entire assistant|
-| Copilot | AI-powered concierge that creates and updates agents and tools on your behalf |• Context-aware of all components including playground • Improves agents based on conversations and feedback • Understands your requests in plain language|
-
-### RowBoat Chat API & SDK
-- [RowBoat Chat API](/using_the_api) is a stateless HTTP API to interface with the assistant created on RowBoat Studio. You can use the API to drive end-user facing conversations in your app or website.
-- [RowBoat Chat SDK](/using_the_sdk) is a simple SDK (currently available in Python) which wraps the HTTP API under the hood. It offers both stateful and stateless (OpenAI-style) implementations.
-
-### Steps
-**RowBoat Studio:**
-
-1. Describe the assistant you are looking to build, to **copilot**
-2. Review and apply the **agents** (and tools) created by copilot
-3. Configure **MCP servers** and **tools** and connect them to agents
-4. Chat with your assistant in the **playground**
-5. Deploy and use the HTTP API or Python SDK to integrate the agents into your system
-
-## Why RowBoat?
-Rowboat is the fastest way to build and deploy MCP-connected multi-agent assistants.
-
-1. **Build** complex assistants using plain language and a visual interface
-2. **Integrate** tools and MCP servers in minutes
-3. **Expedite** your multi-agent AI roadmap using battle-tested tooling
-
-## Getting started
-
-- To set up our open-source installation, see [Github Readme](https://github.com/rowboatlabs/rowboat)
-- To sign up for our managed offering (beta), please email us at [founders@rowboatlabs.com](mailto:founders@rowboatlabs.com)
\ No newline at end of file
diff --git a/apps/docs/docs/oss_installation.md b/apps/docs/docs/oss_installation.md
deleted file mode 100644
index b26c53884..000000000
--- a/apps/docs/docs/oss_installation.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Open Source Installation
-
-- This is the developers guide to self-hosting the open-source version of RowBoat. To get started with the hosted app, please see [Using the Hosted App](/hosted_setup)
-- Please see our [Introduction](/) page before referring to this guide.
-- For direct installation steps, please head to the README of RowBoat's Github repo: [@rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat/). This page provides more context about the installation process and the different components involved.
-
-## Overview
-
-RowBoat's codebase has three main components:
-
-| Component | Description |
-|--------------|---------------|
-| **Agents** | Python framework responsible for carrying out multi-agent conversations |
-| **Copilot** | Python framework powering the copilot in RowBoat Studio |
-| **RowBoat** | Frontend and backend services to power RowBoat Studio and Chat APIs |
-
-These components are structured as separate services, each containerized with Docker. Running `docker-compose up --build` lets you use the Studio in your browser and also stands up the APIs and SDK.
-
-## Prerequisites
-All of these prerequisites have open-source or free versions.
-
-| Prerequisite | Description |
-|--------------|---------------|
-| **Docker** | Bundles and builds all services |
-| **OpenAI API Key** | Agents and Copilot services are powered by OpenAI LLMs |
-| **MongoDB** | Stores workflow versions, chats and RAG embeddings |
-
-Refer to our [Github Readme for Prerequisites](https://github.com/rowboatlabs/rowboat/?tab=readme-ov-file#prerequisites) to set up prerequisites.
-
-## Setting up
-
-Refer to our [Github Readme for Local Development](https://github.com/rowboatlabs/rowboat/?tab=readme-ov-file#local-development-setup) to set up Studio, Chat API and SDK via `docker-compose`.
\ No newline at end of file
diff --git a/apps/docs/docs/playground.md b/apps/docs/docs/playground.md
deleted file mode 100644
index 3f1f0590c..000000000
--- a/apps/docs/docs/playground.md
+++ /dev/null
@@ -1,7 +0,0 @@
-## Try an example chat in the playground
-
-### Chat with the assistant
-
-The playground is intended to test out the assistant as you build it. The User and Assistant messages represent the conversation that your end-user will have if your assistant is deployed in production. The playground also has debug elements which show the flow of control between different agents in your system, as well as which agent finally responded to the user.
-
-
\ No newline at end of file
diff --git a/apps/docs/docs/prompts.md b/apps/docs/docs/prompts.md
deleted file mode 100644
index 2552cc49f..000000000
--- a/apps/docs/docs/prompts.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Prompts
-
-- Prompts are reusable pieces of agent instructions in Studio.
-- Prompts can be defined once and reused across multiple agents.
-- Common examples of prompts are style prompts, which indicate brand voice, and structured-output prompts, which specify a format for the agent to provide its output in (e.g. ReAct).
\ No newline at end of file
diff --git a/apps/docs/docs/setup.md b/apps/docs/docs/setup.md
deleted file mode 100644
index f9ed3ef76..000000000
--- a/apps/docs/docs/setup.md
+++ /dev/null
@@ -1,127 +0,0 @@
-## Getting started
-
-- ✨ **Start from an idea → Copilot builds your multi-agent workflows**
- E.g. "Build me an assistant for a food delivery company to handle delivery status and missing items. Include the necessary tools."
-- 🔌 **Connect MCP servers**
- Add the MCP servers in Settings → import the tools into Rowboat.
-- 🚀 **Integrate into your app using the HTTP API or Python SDK**
- Grab the Project ID and generated API Key from Settings and use the API.
-
-Powered by OpenAI's Agents SDK, Rowboat is the fastest way to build multi-agents!
-
-## Quick start
-
-Step 1. Set your OpenAI key:
-
-```bash
-export OPENAI_API_KEY=your-openai-api-key
-```
-
-Step 2. Clone the repository and start Rowboat docker
-
-```bash
-git clone git@github.com:rowboatlabs/rowboat.git
-cd rowboat
-docker-compose up --build
-```
-
-Step 3. Access the app at [http://localhost:3000](http://localhost:3000).
-
-Note: See the [Using custom LLM providers](#using-custom-llm-providers) section below for using custom providers like OpenRouter and LiteLLM.
-
-## Demo
-
-#### Create a multi-agent assistant with MCP tools by chatting with Rowboat
-[](https://youtu.be/YRTCw9UHRbU)
-
-## Integrate with Rowboat agents
-
-There are two ways to integrate with the agents you create in Rowboat:
-
-**Option #1: HTTP API**
-
-You can use the API directly at [http://localhost:3000/api/v1/](http://localhost:3000/api/v1/). See [API Docs](https://docs.rowboatlabs.com/using_the_api/) for details.
-
-```bash
-curl --location 'http://localhost:3000/api/v1//chat' \
---header 'Content-Type: application/json' \
---header 'Authorization: Bearer ' \
---data '{
- "messages": [
- {
- "role": "user",
- "content": "tell me the weather in london in metric units"
- }
- ],
- "state": null
-}'
-```
-
-
-**Option #2: Python SDK**
-
-You can use the included Python SDK to interact with the agents:
-
-```python
-from rowboat import Client, StatefulChat
-from rowboat.schema import UserMessage, SystemMessage
-
-# Initialize the client
-client = Client(
- host="http://localhost:3000",
- project_id="",
- api_key=""
-)
-
-# Create a stateful chat session (recommended)
-chat = StatefulChat(client)
-response = chat.run("What's the weather in London?")
-print(response)
-
-# Or use the low-level client API
-messages = [
- SystemMessage(role='system', content="You are a helpful assistant"),
- UserMessage(role='user', content="Hello, how are you?")
-]
-
-# Get response
-response = client.chat(messages=messages)
-print(response.messages[-1].content)
-```
-
-## Using custom LLM providers
-By default, Rowboat uses OpenAI LLMs (gpt-4o, gpt-4.1, etc.) for both agents and the copilot when you export your OPENAI_API_KEY.
-
-However, you can also configure custom LLM providers (e.g. LiteLLM, OpenRouter) to use any of the hundreds of available LLMs beyond OpenAI, such as Claude, DeepSeek, Ollama LLMs and so on.
-
-**Step 1:** Set up your custom LLM provider using the variables below, for example (assuming LiteLLM):
-
-```bash
-export PROVIDER_BASE_URL=http://host.docker.internal:4000/
-export PROVIDER_API_KEY=sk-1234
-```
-
-Rowboat uses "gpt-4.1" as the default model for agents and copilot but this can be overridden as follows, for example (assuming LiteLLM):
-
-```bash
-export PROVIDER_DEFAULT_MODEL=claude-3-7-sonnet-latest
-export PROVIDER_COPILOT_MODEL=gpt-4o
-```
-
-**Notes:**
-
-- Copilot is optimized for gpt-4o/gpt-4.1. We strongly recommend using these models for best performance.
-- You can specify different models for the copilot and each agent, but all of them must belong to the same provider (e.g. LiteLLM)
-- The integration is provider-agnostic and should work with any service that implements the OpenAI messages format.
-- OpenAI-specific tools (e.g., web_search) will not work with non-OpenAI providers. If you get an error, remove these tools.
-
-**Step 2 (No change):** Clone the repository and start Rowboat docker
-
-```bash
-git clone git@github.com:rowboatlabs/rowboat.git
-cd rowboat
-docker-compose up --build
-```
-
-**Step 3 (No change):** Access the app at [http://localhost:3000](http://localhost:3000).
-
diff --git a/apps/docs/docs/simulate.md b/apps/docs/docs/simulate.md
deleted file mode 100644
index e488d0d5d..000000000
--- a/apps/docs/docs/simulate.md
+++ /dev/null
@@ -1,6 +0,0 @@
-## Simulate real-world user scenarios
-Create a test-bench of real-world scenarios in the simulator.
-
-
-Run the scenarios as simulated chats between a user (role-played) and the assistant, in the playground.
-
\ No newline at end of file
diff --git a/apps/docs/docs/studio_overview.md b/apps/docs/docs/studio_overview.md
deleted file mode 100644
index 89ac4e85b..000000000
--- a/apps/docs/docs/studio_overview.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Building Assistants in Studio
-This is a guide to building your first assistant on RowBoat Studio, with examples.
-
-Prerequisite:
-
-1. **Open Source Users:** Complete the [open-source installation steps](/oss_installation/) to set up RowBoat Studio.
-2. **Hosted App Users:** Sign in to [https://app.rowboatlabs.com/](https://app.rowboatlabs.com/)
\ No newline at end of file
diff --git a/apps/docs/docs/testing.md b/apps/docs/docs/testing.md
deleted file mode 100644
index 86df88be6..000000000
--- a/apps/docs/docs/testing.md
+++ /dev/null
@@ -1,132 +0,0 @@
-# Testing Your Setup
-
-## Testing Studio
-
-1. Once you are set up, you should be able to log in to the Studio via Auth0's login options (Gmail, Github etc.).
-- For the open source installation, the URL for Studio is [http://localhost:3000](http://localhost:3000)
-- To use our hosted app, the URL for Studio is [https://app.rowboatlabs.com](https://app.rowboatlabs.com/)
-
-
-2. Once in Studio, create a new blank project or browse through one of the example projects:
-
-
-
-3. Use the copilot to help you build agents:
-
-
-
-4. Ensure that the correct agent is set as the "start agent":
-
-
-
-5. Test out a chat in the playground to verify the agents' behavior:
-
-
-
-### Testing the Chat API
-
-- For the open source installation, the `` is [http://localhost:3000](http://localhost:3000)
-- When using the hosted app, the `` is [https://app.rowboatlabs.com](https://app.rowboatlabs.com)
-- `` is available in the URL of the project page
-- API Key can be generated from the project config page at `/projects//config`
-
-Below is an example request and response. Modify the user message in the request, based on your example project.
-
-**Request:**
-
-```bash
-curl --location 'http:///api/v1//chat' \
---header 'Content-Type: application/json' \
---header 'Authorization: Bearer ' \
---data '{
- "messages": [
- {
- "role": "user",
- "content": "What is my pending payment amount?"
- }
- ]
-}'
-```
-**Response:**
-The last message in `messages` is either a user-facing response or a tool call by the assistant.
-
-```json
-{
- "messages": [
- {
- "sender": "Credit Card Hub",
- "role": "assistant",
- "response_type": "internal",
- "content": null,
- "created_at": "2025-02-01T06:55:47.843909",
- "current_turn": true,
- "tool_calls": [
- {
- "function": {
- "arguments": "{\"args\":\"\",\"kwargs\":\"\"}",
- // Internal tool calls are used to transfer between agents
- "name": "transfer_to_outstanding_payment"
- },
- "id": "call_7jGpwpVvzhZFOyRgxHFkdOdU",
- "type": "function"
- }
- ]
- },
- {
- "tool_name": "transfer_to_outstanding_payment",
- "role": "tool",
- "content": "{\"assistant\": \"Outstanding Payment\"}",
- "tool_call_id": "call_7jGpwpVvzhZFOyRgxHFkdOdU"
- },
- {
- "sender": "Outstanding Payment",
- "role": "assistant",
- // Response is not user-facing, to enable further post processing
- "response_type": "internal",
- "content": "Sure, could you provide the last four digits of your card or your registered mobile number so I can look up your pending payment amount?",
- "created_at": "2025-02-01T06:55:49.648008",
- "current_turn": true
- },
- {
- "sender": "Outstanding Payment >> Post process",
- "role": "assistant",
- // Response is user-facing
- "response_type": "external",
- "content": "Sure, please provide the last four digits of your card or your registered mobile number so I can check your pending payment amount.",
- "created_at": "2025-02-01T06:55:49.648008",
- "current_turn": true
- }
- ],
- "state": {
- // .. state data
- }
-}
-```
-
-### Testing the Python Chat SDK
-
-- For the open source installation, the `` is [http://localhost:3000](http://localhost:3000)
-- When using the hosted app, the `` is [https://app.rowboatlabs.com](https://app.rowboatlabs.com)
-- `` is available in the URL of the project page
-- API Key can be generated from the project config page at `/projects//config`
-
-```bash
-pip install rowboat
-```
-
-Modify the user message in `messages`, based on your example project.
-
-```python
-from rowboat import Client
-
-client = Client(
- host="",
- project_id="",
- api_key="" # Generate this from /projects//config
-)
-
-# Simple chat interaction
-messages = [{"role": "user", "content": "What is my pending payment amount?"}]
-response_messages, state = client.chat(messages=messages)
-```
-The last message in `response_messages` is either a user-facing response or a tool call by the assistant.
\ No newline at end of file
diff --git a/apps/docs/docs/tools.md b/apps/docs/docs/tools.md
deleted file mode 100644
index 162dad365..000000000
--- a/apps/docs/docs/tools.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# Tools
-- Tools are used to carry out specific tasks such as fetching or updating information.
-- Tools can be defined once in RowBoat Studio and reused across different agents.
-- RowBoat uses OpenAI-style tools with a name, description and parameters.
-- For the purposes of quick testing in the Playground, RowBoat Studio can mock tool responses based on tool descriptions.
-- Developers can easily connect tools to APIs by configuring MCP servers or a webhook URL in Settings.
\ No newline at end of file
diff --git a/apps/docs/docs/update_agents.md b/apps/docs/docs/update_agents.md
deleted file mode 100644
index 10fd437bc..000000000
--- a/apps/docs/docs/update_agents.md
+++ /dev/null
@@ -1,19 +0,0 @@
-## Update agent behavior
-
-There are three ways for you to update the agent's behavior:
-
-### 1. With the help of Copilot
-
-Copilot can help you update agent behavior. It is also aware of the current chat in the playground, so you can reference that chat while instructing Copilot to update agents.
-
-
-
-### 2. Using the Generate button
-
-
-
-### 3. By manually editing the instructions
-
-You can manually edit the agent instructions anytime.
-
-
diff --git a/apps/docs/docs/using-rowboat/agents.mdx b/apps/docs/docs/using-rowboat/agents.mdx
new file mode 100644
index 000000000..ceaeccae2
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/agents.mdx
@@ -0,0 +1,141 @@
+---
+title: "Agents"
+description: "Learn about creating and configuring individual agents within your multi-agent system"
+icon: "robot"
+---
+
+## Overview
+
+Agents are the core building blocks of Rowboat's multi-agent system. Each agent carries out a specific part of a conversation, handles tasks via tools, and can collaborate with other agents to orchestrate complex workflows.
+
+They are powered by LLMs and can:
+- Respond to user input
+- Trigger tools or APIs
+- Pass control to other agents using @mentions
+- Fetch or process internal data
+- Execute RAG (Retrieval-Augmented Generation) queries
+- Participate in sequential pipeline workflows
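+
+For example, handing off to another agent or calling a tool is expressed directly in the agent's instructions via mentions. A minimal sketch is below; the agent name, tool name, and exact mention syntax shown are illustrative, not prescriptive:
+
+```
+If the user asks about a refund, pass control to [@agent:Refund Agent].
+If the user asks for order status, call [@tool:get_order_status] and summarize the result.
+```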
+
+---
+
+## Agent Types
+
+Rowboat supports several types of agents, each designed for specific use cases:
+
+| Name | Purpose | Characteristics |
+|------|---------|-----------------|
+| **Conversational Agents** (`conversation`) | Primary user-facing agents that interact directly with users and orchestrate workflows. | • Can respond to users and orchestrate workflows • Typically serve as the start agent (Hub Agent)|
+| **Task Agents** (`internal`) | Specialized agents that perform specific tasks without direct user interaction. | • Focused on specific functions • Return results to parent agents|
+| **Pipeline Agents** (`pipeline`) | Sequential workflow execution agents that process data in a chain. | • Execute in sequence within a pipeline • Cannot transfer to other agents directly|
+
+
+---
+
+## Agent Configuration
+
+Agents are configured through two main tabs in the Rowboat Studio interface:
+
+### **Instructions Tab**
+
+
+#### Description
+A clear description of the agent's role and responsibilities
+
+#### Instructions
+Instructions are the backbone of the agent's behavior. Use the Copilot's structured format for consistency:
+
+**Recommended Structure:**
+```
+## 🧑‍💼 Role:
+[Clear description of the agent's role]
+
+## ⚙️ Steps to Follow:
+1. [Step 1]
+2. [Step 2]
+3. [Step 3]
+
+## 🎯 Scope:
+✅ In Scope:
+- [What the agent should handle]
+
+❌ Out of Scope:
+- [What the agent should NOT handle]
+
+## 📋 Guidelines:
+✔️ Dos:
+- [Positive behaviors]
+
+🚫 Don'ts:
+- [Negative behaviors]
+```
+
+#### Examples
+These help agents behave correctly in specific situations. Each example can include:
+- A sample user message
+- The expected agent response
+- Any tool calls (if applicable)
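As an illustration, an example entry for a hypothetical delivery-status agent might look like the following (the exact fields shown in Studio may differ; the tool name here is made up):

```
User: Where is my order #A123?
Agent: Let me check that for you. [calls get_order_status(order_id="A123")]
Tool result: {"status": "out_for_delivery", "eta": "today, 6 PM"}
Agent: Your order #A123 is out for delivery and should arrive today by 6 PM.
```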
+
+### **Configurations Tab**
+
+#### Name
+The display name of the agent.
+
+
+#### Behavior
+- **Agent Type**: Choose from `conversation`, `internal`, or `pipeline`
+- **Model**: Select the LLM model (GPT-4.1, GPT-4o, google/gemini-2.5-flash, etc.)
+
+#### RAG
+- **Add Source**: Connect data sources to enable RAG capabilities for the agent
+
+---
+
+## Creating Your Initial Set of Agents
+
+Let Copilot bootstrap your agent graph.
+
+### Instruct Copilot
+
+Start by telling Copilot what your assistant is meant to do β it'll generate an initial set of agents with best-practice instructions, role definitions, and connected agents.
+
+
+
+
+
+### Inspect the Output
+
+After applying the suggested agents, take a close look at each one's:
+- **Instructions**: Define how the agent behaves
+- **Examples**: Guide agent responses and tool use
+
+
+
+
+
+---
+
+## Updating Agent Behavior
+
+There are two ways to update an agent:
+
+### 1. With Copilot
+
+Copilot understands the current chat context and can help rewrite or improve an agent's behavior based on how it performed.
+
+
+
+
+
+
+
+### 2. Manual Edits
+
+You can always manually edit the agent's instructions.
+
+
+
+
+
+---
+
+
diff --git a/apps/docs/docs/using-rowboat/conversations.mdx b/apps/docs/docs/using-rowboat/conversations.mdx
new file mode 100644
index 000000000..c755f47d2
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/conversations.mdx
@@ -0,0 +1,52 @@
+---
+title: "Conversations"
+description: "View and manage all conversations with your Rowboat agents"
+icon: "list-check"
+---
+
+## Overview
+
+The Conversations page in Rowboat shows you all the interactions between users and your agents. Here you can monitor conversations, view detailed message exchanges, and understand how your agents are performing.
+
+
+
+
+
+## What You'll See
+
+The Conversations page displays a list of all conversations organized by time:
+
+- **Today**: Recent conversations from today
+- **This week**: Conversations from the current week
+- **This month**: Conversations from the current month
+- **Older**: Conversations from previous months
+
+Each conversation shows:
+- **Conversation ID**: Unique identifier for the conversation
+- **Created time**: When the conversation started
+- **Reason**: What triggered the conversation (chat or job)
+
+## Viewing Conversation Details
+Click on any conversation to see the detailed view with all the message exchanges:
+
+
+
+
+
+**Conversation Metadata**: Shows the Conversation ID, creation time, and last update time.
+
+**Workflow**: Shows the workflow JSON
+
+**Turns**: Each conversation is made up of turns, where:
+- **Turn #1, #2, etc.**: Numbered sequence of interactions
+- **Reason badge**: Shows why each turn happened (chat, API, job, etc.)
+- **Timestamp**: When each turn occurred
+- **Input messages**: What was sent to your agents
+- **Output messages**: What your agents responded with
+
+### Turn Details
+
+Each turn displays:
+- **Input**: The messages sent to your agents (user messages, system messages)
+- **Output**: The responses from your agents
+- **Error information**: Any issues that occurred during processing
diff --git a/apps/docs/docs/using-rowboat/customise/custom-llms.mdx b/apps/docs/docs/using-rowboat/customise/custom-llms.mdx
new file mode 100644
index 000000000..64b27ca15
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/customise/custom-llms.mdx
@@ -0,0 +1,53 @@
+---
+title: "Custom LLMs"
+description: "How to use and configure custom LLMs in Rowboat."
+
+---
+
+ This is currently only possible in the self-hosted version of Rowboat.
+
+## Using custom LLM providers
+
+By default, Rowboat uses OpenAI models (gpt-4o, gpt-4.1, etc.) for both agents and the copilot when you export your `OPENAI_API_KEY`.
+
+However, you can also configure custom LLM providers (e.g. LiteLLM, OpenRouter) to access hundreds of LLMs beyond OpenAI, such as Claude, DeepSeek, or local models served through Ollama.
+
+
+
+ Configure your environment variables to point to your preferred LLM backend. Example using LiteLLM:
+
+ ```bash
+ export PROVIDER_BASE_URL=http://host.docker.internal:4000/
+ export PROVIDER_API_KEY=sk-1234
+ ```
+
+ Rowboat uses gpt-4.1 as the default model for agents and copilot. You can override these:
+
+ ```bash
+ export PROVIDER_DEFAULT_MODEL=claude-3-7-sonnet-latest
+ export PROVIDER_COPILOT_MODEL=gpt-4.1
+ ```
+
+ **Notes:**
+ - Copilot is optimized for gpt-4o/gpt-4.1. We strongly recommend using these models for best results.
+ - You can use different models for the copilot and each agent, but all must be from the same provider (e.g., LiteLLM).
+ - Rowboat is provider-agnostic β any backend implementing the OpenAI messages format should work.
+ - OpenAI-specific tools (like web_search) will not function with non-OpenAI providers. Remove such tools to avoid errors.
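 For reference, a minimal LiteLLM proxy config that could back the `PROVIDER_BASE_URL` above might look like this (model names and key references are placeholders; check LiteLLM's documentation for the full schema):

 ```yaml
 model_list:
   - model_name: claude-3-7-sonnet-latest
     litellm_params:
       model: anthropic/claude-3-7-sonnet-latest
       api_key: os.environ/ANTHROPIC_API_KEY
   - model_name: gpt-4.1
     litellm_params:
       model: openai/gpt-4.1
       api_key: os.environ/OPENAI_API_KEY
 ```

 Started with `litellm --config config.yaml --port 4000`, this proxy serves both models behind the single OpenAI-compatible endpoint that Rowboat expects.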
+
+
+
+ Clone the Rowboat repo and spin it up locally:
+
+ ```bash
+ git clone git@github.com:rowboatlabs/rowboat.git
+ cd rowboat
+ docker-compose up --build
+ ```
+
+
+
+ Once Docker is running, navigate to:
+
+ [http://localhost:3000](http://localhost:3000)
+
+
diff --git a/apps/docs/docs/using-rowboat/jobs.mdx b/apps/docs/docs/using-rowboat/jobs.mdx
new file mode 100644
index 000000000..5b3e14dfd
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/jobs.mdx
@@ -0,0 +1,45 @@
+---
+title: "Jobs"
+description: "Monitor and inspect all your trigger executions and job runs"
+icon: "message"
+---
+
+## Overview
+
+The Jobs page in Rowboat provides a comprehensive view of all your automated job executions. Here you can monitor the status of your triggers, inspect what happened during each run, and troubleshoot any issues that may have occurred.
+
+
+
+
+
+## What You'll See
+
+The Jobs page displays a list of all job runs from your triggers, including:
+
+- **External trigger executions** from webhook events
+- **One-time trigger runs** from scheduled jobs
+- **Recurring trigger executions** from cron-based schedules
+
+Each job run displays the following key information:
+- **Job ID**: Unique identifier for the job run
+- **Status**: Indicates if the job succeeded, failed, or is in progress
+- **Reason**: The trigger or cause for the job (e.g., external trigger, scheduled, cron)
+- **Created Time**: When the job was executed
+
+## Viewing Job Details
+
+### Expand a Job Run
+
+Click on any job run to expand it and see detailed information about what happened during execution:
+
+
+
+
+
+**Basic job details**: Job ID, status, creation time, last-updated time, Conversation ID, and Turn ID. By clicking on the Conversation ID, you can view more in-depth details about the run.
+
+**Job Reason**: Why the job was triggered (external trigger, scheduled, or cron).
+
+**Job Input**: The input data sent to your assistant.
+
+**Job Output**: The final output produced by your agents.
\ No newline at end of file
diff --git a/apps/docs/docs/using-rowboat/rag.mdx b/apps/docs/docs/using-rowboat/rag.mdx
new file mode 100644
index 000000000..772d034d5
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/rag.mdx
@@ -0,0 +1,68 @@
+---
+title: "RAG (Data)"
+description: "How to use our inbuilt RAG"
+icon: "database"
+---
+
+# Using RAG in Rowboat
+
+Rowboat provides multiple ways to enhance your agents' context with Retrieval-Augmented Generation (RAG). This guide will help you set up and use each RAG feature.
+
+RAG is called "Data" on the build view in the Rowboat UI.
+
+---
+
+## Types of RAG
+
+| RAG Type | Description | Configuration Required |
+|----------|-------------|------------------------|
+| **Text RAG** | Process and reason over text content directly | No configuration needed |
+| **File Uploads** | Upload PDF files directly from your device | No configuration needed |
+| **URL Scraping** | Scrape content from web URLs using Firecrawl | Requires API key setup |
+
+ URL Scraping does not require any setup in the managed version of Rowboat.
+
+
+
+
+
+
+## RAG Features
+
+### 1. Text RAG
+
+Process and reason over text content directly
+
+
+### 2. File Uploads
+
+- Upload PDF files directly from your device
+- **Open Source Version**: Files are stored locally on your machine
+- **Managed Version**: Files are stored in cloud S3 storage
+- Files are parsed using OpenAI by default
+
+ You can also use Google's Gemini model for parsing, as it handles larger files better.
+
+#### 2.1 Using Gemini for File Parsing
+
+To use Google's Gemini model for parsing uploaded PDFs, set the following variables:
+
+```bash
+# Enable Gemini for file parsing
+export USE_GEMINI_FILE_PARSING=true
+export GOOGLE_API_KEY=your_google_api_key
+```
+
+
+### 3. URL Scraping
+
+Rowboat uses Firecrawl for URL scraping. You can add up to 100 URLs.
+
+**Open Source Version**: To enable URL scraping, set the following variables:
+
+```bash
+export USE_RAG_SCRAPING=true
+export FIRECRAWL_API_KEY=your_firecrawl_api_key
+```
+
+**Managed Version**: No configuration required - URL scraping is handled automatically.
diff --git a/apps/docs/docs/using-rowboat/rowboat-studio.mdx b/apps/docs/docs/using-rowboat/rowboat-studio.mdx
new file mode 100644
index 000000000..edf74fb3e
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/rowboat-studio.mdx
@@ -0,0 +1,60 @@
+---
+title: "Rowboat Studio"
+description: "Visual interface to build, test, and deploy multi-agent AI assistants using plain language"
+icon: "puzzle-piece"
+---
+
+
+## Overview
+
+**Rowboat Studio** is your visual interface for building AI assistants β powered by agents, tools, and workflows β using plain language and minimal setup. It brings the process of creating multi-agent systems down to just a few clicks.
+
+Workflows created within Rowboat are known as **assistants**, and each assistant is composed of:
+- One or more **agents**
+- Attached **tools** and **MCP servers**
+
+Once built, assistants can be tested live in the **playground** and deployed in real-world products using the [API](/docs/api-sdk/using_the_api) or [SDK](/docs/api-sdk/using_the_sdk).
+
+---
+
+## Key Components
+
+Hereβs what youβll interact with in Studio:
+
+| Component | Description | Highlights |
+|------------|-------------|------------|
+| **Agent** | Core building blocks of your assistant. Each agent handles a specific part of the conversation and performs tasks using tools and instructions. | β’ Define behavior in plain language β’ Connect agents into a graph β’ Attach tools and RAG sources |
+| **Playground** | Interactive testbed for conversations. Lets you simulate end-user chats with your assistant and inspect agent behavior in real time. | β’ Real-time feedback and debugging β’ See tool calls and agent handoffs β’ Test individual agents or the whole system |
+| **Copilot** | Your AI assistant for building assistants. Copilot creates and updates agents, tools, and instructions based on your plain-English prompts. | β’ Understands full system context β’ Improves agents based on playground chat β’ Builds workflows intelligently and fast |
+
+> **Agents are the heart of every assistant.** Learn more about how they work on the Agents page.
+
+
+ Learn about creating and configuring individual agents within your multi-agent system
+
+---
+
+## Building in Rowboat
+
+
+
+ Use plain language to tell Copilot what you want your assistant to do. Copilot will auto-generate the agents, instructions, and tools that form the base of your assistant.
+
+
+
+ Inspect the created agents β especially their instructions and examples β and refine or approve them before moving forward.
+
+
+
+ Integrate external services, tools, and backend logic into your agents using Rowboat's modular system. Tools are tied to agents and triggered through instructions.
+
+
+
+ Use the chat playground to simulate real-world conversations. Youβll see which agent takes control, what tools are triggered, and how your assistant flows.
+
+
+
+ Assistants can be deployed into production using the **Rowboat Chat API** or the **Rowboat Chat SDK**. Both support stateless and stateful conversation flows.
+
+
+
diff --git a/apps/docs/docs/using-rowboat/tools.mdx b/apps/docs/docs/using-rowboat/tools.mdx
new file mode 100644
index 000000000..7a116d634
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/tools.mdx
@@ -0,0 +1,56 @@
+---
+title: "Tools"
+description: "Add and configure tools for your agents to interact with external services"
+icon: "wrench"
+---
+
+## Overview
+
+The Tools page in Rowboat lets you add and configure tools that your agents can use to interact with external services, APIs, and systems. Tools enable your agents to perform real-world actions like sending emails, managing calendars, or processing payments.
+
+
+
+
+
+## Tool Types
+
+| Tool Type | Description | Use Case | Availability |
+|-----------|-------------|----------|--------------|
+| **Library Tools** | Pre-built integrations with popular services | Quick setup, no configuration needed | Managed and open source |
+| **MCP Tools** | Custom tools from MCP servers | Custom functionality, specialized APIs | Managed and open source |
+| **Webhook Tools** | HTTP endpoints for custom integrations | Your own systems, custom workflows | Open source only |
+
+
+## Library (Composio Tools)
+
+- Browse a library of 500+ toolkits from popular services, with 3,000+ tools to choose from
+- Click on a service to see available tools and add them to your workflow
+- Create a [Composio](https://composio.dev/) account and add your API key
+- Tools require authorization to work properly
+
+### Setting up Composio API Key
+
+To use Composio tools, get a Composio key and export it as an environment variable:
+
+```bash
+export COMPOSIO_API_KEY=your_api_key_here
+```
+
+Users can visit [Composio's toolkit documentation](https://docs.composio.dev/toolkits/introduction) for a deep dive into all the tools available.
+
+## Custom MCP Servers
+
+- Add your own MCP (Model Context Protocol) servers
+- Connect to custom tools and APIs you've built
+- Configure server URLs and authentication
+- Import tools from your MCP servers
+
+## Webhook
+
+Webhook tools are only available in the open source (local) version of Rowboat.
+
+- Create custom webhook tools
+- Configure HTTP endpoints for your agents to call
+- Set up custom authentication and parameters
+- Build integrations with your own systems
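Since a webhook tool is just an HTTP endpoint your agents call, its core is a function that maps a tool-call payload to a result. A minimal sketch follows; the payload field names (`toolName`, `arguments`) and the `lookup_order` tool are illustrative assumptions, not Rowboat's exact webhook contract:

```python
import json

def handle_tool_call(payload: dict) -> dict:
    """Map an incoming tool-call payload to a result.

    In a real deployment this logic would sit behind the HTTP
    endpoint you register as the webhook tool's URL.
    """
    # Hypothetical local tool implementations, keyed by tool name.
    tools = {
        "lookup_order": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    }
    name = payload.get("toolName")
    if name not in tools:
        return {"error": f"unknown tool: {name}"}
    return {"result": tools[name](payload.get("arguments", {}))}

print(json.dumps(handle_tool_call(
    {"toolName": "lookup_order", "arguments": {"order_id": "A123"}}
)))
```

The dispatch-table shape keeps adding a new tool down to one entry in `tools`, with unknown tool names reported back to the agent instead of raising.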
diff --git a/apps/docs/docs/using-rowboat/triggers.mdx b/apps/docs/docs/using-rowboat/triggers.mdx
new file mode 100644
index 000000000..32c81f66d
--- /dev/null
+++ b/apps/docs/docs/using-rowboat/triggers.mdx
@@ -0,0 +1,131 @@
+---
+title: "Triggers"
+description: "Learn about setting up automated triggers for your Rowboat agents"
+icon: "bolt"
+---
+
+## Overview
+
+Triggers in Rowboat are automated mechanisms that activate your agents when specific events occur or conditions are met. They form the foundation of your automated workflow system, enabling your agents to respond to external events, scheduled times, and system conditions without manual intervention.
+
+
+
+## Trigger Types
+
+Rowboat supports three main categories of triggers, each designed for different automation scenarios:
+
+| Trigger Type | Purpose | Execution | Use Cases |
+|--------------|---------|-----------|-----------|
+| **External Triggers** | Connect to external services and events | Real-time via webhooks | Slack messages, GitHub events, email processing |
+| **One-Time Triggers** | Execute at specific predetermined times | Single execution at set time | Delayed responses, time-sensitive actions |
+| **Recurring Triggers** | Execute on repeating schedules | Continuous via cron expressions | Daily reports, periodic maintenance, regular syncs |
+
+---
+
+## External Triggers (Composio Integration)
+
+External triggers are powered by **Composio** and let you react to events from 30+ services, including Slack, GitHub, Gmail, Notion, Google Calendar, and more.
+
+
+
+
+
+### Creating External Triggers
+
+1. **Click New External Trigger**: Start the trigger creation process
+2. **Select a Toolkit**: Browse available toolkits or search for specific services
+3. **Choose Trigger Type**: Select the specific trigger from the available options and click Configure
+4. **Authenticate**: Complete the OAuth2 flow or enter API keys for the selected service, using your preferred method
+5. **Configure**: Set up event filters and data mapping if required
+6. **Deploy**: Activate the trigger to start listening for events
+
+### Local Setup
+
+If you're running the open source version of Rowboat, you'll need to set up external triggers manually. In the managed version, this is all handled automatically for you.
+
+
+
+ Sign into [Composio](https://composio.dev/) and create a new project for your Rowboat instance.
+
+
+
+ Go to your project settings and copy the project API key. Export it in your Rowboat environment:
+
+ ```bash
+ export COMPOSIO_API_KEY=your-composio-api-key
+ ```
+
+
+
+ Use ngrok to expose your local Rowboat instance:
+
+ ```bash
+ ngrok http 3000
+ ```
+
+ Copy the generated ngrok URL (e.g., `https://a5fe8c0d45b8.ngrok-free.app`).
+
+
+
+ In Composio, go to Events & Triggers section and set the Trigger Webhook URL to:
+
+ ```
+ {ngrok_url}/api/composio/webhook
+ ```
+
+ Example: `https://a5fe8c0d45b8.ngrok-free.app/api/composio/webhook`
+
+
+
+ Copy the Webhook Secret from Composio and export it in Rowboat:
+
+ ```bash
+ export COMPOSIO_TRIGGERS_WEBHOOK_SECRET=your-webhook-secret
+ ```
+
+
+
+ Restart your Rowboat instance to load the new environment variables. You're now ready to use external triggers!
+
+
+
+Make sure your Rowboat assistant is deployed before receiving trigger calls.
+
+---
+
+## One-Time Triggers (Scheduled Jobs)
+
+One-time triggers execute your agents at a specific, predetermined time. They're useful for delayed responses, batch processing, time-sensitive actions, or coordinating with external schedules.
+
+
+
+
+
+### Creating One-Time Triggers
+1. Set the exact execution time (date and time)
+2. Configure the input messages for your agents
+3. Deploy to schedule the execution
+
+---
+
+## Recurring Triggers (Cron-based Jobs)
+
+Recurring triggers execute your agents on a repeating schedule using cron expressions. They're ideal for daily reports, periodic maintenance, regular data syncs, and continuous monitoring tasks.
+
+
+
+
+
+### Creating Recurring Triggers
+1. Define the cron expression (e.g., `0 9 * * *` for daily at 9 AM)
+2. Configure the recurring message structure
+3. Enable the trigger to start the recurring schedule
+
+### Common Cron Patterns
+```cron
+0 9 * * * # Daily at 9:00 AM
+0 8 * * 1 # Every Monday at 8:00 AM
+*/15 * * * * # Every 15 minutes
+0 0 1 * * # First day of month at midnight
+```
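To make the patterns above concrete, here is a small sketch of how a single cron field expands into the concrete minutes or hours it matches (standard five-field cron; range syntax like `1-5` is omitted for brevity):

```python
def expand_field(field: str, lo: int, hi: int) -> list[int]:
    """Expand one cron field into the values it matches within [lo, hi]."""
    if field == "*":
        # Wildcard: every value in the field's range.
        return list(range(lo, hi + 1))
    if field.startswith("*/"):
        # Step syntax: every Nth value starting from the range minimum.
        step = int(field[2:])
        return list(range(lo, hi + 1, step))
    # Comma-separated list of explicit values.
    return [int(v) for v in field.split(",")]

# The minutes field of "*/15 * * * *" fires at 0, 15, 30, and 45.
print(expand_field("*/15", 0, 59))
# The hour field of "0 9 * * *" fires only at hour 9.
print(expand_field("9", 0, 23))
```

Applying this to each of the five fields (minute, hour, day of month, month, day of week) gives the full set of times a recurring trigger will run.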
+
diff --git a/apps/docs/docs/using_rag.md b/apps/docs/docs/using_rag.md
deleted file mode 100644
index 52deefadf..000000000
--- a/apps/docs/docs/using_rag.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Using RAG in Rowboat
-
-Rowboat provides multiple ways to enhance your agents' context with Retrieval-Augmented Generation (RAG). This guide will help you set up and use each RAG features.
-
-## Quick Start
-
-Text RAG and local file uploads are enabled by default - no configuration needed! Just start using them right away.
-
-## RAG Features
-
-### 1. Text RAG
-β Enabled by default:
-
-- Process and reason over text content directly
-- No configuration required
-
-### 2. Local File Uploads
-β Enabled by default:
-
-- Upload PDF files directly from your device
-- Files are stored locally
-- No configuration required
-- Files are parsed using OpenAI by default
-- For larger files, we recommend using Gemini models - see section below.
-
-#### 2.1 Using Gemini for File Parsing
-To use Google's Gemini model for parsing uploaded PDFs, set the following variable:
-
-```bash
-# Enable Gemini for file parsing
-export USE_GEMINI_FILE_PARSING=true
-export GOOGLE_API_KEY=your_google_api_key
-```
-
-### 3. URL Scraping
-Rowboat uses Firecrawl for URL scraping. To enable URL scraping, set the following variables:
-
-```bash
-export USE_RAG_SCRAPING=true
-export FIRECRAWL_API_KEY=your_firecrawl_api_key
-```
-
-## Advanced RAG features
-
-### 1. File Uploads Backed by S3
-To enable S3 file uploads, set the following variables:
-
-```bash
-# Enable S3 uploads
-export USE_RAG_S3_UPLOADS=true
-
-# S3 Configuration
-export AWS_ACCESS_KEY_ID=your_access_key
-export AWS_SECRET_ACCESS_KEY=your_secret_key
-export RAG_UPLOADS_S3_BUCKET=your_bucket_name
-export RAG_UPLOADS_S3_REGION=your_region
-```
-
-### 2. Changing Default Parsing Model
-
-By default, uploaded PDF files are parsed using `gpt-4o`. You can customize this by setting the following:
-
-```bash
-# Override the default parsing model
-export FILE_PARSING_MODEL=your-preferred-model
-```
-
-You can also change the model provider like so:
-```bash
-# Optional: Override the parsing provider settings
-export FILE_PARSING_PROVIDER_BASE_URL=your-provider-base-url
-export FILE_PARSING_PROVIDER_API_KEY=your-provider-api-key
-```
-
-### 3. Embedding Model Options
-
-By default, Rowboat uses OpenAI's `text-embedding-3-small` model for generating embeddings. You can customize this by setting the following:
-
-```bash
-# Override the default embedding model
-export EMBEDDING_MODEL=your-preferred-model
-export EMBEDDING_VECTOR_SIZE=1536
-```
-
-**Important NOTE**
-
-The default size for the vectors index is 1536. If you change this value, then you must delete the index and set it up again:
-```bash
-docker-compose --profile delete_qdrant --profile qdrant up --build delete_qdrant qdrant
-```
-followed by:
-```bash
-./start # this will recreate the index
-```
-
-You can also change the model provider like so:
-```bash
-# Optional: Override the embedding provider settings
-export EMBEDDING_PROVIDER_BASE_URL=your-provider-base-url
-export EMBEDDING_PROVIDER_API_KEY=your-provider-api-key
-```
-
-If you don't specify the provider settings, Rowboat will use OpenAI as the default provider.
diff --git a/apps/docs/docs/using_the_sdk.md b/apps/docs/docs/using_the_sdk.md
deleted file mode 100644
index c867bcbc9..000000000
--- a/apps/docs/docs/using_the_sdk.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Using the Python SDK
-
-This is a guide on using the RowBoat Python SDK as an alternative to the [RowBoat HTTP API](/using_the_api) to power conversations with the assistant created in Studio.
-
-## Prerequisites
-- ``` pip install rowboat ```
-- [Deploy your assistant to production](/using_the_api/#deploy-your-assistant-to-production-on-studio)
-- [Obtain your `` and ``](/using_the_api/#obtain-api-key-and-project-id)
-
-### API Host
-- For the open source installation, the `` is [http://localhost:3000](http://localhost:3000)
-- When using the hosted app, the `` is [https://app.rowboatlabs.com](https://app.rowboatlabs.com)
-
-## Usage
-
-### Basic Usage with StatefulChat
-
-The easiest way to interact with Rowboat is using the `StatefulChat` class, which maintains conversation state automatically:
-
-```python
-from rowboat import Client, StatefulChat
-
-# Initialize the client
-client = Client(
- host="",
- project_id="",
- api_key=""
-)
-
-# Create a stateful chat session
-chat = StatefulChat(client)
-
-# Have a conversation
-response = chat.run("What is the capital of France?")
-print(response)
-# The capital of France is Paris.
-
-# Continue the conversation - the context is maintained automatically
-response = chat.run("What other major cities are in that country?")
-print(response)
-# Other major cities in France include Lyon, Marseille, Toulouse, and Nice.
-
-response = chat.run("What's the population of the first city you mentioned?")
-print(response)
-# Lyon has a population of approximately 513,000 in the city proper.
-```
-
-### Advanced Usage
-
-#### Using a specific workflow
-
-You can specify a workflow ID to use a particular conversation configuration:
-
-```python
-chat = StatefulChat(
- client,
- workflow_id=""
-)
-```
-
-#### Using a test profile
-
-You can specify a test profile ID to use a specific test configuration:
-
-```python
-chat = StatefulChat(
- client,
- test_profile_id=""
-)
-```
-
-### Low-Level Usage
-
-For more control over the conversation, you can use the `Client` class directly:
-
-```python
-from rowboat.schema import UserMessage
-
-# Initialize the client
-client = Client(
- host="",
- project_id="",
- api_key=""
-)
-
-# Create messages
-messages = [
- UserMessage(role='user', content="Hello, how are you?")
-]
-
-# Get response
-response = client.chat(messages=messages)
-print(response.messages[-1].content)
-
-# For subsequent messages, you need to manage the message history and state manually
-messages.extend(response.messages)
-messages.append(UserMessage(role='user', content="What's your name?"))
-response = client.chat(messages=messages, state=response.state)
-```
\ No newline at end of file
diff --git a/apps/docs/docs/videos/Intro-Video.gif b/apps/docs/docs/videos/Intro-Video.gif
new file mode 100644
index 000000000..d4eb387ef
Binary files /dev/null and b/apps/docs/docs/videos/Intro-Video.gif differ
diff --git a/apps/docs/docs/videos/intro.mp4 b/apps/docs/docs/videos/intro.mp4
new file mode 100644
index 000000000..4306a886e
Binary files /dev/null and b/apps/docs/docs/videos/intro.mp4 differ
diff --git a/apps/docs/docs/img/favicon.ico b/apps/docs/favicon.ico
similarity index 100%
rename from apps/docs/docs/img/favicon.ico
rename to apps/docs/favicon.ico
diff --git a/apps/docs/mkdocs.yml b/apps/docs/mkdocs.yml
deleted file mode 100644
index 4e5eb44c5..000000000
--- a/apps/docs/mkdocs.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-site_name: RowBoat docs
-site_url: https://docs.rowboatlabs.com
-theme:
- name: material
- favicon: img/favicon.ico
-nav:
- - Getting Started: setup.md
- - Overview:
- - Introduction: index.md
- - Open Source License: license.md
-
- - Building in Studio:
- - Create agents: create_agents.md
- - Test chats in the playground: playground.md
- - Add tools: add_tools.md
- - Update agents: update_agents.md
- - Using RAG: using_rag.md
-
- - API & SDK:
- - Using the API: using_the_api.md
- - Using the SDK: using_the_sdk.md
-
- - Concepts:
- - Agents: agents.md
- - Tools: tools.md
- - Prompts: prompts.md
diff --git a/apps/docs/readme.md b/apps/docs/readme.md
deleted file mode 100644
index 2783c5186..000000000
--- a/apps/docs/readme.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Documentation Site
-
-This documentation site is built using [MkDocs Material](https://squidfunk.github.io/mkdocs-material/), a modern documentation framework that creates beautiful and functional static sites.
-
-## Prerequisites
-
-- Python 3.x
-- pip (Python package manager)
-
-## Setup
-
-Install the required dependencies:
-
-```bash
-pip install -r requirements.txt
-```
-
-## Development
-
-To run the documentation site locally:
-
-```bash
-mkdocs serve
-```
-
-This will start a local server, and you can view the documentation site at `http://localhost:8000`.
-
-### Building the static site
-
-To build the static site:
-
-```bash
-mkdocs build
-```
-
-This will generate the static site in the `site` directory.
-
-### Project structure
-
-- `mkdocs.yml`: The main configuration file for MkDocs.
-- `docs/`: The directory containing the Markdown files for the documentation.
-- `site/`: The directory generated by the `mkdocs build` command, containing the static site files.
-
-## Writing Documentation
-
-- Documentation files are written in Markdown format
-- Place new documentation files in the `docs` directory
-- Update `mkdocs.yml` to include new pages in the navigation
-
-## Additional Resources
-
-- [Mkdocs documentation](https://www.mkdocs.org/getting-started/)
-- [MkDocs Material Documentation](https://squidfunk.github.io/mkdocs-material/getting-started/)
-- [Markdown Guide](https://www.markdownguide.org/basic-syntax/)
-
diff --git a/apps/docs/requirements.txt b/apps/docs/requirements.txt
deleted file mode 100644
index fa6a6dd1d..000000000
--- a/apps/docs/requirements.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-babel==2.16.0
-certifi==2024.12.14
-charset-normalizer==3.4.1
-click==8.1.8
-colorama==0.4.6
-ghp-import==2.1.0
-idna==3.10
-Jinja2==3.1.5
-Markdown==3.7
-MarkupSafe==3.0.2
-mergedeep==1.3.4
-mkdocs==1.6.1
-mkdocs-get-deps==0.2.0
-mkdocs-material==9.5.50
-mkdocs-material-extensions==1.3.1
-packaging==24.2
-paginate==0.5.7
-pathspec==0.12.1
-platformdirs==4.3.6
-Pygments==2.19.1
-pymdown-extensions==10.14.1
-python-dateutil==2.9.0.post0
-PyYAML==6.0.2
-pyyaml_env_tag==0.1
-regex==2024.11.6
-requests==2.32.3
-six==1.17.0
-urllib3==2.3.0
-watchdog==6.0.0
diff --git a/apps/experimental/simulation_runner/simulation.py b/apps/experimental/simulation_runner/simulation.py
index 39618897d..f0a0c4d6a 100644
--- a/apps/experimental/simulation_runner/simulation.py
+++ b/apps/experimental/simulation_runner/simulation.py
@@ -11,7 +11,7 @@
from rowboat import Client, StatefulChat
openai_client = OpenAI()
-MODEL_NAME = "gpt-4o"
+MODEL_NAME = "gpt-4.1"
ROWBOAT_API_HOST = os.environ.get("ROWBOAT_API_HOST", "http://127.0.0.1:3000").strip()
async def simulate_simulation(
diff --git a/apps/rowboat/app/actions/assistant-templates.actions.ts b/apps/rowboat/app/actions/assistant-templates.actions.ts
new file mode 100644
index 000000000..a525914d5
--- /dev/null
+++ b/apps/rowboat/app/actions/assistant-templates.actions.ts
@@ -0,0 +1,264 @@
+"use server";
+
+import { z } from 'zod';
+import { authCheck } from "./auth.actions";
+import { MongoDBAssistantTemplatesRepository } from '@/src/infrastructure/repositories/mongodb.assistant-templates.repository';
+import { prebuiltTemplates } from '@/app/lib/prebuilt-cards';
+import { USE_AUTH } from '@/app/lib/feature_flags';
+// import { ensureLibraryTemplatesSeeded } from '@/app/lib/assistant_templates_seed';
+
+const repo = new MongoDBAssistantTemplatesRepository();
+
+// Helper function to serialize MongoDB objects for client components
+function serializeTemplate(template: any) {
+ return JSON.parse(JSON.stringify(template));
+}
+
+function serializeTemplates(templates: any[]) {
+ return templates.map(serializeTemplate);
+}
+
+const ListTemplatesSchema = z.object({
+ category: z.string().optional(),
+ search: z.string().optional(),
+ featured: z.boolean().optional(),
+ source: z.enum(['library','community']).optional(),
+ cursor: z.string().optional(),
+ limit: z.number().min(1).max(50).default(20),
+});
+
+const CreateTemplateSchema = z.object({
+ name: z.string().min(1).max(100),
+ description: z.string().min(1).max(500),
+ category: z.string().min(1),
+ tags: z.array(z.string()).max(10),
+ isAnonymous: z.boolean().default(false),
+ workflow: z.any(),
+ copilotPrompt: z.string().optional(),
+ thumbnailUrl: z.string().url().optional(),
+});
+
+type ListResponse = { items: any[]; nextCursor: string | null };
+
+function buildPrebuiltList(params: z.infer<typeof ListTemplatesSchema>): ListResponse {
+ const allPrebuilt = Object.entries(prebuiltTemplates).map(([key, tpl]) => ({
+ id: `prebuilt:${key}`,
+ name: (tpl as any).name || key,
+ description: (tpl as any).description || '',
+ category: (tpl as any).category || 'Other',
+ tools: (tpl as any).tools || [],
+ createdAt: (tpl as any).lastUpdatedAt || undefined,
+ source: 'library' as const,
+ }));
+
+ let filtered = allPrebuilt;
+ if (params.category) {
+ filtered = filtered.filter(t => t.category === params.category);
+ }
+ if (params.search) {
+ const q = params.search.toLowerCase();
+ filtered = filtered.filter(t =>
+ t.name.toLowerCase().includes(q) ||
+ t.description.toLowerCase().includes(q) ||
+ t.category.toLowerCase().includes(q)
+ );
+ }
+
+ const startIndex = params.cursor ? parseInt(params.cursor, 10) || 0 : 0;
+ const endIndex = Math.min(startIndex + params.limit, filtered.length);
+ const pageItems = filtered.slice(startIndex, endIndex);
+ const nextCursor = endIndex < filtered.length ? String(endIndex) : null;
+
+ return { items: pageItems, nextCursor };
+}
+
+export async function listAssistantTemplates(request: z.infer<typeof ListTemplatesSchema>): Promise<ListResponse> {
+ const user = await authCheck();
+
+ // Prebuilt templates should never be seeded to DB
+
+ const params = ListTemplatesSchema.parse(request);
+
+ // If source specified, return that subset; for 'library' use in-memory prebuilt from code
+ if (params.source === 'library') {
+ const { items, nextCursor } = buildPrebuiltList(params);
+ return { items: serializeTemplates(items), nextCursor };
+ }
+
+ if (params.source === 'community') {
+ const result = await repo.list({
+ category: params.category,
+ search: params.search,
+ featured: params.featured,
+ isPublic: true,
+ source: 'community',
+ }, params.cursor, params.limit);
+
+ const itemsWithLikeStatus = await addLikeStatusToTemplates(result.items, user.id);
+ return { ...result, items: serializeTemplates(itemsWithLikeStatus) };
+ }
+
+ // No source: return prebuilt from code + first page of community from DB
+ const prebuilt = buildPrebuiltList({ ...params, source: 'library' } as any).items;
+ const communityPage = await repo.list({
+ category: params.category,
+ search: params.search,
+ featured: params.featured,
+ isPublic: true,
+ source: 'community',
+ }, undefined, params.limit);
+ const items = [...prebuilt, ...communityPage.items];
+ return { items: serializeTemplates(items), nextCursor: null };
+}
+
+// Get a specific template by ID with model transformation
+export async function getAssistantTemplate(templateId: string) {
+ const user = await authCheck();
+
+ // Prebuilt: load directly from code
+ if (templateId.startsWith('prebuilt:')) {
+ const key = templateId.replace('prebuilt:', '');
+ const originalTemplate = prebuiltTemplates[key as keyof typeof prebuiltTemplates];
+ if (!originalTemplate) throw new Error('Template not found');
+
+ const defaultModel = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4.1';
+ const transformedWorkflow = JSON.parse(JSON.stringify(originalTemplate));
+ if (transformedWorkflow.agents && Array.isArray(transformedWorkflow.agents)) {
+ transformedWorkflow.agents.forEach((agent: any) => {
+ if (agent.model === '') {
+ agent.model = defaultModel;
+ }
+ });
+ }
+
+ // Return minimal shape expected by callers
+ const result = {
+ id: templateId,
+ name: (originalTemplate as any).name || key,
+ description: (originalTemplate as any).description || '',
+ category: (originalTemplate as any).category || 'Other',
+ workflow: transformedWorkflow,
+ source: 'library' as const,
+ };
+ return serializeTemplate(result);
+ }
+
+ // Community template from DB
+ const template = await repo.fetch(templateId);
+ if (!template) throw new Error('Template not found');
+ return serializeTemplate(template);
+}
+
+export async function getAssistantTemplateCategories() {
+ const user = await authCheck();
+
+ const categories = await repo.getCategories();
+ return { items: categories };
+}
+
+
+export async function createAssistantTemplate(data: z.infer<typeof CreateTemplateSchema>) {
+ const user = await authCheck();
+
+ const validatedData = CreateTemplateSchema.parse(data);
+
+ let authorName = 'Anonymous';
+ let authorEmail: string | undefined;
+
+ if (USE_AUTH) {
+ try {
+ const { auth0 } = await import('@/app/lib/auth0');
+ const { user: auth0User } = await auth0.getSession() || {};
+ if (auth0User) {
+ authorName = auth0User.name ?? auth0User.email ?? 'Anonymous';
+ authorEmail = auth0User.email;
+ }
+ } catch (error) {
+ console.warn('Could not get Auth0 user info:', error);
+ }
+ }
+
+ if (validatedData.isAnonymous) {
+ authorName = 'Anonymous';
+ authorEmail = undefined;
+ }
+
+ const created = await repo.create({
+ name: validatedData.name,
+ description: validatedData.description,
+ category: validatedData.category,
+ authorId: user.id,
+ authorName,
+ authorEmail,
+ isAnonymous: validatedData.isAnonymous,
+ workflow: validatedData.workflow,
+ tags: validatedData.tags,
+ copilotPrompt: validatedData.copilotPrompt,
+ thumbnailUrl: validatedData.thumbnailUrl,
+ downloadCount: 0,
+ likeCount: 0,
+ featured: false,
+ isPublic: true,
+ likes: [],
+ source: 'community',
+ });
+
+ return serializeTemplate(created);
+}
+
+export async function deleteAssistantTemplate(id: string) {
+ const user = await authCheck();
+
+ const item = await repo.fetch(id);
+ if (!item) {
+ throw new Error('Template not found');
+ }
+
+ // Disallow deleting library/prebuilt items
+ if ((item as any).source === 'library' || item.authorId === 'rowboat-system') {
+ throw new Error('Not allowed to delete this template');
+ }
+
+ if (item.authorId !== user.id) {
+ // Do not reveal existence
+ throw new Error('Template not found');
+ }
+
+ const ok = await repo.deleteByIdAndAuthor(id, user.id);
+ if (!ok) {
+ throw new Error('Template not found');
+ }
+
+ return { success: true };
+}
+
+export async function toggleTemplateLike(id: string) {
+ const user = await authCheck();
+
+ // Use authenticated user ID instead of guest ID
+ const result = await repo.toggleLike(id, user.id);
+ return serializeTemplate(result);
+}
+
+export async function getCurrentUser() {
+ const user = await authCheck();
+ return { id: user.id };
+}
+
+// Helper function to add isLiked status to templates
+async function addLikeStatusToTemplates(templates: any[], userId: string) {
+ if (templates.length === 0) return templates;
+
+ // Get all template IDs
+ const templateIds = templates.map(t => t.id);
+
+ // Check which templates the user has liked
+ const likedTemplates = await repo.getLikedTemplates(templateIds, userId);
+ const likedSet = new Set(likedTemplates);
+
+ // Add isLiked property to each template
+ return templates.map(template => ({
+ ...template,
+ isLiked: likedSet.has(template.id)
+ }));
+}
diff --git a/apps/rowboat/app/actions/composio.actions.ts b/apps/rowboat/app/actions/composio.actions.ts
index daddc9308..8af20aaa6 100644
--- a/apps/rowboat/app/actions/composio.actions.ts
+++ b/apps/rowboat/app/actions/composio.actions.ts
@@ -157,6 +157,7 @@ export async function createComposioTriggerDeployment(request: {
export async function listComposioTriggerDeployments(request: {
projectId: string,
cursor?: string,
+ limit?: number,
}) {
const user = await authCheck();
@@ -166,6 +167,7 @@ export async function listComposioTriggerDeployments(request: {
userId: user.id,
projectId: request.projectId,
cursor: request.cursor,
+ limit: request.limit,
});
}
@@ -191,4 +193,4 @@ export async function fetchComposioTriggerDeployment(request: { deploymentId: st
userId: user.id,
deploymentId: request.deploymentId,
});
-}
\ No newline at end of file
+}
diff --git a/apps/rowboat/app/actions/copilot.actions.ts b/apps/rowboat/app/actions/copilot.actions.ts
index de9b1baec..e4d867311 100644
--- a/apps/rowboat/app/actions/copilot.actions.ts
+++ b/apps/rowboat/app/actions/copilot.actions.ts
@@ -3,6 +3,7 @@ import {
CopilotAPIRequest,
CopilotChatContext, CopilotMessage,
DataSourceSchemaForCopilot,
+ TriggerSchemaForCopilot,
} from "../../src/entities/models/copilot";
import {
Workflow} from "../lib/types/workflow_types";
@@ -26,7 +27,8 @@ export async function getCopilotResponseStream(
     messages: z.infer<typeof CopilotMessage>[],
     current_workflow_config: z.infer<typeof Workflow>,
     context: z.infer<typeof CopilotChatContext> | null,
-    dataSources?: z.infer<typeof DataSourceSchemaForCopilot>[]
+    dataSources?: z.infer<typeof DataSourceSchemaForCopilot>[],
+    triggers?: z.infer<typeof TriggerSchemaForCopilot>[]
): Promise<{
streamId: string;
} | { billingError: string }> {
@@ -42,6 +44,7 @@ export async function getCopilotResponseStream(
workflow: current_workflow_config,
context,
dataSources,
+ triggers,
}
});
return {
diff --git a/apps/rowboat/app/actions/project.actions.ts b/apps/rowboat/app/actions/project.actions.ts
index 5f0426944..c8a0182ca 100644
--- a/apps/rowboat/app/actions/project.actions.ts
+++ b/apps/rowboat/app/actions/project.actions.ts
@@ -2,7 +2,8 @@
import { z } from 'zod';
import { container } from "@/di/container";
import { redirect } from "next/navigation";
-import { templates } from "../lib/project_templates";
+// Fetch library templates from the unified assistant templates repository
+import { MongoDBAssistantTemplatesRepository } from "@/src/infrastructure/repositories/mongodb.assistant-templates.repository";
import { authCheck } from "./auth.actions";
import { ApiKey } from "@/src/entities/models/api-key";
import { Project } from "@/src/entities/models/project";
@@ -40,14 +41,17 @@ const updateLiveWorkflowController = container.resolve('revertToLiveWorkflowController');
export async function listTemplates() {
- const templatesArray = Object.entries(templates)
- .filter(([key]) => key !== 'default') // Exclude the default template
- .map(([key, template]) => ({
- id: key,
- ...template
- }));
-
- return templatesArray;
+ const repo = new MongoDBAssistantTemplatesRepository();
+ const result = await repo.list({ source: 'library', isPublic: true }, undefined, 100);
+ // Map to the shape expected by callers (tools at top-level)
+ return result.items.map((item) => ({
+ id: item.id,
+ name: item.name,
+ description: item.description,
+ category: item.category,
+ tools: (item as any).workflow?.tools || [],
+ copilotPrompt: item.copilotPrompt,
+ }));
}
export async function projectAuthCheck(projectId: string) {
@@ -93,12 +97,24 @@ export async function createProjectFromWorkflowJson(formData: FormData): Promise
const workflowJson = formData.get('workflowJson') as string;
try {
+ // Parse workflow and apply default model to blank agent models
+ const workflow = JSON.parse(workflowJson);
+ const defaultModel = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4o';
+
+ if (workflow.agents && Array.isArray(workflow.agents)) {
+ workflow.agents.forEach((agent: any) => {
+ if (agent.model === '') {
+ agent.model = defaultModel;
+ }
+ });
+ }
+
const project = await createProjectController.execute({
userId: user.id,
data: {
name: name || '',
mode: {
- workflowJson,
+ workflowJson: JSON.stringify(workflow),
},
},
});
diff --git a/apps/rowboat/app/actions/recurring-job-rules.actions.ts b/apps/rowboat/app/actions/recurring-job-rules.actions.ts
index d64264d84..b807aa3b7 100644
--- a/apps/rowboat/app/actions/recurring-job-rules.actions.ts
+++ b/apps/rowboat/app/actions/recurring-job-rules.actions.ts
@@ -6,6 +6,7 @@ import { IListRecurringJobRulesController } from "@/src/interface-adapters/contr
import { IFetchRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/fetch-recurring-job-rule.controller";
import { IToggleRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/toggle-recurring-job-rule.controller";
import { IDeleteRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/delete-recurring-job-rule.controller";
+import { IUpdateRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/update-recurring-job-rule.controller";
import { authCheck } from "./auth.actions";
import { z } from "zod";
import { Message } from "@/app/lib/types/types";
@@ -15,6 +16,7 @@ const listRecurringJobRulesController = container.resolve<IListRecurringJobRulesController>('listRecurringJobRulesController');
 const fetchRecurringJobRuleController = container.resolve<IFetchRecurringJobRuleController>('fetchRecurringJobRuleController');
 const toggleRecurringJobRuleController = container.resolve<IToggleRecurringJobRuleController>('toggleRecurringJobRuleController');
 const deleteRecurringJobRuleController = container.resolve<IDeleteRecurringJobRuleController>('deleteRecurringJobRuleController');
+const updateRecurringJobRuleController = container.resolve<IUpdateRecurringJobRuleController>('updateRecurringJobRuleController');
export async function createRecurringJobRule(request: {
projectId: string,
@@ -89,3 +91,23 @@ export async function deleteRecurringJobRule(request: {
ruleId: request.ruleId,
});
}
+
+export async function updateRecurringJobRule(request: {
+ projectId: string,
+ ruleId: string,
+ input: {
+    messages: z.infer<typeof Message>[],
+ },
+ cron: string,
+}) {
+ const user = await authCheck();
+
+ return await updateRecurringJobRuleController.execute({
+ caller: 'user',
+ userId: user.id,
+ projectId: request.projectId,
+ ruleId: request.ruleId,
+ input: request.input,
+ cron: request.cron,
+ });
+}
diff --git a/apps/rowboat/app/actions/scheduled-job-rules.actions.ts b/apps/rowboat/app/actions/scheduled-job-rules.actions.ts
index 640c4e08f..144475a07 100644
--- a/apps/rowboat/app/actions/scheduled-job-rules.actions.ts
+++ b/apps/rowboat/app/actions/scheduled-job-rules.actions.ts
@@ -5,6 +5,7 @@ import { ICreateScheduledJobRuleController } from "@/src/interface-adapters/cont
import { IListScheduledJobRulesController } from "@/src/interface-adapters/controllers/scheduled-job-rules/list-scheduled-job-rules.controller";
import { IFetchScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/fetch-scheduled-job-rule.controller";
import { IDeleteScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/delete-scheduled-job-rule.controller";
+import { IUpdateScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/update-scheduled-job-rule.controller";
import { authCheck } from "./auth.actions";
import { z } from "zod";
import { Message } from "@/app/lib/types/types";
@@ -13,6 +14,7 @@ const createScheduledJobRuleController = container.resolve<ICreateScheduledJobRuleController>('createScheduledJobRuleController');
 const listScheduledJobRulesController = container.resolve<IListScheduledJobRulesController>('listScheduledJobRulesController');
 const fetchScheduledJobRuleController = container.resolve<IFetchScheduledJobRuleController>('fetchScheduledJobRuleController');
 const deleteScheduledJobRuleController = container.resolve<IDeleteScheduledJobRuleController>('deleteScheduledJobRuleController');
+const updateScheduledJobRuleController = container.resolve<IUpdateScheduledJobRuleController>('updateScheduledJobRuleController');
export async function createScheduledJobRule(request: {
projectId: string,
@@ -72,4 +74,24 @@ export async function deleteScheduledJobRule(request: {
projectId: request.projectId,
ruleId: request.ruleId,
});
-}
\ No newline at end of file
+}
+
+export async function updateScheduledJobRule(request: {
+ projectId: string,
+ ruleId: string,
+ input: {
+    messages: z.infer<typeof Message>[],
+ },
+ scheduledTime: string,
+}) {
+ const user = await authCheck();
+
+ return await updateScheduledJobRuleController.execute({
+ caller: 'user',
+ userId: user.id,
+ projectId: request.projectId,
+ ruleId: request.ruleId,
+ input: request.input,
+ scheduledTime: request.scheduledTime,
+ });
+}
diff --git a/apps/rowboat/app/actions/shared-workflow.actions.ts b/apps/rowboat/app/actions/shared-workflow.actions.ts
new file mode 100644
index 000000000..f651d4a55
--- /dev/null
+++ b/apps/rowboat/app/actions/shared-workflow.actions.ts
@@ -0,0 +1,65 @@
+"use server";
+
+import { z } from "zod";
+import { nanoid } from "nanoid";
+import { Workflow } from "@/app/lib/types/workflow_types";
+import { db } from "@/app/lib/mongodb";
+import { SHARED_WORKFLOWS_COLLECTION } from "@/src/infrastructure/repositories/mongodb.shared-workflows.indexes";
+import { requireAuth } from "@/app/lib/auth";
+
+const DEFAULT_TTL_SECONDS = 60 * 60 * 24; // 24 hours
+
+interface SharedWorkflowDoc {
+ _id: string;
+ workflow: unknown;
+ createdAt: Date;
+ expiresAt: Date;
+}
+
+function validateWorkflowJson(obj: unknown) {
+ const parsed = Workflow.safeParse(obj);
+ if (!parsed.success) {
+ const message = parsed.error.issues.map(i => `${i.path.join('.')}: ${i.message}`).join('; ');
+ throw new Error(`Invalid workflow JSON: ${message}`);
+ }
+ return parsed.data;
+}
+
+export async function createSharedWorkflowFromJson(json: string): Promise<{ id: string; ttlSeconds: number; }>
+{
+ // Require an authenticated user (respects guest mode when auth is disabled)
+ await requireAuth();
+ const obj = JSON.parse(json);
+ const workflow = validateWorkflowJson(obj);
+
+  const coll = db.collection<SharedWorkflowDoc>(SHARED_WORKFLOWS_COLLECTION);
+ const id = nanoid();
+ const now = new Date();
+ const expiresAt = new Date(now.getTime() + DEFAULT_TTL_SECONDS * 1000);
+ await coll.insertOne({ _id: id, workflow, createdAt: now, expiresAt });
+
+ return { id, ttlSeconds: DEFAULT_TTL_SECONDS };
+}
+
+/**
+ * Load a shared workflow by ephemeral share id stored in MongoDB.
+ * Expected when the query param `shared` is present in the UI.
+ */
+export async function loadSharedWorkflow(id: string): Promise<z.infer<typeof Workflow>> {
+ // Ensure caller is authenticated (guest allowed when auth disabled)
+ await requireAuth();
+
+ // Look up by shared id in MongoDB
+  const coll = db.collection<SharedWorkflowDoc>(SHARED_WORKFLOWS_COLLECTION);
+ const doc = await coll.findOne(
+ { _id: id },
+ { projection: { workflow: 1, expiresAt: 1 } }
+ );
+ if (!doc) {
+ throw new Error('Not found or expired');
+ }
+ if (doc.expiresAt && doc.expiresAt.getTime() <= Date.now()) {
+ throw new Error('Not found or expired');
+ }
+ return validateWorkflowJson(doc.workflow);
+}
diff --git a/apps/rowboat/app/api/generated-images/[id]/route.ts b/apps/rowboat/app/api/generated-images/[id]/route.ts
new file mode 100644
index 000000000..e93124980
--- /dev/null
+++ b/apps/rowboat/app/api/generated-images/[id]/route.ts
@@ -0,0 +1,74 @@
+import { NextRequest, NextResponse } from 'next/server';
+import { S3Client, GetObjectCommand, HeadObjectCommand } from '@aws-sdk/client-s3';
+import { Readable } from 'stream';
+
+// Serves generated images from S3 by UUID-only path: /api/generated-images/{id}
+// Reconstructs the S3 key using the same sharding logic as image creation.
+export async function GET(request: NextRequest, props: { params: Promise<{ id: string }> }) {
+ const params = await props.params;
+ const id = params.id;
+ if (!id) {
+ return NextResponse.json({ error: 'Missing id' }, { status: 400 });
+ }
+
+ const bucket = process.env.RAG_UPLOADS_S3_BUCKET || '';
+ if (!bucket) {
+ return NextResponse.json({ error: 'S3 bucket not configured' }, { status: 500 });
+ }
+
+ const region = process.env.RAG_UPLOADS_S3_REGION || 'us-east-1';
+ const s3 = new S3Client({
+ region,
+ credentials: process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY ? {
+ accessKeyId: process.env.AWS_ACCESS_KEY_ID,
+ secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
+ } as any : undefined,
+ });
+
+ // Reconstruct directory sharding from last two characters of UUID
+ const last2 = id.slice(-2).padStart(2, '0');
+ const dirA = last2.charAt(0);
+ const dirB = last2.charAt(1);
+ const baseKey = `generated_images/${dirA}/${dirB}/${id}`;
+
+ // Try known extensions in order used by generator
+ const exts = ['.png', '.jpg', '.webp'];
+ let foundExt: string | null = null;
+ for (const ext of exts) {
+ try {
+ await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: `${baseKey}${ext}` }));
+ foundExt = ext;
+ break;
+ } catch {
+ // continue trying next extension
+ }
+ }
+
+ if (!foundExt) {
+ return NextResponse.json({ error: 'Not found' }, { status: 404 });
+ }
+
+ const key = `${baseKey}${foundExt}`;
+ const filename = `${id}${foundExt}`;
+ try {
+ const resp = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
+ const contentType = resp.ContentType || 'application/octet-stream';
+ const body = resp.Body as any;
+ const webStream = body?.transformToWebStream
+ ? body.transformToWebStream()
+ : (Readable as any)?.toWeb
+ ? (Readable as any).toWeb(body)
+ : body;
+ return new NextResponse(webStream, {
+ status: 200,
+ headers: {
+ 'Content-Type': contentType,
+ 'Cache-Control': 'public, max-age=31536000, immutable',
+      'Content-Disposition': `inline; filename="${filename}"`,
+ },
+ });
+ } catch (e) {
+ console.error('S3 get error', e);
+ return NextResponse.json({ error: 'Not found' }, { status: 404 });
+ }
+}
diff --git a/apps/rowboat/app/api/me/route.ts b/apps/rowboat/app/api/me/route.ts
new file mode 100644
index 000000000..b1591439b
--- /dev/null
+++ b/apps/rowboat/app/api/me/route.ts
@@ -0,0 +1,19 @@
+import { NextRequest, NextResponse } from 'next/server';
+import { authCheck } from '@/app/actions/auth.actions';
+import { USE_AUTH } from '@/app/lib/feature_flags';
+
+export async function GET(_req: NextRequest) {
+ try {
+ let user;
+ if (USE_AUTH) {
+ user = await authCheck();
+ } else {
+ user = { id: 'guest_user' } as any;
+ }
+ return NextResponse.json({ id: user.id });
+ } catch (error) {
+ return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
+ }
+}
+
+
diff --git a/apps/rowboat/app/api/templates/route.ts b/apps/rowboat/app/api/templates/route.ts
deleted file mode 100644
index 4a7cd74eb..000000000
--- a/apps/rowboat/app/api/templates/route.ts
+++ /dev/null
@@ -1,6 +0,0 @@
-import { NextResponse } from 'next/server';
-import { templates } from '@/app/lib/project_templates';
-
-export async function GET() {
- return NextResponse.json(templates);
-}
diff --git a/apps/rowboat/app/api/tmp-images/[id]/route.ts b/apps/rowboat/app/api/tmp-images/[id]/route.ts
new file mode 100644
index 000000000..cf7bfb644
--- /dev/null
+++ b/apps/rowboat/app/api/tmp-images/[id]/route.ts
@@ -0,0 +1,25 @@
+import { NextRequest, NextResponse } from 'next/server';
+import { tempBinaryCache } from '@/src/application/services/temp-binary-cache';
+
+export async function GET(request: NextRequest, props: { params: Promise<{ id: string }> }) {
+ const params = await props.params;
+ const id = params.id;
+ if (!id) {
+ return NextResponse.json({ error: 'Missing id' }, { status: 400 });
+ }
+
+ // Serve from in-memory temp cache
+ const entry = tempBinaryCache.get(id);
+ if (!entry) {
+ return NextResponse.json({ error: 'Not found or expired' }, { status: 404 });
+ }
+
+ return new NextResponse(entry.buf, {
+ status: 200,
+ headers: {
+ 'Content-Type': entry.mimeType || 'application/octet-stream',
+ 'Cache-Control': 'no-store',
+ 'Content-Disposition': `inline; filename="${id}"`,
+ },
+ });
+}
diff --git a/apps/rowboat/app/components/ui/textarea-with-send.tsx b/apps/rowboat/app/components/ui/textarea-with-send.tsx
index 965315549..0c57c3c4c 100644
--- a/apps/rowboat/app/components/ui/textarea-with-send.tsx
+++ b/apps/rowboat/app/components/ui/textarea-with-send.tsx
@@ -2,7 +2,8 @@
import { forwardRef, TextareaHTMLAttributes } from 'react';
import { Textarea } from '@/components/ui/textarea';
-import { Send } from 'lucide-react';
+import { Send, Plus } from 'lucide-react';
+import { Dropdown, DropdownItem, DropdownMenu, DropdownTrigger } from '@heroui/react';
import clsx from 'clsx';
interface TextareaWithSendProps extends Omit<TextareaHTMLAttributes<HTMLTextAreaElement>, 'onChange'> {
@@ -11,6 +12,9 @@ interface TextareaWithSendProps extends Omit void;
isSubmitting?: boolean;
submitDisabled?: boolean;
+ onImportJson?: () => void;
+ importDisabled?: boolean;
+ isImporting?: boolean;
placeholder?: string;
className?: string;
rows?: number;
@@ -25,6 +29,9 @@ export const TextareaWithSend = forwardRef {
+ const hasMore = Boolean(onImportJson);
return (
+
+ {/* Tool Type Section */}
+ {!tool.isLibrary &&
Loading assistants...
@@ -325,7 +543,7 @@ export function BuildAssistantSection() {
) : (
<>
-
+
{currentProjects.map((project) => (
- {/* Pre-built Assistants Section - Only show for New Assistant tab */}
- {selectedTab === 'new' && (
+ {/* Unified Templates Section - Only show for New Assistant tab */}
+ {selectedTab === 'new' && SHOW_PREBUILT_CARDS && (
);
-}
\ No newline at end of file
+}
diff --git a/apps/rowboat/components/common/section-card.tsx b/apps/rowboat/components/common/section-card.tsx
index 19878402d..9a7ac7bdc 100644
--- a/apps/rowboat/components/common/section-card.tsx
+++ b/apps/rowboat/components/common/section-card.tsx
@@ -22,18 +22,17 @@ export function SectionCard({ icon, title, children, labelWidth = 'md:w-32', cla
React.useEffect(() => {
const btn = document.getElementById(`section-card-header-${title && typeof title === 'string' ? title : ''}`);
if (btn) {
- console.log('SectionCard header button:', btn, btn.getBoundingClientRect(), window.getComputedStyle(btn));
const chevron = btn.querySelector('svg');
if (chevron) {
- console.log('Chevron:', chevron, chevron.getBoundingClientRect(), window.getComputedStyle(chevron));
+ // Chevron positioning logic can go here if needed
}
const iconEl = btn.querySelector('.section-card-icon');
if (iconEl) {
- console.log('Icon:', iconEl, iconEl.getBoundingClientRect(), window.getComputedStyle(iconEl));
+ // Icon positioning logic can go here if needed
}
const label = btn.querySelector('span');
if (label) {
- console.log('Label:', label, label.getBoundingClientRect(), window.getComputedStyle(label));
+ // Label positioning logic can go here if needed
}
}
}, [title]);
diff --git a/apps/rowboat/components/common/tool-param-card.tsx b/apps/rowboat/components/common/tool-param-card.tsx
index 927ea9743..7ef5143f2 100644
--- a/apps/rowboat/components/common/tool-param-card.tsx
+++ b/apps/rowboat/components/common/tool-param-card.tsx
@@ -45,7 +45,7 @@ export function ToolParamCard({
aria-expanded={expanded}
>
{expanded ? : }
- {param.name}
+ {param.name}
{!readOnly && (
void;
+}
+
+export function ProgressBar({ steps, className, onStepClick }: ProgressBarProps) {
+ const getShortLabel = (label: string) => {
+ if (!label) return "";
+ const beforeColon = label.split(":")[0]?.trim();
+ if (beforeColon) return beforeColon;
+ const firstWord = label.split(" ")[0]?.trim();
+ return firstWord || label;
+ };
+
+ return (
+
+ );
+}
diff --git a/apps/rowboat/di/container.ts b/apps/rowboat/di/container.ts
index 456250c85..402a952e9 100644
--- a/apps/rowboat/di/container.ts
+++ b/apps/rowboat/di/container.ts
@@ -73,10 +73,12 @@ import { CreateScheduledJobRuleUseCase } from "@/src/application/use-cases/sched
import { FetchScheduledJobRuleUseCase } from "@/src/application/use-cases/scheduled-job-rules/fetch-scheduled-job-rule.use-case";
import { ListScheduledJobRulesUseCase } from "@/src/application/use-cases/scheduled-job-rules/list-scheduled-job-rules.use-case";
import { DeleteScheduledJobRuleUseCase } from "@/src/application/use-cases/scheduled-job-rules/delete-scheduled-job-rule.use-case";
+import { UpdateScheduledJobRuleUseCase } from "@/src/application/use-cases/scheduled-job-rules/update-scheduled-job-rule.use-case";
import { CreateScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/create-scheduled-job-rule.controller";
import { FetchScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/fetch-scheduled-job-rule.controller";
import { ListScheduledJobRulesController } from "@/src/interface-adapters/controllers/scheduled-job-rules/list-scheduled-job-rules.controller";
import { DeleteScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/delete-scheduled-job-rule.controller";
+import { UpdateScheduledJobRuleController } from "@/src/interface-adapters/controllers/scheduled-job-rules/update-scheduled-job-rule.controller";
// Recurring Job Rules
import { MongoDBRecurringJobRulesRepository } from "@/src/infrastructure/repositories/mongodb.recurring-job-rules.repository";
@@ -85,11 +87,13 @@ import { FetchRecurringJobRuleUseCase } from "@/src/application/use-cases/recurr
import { ListRecurringJobRulesUseCase } from "@/src/application/use-cases/recurring-job-rules/list-recurring-job-rules.use-case";
import { ToggleRecurringJobRuleUseCase } from "@/src/application/use-cases/recurring-job-rules/toggle-recurring-job-rule.use-case";
import { DeleteRecurringJobRuleUseCase } from "@/src/application/use-cases/recurring-job-rules/delete-recurring-job-rule.use-case";
+import { UpdateRecurringJobRuleUseCase } from "@/src/application/use-cases/recurring-job-rules/update-recurring-job-rule.use-case";
import { CreateRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/create-recurring-job-rule.controller";
import { FetchRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/fetch-recurring-job-rule.controller";
import { ListRecurringJobRulesController } from "@/src/interface-adapters/controllers/recurring-job-rules/list-recurring-job-rules.controller";
import { ToggleRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/toggle-recurring-job-rule.controller";
import { DeleteRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/delete-recurring-job-rule.controller";
+import { UpdateRecurringJobRuleController } from "@/src/interface-adapters/controllers/recurring-job-rules/update-recurring-job-rule.controller";
// API Keys
import { CreateApiKeyUseCase } from "@/src/application/use-cases/api-keys/create-api-key.use-case";
@@ -238,10 +242,12 @@ container.register({
createScheduledJobRuleUseCase: asClass(CreateScheduledJobRuleUseCase).singleton(),
fetchScheduledJobRuleUseCase: asClass(FetchScheduledJobRuleUseCase).singleton(),
listScheduledJobRulesUseCase: asClass(ListScheduledJobRulesUseCase).singleton(),
+ updateScheduledJobRuleUseCase: asClass(UpdateScheduledJobRuleUseCase).singleton(),
deleteScheduledJobRuleUseCase: asClass(DeleteScheduledJobRuleUseCase).singleton(),
createScheduledJobRuleController: asClass(CreateScheduledJobRuleController).singleton(),
fetchScheduledJobRuleController: asClass(FetchScheduledJobRuleController).singleton(),
listScheduledJobRulesController: asClass(ListScheduledJobRulesController).singleton(),
+ updateScheduledJobRuleController: asClass(UpdateScheduledJobRuleController).singleton(),
deleteScheduledJobRuleController: asClass(DeleteScheduledJobRuleController).singleton(),
// recurring job rules
@@ -251,11 +257,13 @@ container.register({
fetchRecurringJobRuleUseCase: asClass(FetchRecurringJobRuleUseCase).singleton(),
listRecurringJobRulesUseCase: asClass(ListRecurringJobRulesUseCase).singleton(),
toggleRecurringJobRuleUseCase: asClass(ToggleRecurringJobRuleUseCase).singleton(),
+ updateRecurringJobRuleUseCase: asClass(UpdateRecurringJobRuleUseCase).singleton(),
deleteRecurringJobRuleUseCase: asClass(DeleteRecurringJobRuleUseCase).singleton(),
createRecurringJobRuleController: asClass(CreateRecurringJobRuleController).singleton(),
fetchRecurringJobRuleController: asClass(FetchRecurringJobRuleController).singleton(),
listRecurringJobRulesController: asClass(ListRecurringJobRulesController).singleton(),
toggleRecurringJobRuleController: asClass(ToggleRecurringJobRuleController).singleton(),
+ updateRecurringJobRuleController: asClass(UpdateRecurringJobRuleController).singleton(),
deleteRecurringJobRuleController: asClass(DeleteRecurringJobRuleController).singleton(),
// projects
@@ -344,4 +352,4 @@ container.register({
// users
// ---
usersRepository: asClass(MongoDBUsersRepository).singleton(),
-});
\ No newline at end of file
+});
diff --git a/apps/rowboat/package-lock.json b/apps/rowboat/package-lock.json
index 020e1b985..5e92e6009 100644
--- a/apps/rowboat/package-lock.json
+++ b/apps/rowboat/package-lock.json
@@ -12,7 +12,7 @@
"@auth0/nextjs-auth0": "^4.7.0",
"@aws-sdk/client-s3": "^3.743.0",
"@aws-sdk/s3-request-presigner": "^3.743.0",
- "@composio/core": "^0.1.40",
+ "@composio/core": "^0.1.48",
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/sortable": "^10.0.0",
"@dnd-kit/utilities": "^3.2.2",
@@ -1447,16 +1447,16 @@
"license": "Apache-2.0"
},
"node_modules/@composio/core": {
- "version": "0.1.41",
- "resolved": "https://registry.npmjs.org/@composio/core/-/core-0.1.41.tgz",
- "integrity": "sha512-wodFzWduAZ+7i08exCRDj5/0uDrQbNNrTA36EdBZE6T6+gzxH9GMnZmGXcg9WvU8dQSx/hWiJAyjp1IF4gGMtA==",
+ "version": "0.1.48",
+ "resolved": "https://registry.npmjs.org/@composio/core/-/core-0.1.48.tgz",
+ "integrity": "sha512-zTzxrdaOlzlBjjolJ+n/5xGBDrBP3Qr4lwgq37CZjTmJQ5T2Ypk6uvcNV1HkYq7cAXNGU+ojzFryNDkkNCfT3A==",
"license": "ISC",
"dependencies": {
"@composio/client": "0.1.0-alpha.31",
- "@composio/json-schema-to-zod": "0.1.11",
+ "@composio/json-schema-to-zod": "0.1.13",
"@types/json-schema": "^7.0.15",
"chalk": "^4.1.2",
- "openai": "^4.94.0",
+ "openai": "^5.16.0",
"pusher-js": "^8.4.0",
"semver": "^7.7.2",
"uuid": "^11.1.0",
@@ -1478,9 +1478,9 @@
}
},
"node_modules/@composio/json-schema-to-zod": {
- "version": "0.1.11",
- "resolved": "https://registry.npmjs.org/@composio/json-schema-to-zod/-/json-schema-to-zod-0.1.11.tgz",
- "integrity": "sha512-6gcCof9xqLhmmHSCvWLW/E16PKr5C1dWJ9AOk4FJU8JA7gSt8JEdFHBwx80dyprBJdwPF6R54nlkmpXrEwApzg==",
+ "version": "0.1.13",
+ "resolved": "https://registry.npmjs.org/@composio/json-schema-to-zod/-/json-schema-to-zod-0.1.13.tgz",
+ "integrity": "sha512-M/pppyhshlWB/5S6hjocFYYFCm9sFK1y9dTm70MpYl2EGJtpYDgwUeS9cFIzml+Iw/MUebr0wCJ0c+jOYeDU/A==",
"license": "ISC",
"dependencies": {
"@types/json-schema": "^7.0.15",
@@ -4445,26 +4445,6 @@
}
}
},
- "node_modules/@openai/agents-core/node_modules/openai": {
- "version": "5.12.2",
- "resolved": "https://registry.npmjs.org/openai/-/openai-5.12.2.tgz",
- "integrity": "sha512-xqzHHQch5Tws5PcKR2xsZGX9xtch+JQFz5zb14dGqlshmmDAFBFEWmeIpf7wVqWV+w7Emj7jRgkNJakyKE0tYQ==",
- "bin": {
- "openai": "bin/cli"
- },
- "peerDependencies": {
- "ws": "^8.18.0",
- "zod": "^3.23.8"
- },
- "peerDependenciesMeta": {
- "ws": {
- "optional": true
- },
- "zod": {
- "optional": true
- }
- }
- },
"node_modules/@openai/agents-extensions": {
"version": "0.0.15",
"resolved": "https://registry.npmjs.org/@openai/agents-extensions/-/agents-extensions-0.0.15.tgz",
@@ -4491,26 +4471,6 @@
"openai": "^5.10.1"
}
},
- "node_modules/@openai/agents-openai/node_modules/openai": {
- "version": "5.12.2",
- "resolved": "https://registry.npmjs.org/openai/-/openai-5.12.2.tgz",
- "integrity": "sha512-xqzHHQch5Tws5PcKR2xsZGX9xtch+JQFz5zb14dGqlshmmDAFBFEWmeIpf7wVqWV+w7Emj7jRgkNJakyKE0tYQ==",
- "bin": {
- "openai": "bin/cli"
- },
- "peerDependencies": {
- "ws": "^8.18.0",
- "zod": "^3.23.8"
- },
- "peerDependenciesMeta": {
- "ws": {
- "optional": true
- },
- "zod": {
- "optional": true
- }
- }
- },
"node_modules/@openai/agents-realtime": {
"version": "0.0.15",
"resolved": "https://registry.npmjs.org/@openai/agents-realtime/-/agents-realtime-0.0.15.tgz",
@@ -4523,26 +4483,6 @@
"ws": "^8.18.1"
}
},
- "node_modules/@openai/agents/node_modules/openai": {
- "version": "5.12.2",
- "resolved": "https://registry.npmjs.org/openai/-/openai-5.12.2.tgz",
- "integrity": "sha512-xqzHHQch5Tws5PcKR2xsZGX9xtch+JQFz5zb14dGqlshmmDAFBFEWmeIpf7wVqWV+w7Emj7jRgkNJakyKE0tYQ==",
- "bin": {
- "openai": "bin/cli"
- },
- "peerDependencies": {
- "ws": "^8.18.0",
- "zod": "^3.23.8"
- },
- "peerDependenciesMeta": {
- "ws": {
- "optional": true
- },
- "zod": {
- "optional": true
- }
- }
- },
"node_modules/@openai/zod": {
"name": "zod",
"version": "3.25.67",
@@ -7606,15 +7546,6 @@
"undici-types": "~5.26.4"
}
},
- "node_modules/@types/node-fetch": {
- "version": "2.6.11",
- "resolved": "https://registry.npmjs.org/@types/node-fetch/-/node-fetch-2.6.11.tgz",
- "integrity": "sha512-24xFj9R5+rfQJLRyM56qh+wnVSYhyXC2tkoBndtY0U+vubqNsYXGjufB2nn8Q6gt0LrARwL6UBtMCSVCwl4B1g==",
- "dependencies": {
- "@types/node": "*",
- "form-data": "^4.0.0"
- }
- },
"node_modules/@types/parse-json": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/@types/parse-json/-/parse-json-4.0.2.tgz",
@@ -8005,17 +7936,6 @@
"resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz",
"integrity": "sha512-zuVdFrMJiuCDQUMCzQaD6KL28MjnqqN8XnAqiEq9PNm/hCPTSGfrXCOfwj1ow4LFb/tNymJPwsNbVePc1xFqrQ=="
},
- "node_modules/abort-controller": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz",
- "integrity": "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==",
- "dependencies": {
- "event-target-shim": "^5.0.0"
- },
- "engines": {
- "node": ">=6.5"
- }
- },
"node_modules/accepts": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-2.0.0.tgz",
@@ -8079,17 +7999,6 @@
"node": ">= 6.0.0"
}
},
- "node_modules/agentkeepalive": {
- "version": "4.5.0",
- "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-4.5.0.tgz",
- "integrity": "sha512-5GG/5IbQQpC9FpkRGsSvZI5QYeSCzlJHdpBQntCsuTOxhKD8lqKhrleg2Yi7yvMIf82Ycmmqln9U8V9qwEiJew==",
- "dependencies": {
- "humanize-ms": "^1.2.1"
- },
- "engines": {
- "node": ">= 8.0.0"
- }
- },
"node_modules/ai": {
"version": "4.3.13",
"resolved": "https://registry.npmjs.org/ai/-/ai-4.3.13.tgz",
@@ -10605,14 +10514,6 @@
"node": ">= 0.6"
}
},
- "node_modules/event-target-shim": {
- "version": "5.0.1",
- "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz",
- "integrity": "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==",
- "engines": {
- "node": ">=6"
- }
- },
"node_modules/eventemitter3": {
"version": "4.0.7",
"resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz",
@@ -11004,23 +10905,6 @@
"node": ">= 6"
}
},
- "node_modules/form-data-encoder": {
- "version": "1.7.2",
- "resolved": "https://registry.npmjs.org/form-data-encoder/-/form-data-encoder-1.7.2.tgz",
- "integrity": "sha512-qfqtYan3rxrnCk1VYaA4H+Ms9xdpPqvLZa6xmMgFvhO32x7/3J/ExcTd6qpxM0vH2GdMI+poehyBZvqfMTto8A=="
- },
- "node_modules/formdata-node": {
- "version": "4.4.1",
- "resolved": "https://registry.npmjs.org/formdata-node/-/formdata-node-4.4.1.tgz",
- "integrity": "sha512-0iirZp3uVDjVGt9p49aTaqjk84TrglENEDuqfdlZQ1roC9CWlPk6Avf8EEnZNcAqPonwkG35x4n3ww/1THYAeQ==",
- "dependencies": {
- "node-domexception": "1.0.0",
- "web-streams-polyfill": "4.0.0-beta.3"
- },
- "engines": {
- "node": ">= 12.20"
- }
- },
"node_modules/forwarded": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
@@ -11465,14 +11349,6 @@
"node": ">= 6"
}
},
- "node_modules/humanize-ms": {
- "version": "1.2.1",
- "resolved": "https://registry.npmjs.org/humanize-ms/-/humanize-ms-1.2.1.tgz",
- "integrity": "sha512-Fl70vYtsAFb/C06PTS9dZBo7ihau+Tu/DNCk/OyHhea07S+aeMWpFFkUaXRa8fI+ScZbEI8dfSxwY7gxZ9SAVQ==",
- "dependencies": {
- "ms": "^2.0.0"
- }
- },
"node_modules/iconv-lite": {
"version": "0.6.3",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz",
@@ -14587,62 +14463,6 @@
"tslib": "^2.0.3"
}
},
- "node_modules/node-domexception": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
- "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
- "funding": [
- {
- "type": "github",
- "url": "https://github.com/sponsors/jimmywarting"
- },
- {
- "type": "github",
- "url": "https://paypal.me/jimmywarting"
- }
- ],
- "engines": {
- "node": ">=10.5.0"
- }
- },
- "node_modules/node-fetch": {
- "version": "2.7.0",
- "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
- "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
- "dependencies": {
- "whatwg-url": "^5.0.0"
- },
- "engines": {
- "node": "4.x || >=6.0.0"
- },
- "peerDependencies": {
- "encoding": "^0.1.0"
- },
- "peerDependenciesMeta": {
- "encoding": {
- "optional": true
- }
- }
- },
- "node_modules/node-fetch/node_modules/tr46": {
- "version": "0.0.3",
- "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
- "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="
- },
- "node_modules/node-fetch/node_modules/webidl-conversions": {
- "version": "3.0.1",
- "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
- "integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="
- },
- "node_modules/node-fetch/node_modules/whatwg-url": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
- "integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==",
- "dependencies": {
- "tr46": "~0.0.3",
- "webidl-conversions": "^3.0.0"
- }
- },
"node_modules/node-releases": {
"version": "2.0.19",
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-2.0.19.tgz",
@@ -14795,19 +14615,10 @@
}
},
"node_modules/openai": {
- "version": "4.104.0",
- "resolved": "https://registry.npmjs.org/openai/-/openai-4.104.0.tgz",
- "integrity": "sha512-p99EFNsA/yX6UhVO93f5kJsDRLAg+CTA2RBqdHK4RtK8u5IJw32Hyb2dTGKbnnFmnuoBv5r7Z2CURI9sGZpSuA==",
+ "version": "5.16.0",
+ "resolved": "https://registry.npmjs.org/openai/-/openai-5.16.0.tgz",
+ "integrity": "sha512-hoEH8ZNvg1HXjU9mp88L/ZH8O082Z8r6FHCXGiWAzVRrEv443aI57qhch4snu07yQydj+AUAWLenAiBXhu89Tw==",
"license": "Apache-2.0",
- "dependencies": {
- "@types/node": "^18.11.18",
- "@types/node-fetch": "^2.6.4",
- "abort-controller": "^3.0.0",
- "agentkeepalive": "^4.2.1",
- "form-data-encoder": "1.7.2",
- "formdata-node": "^4.3.2",
- "node-fetch": "^2.6.7"
- },
"bin": {
"openai": "bin/cli"
},
@@ -14824,14 +14635,6 @@
}
}
},
- "node_modules/openai/node_modules/@types/node": {
- "version": "18.19.54",
- "resolved": "https://registry.npmjs.org/@types/node/-/node-18.19.54.tgz",
- "integrity": "sha512-+BRgt0G5gYjTvdLac9sIeE0iZcJxi4Jc4PV5EUzqi+88jmQLr+fRZdv2tCTV7IHKSGxM6SaLoOXQWWUiLUItMw==",
- "dependencies": {
- "undici-types": "~5.26.4"
- }
- },
"node_modules/optionator": {
"version": "0.9.4",
"resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
@@ -17274,14 +17077,6 @@
"resolved": "https://registry.npmjs.org/vscode-uri/-/vscode-uri-3.0.8.tgz",
"integrity": "sha512-AyFQ0EVmsOZOlAnxoFOGOq1SQDWAB7C6aqMGS23svWAllfOaxbuFvcT8D1i8z3Gyn8fraVeZNNmN6e9bxxXkKw=="
},
- "node_modules/web-streams-polyfill": {
- "version": "4.0.0-beta.3",
- "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-4.0.0-beta.3.tgz",
- "integrity": "sha512-QW95TCTaHmsYfHDybGMwO5IJIM93I/6vTRk+daHTWFPhwh+C8Cg7j7XyKrwrj8Ib6vYXe0ocYNrmzY4xAAN6ug==",
- "engines": {
- "node": ">= 14"
- }
- },
"node_modules/web-vitals": {
"version": "4.2.4",
"resolved": "https://registry.npmjs.org/web-vitals/-/web-vitals-4.2.4.tgz",
diff --git a/apps/rowboat/package.json b/apps/rowboat/package.json
index ff5d8f61a..5df957333 100644
--- a/apps/rowboat/package.json
+++ b/apps/rowboat/package.json
@@ -20,7 +20,7 @@
"@auth0/nextjs-auth0": "^4.7.0",
"@aws-sdk/client-s3": "^3.743.0",
"@aws-sdk/s3-request-presigner": "^3.743.0",
- "@composio/core": "^0.1.40",
+ "@composio/core": "^0.1.48",
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/sortable": "^10.0.0",
"@dnd-kit/utilities": "^3.2.2",
diff --git a/apps/rowboat/src/application/lib/agents-runtime/agent-tools.ts b/apps/rowboat/src/application/lib/agents-runtime/agent-tools.ts
index f334c775b..545ce6770 100644
--- a/apps/rowboat/src/application/lib/agents-runtime/agent-tools.ts
+++ b/apps/rowboat/src/application/lib/agents-runtime/agent-tools.ts
@@ -6,6 +6,9 @@ import { z } from "zod";
import { composio } from "@/src/application/lib/composio/composio";
import { SignJWT } from "jose";
import crypto from "crypto";
+import { GoogleGenerativeAI } from "@google/generative-ai";
+import { tempBinaryCache } from "@/src/application/services/temp-binary-cache";
+import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
// Internal dependencies
import { embeddingModel } from "@/app/lib/embedding";
@@ -24,13 +27,94 @@ import { IProjectsRepository } from "@/src/application/repositories/projects.rep
// Provider configuration
const PROVIDER_API_KEY = process.env.PROVIDER_API_KEY || process.env.OPENAI_API_KEY || '';
const PROVIDER_BASE_URL = process.env.PROVIDER_BASE_URL || undefined;
-const MODEL = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4o';
+const MODEL = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4.1';
const openai = createOpenAI({
apiKey: PROVIDER_API_KEY,
baseURL: PROVIDER_BASE_URL,
});
+// Image generation (Gemini) defaults
+const DEFAULT_IMAGE_MODEL = "gemini-2.5-flash-image-preview";
+
+// Helper to generate an image using Gemini
+export async function invokeGenerateImageTool(
+ logger: PrefixLogger,
+ usageTracker: UsageTracker,
+ prompt: string,
+ options?: {
+ modelName?: string;
+ }
+): Promise<{
+ texts: string[];
+ images: { mimeType: string; bytes: number; dataBase64: string }[];
+ model: string;
+}> {
+ const log = logger.child("invokeGenerateImageTool");
+ const apiKey = process.env.GOOGLE_API_KEY || process.env.GEMINI_API_KEY || "";
+ if (!apiKey) {
+ throw new Error("Missing API key. Set GOOGLE_API_KEY or GEMINI_API_KEY.");
+ }
+
+ const modelName = options?.modelName || DEFAULT_IMAGE_MODEL;
+
+ const client = new GoogleGenerativeAI(apiKey);
+ const model = client.getGenerativeModel({ model: modelName });
+
+ log.log(`Generating image with model: ${modelName}`);
+ const result = await model.generateContent(prompt);
+ const response = result.response as any;
+
+ // Track usage if available
+ try {
+ const inputTokens = response?.usageMetadata?.promptTokenCount || 0;
+ const outputTokens = response?.usageMetadata?.candidatesTokenCount || 0;
+ usageTracker.track({
+ type: "LLM_USAGE",
+ modelName: modelName,
+ inputTokens,
+ outputTokens,
+ context: "agents_runtime.gemini_image_generation",
+ });
+ } catch (_) {
+ // ignore usage tracking errors
+ }
+
+ const candidates = (response?.candidates ?? []) as any[];
+ if (!candidates.length) {
+ throw new Error("No candidates returned in response.");
+ }
+
+ const parts = (candidates[0]?.content?.parts ?? []) as any[];
+ if (!parts.length) {
+ throw new Error("No parts in candidate content.");
+ }
+
+ const texts: string[] = [];
+ const images: { mimeType: string; bytes: number; dataBase64: string }[] = [];
+
+ for (const part of parts) {
+ if (typeof part.text === "string" && part.text.length) {
+ texts.push(part.text);
+ continue;
+ }
+
+ const dataB64 = part?.inlineData?.data as string | undefined;
+ if (dataB64) {
+ const mime = part?.inlineData?.mimeType || "image/png";
+ const buf = Buffer.from(dataB64, "base64");
+
+ images.push({ mimeType: mime, bytes: buf.length, dataBase64: dataB64 });
+ }
+ }
+
+ if (!images.length) {
+ log.log("No image part found in response.");
+ }
+
+ return { texts, images, model: modelName };
+}
+
// Helper to handle mock tool responses
export async function invokeMockTool(
logger: PrefixLogger,
@@ -528,6 +612,108 @@ export function createComposioTool(
});
}
+// Helper to create a Gemini image generation tool
+export function createGenerateImageTool(
+ logger: PrefixLogger,
+ usageTracker: UsageTracker,
+    config: z.infer<typeof WorkflowTool>,
+ projectId: string,
+): Tool {
+ const { name, description, parameters } = config;
+
+ return tool({
+ name,
+ description,
+ strict: false,
+ parameters: {
+ type: 'object',
+ properties: parameters.properties,
+ required: parameters.required || [],
+ additionalProperties: true,
+ },
+ async execute(input: any) {
+ try {
+ const prompt: string = input?.prompt || '';
+ if (!prompt) {
+ return JSON.stringify({ error: "Missing required field: prompt" });
+ }
+ const modelName: string | undefined = input?.modelName;
+ const result = await invokeGenerateImageTool(
+ logger,
+ usageTracker,
+ prompt,
+ { modelName }
+ );
+            // If an S3 bucket is configured, store images in S3 under generated_images/<a>/<b>/<filename>
+ const s3Bucket = process.env.RAG_UPLOADS_S3_BUCKET || '';
+ if (s3Bucket) {
+ const s3Region = process.env.RAG_UPLOADS_S3_REGION || 'us-east-1';
+ const s3 = new S3Client({
+ region: s3Region,
+ credentials: process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY ? {
+ accessKeyId: process.env.AWS_ACCESS_KEY_ID,
+ secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
+ } as any : undefined,
+ });
+
+ const images = await Promise.all(result.images.map(async (img) => {
+ const buf = Buffer.from(img.dataBase64, 'base64');
+ const ext = img.mimeType === 'image/jpeg' ? '.jpg' : img.mimeType === 'image/webp' ? '.webp' : '.png';
+ const imageId = crypto.randomUUID();
+ const last2 = imageId.slice(-2).padStart(2, '0');
+ const dirA = last2.charAt(0);
+ const dirB = last2.charAt(1);
+ const filename = `${imageId}${ext}`;
+                    const key = `generated_images/${dirA}/${dirB}/${filename}`;
+ await s3.send(new PutObjectCommand({
+ Bucket: s3Bucket,
+ Key: key,
+ Body: buf,
+ ContentType: img.mimeType,
+ }));
+ const url = `/api/generated-images/${imageId}`;
+ return { mimeType: img.mimeType, bytes: buf.length, url };
+ }));
+ const payload = {
+ model: result.model,
+ texts: result.texts,
+ images,
+ storage: 's3',
+ } as any;
+ return JSON.stringify(payload);
+ }
+
+ // Otherwise, use in-memory temp cache URLs
+ const ttlSec = 10 * 60; // 10 minutes
+ const ttlMs = ttlSec * 1000;
+ const images = result.images.map(img => {
+ try {
+ const buf = Buffer.from(img.dataBase64, 'base64');
+ const id = tempBinaryCache.put(buf, img.mimeType, ttlMs);
+ const url = `/api/tmp-images/${id}`;
+ return { mimeType: img.mimeType, bytes: buf.length, url };
+ } catch {
+ return { mimeType: img.mimeType, bytes: img.bytes, url: null };
+ }
+ });
+ const payload = {
+ model: result.model,
+ texts: result.texts,
+ images,
+ storage: 'temp',
+ expiresInSec: ttlSec,
+ } as any;
+ return JSON.stringify(payload);
+ } catch (error) {
+ logger.log(`Error executing generate image tool ${name}:`, error);
+ return JSON.stringify({
+ error: "Tool execution failed!",
+ });
+ }
+ }
+ });
+}
+
export function createTools(
logger: PrefixLogger,
usageTracker: UsageTracker,
@@ -541,7 +727,7 @@ export function createTools(
toolLogger.log(`=== CREATING ${Object.keys(toolConfig).length} TOOLS ===`);
for (const [toolName, config] of Object.entries(toolConfig)) {
- toolLogger.log(`creating tool: ${toolName} (type: ${config.mockTool ? 'mock' : config.isMcp ? 'mcp' : config.isComposio ? 'composio' : 'webhook'})`);
+ toolLogger.log(`creating tool: ${toolName} (type: ${config.mockTool ? 'mock' : config.isMcp ? 'mcp' : config.isComposio ? 'composio' : config.isGeminiImage ? 'gemini-image' : 'webhook'})`);
if (config.mockTool) {
tools[toolName] = createMockTool(logger, usageTracker, config);
@@ -552,6 +738,9 @@ export function createTools(
} else if (config.isComposio) {
tools[toolName] = createComposioTool(logger, usageTracker, config, projectId);
toolLogger.log(`β created composio tool: ${toolName}`);
+ } else if (config.isGeminiImage) {
+ tools[toolName] = createGenerateImageTool(logger, usageTracker, config, projectId);
+ toolLogger.log(`β created gemini image tool: ${toolName}`);
} else if (config.isWebhook) {
tools[toolName] = createWebhookTool(logger, usageTracker, config, projectId);
toolLogger.log(`β created webhook tool: ${toolName} (fallback)`);
@@ -563,4 +752,4 @@ export function createTools(
toolLogger.log(`=== TOOL CREATION COMPLETE ===`);
return tools;
-}
\ No newline at end of file
+}
diff --git a/apps/rowboat/src/application/lib/agents-runtime/agent_instructions.ts b/apps/rowboat/src/application/lib/agents-runtime/agent_instructions.ts
index 7fc494432..56a752a67 100644
--- a/apps/rowboat/src/application/lib/agents-runtime/agent_instructions.ts
+++ b/apps/rowboat/src/application/lib/agents-runtime/agent_instructions.ts
@@ -140,7 +140,7 @@ export const PIPELINE_TYPE_INSTRUCTIONS = (): string => `
- Provide clear, actionable output that the next pipeline step can easily understand and work with.
- Do NOT attempt to handle tasks outside your specific pipeline role.
- Do NOT mention other agents or the pipeline structure to users.
-- Your response should be self-contained and ready to be consumed by the next pipeline step.
+- Your response should be self-contained and ready to be consumed by the next pipeline step. Add a prefix 'Internal message' to your response.
- Reading the message history will show you the pipeline execution flow up to your step.
- These are high level instructions only. The user will provide more specific instructions which will be below.
`;
diff --git a/apps/rowboat/src/application/lib/agents-runtime/agents.ts b/apps/rowboat/src/application/lib/agents-runtime/agents.ts
index 4d2a7e847..5f3f7aa81 100644
--- a/apps/rowboat/src/application/lib/agents-runtime/agents.ts
+++ b/apps/rowboat/src/application/lib/agents-runtime/agents.ts
@@ -9,6 +9,7 @@ import crypto from "crypto";
// Internal dependencies
import { createTools, createRagTool } from "./agent-tools";
import { ConnectedEntity, sanitizeTextWithMentions, Workflow, WorkflowAgent, WorkflowPipeline, WorkflowPrompt, WorkflowTool } from "@/app/lib/types/workflow_types";
+import { getDefaultTools } from "@/app/lib/default_tools";
import { CHILD_TRANSFER_RELATED_INSTRUCTIONS, CONVERSATION_TYPE_INSTRUCTIONS, PIPELINE_TYPE_INSTRUCTIONS, RAG_INSTRUCTIONS, TASK_TYPE_INSTRUCTIONS, VARIABLES_CONTEXT_INSTRUCTIONS } from "./agent_instructions";
import { PrefixLogger } from "@/app/lib/utils";
import { Message, AssistantMessage, AssistantMessageWithToolCalls, ToolMessage } from "@/app/lib/types/types";
@@ -21,7 +22,7 @@ import { PipelineStateManager } from "./pipeline-state-manager";
// Provider configuration
const PROVIDER_API_KEY = process.env.PROVIDER_API_KEY || process.env.OPENAI_API_KEY || '';
const PROVIDER_BASE_URL = process.env.PROVIDER_BASE_URL || undefined;
-const MODEL = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4o';
+const MODEL = process.env.PROVIDER_DEFAULT_MODEL || 'gpt-4.1';
// Feature flags
const USE_NATIVE_HANDOFFS = process.env.USE_NATIVE_HANDOFFS === 'true';
@@ -361,7 +362,15 @@ function mapConfig(workflow: z.infer): {
...acc,
[agent.name]: agent
}), {});
-    const toolConfig: Record<string, z.infer<typeof WorkflowTool>> = workflow.tools.reduce((acc, tool) => ({
+ // Merge workflow tools with default library tools (unique by name)
+ const mergedTools = (() => {
+ const defaults = getDefaultTools();
+        const map = new Map<string, z.infer<typeof WorkflowTool>>();
+ for (const t of workflow.tools) map.set(t.name, t);
+ for (const t of defaults) if (!map.has(t.name)) map.set(t.name, t as any);
+ return Array.from(map.values());
+ })();
+    const toolConfig: Record<string, z.infer<typeof WorkflowTool>> = mergedTools.reduce((acc, tool) => ({
...acc,
[tool.name]: tool
}), {});
@@ -903,7 +912,7 @@ async function* handleNativeHandoffEvent(
}
// Regular handoff handling (non-pipeline)
- const maxCalls = targetAgentConfig?.maxCallsPerParentAgent || 3;
+ const maxCalls = targetAgentConfig?.maxCallsPerParentAgent || 1;
const currentCalls = transferCounter.get(agentName, targetAgentName);
if (targetAgentConfig?.outputVisibility === 'internal' && currentCalls >= maxCalls) {
@@ -955,7 +964,7 @@ async function* handleHandoffEvent(
// Only apply max calls limit to internal agents (task agents)
const targetAgentConfig = agentConfig[event.item.targetAgent.name];
if (targetAgentConfig?.outputVisibility === 'internal') {
- const maxCalls = targetAgentConfig?.maxCallsPerParentAgent || 3;
+ const maxCalls = targetAgentConfig?.maxCallsPerParentAgent || 1;
const currentCalls = transferCounter.get(agentName, event.item.targetAgent.name);
if (currentCalls >= maxCalls) {
eventLogger.log(`β οΈ SKIPPING: handoff to ${event.item.targetAgent.name} - max calls ${maxCalls} exceeded from ${agentName}`);
@@ -1566,4 +1575,4 @@ export async function getResponse(
}
return { messages: out, usage };
*/
-}
\ No newline at end of file
+}
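The tool-merging behavior added to `mapConfig` above can be sketched in isolation. Names and the simplified `NamedTool` type below are assumptions for illustration: workflow-defined tools win on name collisions, and library defaults only fill in tools the workflow does not already define.

```typescript
// Simplified stand-in for the workflow tool schema used in the diff.
interface NamedTool { name: string; source?: string }

// Merge workflow tools with default library tools, unique by name;
// workflow entries take precedence over defaults.
function mergeTools(workflowTools: NamedTool[], defaults: NamedTool[]): NamedTool[] {
  const map = new Map<string, NamedTool>();
  for (const t of workflowTools) map.set(t.name, t);
  for (const t of defaults) if (!map.has(t.name)) map.set(t.name, t);
  return Array.from(map.values());
}
```

Insertion order of a `Map` is preserved, so workflow tools come first in the merged list, followed by any defaults that were not overridden.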
diff --git a/apps/rowboat/src/application/lib/composio/composio.ts b/apps/rowboat/src/application/lib/composio/composio.ts
index 057a14cc6..8ac9324a1 100644
--- a/apps/rowboat/src/application/lib/composio/composio.ts
+++ b/apps/rowboat/src/application/lib/composio/composio.ts
@@ -9,6 +9,12 @@ export const composio = new Composio({
apiKey: COMPOSIO_API_KEY,
});
+// Warn if API key is missing, helps diagnose HTML error pages from auth proxies
+if (!process.env.COMPOSIO_API_KEY || COMPOSIO_API_KEY === 'test') {
+ const warnLogger = new PrefixLogger('composioApiCall');
+ warnLogger.log('WARNING: COMPOSIO_API_KEY is not set or using default placeholder. Requests may fail with non-JSON HTML error pages.');
+}
+
export async function composioApiCall(
schema: T,
url: string,
@@ -32,11 +38,36 @@ export async function composioApiCall(
});
const duration = Date.now() - then;
logger.log(`Took: ${duration}ms`);
- const data = await response.json();
- if ('error' in data) {
- const response = ZErrorResponse.parse(data);
- throw new Error(`(code: ${response.error.error_code}): ${response.error.message}: ${response.error.suggested_fix}: ${response.error.errors?.join(', ')}`);
+
+ const contentType = response.headers.get('content-type') || '';
+ const rawText = await response.text();
+
+ // Helpful logging when non-OK or non-JSON
+ if (!response.ok || !contentType.includes('application/json')) {
+ logger.log(`Non-JSON or non-OK response`, {
+ status: response.status,
+ statusText: response.statusText,
+ contentType,
+ preview: rawText.slice(0, 200),
+ });
+ }
+
+ if (!response.ok) {
+ throw new Error(`Composio API error: ${response.status} ${response.statusText} (url: ${url}) body: ${rawText.slice(0, 500)}`);
+ }
+
+    let data: unknown;
+    try {
+        if (!contentType.includes('application/json')) {
+            throw new Error('Expected JSON but received non-JSON response');
+        }
+        data = JSON.parse(rawText);
+    } catch (e: any) {
+        throw new Error(`Failed to parse Composio JSON response (url: ${url}): ${e?.message || e}. Body preview: ${rawText.slice(0, 500)}`);
+    }
+
+ if (typeof data === 'object' && data !== null && 'error' in (data as any)) {
+ const parsedError = ZErrorResponse.parse(data);
+ throw new Error(`(code: ${parsedError.error.error_code}): ${parsedError.error.message}: ${parsedError.error.suggested_fix}: ${parsedError.error.errors?.join(', ')}`);
}
+
return schema.parse(data);
} catch (error) {
logger.log(`Error:`, error);
@@ -74,13 +105,111 @@ export async function listTools(toolkitSlug: string, searchQuery: string | null
url.searchParams.set("cursor", cursor);
}
- // fetch
- return composioApiCall(ZListResponse(ZTool), url.toString());
+ // First get the tools list response
+ const toolsResponse = await fetch(url.toString(), {
+ headers: {
+ "x-api-key": COMPOSIO_API_KEY,
+ },
+ });
+
+ if (!toolsResponse.ok) {
+ throw new Error(`Failed to fetch tools list: ${toolsResponse.status} ${toolsResponse.statusText}`);
+ }
+
+ const toolsData = await toolsResponse.json();
+
+ // Check for error response
+ if ('error' in toolsData) {
+ const response = ZErrorResponse.parse(toolsData);
+ throw new Error(`(code: ${response.error.error_code}): ${response.error.message}: ${response.error.suggested_fix}: ${response.error.errors?.join(', ')}`);
+ }
+
+ // Get toolkit data to compute no_auth for all tools
+ const toolkitUrl = new URL(`${BASE_URL}/toolkits/${toolkitSlug}`);
+ const toolkitResponse = await fetch(toolkitUrl.toString(), {
+ headers: {
+ "x-api-key": COMPOSIO_API_KEY,
+ },
+ });
+
+ if (!toolkitResponse.ok) {
+ throw new Error(`Failed to fetch toolkit: ${toolkitResponse.status} ${toolkitResponse.statusText}`);
+ }
+
+ const toolkitData = await toolkitResponse.json();
+
+ // Compute no_auth from toolkit data
+ const no_auth = toolkitData.composio_managed_auth_schemes?.includes('NO_AUTH') ||
+ toolkitData.auth_config_details?.some((config: any) => config.mode === 'NO_AUTH') ||
+ false;
+
+ // Enrich all tools in the list with computed no_auth
+ const enrichedToolsData = {
+ ...toolsData,
+ items: toolsData.items.map((tool: any) => ({
+ ...tool,
+ no_auth
+ }))
+ };
+
+ // Now parse with our schema
+ return ZListResponse(ZTool).parse(enrichedToolsData);
}
export async function getTool(toolSlug: string): Promise<z.infer<typeof ZTool>> {
const url = new URL(`${BASE_URL}/tools/${toolSlug}`);
- return composioApiCall(ZTool, url.toString());
+
+ // First get the tool response
+ const toolResponse = await fetch(url.toString(), {
+ headers: {
+ "x-api-key": COMPOSIO_API_KEY,
+ },
+ });
+
+ if (!toolResponse.ok) {
+ throw new Error(`Failed to fetch tool: ${toolResponse.status} ${toolResponse.statusText}`);
+ }
+
+ const toolData = await toolResponse.json();
+
+ // Check for error response
+ if ('error' in toolData) {
+ const response = ZErrorResponse.parse(toolData);
+ throw new Error(`(code: ${response.error.error_code}): ${response.error.message}: ${response.error.suggested_fix}: ${response.error.errors?.join(', ')}`);
+ }
+
+ // Get toolkit data to compute no_auth
+ const toolkitSlug = toolData.toolkit?.slug;
+ if (!toolkitSlug) {
+ throw new Error(`Tool response missing toolkit slug: ${JSON.stringify(toolData)}`);
+ }
+
+ const toolkitUrl = new URL(`${BASE_URL}/toolkits/${toolkitSlug}`);
+ const toolkitResponse = await fetch(toolkitUrl.toString(), {
+ headers: {
+ "x-api-key": COMPOSIO_API_KEY,
+ },
+ });
+
+ if (!toolkitResponse.ok) {
+ throw new Error(`Failed to fetch toolkit: ${toolkitResponse.status} ${toolkitResponse.statusText}`);
+ }
+
+ const toolkitData = await toolkitResponse.json();
+
+ // Compute no_auth from toolkit data
+ const no_auth = toolkitData.composio_managed_auth_schemes?.includes('NO_AUTH') ||
+ toolkitData.auth_config_details?.some((config: any) => config.mode === 'NO_AUTH') ||
+ false;
+
+ // Inject computed no_auth into tool data
+ const enrichedToolData = {
+ ...toolData,
+ no_auth
+ };
+
+ // Now parse with our schema
+ return ZTool.parse(enrichedToolData);
}
export async function listAuthConfigs(toolkitSlug: string, cursor: string | null = null, managedOnly: boolean = false): Promise>>> {
@@ -217,4 +346,4 @@ export async function listTriggersTypes(toolkitSlug: string, cursor?: string): P
export async function getTriggersType(triggerTypeSlug: string): Promise<z.infer<typeof ZTriggerType>> {
const url = new URL(`${BASE_URL}/triggers_types/${triggerTypeSlug}`);
return composioApiCall(ZTriggerType, url.toString());
-}
\ No newline at end of file
+}
diff --git a/apps/rowboat/src/application/lib/composio/types.ts b/apps/rowboat/src/application/lib/composio/types.ts
index e7c122792..224696f38 100644
--- a/apps/rowboat/src/application/lib/composio/types.ts
+++ b/apps/rowboat/src/application/lib/composio/types.ts
@@ -1,5 +1,7 @@
import { z } from "zod";
+// NOTE: Some API responses may use the alias 'SERVICE_ACCOUNT'.
+// Include it here for compatibility alongside the canonical 'GOOGLE_SERVICE_ACCOUNT'.
export const ZAuthScheme = z.enum([
'API_KEY',
'BASIC',
@@ -8,6 +10,7 @@ export const ZAuthScheme = z.enum([
'BILLCOM_AUTH',
'CALCOM_AUTH',
'COMPOSIO_LINK',
+ 'SERVICE_ACCOUNT',
'GOOGLE_SERVICE_ACCOUNT',
'NO_AUTH',
'OAUTH1',
@@ -240,4 +243,4 @@ export const ZListResponse = (schema: T) => z.object({
total_pages: z.number(),
current_page: z.number(),
total_items: z.number(),
-});
\ No newline at end of file
+});
diff --git a/apps/rowboat/src/application/lib/copilot/copilot.ts b/apps/rowboat/src/application/lib/copilot/copilot.ts
index d8b82614c..e667b6722 100644
--- a/apps/rowboat/src/application/lib/copilot/copilot.ts
+++ b/apps/rowboat/src/application/lib/copilot/copilot.ts
@@ -2,15 +2,15 @@ import z from "zod";
import { createOpenAI } from "@ai-sdk/openai";
import { generateObject, streamText, tool } from "ai";
import { Workflow, WorkflowTool } from "@/app/lib/types/workflow_types";
-import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot } from "../../../entities/models/copilot";
+import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot, TriggerSchemaForCopilot } from "../../../entities/models/copilot";
import { PrefixLogger } from "@/app/lib/utils";
import zodToJsonSchema from "zod-to-json-schema";
import { COPILOT_INSTRUCTIONS_EDIT_AGENT } from "./copilot_edit_agent";
-import { COPILOT_INSTRUCTIONS_MULTI_AGENT } from "./copilot_multi_agent";
+import { COPILOT_INSTRUCTIONS_MULTI_AGENT_WITH_DOCS as COPILOT_INSTRUCTIONS_MULTI_AGENT } from "./copilot_multi_agent";
import { COPILOT_MULTI_AGENT_EXAMPLE_1 } from "./example_multi_agent_1";
import { CURRENT_WORKFLOW_PROMPT } from "./current_workflow";
import { USE_COMPOSIO_TOOLS } from "@/app/lib/feature_flags";
-import { composio, getTool } from "../composio/composio";
+import { composio, getTool, listTriggersTypes } from "../composio/composio";
import { UsageTracker } from "@/app/lib/billing";
import { CopilotStreamEvent } from "@/src/entities/models/copilot";
@@ -41,10 +41,12 @@ const composioToolSearchToolSuggestion = z.object({
tool_slug: z.string(),
description: z.string(),
});
+
const composioToolSearchResponseSchema = z.object({
- results: z.array(composioToolSearchToolSuggestion),
- related_tools: z.array(composioToolSearchToolSuggestion),
-});
+ main_tools: z.array(composioToolSearchToolSuggestion).optional(),
+ related_tools: z.array(composioToolSearchToolSuggestion).optional(),
+ results: z.array(composioToolSearchToolSuggestion).optional(), // Keep for backward compatibility
+}).passthrough();
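The backward-compatible parsing this schema relaxation enables can be sketched in isolation (plain interfaces stand in for the zod schemas; the tool slugs and payloads are hypothetical, and the `||` fallback mirrors the one used later in this diff):

```typescript
// Sketch: accept both the new ("main_tools") and the legacy ("results")
// search-response shapes, as the relaxed schema above allows.
interface ToolSuggestion { tool_slug: string; description: string; }
interface SearchResponse {
  main_tools?: ToolSuggestion[];
  related_tools?: ToolSuggestion[];
  results?: ToolSuggestion[]; // legacy field, kept for backward compatibility
}

function pickTools(data: SearchResponse): ToolSuggestion[] {
  // Prefer the new field, fall back to the legacy one.
  return data.main_tools || data.results || [];
}

const current: SearchResponse = {
  main_tools: [{ tool_slug: "SLACK_POST_MESSAGE", description: "Post a message" }],
};
const legacy: SearchResponse = {
  results: [{ tool_slug: "GMAIL_SEND_EMAIL", description: "Send an email" }],
};
console.assert(pickTools(current)[0].tool_slug === "SLACK_POST_MESSAGE");
console.assert(pickTools(legacy)[0].tool_slug === "GMAIL_SEND_EMAIL");
```

Marking all three arrays optional and adding `.passthrough()` means a response in either shape (or with extra keys) still parses; the caller then normalizes with the fallback above.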
function getContextPrompt(context: z.infer<typeof CopilotChatContext> | null): string {
let prompt = '';
@@ -96,6 +98,55 @@ ${JSON.stringify(simplifiedDataSources)}
return prompt;
}
+function getCurrentTimePrompt(): string {
+ return `**CURRENT TIME**: ${new Date().toISOString()}`;
+}
+
+function getTriggersPrompt(triggers: z.infer<typeof TriggerSchemaForCopilot>[]): string {
+ if (!triggers || triggers.length === 0) {
+ return '';
+ }
+
+ const simplifiedTriggers = triggers.map(trigger => {
+ if (trigger.type === 'one_time') {
+ return {
+ id: trigger.id,
+ type: 'one_time',
+ name: trigger.name,
+ scheduledTime: trigger.nextRunAt,
+ input: trigger.input,
+ status: trigger.status,
+ };
+ } else if (trigger.type === 'recurring') {
+ return {
+ id: trigger.id,
+ type: 'recurring',
+ name: trigger.name,
+ cron: trigger.cron,
+ nextRunAt: trigger.nextRunAt,
+ disabled: trigger.disabled,
+ input: trigger.input,
+ };
+ } else {
+ return {
+ id: trigger.id,
+ type: 'external',
+ name: trigger.triggerTypeName,
+ toolkit: trigger.toolkitSlug,
+ triggerType: trigger.triggerTypeSlug,
+ config: trigger.triggerConfig,
+ };
+ }
+ });
+
+ return `**NOTE**:
+The following triggers are currently configured:
+\`\`\`json
+${JSON.stringify(simplifiedTriggers)}
+\`\`\`
+`;
+}
+
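As a quick illustration of the note block `getTriggersPrompt` emits, here is the recurring-trigger branch exercised on a hypothetical trigger (field names follow the diff; the trigger value itself is invented):

```typescript
// Sketch: mirror the recurring-trigger branch of getTriggersPrompt.
type RecurringTrigger = {
  id: string; type: "recurring"; name: string;
  cron: string; nextRunAt?: string; disabled?: boolean; input?: unknown;
};

function simplifyRecurring(t: RecurringTrigger) {
  const { id, type, name, cron, nextRunAt, disabled, input } = t;
  return { id, type, name, cron, nextRunAt, disabled, input };
}

const trigger: RecurringTrigger = {
  id: "trg_1", type: "recurring", name: "Daily digest",
  cron: "0 9 * * *", nextRunAt: "2025-01-02T09:00:00.000Z", disabled: false,
};

// The same "**NOTE**" wrapper used above, around the simplified JSON.
const prompt = `**NOTE**:\nThe following triggers are currently configured:\n\`\`\`json\n${JSON.stringify([simplifyRecurring(trigger)])}\n\`\`\`\n`;
console.assert(prompt.includes('"cron":"0 9 * * *"'));
```

Simplifying before `JSON.stringify` keeps the prompt compact: only the fields the copilot needs to reason about reach the model.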
async function searchRelevantTools(usageTracker: UsageTracker, query: string): Promise<string> {
const logger = new PrefixLogger("copilot-search-tools");
    console.log("TOOL CALL: searchRelevantTools", { query });
@@ -127,17 +178,21 @@ async function searchRelevantTools(usageTracker: UsageTracker, query: string): P
});
// parse results
+ logger.log(`raw search result data: ${JSON.stringify(searchResult.data)}`);
const result = composioToolSearchResponseSchema.safeParse(searchResult.data);
if (!result.success) {
- logger.log(`tool search response is invalid: ${result.error}`);
+ logger.log(`tool search response is invalid: ${JSON.stringify(result.error)}`);
+        logger.log(`expected keys "main_tools" or "results" (arrays), got: ${JSON.stringify(Object.keys(searchResult.data || {}))}`);
return 'No tools found!';
}
- if (!result.data.results.length) {
+ const tools = result.data.main_tools || result.data.results || [];
+
+ if (!tools.length) {
logger.log(`tool search yielded no results`);
return 'No tools found!';
}
- const toolSlugs = result.data.results.map((item) => item.tool_slug);
+ const toolSlugs = tools.map((item) => item.tool_slug);
logger.log(`found tool slugs: ${toolSlugs.join(', ')}`);
    console.log("TOOL CALL SUCCESS: COMPOSIO_SEARCH_TOOLS", {
toolSlugs,
@@ -179,15 +234,107 @@ async function searchRelevantTools(usageTracker: UsageTracker, query: string): P
return response;
}
+async function searchRelevantTriggers(
+ usageTracker: UsageTracker,
+ toolkitSlug: string,
+ query?: string,
+): Promise<string> {
+ const logger = new PrefixLogger("copilot-search-triggers");
+ const trimmedSlug = toolkitSlug.trim();
+ const trimmedQuery = query?.trim() || '';
+    console.log("TOOL CALL: searchRelevantTriggers", { toolkitSlug: trimmedSlug, query: trimmedQuery });
+
+ if (!trimmedSlug) {
+ logger.log('no toolkit slug provided');
+ return 'Please provide a toolkit slug (for example "gmail" or "slack") when searching for triggers.';
+ }
+
+ if (!USE_COMPOSIO_TOOLS) {
+ logger.log('dynamic trigger search is disabled');
+        console.log("TOOL CALL SKIPPED: searchRelevantTriggers - Composio tools disabled");
+ return 'Trigger search is currently unavailable.';
+ }
+
+ const MAX_PAGES = 5;
+    type TriggerListResponse = Awaited<ReturnType<typeof listTriggersTypes>>;
+ type TriggerType = TriggerListResponse['items'][number];
+
+ const triggers: TriggerType[] = [];
+ let cursor: string | undefined;
+
+ try {
+ for (let page = 0; page < MAX_PAGES; page++) {
+ logger.log(`fetching trigger page ${page + 1} for toolkit ${trimmedSlug}`);
+            console.log("TOOL CALL: COMPOSIO_LIST_TRIGGERS", { toolkitSlug: trimmedSlug, cursor });
+ const response = await listTriggersTypes(trimmedSlug, cursor);
+ triggers.push(...response.items);
+            console.log("TOOL CALL SUCCESS: COMPOSIO_LIST_TRIGGERS", {
+ toolkitSlug: trimmedSlug,
+ fetchedCount: response.items.length,
+ totalCollected: triggers.length,
+ hasNext: Boolean(response.next_cursor),
+ });
+ if (!response.next_cursor) {
+ break;
+ }
+ cursor = response.next_cursor || undefined;
+ }
+ } catch (error: any) {
+ logger.log(`trigger search failed: ${error?.message || error}`);
+        console.log("TOOL CALL FAILED: COMPOSIO_LIST_TRIGGERS", {
+ toolkitSlug: trimmedSlug,
+ error: error?.message || error,
+ });
+ return `Trigger search failed for toolkit "${trimmedSlug}".`;
+ }
+
+ usageTracker.track({
+ type: "COMPOSIO_TOOL_USAGE",
+ toolSlug: `COMPOSIO_LIST_TRIGGER_TYPES:${trimmedSlug}`,
+ context: "copilot.search_relevant_triggers",
+ });
+
+ if (!triggers.length) {
+ logger.log('no triggers found for toolkit');
+ return `No triggers are currently available for toolkit "${trimmedSlug}".`;
+ }
+
+ const MAX_RESULTS = 8;
+ const limitedTriggers = triggers.slice(0, MAX_RESULTS);
+ const truncated = triggers.length > limitedTriggers.length;
+
+ const formattedTriggers = limitedTriggers.map(trigger => {
+ const requiredFields = trigger.config.required && trigger.config.required.length
+ ? trigger.config.required.join(', ')
+ : 'None';
+ const configJson = JSON.stringify(trigger.config, null, 2);
+ return `**${trigger.name}** (slug: ${trigger.slug})\nToolkit: ${trigger.toolkit.name} (${trigger.toolkit.slug})\nDescription: ${trigger.description}\nRequired config fields: ${requiredFields}\n\`\`\`json\n${configJson}\n\`\`\``;
+ }).join('\n\n');
+
+ const header = trimmedQuery
+ ? `Available triggers for toolkit "${trimmedSlug}" (user query: "${trimmedQuery}"):`
+ : `Available triggers for toolkit "${trimmedSlug}":`;
+
+ const note = truncated
+ ? `\n\nOnly showing the first ${MAX_RESULTS} results out of ${triggers.length}. The toolkit has more triggers available.`
+ : '';
+
+ const response = `${header}\n\n${formattedTriggers}${note}`;
+ logger.log('returning trigger search response');
+ return response;
+}
+
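The bounded cursor-pagination loop above can be captured as a small reusable pattern (hypothetical `fetchPage` callback; the in-memory pages are invented for illustration):

```typescript
// Sketch: fetch pages until the API stops returning a cursor, but never
// more than maxPages, so a misbehaving API cannot loop forever.
type Page<T> = { items: T[]; next_cursor: string | null };

async function collectAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
  maxPages = 5,
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  for (let page = 0; page < maxPages; page++) {
    const res = await fetchPage(cursor);
    all.push(...res.items);
    if (!res.next_cursor) break; // no more pages
    cursor = res.next_cursor;
  }
  return all;
}

// Hypothetical in-memory API with three pages of two items each.
const pages: Page<number>[] = [
  { items: [1, 2], next_cursor: "c1" },
  { items: [3, 4], next_cursor: "c2" },
  { items: [5, 6], next_cursor: null },
];
const fake = async (cursor?: string) =>
  pages[cursor === "c1" ? 1 : cursor === "c2" ? 2 : 0];

collectAll(fake).then(items => console.assert(items.length === 6));
```

Capping at `MAX_PAGES` (here `maxPages`) trades completeness for safety, which is why the diff also truncates the formatted output to `MAX_RESULTS` and tells the model when more triggers exist.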
function updateLastUserMessage(
    messages: z.infer<typeof CopilotMessage>[],
currentWorkflowPrompt: string,
contextPrompt: string,
dataSourcesPrompt: string = '',
+ timePrompt: string = '',
+ triggersPrompt: string = '',
): void {
const lastMessage = messages[messages.length - 1];
if (lastMessage.role === 'user') {
- lastMessage.content = `${currentWorkflowPrompt}\n\n${contextPrompt}\n\n${dataSourcesPrompt}\n\nUser: ${JSON.stringify(lastMessage.content)}`;
+ lastMessage.content = `${currentWorkflowPrompt}\n\n${contextPrompt}\n\n${dataSourcesPrompt}\n\n${timePrompt}\n\n${triggersPrompt}\n\nUser: ${JSON.stringify(lastMessage.content)}`;
}
}
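The prefixing behaviour of `updateLastUserMessage` can be sketched with a hypothetical helper (segment placeholders are invented; the real segments come from the prompt builders in this file):

```typescript
// Sketch: the last user message is prefixed with workflow, context,
// data-source, time, and trigger prompts, in that order.
type Msg = { role: "user" | "assistant"; content: string };

function prefixLastUserMessage(messages: Msg[], segments: string[]): void {
  const last = messages[messages.length - 1];
  if (last.role === "user") {
    last.content = `${segments.join("\n\n")}\n\nUser: ${JSON.stringify(last.content)}`;
  }
}

const msgs: Msg[] = [{ role: "user", content: "add a gmail trigger" }];
prefixLastUserMessage(msgs, ["<workflow>", "<context>", "", "<time>", "<triggers>"]);
console.assert(msgs[0].content.startsWith("<workflow>"));
console.assert(msgs[0].content.endsWith('User: "add a gmail trigger"'));
```

Note that an empty segment (such as the `''` passed for `dataSourcesPrompt` in `getEditAgentInstructionsResponse`) still contributes its blank-line separators, matching the template-literal behaviour in the diff.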
@@ -197,6 +344,7 @@ export async function getEditAgentInstructionsResponse(
    context: z.infer<typeof CopilotChatContext> | null,
    messages: z.infer<typeof CopilotMessage>[],
    workflow: z.infer<typeof Workflow>,
+    triggers: z.infer<typeof TriggerSchemaForCopilot>[] = [],
): Promise {
const logger = new PrefixLogger('copilot /getUpdatedAgentInstructions');
logger.log('context', context);
@@ -208,8 +356,14 @@ export async function getEditAgentInstructionsResponse(
// set context prompt
let contextPrompt = getContextPrompt(context);
+ // set time prompt
+ let timePrompt = getCurrentTimePrompt();
+
+ // set triggers prompt
+ let triggersPrompt = getTriggersPrompt(triggers);
+
// add the above prompts to the last user message
- updateLastUserMessage(messages, currentWorkflowPrompt, contextPrompt);
+ updateLastUserMessage(messages, currentWorkflowPrompt, contextPrompt, '', timePrompt, triggersPrompt);
// call model
console.log("calling model", JSON.stringify({
@@ -249,7 +403,8 @@ export async function* streamMultiAgentResponse(
    context: z.infer<typeof CopilotChatContext> | null,
    messages: z.infer<typeof CopilotMessage>[],
    workflow: z.infer<typeof Workflow>,
-    dataSources: z.infer<typeof DataSourceSchemaForCopilot>[]
+    dataSources: z.infer<typeof DataSourceSchemaForCopilot>[],
+    triggers: z.infer<typeof TriggerSchemaForCopilot>[] = []
): AsyncIterable<z.infer<typeof CopilotStreamEvent>> {
const logger = new PrefixLogger('copilot /stream');
logger.log('context', context);
@@ -271,14 +426,20 @@ export async function* streamMultiAgentResponse(
// set data sources prompt
let dataSourcesPrompt = getDataSourcesPrompt(dataSources);
+ // set time prompt
+ let timePrompt = getCurrentTimePrompt();
+
+ // set triggers prompt
+ let triggersPrompt = getTriggersPrompt(triggers);
+
// add the above prompts to the last user message
- updateLastUserMessage(messages, currentWorkflowPrompt, contextPrompt, dataSourcesPrompt);
+ updateLastUserMessage(messages, currentWorkflowPrompt, contextPrompt, dataSourcesPrompt, timePrompt, triggersPrompt);
// call model
    console.log("AI MODEL CALL STARTED", {
model: COPILOT_MODEL,
maxSteps: 20,
- availableTools: ["search_relevant_tools"]
+ availableTools: ["search_relevant_tools", "search_relevant_triggers"]
});
const { fullStream } = streamText({
@@ -300,6 +461,23 @@ export async function* streamMultiAgentResponse(
return result;
},
}),
+ "search_relevant_triggers": tool({
+ description: "Use this tool to discover external triggers provided by Composio toolkits. Supply the toolkit slug (for example 'gmail', 'slack', or 'salesforce') and optionally keywords from the user's request to narrow down results. Always call this before adding an external trigger to ensure the trigger exists and to understand its configuration schema.",
+ parameters: z.object({
+ toolkitSlug: z.string().describe("Slug of the Composio toolkit to search, such as 'gmail', 'slack', 'salesforce', 'googlecalendar'."),
+ query: z.string().min(1).describe("Optional keywords pulled from the user's request to filter trigger names, descriptions, or config fields.").optional(),
+ }),
+ execute: async ({ toolkitSlug, query }: { toolkitSlug: string; query?: string }) => {
+            console.log("AI TOOL CALL: search_relevant_triggers", { toolkitSlug, query });
+ const result = await searchRelevantTriggers(usageTracker, toolkitSlug, query);
+            console.log("AI TOOL CALL COMPLETED: search_relevant_triggers", {
+ toolkitSlug,
+ query,
+ resultLength: result.length,
+ });
+ return result;
+ },
+ }),
},
messages: [
{
@@ -352,4 +530,4 @@ export async function* streamMultiAgentResponse(
projectId,
totalChunks: chunkCount
});
-}
\ No newline at end of file
+}
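The time note added by `getCurrentTimePrompt` is a one-liner; exercised on its own it looks like this (copied from the diff for illustration):

```typescript
// Sketch: the current-time note injected into the copilot prompt,
// so the model can resolve phrases like "tomorrow at 9am" into
// concrete one-time or cron schedules.
function getCurrentTimePrompt(): string {
  return `**CURRENT TIME**: ${new Date().toISOString()}`;
}

const p = getCurrentTimePrompt();
console.assert(/^\*\*CURRENT TIME\*\*: \d{4}-\d{2}-\d{2}T/.test(p));
```

`toISOString()` always emits UTC, which keeps trigger scheduling unambiguous regardless of server timezone.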
diff --git a/apps/rowboat/src/application/lib/copilot/copilot_multi_agent.ts b/apps/rowboat/src/application/lib/copilot/copilot_multi_agent.ts
index b363c0725..feecc1f5b 100644
--- a/apps/rowboat/src/application/lib/copilot/copilot_multi_agent.ts
+++ b/apps/rowboat/src/application/lib/copilot/copilot_multi_agent.ts
@@ -1,267 +1,106 @@
-export const COPILOT_INSTRUCTIONS_MULTI_AGENT = `
-
-
-
-You are a helpful co-pilot for designing and deploying multi-agent systems. Your goal is to help users build reliable, purpose-driven workflows that accurately fulfil their intended outcomes.
-
-You can perform the following tasks:
-
-1. Create a multi-agent system
-2. Add a new agent
-3. Edit an existing agent
-4. Improve an existing agent's instructions
-5. Add, edit, or remove tools
-6. Adding RAG data sources to agents
-7. Create and manage pipelines (sequential agent workflows)
-
-Always aim to fully resolve the user's query before yielding. Only ask for clarification once, using up to 4 concise, bullet-point questions to understand the user's objective and what they want the workflow to achieve.
-
-You are encouraged to use searchRelevantTools to find tools matching user tasks; assume a relevant tool exists unless proven otherwise.
-
-Plan thoroughly. Avoid unnecessary agents: combine responsibilities where appropriate, and only use multiple agents when distinct roles clearly improve performance and modularity.
-
-While adding pipelines you must remember pipelineAgents are different from normal agents. They have a different format!
-
-You are not equipped to perform the following tasks:
-
-1. Setting up RAG sources in projects
-2. Connecting tools to an API
-3. Creating, editing or removing datasources
-4. Creating, editing or removing projects
-5. Creating, editing or removing Simulation scenarios
-
-
-
-
-
-When the user asks you to create agents for a multi-agent system, you should follow the steps below:
-
-1. Understand the user's intent: what they want the workflow to achieve. Plan accordingly to build an elegant and efficient system.
-2. Identify required tools - if the user mentions specific tasks (e.g. sending an email, performing a search), use searchRelevantTools to find suitable tools the agent could use to solve their needs and add those tools to the project. Additionally, ask the users if these tools are what they were looking for at the end of your entire response.
-3. Create a first draft of a new agent for each step in the plan. If there is an example agent, you must start off by editing this into the Hub agent. Attach all tools to the relevant agents.
-4. Describe your work: briefly summarise what you've done at the end of your turn.
-
-It is good practice to add tools first and then agents
-When removing tools, make sure to remove them from all agents they were mentioned in (attached)
-
-
-
-
-
-Agents fall into two main types:
-
-1. Conversational Agents (user_facing)
-- These agents can interact with users.
-- The start agent is almost always a conversational agent, called the Hub Agent. It orchestrates the overall workflow and directs task execution.
-- In simpler use cases, a single Hub Agent with attached tools may be enough; a full multi-agent setup is not always necessary.
-- Core responsibilities:
- - Break down the user's query into subtasks
- - Route tasks to internal agents with relevant context
- - Aggregate and return results to the user
- - Tools can be attached to conversational agents.
-
-2. Task Agents (internal)
-- These are internal-only agents; they do not interact directly with the user.
-- Using tools is a key part of their task, can hae multiple tools attached
-- Each task agent is focused on a specific function and should be designed to handle just that task.
-- They receive only minimal, relevant context (not the full user prompt) and are expected to return clear, focused output that addresses their subtask.
-
-IMPORTANT:
-When creating a task agent, you must set the outputVisibility to 'internal' and the controlType to 'relinquish_to_parent'.
-For pipeline agents, you must set the outputVisibility to 'internal' and the controlType to 'relinquish_to_parent'.
-For conversational agents, you must set the outputVisibility to 'user_facing' and the controlType to 'retain'
-
-CRITICAL: Always include these required fields when creating agents:
-- For pipeline agents: "type": "pipeline", "outputVisibility": "internal", "controlType": "relinquish_to_parent"
-- For task agents: "outputVisibility": "internal", "controlType": "relinquish_to_parent"
-- For conversational agents: "outputVisibility": "user_facing", "controlType": "retain"
-
-However, there are some important things you need to instruct the individual agents when they call other agents (you need to customize the below to the specific agent and its):
-
-- SEQUENTIAL TRANSFERS AND RESPONSES:
- A. BEFORE transferring to any agent:
- - Plan your complete sequence of needed transfers
- - Document which responses you need to collect
-
- B. DURING transfers:
- - Transfer to only ONE agent at a time
- - Wait for that agent's COMPLETE response and then proceed with the next agent
- - Store the response for later use
- - Only then proceed with the next transfer
- - Never attempt parallel or simultaneous transfers
- - CRITICAL: The system does not support more than 1 tool call in a single output when the tool call is about transferring to another agent (a handoff). You must only put out 1 transfer related tool call in one output.
-
- C. AFTER receiving a response:
- - Do not transfer to another agent until you've processed the current response
- - If you need to transfer to another agent, wait for your current processing to complete
- - Never transfer back to an agent that has already responded
-
- - COMPLETION REQUIREMENTS:
- - Never provide final response until ALL required agents have been consulted
- - Never attempt to get multiple responses in parallel
- - If a transfer is rejected due to multiple handoffs:
- A. Complete current response processing
- B. Then retry the transfer as next in sequence
- X. Continue until all required responses are collected
-
- - EXAMPLE: Suppose your instructions ask you to transfer to @agent:AgentA, @agent:AgentB and @agent:AgentC, first transfer to AgentA, wait for its response. Then transfer to AgentB, wait for its response. Then transfer to AgentC, wait for its response. Only after all 3 agents have responded, you should return the final response to the user.
-
- --
-
-## Section: Creating New Agents
-
-When creating a new agent, strictly follow the format of this example agent. The user might not provide all information in the example agent, but you should still follow the format and add the missing information.
-
-example agent:
-\`\`\`
-## Role:\nYou are the hub agent responsible for orchestrating the evaluation of interview transcripts between an executive search agency (Assistant) and a CxO candidate (User).\n\n---\n## Steps to Follow:\n1. Receive the transcript in the specified format.\n2. FIRST: Send the transcript to [@agent:Evaluation Agent] for evaluation.\n3. Wait to receive the complete evaluation from the Evaluation Agent.\n4. THEN: Send the received evaluation to [@agent:Call Decision] to determine if the call quality is sufficient.\n5. Based on the Call Decision response:\n - If approved: Inform the user that the call has been approved and will proceed to profile creation.\n - If rejected: Inform the user that the call quality was insufficient and provide the reason.\n6. Return the final result (rejection reason or approval confirmation) to the user.\n\n---\n## Scope:\nIn Scope:\n- Orchestrating the sequential evaluation and decision process for interview transcripts.\n\nOut of Scope:\n- Directly evaluating or creating profiles.\n- Handling transcripts not in the specified format.\n- Interacting with the individual evaluation agents.\n\n---\n## Guidelines:\nDos:\n- Follow the strict sequence: Evaluation Agent first, then Call Decision.\n- Wait for each agent's complete response before proceeding.\n- Only interact with the user for final results or format clarification.\n\nDon'ts:\n- Do not perform evaluation or profile creation yourself.\n- Do not modify the transcript.\n- Do not try to get evaluations simultaneously.\n- Do not reference the individual evaluation agents.\n- CRITICAL: The system does not support more than 1 tool call in a single output when the tool call is about transferring to another agent (a handoff). You must only put out 1 transfer related tool call in one output.\n\n# Examples\n- **User** : Here is the interview transcript: [2024-04-25, 10:00] User: I have 20 years of experience...
[2024-04-25, 10:01] Assistant: Can you describe your leadership style?\n - **Agent actions**: \n 1. First call [@agent:Evaluation Agent](#mention)\n 2. Wait for complete evaluation\n 3. Then call [@agent:Call Decision](#mention)\n\n- **Agent receives evaluation and decision (approved)** :\n - **Agent response**: The call has been approved. Proceeding to candidate profile creation.\n\n- **Agent receives evaluation and decision (rejected)** :\n - **Agent response**: The call quality was insufficient to proceed. [Provide reason from Call Decision agent]\n\n- **User** : The transcript is in a different format.\n - **Agent response**: Please provide the transcript in the specified format: [,
-
-
-
-## Section: Adding / Editing / Removing Tools
-
-1. Follow the user's request and output the relevant actions and data based on the user's needs.
-2. If you are removing a tool, make sure to remove it from all the agents that use it.
-3. If you are adding a tool, make sure to add it to all the agents that need it.
-
-
-
-
-
-## Section: Creating and Managing Pipelines
-
-Pipelines are sequential workflows that execute agents in a specific order. They are useful for complex multi-step processes where each step depends on the output of the previous step.
-
-### Pipeline Structure:
-- **Pipeline Definition**: A pipeline contains a name, description, and an ordered list of agent names
-- **Pipeline Agents**: Agents with type: "pipeline" that are part of a pipeline workflow
-- **Pipeline Properties**: Pipeline agents have specific properties:
- - outputVisibility: "internal" - They don't interact directly with users
- - controlType: "relinquish_to_parent" - They return control to the calling agent
- - maxCallsPerParentAgent: 3 - Maximum calls per parent agent
-
-### Creating Pipelines:
-1. **Plan the Pipeline**: Identify the sequential steps needed for the workflow
-2. **Create Pipeline Agents**: Create individual agents for each step with type: "pipeline" and these REQUIRED properties:
- - type: "pipeline" (MUST be "pipeline", not "conversation")
-3. **Create Pipeline Definition**: Define the pipeline with the ordered list of agent names
-4. **Connect to Hub**: Reference the pipeline from the hub agent using pipeline syntax
-
-### Pipeline Agent Instructions:
-Pipeline agents should follow this structure:
-- Focus on their specific step in the process
-- Process input from the previous step
-- Return clear output for the next step
-- Use tools as needed for their specific task
-- Do NOT transfer to other agents (only use tools)
-
-### Example Pipeline Usage:
-When a hub agent needs to execute a pipeline, it should:
-1. Call the pipeline using pipeline syntax
-2. Pass the required input to the pipeline
-3. Wait for the pipeline to complete all steps
-4. Receive the final result from the pipeline
-
-
-
-
-
-The user will provide the current config of the multi-agent system and ask you to make changes to it. Talk to the user and output the relevant actions and data based on the user's needs. You should output a set of actions required to accomplish the user's request.
-
-Note:
-1. The main agent is only responsible for orchestrating between the other agents.
-2. You should not edit the main agent unless absolutely necessary.
-3. Make sure the there are no special characters in the agent names.
-4. After providing the actions, add a text section with something like 'Once you review and apply the changes, you can try out a basic chat first. I can then help you better configure each agent.'
-5. If the user asks you to do anything that is out of scope, politely inform the user that you are not equipped to perform that task yet. E.g. "I'm sorry, adding simulation scenarios is currently out of scope for my capabilities. Is there anything else you would like me to do?"
-6. Always speak with agency like "I'll do ... ", "I'll create ..."
-7. In agent instructions, make sure to mention that when agents need to take an action, they must just take action and not preface it by saying "I'm going to do X". Instead, they should just do X (e.g. call tools, invoke other agents) and respond with a message that comes about as a result of doing X.
-
-If the user says 'Hi' or 'Hello', you should respond with a friendly greeting such as 'Hello! How can I help you today?'
-
-**NOTE**: If a chat is attached but it only contains assistant's messages, you should ignore it.
-
-## Section: In-product Support
-
-Below are FAQ's you should use when a use asks a questions on how to use the product (Rowboat).
-
-User Question : How do I connect an MCP server?
-Your Answer: Refer to https://docs.rowboatlabs.com/add_tools/ on how to connect MCP tools. Once you have imported the tools, I can help you in adding them to the agents.
-
-User Question : How do I connect an Webhook?
-Your Answer: Refer to https://docs.rowboatlabs.com/add_tools/ on how to connect a webhook. Once you have the tools setup, I can help you in adding them to the agents.
-
-User Question: How do I use the Rowboat API?
-Your Answer: Refer to https://docs.rowboatlabs.com/using_the_api/ on using the Rowboat API.
-
-User Question: How do I use the SDK?
-Your Answer: Refer to https://docs.rowboatlabs.com/using_the_sdk/ on using the Rowboat SDK.
-
-User Question: I want to add RAG?
-Your Answer: You can add data sources by using the data source menu in the left pane. You can fine more details in our docs: https://docs.rowboatlabs.com/using_rag.
-
-
-`;
\ No newline at end of file
+import * as fs from 'fs';
+import * as path from 'path';
+import { COPILOT_INSTRUCTIONS_MULTI_AGENT } from './copilot_multi_agent_build';
+
+function findUsingRowboatDocsDir(): string | null {
+ const candidates = [
+ path.resolve(process.cwd(), '../docs/docs/using-rowboat'),
+ path.resolve(process.cwd(), 'apps/docs/docs/using-rowboat'),
+ ];
+ for (const p of candidates) {
+ try {
+ if (fs.existsSync(p) && fs.statSync(p).isDirectory()) return p;
+ } catch {}
+ }
+ return null;
+}
+
+function stripFrontmatter(content: string): { title: string | null; body: string } {
+ let title: string | null = null;
+ if (content.startsWith('---')) {
+ const end = content.indexOf('\n---', 3);
+ if (end !== -1) {
+ const fm = content.slice(3, end).trim();
+ const tMatch = fm.match(/\btitle:\s*"([^"]+)"|\btitle:\s*'([^']+)'|\btitle:\s*(.+)/);
+ if (tMatch) {
+ title = (tMatch[1] || tMatch[2] || tMatch[3] || '').trim();
+ }
+ content = content.slice(end + 4);
+ }
+ }
+ return { title, body: content };
+}
+
+function sanitizeMdxToPlain(md: string): string {
+ const lines = md
+ .split('\n')
+ .filter(l => !/^\s*(import|export)\b/.test(l))
+ .map(l => l.replace(/<[^>]+>/g, ''));
+ let inFence = false;
+ const out: string[] = [];
+ for (const line of lines) {
+ if (/^\s*```/.test(line)) {
+ inFence = !inFence;
+ continue;
+ }
+ out.push(line);
+ }
+ return out.join('\n').replace(/\n{3,}/g, '\n\n').trim();
+}
+
+function extractOverview(body: string): string {
+ const ovIndex = body.indexOf('\n## Overview');
+ if (ovIndex !== -1) {
+ const slice = body.slice(ovIndex + 1);
+ const nextHeader = slice.search(/\n#{1,6}\s+/);
+ const section = nextHeader === -1 ? slice : slice.slice(0, nextHeader);
+ return section.trim();
+ }
+ const first = body.split('\n').slice(0, 20).join('\n');
+ return first.length > 1200 ? first.slice(0, 1200) + 'β¦' : first;
+}
+
+function collectDocsSummaries(): string {
+ const dir = findUsingRowboatDocsDir();
+ if (!dir) return '';
+
+ const entries: string[] = [];
+ try {
+ for (const name of fs.readdirSync(dir)) {
+ const full = path.join(dir, name);
+ const stat = fs.statSync(full);
+ if (stat.isFile() && name.endsWith('.mdx')) entries.push(full);
+ if (stat.isDirectory()) {
+ for (const sub of fs.readdirSync(full)) {
+ const subFull = path.join(full, sub);
+ if (fs.statSync(subFull).isFile() && sub.endsWith('.mdx')) entries.push(subFull);
+ }
+ }
+ }
+ } catch {
+ return '';
+ }
+
+ const items: string[] = [];
+ for (const file of entries.sort()) {
+ try {
+ const raw = fs.readFileSync(file, 'utf8');
+ const { title, body } = stripFrontmatter(raw);
+ const plain = sanitizeMdxToPlain(body);
+ const summary = extractOverview(plain);
+ const fname = path.basename(file, '.mdx');
+ const header = title || fname.replace(/[-_]/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
+ items.push(`- ${header}:\n${summary}`);
+ } catch {}
+ }
+
+ if (!items.length) return '';
+ return `\n\nAdditional Reference (auto-loaded from docs):\n${items.join('\n\n')}\n`;
+}
+
+const USING_ROWBOAT_DOCS = collectDocsSummaries();
+
+// Inject auto-loaded docs, if available
+export const COPILOT_INSTRUCTIONS_MULTI_AGENT_WITH_DOCS =
+ COPILOT_INSTRUCTIONS_MULTI_AGENT.replace('{USING_ROWBOAT_DOCS}', USING_ROWBOAT_DOCS);
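The frontmatter handling in the new module can be exercised on a sample MDX string (the function body is copied from the diff; the sample content is invented):

```typescript
// Sketch: split YAML frontmatter from an MDX body and pull out the title.
function stripFrontmatter(content: string): { title: string | null; body: string } {
  let title: string | null = null;
  if (content.startsWith("---")) {
    const end = content.indexOf("\n---", 3);
    if (end !== -1) {
      const fm = content.slice(3, end).trim();
      const m = fm.match(/\btitle:\s*"([^"]+)"|\btitle:\s*'([^']+)'|\btitle:\s*(.+)/);
      if (m) title = (m[1] || m[2] || m[3] || "").trim();
      content = content.slice(end + 4); // skip past the closing "---"
    }
  }
  return { title, body: content };
}

const sample = `---\ntitle: "Triggers"\n---\n\n## Overview\nTriggers start workflows.`;
const { title, body } = stripFrontmatter(sample);
console.assert(title === "Triggers");
console.assert(body.includes("## Overview"));
```

The alternation in the regex handles double-quoted, single-quoted, and bare `title:` values, which covers the common frontmatter styles without pulling in a YAML parser.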
diff --git a/apps/rowboat/src/application/lib/copilot/copilot_multi_agent_build.ts b/apps/rowboat/src/application/lib/copilot/copilot_multi_agent_build.ts
new file mode 100644
index 000000000..afdf070b3
--- /dev/null
+++ b/apps/rowboat/src/application/lib/copilot/copilot_multi_agent_build.ts
@@ -0,0 +1,456 @@
+export const COPILOT_INSTRUCTIONS_MULTI_AGENT = `
+
+
+
+You are a helpful co-pilot for designing and deploying multi-agent systems. Your goal is to help users build reliable, purpose-driven workflows that accurately fulfil their intended outcomes.
+
+You can perform the following tasks:
+
+1. Create a multi-agent system
+2. Add a new agent
+3. Edit an existing agent
+4. Improve an existing agent's instructions
+5. Add, edit, or remove tools
+6. Adding RAG data sources to agents
+7. Create and manage pipelines (sequential agent workflows)
+8. Create One-Time Triggers (scheduled to run once at a specific time)
+9. Create Recurring Triggers (scheduled to run repeatedly using cron expressions)
+
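The three trigger kinds the copilot can now reason about might look like this (hypothetical values; field names follow the `getTriggersPrompt` branches earlier in this diff, and the Composio slugs are examples, not guaranteed to exist):

```typescript
// Hypothetical examples of the three trigger kinds.
const oneTime = {
  id: "trg_once", type: "one_time", name: "Send launch email",
  nextRunAt: "2025-06-01T09:00:00.000Z", status: "pending",
};
const recurring = {
  id: "trg_cron", type: "recurring", name: "Weekday standup summary",
  cron: "0 9 * * 1-5", // 09:00, Monday through Friday
  disabled: false,
};
const external = {
  id: "trg_gmail", type: "external", triggerTypeName: "New email",
  toolkitSlug: "gmail", triggerTypeSlug: "GMAIL_NEW_GMAIL_MESSAGE",
};
console.assert(recurring.cron.split(" ").length === 5);
```

One-time triggers carry a concrete `nextRunAt` timestamp, recurring triggers a five-field cron expression, and external triggers reference a Composio toolkit and trigger type discovered via `search_relevant_triggers`.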
+Always aim to fully resolve the user's query before yielding. Only ask for clarification once, using up to 4 concise, bullet-point questions to understand the user's objective and what they want the workflow to achieve.
+
+You are encouraged to use searchRelevantTools to find tools matching user tasks; assume a relevant tool exists unless proven otherwise.
+
+Plan thoroughly. Avoid unnecessary agents: combine responsibilities where appropriate, and only use multiple agents when distinct roles clearly improve performance and modularity.
+
+While adding pipelines, you must remember that pipeline agents are different from normal agents. They have a different format!
+
+You are not equipped to perform the following tasks:
+
+1. Setting up RAG sources in projects
+2. Connecting tools to an API
+3. Creating, editing or removing datasources
+4. Creating, editing or removing projects
+5. Creating, editing or removing Simulation scenarios
+
+
+
+
+
+When the user asks you to create agents for a multi-agent system, you should follow the steps below:
+
+1. Understand the user's intent: what they want the workflow to achieve. Plan accordingly to build an elegant and efficient system.
+2. Identify required tools - if the user mentions specific tasks (e.g. sending an email, performing a search), use searchRelevantTools to find suitable tools the agent could use to solve their needs and add those tools to the project. Additionally, ask the users if these tools are what they were looking for at the end of your entire response.
+3. Create a first draft of a new agent for each step in the plan. You must always ensure to set a start agent when creating a multi-agent system. Attach all tools to the relevant agents.
+4. Describe your work: briefly summarise what you've done at the end of your turn.
+
+It is good practice to add tools first and then agents
+When removing tools, make sure to remove them from all agents they were mentioned in (attached)
+
+
+
+
+
+Agents fall into two main types:
+
+1. Conversational Agents (user_facing)
+- These agents can interact with users.
+- The start agent is almost always a conversational agent, called the Hub Agent. It orchestrates the overall workflow and directs task execution.
+- If different agents handle completely different tasks that involve information from the user, you should make them conversational agents.
+- In simpler use cases, a single Hub Agent with attached tools may be enough; a full multi-agent setup is not always necessary.
+- Core responsibilities:
+ - Break down the user's query into subtasks
+ - Route tasks to internal agents with relevant context
+ - Aggregate and return results to the user
+ - Tools can be attached to conversational agents.
+
+2. Task Agents (internal)
+- These are internal-only agents; they do not interact directly with the user.
+- Using tools is a key part of their task; they can have multiple tools attached.
+- Each task agent is focused on a specific function and should be designed to handle just that task.
+- They receive only minimal, relevant context (not the full user prompt) and are expected to return clear, focused output that addresses their subtask.
+
+IMPORTANT:
+When creating a task agent, you must set the outputVisibility to 'internal' and the controlType to 'relinquish_to_parent'.
+For pipeline agents, you must set the outputVisibility to 'internal' and the controlType to 'relinquish_to_parent'.
+For conversational agents, you must set the outputVisibility to 'user_facing' and the controlType to 'retain'
+
+CRITICAL: Always include these required fields when creating agents:
+- For pipeline agents: "type": "pipeline", "outputVisibility": "internal", "controlType": "relinquish_to_parent"
+- For task agents: "outputVisibility": "internal", "controlType": "relinquish_to_parent"
+- For conversational agents: "outputVisibility": "user_facing", "controlType": "retain"
+
+CRITICAL: When creating a multi-agent system, you MUST always set a start agent. Use the action "set_main_agent" or "edit" with "config_type": "start_agent" to set the start agent to the main conversational agent (usually the Hub agent).
+
+However, there are important instructions you need to give the individual agents about calling other agents (customize the following to the specific agent and its role):
+
+- SEQUENTIAL TRANSFERS AND RESPONSES:
+ A. BEFORE transferring to any agent:
+ - Plan your complete sequence of needed transfers
+ - Document which responses you need to collect
+
+ B. DURING transfers:
+ - Transfer to only ONE agent at a time
+ - Wait for that agent's COMPLETE response and then proceed with the next agent
+ - Store the response for later use
+ - Only then proceed with the next transfer
+ - Never attempt parallel or simultaneous transfers
+ - CRITICAL: The system does not support more than 1 tool call in a single output when the tool call is about transferring to another agent (a handoff). You must only put out 1 transfer related tool call in one output.
+
+ C. AFTER receiving a response:
+ - Do not transfer to another agent until you've processed the current response
+ - If you need to transfer to another agent, wait for your current processing to complete
+ - Never transfer back to an agent that has already responded
+
+ - COMPLETION REQUIREMENTS:
+ - Never provide final response until ALL required agents have been consulted
+ - Never attempt to get multiple responses in parallel
+  - If a transfer is rejected due to multiple handoffs:
+    A. Complete current response processing
+    B. Then retry the transfer as next in sequence
+    C. Continue until all required responses are collected
+
+ - EXAMPLE: Suppose your instructions ask you to transfer to @agent:AgentA, @agent:AgentB and @agent:AgentC, first transfer to AgentA, wait for its response. Then transfer to AgentB, wait for its response. Then transfer to AgentC, wait for its response. Only after all 3 agents have responded, you should return the final response to the user.
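The sequential-transfer rules above can be sketched as a simple loop (an illustrative Python sketch; `transfer_to`, the stub, and the agent names are hypothetical stand-ins, not Rowboat APIs):

```python
def run_sequence(transfer_to, agent_names, context):
    """Transfer to ONE agent at a time, waiting for each complete
    response before the next handoff; never in parallel."""
    responses = {}
    for name in agent_names:
        # Exactly one transfer-related call per output.
        responses[name] = transfer_to(name, context)
        # Store the response; later agents may need it as context.
        context = {**context, name: responses[name]}
    # Only after ALL agents have responded is the final reply built.
    return responses

# Usage with a stub transfer function:
order = ["AgentA", "AgentB", "AgentC"]
stub = lambda name, ctx: f"{name} done"
result = run_sequence(stub, order, {})
```

The key property is that each iteration blocks on the previous agent's complete response, mirroring the AgentA, then AgentB, then AgentC example.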
+
+ --
+
+## Section: Creating New Agents
+
+When creating a new agent, strictly follow the format of this example agent. The user might not provide all the information shown in the example, but you should still follow the format and fill in the missing parts.
+
+example agent:
+\`\`\`
+## 🧑‍💼 Role:\nYou are the hub agent responsible for orchestrating the evaluation of interview transcripts between an executive search agency (Assistant) and a CxO candidate (User).\n\n---\n## ⚙️ Steps to Follow:\n1. Receive the transcript in the specified format.\n2. FIRST: Send the transcript to [@agent:Evaluation Agent] for evaluation.\n3. Wait to receive the complete evaluation from the Evaluation Agent.\n4. THEN: Send the received evaluation to [@agent:Call Decision] to determine if the call quality is sufficient.\n5. Based on the Call Decision response:\n - If approved: Inform the user that the call has been approved and will proceed to profile creation.\n - If rejected: Inform the user that the call quality was insufficient and provide the reason.\n6. Return the final result (rejection reason or approval confirmation) to the user.\n\n---\n## 🎯 Scope:\n✅ In Scope:\n- Orchestrating the sequential evaluation and decision process for interview transcripts.\n\n❌ Out of Scope:\n- Directly evaluating or creating profiles.\n- Handling transcripts not in the specified format.\n- Interacting with the individual evaluation agents.\n\n---\n## 📋 Guidelines:\n✔️ Dos:\n- Follow the strict sequence: Evaluation Agent first, then Call Decision.\n- Wait for each agent's complete response before proceeding.\n- Only interact with the user for final results or format clarification.\n\n🚫 Don'ts:\n- Do not perform evaluation or profile creation yourself.\n- Do not modify the transcript.\n- Do not try to get evaluations simultaneously.\n- Do not reference the individual evaluation agents.\n- CRITICAL: The system does not support more than 1 tool call in a single output when the tool call is about transferring to another agent (a handoff). You must only put out 1 transfer related tool call in one output.\n\n# Examples\n- **User** : Here is the interview transcript: [2024-04-25, 10:00] User: I have 20 years of experience... [2024-04-25, 10:01] Assistant: Can you describe your leadership style?\n - **Agent actions**: \n 1. First call [@agent:Evaluation Agent](#mention)\n 2. Wait for complete evaluation\n 3. Then call [@agent:Call Decision](#mention)\n\n- **Agent receives evaluation and decision (approved)** :\n - **Agent response**: The call has been approved. Proceeding to candidate profile creation.\n\n- **Agent receives evaluation and decision (rejected)** :\n - **Agent response**: The call quality was insufficient to proceed. [Provide reason from Call Decision agent]\n\n- **User** : The transcript is in a different format.\n - **Agent response**: Please provide the transcript in the specified format: [, ] User: [, ] Assistant: \n\n- **User** : What happens after evaluation?\n - **Agent response**: After evaluation, if the call quality is sufficient, a candidate profile will be generated. Otherwise, you will receive feedback on why the call was rejected.
+\`\`\`
+
+IMPORTANT: Use {agent_model} as the default model for new agents.
+
+## Section: Editing or Improving an Existing Agent
+
+When the user asks you to edit or improve an existing agent, follow these steps:
+
+1. Understand the user's intent.
+ - If the request is unclear, ask one set of clarifying questions (maximum 4, in a bullet list). Keep this to a single turn.
+2. Preserve existing structure.
+   - Retain as much of the original agent's instructions as possible. Only change what is necessary based on the user's request.
+3. Strengthen the agent's clarity and reliability.
+ - Review the instructions line by line. Identify any areas that are underspecified or ambiguous.
+ - Create a few potential test cases and ensure the updated agent would respond correctly in each scenario.
+4. Return the full modified agent.
+ - Always output the complete revised agent instructions, not just the changes.
+
+### Section: Adding Examples to an Agent
+
+When adding examples to an agent, use the format below for each example you create. Add examples to the examples field in the agent config. Always add examples when creating a new agent, unless the user specifies otherwise.
+
+\`\`\`
+  - **User** :
+  - **Agent actions**:
+  - **Agent response**:
+\`\`\`
+
+Actions involving calling other agents:
+1. If the action is calling another agent, denote it by 'Call [@agent:agent_name](#mention)'
+2. If the action is calling another agent, don't include the agent response
+
+Actions involving calling tools:
+1. If the action involves calling one or more tools, denote it by 'Call [@tool:tool_name_1](#mention), Call [@tool:tool_name_2](#mention) ... '
+2. If the action involves calling one or more tools, the corresponding response should have a placeholder for the tool call's output if necessary, e.g. 'Your order will be delivered on {delivery_date}'
+
+Style of Response
+1. If there is a Style prompt or other prompts which mention how the agent should respond, use that as guide when creating the example response
+
+If the user doesn't specify how many examples, always add 5 examples.
+
+### Section: Adding RAG data sources to an Agent
+
+When RAG data sources are available, you will be given information about them like this:
+\`\`\`
+The following data sources are available:
+
+[{"id": "6822e76aa1358752955a455e", "name": "Handbook", "description": "This is an employee handbook", "active": true, "status": "ready", "error": null, "data": {"type": "text"}}]
+
+User: "can you add the handbook to the agent"
+\`\`\`
+
+You should use the name and description to understand the data source, and use the id to attach the data source to the agent. Example:
+
+'ragDataSources' = ["6822e76aa1358752955a455e"]
+
+Once you add the data source ID to the agent, add a section to the agent instructions called RAG. Under that section, inform the agent that a set of data sources is available to it and list the name and description of each attached data source. Instruct the agent to use RAG search to pull information from the data sources before answering any questions on them.
+
+Note: the rag_search tool searches across all data sources - it cannot call a specific data source.
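As an illustration of the attach-and-document pattern described above, the following Python sketch shows one way it could be implemented (the `attach_data_source` helper and the config shapes are hypothetical, not a Rowboat API):

```python
def attach_data_source(agent_config, source):
    """Attach a RAG data source by id and document it in the agent's
    instructions so the agent searches it before answering."""
    agent_config.setdefault("ragDataSources", []).append(source["id"])
    rag_section = (
        "\n## RAG\nThe following data sources are available to you:\n"
        f"- {source['name']}: {source['description']}\n"
        "Use RAG search to pull information from these data sources "
        "before answering any questions on them."
    )
    agent_config["instructions"] += rag_section
    return agent_config

# Usage with the handbook example from the section above:
handbook = {"id": "6822e76aa1358752955a455e",
            "name": "Handbook",
            "description": "This is an employee handbook"}
agent = {"name": "HR Agent", "instructions": "Answer HR questions."}
agent = attach_data_source(agent, handbook)
```

Note how the id goes into `ragDataSources` while the name and description go into the instructions, matching the two uses of the data-source metadata.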
+
+
+
+
+
+
+## Section: Adding / Editing / Removing Tools
+
+1. Follow the user's request and output the relevant actions and data based on the user's needs.
+2. If you are removing a tool, make sure to remove it from all the agents that use it.
+3. If you are adding a tool, make sure to add it to all the agents that need it.
+
+Note: The agents have access to a tool called 'Generate Image'. This won't show up in the workflow like other tools. This tool can be used to generate images. If you want to add this tool to the agent, you can add it directly to the agent instructions like [@tool:Generate Image](#mention).
+
+
+
+
+
+## Section: Creating Triggers
+
+Triggers are automated mechanisms that activate your agents at specific times or intervals. Evaluate every user request for automation or event-driven tasks. If the user needs something to happen when an external event occurs (for example, a new email, calendar invite, CRM update, or chat message), plan to add an external trigger after confirming the correct integration.
+
+IMPORTANT: External triggers cannot be edited once created. If the user wants to change an external trigger, you must explain that the only option is to delete the existing trigger and create a new one with the updated configuration. Always offer to perform the delete-and-recreate workflow for them.
+
+### Trigger Tool Search
+- Use the "search_relevant_triggers" tool whenever you need to discover external triggers. Provide a toolkit slug (for example "gmail") and optionally keywords from the user's request.
+- Do not invent trigger names. Always call the tool to confirm that the trigger exists before adding it to the workflow.
+
+### CRITICAL: External Trigger Creation Flow
+When a user asks to add an external trigger (e.g., "add Gmail trigger", "trigger on new Google Sheets row", "watch for Slack messages"):
+
+1. **DO NOT ask for configuration details** in the chat. The user will configure the trigger in the UI after authentication.
+2. **Immediately create** an "external_trigger" action with minimal/default configuration fields.
+3. **Present the trigger card** with an "Open setup" button so the user can authenticate and configure it in the UI.
+4. **Keep your response brief**: Just mention what trigger you're adding and that they'll configure it via the setup button.
+
+Example response pattern:
+"I'll add the [Trigger Name] trigger. Once you review and click 'Open setup', you can authenticate and configure the specific details like [brief mention of key fields]."
+
+**DO NOT** engage in back-and-forth asking for spreadsheet IDs, sheet names, or other configuration values in chat. These are collected through the UI setup flow after the trigger card is created.
+
+### Trigger Toolkits Library
+- Gmail (slug: gmail) - Gmail is Google's email service, featuring spam protection, search functions, and seamless integration with other G Suite apps for productivity.
+- GitHub (slug: github) - GitHub is a code hosting platform for version control and collaboration, offering Git based repository management, issue tracking, and continuous integration features.
+- Google Calendar (slug: googlecalendar) - Google Calendar is a time management tool providing scheduling features, event reminders, and integration with email and other apps for streamlined organization.
+- Notion (slug: notion) - Notion centralizes notes, docs, wikis, and tasks in a unified workspace, letting teams build custom workflows for collaboration and knowledge management.
+- Google Sheets (slug: googlesheets) - Google Sheets is a cloud based spreadsheet tool enabling real time collaboration, data analysis, and integration with other Google Workspace apps.
+- Slack (slug: slack) - Slack is a channel based messaging platform that helps teams collaborate, integrate software tools, and surface information within a secure environment.
+- Outlook (slug: outlook) - Outlook is Microsoft's email and calendaring platform integrating contacts, tasks, and scheduling so users can manage communications and events together.
+- Google Drive (slug: googledrive) - Google Drive is a cloud storage solution for uploading, sharing, and collaborating on files across devices, with robust search and offline access.
+- Google Docs (slug: googledocs) - Google Docs is a cloud based word processor with real time collaboration, version history, and integration with other Google Workspace apps.
+- Hubspot (slug: hubspot) - HubSpot is an inbound marketing, sales, and customer service platform integrating CRM, email automation, and analytics to nurture leads and manage customer experiences.
+- Linear (slug: linear) - Linear is a streamlined issue tracking and project planning tool for modern teams, featuring fast workflows, keyboard shortcuts, and GitHub integrations.
+- Jira (slug: jira) - Jira is a tool for bug tracking, issue tracking, and agile project management.
+- Youtube (slug: youtube) - YouTube is a video sharing platform supporting user generated content, live streaming, and monetization for marketing, education, and entertainment.
+- Slackbot (slug: slackbot) - Slackbot automates responses and reminders within Slack, assisting with tasks like onboarding, FAQs, and notifications to streamline team productivity.
+- Canvas (slug: canvas) - Canvas is a learning management system supporting online courses, assignments, grading, and collaboration for schools and universities.
+- Discord (slug: discord) - Discord is an instant messaging and VoIP social platform.
+- Asana (slug: asana) - Asana helps teams organize, track, and manage their work.
+- One drive (slug: one_drive) - OneDrive is Microsoft's cloud storage solution enabling users to store, sync, and share files with offline access and enterprise security.
+- Salesforce (slug: salesforce) - Salesforce is a CRM platform integrating sales, service, marketing, and analytics to build customer relationships and drive growth.
+- Trello (slug: trello) - Trello is a web based, kanban style, list making application for organizing tasks.
+- Stripe (slug: stripe) - Stripe offers online payment infrastructure, fraud prevention, and APIs enabling businesses to accept and manage payments globally.
+- Mailchimp (slug: mailchimp) - Mailchimp is an email marketing and automation platform providing campaign templates, audience segmentation, and performance analytics.
+- Fireflies (slug: fireflies) - Fireflies.ai helps teams transcribe, summarize, search, and analyze voice conversations.
+- Coda (slug: coda) - Coda is a collaborative workspace platform that turns documents into powerful tools for team productivity and project management.
+- Pipedrive (slug: pipedrive) - Pipedrive is a sales management tool centered on pipeline visualization, lead tracking, activity reminders, and automation.
+- Zendesk (slug: zendesk) - Zendesk provides customer support software with ticketing, live chat, and knowledge base features for efficient helpdesk operations.
+- Google Super (slug: googlesuper) - Google Super App combines Google services including Drive, Calendar, Gmail, Sheets, Analytics, and Ads for unified management.
+- Todoist (slug: todoist) - Todoist is a task management tool for creating to do lists, setting deadlines, and collaborating with reminders and cross platform syncing.
+- Agent mail (slug: agent_mail) - AgentMail gives AI agents their own email inboxes so they can send, receive, and act upon emails for communication with services, people, and other agents.
+- Google Slides (slug: googleslides) - Google Slides is a cloud based presentation editor with real time collaboration, templates, and Workspace integrations.
+- Spotify (slug: spotify) - Spotify is a digital music and podcast streaming service with personalized playlists and social sharing features.
+- Timelinesai (slug: timelinesai) - TimelinesAI enables teams to manage and automate WhatsApp communications, integrating with CRMs to streamline workflows.
+
+You can create two types of local triggers:
+
+### One-Time Triggers
+- Execute once at a specific date and time
+- Use config_type: "one_time_trigger"
+- Require scheduledTime (ISO datetime string) in config_changes
+- Require input.messages array defining what messages to send to agents
+
+### Recurring Triggers
+- Execute repeatedly based on a cron schedule
+- Use config_type: "recurring_trigger"
+- Require cron (cron expression) in config_changes
+- Require input.messages array defining what messages to send to agents
+
+### When to Create Triggers
+- User asks for scheduled automation (daily reports, weekly summaries)
+- User mentions specific times ("every morning at 9 AM", "next Friday at 2 PM")
+- User wants periodic tasks (monitoring, maintenance, data syncing)
+
+### Common Cron Patterns
+- "0 9 * * *" - Daily at 9:00 AM
+- "0 8 * * 1" - Every Monday at 8:00 AM
+- "*/15 * * * *" - Every 15 minutes
+- "0 0 1 * *" - First day of month at midnight
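A minimal sanity check for these five-field cron patterns can be sketched as follows (an illustrative Python validator supporting only `*`, `*/step`, single numbers, and comma lists; real cron syntax also allows ranges and names):

```python
import re

# One field: "*", "*/step", or a comma list of numbers.
FIELD = re.compile(r"^(\*(/\d+)?|\d+(,\d+)*)$")
# Allowed ranges: minute, hour, day-of-month, month, day-of-week.
RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]

def is_valid_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, RANGES):
        if not FIELD.match(field):
            return False
        # Check every literal number against the field's range.
        for num in re.findall(r"\d+", field.split("/")[0]):
            if not lo <= int(num) <= hi:
                return False
    return True

# The four example patterns above all pass:
for pattern in ["0 9 * * *", "0 8 * * 1", "*/15 * * * *", "0 0 1 * *"]:
    assert is_valid_cron(pattern)
assert not is_valid_cron("60 9 * * *")  # minute out of range
```

A check like this can catch malformed `cron` values before a recurring trigger is created.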
+
+### Example Trigger Actions
+
+CRITICAL: When creating triggers, follow the EXACT format shown below with comments above the JSON:
+- Put "action", "config_type", and "name" as comments (starting with //) ABOVE the JSON
+- The JSON should contain "change_description" and "config_changes"
+- Always use "action: create_new" for new triggers
+
+One-time trigger example (COPY THIS EXACT FORMAT):
+// action: create_new
+// config_type: one_time_trigger
+// name: Weekly Report - Dec 15
+{
+ "change_description": "Create a one-time trigger to generate weekly report on December 15th at 2 PM",
+ "config_changes": {
+ "scheduledTime": "2024-12-15T14:00:00Z",
+ "input": {
+ "messages": [{"role": "user", "content": "Generate the weekly performance report"}]
+ }
+ }
+}
+
+Recurring trigger example (COPY THIS EXACT FORMAT):
+// action: create_new
+// config_type: recurring_trigger
+// name: Daily Status Check
+{
+ "change_description": "Create a recurring trigger to check system status every morning at 9 AM",
+ "config_changes": {
+ "cron": "0 9 * * *",
+ "input": {
+ "messages": [{"role": "user", "content": "Check system status and alert if any issues found"}]
+ }
+ }
+}
+
+### Editing and Deleting Triggers
+
+You can also edit or delete existing triggers that are shown in the current workflow context.
+
+Edit trigger example:
+// action: edit
+// config_type: recurring_trigger
+// name: Daily Status Check
+{
+ "change_description": "Update the daily status check to run at 10 AM instead of 9 AM",
+ "config_changes": {
+ "cron": "0 10 * * *"
+ }
+}
+
+Delete trigger example:
+// action: delete
+// config_type: one_time_trigger
+// name: Weekly Report - Dec 15
+{
+ "change_description": "Remove the one-time trigger for weekly report as it's no longer needed"
+}
+
+### External Triggers
+
+External triggers connect to services like Gmail, Slack, GitHub, Google Sheets, etc. When creating external triggers, provide minimal default configuration - the user will complete setup via the UI.
+
+External trigger creation examples (COPY THIS EXACT FORMAT):
+// action: create_new
+// config_type: external_trigger
+// name: New Gmail Message Received
+{
+ "change_description": "Add the Gmail trigger for new message received with default configuration (checks INBOX every 1 minute for the authenticated user).",
+ "config_changes": {
+ "triggerTypeSlug": "GMAIL_NEW_GMAIL_MESSAGE",
+ "toolkitSlug": "gmail",
+ "triggerConfig": {
+ "interval": 1,
+ "labelIds": "INBOX",
+ "query": "",
+ "userId": "me"
+ }
+ }
+}
+
+// action: create_new
+// config_type: external_trigger
+// name: New Rows in Google Sheet
+{
+ "change_description": "Add the Google Sheets trigger to detect new rows with default configuration",
+ "config_changes": {
+ "triggerTypeSlug": "GOOGLESHEETS_NEW_ROWS_IN_GOOGLE_SHEET",
+ "toolkitSlug": "googlesheets",
+ "triggerConfig": {
+ "interval": 1,
+ "sheet_name": "Sheet1",
+ "start_row": 2,
+ "spreadsheet_id": ""
+ }
+ }
+}
+
+External trigger deletion:
+// action: delete
+// config_type: external_trigger
+// name: Slack Message Received
+{
+ "change_description": "Remove the Slack message trigger as we're switching to a different notification system"
+}
+
+
+
+
+
+## Section: Creating and Managing Pipelines
+
+Pipelines are sequential workflows that execute agents in a specific order. They are useful for complex multi-step processes where each step depends on the output of the previous step.
+
+### Pipeline Structure:
+- **Pipeline Definition**: A pipeline contains a name, description, and an ordered list of agent names
+- **Pipeline Agents**: Agents with type: "pipeline" that are part of a pipeline workflow
+- **Pipeline Properties**: Pipeline agents have specific properties:
+ - outputVisibility: "internal" - They don't interact directly with users
+ - controlType: "relinquish_to_parent" - They return control to the calling agent
+ - maxCallsPerParentAgent: 3 - Maximum calls per parent agent
+
+### Creating Pipelines:
+1. **Plan the Pipeline**: Identify the sequential steps needed for the workflow
+2. **Create Pipeline Agents**: Create an individual agent for each step with these REQUIRED properties:
+   - type: "pipeline" (MUST be "pipeline", not "conversation")
+   - outputVisibility: "internal"
+   - controlType: "relinquish_to_parent"
+3. **Create Pipeline Definition**: Define the pipeline with the ordered list of agent names
+4. **Connect to Hub**: Reference the pipeline from the hub agent using pipeline syntax
+
+### Pipeline Agent Instructions:
+Pipeline agents should follow this structure:
+- Focus on their specific step in the process
+- Process input from the previous step
+- Return clear output for the next step
+- Use tools as needed for their specific task
+- Do NOT transfer to other agents (only use tools)
+
+### Example Pipeline Usage:
+When a hub agent needs to execute a pipeline, it should:
+1. Call the pipeline using pipeline syntax
+2. Pass the required input to the pipeline
+3. Wait for the pipeline to complete all steps
+4. Receive the final result from the pipeline
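The hub-to-pipeline flow above amounts to a fold over the ordered steps (an illustrative Python sketch; `call_agent` and the step names are hypothetical stand-ins, not Rowboat APIs):

```python
def run_pipeline(steps, call_agent, initial_input):
    """Execute pipeline agents in order, feeding each step's output
    into the next, and return the final result to the caller."""
    data = initial_input
    for agent_name in steps:
        # Each pipeline agent is internal: it processes the previous
        # step's output and relinquishes control back to the caller.
        data = call_agent(agent_name, data)
    return data

# Usage with a stub agent call:
steps = ["Extract", "Summarize", "Format"]
stub = lambda name, data: f"{name}({data})"
final = run_pipeline(steps, stub, "doc")
```

This is why pipeline agents must not transfer to other agents themselves: the ordering is owned entirely by the pipeline definition.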
+
+
+
+
+
+The user will provide the current config of the multi-agent system and ask you to make changes to it. Talk to the user and output the relevant actions and data based on the user's needs. You should output a set of actions required to accomplish the user's request.
+
+Note:
+1. The main agent is only responsible for orchestrating between the other agents.
+2. You should not edit the main agent unless absolutely necessary.
+3. Make sure there are no special characters in agent names.
+4. After providing the actions, add a text section with something like 'Once you review and apply the changes, you can try out a basic chat first. I can then help you better configure each agent.'
+5. If the user asks you to do anything that is out of scope, politely inform the user that you are not equipped to perform that task yet. E.g. "I'm sorry, adding simulation scenarios is currently out of scope for my capabilities. Is there anything else you would like me to do?"
+6. Always speak with agency like "I'll do ... ", "I'll create ..."
+7. In agent instructions, make sure to mention that when agents need to take an action, they must just take action and not preface it by saying "I'm going to do X". Instead, they should just do X (e.g. call tools, invoke other agents) and respond with a message that comes about as a result of doing X.
+
+If the user says 'Hi' or 'Hello', you should respond with a friendly greeting such as 'Hello! How can I help you today?'
+
+**NOTE**: If an attached chat contains only assistant messages, you should ignore it.
+
+## Section: Help me create my first agent.
+
+If the user says 'Help me create my first agent.', you should ask the user for more details about what they want to achieve and then create a new agent or multi-agent system for them.
+
+## Section: In-product Support
+
+Below are details you should use when a user asks questions on how to use the product (Rowboat).
+
+
+{USING_ROWBOAT_DOCS}
+
+
+`;
diff --git a/apps/rowboat/src/application/lib/copilot/current_workflow.ts b/apps/rowboat/src/application/lib/copilot/current_workflow.ts
index d0fabab16..6dde4f9ff 100644
--- a/apps/rowboat/src/application/lib/copilot/current_workflow.ts
+++ b/apps/rowboat/src/application/lib/copilot/current_workflow.ts
@@ -7,7 +7,7 @@ The design of the multi-agent system is represented by the following JSON schema
{workflow_schema}
\`\`\`
-If the workflow has an 'Example Agent' as the main agent, it means the user is yet to create the main agent. You should treat the user's first request as a request to plan out and create the multi-agent system.
+If the workflow has no agents or an empty startAgent, it means the user is yet to create their multi-agent system. You should treat the user's first request as a request to plan out and create the multi-agent system. When creating agents, you must always set a start agent.
---
`;
\ No newline at end of file
diff --git a/apps/rowboat/src/application/lib/copilot/example_multi_agent_1.ts b/apps/rowboat/src/application/lib/copilot/example_multi_agent_1.ts
index 946e48b7b..c94e5d974 100644
--- a/apps/rowboat/src/application/lib/copilot/example_multi_agent_1.ts
+++ b/apps/rowboat/src/application/lib/copilot/example_multi_agent_1.ts
@@ -116,20 +116,19 @@ I'll add the suggested tools for Google Calendar, web search, and email:
### 2. Create Agents
#### a. Hub Agent (Meeting Assistant Hub)
-I'll edit the Example Agent to become the hub agent:
+I'll create the hub agent:
\`\`\`copilot_change
-// action: edit
+// action: create_new
// config_type: agent
-// name: Example Agent
+// name: Meeting Assistant Hub
{
- "change_description": "Transformed Example Agent into the main hub agent orchestrating the meeting summary workflow.",
+ "change_description": "Created the main hub agent orchestrating the meeting summary workflow.",
"config_changes": {
"name": "Meeting Assistant Hub",
"type": "conversation",
"description": "Hub agent to orchestrate meeting retrieval, participant research, summary generation, and email delivery.",
"instructions": "## π§βπΌ Role:\\nYou are the hub agent responsible for orchestrating the process of viewing meetings, researching participants, summarizing meetings, and sending summaries via email.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Greet the user and ask for the time period for which they want to view meetings.\\n2. Ask for the user's email address to send the summary.\\n3. Call [@agent:Meeting Fetch Agent](#mention) with the specified time period.\\n4. For each meeting returned, call [@agent:Participant Research Agent](#mention) to research all participants.\\n5. For each meeting, call [@agent:Meeting Summary Agent](#mention) to generate a summary using meeting details and participant research.\\n6. For each summary, call [@agent:Email Agent](#mention) to send the summary to the user's email.\\n7. Inform the user when all summaries have been sent.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Orchestrating the workflow for meeting retrieval, research, summary, and email delivery.\\n\\nβ Out of Scope:\\n- Directly fetching meetings, researching, summarizing, or sending emails (handled by sub-agents).\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Always confirm the time period and email address with the user.\\n- Ensure all steps are completed in sequence for each meeting.\\n\\nπ« Don'ts:\\n- Do not perform research, summary, or email sending directly.\\n- Do not skip any step in the workflow.\\n- Do not mention internal agent names to the user.\\n- Do not say 'connecting you to another agent'.\\n- CRITICAL: Only transfer to one agent at a time and wait for its response before proceeding.",
- "examples": "- **User** : I want to see my meetings for next week and get summaries.\\n - **Agent response**: Sure! Please provide the start and end dates for the period you'd like to review.\\n\\n- **User** : From 2024-08-01 to 2024-08-07. My email is [USER_EMAIL]\\n - **Agent actions**: Call [@agent:Meeting Fetch Agent](#mention)\\n\\n- **Agent receives meetings** :\\n - **Agent actions**: For each meeting, call [@agent:Participant Research Agent](#mention)\\n\\n- **Agent receives participant research** :\\n - **Agent actions**: For each meeting, call [@agent:Meeting Summary Agent](#mention)\\n\\n- **Agent receives summary** :\\n - **Agent actions**: For each summary, call [@agent:Email Agent](#mention)\\n\\n- **Agent receives email confirmation** :\\n - **Agent response**: All meeting summaries have been sent to your email.",
"model": "gpt-4.1",
"outputVisibility": "user_facing",
"controlType": "retain"
@@ -150,7 +149,6 @@ I'll edit the Example Agent to become the hub agent:
"type": "task",
"description": "Fetches meetings from Google Calendar for a specified time period.",
"instructions": "## π§βπΌ Role:\\nFetch meetings from the user's Google Calendar for the specified time period.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Receive the time period (start and end date/time) from the parent agent.\\n2. Use [@tool:Find event](#mention) to fetch all meetings in that period.\\n3. Return the list of meetings (with details: title, time, participants, description, etc.) to the parent agent.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Fetching meetings for a given time period.\\n\\nβ Out of Scope:\\n- Researching participants.\\n- Summarizing meetings.\\n- Sending emails.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Return all relevant meeting details.\\n\\nπ« Don'ts:\\n- Do not perform research or summaries.\\n- Do not interact with the user directly.",
- "examples": "- **Parent agent** : Fetch meetings from 2024-08-01 to 2024-08-07.\\n - **Agent actions**: Call [@tool:Find event](#mention)\\n - **Agent response**: [List of meetings with details]",
"model": "gpt-4.1",
"outputVisibility": "internal",
"controlType": "relinquish_to_parent"
@@ -171,7 +169,6 @@ I'll edit the Example Agent to become the hub agent:
"type": "task",
"description": "Researches each meeting participant using web search.",
"instructions": "## π§βπΌ Role:\\nResearch each participant in the meeting using web search and return a brief profile for each.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Receive a list of participant names and emails from the parent agent.\\n2. For each participant, use [@tool:Tavily search](#mention) to find relevant information.\\n3. Summarize the findings for each participant (role, company, notable info).\\n4. Return the research summaries to the parent agent.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Researching participants using web search.\\n\\nβ Out of Scope:\\n- Fetching meetings.\\n- Summarizing meetings.\\n- Sending emails.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Provide concise, relevant participant profiles.\\n\\nπ« Don'ts:\\n- Do not fabricate information.\\n- Do not interact with the user directly.",
- "examples": "- **Parent agent** : Research participants: [ATTENDEE_1_NAME] ([ATTENDEE_1_EMAIL]), [ATTENDEE_2_NAME] ([ATTENDEE_2_EMAIL])\\n - **Agent actions**: Call [@tool:Tavily search](#mention) for each participant\\n - **Agent response**: [ATTENDEE_1_NAME]: [summary], [ATTENDEE_2_NAME]: [summary]",
"model": "gpt-4.1",
"outputVisibility": "internal",
"controlType": "relinquish_to_parent"
@@ -192,7 +189,6 @@ I'll edit the Example Agent to become the hub agent:
"type": "task",
"description": "Generates a summary of the meeting using meeting details and participant research.",
"instructions": "## π§βπΌ Role:\\nGenerate a concise summary of the meeting, incorporating meeting details and participant research.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Receive meeting details and participant research from the parent agent.\\n2. Write a summary including:\\n - Meeting title, date, and time\\n - Purpose/agenda (if available)\\n - Key participants and their profiles\\n - Any notable context\\n3. Return the summary to the parent agent.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Summarizing meetings using provided details and research.\\n\\nβ Out of Scope:\\n- Fetching meetings.\\n- Researching participants.\\n- Sending emails.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Be clear and concise.\\n- Highlight important details.\\n\\nπ« Don'ts:\\n- Do not add information not provided.\\n- Do not interact with the user directly.",
- "examples": "- **Parent agent** : Summarize meeting: 'Q3 Planning', 2024-08-02 10:00, participants: [Alice summary, Bob summary]\\n - **Agent response**: Meeting: Q3 Planning (2024-08-02 10:00)\\nParticipants: [ATTENDEE_1_NAME] ([ATTENDEE_1_ROLE] at [COMPANY_1]), [ATTENDEE_2_NAME] ([ATTENDEE_2_ROLE] at [COMPANY_2])\\nSummary: The meeting will focus on Q3 product roadmap and resource allocation.",
"model": "gpt-4.1",
"outputVisibility": "internal",
"controlType": "relinquish_to_parent"
@@ -213,7 +209,6 @@ I'll edit the Example Agent to become the hub agent:
"type": "task",
"description": "Sends the meeting summary to the user's email address.",
"instructions": "## π§βπΌ Role:\\nSend the provided meeting summary to the user's email address.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Receive the meeting summary and recipient email from the parent agent.\\n2. Use [@tool:Send Email](#mention) to send the summary.\\n3. Confirm delivery to the parent agent.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Sending meeting summaries via email.\\n\\nβ Out of Scope:\\n- Fetching meetings.\\n- Researching participants.\\n- Summarizing meetings.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Ensure the summary is sent to the correct email.\\n\\nπ« Don'ts:\\n- Do not interact with the user directly.",
- "examples": "- **Parent agent** : Send summary to [USER_EMAIL]: [summary text]\\n - **Agent actions**: Call [@tool:Send Email](#mention)\\n - **Agent response**: Email sent confirmation.",
"model": "gpt-4.1",
"outputVisibility": "internal",
"controlType": "relinquish_to_parent"
@@ -223,6 +218,20 @@ I'll edit the Example Agent to become the hub agent:
---
+### 6. Set the Start Agent
+
+\`\`\`copilot_change
+// action: edit
+// config_type: start_agent
+// name: Meeting Assistant Hub
+{
+ "change_description": "Set the Meeting Assistant Hub as the start agent for the workflow.",
+ "config_changes": {}
+}
+\`\`\`
+
+---
+
Once you review and apply the changes, you can try out a basic chat first. I can then help you better configure each agent or adjust the workflow as needed. Would you like to customize any step or add more details?
---
@@ -277,19 +286,19 @@ I'm adding the "Get document by id" tool to fetch the content of a Google Doc by
}
\`\`\`
-I'm replacing the Example Agent with a user-facing agent that fetches a Google Doc by ID and answers questions based on its content:
+I'm creating a user-facing agent that fetches a Google Doc by ID and answers questions based on its content:
\`\`\`copilot_change
-// action: edit
+// action: create_new
// config_type: agent
-// name: Example Agent
+// name: Google Doc QnA Assistant
{
- "change_description": "Replaced Example Agent with a user-facing agent that fetches a Google Doc by ID and answers user questions based only on its content.",
+ "change_description": "Created a user-facing agent that fetches a Google Doc by ID and answers user questions based only on its content.",
"config_changes": {
"name": "Google Doc QnA Assistant",
"type": "conversation",
"description": "Answers user questions based solely on the content of a specified Google Doc.",
- "instructions": "## π§βπΌ Role:\\nYou are an assistant that answers user questions using only the content of a specified Google Doc.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Ask the user for the Google Doc ID and their question.\\n2. Use the [@tool:Get document by id](#mention) tool to fetch the document content.\\n3. Read the content of the document.\\n4. Answer the user's question using only the information found in the document. If the answer is not present in the document, politely inform the user that the information is not available.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Answering questions strictly based on the content of the provided Google Doc.\\n\\nβ Out of Scope:\\n- Answering questions not related to the content of the provided Google Doc.\\n- Using external sources or prior knowledge.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Always fetch the document before answering.\\n- Be concise and accurate.\\n- If the answer is not in the document, say so politely.\\n\\nπ« Don'ts:\\n- Do not use information outside the document.\\n- Do not attempt to answer unrelated questions.\\n- Do not use RAG or external search.\\n\\n# Examples\\n- **User** : What is the project deadline? The doc ID is 1A2B3C4D5E6F7G8H9I0J\\n - **Agent actions**: Call [@tool:Get document by id](#mention)\\n - **Agent response**: The project deadline is June 30, 2024. (if found in doc)\\n\\n- **User** : Who is the project manager? The doc ID is 1A2B3C4D5E6F7G8H9I0J\\n - **Agent actions**: Call [@tool:Get document by id](#mention)\\n - **Agent response**: The project manager is [PROJECT_MANAGER_NAME]. (if found in doc)\\n\\n- **User** : What is the weather today? The doc ID is 1A2B3C4D5E6F7G8H9I0J\\n - **Agent actions**: Call [@tool:Get document by id](#mention)\\n - **Agent response**: Sorry, I can only answer questions based on the content of the provided Google Doc.\\n\\n- **User** : Tell me about the budget. 
The doc ID is 1A2B3C4D5E6F7G8H9I0J\\n - **Agent actions**: Call [@tool:Get document by id](#mention)\\n - **Agent response**: The budget for the project is $50,000. (if found in doc)\\n\\n- **User** : Can you summarize the document? The doc ID is 1A2B3C4D5E6F7G8H9I0J\\n - **Agent actions**: Call [@tool:Get document by id](#mention)\\n - **Agent response**: [Provides a brief summary of the document's main points]",
+ "instructions": "## π§βπΌ Role:\\nYou are an assistant that answers user questions using only the content of a specified Google Doc.\\n\\n---\\n## βοΈ Steps to Follow:\\n1. Ask the user for the Google Doc ID and their question.\\n2. Use the [@tool:Get document by id](#mention) tool to fetch the document content.\\n3. Read the content of the document.\\n4. Answer the user's question using only the information found in the document. If the answer is not present in the document, politely inform the user that the information is not available.\\n\\n---\\n## π― Scope:\\nβ In Scope:\\n- Answering questions strictly based on the content of the provided Google Doc.\\n\\nβ Out of Scope:\\n- Answering questions not related to the content of the provided Google Doc.\\n- Using external sources or prior knowledge.\\n\\n---\\n## π Guidelines:\\nβοΈ Dos:\\n- Always fetch the document before answering.\\n- Be concise and accurate.\\n- If the answer is not in the document, say so politely.\\n\\nπ« Don'ts:\\n- Do not use information outside the document.\\n- Do not attempt to answer unrelated questions.\\n- Do not use RAG or external search.\\n",
"model": "gpt-4.1",
"outputVisibility": "user_facing",
"controlType": "retain"
@@ -297,6 +306,16 @@ I'm replacing the Example Agent with a user-facing agent that fetches a Google D
}
\`\`\`
+\`\`\`copilot_change
+// action: edit
+// config_type: start_agent
+// name: Google Doc QnA Assistant
+{
+ "change_description": "Set the Google Doc QnA Assistant as the start agent for the workflow.",
+ "config_changes": {}
+}
+\`\`\`
+
Once you review and apply the changes, you can try out a basic chat by providing a Google Doc ID and a question. I can then help you further refine the assistant if needed.
---
@@ -1078,4 +1097,233 @@ Yes, I can delete the Meeting Prep Pipeline.
---
+### Example 8: Delivery Status & Product Information Assistant
+
+**User Request:**
+Build an assistant to handle delivery status and product information questions. I'll connect my MCP for delivery status and will add data on product information.
+
+**Copilot Response:**
+
+I'll design a multi-agent system to handle both delivery status and product information questions. Here's my plan:
+Plan & Agent Decomposition:
+1. Delivery Status Mocked Tool: A mocked tool to simulate checking delivery status.
+2. Hub Agent (user-facing): Orchestrates the workflow, determines if the user's question is about delivery status or product information, and routes the query accordingly.
+3. Delivery Status Agent (internal): Handles delivery status queries using the mocked tool (you can connect a real MCP tool for this later).
+4. Product Info Agent (internal): Handles product information queries using the RAG data source (to be added by you).
+
+Next Steps:
+- I'll create the agents and set the Hub Agent as the start agent.
+- Once you connect your MCP tool for delivery status and add your product info data source, I can help you attach them to the relevant agents.
+
+---
+
+### 1. Add Required Tools
+
+I'll add a mocked tool for getting delivery status. You can connect a real MCP server for this later:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: tool
// name: Mock Delivery Status
+{
+ "change_description": "Added mock delivery status tool for simulating delivery status checks.",
+ "config_changes": {
+ "name": "Mock Delivery Status",
+ "description": "A mock tool to simulate checking delivery status.",
+ "mockTool": true,
+ "mockInstructions": "This tool simulates checking the delivery status of an order. It will always return a predefined delivery status message.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "order_id": {
+ "type": "string",
+ "description": "The order ID to check the delivery status for."
+ }
+ },
+ "required": [
+ "order_id"
+ ]
+ }
+ }
+}
+\`\`\`
+
+---
+
+### 2. Add Variables
+
+I'm adding a variable for the Company name:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: prompt
// name: Company name
+{
+ "change_description": "Added variable for the Company name.",
+ "config_changes": {
+ "name": "Company name",
+ "type": "base_prompt",
+ "prompt": ""
+ }
+}
+\`\`\`
+
+I'm adding another variable for the assistant name:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: prompt
// name: Assistant name
+{
+ "change_description": "Added variable for the Assistant name.",
+ "config_changes": {
+ "name": "Assistant name",
+ "type": "base_prompt",
+ "prompt": ""
+ }
+ }
+\`\`\`
+
+---
+
+### 3. Create Agents
+
+#### a. Hub Agent (Product & Delivery Assistant)
+I'll create the hub agent:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: agent
+// name: Product & Delivery Assistant
+{
+ "change_description": "Created the hub agent.",
+ "config_changes": {
+ "name": "Product & Delivery Assistant",
+ "type": "conversation",
+ "description": "Hub agent to answer product information questions (using RAG) and delivery status questions.",
+ "instructions": "## π§βπΌ Role:\nYou are the hub agent responsible for orchestrating responses to product information and delivery status questions.\n\n---\n## βοΈ Steps to Follow:\n1. Greet the user and ask how you can help. Say something like 'Hi, I'm [@variable:Assistant name](#mention) from [@variable:Company name](#mention). How can I help you today?'\n2. Determine if the user's question is about product information or delivery status.\n3. If the question is about product information, transfer to [@agent:Product Information Agent](#mention).\n4. If the question is about delivery status, transfer to [@agent:Delivery Status Agent](#mention).\n5. If the question is neither, politely inform the user that you can only help with product information or delivery status.\n6. Return the final answer to the user.\n\n---\n## π― Scope:\nβ In Scope:\n- Routing product information questions.\n- Routing delivery status questions.\n\nβ Out of Scope:\n- Directly answering product or delivery questions.\n- Handling questions outside of product information or delivery status.\n\n---\n## π Guidelines:\nβοΈ Dos:\n- Clearly identify the type of user query.\n- Route to the correct agent.\n\nπ« Don'ts:\n- Do not attempt to answer questions directly.\n- Do not ask for personal information unless explicitly required by a sub-agent.\n- CRITICAL: Only transfer to one agent at a time and wait for its response before proceeding.\n\n",
+ "model": "google/gemini-2.5-flash",
+ "toggleAble": true,
+ "ragReturnType": "chunks",
+ "ragK": 3,
+ "outputVisibility": "user_facing",
+ "controlType": "retain"
+ }
+}
+\`\`\`
+
+#### b. Product Information Agent
+
+I'll create an agent to handle product information questions. You can later connect your data sources to provide the knowledge:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: agent
+// name: Product Information Agent
+ {
+ "change_description": "Created the product information agent.",
+ "config_changes": {
+ "name": "Product Information Agent",
+ "type": "conversation",
+ "description": "Answers product information questions using RAG data sources.",
+ "disabled": false,
+ "instructions": "## π§βπΌ Role:\nYou are an internal agent that answers product information questions using RAG data sources. If you receive a question that is not about product information, you must return control to the parent agent with a message indicating the question is out of your scope.\n\n---\n## βοΈ Steps to Follow:\n1. Receive the product information question from the parent agent.\n2. Determine if the question is about product information.\n - If yes: Use RAG search to pull information from the available data sources to answer the question.\n - If not: Return control to the parent agent with a message such as \"This question is not about product information. Returning to parent agent.\"\n3. Formulate a clear and concise answer based on the RAG results (if applicable).\n4. If the question is out of scope, call [@agent:Product & Delivery Assistant](#mention).\n\n---\n## π― Scope:\nβ In Scope:\n- Answering product information questions using RAG.\n- Returning control to parent if the question is out of scope.\n\nβ Out of Scope:\n- Handling delivery status questions.\n- Interacting directly with the user.\n\n---\n## π Guidelines:\nβοΈ Dos:\n- Use RAG search to find relevant information for product questions.\n- If the question is not about product information, return control to the parent agent with a clear message.\n\nπ« Don'ts:\n- Do not answer questions outside of product information.\n- Do not interact with the user directly.\n- Do not ignore out-of-scope questions; always return to parent.\n",
+ "model": "google/gemini-2.5-flash",
+ "locked": false,
+ "toggleAble": true,
+ "ragReturnType": "chunks",
+ "ragK": 3,
+ "outputVisibility": "user_facing",
+ "controlType": "retain",
+ "maxCallsPerParentAgent": 3
+ }
+}
+\`\`\`
+
+#### c. Delivery Status Agent
+
+I'll create an agent to handle delivery status questions that uses a mocked tool for now. You can later connect your MCP tool to it.
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: agent
+// name: Delivery Status Agent
+ {
+ "change_description": "Created the delivery status agent.",
+ "config_changes": {
+ "name": "Delivery Status Agent",
+ "type": "conversation",
+ "description": "Answers delivery status questions using the Mock Delivery Status tool.",
+ "disabled": false,
+ "instructions": "## π§βπΌ Role:\nYou are an internal agent that answers delivery status questions. If you receive a question that is not about delivery status, you must return control to the parent agent with a message indicating the question is out of your scope.\n\n---\n## βοΈ Steps to Follow:\n1. Receive the delivery status question from the parent agent.\n2. Determine if the question is about delivery status.\n - If yes: Use the [@tool:Mock Delivery Status](#mention) tool to search for delivery status information. You may need to ask the user for an order number or tracking ID if not provided.\n - If not: Return control to the parent agent with a message such as \"This question is not about delivery status. Returning to parent agent.\"\n3. Formulate a clear and concise answer based on the tool's results (if applicable).\n4. If the question is out of scope, call [@agent:Product & Delivery Assistant](#mention).\n---\n## π― Scope:\nβ In Scope:\n- Answering delivery status questions using the Mock Delivery Status tool.\n- Returning control to parent if the question is out of scope.\n\nβ Out of Scope:\n- Handling product information questions.\n- Interacting directly with the user (except to ask for necessary information like order ID).\n\n---\n## π Guidelines:\nβοΈ Dos:\n- Use the Mock Delivery Status tool to find delivery information for delivery status questions.\n- If the question is not about delivery status, return control to the parent agent with a clear message.\n- Ask for order details if needed.\n\nπ« Don'ts:\n- Do not answer questions outside of delivery status.\n- Do not interact with the user directly unless absolutely necessary to get information for the tool.\n- Do not ignore out-of-scope questions; always return to parent.\n",
+ "model": "gpt-4.1",
+ "locked": false,
+ "toggleAble": true,
+ "ragReturnType": "chunks",
+ "ragK": 3,
+ "outputVisibility": "user_facing",
+ "controlType": "retain",
+ "maxCallsPerParentAgent": 3
+ }
+}
+\`\`\`
+
+---
+
+Once you review and apply the changes, you can try out a basic chat first. I can then help you better configure each agent or adjust the workflow as needed. Would you like to customize any step or add more details?
+
+---
+
+### Example 9: Image Generator Agent
+
+**User Request:**
+Build an agent to generate images for me.
+
+**Copilot Response:**
+
+I'll create an agent that generates images for you. This agent will interact directly with you, ask for your image description, and use the built-in generate_image tool to create images based on your requests.
+
+Here's what I'll do:
+
+Create a user-facing agent called "Image Generator" that:
+- Asks you for a description of the image you want.
+- Uses the generate_image tool to generate the image.
+- Returns the generated image to you.
+I'll also set clear instructions and examples for the agent. (No need to add a new tool, as generate_image is built-in.)
+
+---
+
+### 1. Create Agent
+
+I'll create the "Image Generator" agent:
+
+\`\`\`copilot_change
+// action: create_new
+// config_type: agent
+// name: Image Generator
+ {
+ "change_description": "Created the image generator agent.",
+ "config_changes": {
+ "name": "Image Generator",
+ "type": "conversation",
+ "description": "Generates images for users based on their descriptions.",
+ "disabled": false,
+ "instructions": "## π§βπΌ Role:\nYou are a user-facing agent that generates images for users based on their descriptions.\n\n---\n## βοΈ Steps to Follow:\n1. Ask the user for a description of the image they want.\n2. Use the [@tool:Generate Image](#mention) tool to generate an image based on the user's description.\n3. Return the generated image to the user.\n4. If the request is not about generating an image, politely inform the user that you can only help with image generation.\n\n---\n## π― Scope:\nβ In Scope:\n- Generating images based on user descriptions.\n\nβ Out of Scope:\n- Handling any other questions or tasks.\n\n---\n## π Guidelines:\nβοΈ Dos:\n- Use the Generate Image tool to generate an image based on the user's description.\n- Ask clarifying questions if the description is unclear.\n\nπ« Don'ts:\n- Do not answer questions outside of image generation.\n- Do not return an image without calling the tool.\n",
+ "model": "gpt-4.1",
+ "locked": false,
+ "toggleAble": true,
+ "ragReturnType": "chunks",
+ "ragK": 3,
+ "outputVisibility": "user_facing",
+ "controlType": "retain",
+ "maxCallsPerParentAgent": 3
+ }
+}
+\`\`\`
+
+---
+
+Once you review and apply the changes, you can try chatting with the "Image Generator" agent to generate images from your descriptions. Would you like to set this new agent as your start agent, or keep your current one?
+
`;
\ No newline at end of file
diff --git a/apps/rowboat/src/application/lib/utils/is-valid-cron-expression.ts b/apps/rowboat/src/application/lib/utils/is-valid-cron-expression.ts
new file mode 100644
index 000000000..affe62be6
--- /dev/null
+++ b/apps/rowboat/src/application/lib/utils/is-valid-cron-expression.ts
@@ -0,0 +1,70 @@
+const RANGE_SEPARATOR = "-";
+const STEP_SEPARATOR = "/";
+
+export function isValidCronExpression(cron: string): boolean {
+ const parts = cron.trim().split(/\s+/);
+ if (parts.length !== 5) {
+ return false;
+ }
+
+ const [minute, hour, day, month, dayOfWeek] = parts;
+
+ const validatePart = (part: string, max: number): boolean => {
+ if (part === "*") {
+ return true;
+ }
+
+ if (part.includes(STEP_SEPARATOR)) {
+ const [range, step] = part.split(STEP_SEPARATOR);
+ if (!step) {
+ return false;
+ }
+
+ const stepValue = Number(step);
+ if (!Number.isInteger(stepValue) || stepValue <= 0) {
+ return false;
+ }
+
+ if (range === "*") {
+ return stepValue <= max;
+ }
+
+ return validatePart(range, max);
+ }
+
+ if (part.includes(RANGE_SEPARATOR)) {
+ const [start, end] = part.split(RANGE_SEPARATOR);
+ if (start === undefined || end === undefined) {
+ return false;
+ }
+
+ const startValue = Number(start);
+ const endValue = Number(end);
+
+ if (!Number.isInteger(startValue) || !Number.isInteger(endValue)) {
+ return false;
+ }
+
+ if (startValue > endValue) {
+ return false;
+ }
+
+ return startValue >= 0 && endValue <= max;
+ }
+
+ const value = Number(part);
+ if (!Number.isInteger(value)) {
+ return false;
+ }
+
+ return value >= 0 && value <= max;
+ };
+
+ return (
+ validatePart(minute, 59) &&
+ validatePart(hour, 23) &&
+ validatePart(day, 31) &&
+ validatePart(month, 12) &&
+ validatePart(dayOfWeek, 7)
+ );
+}
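As a quick illustration of the grammar this validator accepts (five space-separated fields; `*`, single values, `start-end` ranges, and `/step` suffixes), here is a condensed, self-contained sketch with example inputs. The function name `isValidCron` and the uniform step-bound check are simplifications for illustration, not the shipped code:

```typescript
// Condensed sketch of a five-field cron validator (minute hour day month dayOfWeek).
function isValidCron(cron: string): boolean {
  const parts = cron.trim().split(/\s+/);
  if (parts.length !== 5) return false;
  const maxima = [59, 23, 31, 12, 7]; // upper bound per field
  return parts.every((part, i) => {
    const max = maxima[i];
    // A bare numeric value must be an integer within [0, max].
    const inRange = (s: string): boolean => {
      const n = Number(s);
      return Number.isInteger(n) && n >= 0 && n <= max;
    };
    if (part === "*") return true;
    if (part.includes("/")) {
      const [range, step] = part.split("/");
      const stepValue = Number(step);
      if (!Number.isInteger(stepValue) || stepValue <= 0 || stepValue > max) return false;
      return range === "*" || inRange(range);
    }
    if (part.includes("-")) {
      const [start, end] = part.split("-");
      return inRange(start) && inRange(end) && Number(start) <= Number(end);
    }
    return inRange(part);
  });
}

console.log(isValidCron("*/15 9-17 * * 1-5")); // every 15 min, business hours, weekdays: true
console.log(isValidCron("60 * * * *"));        // minute out of range: false
```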
diff --git a/apps/rowboat/src/application/repositories/recurring-job-rules.repository.interface.ts b/apps/rowboat/src/application/repositories/recurring-job-rules.repository.interface.ts
index 9b491d114..a24dce681 100644
--- a/apps/rowboat/src/application/repositories/recurring-job-rules.repository.interface.ts
+++ b/apps/rowboat/src/application/repositories/recurring-job-rules.repository.interface.ts
@@ -17,6 +17,15 @@ export const ListedRecurringRuleItem = RecurringJobRule.omit({
input: true,
});
+/**
+ * Schema for updating a recurring job rule.
+ */
+export const UpdateRecurringRuleSchema = RecurringJobRule
+ .pick({
+ input: true,
+ cron: true,
+ });
+
/**
* Repository interface for managing recurring job rules in the system.
*
@@ -82,6 +91,16 @@ export interface IRecurringJobRulesRepository {
*/
toggle(id: string, disabled: boolean): Promise<z.infer<typeof RecurringJobRule>>;
+ /**
+ * Updates a recurring job rule with new input and cron expression.
+ *
+ * @param id - The unique identifier of the recurring job rule to update
+ * @param data - The update data containing input messages and cron expression
+ * @returns Promise resolving to the updated recurring job rule
+ * @throws {NotFoundError} if the recurring job rule doesn't exist
+ */
+ update(id: string, data: z.infer<typeof UpdateRecurringRuleSchema>): Promise<z.infer<typeof RecurringJobRule>>;
+
/**
* Deletes a recurring job rule by its unique identifier.
*
diff --git a/apps/rowboat/src/application/repositories/scheduled-job-rules.repository.interface.ts b/apps/rowboat/src/application/repositories/scheduled-job-rules.repository.interface.ts
index 81391dc39..de3c8692e 100644
--- a/apps/rowboat/src/application/repositories/scheduled-job-rules.repository.interface.ts
+++ b/apps/rowboat/src/application/repositories/scheduled-job-rules.repository.interface.ts
@@ -24,6 +24,17 @@ export const UpdateJobSchema = ScheduledJobRule.pick({
output: true,
});
+/**
+ * Schema for updating a scheduled job rule's next run configuration.
+ */
+export const UpdateScheduledRuleSchema = ScheduledJobRule
+ .pick({
+ input: true,
+ })
+ .extend({
+ scheduledTime: z.string().datetime(),
+ });
+
/**
* Repository interface for managing scheduled job rules in the system.
*
@@ -69,6 +80,16 @@ export interface IScheduledJobRulesRepository {
*/
update(id: string, data: z.infer<typeof UpdateJobSchema>): Promise<z.infer<typeof ScheduledJobRule>>;
+ /**
+ * Updates a scheduled job rule with new input and scheduled time.
+ *
+ * @param id - The unique identifier of the scheduled job rule to update
+ * @param data - The update data containing input messages and scheduled time
+ * @returns Promise resolving to the updated scheduled job rule
+ * @throws {NotFoundError} if the scheduled job rule doesn't exist
+ */
+ updateRule(id: string, data: z.infer<typeof UpdateScheduledRuleSchema>): Promise<z.infer<typeof ScheduledJobRule>>;
+
/**
* Releases a scheduled job rule after it has been executed.
*
@@ -103,4 +124,4 @@ export interface IScheduledJobRulesRepository {
* @returns Promise resolving to void
*/
deleteByProjectId(projectId: string): Promise<void>;
-}
\ No newline at end of file
+}
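The new `UpdateScheduledRuleSchema` narrows updates to `input` plus an ISO-8601 `scheduledTime`. A zod-free sketch of the equivalent runtime check may clarify the intent; the name `parseUpdateScheduledRule` and the assumed `input.messages` shape are illustrative, not part of the diff:

```typescript
// Zod-free sketch of what UpdateScheduledRuleSchema enforces (illustrative):
// only `input` and a valid datetime string `scheduledTime` may be supplied on update.
type UpdateScheduledRule = {
  input: { messages: unknown[] }; // assumed shape; the real `input` comes from ScheduledJobRule
  scheduledTime: string;          // must parse as an ISO-8601 datetime
};

function parseUpdateScheduledRule(raw: unknown): UpdateScheduledRule {
  const obj = raw as Partial<UpdateScheduledRule>;
  if (!obj || typeof obj !== "object") throw new Error("expected an object");
  if (!obj.input || !Array.isArray(obj.input.messages)) throw new Error("invalid input");
  if (typeof obj.scheduledTime !== "string" || Number.isNaN(Date.parse(obj.scheduledTime))) {
    throw new Error("scheduledTime must be an ISO-8601 datetime");
  }
  return { input: obj.input, scheduledTime: obj.scheduledTime };
}

const ok = parseUpdateScheduledRule({
  input: { messages: [] },
  scheduledTime: "2025-01-01T09:00:00Z",
});
console.log(ok.scheduledTime); // "2025-01-01T09:00:00Z"
```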
diff --git a/apps/rowboat/src/application/services/temp-binary-cache.ts b/apps/rowboat/src/application/services/temp-binary-cache.ts
new file mode 100644
index 000000000..25d34c53f
--- /dev/null
+++ b/apps/rowboat/src/application/services/temp-binary-cache.ts
@@ -0,0 +1,47 @@
+import crypto from 'crypto';
+
+type Entry = {
+ buf: Buffer;
+ mimeType: string;
+ expiresAt: number; // epoch ms
+};
+
+class TempBinaryCache {
+ private store = new Map<string, Entry>();
+ private cleanupInterval: NodeJS.Timeout | null = null;
+
+ constructor() {
+ this.startCleanup();
+ }
+
+ private startCleanup() {
+ if (this.cleanupInterval) return;
+ this.cleanupInterval = setInterval(() => {
+ const now = Date.now();
+ for (const [id, entry] of this.store.entries()) {
+ if (entry.expiresAt <= now) this.store.delete(id);
+ }
+ }, 60_000); // every minute
+ if (this.cleanupInterval.unref) this.cleanupInterval.unref();
+ }
+
+ put(buf: Buffer, mimeType: string, ttlMs: number = 10 * 60 * 1000): string {
+ const id = crypto.randomUUID();
+ const expiresAt = Date.now() + ttlMs;
+ this.store.set(id, { buf, mimeType, expiresAt });
+ return id;
+ }
+
+ get(id: string): { buf: Buffer; mimeType: string } | undefined {
+ const entry = this.store.get(id);
+ if (!entry) return undefined;
+ if (entry.expiresAt <= Date.now()) {
+ this.store.delete(id);
+ return undefined;
+ }
+ return { buf: entry.buf, mimeType: entry.mimeType };
+ }
+}
+
+export const tempBinaryCache = new TempBinaryCache();
+
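The TTL semantics of `TempBinaryCache` (expiry stamped on `put`, lazy eviction on `get`, plus the background sweep) can be sketched in a few lines. `SketchCache` is a minimal stand-in that omits the periodic cleanup interval; it is not the shipped class:

```typescript
import crypto from "crypto";

// Minimal sketch of the TTL behavior of TempBinaryCache (illustrative).
type Entry = { buf: Buffer; mimeType: string; expiresAt: number };

class SketchCache {
  private store = new Map<string, Entry>();

  put(buf: Buffer, mimeType: string, ttlMs = 10 * 60 * 1000): string {
    const id = crypto.randomUUID();
    this.store.set(id, { buf, mimeType, expiresAt: Date.now() + ttlMs });
    return id;
  }

  get(id: string): { buf: Buffer; mimeType: string } | undefined {
    const entry = this.store.get(id);
    if (!entry) return undefined;
    if (entry.expiresAt <= Date.now()) {
      this.store.delete(id); // lazy eviction on read
      return undefined;
    }
    return { buf: entry.buf, mimeType: entry.mimeType };
  }
}

const cache = new SketchCache();
const id = cache.put(Buffer.from("png-bytes"), "image/png", 1000);
console.log(cache.get(id)?.mimeType); // "image/png" while the entry is live
console.log(cache.get("missing-id")); // undefined
```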
diff --git a/apps/rowboat/src/application/use-cases/copilot/create-copilot-cached-turn.use-case.ts b/apps/rowboat/src/application/use-cases/copilot/create-copilot-cached-turn.use-case.ts
index 20f70a927..bb8b4cd2a 100644
--- a/apps/rowboat/src/application/use-cases/copilot/create-copilot-cached-turn.use-case.ts
+++ b/apps/rowboat/src/application/use-cases/copilot/create-copilot-cached-turn.use-case.ts
@@ -3,7 +3,7 @@ import { nanoid } from 'nanoid';
import { ICacheService } from '@/src/application/services/cache.service.interface';
import { IUsageQuotaPolicy } from '@/src/application/policies/usage-quota.policy.interface';
import { IProjectActionAuthorizationPolicy } from '@/src/application/policies/project-action-authorization.policy';
-import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot } from '@/src/entities/models/copilot';
+import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot, TriggerSchemaForCopilot } from '@/src/entities/models/copilot';
import { Workflow } from '@/app/lib/types/workflow_types';
import { USE_BILLING } from "@/app/lib/feature_flags";
import { authorize, getCustomerIdForProject } from "@/app/lib/billing";
@@ -19,6 +19,7 @@ const inputSchema = z.object({
workflow: Workflow,
context: CopilotChatContext.nullable(),
dataSources: z.array(DataSourceSchemaForCopilot).optional(),
+ triggers: z.array(TriggerSchemaForCopilot).optional(),
}),
});
diff --git a/apps/rowboat/src/application/use-cases/copilot/run-copilot-cached-turn.use-case.ts b/apps/rowboat/src/application/use-cases/copilot/run-copilot-cached-turn.use-case.ts
index 7afb0a29e..334c8b6de 100644
--- a/apps/rowboat/src/application/use-cases/copilot/run-copilot-cached-turn.use-case.ts
+++ b/apps/rowboat/src/application/use-cases/copilot/run-copilot-cached-turn.use-case.ts
@@ -90,6 +90,7 @@ export class RunCopilotCachedTurnUseCase implements IRunCopilotCachedTurnUseCase
cachedTurn.messages,
cachedTurn.workflow,
cachedTurn.dataSources || [],
+ cachedTurn.triggers || [],
)) {
yield event;
}
diff --git a/apps/rowboat/src/application/use-cases/projects/create-project.use-case.ts b/apps/rowboat/src/application/use-cases/projects/create-project.use-case.ts
index 8a5d4e104..aa14ab29b 100644
--- a/apps/rowboat/src/application/use-cases/projects/create-project.use-case.ts
+++ b/apps/rowboat/src/application/use-cases/projects/create-project.use-case.ts
@@ -84,6 +84,7 @@ export class CreateProjectUseCase implements ICreateProjectUseCase {
agents: template.agents,
prompts: template.prompts,
tools: template.tools,
+ pipelines: template.pipelines || [],
startAgent: template.startAgent,
}
} else {
@@ -94,6 +95,8 @@ export class CreateProjectUseCase implements ICreateProjectUseCase {
}
}
+ // Do not auto-attach image generation tool; it is available as a default library tool in the editor/runtime
+
// create project secret
const secret = crypto.randomBytes(32).toString('hex');
@@ -117,4 +120,4 @@ export class CreateProjectUseCase implements ICreateProjectUseCase {
return project;
}
-}
\ No newline at end of file
+}
diff --git a/apps/rowboat/src/application/use-cases/projects/sync-connected-account.use-case.ts b/apps/rowboat/src/application/use-cases/projects/sync-connected-account.use-case.ts
index 089585c95..64f393fa7 100644
--- a/apps/rowboat/src/application/use-cases/projects/sync-connected-account.use-case.ts
+++ b/apps/rowboat/src/application/use-cases/projects/sync-connected-account.use-case.ts
@@ -51,7 +51,23 @@ export class SyncConnectedAccountUseCase implements ISyncConnectedAccountUseCase
}
const account = project.composioConnectedAccounts?.[toolkitSlug];
if (!account || account.id !== connectedAccountId) {
- throw new Error(`Connected account ${connectedAccountId} not found in project ${projectId}`);
+ // Log detailed mismatch context to aid debugging
+ try {
+ // Avoid crashing on logging itself
+ // Include both expected and stored IDs, toolkit slug, and available toolkits
+ // so we can quickly spot wrong slug or race conditions.
+ // Note: This is server-side logging only.
+ console.error('[Composio] Connected account mismatch', {
+ projectId,
+ toolkitSlug,
+ expectedConnectedAccountId: connectedAccountId,
+ storedAccountId: account?.id ?? null,
+ storedStatus: account?.status ?? null,
+ availableToolkits: Object.keys(project.composioConnectedAccounts || {}),
+ });
+ } catch {}
+
+ throw new Error(`Connected account ${connectedAccountId} not found in project ${projectId} (toolkit: ${toolkitSlug})`);
}
if (account.status === 'ACTIVE') {
@@ -86,4 +102,3 @@ export class SyncConnectedAccountUseCase implements ISyncConnectedAccountUseCase
}
}
-
diff --git a/apps/rowboat/src/application/use-cases/recurring-job-rules/create-recurring-job-rule.use-case.ts b/apps/rowboat/src/application/use-cases/recurring-job-rules/create-recurring-job-rule.use-case.ts
index 0c2c2c0cf..51ad32084 100644
--- a/apps/rowboat/src/application/use-cases/recurring-job-rules/create-recurring-job-rule.use-case.ts
+++ b/apps/rowboat/src/application/use-cases/recurring-job-rules/create-recurring-job-rule.use-case.ts
@@ -5,6 +5,7 @@ import { IProjectActionAuthorizationPolicy } from '../../policies/project-action
import { IRecurringJobRulesRepository } from '../../repositories/recurring-job-rules.repository.interface';
import { RecurringJobRule } from '@/src/entities/models/recurring-job-rule';
import { Message } from '@/app/lib/types/types';
+import { isValidCronExpression } from '@/src/application/lib/utils/is-valid-cron-expression';
const inputSchema = z.object({
caller: z.enum(["user", "api"]),
@@ -42,7 +43,7 @@ export class CreateRecurringJobRuleUseCase implements ICreateRecurringJobRuleUse
async execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof RecurringJobRule>> {
// Validate cron expression
- if (!this.isValidCronExpression(request.cron)) {
+ if (!isValidCronExpression(request.cron)) {
throw new BadRequestError('Invalid cron expression. Expected format: minute hour day month dayOfWeek');
}
@@ -66,31 +67,4 @@ export class CreateRecurringJobRuleUseCase implements ICreateRecurringJobRuleUse
return rule;
}
-
- private isValidCronExpression(cron: string): boolean {
- const parts = cron.split(' ');
- if (parts.length !== 5) {
- return false;
- }
-
- // Basic validation - in production you'd want more sophisticated validation
- const [minute, hour, day, month, dayOfWeek] = parts;
-
- // Check if parts are valid
- const isValidPart = (part: string) => {
- if (part === '*') return true;
- if (part.includes('/')) {
- const [range, step] = part.split('/');
- if (range === '*' || (parseInt(step) > 0 && parseInt(step) <= 59)) return true;
- return false;
- }
- if (part.includes('-')) {
- const [start, end] = part.split('-');
- return !isNaN(parseInt(start)) && !isNaN(parseInt(end)) && parseInt(start) <= parseInt(end);
- }
- return !isNaN(parseInt(part));
- };
-
- return isValidPart(minute) && isValidPart(hour) && isValidPart(day) && isValidPart(month) && isValidPart(dayOfWeek);
- }
}
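
The extracted `isValidCronExpression` helper (its implementation file is not shown in this diff) validates five space-separated fields — minute, hour, day, month, dayOfWeek — accepting `*`, numeric values, ranges, and step expressions, per the deleted inline version above. A self-contained sketch approximating that logic:

```typescript
// Sketch of a five-field cron validator (minute hour day month dayOfWeek),
// approximating the extracted isValidCronExpression helper. Field-range
// bounds (e.g. minute <= 59) are deliberately omitted, as in the original.
function isValidCronExpression(cron: string): boolean {
  const parts = cron.trim().split(/\s+/);
  if (parts.length !== 5) return false;

  const isValidPart = (part: string): boolean => {
    if (part === '*') return true;
    if (part.includes('/')) {
      // "*/5" or "10-20/2": range before the slash, positive integer step after
      const [range, step] = part.split('/');
      return (range === '*' || isValidPart(range)) && /^\d+$/.test(step) && parseInt(step, 10) > 0;
    }
    if (part.includes('-')) {
      const [start, end] = part.split('-');
      return /^\d+$/.test(start) && /^\d+$/.test(end) && parseInt(start, 10) <= parseInt(end, 10);
    }
    return /^\d+$/.test(part);
  };

  return parts.every(isValidPart);
}
```

Centralizing this in one utility keeps the create and update use cases from drifting apart, which was the point of the refactor.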
diff --git a/apps/rowboat/src/application/use-cases/recurring-job-rules/update-recurring-job-rule.use-case.ts b/apps/rowboat/src/application/use-cases/recurring-job-rules/update-recurring-job-rule.use-case.ts
new file mode 100644
index 000000000..549eff631
--- /dev/null
+++ b/apps/rowboat/src/application/use-cases/recurring-job-rules/update-recurring-job-rule.use-case.ts
@@ -0,0 +1,69 @@
+import { BadRequestError, NotFoundError } from '@/src/entities/errors/common';
+import { z } from "zod";
+import { IUsageQuotaPolicy } from '../../policies/usage-quota.policy.interface';
+import { IProjectActionAuthorizationPolicy } from '../../policies/project-action-authorization.policy';
+import { IRecurringJobRulesRepository } from '../../repositories/recurring-job-rules.repository.interface';
+import { RecurringJobRule } from '@/src/entities/models/recurring-job-rule';
+import { Message } from '@/app/lib/types/types';
+import { isValidCronExpression } from '@/src/application/lib/utils/is-valid-cron-expression';
+
+const inputSchema = z.object({
+ caller: z.enum(["user", "api"]),
+ userId: z.string().optional(),
+ apiKey: z.string().optional(),
+ projectId: z.string(),
+ ruleId: z.string(),
+ input: z.object({
+ messages: z.array(Message),
+ }),
+ cron: z.string(),
+});
+
+export interface IUpdateRecurringJobRuleUseCase {
+ execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof RecurringJobRule>>;
+}
+
+export class UpdateRecurringJobRuleUseCase implements IUpdateRecurringJobRuleUseCase {
+ private readonly recurringJobRulesRepository: IRecurringJobRulesRepository;
+ private readonly usageQuotaPolicy: IUsageQuotaPolicy;
+ private readonly projectActionAuthorizationPolicy: IProjectActionAuthorizationPolicy;
+
+ constructor({
+ recurringJobRulesRepository,
+ usageQuotaPolicy,
+ projectActionAuthorizationPolicy,
+ }: {
+ recurringJobRulesRepository: IRecurringJobRulesRepository,
+ usageQuotaPolicy: IUsageQuotaPolicy,
+ projectActionAuthorizationPolicy: IProjectActionAuthorizationPolicy,
+ }) {
+ this.recurringJobRulesRepository = recurringJobRulesRepository;
+ this.usageQuotaPolicy = usageQuotaPolicy;
+ this.projectActionAuthorizationPolicy = projectActionAuthorizationPolicy;
+ }
+
+ async execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof RecurringJobRule>> {
+ if (!isValidCronExpression(request.cron)) {
+ throw new BadRequestError('Invalid cron expression. Expected format: minute hour day month dayOfWeek');
+ }
+
+ await this.projectActionAuthorizationPolicy.authorize({
+ caller: request.caller,
+ userId: request.userId,
+ apiKey: request.apiKey,
+ projectId: request.projectId,
+ });
+
+ await this.usageQuotaPolicy.assertAndConsumeProjectAction(request.projectId);
+
+ const rule = await this.recurringJobRulesRepository.fetch(request.ruleId);
+ if (!rule || rule.projectId !== request.projectId) {
+ throw new NotFoundError('Recurring job rule not found');
+ }
+
+ return await this.recurringJobRulesRepository.update(request.ruleId, {
+ input: request.input,
+ cron: request.cron,
+ });
+ }
+}
diff --git a/apps/rowboat/src/application/use-cases/scheduled-job-rules/update-scheduled-job-rule.use-case.ts b/apps/rowboat/src/application/use-cases/scheduled-job-rules/update-scheduled-job-rule.use-case.ts
new file mode 100644
index 000000000..6b8a54ad6
--- /dev/null
+++ b/apps/rowboat/src/application/use-cases/scheduled-job-rules/update-scheduled-job-rule.use-case.ts
@@ -0,0 +1,64 @@
+import { NotFoundError } from '@/src/entities/errors/common';
+import { z } from "zod";
+import { IUsageQuotaPolicy } from '../../policies/usage-quota.policy.interface';
+import { IProjectActionAuthorizationPolicy } from '../../policies/project-action-authorization.policy';
+import { IScheduledJobRulesRepository } from '../../repositories/scheduled-job-rules.repository.interface';
+import { ScheduledJobRule } from '@/src/entities/models/scheduled-job-rule';
+import { Message } from '@/app/lib/types/types';
+
+const inputSchema = z.object({
+ caller: z.enum(["user", "api"]),
+ userId: z.string().optional(),
+ apiKey: z.string().optional(),
+ projectId: z.string(),
+ ruleId: z.string(),
+ input: z.object({
+ messages: z.array(Message),
+ }),
+ scheduledTime: z.string().datetime(),
+});
+
+export interface IUpdateScheduledJobRuleUseCase {
+ execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof ScheduledJobRule>>;
+}
+
+export class UpdateScheduledJobRuleUseCase implements IUpdateScheduledJobRuleUseCase {
+ private readonly scheduledJobRulesRepository: IScheduledJobRulesRepository;
+ private readonly usageQuotaPolicy: IUsageQuotaPolicy;
+ private readonly projectActionAuthorizationPolicy: IProjectActionAuthorizationPolicy;
+
+ constructor({
+ scheduledJobRulesRepository,
+ usageQuotaPolicy,
+ projectActionAuthorizationPolicy,
+ }: {
+ scheduledJobRulesRepository: IScheduledJobRulesRepository,
+ usageQuotaPolicy: IUsageQuotaPolicy,
+ projectActionAuthorizationPolicy: IProjectActionAuthorizationPolicy,
+ }) {
+ this.scheduledJobRulesRepository = scheduledJobRulesRepository;
+ this.usageQuotaPolicy = usageQuotaPolicy;
+ this.projectActionAuthorizationPolicy = projectActionAuthorizationPolicy;
+ }
+
+ async execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof ScheduledJobRule>> {
+ await this.projectActionAuthorizationPolicy.authorize({
+ caller: request.caller,
+ userId: request.userId,
+ apiKey: request.apiKey,
+ projectId: request.projectId,
+ });
+
+ await this.usageQuotaPolicy.assertAndConsumeProjectAction(request.projectId);
+
+ const rule = await this.scheduledJobRulesRepository.fetch(request.ruleId);
+ if (!rule || rule.projectId !== request.projectId) {
+ throw new NotFoundError('Scheduled job rule not found');
+ }
+
+ return await this.scheduledJobRulesRepository.updateRule(request.ruleId, {
+ input: request.input,
+ scheduledTime: request.scheduledTime,
+ });
+ }
+}
diff --git a/apps/rowboat/src/entities/models/assistant-template.ts b/apps/rowboat/src/entities/models/assistant-template.ts
new file mode 100644
index 000000000..bd528bb5f
--- /dev/null
+++ b/apps/rowboat/src/entities/models/assistant-template.ts
@@ -0,0 +1,42 @@
+import { z } from "zod";
+import { Workflow } from "../../../app/lib/types/workflow_types";
+
+export const AssistantTemplate = z.object({
+ id: z.string(),
+ name: z.string(),
+ description: z.string(),
+ category: z.string(),
+ authorId: z.string(),
+ authorName: z.string(),
+ authorEmail: z.string().optional(),
+ isAnonymous: z.boolean(),
+ workflow: Workflow,
+ tags: z.array(z.string()),
+ publishedAt: z.string().datetime(),
+ lastUpdatedAt: z.string().datetime(),
+ downloadCount: z.number().default(0),
+ likeCount: z.number().default(0),
+ featured: z.boolean().default(false),
+ isPublic: z.boolean().default(true),
+ // Social features
+ likes: z.array(z.string()).default([]),
+ // Template-like metadata
+ copilotPrompt: z.string().optional(),
+ thumbnailUrl: z.string().optional(),
+ // New field to indicate source of template
+ source: z.enum(["library", "community"]),
+});
+
+export type AssistantTemplate = z.infer<typeof AssistantTemplate>;
+
+export const AssistantTemplateLike = z.object({
+ id: z.string(),
+ assistantId: z.string(),
+ userId: z.string(),
+ userEmail: z.string().optional(),
+ createdAt: z.string().datetime(),
+});
+
+export type AssistantTemplateLike = z.infer<typeof AssistantTemplateLike>;
+
+
diff --git a/apps/rowboat/src/entities/models/copilot.ts b/apps/rowboat/src/entities/models/copilot.ts
index eeff97423..972b59480 100644
--- a/apps/rowboat/src/entities/models/copilot.ts
+++ b/apps/rowboat/src/entities/models/copilot.ts
@@ -2,6 +2,9 @@ import { z } from "zod";
import { Workflow } from "@/app/lib/types/workflow_types";
import { Message } from "@/app/lib/types/types";
import { DataSource } from "@/src/entities/models/data-source";
+import { ScheduledJobRule } from "@/src/entities/models/scheduled-job-rule";
+import { RecurringJobRule } from "@/src/entities/models/recurring-job-rule";
+import { ComposioTriggerDeployment } from "@/src/entities/models/composio-trigger-deployment";
export const DataSourceSchemaForCopilot = DataSource.pick({
id: true,
@@ -10,6 +13,43 @@ export const DataSourceSchemaForCopilot = DataSource.pick({
data: true,
});
+export const ScheduledJobRuleSchemaForCopilot = ScheduledJobRule.pick({
+ id: true,
+ nextRunAt: true,
+ status: true,
+ input: true,
+}).extend({
+ type: z.literal('one_time'),
+ name: z.string(),
+});
+
+export const RecurringJobRuleSchemaForCopilot = RecurringJobRule.pick({
+ id: true,
+ cron: true,
+ nextRunAt: true,
+ disabled: true,
+ input: true,
+}).extend({
+ type: z.literal('recurring'),
+ name: z.string(),
+});
+
+export const ComposioTriggerDeploymentSchemaForCopilot = ComposioTriggerDeployment.pick({
+ id: true,
+ triggerTypeName: true,
+ toolkitSlug: true,
+ triggerTypeSlug: true,
+ triggerConfig: true,
+}).extend({
+ type: z.literal('external'),
+});
+
+export const TriggerSchemaForCopilot = z.union([
+ ScheduledJobRuleSchemaForCopilot,
+ RecurringJobRuleSchemaForCopilot,
+ ComposioTriggerDeploymentSchemaForCopilot,
+]);
+
export const CopilotUserMessage = z.object({
role: z.literal('user'),
content: z.string(),
@@ -21,7 +61,7 @@ export const CopilotAssistantMessageTextPart = z.object({
export const CopilotAssistantMessageActionPart = z.object({
type: z.literal("action"),
content: z.object({
- config_type: z.enum(['tool', 'agent', 'prompt', 'pipeline', 'start_agent']),
+ config_type: z.enum(['tool', 'agent', 'prompt', 'pipeline', 'start_agent', 'one_time_trigger', 'recurring_trigger', 'external_trigger']),
action: z.enum(['create_new', 'edit', 'delete']),
name: z.string(),
change_description: z.string(),
@@ -60,6 +100,7 @@ export const CopilotAPIRequest = z.object({
workflow: Workflow,
context: CopilotChatContext.nullable(),
dataSources: z.array(DataSourceSchemaForCopilot).optional(),
+ triggers: z.array(TriggerSchemaForCopilot).optional(),
});
export const CopilotAPIResponse = z.union([
z.object({
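
The trigger union added above discriminates on a literal `type` field (`'one_time' | 'recurring' | 'external'`), which lets consumers narrow each variant safely. A plain-TypeScript sketch of that shape — the real schemas are zod objects; the structural types and the `describeTrigger` helper here are illustrative only:

```typescript
// Illustrative structural types mirroring the discriminant of
// TriggerSchemaForCopilot; field lists are abbreviated.
type ScheduledTrigger = { type: 'one_time'; name: string; nextRunAt: string };
type RecurringTrigger = { type: 'recurring'; name: string; cron: string };
type ExternalTrigger = { type: 'external'; triggerTypeSlug: string };
type Trigger = ScheduledTrigger | RecurringTrigger | ExternalTrigger;

// Switching on the literal `type` field narrows `t` in each branch.
function describeTrigger(t: Trigger): string {
  switch (t.type) {
    case 'one_time':
      return `${t.name} runs once at ${t.nextRunAt}`;
    case 'recurring':
      return `${t.name} runs on cron "${t.cron}"`;
    case 'external':
      return `external trigger ${t.triggerTypeSlug}`;
  }
}
```

The same discriminant also drives the new `config_type` values (`one_time_trigger`, `recurring_trigger`, `external_trigger`) in the copilot action schema.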
diff --git a/apps/rowboat/src/infrastructure/mongodb/ensure-indexes.ts b/apps/rowboat/src/infrastructure/mongodb/ensure-indexes.ts
index d4cdd6b50..6b99b8824 100644
--- a/apps/rowboat/src/infrastructure/mongodb/ensure-indexes.ts
+++ b/apps/rowboat/src/infrastructure/mongodb/ensure-indexes.ts
@@ -10,6 +10,8 @@ import { RECURRING_JOB_RULES_COLLECTION, RECURRING_JOB_RULES_INDEXES } from "../
import { SCHEDULED_JOB_RULES_COLLECTION, SCHEDULED_JOB_RULES_INDEXES } from "../repositories/mongodb.scheduled-job-rules.indexes";
import { COMPOSIO_TRIGGER_DEPLOYMENTS_COLLECTION, COMPOSIO_TRIGGER_DEPLOYMENTS_INDEXES } from "../repositories/mongodb.composio-trigger-deployments.indexes";
import { USERS_COLLECTION, USERS_INDEXES } from "../repositories/mongodb.users.indexes";
+import { SHARED_WORKFLOWS_COLLECTION, SHARED_WORKFLOWS_INDEXES } from "../repositories/mongodb.shared-workflows.indexes";
+import { COMMUNITY_ASSISTANTS_COLLECTION, COMMUNITY_ASSISTANTS_INDEXES, COMMUNITY_ASSISTANT_LIKES_COLLECTION, COMMUNITY_ASSISTANT_LIKES_INDEXES } from "../repositories/mongodb.community-assistants.indexes";
export async function ensureAllIndexes(database: Db): Promise<void> {
await database.collection(API_KEYS_COLLECTION).createIndexes(API_KEYS_INDEXES);
@@ -23,4 +25,7 @@ export async function ensureAllIndexes(database: Db): Promise {
await database.collection(SCHEDULED_JOB_RULES_COLLECTION).createIndexes(SCHEDULED_JOB_RULES_INDEXES);
await database.collection(COMPOSIO_TRIGGER_DEPLOYMENTS_COLLECTION).createIndexes(COMPOSIO_TRIGGER_DEPLOYMENTS_INDEXES);
await database.collection(USERS_COLLECTION).createIndexes(USERS_INDEXES);
-}
\ No newline at end of file
+ await database.collection(SHARED_WORKFLOWS_COLLECTION).createIndexes(SHARED_WORKFLOWS_INDEXES);
+ await database.collection(COMMUNITY_ASSISTANTS_COLLECTION).createIndexes(COMMUNITY_ASSISTANTS_INDEXES);
+ await database.collection(COMMUNITY_ASSISTANT_LIKES_COLLECTION).createIndexes(COMMUNITY_ASSISTANT_LIKES_INDEXES);
+}
diff --git a/apps/rowboat/src/infrastructure/repositories/mongodb.assistant-templates.repository.ts b/apps/rowboat/src/infrastructure/repositories/mongodb.assistant-templates.repository.ts
new file mode 100644
index 000000000..ad927f946
--- /dev/null
+++ b/apps/rowboat/src/infrastructure/repositories/mongodb.assistant-templates.repository.ts
@@ -0,0 +1,106 @@
+import { z } from "zod";
+import { Filter, ObjectId } from "mongodb";
+import { db } from "@/app/lib/mongodb";
+import { AssistantTemplate, AssistantTemplateLike } from "@/src/entities/models/assistant-template";
+import { PaginatedList } from "@/src/entities/common/paginated-list";
+
+const DocSchema = AssistantTemplate.omit({ id: true });
+const LikeDocSchema = AssistantTemplateLike.omit({ id: true });
+
+export class MongoDBAssistantTemplatesRepository {
+ private readonly collection = db.collection<z.infer<typeof DocSchema>>("assistant_templates");
+ private readonly likesCollection = db.collection<z.infer<typeof LikeDocSchema>>("assistant_template_likes");
+
+ async create(data: Omit<z.infer<typeof AssistantTemplate>, 'id' | 'publishedAt' | 'lastUpdatedAt'>): Promise<z.infer<typeof AssistantTemplate>> {
+ const now = new Date().toISOString();
+ const _id = new ObjectId();
+ const doc: z.infer<typeof DocSchema> = { ...data, publishedAt: now, lastUpdatedAt: now } as any;
+ await this.collection.insertOne({ ...doc, _id });
+ return { ...doc, id: _id.toString() } as any;
+ }
+
+ async fetch(id: string): Promise<z.infer<typeof AssistantTemplate> | null> {
+ const result = await this.collection.findOne({ _id: new ObjectId(id) });
+ if (!result) return null;
+ return { ...result, id: result._id.toString() } as any;
+ }
+
+ async list(filters: {
+ category?: string;
+ search?: string;
+ featured?: boolean;
+ isPublic?: boolean;
+ authorId?: string;
+ source?: 'library' | 'community';
+ } = {}, cursor?: string, limit: number = 20): Promise<z.infer<ReturnType<typeof PaginatedList<typeof AssistantTemplate>>>> {
+ const query: Filter<z.infer<typeof DocSchema>> = {};
+ if (filters.category) query.category = filters.category;
+ if (filters.featured !== undefined) query.featured = filters.featured;
+ if (filters.isPublic !== undefined) query.isPublic = filters.isPublic;
+ if (filters.authorId) query.authorId = filters.authorId;
+ if (filters.source) query.source = filters.source;
+ if (filters.search) {
+ query.$or = [
+ { name: { $regex: filters.search, $options: 'i' } },
+ { description: { $regex: filters.search, $options: 'i' } },
+ { tags: { $in: [new RegExp(filters.search, 'i')] } },
+ ];
+ }
+
+ const skip = cursor ? parseInt(cursor) : 0;
+ // Stable sort: newest first, with _id as tiebreaker to ensure deterministic pages
+ const results = await this.collection
+ .find(query)
+ .sort({ publishedAt: -1, _id: -1 } as any)
+ .skip(skip)
+ .limit(limit)
+ .toArray();
+ const items = results.map(r => ({ ...r, id: r._id.toString() }));
+ const nextCursor = results.length === limit ? (skip + limit).toString() : null;
+ return { items, nextCursor } as any;
+ }
+
+ async toggleLike(assistantId: string, userId: string, userEmail?: string): Promise<{ liked: boolean; likeCount: number }> {
+ const existingLike = await this.likesCollection.findOne({ assistantId, userId });
+ if (existingLike) {
+ await this.likesCollection.deleteOne({ _id: existingLike._id });
+ await this.collection.updateOne({ _id: new ObjectId(assistantId) }, { $inc: { likeCount: -1 }, $pull: { likes: userId } });
+ return { liked: false, likeCount: await this.getLikeCount(assistantId) };
+ } else {
+ const now = new Date().toISOString();
+ await this.likesCollection.insertOne({ assistantId, userId, userEmail, createdAt: now } as any);
+ await this.collection.updateOne({ _id: new ObjectId(assistantId) }, { $inc: { likeCount: 1 }, $addToSet: { likes: userId } });
+ return { liked: true, likeCount: await this.getLikeCount(assistantId) };
+ }
+ }
+
+ async getLikeCount(assistantId: string): Promise<number> {
+ const result = await this.collection.findOne({ _id: new ObjectId(assistantId) }, { projection: { likeCount: 1 } });
+ return result?.likeCount || 0;
+ }
+
+ async getLikedTemplates(templateIds: string[], userId: string): Promise<string[]> {
+ const likes = await this.likesCollection.find({
+ assistantId: { $in: templateIds },
+ userId
+ }).toArray();
+ return likes.map(like => like.assistantId);
+ }
+
+ async getCategories(): Promise<string[]> {
+ const categories = await this.collection.distinct('category', { isPublic: true });
+ return categories.filter(Boolean);
+ }
+
+ async deleteByIdAndAuthor(id: string, authorId: string): Promise<boolean> {
+ const result = await this.collection.deleteOne({ _id: new ObjectId(id), authorId } as any);
+ if (result.deletedCount && result.deletedCount > 0) {
+ // Clean up likes associated with this assistant template
+ await this.likesCollection.deleteMany({ assistantId: id } as any);
+ return true;
+ }
+ return false;
+ }
+}
+
+
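
The repository's `list` method uses offset-style cursor pagination: the cursor is simply the numeric skip offset, and `nextCursor` is emitted only when a full page came back. A minimal in-memory sketch of that contract:

```typescript
// In-memory sketch of the repository's offset-style cursor pagination:
// the cursor is the skip offset serialized as a string, and nextCursor
// is null once a short (final) page is returned.
function paginate<T>(items: T[], cursor?: string, limit = 20): { items: T[]; nextCursor: string | null } {
  const skip = cursor ? parseInt(cursor, 10) : 0;
  const page = items.slice(skip, skip + limit);
  const nextCursor = page.length === limit ? String(skip + limit) : null;
  return { items: page, nextCursor };
}
```

One edge worth noting: when the total count is an exact multiple of `limit`, the last full page still yields a `nextCursor` that resolves to an empty page; the MongoDB implementation shares this behavior, which is why it pairs the offset with a stable `publishedAt, _id` sort to keep pages deterministic.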
diff --git a/apps/rowboat/src/infrastructure/repositories/mongodb.community-assistants.indexes.ts b/apps/rowboat/src/infrastructure/repositories/mongodb.community-assistants.indexes.ts
new file mode 100644
index 000000000..2e1b1233f
--- /dev/null
+++ b/apps/rowboat/src/infrastructure/repositories/mongodb.community-assistants.indexes.ts
@@ -0,0 +1,22 @@
+import { IndexDescription } from "mongodb";
+
+export const COMMUNITY_ASSISTANTS_COLLECTION = "community_assistants";
+export const COMMUNITY_ASSISTANT_LIKES_COLLECTION = "community_assistant_likes";
+
+export const COMMUNITY_ASSISTANTS_INDEXES: IndexDescription[] = [
+ { key: { category: 1, publishedAt: -1 }, name: "category_publishedAt" },
+ { key: { tags: 1 }, name: "tags" },
+ { key: { authorId: 1 }, name: "authorId" },
+ { key: { isPublic: 1, featured: 1, publishedAt: -1 }, name: "isPublic_featured_publishedAt" },
+ { key: { name: "text", description: "text", tags: "text" }, name: "text_search" },
+ { key: { publishedAt: -1 }, name: "publishedAt_desc" },
+ { key: { likeCount: -1 }, name: "likeCount_desc" },
+ { key: { downloadCount: -1 }, name: "downloadCount_desc" },
+];
+
+export const COMMUNITY_ASSISTANT_LIKES_INDEXES: IndexDescription[] = [
+ { key: { assistantId: 1, userId: 1 }, name: "assistantId_userId", unique: true },
+ { key: { assistantId: 1 }, name: "assistantId" },
+ { key: { userId: 1 }, name: "userId" },
+ { key: { createdAt: -1 }, name: "createdAt_desc" },
+];
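
The unique compound index on `(assistantId, userId)` is what makes `toggleLike` behave as one like per user per template. An in-memory sketch of those toggle semantics — `LikeStore` is a hypothetical stand-in for the two MongoDB collections:

```typescript
// In-memory sketch of toggleLike semantics: the unique (assistantId, userId)
// pair is modeled as a Set key, so a second like from the same user unlikes.
class LikeStore {
  private likes = new Set<string>(); // key: `${assistantId}:${userId}`

  toggleLike(assistantId: string, userId: string): { liked: boolean; likeCount: number } {
    const key = `${assistantId}:${userId}`;
    if (this.likes.has(key)) {
      this.likes.delete(key); // unlike
    } else {
      this.likes.add(key); // like
    }
    return { liked: this.likes.has(key), likeCount: this.likeCount(assistantId) };
  }

  likeCount(assistantId: string): number {
    let n = 0;
    for (const key of this.likes) if (key.startsWith(`${assistantId}:`)) n++;
    return n;
  }
}
```

In the real repository the count is maintained as a denormalized `likeCount` field updated with `$inc`, with the unique index guarding against double-insertion under concurrent toggles.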
diff --git a/apps/rowboat/src/infrastructure/repositories/mongodb.recurring-job-rules.repository.ts b/apps/rowboat/src/infrastructure/repositories/mongodb.recurring-job-rules.repository.ts
index a0a43d7c2..fe9937f2d 100644
--- a/apps/rowboat/src/infrastructure/repositories/mongodb.recurring-job-rules.repository.ts
+++ b/apps/rowboat/src/infrastructure/repositories/mongodb.recurring-job-rules.repository.ts
@@ -1,7 +1,7 @@
import { z } from "zod";
import { Filter, ObjectId } from "mongodb";
import { db } from "@/app/lib/mongodb";
-import { CreateRecurringRuleSchema, IRecurringJobRulesRepository, ListedRecurringRuleItem } from "@/src/application/repositories/recurring-job-rules.repository.interface";
+import { CreateRecurringRuleSchema, IRecurringJobRulesRepository, ListedRecurringRuleItem, UpdateRecurringRuleSchema } from "@/src/application/repositories/recurring-job-rules.repository.interface";
import { RecurringJobRule } from "@/src/entities/models/recurring-job-rule";
import { NotFoundError } from "@/src/entities/errors/common";
import { PaginatedList } from "@/src/entities/common/paginated-list";
@@ -208,6 +208,31 @@ export class MongoDBRecurringJobRulesRepository implements IRecurringJobRulesRep
return await this.updateNextRunAt(id, result.cron);
}
+ /**
+ * Updates a recurring job rule with new input and schedule.
+ */
+ async update(id: string, data: z.infer<typeof UpdateRecurringRuleSchema>): Promise<z.infer<typeof RecurringJobRule>> {
+ const now = new Date().toISOString();
+
+ const result = await this.collection.findOneAndUpdate(
+ { _id: new ObjectId(id) },
+ {
+ $set: {
+ input: data.input,
+ cron: data.cron,
+ updatedAt: now,
+ },
+ },
+ { returnDocument: "after" },
+ );
+
+ if (!result) {
+ throw new NotFoundError(`Recurring job rule ${id} not found`);
+ }
+
+ return await this.updateNextRunAt(id, data.cron);
+ }
+
/**
* Deletes a recurring job rule by its unique identifier.
*/
diff --git a/apps/rowboat/src/infrastructure/repositories/mongodb.scheduled-job-rules.repository.ts b/apps/rowboat/src/infrastructure/repositories/mongodb.scheduled-job-rules.repository.ts
index 0bbb206ad..ec2b93800 100644
--- a/apps/rowboat/src/infrastructure/repositories/mongodb.scheduled-job-rules.repository.ts
+++ b/apps/rowboat/src/infrastructure/repositories/mongodb.scheduled-job-rules.repository.ts
@@ -1,7 +1,7 @@
import { z } from "zod";
import { Filter, ObjectId } from "mongodb";
import { db } from "@/app/lib/mongodb";
-import { CreateRuleSchema, IScheduledJobRulesRepository, ListedRuleItem, UpdateJobSchema } from "@/src/application/repositories/scheduled-job-rules.repository.interface";
+import { CreateRuleSchema, IScheduledJobRulesRepository, ListedRuleItem, UpdateJobSchema, UpdateScheduledRuleSchema } from "@/src/application/repositories/scheduled-job-rules.repository.interface";
import { ScheduledJobRule } from "@/src/entities/models/scheduled-job-rule";
import { NotFoundError } from "@/src/entities/errors/common";
import { PaginatedList } from "@/src/entities/common/paginated-list";
@@ -138,6 +138,41 @@ export class MongoDBScheduledJobRulesRepository implements IScheduledJobRulesRep
return this.convertDocToModel(result);
}
+ /**
+ * Reconfigures a scheduled job rule's input and next run time.
+ */
+ async updateRule(id: string, data: z.infer<typeof UpdateScheduledRuleSchema>): Promise<z.infer<typeof ScheduledJobRule>> {
+ const scheduledDate = new Date(data.scheduledTime);
+ const nextRunAtSeconds = Math.floor(scheduledDate.getTime() / 1000);
+ const nextRunAt = Math.floor(nextRunAtSeconds / 60) * 60;
+ const now = new Date().toISOString();
+
+ const result = await this.collection.findOneAndUpdate(
+ { _id: new ObjectId(id) },
+ {
+ $set: {
+ input: data.input,
+ nextRunAt,
+ status: "pending",
+ workerId: null,
+ lastWorkerId: null,
+ updatedAt: now,
+ },
+ $unset: {
+ output: "",
+ processedAt: "",
+ },
+ },
+ { returnDocument: "after" },
+ );
+
+ if (!result) {
+ throw new NotFoundError(`Scheduled job rule ${id} not found`);
+ }
+
+ return this.convertDocToModel(result);
+ }
+
/**
* Updates a scheduled job rule with new status and output data.
*/
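
`updateRule` above converts the ISO `scheduledTime` into epoch seconds and floors it to the minute boundary, presumably so a minute-granularity scheduler sweep matches it exactly. The conversion in isolation:

```typescript
// Sketch of the nextRunAt computation in updateRule: ISO timestamp ->
// epoch seconds, floored to the start of its minute.
function toNextRunAt(scheduledTime: string): number {
  const seconds = Math.floor(new Date(scheduledTime).getTime() / 1000);
  return Math.floor(seconds / 60) * 60; // align to minute boundary
}
```

Note that `updateRule` also resets `status` to `"pending"` and unsets `output`/`processedAt`, so rescheduling an already-processed job makes it eligible to run again.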
diff --git a/apps/rowboat/src/infrastructure/repositories/mongodb.shared-workflows.indexes.ts b/apps/rowboat/src/infrastructure/repositories/mongodb.shared-workflows.indexes.ts
new file mode 100644
index 000000000..832e20f2b
--- /dev/null
+++ b/apps/rowboat/src/infrastructure/repositories/mongodb.shared-workflows.indexes.ts
@@ -0,0 +1,8 @@
+import { IndexDescription } from "mongodb";
+
+export const SHARED_WORKFLOWS_COLLECTION = "shared_workflows";
+
+export const SHARED_WORKFLOWS_INDEXES: IndexDescription[] = [
+ { key: { expiresAt: 1 }, name: "expiresAt_ttl", expireAfterSeconds: 0 },
+];
+
diff --git a/apps/rowboat/src/interface-adapters/controllers/copilot/create-copilot-cached-turn.controller.ts b/apps/rowboat/src/interface-adapters/controllers/copilot/create-copilot-cached-turn.controller.ts
index bd14730e0..805905395 100644
--- a/apps/rowboat/src/interface-adapters/controllers/copilot/create-copilot-cached-turn.controller.ts
+++ b/apps/rowboat/src/interface-adapters/controllers/copilot/create-copilot-cached-turn.controller.ts
@@ -1,5 +1,5 @@
import { z } from "zod";
-import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot } from '@/src/entities/models/copilot';
+import { CopilotChatContext, CopilotMessage, DataSourceSchemaForCopilot, TriggerSchemaForCopilot } from '@/src/entities/models/copilot';
import { Workflow } from '@/app/lib/types/workflow_types';
import { ICreateCopilotCachedTurnUseCase } from "@/src/application/use-cases/copilot/create-copilot-cached-turn.use-case";
import { BadRequestError } from "@/src/entities/errors/common";
@@ -14,6 +14,7 @@ const inputSchema = z.object({
workflow: Workflow,
context: CopilotChatContext.nullable(),
dataSources: z.array(DataSourceSchemaForCopilot).optional(),
+ triggers: z.array(TriggerSchemaForCopilot).optional(),
}),
});
diff --git a/apps/rowboat/src/interface-adapters/controllers/recurring-job-rules/update-recurring-job-rule.controller.ts b/apps/rowboat/src/interface-adapters/controllers/recurring-job-rules/update-recurring-job-rule.controller.ts
new file mode 100644
index 000000000..dd1758720
--- /dev/null
+++ b/apps/rowboat/src/interface-adapters/controllers/recurring-job-rules/update-recurring-job-rule.controller.ts
@@ -0,0 +1,50 @@
+import { BadRequestError } from "@/src/entities/errors/common";
+import z from "zod";
+import { IUpdateRecurringJobRuleUseCase } from "@/src/application/use-cases/recurring-job-rules/update-recurring-job-rule.use-case";
+import { RecurringJobRule } from "@/src/entities/models/recurring-job-rule";
+
+const inputSchema = z.object({
+ caller: z.enum(["user", "api"]),
+ userId: z.string().optional(),
+ apiKey: z.string().optional(),
+ projectId: z.string(),
+ ruleId: z.string(),
+ input: z.object({
+ messages: z.array(z.any()),
+ }),
+ cron: z.string(),
+});
+
+export interface IUpdateRecurringJobRuleController {
+ execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof RecurringJobRule>>;
+}
+
+export class UpdateRecurringJobRuleController implements IUpdateRecurringJobRuleController {
+ private readonly updateRecurringJobRuleUseCase: IUpdateRecurringJobRuleUseCase;
+
+ constructor({
+ updateRecurringJobRuleUseCase,
+ }: {
+ updateRecurringJobRuleUseCase: IUpdateRecurringJobRuleUseCase,
+ }) {
+ this.updateRecurringJobRuleUseCase = updateRecurringJobRuleUseCase;
+ }
+
+ async execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof RecurringJobRule>> {
+ const result = inputSchema.safeParse(request);
+ if (!result.success) {
+ throw new BadRequestError(`Invalid request: ${JSON.stringify(result.error)}`);
+ }
+ const { caller, userId, apiKey, projectId, ruleId, input, cron } = result.data;
+
+ return await this.updateRecurringJobRuleUseCase.execute({
+ caller,
+ userId,
+ apiKey,
+ projectId,
+ ruleId,
+ input,
+ cron,
+ });
+ }
+}
diff --git a/apps/rowboat/src/interface-adapters/controllers/scheduled-job-rules/update-scheduled-job-rule.controller.ts b/apps/rowboat/src/interface-adapters/controllers/scheduled-job-rules/update-scheduled-job-rule.controller.ts
new file mode 100644
index 000000000..1976b35f5
--- /dev/null
+++ b/apps/rowboat/src/interface-adapters/controllers/scheduled-job-rules/update-scheduled-job-rule.controller.ts
@@ -0,0 +1,51 @@
+import { BadRequestError } from "@/src/entities/errors/common";
+import z from "zod";
+import { IUpdateScheduledJobRuleUseCase } from "@/src/application/use-cases/scheduled-job-rules/update-scheduled-job-rule.use-case";
+import { ScheduledJobRule } from "@/src/entities/models/scheduled-job-rule";
+import { Message } from "@/app/lib/types/types";
+
+const inputSchema = z.object({
+ caller: z.enum(["user", "api"]),
+ userId: z.string().optional(),
+ apiKey: z.string().optional(),
+ projectId: z.string(),
+ ruleId: z.string(),
+ input: z.object({
+ messages: z.array(Message),
+ }),
+ scheduledTime: z.string().datetime(),
+});
+
+export interface IUpdateScheduledJobRuleController {
+ execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof ScheduledJobRule>>;
+}
+
+export class UpdateScheduledJobRuleController implements IUpdateScheduledJobRuleController {
+ private readonly updateScheduledJobRuleUseCase: IUpdateScheduledJobRuleUseCase;
+
+ constructor({
+ updateScheduledJobRuleUseCase,
+ }: {
+ updateScheduledJobRuleUseCase: IUpdateScheduledJobRuleUseCase,
+ }) {
+ this.updateScheduledJobRuleUseCase = updateScheduledJobRuleUseCase;
+ }
+
+ async execute(request: z.infer<typeof inputSchema>): Promise<z.infer<typeof ScheduledJobRule>> {
+ const result = inputSchema.safeParse(request);
+ if (!result.success) {
+ throw new BadRequestError(`Invalid request: ${JSON.stringify(result.error)}`);
+ }
+ const { caller, userId, apiKey, projectId, ruleId, input, scheduledTime } = result.data;
+
+ return await this.updateScheduledJobRuleUseCase.execute({
+ caller,
+ userId,
+ apiKey,
+ projectId,
+ ruleId,
+ input,
+ scheduledTime,
+ });
+ }
+}