This document is also available in: English, العربية (Arabic), Deutsch (German), Español (Spanish), Français (French), עברית (Hebrew), Italiano (Italian), 日本語 (Japanese), 한국어 (Korean), Nederlands (Dutch), Português (Portuguese), Русский (Russian), Svenska (Swedish), and 中文 (Chinese).
Group Chat AI is an advanced collaborative platform that enables dynamic group conversations with multiple AI personas. The system facilitates meaningful discussions across diverse perspectives, allowing users to explore ideas, get feedback, and engage in multi-participant conversations with AI agents representing different roles and viewpoints.
```
User Input → React Frontend → API Gateway → ConversationOrchestrator
                                                      ↓
Session Manager → PersonaAgent → LLM Interface → Language Models (Bedrock/OpenAI/Anthropic/Ollama)
```
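The flow above can be read as: the orchestrator receives a user message, resolves the session, and fans the message out to persona agents that each call the configured language model. The sketch below illustrates that idea in TypeScript; all class and method names are illustrative assumptions, not the actual backend code, and the session manager and routing logic are omitted for brevity.

```typescript
// Minimal sketch of the flow above; names are hypothetical, not the real backend API.
interface LLMInterface {
  complete(prompt: string): Promise<string>;
}

class PersonaAgent {
  constructor(private name: string, private llm: LLMInterface) {}

  // Each agent builds a prompt from its own view of the conversation and calls the shared LLM interface.
  async respond(history: string[], userMessage: string): Promise<string> {
    const prompt = [...history, `User: ${userMessage}`, `${this.name}:`].join('\n');
    return this.llm.complete(prompt);
  }
}

class ConversationOrchestrator {
  constructor(private agents: PersonaAgent[]) {}

  // In the real system a routing model decides which personas speak;
  // this sketch simply asks every persona to reply.
  async handleMessage(history: string[], userMessage: string): Promise<string[]> {
    return Promise.all(this.agents.map(agent => agent.respond(history, userMessage)));
  }
}
```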
- Multi-Persona Conversations: Engage with multiple AI personas simultaneously in group discussions
- Dynamic Interaction Patterns: Real-time conversation flow with natural turn-taking and responses
- Diverse Perspectives: Each persona brings unique viewpoints, expertise, and communication styles
- Collaborative Problem Solving: Work through complex topics with AI agents offering different approaches
- Session Persistence: Maintain conversation context and persona consistency across sessions
- Flexible Persona Customization: Create and modify AI personas with natural language descriptions
- Multiple LLM Support: Leverage various language models including AWS Bedrock, OpenAI, Anthropic, and Ollama
- Node.js 20+
- npm 8+
- Docker (optional, for containerization)
- AWS CLI (for deployment)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd group-chat-ai
  ```

- Install dependencies

  ```bash
  npm run install:all
  ```

- Set up environment variables

  ```bash
  # Backend
  cp backend/.env.example backend/.env
  # Edit backend/.env with your configuration
  # Frontend will use Vite's proxy configuration
  ```

- Build the shared package

  ```bash
  npm run build:shared
  ```

- Start the backend server

  ```bash
  npm run dev:backend
  ```

  The backend will be available at http://localhost:3000

- Start the frontend development server

  ```bash
  npm run dev:frontend
  ```

  The frontend will be available at http://localhost:3001

- Test the API

  ```bash
  curl http://localhost:3000/health
  ```
```
group-chat-ai/
├── shared/                 # Shared TypeScript types and utilities
│   └── src/
│       ├── types/          # Common type definitions
│       ├── constants/      # Application constants
│       └── utils/          # Shared utility functions
├── backend/                # Express.js API server
│   └── src/
│       ├── controllers/    # API route handlers
│       ├── services/       # Business logic services
│       ├── middleware/     # Express middleware
│       ├── config/         # Configuration files
│       └── utils/          # Backend utilities
├── frontend/               # React application
│   └── src/
│       ├── components/     # Reusable React components
│       ├── pages/          # Page components
│       ├── services/       # API service layer
│       ├── hooks/          # Custom React hooks
│       └── utils/          # Frontend utilities
├── infrastructure/         # AWS CDK infrastructure code
├── tests/                  # Test files
└── documents/              # Project documentation
```
- `npm run install:all` - Install all dependencies
- `npm run build` - Build all packages
- `npm run test` - Run all tests
- `npm run lint` - Lint all packages

- `npm run dev:backend` - Start backend in development mode
- `npm run build:backend` - Build backend
- `npm run test:backend` - Run backend tests

- `npm run dev:frontend` - Start frontend development server
- `npm run build:frontend` - Build frontend for production
- `npm run test:frontend` - Run frontend tests

- `npm run personas:generate` - Generate English personas.json from shared definitions
- `npm run docs:translate` - Translate all documentation to supported languages
- `npm run docs:translate:single -- --lang es` - Translate to a specific language
- `GET /health` - Basic health check
- `GET /health/detailed` - Detailed health information

- `GET /personas` - Get all available personas
- `GET /personas/:personaId` - Get specific persona details

- `POST /sessions` - Create new conversation session
- `POST /sessions/:sessionId/messages` - Send message and get responses
- `PUT /sessions/:sessionId/personas` - Update session personas
- `GET /sessions/:sessionId/summary` - Get session summary
- `DELETE /sessions/:sessionId` - End session
- `GET /sessions/:sessionId` - Get session details
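As a rough illustration of how the session endpoints above fit together, the sketch below creates a session, sends a message, and ends the session. The request and response body shapes (for example the `personas` array, the `message` field, and `session.id`) are assumptions; consult the backend controllers for the actual contract.

```typescript
// Hypothetical usage of the session endpoints; body shapes are assumptions.
const base = 'http://localhost:3000';

async function demo(): Promise<void> {
  // Create a new conversation session with two personas (IDs are made up here)
  const session = await fetch(`${base}/sessions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ personas: ['strategic_advisor', 'technical_expert'] }),
  }).then(r => r.json());

  // Send a message and collect the persona responses
  const responses = await fetch(`${base}/sessions/${session.id}/messages`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'How should we plan the product launch?' }),
  }).then(r => r.json());
  console.log(responses);

  // End the session when done
  await fetch(`${base}/sessions/${session.id}`, { method: 'DELETE' });
}

demo().catch(console.error);
```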
The system supports multiple LLM providers through a configurable interface:
- OpenAI (GPT-3.5/GPT-4)
- Anthropic (Claude)
- AWS Bedrock (Various models)
- Ollama (Local models)
Configure via environment variables:
```bash
LLM_PROVIDER=openai
LLM_MODEL=gpt-3.5-turbo
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=500
```
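For example, to target AWS Bedrock instead of OpenAI you might set `LLM_PROVIDER=bedrock` and point `LLM_MODEL` at a Bedrock model ID such as `anthropic.claude-sonnet-4-20250514-v1:0`; the exact variable values are assumptions here and depend on your provider account, region, and the models you have enabled.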
In development, the system uses mock responses to simulate AI interactions without requiring API keys.
The system includes diverse AI personas that can be customized for various group conversation scenarios:
- Strategic Advisor - High-level planning, vision, and strategic direction
- Technical Expert - Deep technical knowledge, implementation details, and solutions
- Analyst - Data-driven insights, research, and analytical perspectives
- Creative Thinker - Innovation, brainstorming, and out-of-the-box ideas
- Facilitator - Discussion management, consensus building, and collaboration
Each persona is defined by just 4 simple fields:
- Name: Display name (e.g., "Strategic Advisor")
- Role: Short role identifier (e.g., "Strategist")
- Details: Free text description including background, priorities, concerns, and influence level
- Avatar Selection: Visual representation from available avatar options
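A rough sketch of what such a persona definition could look like in TypeScript is shown below; the actual field names and types live in `shared/src/personas/index.ts` and may differ.

```typescript
// Hypothetical shape of a persona definition; the real types in
// shared/src/personas/index.ts may differ.
export interface PersonaDefinition {
  name: string;     // Display name, e.g. "Strategic Advisor"
  role: string;     // Short role identifier, e.g. "Strategist"
  details: string;  // Free-text background, priorities, concerns, and influence level
  avatarId: string; // Assumed identifier for the selected avatar
}

export const strategicAdvisor: PersonaDefinition = {
  name: 'Strategic Advisor',
  role: 'Strategist',
  details: 'Focuses on high-level planning, vision, and strategic direction; weighs long-term impact over short-term wins.',
  avatarId: 'advisor-1',
};
```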
- Edit Default Personas: Modify any default persona's details in natural language
- Create Custom Personas: Build completely custom personas with your own descriptions
- Session Persistence: All persona customizations persist throughout browser sessions
- Import/Export: Save and share persona configurations via JSON files
- Tile-Based Interface: Visual tile selection with comprehensive editing capabilities
Each persona maintains:
- Isolated conversation context for authentic responses
- Natural language processing of details field for AI prompt generation
- Role-specific response patterns based on described characteristics
- Intelligent turn-taking for natural group conversation flow
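The `details` field is the main input to prompt generation. Below is an illustrative sketch (not the project's actual implementation) of how that free-text description could be turned into a system prompt while keeping each persona's conversation history isolated:

```typescript
// Illustrative only: turning a persona's free-text details into a system prompt
// and keeping its conversation history isolated from other personas.
interface Persona { name: string; role: string; details: string; }
interface ChatTurn { speaker: string; text: string; }

class PersonaContext {
  private history: ChatTurn[] = []; // isolated per persona

  constructor(private persona: Persona) {}

  buildSystemPrompt(): string {
    return `You are ${this.persona.name}, acting as a ${this.persona.role}. ` +
      `${this.persona.details} Stay in character and answer from this perspective.`;
  }

  addTurn(turn: ChatTurn): void {
    this.history.push(turn);
  }

  // The prompt sent to the LLM combines the system prompt with this persona's own history.
  toPrompt(): string {
    return [this.buildSystemPrompt(), ...this.history.map(t => `${t.speaker}: ${t.text}`)].join('\n');
  }
}
```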
- Source of Truth: All persona definitions are maintained in `shared/src/personas/index.ts`
- Generation: Run `npm run personas:generate` to create the English `personas.json` translation file
- Translation: Use the existing translation scripts to generate localized persona files
```bash
# 1. Update persona definitions in shared package
vim shared/src/personas/index.ts

# 2. Generate English personas.json from shared definitions
npm run personas:generate

# 3. Translate personas to all supported languages
npm run docs:translate                      # Translates all files including personas.json
# Or translate to a specific language
npm run docs:translate:single -- --lang es

# 4. Rebuild shared package if needed
npm run build:shared
```
- Source: `shared/src/personas/index.ts` (TypeScript definitions)
- Generated: `frontend/public/locales/en/personas.json` (English i18n)
- Translated: `frontend/public/locales/{lang}/personas.json` (Localized versions)
The system supports 14 languages for personas and documentation:
- 🇺🇸 English (en) - Source language
- 🇸🇦 العربية (ar) - Arabic
- 🇩🇪 Deutsch (de) - German
- 🇪🇸 Español (es) - Spanish
- 🇫🇷 Français (fr) - French
- 🇮🇱 עברית (he) - Hebrew
- 🇮🇹 Italiano (it) - Italian
- 🇯🇵 日本語 (ja) - Japanese
- 🇰🇷 한국어 (ko) - Korean
- 🇳🇱 Nederlands (nl) - Dutch
- 🇵🇹 Português (pt) - Portuguese
- 🇷🇺 Русский (ru) - Russian
- 🇸🇪 Svenska (sv) - Swedish
- 🇨🇳 中文 (zh) - Chinese
- Add the persona definition to `shared/src/personas/index.ts`
- Run `npm run personas:generate` to update the English translations
- Run the translation scripts to generate localized versions
- The new persona will be available in all supported languages
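For instance, a new entry in `shared/src/personas/index.ts` might look like the following, using the hypothetical shape sketched earlier; adjust it to the actual types in that file.

```typescript
// Hypothetical new persona entry for shared/src/personas/index.ts.
export const devilsAdvocate = {
  name: "Devil's Advocate",
  role: 'Challenger',
  details: 'Questions assumptions, surfaces risks, and stress-tests proposals before the group commits to them.',
  avatarId: 'challenger-1', // assumed avatar identifier
};
```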
- Input Validation: All user inputs are validated and sanitized
- Session Isolation: Each session maintains separate context
- Error Handling: Graceful error handling with user-friendly messages
- Rate Limiting: Built-in protection against abuse
- HTTPS: All communications encrypted in production
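For reference, rate limiting in an Express backend is commonly wired up as middleware such as `express-rate-limit`; the snippet below is a generic example, and the project's actual protection may be configured differently.

```typescript
// Generic Express rate-limiting example; not necessarily how this project configures it.
import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();

// Allow at most 100 requests per IP per 15-minute window.
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
```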
- Structured Logging: JSON-formatted logs with Winston
- Health Checks: Comprehensive health monitoring
- Metrics: Custom application metrics
- Error Tracking: Detailed error logging and tracking
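As an example of the structured logging style described above, a Winston logger emitting JSON might be configured like this; the snippet is illustrative and the project's actual logger setup may differ.

```typescript
// Illustrative Winston setup for JSON-formatted structured logs.
import winston from 'winston';

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [new winston.transports.Console()],
});

// Emits a single JSON line with timestamp, level, message, and metadata.
logger.info('session_created', { sessionId: 'abc123', personaCount: 3 });
```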
```bash
# Build backend image
cd backend
npm run docker:build

# Run container
npm run docker:run
```

```bash
# Deploy infrastructure
cd infrastructure
npm run deploy:dev   # replace :dev with staging or prod for those environments
```
By default, the Routing Model for Bedrock is OpenAI gpt-oss:20b (`openai.gpt-oss-20b-1:0`). The Persona Model leverages Anthropic Claude 4 Sonnet (`anthropic.claude-sonnet-4-20250514-v1:0`). Please ensure you are deploying to a region that supports both models, or configure alternative models.
```bash
npm run test
npm run test:integration
npm run test:e2e
```
- Response Time: < 3 seconds for persona responses
- Availability: 99.9% API availability
- Concurrency: Support 1000+ concurrent users
- Group Conversations: Up to 5 personas per session with natural conversation flow
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.