# Log Analysis System

A production-ready Python microservice for intelligent log analysis, pattern detection, and anomaly identification. Part of the Developer Foundry 2.0 (AIMA) ecosystem.
## Table of Contents

- Overview
- Features
- Architecture
- Getting Started
- Running the Application
- Testing
- API Documentation
- Monitoring
- Contributing
- Team
## Overview

The Log Analysis System is a microservice designed to analyze, interpret, and summarize logs from all services within the Developer Foundry 2.0 (AIMA) ecosystem. It provides:
- Real-time log ingestion from RabbitMQ queues
- Intelligent pattern detection using template-based clustering
- ML-based anomaly detection using Isolation Forest algorithm
- Automated error analysis and summarization
- Integration with Recommendation and Alert systems
- Prometheus metrics for observability
Tech Stack:
- Python 3.11+
- FastAPI (async web framework)
- PostgreSQL (database)
- RabbitMQ (message broker)
- SQLAlchemy (async ORM)
- Prometheus (metrics)
- Docker & Docker Compose
## Features

### ✅ Log Ingestion
- Consumes structured JSON messages from RabbitMQ
- Validates and normalizes log entries
- Stores logs in PostgreSQL with full metadata
### ✅ Pattern Detection
- Identifies recurring log patterns
- Template-based message normalization
- Frequency-based pattern clustering
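The normalization-plus-clustering idea can be sketched in a few lines of Python. This is an illustrative sketch, not the service's actual implementation: the regexes and placeholder tokens below are assumptions, and production template miners are considerably more sophisticated.

```python
import re
from collections import Counter

# Illustrative patterns; the service's real normalizer may differ.
_UUID = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I)
_NUM = re.compile(r"\b\d+\b")

def normalize_message(message: str) -> str:
    """Collapse the variable parts of a log line into a reusable template."""
    template = _UUID.sub("<UUID>", message)   # replace UUIDs first,
    return _NUM.sub("<NUM>", template)        # then bare integers

def cluster_by_template(messages: list[str]) -> Counter:
    """Frequency-based clustering: count how often each template occurs."""
    return Counter(normalize_message(m) for m in messages)

print(normalize_message("retry 3 for user 42"))  # → retry <NUM> for user <NUM>
```

Messages that differ only in their variable parts ("user 1 login", "user 2 login") map to the same template, so counting templates directly yields the recurring patterns.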
### ✅ Anomaly Detection
- ML-based anomaly identification (Isolation Forest)
- Configurable anomaly thresholds
- Real-time anomaly scoring
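For reference, scikit-learn's `IsolationForest` can be applied to per-window log features like this. The feature choice (error rate, log volume) and the `contamination` value are illustrative assumptions, not the service's actual model or threshold:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-minute features: [error_rate, log_volume]
normal = rng.normal(loc=[0.05, 100.0], scale=[0.01, 10.0], size=(500, 2))

# contamination plays the role of a configurable anomaly threshold
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(normal)

suspect = np.array([[0.90, 2000.0]])   # a window with a huge error spike
print(model.predict(suspect))          # -1 flags an anomaly, 1 means normal
print(model.score_samples(suspect))    # lower score = more anomalous
```

`score_samples` gives the real-time anomaly score mentioned above; `predict` applies the threshold implied by `contamination`.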
### ✅ Error Analysis
- Common error extraction
- Error rate calculation
- Severity scoring
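The error-rate and severity calculations can be illustrated with plain Python. The level names and severity weights here are assumptions made for the sketch:

```python
from collections import Counter

# Hypothetical severity weights per log level.
SEVERITY_WEIGHTS = {"DEBUG": 0, "INFO": 0, "WARNING": 1, "ERROR": 3, "CRITICAL": 5}

def error_rate(level_counts: Counter) -> float:
    """Percentage of logs at ERROR or CRITICAL level."""
    total = sum(level_counts.values())
    if total == 0:
        return 0.0
    errors = level_counts["ERROR"] + level_counts["CRITICAL"]
    return round(100.0 * errors / total, 2)

def severity_score(level_counts: Counter) -> int:
    """Weighted sum over levels, usable to rank services by health."""
    return sum(SEVERITY_WEIGHTS.get(level, 0) * n for level, n in level_counts.items())

counts = Counter({"INFO": 80, "WARNING": 10, "ERROR": 8, "CRITICAL": 2})
print(error_rate(counts))      # → 10.0  (10 of 100 logs are errors)
print(severity_score(counts))  # → 44   (10*1 + 8*3 + 2*5)
```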
### ✅ Insights & Recommendations
- Automated summary generation
- Publishes insights to Recommendation System (Team E)
- Sends high-severity alerts to Alert System (Team A)
### ✅ Security
- JWT-based authentication
- API Gateway integration (Team G)
- Secure password hashing
### ✅ Observability
- Prometheus metrics endpoint
- Structured logging (JSON in production)
- Health check endpoints
## Architecture

```
┌─────────────────┐
│  Log Mgmt (B)   │ ──── log_analysis_queue ────┐
└─────────────────┘                             │
                                                ▼
                               ┌────────────────────────┐
                               │  Log Analysis Service  │
                               │  • Ingestion           │
                               │  • Pattern Detection   │
                               │  • Anomaly Detection   │
                               │  • Analysis Engine     │
                               └────────┬───────────────┘
                                        │
                    ┌───────────────────┴──────────────────┐
                    │                                      │
          recommendation_queue                       alerts_queue
                    │                                      │
                    ▼                                      ▼
          ┌───────────────────┐               ┌──────────────────┐
          │  Recommendations  │               │    Alerts (A)    │
          │        (E)        │               └──────────────────┘
          └───────────────────┘
```
Processing Pipeline:

1. Consume logs from `log_analysis_queue` (RabbitMQ)
2. Parse and validate the message structure
3. Store in the PostgreSQL database
4. Analyze for patterns and anomalies
5. Generate insights and summaries
6. Publish results to downstream services
7. Expose metrics for monitoring
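The parse-and-validate step of the pipeline can be sketched with the standard library alone. The field names and accepted levels below are assumptions about the message schema, not the ecosystem's actual contract:

```python
import json
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    service_name: str
    log_level: str
    message: str
    timestamp: datetime

# Hypothetical schema; adjust to the real queue contract.
REQUIRED_FIELDS = ("service_name", "log_level", "message", "timestamp")
VALID_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}

def parse_log_message(raw: bytes) -> LogEntry:
    """Parse a raw queue message, rejecting malformed entries early."""
    payload = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in payload]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    level = payload["log_level"].upper()          # normalize the level
    if level not in VALID_LEVELS:
        raise ValueError(f"unknown log level: {level}")
    return LogEntry(payload["service_name"], level, payload["message"],
                    datetime.fromisoformat(payload["timestamp"]))
```

Rejecting bad messages before the database write keeps the `logs_failed_total` metric meaningful and protects the downstream analysis steps.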
## Getting Started

### Prerequisites

Ensure you have the following installed:

- Python 3.11+
- Docker
- Docker Compose (included with Docker Desktop)
- Git

Optional for local development:

- PostgreSQL 15+
- RabbitMQ 3.12+
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/Developer-s-Foundry/df-2.0-aima-log-analysis
   cd df-2.0-aima-log-analysis
   ```

2. Create the environment file:

   ```bash
   cp .env.example .env
   ```

3. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

### Configuration

Edit the `.env` file to configure the service. See `.env.example` for all available configuration options.
## Running the Application

### Option 1: Docker Compose (recommended)

This is the easiest way to get started. Docker Compose will start all required services.

1. Start all services:

   ```bash
   docker compose up -d
   ```

   This starts:

   - PostgreSQL database (port 5432)
   - RabbitMQ with management UI (ports 5672, 15672)
   - Log Analysis Service (port 8000)
   - Prometheus (port 9091)

2. Check service health:

   ```bash
   # View logs
   docker compose logs -f log_analysis_service

   # Check the health endpoint
   curl http://localhost:8000/health/
   ```

3. Run database migrations:

   ```bash
   docker compose exec log_analysis_service alembic upgrade head
   ```

4. Stop services:

   ```bash
   docker compose down
   ```

Quick commands using the Makefile:

```bash
make docker-build   # Build Docker image
make docker-up      # Start all services
make docker-down    # Stop all services
```

### Option 2: Local development

For development with hot-reload:

1. Start PostgreSQL and RabbitMQ:

   ```bash
   docker compose up -d postgres rabbitmq
   ```

2. Run database migrations:

   ```bash
   alembic upgrade head
   ```

3. Start the development server:

   ```bash
   # Using uvicorn directly
   uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

   # Or using Make
   make dev
   ```

The API will be available at http://localhost:8000 with auto-reload enabled.
## Testing

Run all tests with coverage:

```bash
pytest tests/ -v --cov=app --cov-report=html --cov-report=term

# Or using Make
make test
```

Lint and type-check:

```bash
# Run flake8
flake8 app/ tests/

# Run mypy type checking
mypy app/

# Or using Make
make lint
```

Format the code:

```bash
# Format with black
black app/ tests/

# Sort imports with isort
isort app/ tests/

# Or using Make
make format
```

Install pre-commit hooks to automatically format and lint code:

```bash
pre-commit install
```

## API Documentation

Once the service is running, access the interactive API documentation:
### Swagger UI

URL: http://localhost:8000/docs

Interactive API documentation with "Try it out" functionality.

### ReDoc

URL: http://localhost:8000/redoc

Alternative API documentation with a clean, readable interface.
### Endpoints

| Method | Endpoint | Description | Auth Required |
|---|---|---|---|
| GET | `/health/` | Health check | No |
| GET | `/health/ready` | Readiness check (DB connectivity) | No |
| GET | `/` | Root endpoint | No |
| GET | `/api/v1/logs` | List logs with filtering | Yes (JWT) |
| GET | `/api/v1/logs/{id}` | Get a specific log entry | Yes (JWT) |
| GET | `/api/v1/logs/summary` | Get an aggregated analysis summary | Yes (JWT) |
| GET | `/metrics` | Prometheus metrics | No |
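The authenticated endpoints can also be called from Python. A minimal standard-library sketch; the base URL and token are placeholders, and a real client would also handle errors and pagination:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"  # placeholder; point at your deployment

def get_logs(token: str, **filters) -> dict:
    """Call GET /api/v1/logs with a Bearer token and query filters."""
    query = urllib.parse.urlencode(filters)
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/logs?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:   # performs the network call
        return json.load(resp)

# Building a request without sending it, to show its shape:
req = urllib.request.Request(
    f"{BASE_URL}/api/v1/logs?service_name=auth_service",
    headers={"Authorization": "Bearer YOUR_JWT_TOKEN"},
)
print(req.get_header("Authorization"))  # → Bearer YOUR_JWT_TOKEN
```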
### Authentication

API endpoints require JWT authentication. Include the token in the `Authorization` header:

```bash
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  http://localhost:8000/api/v1/logs
```

### Example Requests

Get logs for a specific service:

```bash
curl -X GET "http://localhost:8000/api/v1/logs?service_name=auth_service&log_level=ERROR&page=1&page_size=50" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```

Get an analysis summary:

```bash
curl -X GET "http://localhost:8000/api/v1/logs/summary?service_name=auth_service" \
  -H "Authorization: Bearer YOUR_JWT_TOKEN"
```

Response example:
```json
{
  "data": {
    "service": "auth_service",
    "total_logs": 230,
    "error_rate": 12.3,
    "common_errors": ["Database timeout", "JWT expired"],
    "anomalies_detected": 3,
    "recommendations": ["Increase connection pool", "Review JWT expiration"]
  },
  "status_code": 200,
  "message": "Log summary retrieved successfully"
}
```

## Monitoring

### Service URLs

| Service | URL | Credentials |
|---|---|---|
| API Docs | http://localhost:8000/docs | N/A |
| Health Check | http://localhost:8000/health/ | N/A |
| Prometheus Metrics | http://localhost:8000/metrics | N/A |
| RabbitMQ Management | http://localhost:15672 | guest/guest |
| Prometheus UI | http://localhost:9091 | N/A |
### Metrics

The service exposes the following metrics at `/metrics`:

- `logs_ingested_total` - Total logs ingested (by service, level)
- `logs_processed_total` - Successfully processed logs
- `logs_failed_total` - Failed log processing attempts
- `anomalies_detected_total` - Anomalies detected
- `patterns_detected_total` - Patterns detected
- `messages_consumed_total` - RabbitMQ messages consumed
- `messages_published_total` - RabbitMQ messages published
- `api_requests_total` - API requests (by method, endpoint, status)
- `api_request_duration_seconds` - API request latency
- `active_consumers` - Active RabbitMQ consumers
- `unprocessed_logs` - Unprocessed logs in queue
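As a sketch of how such metrics are typically defined with the `prometheus_client` library (the metric names mirror the list above; the label values and registry wiring are illustrative, not the service's actual code):

```python
from prometheus_client import CollectorRegistry, Counter, Histogram, generate_latest

registry = CollectorRegistry()

logs_ingested = Counter(
    "logs_ingested_total", "Total logs ingested",
    ["service", "level"], registry=registry)
request_latency = Histogram(
    "api_request_duration_seconds", "API request latency",
    registry=registry)

# Instrumented code increments/observes as events happen:
logs_ingested.labels(service="auth_service", level="ERROR").inc()
request_latency.observe(0.12)

# The /metrics endpoint simply serves this exposition text:
print(generate_latest(registry).decode())
```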
### Logs

View application logs:

```bash
# Docker
docker compose logs -f log_analysis_service
```
## Database Migrations
### Create a New Migration
```bash
alembic revision --autogenerate -m "Description of changes"

# Or using Make
make migrate-create
```

### Apply Migrations

```bash
alembic upgrade head

# Or using Make
make migrate
```

### Roll Back One Migration

```bash
alembic downgrade -1
```

## Development Workflow

1. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make changes and test:

   ```bash
   make test
   make lint
   ```

3. Format code:

   ```bash
   make format
   ```

4. Commit changes:

   ```bash
   git add .
   git commit -m "feat: your feature description"
   ```

5. Push and create a PR:

   ```bash
   git push origin feature/your-feature-name
   ```
## Troubleshooting

**Port already in use:**

```bash
# Check what's using the port
lsof -i :8000

# Kill the process, or change the port in .env
PORT=8001
```

**Database connection errors:**

```bash
# Ensure PostgreSQL is running
docker compose ps postgres

# Check the connection string in .env
DATABASE_URL=postgresql+asyncpg://postgres:password@localhost:5432/log_analysis_db
```

**RabbitMQ connection errors:**

```bash
# Ensure RabbitMQ is running
docker compose ps rabbitmq

# Check RabbitMQ logs
docker compose logs rabbitmq
```

**Migration errors:**

```bash
# Reset the database (development only!)
docker compose down -v
docker compose up -d postgres
alembic upgrade head
```

## Performance

Design Targets:
- Throughput: ≥ 10,000 logs/minute
- Latency: ≤ 2 seconds per batch
- Uptime: ≥ 99.9%
- Pattern Detection Precision: ≥ 90%
- Anomaly Alert Reliability: ≥ 95%
Optimizations:
- Async I/O throughout
- Database connection pooling
- Batch processing
- Efficient indexing
- Caching where appropriate
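Batch processing, for instance, can be as simple as buffering parsed entries and flushing them in one bulk insert. A schematic sketch; the flush callback and batch size are illustrative, not the service's actual implementation:

```python
from typing import Callable

class LogBatcher:
    """Buffer items and flush them in batches to cut per-row overhead."""

    def __init__(self, flush: Callable[[list], None], batch_size: int = 500):
        self.flush = flush            # e.g. a bulk INSERT wrapped in a function
        self.batch_size = batch_size
        self._buffer: list = []

    def add(self, item) -> None:
        self._buffer.append(item)
        if len(self._buffer) >= self.batch_size:
            self.drain()

    def drain(self) -> None:
        """Flush whatever is buffered, even a partial batch."""
        if self._buffer:
            self.flush(self._buffer)
            self._buffer = []

batches = []
batcher = LogBatcher(batches.append, batch_size=3)
for i in range(7):
    batcher.add(i)
batcher.drain()                   # flush the remainder
print([len(b) for b in batches])  # → [3, 3, 1]
```

A real consumer would also drain on a timer so that low-traffic periods don't leave entries stuck in the buffer.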
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Ensure all tests pass
- Submit a pull request
### Code Style

- Follow PEP 8 guidelines
- Use type hints
- Write docstrings for all functions
- Format with Black (line length 100)
- Sort imports with isort
## Team

Developer Foundry 2.0 (AIMA) - Team F
- Samuel Ogboye
- Nasiff Bello
- Daniel Kiyiki
This project is part of the Developer Foundry 2.0 (AIMA) ecosystem.
For issues, questions, or contributions, please:
- Open an issue on GitHub
- Contact the development team
- Check the documentation in `/docs`
Built with ❤️ by Team F