- Overview
- System Requirements
- Project Setup
- Docker Configuration
- Training the ML Model
- Testing the System
- Running the Complete Application
- End-to-End Testing Workflow
- Troubleshooting
- Production Deployment
- Advanced Configuration
## Overview

AtlasEye is a satellite change detection system that uses deep learning to identify changes between satellite images captured at different times. The system can detect and visualize urban development, deforestation, natural disasters, and other environmental changes.
Key Features:
- Change detection between satellite image pairs
- Geospatial analysis with GeoJSON export
- Interactive visualization with metrics
- REST API for integration
- Asynchronous processing for large images
Tech Stack:
- Backend: Python, PyTorch, FastAPI, PostgreSQL with PostGIS
- Frontend: Next.js, TypeScript, TailwindCSS, Mapbox GL
- Infrastructure: Docker, Celery, Redis
## System Requirements

- Docker and Docker Compose
- Git
- 8GB+ RAM
- NVIDIA GPU (optional, for faster training)
- 20GB+ free disk space
## Project Setup

Clone the repository:

```bash
git clone https://github.com/yourusername/atlaseye.git
cd atlaseye
```

Create the data directories:

```bash
mkdir -p data/models data/images data/training/before data/training/after data/training/mask data/test_data
```

## Docker Configuration

### 1. Backend Dockerfile

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies for geospatial libraries
RUN apt-get update && apt-get install -y \
build-essential \
libproj-dev \
libgeos-dev \
proj-bin \
libspatialindex-dev \
libgl1-mesa-glx \
libglib2.0-0 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements file
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .

# Run the API server (assumed entrypoint; adjust app.main:app to your module path)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

### 2. Frontend Dockerfile

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy package.json and install dependencies
COPY package*.json ./
RUN npm install
# Copy the rest of the application
COPY . .

# Build and serve the app (assumed scripts; use `npm run dev` for development)
RUN npm run build
CMD ["npm", "start"]
```

### 3. Prepare Training and Test Data

- Place before images in `data/training/before/`
- Place after images in `data/training/after/`
- Place ground truth masks in `data/training/mask/` (if available)
- Place test images in `data/test_data/`
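Before training, it is worth sanity-checking that the before/after pairs line up. A minimal sketch, assuming matching filenames across the two directories:

```python
from pathlib import Path

before = {p.name for p in Path("data/training/before").iterdir()}
after = {p.name for p in Path("data/training/after").iterdir()}

# Report images that exist on one side of the pair but not the other
missing_after = sorted(before - after)
missing_before = sorted(after - before)
if missing_after or missing_before:
    print("Unpaired before images:", missing_after)
    print("Unpaired after images:", missing_before)
else:
    print(f"{len(before)} image pairs look consistent")
```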
### 4. Docker Compose Services

The system uses Docker Compose to orchestrate multiple services. The key services are:
- postgres: PostgreSQL database with PostGIS extension
- redis: Message broker for Celery
- backend: Python FastAPI application with ML capabilities
- celery_worker: Background task processor
- frontend: Next.js web application
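For reference, here is a condensed sketch of what the docker-compose.yml for these services might look like. The service names match the list above; the images, build contexts, and Celery module path are illustrative assumptions:

```yaml
services:
  postgres:
    image: postgis/postgis:15-3.4   # PostgreSQL with the PostGIS extension
    environment:
      POSTGRES_USER: atlaseye
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: atlaseye

  redis:
    image: redis:7-alpine

  backend:
    build: ./backend                # build context is an assumption
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis

  celery_worker:
    build: ./backend
    command: celery -A app.worker worker --loglevel=info   # module path is an assumption
    depends_on:
      - postgres
      - redis

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
```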
Build all images:

```bash
docker-compose build
```

The system uses environment variables for configuration:
Backend Environment Variables:
- `POSTGRES_SERVER`: Database hostname
- `POSTGRES_USER`: Database username
- `POSTGRES_PASSWORD`: Database password
- `POSTGRES_DB`: Database name
- `CELERY_BROKER_URL`: Redis connection string
- `CELERY_RESULT_BACKEND`: Result storage backend
- `MODEL_PATH`: Path to trained model
- `LOCAL_STORAGE_PATH`: Path for storing uploaded images
Frontend Environment Variables:
- `NEXT_PUBLIC_API_URL`: Backend API URL
- `NEXT_PUBLIC_MAPBOX_TOKEN`: Mapbox API token for maps
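A local development configuration might look like the following (illustrative values only; the variable names match the lists above):

```bash
# backend .env (example values)
POSTGRES_SERVER=postgres
POSTGRES_USER=atlaseye
POSTGRES_PASSWORD=changeme
POSTGRES_DB=atlaseye
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
MODEL_PATH=/app/data/models/final_model.pth
LOCAL_STORAGE_PATH=/app/data/images

# frontend .env.local (example values)
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_MAPBOX_TOKEN=your_mapbox_token_here
```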
## Training the ML Model

Run the training script:

```bash
docker-compose run --rm backend python -m app.ml.training.train \
    --data_dir=/app/data/training \
    --checkpoint_dir=/app/data/models \
    --batch_size=8 \
    --num_epochs=50 \
    --learning_rate=0.001 \
    --image_size=256 \
    --device=cpu  # Use 'cuda' if GPU available
```

| Parameter | Description |
|---|---|
| `--data_dir` | Directory containing training data |
| `--checkpoint_dir` | Directory to save model checkpoints |
| `--batch_size` | Number of samples per training batch |
| `--num_epochs` | Total number of training epochs |
| `--learning_rate` | Learning rate for the optimizer |
| `--image_size` | Size to resize images to (e.g., 256 for 256x256) |
| `--device` | `cuda` for GPU or `cpu` for CPU-only training |
View training logs in real-time:
```bash
docker-compose logs -f backend
```

The training history plot is saved to `data/models/training_history.png`.
## Testing the System

Run all backend tests:
```bash
docker-compose run --rm backend python -m unittest discover tests
```

Run specific test modules:

```bash
# ML module tests
docker-compose run --rm backend python -m unittest backend/tests/test_ml.py
# API tests
docker-compose run --rm backend python -m unittest backend/tests/test_api.py
```

Test the trained model on sample imagery (`--ground_truth` is optional):

```bash
docker-compose run --rm backend python -m app.ml.inferencer.test_predictor \
    --model_path=/app/data/models/final_model.pth \
    --before=/app/data/test_data/before.tif \
    --after=/app/data/test_data/after.tif \
    --ground_truth=/app/data/test_data/mask.tif \
    --output_dir=/app/data/test_results
```

Run lint checks:

```bash
docker-compose run --rm frontend npm run lint
```

Test the database connection:

```bash
docker-compose run --rm backend python -m app.db.test_connection
```

## Running the Complete Application

Start all services:

```bash
docker-compose up -d
```

Then access:

- Backend API: http://localhost:8000/docs
- Frontend UI: http://localhost:3000
Check service status:

```bash
docker-compose ps
```

View logs:

```bash
# All services
docker-compose logs
# Specific service
docker-compose logs -f backend
```

## End-to-End Testing Workflow

Upload a pair of images:

```bash
curl -X POST http://localhost:8000/api/v1/detection/upload-images/ \
  -F "before_image=@/path/to/before.tif" \
  -F "after_image=@/path/to/after.tif"
```

You should receive a `job_id` in the response.
Start processing, replacing {job_id} with the actual ID received:

```bash
curl -X POST http://localhost:8000/api/v1/detection/process/{job_id}
```
Retrieve the results:

```bash
curl http://localhost:8000/api/v1/detection/results/{job_id}
```

Then check the frontend:

- Open http://localhost:3000/results/{job_id} in your browser
- Verify the results display correctly:
  - Change percentage
  - Before/after images
  - Change detection overlay
  - Interactive map
  - Metrics charts
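The same workflow can be scripted. Below is a minimal sketch using Python's requests library; it assumes the endpoints above, that the upload response contains a `job_id` field, and that the results endpoint reports a `status` field while processing (adjust the field names to match your API):

```python
import time

import requests

API = "http://localhost:8000/api/v1/detection"

# Upload the before/after pair; assumes the response JSON includes "job_id"
with open("before.tif", "rb") as before, open("after.tif", "rb") as after:
    resp = requests.post(
        f"{API}/upload-images/",
        files={"before_image": before, "after_image": after},
    )
resp.raise_for_status()
job_id = resp.json()["job_id"]

# Kick off asynchronous processing
requests.post(f"{API}/process/{job_id}").raise_for_status()

# Poll until results are ready (the "status" field name is an assumption)
while True:
    result = requests.get(f"{API}/results/{job_id}").json()
    if result.get("status") != "processing":
        break
    time.sleep(5)

print(result)
```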
## Troubleshooting

If PostgreSQL connection fails:

```bash
docker-compose down
docker volume rm atlaseye_postgres_data
docker-compose up -d postgres
# Wait 10 seconds for initialization
docker-compose run --rm backend python -m app.db.test_connection
```

If model training fails:

```bash
# Check GPU availability
docker-compose run --rm backend python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
# Verify data paths
docker-compose run --rm backend ls -la /app/data/training/before
docker-compose run --rm backend ls -la /app/data/training/after
docker-compose run --rm backend ls -la /app/data/training/mask
```

For frontend rendering problems:

```bash
# Check if static assets are being served
docker-compose run --rm frontend ls -la /app/public
# Verify environment variables
docker-compose run --rm frontend env | grep NEXT_PUBLIC
# Fix Mapbox token if maps don't render
echo "NEXT_PUBLIC_MAPBOX_TOKEN=your_mapbox_token_here" > frontend/.env.local
docker-compose restart frontend
```

Detailed logs can help diagnose issues:

```bash
docker-compose logs -f
```

## Production Deployment

For production deployment, consider the following:
- Secure Passwords: Update environment variables with strong passwords
- SSL Configuration: Add a reverse proxy (Nginx/Traefik) with SSL
- Authentication: Implement proper user authentication and authorization
- Database Tuning: Optimize PostgreSQL for geospatial queries
- Caching: Add Redis caching for API responses
- CDN: Use a CDN for static assets
- Load Balancing: Deploy multiple backend instances behind a load balancer
- Database Replication: Set up PostgreSQL replication
- Monitoring: Add monitoring and alerting
For example, add an Nginx reverse proxy to docker-compose.yml:

```yaml
version: '3.8'

services:
  # ... services configuration

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - backend
      - frontend
```
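A minimal `nginx.conf` for this setup might look like the following sketch; it assumes the backend and frontend service names and ports used above, and TLS certificates mounted under /etc/nginx/ssl:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # Proxy API traffic to the FastAPI backend
    location /api/ {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else goes to the Next.js frontend
    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
    }
}
```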
## Advanced Configuration

To enable GPU acceleration:

- Install the NVIDIA Container Toolkit on the host
- Update docker-compose.yml:

```yaml
services:
  backend:
    # ... other settings
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

- Update the training command:
```bash
docker-compose run --rm backend python -m app.ml.training.train --device=cuda
```

To use a custom model architecture:
- Create a new model class in the models directory (a sketch is shown below)
- Update the model initialization in train.py
- Update the predictor to use your custom model
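As a starting point, here is a minimal sketch of a custom model class. The interface is an assumption: it takes the before/after images concatenated along the channel dimension and outputs single-channel change logits; match the constructor and forward signature to whatever train.py expects:

```python
import torch
import torch.nn as nn


class SimpleChangeDetector(nn.Module):
    """Toy encoder-decoder for change detection (illustration only)."""

    def __init__(self, in_channels: int = 6, base_channels: int = 32):
        super().__init__()
        # Before and after images are assumed concatenated along the
        # channel dimension (3 + 3 = 6 input channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 1, 1),  # single-channel change logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))
```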
For database schema changes:

```bash
# Generate migration
docker-compose run --rm backend alembic revision --autogenerate -m "Description"
# Apply migration
docker-compose run --rm backend alembic upgrade head
```
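As an example of a change that would require a migration, you might add a column to a (hypothetical) SQLAlchemy model and then run the two commands above:

```python
# app/db/models.py (hypothetical model; your actual models will differ)
from sqlalchemy import Column, DateTime, Integer, String

from app.db.base import Base  # assumed declarative base


class DetectionJob(Base):
    __tablename__ = "detection_jobs"

    id = Column(Integer, primary_key=True)
    status = Column(String, nullable=False, default="pending")
    # New column: after adding it, generate and apply a migration
    # with the alembic commands shown above.
    completed_at = Column(DateTime, nullable=True)
```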
Documentation Version: 1.0.0
Last Updated: April 14, 2025
Authors: Exprays