DOSync is a production-grade Docker Compose orchestration tool that automates deployments across single servers or multi-server fleets. It synchronizes services with container registries, performs zero-downtime rolling updates, and provides automatic rollback on failures.
Perfect for teams that choose Docker Compose over Kubernetes for operational simplicity.
- Zero-downtime rolling updates with health checks and automatic rollback
- Multi-server support - run independent DOSync instances behind a load balancer
- Intelligent replica detection for scale-based and name-based replicas
- Service dependency management - updates services in the correct order
- Deployment strategies: one-at-a-time, percentage, blue-green, canary
- Automatic rollback on health check failures with full history
- All major container registries: Docker Hub, GHCR, GCR, ACR, ECR, Harbor, Quay, DOCR, custom
- Advanced tag policies: semantic versioning, numerical ordering, regex filters
- Version constraints: deploy only specific ranges (e.g., >=1.0.0 <2.0.0)
- State drift prevention: checks actual running containers vs compose file
- Self-updating: DOSync can update itself when new versions are available
- Backup management: creates backups before every modification
- Metrics & notifications: SQLite metrics storage, Slack/email/webhook alerts
- Web dashboard: monitor deployments and health across your fleet
- Docker Compose as source of truth: updates your compose file, not just containers
You don't need Kubernetes complexity when:
- Your application runs fine on 5-50 servers
- You value operational simplicity over theoretical scale
- Your team knows Docker Compose, not k8s manifests
- You want low infrastructure costs ($10-20/server vs $40+ for k8s nodes)
Run multiple servers with identical Docker Compose files, each with its own DOSync instance:
               Load Balancer
                     |
    +----------------+----------------+
    |                |                |
 Server 1        Server 2        Server 3
┌─────────┐     ┌─────────┐     ┌─────────┐
│ DOSync  │     │ DOSync  │     │ DOSync  │
│  + App  │     │  + App  │     │  + App  │
│(3 reps) │     │(3 reps) │     │(3 reps) │
└─────────┘     └─────────┘     └─────────┘
What you get:
- ✅ Zero-downtime deployments across your entire fleet
- ✅ Automatic rollback if deployments fail
- ✅ Each server is independent (no single point of failure)
- ✅ Standard Docker Compose (no new YAML to learn)
- ✅ Easy debugging (SSH to server, check logs)
- ✅ Horizontal scaling (add more servers as needed)
Single Server (Perfect for):
- Side projects and MVPs
- Internal tools
- Development/staging environments
- Small businesses ($50-500k ARR)
Multi-Server Fleet (Ideal for):
- Growing SaaS applications ($500k-5M ARR)
- High-availability web applications
- Agencies managing multiple client sites
- Teams that value "boring technology"
- 100-10,000 requests/second
When to use Kubernetes instead:
- 100+ servers
- Complex multi-region deployments
- Need for service mesh, auto-scaling pods across nodes
- Enterprise requirements (compliance, vendor support)
The easiest way to use DOSync is to include it in your Docker Compose file. This way, it runs as a container alongside your other services and can update them when new images are available.
We provide a helper script that can automatically add DOSync to your existing Docker Compose project:
# Download the script
curl -sSL https://raw.githubusercontent.com/localrivet/dosync/main/add-to-compose.sh > add-to-compose.sh
chmod +x add-to-compose.sh
# Run it (providing any required registry credentials as environment variables)
./add-to-compose.sh
# Start DOSync
docker compose up -d dosync
Alternatively, you can manually add the DOSync service to your Docker Compose file:
services:
# Your other services here
webapp:
image: ghcr.io/your-org/webapp:latest
# ...
api:
image: gcr.io/your-project/api:latest
# ...
backend:
image: registry.digitalocean.com/your-registry/backend:latest
# ...
frontend:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
# ...
# Self-updating DOSync service
dosync:
image: localrivet/dosync:latest
restart: unless-stopped
volumes:
# Mount the Docker socket to allow controlling the Docker daemon
- /var/run/docker.sock:/var/run/docker.sock
# Mount the actual docker-compose.yml file that's being used to run the stack
- ./docker-compose.yml:/app/docker-compose.yml
# Mount a directory for backups
- ./backups:/app/backups
environment:
# Registry credentials as needed (see configuration below)
- DO_TOKEN=${DO_TOKEN} # Only required for DigitalOcean
- CHECK_INTERVAL=1m
- VERBOSE=--verbose
See docker-compose.example.yaml for a complete example.
# Download and run the installation script
curl -sSL https://raw.githubusercontent.com/localrivet/dosync/main/install.sh | bash
# Clone the repository
git clone https://github.com/localrivet/dosync.git
cd dosync
# Build the binary
make build
# Install the binary
sudo cp ./release/$(go env GOOS)/$(go env GOARCH)/dosync /usr/local/bin/dosync
sudo chmod +x /usr/local/bin/dosync
Create a .env file or set environment variables with your registry credentials as needed. For example:
# DigitalOcean (only if using DigitalOcean Container Registry)
DO_TOKEN=your_digitalocean_token_here
# Docker Hub
DOCKERHUB_USERNAME=youruser
DOCKERHUB_PASSWORD=yourpassword
# AWS ECR
AWS_ACCESS_KEY_ID=yourkey
AWS_SECRET_ACCESS_KEY=yoursecret
# ...and so on for other registries
Your Docker Compose file can use images from any supported registry:
services:
backend:
image: registry.digitalocean.com/your-registry/backend:latest
# ...
frontend:
image: ghcr.io/your-org/frontend:latest
# ...
api:
image: gcr.io/your-project/api:latest
# ...
worker:
image: quay.io/yourorg/worker:latest
# ...
DOSync supports syncing images from multiple container registries, including Docker Hub, GCR, GHCR, ACR, Quay.io, Harbor, DigitalOcean Container Registry, AWS ECR, and custom/private registries.
To configure credentials for these registries, add a registry section to your dosync.yaml file. All fields are optional—only specify the registries you need. You can use environment variable expansion for secrets.
Example:
registry:
dockerhub:
username: myuser
password: ${DOCKERHUB_PASSWORD}
imagePolicy: # Optional image policy configuration
filterTags:
pattern: '^main-' # Only consider tags starting with 'main-'
policy:
numerical:
order: desc # Select the highest numerical value
gcr:
credentials_file: /path/to/gcp.json
ghcr:
token: ${GITHUB_PAT}
imagePolicy:
filterTags:
pattern: '^v(?P<semver>[0-9]+\.[0-9]+\.[0-9]+)$'
extract: '$semver'
policy:
semver:
range: '>=1.0.0 <2.0.0' # Only use 1.x versions
acr:
tenant_id: your-tenant-id
client_id: your-client-id
client_secret: ${AZURE_CLIENT_SECRET}
registry: yourregistry.azurecr.io
quay:
token: ${QUAY_TOKEN}
harbor:
url: https://myharbor.domain.com
username: myuser
password: ${HARBOR_PASSWORD}
docr:
token: ${DOCR_TOKEN}
imagePolicy:
policy:
semver:
range: ''
ecr:
aws_access_key_id: ${AWS_ACCESS_KEY_ID}
aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
region: us-east-1
registry: 123456789012.dkr.ecr.us-east-1.amazonaws.com
custom:
url: https://custom.registry.com
username: myuser
password: ${CUSTOM_REGISTRY_PASSWORD}
See the code comments in internal/config/config.go for more details on each field.
DOSync allows you to define sophisticated policies for selecting which image tags to use. This is especially useful for CI/CD pipelines where tag patterns may contain branch names, timestamps, or version information.
Each registry configuration can include an imagePolicy section with the following components:
- Tag Filtering (optional): Use regex patterns to filter which tags are considered
- Value Extraction (optional): Extract values from tags using named groups
- Policy Selection: Choose how to sort and select the "best" tag (numerical, semver, alphabetical)
If no policy is specified, DOSync defaults to using the lexicographically highest tag, preferring non-prerelease tags if available (like traditional container registries).
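For illustration, here is a minimal Go sketch of that default behavior, assuming tags are compared as plain strings and a hyphen marks a pre-release suffix (an approximation for clarity, not DOSync's actual selection code):

package main

import (
	"fmt"
	"sort"
	"strings"
)

// defaultTag prefers tags without a pre-release suffix, then takes the
// lexicographically highest candidate.
func defaultTag(tags []string) string {
	var stable []string
	for _, t := range tags {
		if !strings.Contains(t, "-") { // crude pre-release check for this sketch
			stable = append(stable, t)
		}
	}
	candidates := tags
	if len(stable) > 0 {
		candidates = stable // prefer non-prerelease tags when any exist
	}
	sort.Strings(candidates)
	return candidates[len(candidates)-1] // lexicographically highest
}

func main() {
	fmt.Println(defaultTag([]string{"v1.2.3", "v1.2.4", "v2.0.0-rc1"})) // prints v1.2.4
}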
Select tags based on numerical values, useful for tags containing timestamps or build numbers.
imagePolicy:
filterTags:
pattern: '^main-[a-zA-Z0-9]+-(?P<ts>\d+)$' # Match format: main-hash-timestamp
extract: 'ts' # Extract the timestamp value
policy:
numerical:
order: desc # Select highest number (newest)
Example: With tags ["main-abc123-100", "main-def456-200", "main-ghi789-150"], this policy selects main-def456-200.
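Under the hood this amounts to a regex filter, a named-group extraction, and a numeric comparison. A rough Go sketch of the same selection (illustrative only, not DOSync's implementation):

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

func main() {
	tags := []string{"main-abc123-100", "main-def456-200", "main-ghi789-150"}
	// Same filter pattern as the policy above, with the timestamp in a named group.
	re := regexp.MustCompile(`^main-[a-zA-Z0-9]+-(?P<ts>\d+)$`)

	bestTag, bestVal := "", -1
	for _, tag := range tags {
		m := re.FindStringSubmatch(tag)
		if m == nil {
			continue // tag filtered out by the pattern
		}
		val, err := strconv.Atoi(m[re.SubexpIndex("ts")])
		if err != nil {
			continue
		}
		if val > bestVal { // order: desc -> keep the highest number
			bestTag, bestVal = tag, val
		}
	}
	fmt.Println(bestTag) // main-def456-200
}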
Select tags based on semantic versioning rules, optionally with version constraints.
imagePolicy:
policy:
semver: # Select highest semver without constraints
range: '' # Empty means any valid semver
Or with constraints:
imagePolicy:
policy:
semver:
range: '>=1.0.0 <2.0.0' # Only select from 1.x versions
Example: With tags ["v1.2.3", "v1.2.4", "v2.0.0", "v2.0.0-rc1"], the above policy selects v1.2.4.
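To make the range semantics concrete, here is a small Go sketch using the blang/semver library (the library choice is just for illustration; DOSync's internal semver handling may differ):

package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

func main() {
	tags := []string{"v1.2.3", "v1.2.4", "v2.0.0", "v2.0.0-rc1"}
	rng := semver.MustParseRange(">=1.0.0 <2.0.0")

	var best semver.Version
	bestTag := ""
	for _, tag := range tags {
		v, err := semver.ParseTolerant(tag) // tolerates the leading "v"
		if err != nil || !rng(v) {
			continue // skip unparsable tags and anything outside the range
		}
		if bestTag == "" || v.GT(best) {
			best, bestTag = v, tag
		}
	}
	fmt.Println(bestTag) // v1.2.4
}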
You can extract the version from complex tag formats:
imagePolicy:
filterTags:
pattern: '^v(?P<semver>[0-9]+\.[0-9]+\.[0-9]+)(-[a-z]+)?$'
extract: '$semver'
policy:
semver:
range: '>=1.0.0'
Select tags based on alphabetical ordering, useful for date-based formats like RELEASE.DATE.
imagePolicy:
filterTags:
pattern: '^RELEASE\.(?P<timestamp>.*)Z$' # Match format: RELEASE.2024-01-01T00-00-00Z
extract: '$timestamp' # Extract the timestamp portion
policy:
alphabetical:
order: desc # Select alphabetically highest (newest)
Example: With tags ["RELEASE.2024-06-01T12-00-00Z", "RELEASE.2024-06-02T12-00-00Z"], this selects RELEASE.2024-06-02T12-00-00Z.
For tags like main-abc1234-1718435261:
imagePolicy:
filterTags:
pattern: '^main-[a-fA-F0-9]+-(?P<ts>\d+)$'
extract: 'ts'
policy:
numerical:
order: desc # Highest timestamp wins
For standard semver tags like v1.2.3:
imagePolicy:
policy:
semver:
range: '>=1.0.0' # Any version 1.0.0 or higher
For only stable 1.x versions:
imagePolicy:
policy:
semver:
range: '>=1.0.0 <2.0.0' # Only 1.x versions
For including pre-releases:
imagePolicy:
policy:
semver:
range: '>=1.0.0-0' # Include pre-releases
For only using release candidates:
imagePolicy:
filterTags:
pattern: '.*-rc.*'
policy:
semver:
range: ''
For tags like 1.2.3-alpine3.17:
imagePolicy:
filterTags:
pattern: '^(?P<semver>[0-9]*\.[0-9]*\.[0-9]*)-.*'
extract: '$semver'
policy:
semver:
range: '>=1.0.0'
For tags like RELEASE.2023-01-31T08-42-01Z:
imagePolicy:
filterTags:
pattern: '^RELEASE\.(?P<timestamp>.*)Z$'
extract: '$timestamp'
policy:
alphabetical:
order: asc # Ascending for dates in this format
# Run manually with default settings
dosync sync -f docker-compose.yml
# Run with environment file and verbose output
dosync sync -e .env -f docker-compose.yml --verbose
# Run with custom polling interval (check every 5 minutes)
dosync sync -f docker-compose.yml -i 5m
dosync sync [flags]
Flags:
-e, --env-file string Path to .env file with registry credentials
-f, --file string Path to docker-compose.yml file (required)
-h, --help Help for sync command
-i, --interval duration Polling interval (default: 5m)
-v, --verbose Enable verbose output
DOSync supports setting all sync command flags and config/env file paths via environment variables. This is especially useful for Docker Compose and CI/CD environments.
CLI flags always take precedence over environment variables.
| Flag/Config Option | Environment Variable | Example Value |
|---|---|---|
| --config, -c | CONFIG_PATH | /app/dosync.yaml |
| --env-file, -e | ENV_FILE | /app/.env |
| --file, -f | SYNC_FILE | /app/docker-compose.yml |
| --interval, -i | SYNC_INTERVAL | 5m |
| --verbose, -v | SYNC_VERBOSE | true |
| --rolling-update | SYNC_ROLLING_UPDATE | false |
| --strategy | SYNC_STRATEGY | canary |
| --health-check | SYNC_HEALTH_CHECK | http |
| --health-endpoint | SYNC_HEALTH_ENDPOINT | /status |
| --delay | SYNC_DELAY | 30s |
| --rollback-on-failure | SYNC_ROLLBACK_ON_FAILURE | true |
Example Docker Compose usage:
services:
dosync:
image: localrivet/dosync:latest
container_name: dosync
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./docker-compose.yml:/app/docker-compose.yml
- ./backups:/app/backups
- ./deploy/dosync/dosync.yaml:/app/dosync.yaml
env_file:
- .env
environment:
- DOCR_TOKEN=${DOCR_TOKEN}
- CONFIG_PATH=/app/dosync.yaml
- ENV_FILE=/app/.env
- SYNC_FILE=/app/docker-compose.yml
- SYNC_INTERVAL=1m
- SYNC_VERBOSE=true
# - SYNC_ROLLING_UPDATE=false
# - SYNC_STRATEGY=canary
# - SYNC_HEALTH_CHECK=http
# - SYNC_HEALTH_ENDPOINT=/status
# - SYNC_DELAY=30s
# - SYNC_ROLLBACK_ON_FAILURE=true
networks:
- proxy
You can set any of the above environment variables to control DOSync's behavior. CLI flags will always override environment variables if both are provided.
After installation, the script creates a systemd service:
# Start the service
sudo systemctl start dosync.service
# Enable automatic start on boot
sudo systemctl enable dosync.service
# Check service status
sudo systemctl status dosync.service
# View service logs
sudo journalctl -u dosync.service -f
- DOSync polls all configured container registries according to the specified interval
- It checks for new image tags for each service defined in your Docker Compose file
- When a new tag is found, it updates the Docker Compose file
- It then uses docker compose up -d --no-deps to restart only the affected services
- Old images are pruned to save disk space
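The loop is conceptually simple. Here is a heavily simplified Go sketch of it, assuming a single hard-coded service and image for illustration (latestTag is a stand-in for the registry query and tag-policy evaluation; this is not DOSync's actual code):

package main

import (
	"os"
	"os/exec"
	"strings"
	"time"
)

// latestTag stands in for the registry lookup plus imagePolicy evaluation.
func latestTag(image string) string { return "v1.2.4" }

func main() {
	for range time.Tick(1 * time.Minute) {
		newTag := latestTag("ghcr.io/your-org/webapp")

		// Rewrite the compose file so it stays the source of truth.
		data, err := os.ReadFile("docker-compose.yml")
		if err != nil {
			continue
		}
		updated := strings.Replace(string(data),
			"ghcr.io/your-org/webapp:latest",
			"ghcr.io/your-org/webapp:"+newTag, 1)
		if updated == string(data) {
			continue // nothing new to deploy
		}
		if err := os.WriteFile("docker-compose.yml", []byte(updated), 0o644); err != nil {
			continue
		}

		// Restart only the affected service, leaving its dependencies alone.
		exec.Command("docker", "compose", "-f", "docker-compose.yml",
			"up", "-d", "--no-deps", "webapp").Run()

		// Reclaim disk space from superseded images.
		exec.Command("docker", "image", "prune", "-f").Run()
	}
}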
DOSync includes a sophisticated replica detection system that can identify and manage different types of service replicas in Docker Compose environments:
The Problem: Modern applications often run multiple copies (replicas) of the same service for reliability, load balancing, and zero-downtime deployments. When updating these services, you need to know:
- Which containers belong to which service
- How many replicas exist
- Whether they're using scale-based replication or name-based patterns like blue-green deployments
Without this knowledge, updates can become inconsistent or require manual intervention.
Our Solution: DOSync's replica detection automatically identifies all replicas of your services regardless of how they're deployed, allowing for:
- Consistent updates across all replicas of a service
- Proper handling of blue-green deployments
- Support for both Docker Compose scaling and custom naming patterns
- Zero-downtime rolling updates
Detects replicas created using Docker Compose's scale features:
services:
web:
image: nginx:latest
scale: 3 # Creates 3 replicas
api:
image: node:latest
deploy:
replicas: 2 # Creates 2 replicas using swarm mode syntax
Detects replicas with naming patterns like blue-green deployments:
services:
database-blue:
image: postgres:latest
database-green:
image: postgres:latest
cache-1:
image: redis:latest
cache-2:
image: redis:latest
We provide an interactive example to demonstrate replica detection:
cd examples
./run_example.sh
For more details, see the replica package documentation.
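As a rough illustration of what detection sees, the Go sketch below groups running containers into per-service replica sets by reading the standard com.docker.compose.service label via the Docker CLI (DOSync's replica package has its own detection logic; this is only a sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One line per container: "<compose service label>|<container name>".
	out, err := exec.Command("docker", "ps",
		"--format", `{{.Label "com.docker.compose.service"}}|{{.Names}}`).Output()
	if err != nil {
		panic(err)
	}

	replicas := map[string][]string{} // service -> container names
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		parts := strings.SplitN(line, "|", 2)
		if len(parts) != 2 || parts[0] == "" {
			continue // not a Compose-managed container
		}
		replicas[parts[0]] = append(replicas[parts[0]], parts[1])
	}

	// Scale-based replicas appear as one service with several containers
	// (e.g. web -> [myapp-web-1 myapp-web-2 myapp-web-3]); name-based patterns
	// such as database-blue / database-green appear as separate services.
	for svc, names := range replicas {
		fmt.Printf("%s: %d replica(s) %v\n", svc, len(names), names)
	}
}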
DOSync supports running multiple independent instances across a server fleet without any coordination mechanism. This is perfect for horizontally scaling your application while maintaining operational simplicity.
Each server runs:
- Identical docker-compose.yml file
- Its own DOSync instance (monitors local Docker daemon)
- Multiple replicas of each service (via deploy.replicas)
A load balancer (Traefik, nginx, Caddy, etc.) routes traffic based on health checks.
Step 1: Create docker-compose.yml (same on all servers)
version: '3.8'
services:
dosync:
image: localrivet/dosync:latest
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./docker-compose.yml:/app/docker-compose.yml
- ./backups:/app/backups
environment:
- GHCR_TOKEN=${GHCR_TOKEN}
- SYNC_INTERVAL=2m
- SYNC_ROLLING_UPDATE=true
- SYNC_STRATEGY=one-at-a-time
web:
image: ghcr.io/yourorg/app:latest
deploy:
replicas: 3 # Each server runs 3 replicas
healthcheck:
test: ["CMD", "wget", "--spider", "http://localhost:8080/health"]
interval: 10s
timeout: 3s
retries: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.web.rule=Host(`example.com`)"Step 2: Deploy to multiple servers
# On each server (server1, server2, server3, etc.)
git clone your-infra-repo
cd your-infra-repo
docker compose up -d
Step 3: Configure load balancer
Example Traefik configuration to distribute traffic:
# traefik.yml
entryPoints:
web:
address: ":80"
websecure:
address: ":443"
providers:
docker:
exposedByDefault: false
- Push new image: docker push ghcr.io/yourorg/app:v2.1.0
- All DOSync instances detect the new version within 2 minutes
- Each server performs a rolling update:
- Server 1: Updates its 3 replicas one-at-a-time
- Server 2: Updates its 3 replicas one-at-a-time
- Server 3: Updates its 3 replicas one-at-a-time
- Load balancer monitors health checks:
- Routes traffic only to healthy containers
- Automatically removes unhealthy containers from rotation
- Result: Zero-downtime deployment across entire fleet
vs Single Server:
- High availability (servers can fail independently)
- Horizontal scaling (add servers as traffic grows)
- Geographic distribution (place servers in different regions)
vs Kubernetes:
- No control plane overhead (no etcd, kube-apiserver, etc.)
- Each server is independent (failure isolation)
- Standard Docker Compose (familiar tooling)
- Much lower costs ($10-20/server vs $40+ for k8s nodes)
- Simpler operations (SSH to debug, standard logs)
vs Watchtower:
- Rolling updates with health checks (Watchtower just restarts)
- Automatic rollback on failures (Watchtower has none)
- Compose file as source of truth (Watchtower doesn't update files)
- Version control with tag policies (Watchtower only does "latest")
Start small (1-2 servers):
# Initial deployment
2 servers × 3 replicas = 6 total containers
Scale horizontally (add servers as needed):
# Add server 3
3 servers × 3 replicas = 9 total containers
# Add servers 4-5
5 servers × 3 replicas = 15 total containers
# Adjust replicas per server
5 servers × 5 replicas = 25 total containers
Each DOSync instance is independent - no configuration changes needed when adding/removing servers.
Each server exposes its own metrics and logs:
# Check DOSync status on any server
docker logs dosync
# View deployment history
ls -lah backups/
# Check service health
docker ps --filter "label=com.docker.compose.service=web"
For fleet-wide visibility, integrate with:
- Prometheus: Scrape metrics from each server's DOSync instance
- Grafana: Visualize deployments across all servers
- Loki: Centralized log aggregation
- Traefik Dashboard: Real-time traffic and health status
MIT License
