# Docker Deployment

Run the SeaClip hub in Docker containers.
## Quick Start with Docker Compose

```bash
git clone https://github.com/t4tarzan/seaclip.git
cd seaclip
docker-compose up -d
```
## Docker Compose Configuration

```yaml
# docker-compose.yml
version: '3.8'

services:
  seaclip:
    build: .
    ports:
      - "51842:51842"
      - "5173:5173"
    environment:
      - NODE_ENV=production
      - PORT=51842
      - SEACLIP_DEPLOYMENT_MODE=local_trusted
      - SEACLIP_EDITION=simple
      - DATABASE_URL=postgres://seaclip:seaclip@db:5432/seaclip
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - db
      - ollama
    volumes:
      - seaclip-data:/app/data
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=seaclip
      - POSTGRES_PASSWORD=seaclip
      - POSTGRES_DB=seaclip
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-models:/root/.ollama
    restart: unless-stopped
    # For GPU support, uncomment:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

volumes:
  seaclip-data:
  postgres-data:
  ollama-models:
```
## Dockerfile

```dockerfile
# Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
RUN npm install -g pnpm

# Copy manifests first so dependency installation is cached across code changes
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY server/package.json server/
COPY ui/package.json ui/
COPY cli/package.json cli/
COPY shared/package.json shared/
RUN pnpm install --frozen-lockfile

COPY . .
RUN pnpm build

FROM node:20-alpine AS runner
WORKDIR /app
RUN npm install -g pnpm

COPY --from=builder /app/package.json ./
COPY --from=builder /app/pnpm-lock.yaml ./
COPY --from=builder /app/pnpm-workspace.yaml ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/server ./server
COPY --from=builder /app/ui ./ui
COPY --from=builder /app/shared ./shared

EXPOSE 51842 5173
CMD ["pnpm", "start"]
```
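Because the builder stage runs `COPY . .`, everything in the working tree ends up in the build context. A `.dockerignore` keeps host `node_modules`, build output, and local secrets out of the image — a minimal sketch (the exact entries are assumptions about the repo layout):

```shell
# Write a minimal .dockerignore next to the Dockerfile
# (entries are illustrative; adjust to the actual repo layout)
cat > .dockerignore <<'EOF'
node_modules
**/node_modules
.git
dist
**/dist
.env
secrets/
EOF
```

Without this, a host-side `pnpm install` can make the build context hundreds of megabytes larger than it needs to be.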
## Running with GPU Support

For NVIDIA GPU acceleration with Ollama, install the NVIDIA Container Toolkit:

```bash
# Add the NVIDIA repository
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Restart Docker so it picks up the NVIDIA runtime
sudo systemctl restart docker
```

Then uncomment the GPU section in `docker-compose.yml`.
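If you prefer not to edit the main file, the same GPU settings can live in a Compose override file — a sketch (the override filename is a convention, not part of the repo):

```yaml
# docker-compose.gpu.yml — apply with:
#   docker-compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

This keeps `docker-compose.yml` identical across GPU and CPU-only hosts.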
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `PORT` | `51842` | API server port |
| `SEACLIP_EDITION` | `simple` | Edition: `simple` or `enhanced` |
| `DATABASE_URL` | - | PostgreSQL connection string |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama API endpoint |
| `TELEGRAM_BOT_TOKEN` | - | Telegram bot token (optional) |
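These can be kept in a `.env` file next to `docker-compose.yml`, which Compose reads automatically for `${VAR}` substitution — note this only takes effect for values the compose file references as variables (e.g. `PORT=${PORT:-51842}`) rather than hardcodes. A sketch with example values:

```shell
# Create a .env file next to docker-compose.yml (values are examples)
cat > .env <<'EOF'
PORT=51842
SEACLIP_EDITION=enhanced
# TELEGRAM_BOT_TOKEN is optional; leave it unset to disable the bot
EOF
```

Keep `.env` out of version control, since it may hold tokens.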
## Managing Containers

```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f seaclip

# Stop all services
docker-compose down

# Rebuild after code changes
docker-compose build --no-cache
docker-compose up -d

# Pull a model into Ollama
docker-compose exec ollama ollama pull llama3
```
## Production Considerations

Security checklist:

- Change the default PostgreSQL password
- Use Docker secrets for sensitive values
- Put the hub behind a reverse proxy (nginx/traefik) with SSL
- Limit exposed ports to the internal network
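For the reverse-proxy item, a minimal nginx sketch — the server name and certificate paths are placeholders, and it assumes the API is reachable on the host at port 51842:

```nginx
# /etc/nginx/conf.d/seaclip.conf — placeholder names and paths
server {
    listen 443 ssl;
    server_name seaclip.example.com;

    ssl_certificate     /etc/letsencrypt/live/seaclip.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/seaclip.example.com/privkey.pem;

    location / {
        # Forward to the SeaClip API container published on the host
        proxy_pass http://127.0.0.1:51842;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With the proxy in place, the `ports:` mappings in `docker-compose.yml` can be restricted to `127.0.0.1:51842:51842` so the API is not reachable directly from outside the host.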
### Using Docker Secrets

```yaml
# docker-compose.yml (with secrets)
services:
  seaclip:
    # ...
    secrets:
      - db_password
      - telegram_token
    environment:
      - DATABASE_URL_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  telegram_token:
    file: ./secrets/telegram_token.txt
```
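The `file:` entries above expect the secret files to exist before `docker-compose up`. A sketch for creating them — the values are placeholders to be replaced with real credentials:

```shell
# Create the secret files referenced by the compose file (placeholder values)
mkdir -p secrets
printf 'change-me-db-password' > secrets/db_password.txt
printf 'change-me-telegram-token' > secrets/telegram_token.txt

# Lock permissions down so only the owner can read the secrets
chmod 600 secrets/db_password.txt secrets/telegram_token.txt
```

Add `secrets/` to `.gitignore` so the files never land in version control.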
## Health Checks

```bash
# Check the status of all containers
docker-compose ps

# Hit the API health endpoint
curl http://localhost:51842/api/health

# Verify the database is accepting connections
docker-compose exec db pg_isready -U seaclip
```
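Compose can run the API check automatically and report it in `docker-compose ps`. A sketch of a `healthcheck:` block for the `seaclip` service — it assumes `wget` is available in the container, which holds for `node:20-alpine` via BusyBox:

```yaml
# Addition to the seaclip service in docker-compose.yml
services:
  seaclip:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:51842/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

Dependent services can then use `depends_on` with `condition: service_healthy` to wait until the API actually responds, not just until the container starts.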