# Self-Hosting Architecture

How Owlat runs as a fully self-hosted stack using Docker Compose — open-source Convex backend, custom MTA, and optional local LLM.

Owlat is designed to run entirely on your own infrastructure. Every component is open-source, and the entire stack deploys via a single Docker Compose file.

Common reasons to self-host include data sovereignty, compliance requirements, air-gapped environments, or simply wanting full control. Owlat does not require any cloud services to function — not even for AI features, if you run a local model.
## The Stack

Everything starts with a single `docker compose up`. The self-hosted stack consists of five required services and one optional service:
| Service | Role | Image |
|---|---|---|
| Convex Backend | Database, real-time subscriptions, file storage, vector search, serverless functions | `ghcr.io/get-convex/convex-backend` |
| Web | Nuxt application (dashboard, email builder, settings) | Built from `apps/web/Dockerfile` |
| MTA | Custom mail transfer agent — SMTP delivery, bounce processing, IP warming | Built from `apps/mta/Dockerfile` |
| Redis | Job queue for MTA, distributed coordination, rate limiting state | `redis:7-alpine` |
| ClamAV | Antivirus scanning for email attachments | `clamav/clamav:1.3` |
| Ollama (optional) | Self-hosted LLM for the Agent Pipeline, knowledge extraction, and semantic search | `ollama/ollama` |
## What Convex provides natively
The open-source Convex backend (ADR-006) bundles capabilities that would otherwise require separate services:
- Database — document-oriented with indexes, full-text search, and ACID transactions
- Vector search — native vector indexes for embedding-based retrieval (Knowledge Graph, semantic file search)
- File storage — binary file uploads and downloads via `ctx.storage` (media library, attachments, semantic files)
- Real-time subscriptions — reactive queries that push updates to the UI over WebSocket
- Scheduled functions — cron jobs and one-off scheduled tasks (campaign sending, usage metering, knowledge decay)
- HTTP actions — webhook endpoints, API routes, tracking pixels
This means no PostgreSQL, no MinIO, no separate vector database. The Convex backend is the single stateful service.
## Docker Compose
```yaml
services:
  convex:
    image: ghcr.io/get-convex/convex-backend
    ports:
      - "3210:3210" # Backend API
      - "3211:3211" # HTTP actions
      - "6791:6791" # Dashboard
    volumes:
      - convex-data:/data
    environment:
      - INSTANCE_SECRET=${INSTANCE_SECRET}

  web:
    build: ./apps/web
    ports:
      - "3000:3000"
    environment:
      - CONVEX_SELF_HOSTED_URL=http://convex:3210
      - CONVEX_SELF_HOSTED_ADMIN_KEY=${CONVEX_ADMIN_KEY}

  mta:
    build: ./apps/mta
    ports:
      - "25:25"     # Inbound SMTP (bounce processing)
      - "3100:3100" # HTTP API
    environment:
      - REDIS_URL=redis://redis:6379
      - CONVEX_SITE_URL=http://convex:3211
      - MTA_API_KEY=${MTA_API_KEY}
      - MTA_WEBHOOK_SECRET=${MTA_WEBHOOK_SECRET}
    depends_on:
      - redis
      - convex

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  clamav:
    image: clamav/clamav:1.3
    volumes:
      - clamav-data:/var/lib/clamav

  # Optional: self-hosted LLM
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama
    profiles:
      - ai

volumes:
  convex-data:
  redis-data:
  clamav-data:
  ollama-data:
```
## Environment Variables

### Required
| Variable | Description |
|---|---|
| `INSTANCE_SECRET` | Convex backend instance secret (generate on first run) |
| `CONVEX_ADMIN_KEY` | Admin key for Convex (generated via `generate_admin_key.sh`) |
| `MTA_API_KEY` | Shared secret between Convex and MTA for send requests |
| `MTA_WEBHOOK_SECRET` | HMAC secret for MTA → Convex webhook authentication |
| `BETTER_AUTH_SECRET` | Authentication secret for BetterAuth sessions |
| `SITE_URL` | Public URL of the web application (e.g., `https://owlat.example.com`) |
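To make `MTA_WEBHOOK_SECRET` concrete: webhook authentication of this kind signs the raw request body with HMAC-SHA256, and the receiver recomputes the signature before trusting the payload. A minimal sketch with `openssl` (the payload shape and signing details here are illustrative assumptions, not Owlat's exact wire format):

```shell
SECRET='example-webhook-secret'                    # stands in for MTA_WEBHOOK_SECRET
BODY='{"event":"bounce","messageId":"msg_123"}'    # illustrative payload

# MTA side: sign the raw body before POSTing it to Convex
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')

# Convex side: recompute from the received body and compare
EXPECTED=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
[ "$SIG" = "$EXPECTED" ] && echo "webhook accepted"
```

A real receiver should use a constant-time comparison and reject any request whose recomputed signature does not match.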
### LLM Configuration (for Agent Pipeline)
See ADR-007: Pluggable LLM Provider for details.
| Variable | Description | Default |
|---|---|---|
| `LLM_PROVIDER` | Provider type: `openai`, `anthropic`, `ollama`, `custom` | `openai` |
| `LLM_BASE_URL` | API endpoint (for Ollama: `http://ollama:11434/v1`) | Provider default |
| `LLM_API_KEY` | API key (not needed for Ollama) | — |
| `LLM_MODEL` | Model identifier | `gpt-4o` |
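For a fully local setup, the LLM variables point at the `ollama` service from the compose file. A sketch of the relevant `.env` entries (the model name is illustrative):

```shell
LLM_PROVIDER=ollama
LLM_BASE_URL=http://ollama:11434/v1   # Ollama's OpenAI-compatible endpoint
LLM_MODEL=llama3.1                    # illustrative; use any model you have pulled
# LLM_API_KEY can be left unset for Ollama
```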
### Optional

| Variable | Description | Default |
|---|---|---|
| `BILLING_ENABLED` | Enable Stripe billing integration | `false` |
| `EMAIL_PROVIDER` | Email provider: `mta`, `ses`, `resend` | `mta` |
| `CLAMAV_HOST` | ClamAV hostname | `clamav` |
| `CLAMAV_PORT` | ClamAV port | `3310` |
## How the stack evolves
The Docker Compose file grows with each phase of the roadmap:
| Phase | Services Added | Purpose |
|---|---|---|
| Now (Email Platform) | `convex`, `web`, `mta`, `redis`, `clamav` | Full email marketing platform |
| Next (Inbound & Agents) | `ollama` (optional) | Agent pipeline, inbound email processing, verification queue |
| Then (Communication Intelligence) | — | Knowledge graph, multi-channel, CRM, file system — all run within Convex |
| Later (Complete Vision) | `code-worker` (optional) | Coding agent sidecar for feature request → prototype pipeline |
The architecture is designed so that adding AI capabilities does not add required infrastructure. The agent pipeline, knowledge graph, and semantic search all run as Convex functions. Ollama is optional — cloud users can use OpenAI or Anthropic API keys instead.
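Because Ollama sits behind the `ai` compose profile, it only starts when that profile is explicitly enabled. A usage sketch (the model name is illustrative):

```shell
# Core stack only (no LLM service)
docker compose up -d

# Core stack plus the optional Ollama service
docker compose --profile ai up -d

# Pull a model into the running Ollama container
docker compose exec ollama ollama pull llama3.1
```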
## Security & isolation
Self-hosting means running AI agents on your own infrastructure — which requires the same defense-in-depth approach applied to the email pipeline. The security model covers credential isolation, process sandboxing, and environment hygiene.
### Credential isolation
Agent pipeline functions never see raw API keys or secrets directly. All sensitive configuration flows through the Convex backend's environment variable system:
- LLM credentials — `LLM_API_KEY` is read only by `getLLMProvider()` inside Convex actions. The key never appears in agent context, LLM prompts, or log output.
- Channel provider credentials — SMS/WhatsApp API keys are stored encrypted in `channelConfigs` and decrypted only at the moment of the API call.
- Ollama on internal network — the self-hosted LLM runs on the Docker internal network (`ollama:11434`), with no API key required and no external exposure. The Ollama port is not published to the host by default.
```yaml
# Ollama is only reachable from other Docker services
ollama:
  image: ollama/ollama
  # No 'ports:' mapping — only accessible via Docker network
  networks:
    - internal
```
### Process sandboxing
Agent-generated content runs in isolated execution environments:
- Visualization agent output — rendered in `<iframe sandbox="allow-scripts">` with no access to the parent DOM, Convex client, cookies, or navigation. See Visualization Agent.
- Coding agent sidecar — the `code-worker` Docker container runs with restricted filesystem access (mounted workspace volume only), no access to the Convex backend's data volume, and no network access to internal services beyond the Convex API endpoint.
- Network isolation — agent containers communicate only with the Convex backend API. They cannot reach Redis, ClamAV, or the MTA directly.
```yaml
code-worker:
  build: ./apps/code-worker
  volumes:
    - workspace:/workspace # Isolated workspace, not convex-data
  networks:
    - agent-net # Restricted network: only Convex API access
  security_opt:
    - no-new-privileges:true
```
### Environment variable hygiene
Sensitive variables follow strict scoping rules:
| Variable Scope | Who can read | Example |
|---|---|---|
| Convex backend | Convex functions only | `INSTANCE_SECRET`, `LLM_API_KEY`, `BETTER_AUTH_SECRET` |
| MTA | MTA process only | `MTA_API_KEY`, `MTA_WEBHOOK_SECRET`, DKIM keys |
| Web | Browser-safe only | `CONVEX_SELF_HOSTED_URL` (no secrets) |
| Agent containers | Task-specific only | `CONVEX_URL` (API endpoint), `LLM_*` (if the agent needs direct LLM access) |
Convex environment variables are never passed to agent-generated code. The code-worker sidecar receives only the Convex client URL and LLM configuration — it authenticates to Convex via a scoped API token, not the admin key.
## Getting started
```shell
# 1. Clone the repository
git clone https://github.com/owlat/owlat.git
cd owlat

# 2. Start the stack
docker compose up -d

# 3. Generate admin credentials
docker compose exec convex ./generate_admin_key.sh

# 4. Deploy the Convex functions
echo 'CONVEX_SELF_HOSTED_URL=http://localhost:3210' > .env.local
echo 'CONVEX_SELF_HOSTED_ADMIN_KEY=<your-key>' >> .env.local
npx convex deploy

# 5. Open the dashboard
open http://localhost:3000
```
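Step 2 assumes a populated `.env` containing the required variables. One way to generate the shared secrets, assuming `openssl` is available (`CONVEX_ADMIN_KEY` instead comes from `generate_admin_key.sh` in step 3):

```shell
# Generate high-entropy values for the shared secrets
INSTANCE_SECRET=$(openssl rand -hex 32)       # 64 hex characters
MTA_API_KEY=$(openssl rand -hex 32)
MTA_WEBHOOK_SECRET=$(openssl rand -hex 32)
BETTER_AUTH_SECRET=$(openssl rand -base64 32)

# Write them to the .env file that docker compose reads
{
  echo "INSTANCE_SECRET=$INSTANCE_SECRET"
  echo "MTA_API_KEY=$MTA_API_KEY"
  echo "MTA_WEBHOOK_SECRET=$MTA_WEBHOOK_SECRET"
  echo "BETTER_AUTH_SECRET=$BETTER_AUTH_SECRET"
  echo "SITE_URL=https://owlat.example.com"   # replace with your public URL
} > .env
```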