Self-Hosting Architecture

How Owlat runs as a fully self-hosted stack using Docker Compose — open-source Convex backend, custom MTA, and optional local LLM.


Owlat is designed to run entirely on your own infrastructure. Every component is open-source, and the entire stack deploys via a single Docker Compose file.

Why self-hosting matters

The reasons vary: data sovereignty, compliance requirements, air-gapped environments, or simply the desire for full control. Owlat does not require any cloud service to function — not even for AI features, if you run a local model.

The Stack

[Stack diagram: Convex Backend (DB + vectors + files + real-time), Web App (Nuxt dashboard & email builder), MTA (SMTP delivery & bounce processing), Redis (MTA job queue & rate limiting), ClamAV (attachment antivirus scanning), and optional Ollama (self-hosted LLM). All services run via docker compose up.]

The self-hosted stack consists of five required services and one optional service:

| Service | Role | Image |
|---|---|---|
| Convex Backend | Database, real-time subscriptions, file storage, vector search, serverless functions | ghcr.io/get-convex/convex-backend |
| Web | Nuxt application (dashboard, email builder, settings) | Built from apps/web/Dockerfile |
| MTA | Custom mail transfer agent — SMTP delivery, bounce processing, IP warming | Built from apps/mta/Dockerfile |
| Redis | Job queue for MTA, distributed coordination, rate-limiting state | redis:7-alpine |
| ClamAV | Antivirus scanning for email attachments | clamav/clamav:1.3 |
| Ollama (optional) | Self-hosted LLM for the Agent Pipeline, knowledge extraction, and semantic search | ollama/ollama |

What Convex provides natively

The open-source Convex backend (ADR-006) bundles capabilities that would otherwise require separate services:

  • Database — document-oriented with indexes, full-text search, and ACID transactions
  • Vector search — native vector indexes for embedding-based retrieval (Knowledge Graph, semantic file search)
  • File storage — binary file uploads and downloads via ctx.storage (media library, attachments, semantic files)
  • Real-time subscriptions — reactive queries that push updates to the UI over WebSocket
  • Scheduled functions — cron jobs and one-off scheduled tasks (campaign sending, usage metering, knowledge decay)
  • HTTP actions — webhook endpoints, API routes, tracking pixels

This means no PostgreSQL, no MinIO, no separate vector database. The Convex backend is the single stateful service.
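As a concrete illustration, several of these capabilities surface as ordinary Convex functions. The sketch below is illustrative only and is not runnable outside a Convex project; the table, index, and function names are hypothetical, not Owlat's actual schema:

```typescript
// convex/files.ts — illustrative sketch, names are hypothetical
import { query, action } from "./_generated/server";
import { v } from "convex/values";

// Reactive query: any UI subscribed to this re-renders when data changes.
export const fileUrl = query({
  args: { storageId: v.id("_storage") },
  handler: async (ctx, { storageId }) => {
    // File storage is built in — no separate object store needed.
    return await ctx.storage.getUrl(storageId);
  },
});

// Vector search runs in an action against a native vector index.
export const similarDocs = action({
  args: { embedding: v.array(v.float64()) },
  handler: async (ctx, { embedding }) => {
    return await ctx.vectorSearch("documents", "by_embedding", {
      vector: embedding,
      limit: 8,
    });
  },
});
```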

Docker Compose

services:
  convex:
    image: ghcr.io/get-convex/convex-backend
    ports:
      - "3210:3210"   # Backend API
      - "3211:3211"   # HTTP actions
      - "6791:6791"   # Dashboard
    volumes:
      - convex-data:/data
    environment:
      - INSTANCE_SECRET=${INSTANCE_SECRET}

  web:
    build: ./apps/web
    ports:
      - "3000:3000"
    environment:
      - CONVEX_SELF_HOSTED_URL=http://convex:3210
      - CONVEX_SELF_HOSTED_ADMIN_KEY=${CONVEX_ADMIN_KEY}

  mta:
    build: ./apps/mta
    ports:
      - "25:25"       # Inbound SMTP (bounce processing)
      - "3100:3100"   # HTTP API
    environment:
      - REDIS_URL=redis://redis:6379
      - CONVEX_SITE_URL=http://convex:3211
      - MTA_API_KEY=${MTA_API_KEY}
      - MTA_WEBHOOK_SECRET=${MTA_WEBHOOK_SECRET}
    depends_on:
      - redis
      - convex

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  clamav:
    image: clamav/clamav:1.3
    volumes:
      - clamav-data:/var/lib/clamav

  # Optional: self-hosted LLM
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama
    profiles:
      - ai

volumes:
  convex-data:
  redis-data:
  clamav-data:
  ollama-data:
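Because Ollama sits behind a Compose profile, a plain docker compose up excludes it. Enabling the optional AI service is a matter of activating the ai profile (the model name below is only an example):

```
# Start the five required services only
docker compose up -d

# Start the stack including the optional Ollama service
docker compose --profile ai up -d

# Pull a model into the Ollama volume
docker compose exec ollama ollama pull llama3.1
```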

Environment Variables

Required

| Variable | Description |
|---|---|
| INSTANCE_SECRET | Convex backend instance secret (generate on first run) |
| CONVEX_ADMIN_KEY | Admin key for Convex (generated via generate_admin_key.sh) |
| MTA_API_KEY | Shared secret between Convex and MTA for send requests |
| MTA_WEBHOOK_SECRET | HMAC secret for MTA → Convex webhook authentication |
| BETTER_AUTH_SECRET | Authentication secret for BetterAuth sessions |
| SITE_URL | Public URL of the web application (e.g., https://owlat.example.com) |
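To make the MTA_WEBHOOK_SECRET contract concrete, here is a minimal sketch of HMAC request signing as it might work between the MTA and Convex. The helper names and hex encoding are illustrative assumptions, not Owlat's actual wire format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// MTA side: sign the raw webhook body with the shared secret.
export function signPayload(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Convex side: recompute the signature and compare in constant time.
export function verifySignature(
  secret: string,
  body: string,
  signature: string,
): boolean {
  const expected = Buffer.from(signPayload(secret, body), "hex");
  const received = Buffer.from(signature, "hex");
  return (
    expected.length === received.length && timingSafeEqual(expected, received)
  );
}
```

Constant-time comparison matters here: a plain string comparison would leak how many leading bytes of a forged signature are correct.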

LLM Configuration (for Agent Pipeline)

See ADR-007: Pluggable LLM Provider for details.

| Variable | Description | Default |
|---|---|---|
| LLM_PROVIDER | Provider type: openai, anthropic, ollama, custom | openai |
| LLM_BASE_URL | API endpoint (for Ollama: http://ollama:11434/v1) | Provider default |
| LLM_API_KEY | API key (not needed for Ollama) | (none) |
| LLM_MODEL | Model identifier | gpt-4o |
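The table above maps onto a small resolution step inside the backend. A sketch of what a getLLMProvider()-style lookup could do with these variables; the function shape is hypothetical, and only the defaults mirror the table:

```typescript
interface LLMConfig {
  provider: string;
  baseUrl?: string;
  apiKey?: string;
  model: string;
}

// Hypothetical resolver: apply the documented defaults on top of the
// LLM_* environment variables.
export function resolveLLMConfig(
  env: Record<string, string | undefined>,
): LLMConfig {
  const provider = env.LLM_PROVIDER ?? "openai";
  return {
    provider,
    // Ollama exposes an OpenAI-compatible API on its own endpoint.
    baseUrl:
      env.LLM_BASE_URL ??
      (provider === "ollama" ? "http://ollama:11434/v1" : undefined),
    apiKey: env.LLM_API_KEY, // not needed for Ollama
    model: env.LLM_MODEL ?? "gpt-4o",
  };
}
```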

Optional

| Variable | Description | Default |
|---|---|---|
| BILLING_ENABLED | Enable Stripe billing integration | false |
| EMAIL_PROVIDER | Email provider: mta, ses, resend | mta |
| CLAMAV_HOST | ClamAV hostname | clamav |
| CLAMAV_PORT | ClamAV port | 3310 |

How the stack evolves

The Docker Compose file grows with each phase of the roadmap:

| Phase | Services Added | Purpose |
|---|---|---|
| Now (Email Platform) | convex, web, mta, redis, clamav | Full email marketing platform |
| Next (Inbound & Agents) | ollama (optional) | Agent pipeline, inbound email processing, verification queue |
| Then (Communication Intelligence) | (none) | Knowledge graph, multi-channel, CRM, file system — all run within Convex |
| Later (Complete Vision) | code-worker (optional) | Coding agent sidecar for feature request → prototype pipeline |

The architecture is designed so that adding AI capabilities does not add required infrastructure. The agent pipeline, knowledge graph, and semantic search all run as Convex functions. Ollama itself is optional — operators who prefer a hosted model can supply OpenAI or Anthropic API keys instead.

Security & isolation

Self-hosting means running AI agents on your own infrastructure — which requires the same defense-in-depth approach applied to the email pipeline. The security model covers credential isolation, process sandboxing, and environment hygiene.

Credential isolation

Agent pipeline functions never see raw API keys or secrets directly. All sensitive configuration flows through the Convex backend's environment variable system:

  • LLM credentials — LLM_API_KEY is read only by getLLMProvider() inside Convex actions. The key never appears in agent context, LLM prompts, or log output.
  • Channel provider credentials — SMS/WhatsApp API keys stored encrypted in channelConfigs, decrypted only at the moment of the API call.
  • Ollama on internal network — self-hosted LLM runs on the Docker internal network (ollama:11434), with no API key required and no external exposure. The Ollama port is not published to the host by default.
# Ollama is only reachable from other Docker services
ollama:
  image: ollama/ollama
  # No 'ports:' mapping — only accessible via Docker network
  networks:
    - internal
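For that snippet to work, the internal network it references must also be declared at the top level of the Compose file, and any service that calls Ollama must join it. A sketch (Compose marks a network as internal to cut off host and internet routing; a service listed with multiple networks stays on all of them):

```yaml
networks:
  internal:
    internal: true   # no route to the host network or the internet

services:
  ollama:
    networks:
      - internal
  convex:
    networks:
      - default      # keep published ports reachable
      - internal     # so Convex actions can call http://ollama:11434/v1
```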

Process sandboxing

Agent-generated content runs in isolated execution environments:

  • Visualization agent output — rendered in <iframe sandbox="allow-scripts"> with no access to the parent DOM, Convex client, cookies, or navigation. See Visualization Agent.
  • Coding agent sidecar — the code-worker Docker container runs with restricted filesystem access (mounted workspace volume only), no access to the Convex backend's data volume, and no network access to internal services beyond the Convex API endpoint.
  • Network isolation — agent containers communicate only with the Convex backend API. They cannot reach Redis, ClamAV, or the MTA directly.
code-worker:
  build: ./apps/code-worker
  volumes:
    - workspace:/workspace  # Isolated workspace, not convex-data
  networks:
    - agent-net              # Restricted network: only Convex API access
  security_opt:
    - no-new-privileges:true
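The agent-net network likewise needs a top-level declaration; marking it internal enforces the "only the Convex API" rule at the network layer rather than by convention. A sketch:

```yaml
networks:
  agent-net:
    internal: true   # the worker gets no internet egress

services:
  convex:
    networks:
      - default
      - agent-net    # the only internal service the worker can reach
```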

Environment variable hygiene

Sensitive variables follow strict scoping rules:

| Scope | Who can read | Example |
|---|---|---|
| Convex backend | Convex functions only | INSTANCE_SECRET, LLM_API_KEY, BETTER_AUTH_SECRET |
| MTA | MTA process only | MTA_API_KEY, MTA_WEBHOOK_SECRET, DKIM keys |
| Web | Browser-safe only | CONVEX_SELF_HOSTED_URL (no secrets) |
| Agent containers | Task-specific only | CONVEX_URL (API endpoint), LLM_* (if agent needs direct LLM access) |

Convex environment variables are never passed to agent-generated code. The code-worker sidecar receives only the Convex client URL and LLM configuration — it authenticates to Convex via a scoped API token, not the admin key.

Getting started

# 1. Clone the repository
git clone https://github.com/owlat/owlat.git
cd owlat

# 2. Start the stack
docker compose up -d

# 3. Generate admin credentials
docker compose exec convex ./generate_admin_key.sh

# 4. Deploy the Convex functions
echo 'CONVEX_SELF_HOSTED_URL=http://localhost:3210' > .env.local
echo 'CONVEX_SELF_HOSTED_ADMIN_KEY=<your-key>' >> .env.local
npx convex deploy

# 5. Open the dashboard
open http://localhost:3000