Vision
Where Owlat is heading — from email platform to unified communication intelligence powered by AI agents.
Where Owlat is going
Owlat started as an email platform. But the problem we set out to solve — helping companies communicate effectively — is much broader than email. Companies drown in fragmented communication across dozens of tools, with no intelligence layer connecting them. We are building the system that treats all organizational communication as a single, AI-powered pipeline.
Owlat currently handles email campaigns, transactional email, automations, and audience management with a custom email renderer, a real-time Convex backend, and a custom MTA. Everything described below builds on this foundation.
The problem with company communication today
Fragmentation. Companies use separate tools for email marketing, customer support, internal chat, ticketing, and CRM. Each tool has its own data silo. When a customer emails about a booking, the support agent alt-tabs to three different systems to find the context they need to reply.
Manual routing. Humans spend their days classifying, routing, and drafting responses. Most of this work is mechanical: look up data, apply a template, send. The creative, judgment-heavy work — deciding on exceptions, handling escalations, building relationships — is buried under routine.
No institutional memory. The veteran support rep who knows "when customer X says Y, they actually mean Z" — that knowledge lives in one person's head. When they leave, it disappears. There is no system that learns from every interaction and makes that knowledge available to the entire organization.
Communication is not just outbound. Companies treat human-to-human conversation and automated messaging as separate concerns — one lives in Slack and email threads, the other in marketing tools. But all of it is organizational communication. It all generates knowledge, builds relationships, and drives decisions. Any system that only handles one side is incomplete.
The Communication Hub
Owlat becomes the single place where all organizational communication flows through — inbound and outbound, customer-facing and internal, human-to-human and AI-assisted.
Today, that means email. Tomorrow, it means SMS, chat, voice, webhooks from third-party systems, and Owlat's own native channel. The architecture is channel-agnostic: every message becomes a structured event in the same pipeline, regardless of where it originates or where it needs to go.
Channel Adapters are pluggable connectors that normalize different communication channels into a unified message format. Adding a new channel — say, WhatsApp or a custom webhook — means writing an adapter, not redesigning the pipeline.
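The adapter contract can be sketched as follows. All names here are illustrative, not Owlat's actual API: the point is that each channel maps its native payload into one unified message shape before anything else sees it.

```typescript
// Hypothetical sketch of the channel-adapter contract. Every adapter
// normalizes its channel's native payload into one UnifiedMessage shape.
interface UnifiedMessage {
  organizationId: string;
  channel: string;      // "email" | "sms" | "whatsapp" | ...
  externalId: string;   // channel-native message id, preserved for threading
  from: string;
  to: string[];
  body: string;
  receivedAt: number;   // epoch milliseconds
}

interface ChannelAdapter<Raw> {
  channel: string;
  toUnified(raw: Raw): UnifiedMessage;
}

// Example adapter for a minimal inbound-email payload.
type InboundEmail = { messageId: string; from: string; to: string; text: string };

const emailAdapter: ChannelAdapter<InboundEmail> = {
  channel: "email",
  toUnified: (raw) => ({
    organizationId: "org_demo", // in practice resolved from the receiving address
    channel: "email",
    externalId: raw.messageId,
    from: raw.from,
    to: [raw.to],
    body: raw.text,
    receivedAt: Date.now(),
  }),
};
```

Adding WhatsApp or a webhook source would mean writing one more object satisfying `ChannelAdapter`; the downstream pipeline only ever sees `UnifiedMessage`.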
Human-to-human communication comes first
The pipeline does not replace direct conversation. When one person messages another — whether a colleague on the team or a customer over email — the message flows through and is delivered directly. No gates, no approval queues, no friction.
AI watches and enriches. It surfaces relevant context from the Knowledge Graph ("this customer had a billing dispute last month"), suggests follow-ups ("you promised to send the report by Friday"), and flags commitments that might otherwise be forgotten. For external adapters like email or SMS, the original message threading is fully preserved. The recipient sees a normal conversation in their inbox. They never need to know Owlat exists.
The agent pipeline is augmentation, not a gate. Humans talk to humans. AI makes those conversations more informed.
Owlat as its own channel
Owlat will become its own communication channel — a desktop app where organization members handle all their communication in one place.
Internal chat. Colleagues talk directly, in threaded conversations backed by the same Knowledge Graph that powers everything else. When you ask a question in a team channel, the system can surface relevant documents, past decisions, and related conversations before anyone types a reply.
External communication. Every conversation with every contact — customer, investor, partner — is accessible in the same interface. No switching between email, support tickets, and CRM notes. One thread per contact, across all channels.
Quick queries. Organization members can ask the system questions directly: "What is our current MRR?", "When did we last talk to Acme Corp?", "Show me the contract we signed with them." The Knowledge Graph and file system provide answers with source citations.
Owlat is a first-class adapter in the same architecture. It does not get special treatment — it follows the same pipeline, the same security model, the same audit trail as every other channel. This means anything built for one channel works for all of them.
The Agent Pipeline
Every inbound message flows through a multi-step processing chain. This is the technical heart of the system.
Step 1: Context retrieval. The agent queries the Knowledge Graph for everything relevant to this message: customer history, account status, previous interactions, organizational policies. It receives a synthesized briefing — not a raw dump of past data, but a token-budgeted summary produced by progressive compaction (recent context verbatim, older context summarized).
Step 2: Classification. The agent determines intent and urgency using a fast, cost-efficient model. Is this a billing question? A feature request? A complaint? An internal task? Multi-intent messages fork into parallel branches, each handled independently.
Step 3: Action planning. Based on classification, the agent decides what to do — using a capable model for complex reasoning. For a booking inquiry: fetch booking data and draft a response. For a feature request: create a ticket, check for duplicates, optionally spin up a task agent. For a complaint: escalate to a human immediately. The agent can actively save and recall knowledge during this step.
Step 4: Draft generation. The agent produces a response grounded in the organization's actual data, tone, and templates. This is not generic LLM output — it is a response that reflects how your organization communicates, using real data from your systems.
Step 5: Routing. The draft goes to the Verification Queue for human review, or — when the organization has configured graduated autonomy — delivers directly. Circuit breakers automatically pause auto-responses when LLM errors spike or confidence degrades.
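The five steps above can be compressed into a sketch like this. The classification and drafting are stubbed with keyword rules standing in for model calls, and every name is hypothetical; only the shape of the chain reflects the description above.

```typescript
// Illustrative sketch of the five-step chain. Keyword tests stand in for
// the fast classifier model; string literals stand in for grounded drafts.
interface PipelineResult {
  intent: string;
  draft: string;
  route: "verification_queue" | "auto_deliver" | "escalate_human";
}

function runPipeline(
  message: string,
  opts: { autoApprove: boolean; confidence: number }
): PipelineResult {
  // Steps 1-2: retrieve context and classify intent (stubbed here).
  const intent = /refund|charge|invoice/i.test(message) ? "billing" : "general";

  // Steps 3-4: plan the action and draft a grounded response.
  const draft =
    intent === "billing"
      ? "Thanks for reaching out about billing — here is what we found…"
      : "Thanks for your message — here is a summary…";

  // Step 5: route. Complaints escalate to a human immediately; only
  // high-confidence items under an auto-approval policy skip review.
  const route: PipelineResult["route"] = /complaint|unacceptable/i.test(message)
    ? "escalate_human"
    : opts.autoApprove && opts.confidence >= 0.9
      ? "auto_deliver"
      : "verification_queue";

  return { intent, draft, route };
}
```

In the real system each step would be a separate agent invocation with its own model, but the data flow — message in, classified intent, grounded draft, routing decision out — is the same.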
The Verification Queue
For organization members, work becomes a list of verifications and approvals. This is not a limitation — it is the correct default. AI should draft, humans should decide.
Every agent action produces a reviewable artifact: a draft email, a proposed ticket, a suggested code change, a data update. Members see a prioritized queue — not a firehose of notifications, but a focused list of items that need their attention.
- One-click approve when the draft is ready to go
- Edit and approve when it needs adjustments — manually or via chat with the agent
- Reject when the agent got it wrong, with feedback that improves future drafts
- Confidence scoring determines routing: high-confidence items can be auto-approved based on organizational policy
- Full audit trail for every action, whether human-initiated or agent-initiated
Over time, organizations can expand the auto-approval boundary as they build confidence in the system. The goal is not to remove humans from the loop — it is to put them in the loop only where they add value.
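The auto-approval boundary described above amounts to a policy check per queue item. A minimal sketch, with all type and field names assumed for illustration:

```typescript
// Hypothetical queue item and auto-approval policy. The policy names which
// artifact kinds may skip review, and the minimum confidence required.
interface QueueItem {
  kind: "draft_email" | "ticket" | "code_change" | "data_update";
  confidence: number; // 0..1, produced by the agent
  urgent: boolean;
}

interface AutoApprovalPolicy {
  enabledKinds: Set<QueueItem["kind"]>;
  minConfidence: number;
}

function needsHumanReview(item: QueueItem, policy: AutoApprovalPolicy): boolean {
  if (item.urgent) return true; // urgent items always get human eyes
  return !(policy.enabledKinds.has(item.kind) && item.confidence >= policy.minConfidence);
}
```

Expanding the boundary is then a configuration change — adding a kind to `enabledKinds` or lowering `minConfidence` — rather than a code change.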
The Knowledge Graph
Every organization accumulates knowledge: who their customers are, what has been decided, what happened last week, how things are done. Today, that knowledge is scattered across inboxes, documents, chat threads, and people's heads. The Knowledge Graph changes that.
"billing frustration""invoice #12345""all interactions with Acme Corp"Customer X has been on the Pro plan since 2024, upgraded in January. Billing exception granted in February ($50 credit). Prefers email. 3 recent tickets about API rate limits.
Typed knowledge, not raw data. The graph stores categorized knowledge — Facts, Decisions, Events, Preferences, Goals — each with inherent importance. This typed structure means agents retrieve relevant knowledge, not keyword matches. When an agent handles a billing question, it gets facts about the customer's plan and past billing decisions — not every message that happens to contain the word "billing."
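The difference between typed retrieval and keyword search can be shown in a few lines. Field names here are assumptions for illustration:

```typescript
// Hypothetical typed knowledge node. Retrieval filters by kind and subject,
// not by keyword match over raw text.
type KnowledgeKind = "fact" | "decision" | "event" | "preference" | "goal";

interface KnowledgeNode {
  organizationId: string;
  kind: KnowledgeKind;
  subject: string;    // e.g. a contact or account id
  statement: string;
  importance: number; // inherent weight derived from kind and recency
}

function retrieve(
  nodes: KnowledgeNode[],
  subject: string,
  kinds: KnowledgeKind[]
): KnowledgeNode[] {
  return nodes
    .filter((n) => n.subject === subject && kinds.includes(n.kind))
    .sort((a, b) => b.importance - a.importance);
}
```

A billing agent would call something like `retrieve(nodes, customerId, ["fact", "decision"])` and get the customer's plan and past billing decisions, never an unrelated message that merely contains the word "billing".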
Knowledge maintenance. The graph is a living system, not a write-only log. Stale knowledge decays over time. Contradictory facts get resolved. Duplicate entries merge. Recent interactions boost the relevance of associated knowledge. This keeps the graph accurate and useful as the organization evolves.
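One simple way to model the decay-and-boost behavior is exponential decay by age with a multiplier for recent activity. This is an illustrative formula, not Owlat's actual scoring function:

```typescript
// Illustrative relevance score: importance halves every `halfLifeDays`,
// and recent interactions with the subject apply a boost multiplier.
function relevance(
  importance: number,
  ageDays: number,
  opts: { halfLifeDays: number; recentBoost: number }
): number {
  const decayed = importance * Math.pow(0.5, ageDays / opts.halfLifeDays);
  return decayed * opts.recentBoost;
}
```

Under this model a 90-day-old fact with a 90-day half-life scores half its original importance, unless a recent interaction boosts it back up — which matches the intuition that stale knowledge fades while active relationships stay vivid.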
Each organization's knowledge graph is completely isolated. Agent context windows never mix data from different organizations. This extends the same multi-tenancy model Owlat uses today — every record scoped by organization — to the knowledge layer. Strict tenant boundaries are enforced at the storage, retrieval, and inference layers.
Internal communication and task agents
The same pipeline that handles "customer emails about a booking" also handles internal work. The architecture is identical — the agents are different.
Feature requests flow to engineering. A customer submits a feature request. The agent classifies it, checks the knowledge graph for similar existing requests, groups duplicates, creates a ticket with full context, and — when configured — spins up a coding agent to prototype a solution.
Code review in the same queue. A coding agent produces a pull request. The review lands in a developer's Verification Queue, just like a customer support draft lands in a support agent's queue. The developer reviews the code, improves it if needed, and merges. Same pattern, different artifact.
Agents pause, ask, and resume
Agents do not guess when they encounter ambiguity. If a coding agent hits an unclear requirement, or a support agent encounters a policy edge case, or a data agent needs approval for a sensitive operation — the agent pauses. It formulates a structured question with full context and places it in the relevant person's queue.
The question is not a bare notification. It includes what the agent was doing, what it already knows, what the options are, and what it recommends. The human answers — sometimes with a single click, sometimes with a detailed response. The agent then resumes from exactly where it stopped, incorporating the answer.
Multiple agents can have pending questions for the same person. The system prioritizes them by urgency and impact, just like any other queue item. Over time, the system learns from recurring questions and proposes organizational policies to eliminate them — turning ad-hoc decisions into codified rules.
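The structured question described above might carry a shape like this — every field name is hypothetical, but the content mirrors the text: what the agent was doing, what it knows, the options, and a recommendation.

```typescript
// Hypothetical artifact for a paused agent's question. It is a full
// decision package, not a bare notification.
interface AgentQuestion {
  agentId: string;
  taskId: string;                      // where to resume after the answer
  doing: string;                       // what the agent was working on
  known: string[];                     // relevant facts already gathered
  options: string[];
  recommended: number;                 // index into options
  urgency: "low" | "normal" | "high";
}

// Answering resumes the paused task with the chosen option attached.
function answer(q: AgentQuestion, chosen: number): { taskId: string; decision: string } {
  return { taskId: q.taskId, decision: q.options[chosen] };
}
```

A one-click approval is simply `answer(q, q.recommended)`; a detailed human response would replace or annotate the chosen option before the task resumes.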
The Visualize Agent
A specialized agent that takes data and builds interactive visual components — charts, dashboards, data tables, progress trackers. Not static images or screenshots, but live, interactive components that update as the underlying data changes.
Ask "Show me our email delivery rates for the last 30 days" and the Visualize Agent queries the relevant data, selects the right chart type, and produces a component you can interact with — hover for details, filter by segment, change the time range. Ask "Compare our open rates across campaigns this quarter" and it builds a comparison view.
The Visualize Agent operates within the same pipeline as every other agent. Its outputs land in the Verification Queue if configured, or render directly in a conversation. Visualizations can be pinned to dashboards, shared with team members, or embedded in reports. They are first-class artifacts, tracked in the same audit trail as every other agent output.
Internal knowledge queries. Team members ask the system questions: "What was our refund policy decision last quarter?" or "How did we handle the last outage?" The Knowledge Graph provides answers grounded in actual organizational history, with source citations.
The key insight: for all organization members, regardless of role, work converges to a single interface — a queue of verifications, approvals, and answers to provide.
The File System
Every organization has files — contracts, invoices, presentations, design assets, documentation. Today, they live in Google Drive, Dropbox, SharePoint, email attachments, and Slack threads. Finding the right file means remembering where it was saved, or who sent it, or what it was called. The file system of the future works differently.
Storage and retrieval via conversation. Files are shared and retrieved through natural language, not folder hierarchies. Drop a file into any conversation and it is indexed automatically. Ask "Find the contract we signed with Acme Corp" and the system finds it — regardless of what the file was named or who originally shared it.
Semantic organization. Every file gets embeddings generated from its content, plus auto-tags derived from the conversation context in which it was shared and the contacts or projects it relates to. A PDF shared in a thread about "Q3 financials" with "Acme Corp" automatically inherits those tags. No manual filing, no folder discipline required. Tags evolve as the organization's vocabulary evolves.
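The auto-tagging half of this can be sketched without the embedding model. Names are illustrative; the point is that tags come from conversational context, never from manual filing:

```typescript
// Illustrative sketch: a file shared in a conversation inherits tags from
// that conversation's topics and related contacts. (Embedding generation
// from file content is omitted here.)
interface ConversationContext { topics: string[]; contacts: string[] }
interface IndexedFile { name: string; tags: string[]; sharedIn: string }

function indexFile(
  name: string,
  conversationId: string,
  ctx: ConversationContext
): IndexedFile {
  // Deduplicate in case a topic and a contact share a label.
  const tags = [...new Set([...ctx.topics, ...ctx.contacts])];
  return { name, tags, sharedIn: conversationId };
}
```

This is how a PDF named `report-final-v2.pdf` ends up findable by "Q3 financials" and "Acme Corp" even though neither string appears in its filename.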
Version history with provenance. The system tracks not just what changed in a file, but who shared it, in which conversation, and why. When three versions of a contract float around, you can trace exactly which conversation produced each revision and who approved it.
Contextual retrieval for agents. Files become part of the agent's context retrieval step. When an agent handles a customer inquiry about their contract, it pulls the actual contract — not a summary of it, not a knowledge graph node about it, but the document itself. The file system and the Knowledge Graph are complementary: the graph stores structured facts, the file system stores the source material.
The file system is a layer in the same architecture. Files are scoped to the organization, searchable through the same retrieval pipeline, governed by the same permissions model, and logged in the same audit trail. There is no separate "file management" interface — files surface where they are needed, in conversations, in agent context, in search results.
The CRM Hub
Owlat already manages audiences and contacts for email campaigns. The next step is making it the definitive source of truth for every relationship the organization has — not just email subscribers.
Every contact type in one system. Customers, potential customers, investors, partners, vendors, press contacts, advisors. The CRM does not care about the label. Every person or company the organization interacts with is a contact, with a unified profile that spans all channels and all history.
Communication-native. Traditional CRMs require a separate "update the CRM" step — log the call, add the note, change the deal stage. In Owlat, the CRM builds itself from actual communication. When you email an investor, the interaction is logged automatically. When a customer complains in chat, the sentiment is captured. When a deal progresses through a conversation thread, the pipeline updates. The CRM is not a database you maintain — it is a view of your communication history.
Relationship intelligence. The Knowledge Graph powers relationship insights that go beyond "last contacted" timestamps. The system tracks sentiment trends, outstanding commitments ("you promised a proposal by next Tuesday"), recurring topics, response patterns, and relationship health scores. When you are about to meet with a contact, the system briefs you — not with a flat activity log, but with a synthesized summary of the relationship.
Contact unification across channels. The same person might email from their work address, text from their personal phone, and message from a chat handle. The system progressively merges these identities into a single contact profile, using signals from conversations, email signatures, and explicit user confirmation. One contact, one history, regardless of how they reached you.
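The progressive-merge behavior reduces to attaching newly confirmed identities to an existing contact. A minimal sketch, with assumed type names:

```typescript
// Hypothetical contact-unification primitives. An identity is a
// (channel, handle) pair; a contact accumulates confirmed identities.
interface Identity { channel: string; handle: string }
interface Contact { id: string; identities: Identity[] }

// Merge a newly linked identity into a contact, idempotently. In practice
// the link would come from a signal: a signature match, a shared thread,
// or explicit user confirmation.
function mergeIdentity(contact: Contact, id: Identity): Contact {
  const exists = contact.identities.some(
    (i) => i.channel === id.channel && i.handle === id.handle
  );
  return exists ? contact : { ...contact, identities: [...contact.identities, id] };
}
```

Because the merge is idempotent, re-observing an already-linked handle is harmless — the same person stays one contact no matter how many times or through how many channels they appear.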
Security model
AI-powered communication systems introduce real risks. We take them seriously. The security model is built around four layers.
Tenant isolation. Every query, every agent context window, every knowledge graph traversal is scoped to a single organization. There is no shared state between tenants. This extends Owlat's existing organizationId-scoped data model to every new subsystem. An agent processing messages for Organization A has zero visibility into Organization B's data.
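Scoping-by-construction can be illustrated in a few lines: if every read goes through a helper that injects the tenant filter, cross-tenant access is not merely forbidden but unexpressible. A sketch under assumed names:

```typescript
// Illustrative tenant-scoped query helper. Callers supply a predicate;
// the organizationId filter is applied unconditionally, so a query can
// never return another tenant's rows regardless of the predicate.
interface Scoped { organizationId: string }

function scopedQuery<T extends Scoped>(
  rows: T[],
  organizationId: string,
  pred: (r: T) => boolean
): T[] {
  return rows.filter((r) => r.organizationId === organizationId && pred(r));
}
```

The same idea extends to agent context assembly: the briefing builder takes an `organizationId` up front, and every retrieval it performs inherits that scope.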
Agent sandboxing. Agents that connect to external systems — booking APIs, code repositories, CRMs — operate within credential-scoped sandboxes. An agent processing a support ticket for Customer A cannot access Customer B's data, even if both customers belong to the same organization. Credentials are managed securely, never exposed in agent context, and scoped to the minimum necessary permissions.
Audit and explainability. Every agent action is logged with full provenance: what knowledge was retrieved, what classification was made, what draft was produced, and who approved it. If an agent makes a mistake, you can trace exactly why — and that trace feeds back into improving future behavior.
Graduated autonomy. Organizations control how much autonomy agents have. Start with human-in-the-loop for everything. Gradually enable auto-approval for specific categories — simple acknowledgments, routine status updates, standard booking confirmations — as confidence grows. The system earns trust incrementally. It does not assume it.
The Data Model
Everything described above — channels, conversations, agents, files, knowledge, contacts — connects through a single data model. This is not a collection of separate features bolted together. It is one graph of entities and relationships, scoped to the organization, where every surface (the desktop app, the verification queue, the CRM, the file system) is a different view of the same data.
The model has four layers. The tenant root (Organization) scopes everything — no entity exists outside an organization. Actors and channels (Members, Contacts, Channels, Agents) are the participants. Conversations are the universal hub where all participants meet, regardless of channel. Artifacts (Messages, Files, Knowledge Nodes) are what conversations produce. Every layer connects downward, and intelligence flows back up — agents read knowledge to enrich conversations, files feed into the knowledge graph, and the knowledge graph informs every future interaction.
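The four layers can be summarized as a handful of entity shapes. The fields below are illustrative, not the actual schema — note that every entity carries the tenant scope, and artifacts hang off conversations:

```typescript
// Illustrative four-layer entity sketch. Every entity below the tenant
// root carries organizationId; artifacts additionally reference the
// conversation that produced them.
interface Organization { id: string }                                // tenant root
interface Actor {                                                    // participants
  id: string; organizationId: string;
  kind: "member" | "contact" | "agent";
}
interface Channel { id: string; organizationId: string; adapter: string }
interface Conversation {                                             // universal hub
  id: string; organizationId: string;
  participantIds: string[]; channelIds: string[];
}
interface Artifact {                                                 // what conversations produce
  id: string; organizationId: string; conversationId: string;
  kind: "message" | "file" | "knowledge";
}

// Invariant check: an artifact belongs to its conversation and its tenant.
function sameTenant(conv: Conversation, artifact: Artifact): boolean {
  return conv.organizationId === artifact.organizationId
    && artifact.conversationId === conv.id;
}
```

Every surface — the desktop app, the queue, the CRM, the file system — is then a query over these same entities, which is what makes them views rather than separate products.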
The roadmap
We document architectural decisions for each phase as ADRs in our Developer docs, just as we have for the custom email renderer, Convex backend, custom MTA, process architecture, and model routing.
Why we are building this
Every company eventually builds an ad-hoc version of this: a support inbox connected to a CRM connected to a ticketing system connected to Slack, held together with Zapier and hope. That patchwork works until it does not. The knowledge is scattered. The routing is manual. The response time is slow. And when someone leaves, their knowledge leaves with them.
The file system, the CRM, the desktop app, the visualization engine — these are not separate products. They are surfaces on the same data model, connected by the same agent pipeline, governed by the same security model.
We think the right answer is a system that treats communication as a first-class data type, routes it intelligently, and lets humans focus on the decisions that actually require human judgment — not the mechanical work of looking up data and filling in templates.
Owlat today is the email layer. Tomorrow it is the communication layer. The architecture we have built — a real-time backend, a custom MTA, a block-based content system, and multi-tenant isolation — is the foundation for everything described here.
We are not building another inbox. We are building the communication intelligence layer that every organization needs but nobody has built yet.