Open source · MIT · TypeScript
Everything is a
conversation.
Verbum gives every participant in your system (AI models, terminals, MCP servers, APIs, humans) a seat at the table. They send messages. They receive messages. The rest is routing.
The right abstraction
changes everything downstream.
Most frameworks model agent systems as function calls that happen to use language. You invoke tools. You query models. You stitch results together. It works until you need to observe it, debug it, replay it, or hand control to a human.
Verbum starts from a different premise. Every participant is an Actor. Every interaction is a Message. Your terminal is not a tool, it is in the conversation. Your MCP server is not a plugin, it is a participant. Your human is not a callback, they are first-class.
What if every part of your system could talk to every other part in a language you could actually read?
The result is a system that is observable by construction, replayable by design, and composable without ceremony.
Seven actor types.
One universal interface.
ModelActor
Any LLM, wrapped uniformly. One line to swap providers.
ProcessActor
A persistent shell session as a first-class participant. Any CLI, any subprocess, any REPL.
MCPActor
Speaks the Model Context Protocol natively. Connect to any MCP server over stdio and auto-discover its tools.
ToolActor
Deterministic functions and API wrappers. Tools that can also initiate conversations.
HumanActor
A human is just another actor. Pause any flow, inject a response, resume. Native to the model.
MemoryActor
Persistent context that participates in conversation. Ask it anything, it responds like an actor.
Router
The runtime. It dispatches, records every hop, and turns the whole run into a readable graph.
Same API. Every actor.
import { Router, ModelActor, ProcessActor, MemoryActor, anthropicAdapter } from "verbum-ai"

const router = new Router()

router.register(new ModelActor({
  id: "claude",
  provider: "anthropic",
  model: "claude-sonnet-4-20250514",
  adapter: anthropicAdapter()
}))
router.register(new ProcessActor({ id: "shell" }))
router.register(new MemoryActor({ id: "memory" }))

await router.send({
  from: "user",
  to: "claude",
  role: "user",
  conversationId: "demo",
  content: { type: "text", text: "Run the tests and tell me if we can ship." }
})
Five things you get for free
from the right abstraction.
Observable by construction
Every interaction (model to shell, agent to MCP server, human to model) is a structured, readable message. Not a log line. Not a trace ID.
Replayable and forkable
Replay any run. Fork from any message. Explore what would have happened if the model had responded differently. Debug with time travel.
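The idea needs no machinery to see: if every hop is a recorded message, a run is just an ordered log you can truncate and continue down a different branch. A minimal sketch of the pattern — the `Message` shape and `forkFrom` helper here are illustrative, not Verbum's actual API:

```typescript
// A run is an ordered log of structured messages.
interface Message {
  id: string
  from: string
  to: string
  text: string
}

// Fork a run: keep every message up to (and including) `messageId`,
// then continue the conversation along a new timeline.
function forkFrom(log: Message[], messageId: string): Message[] {
  const cut = log.findIndex(m => m.id === messageId)
  if (cut === -1) throw new Error(`unknown message: ${messageId}`)
  return log.slice(0, cut + 1) // a new branch, sharing the prefix
}

const run: Message[] = [
  { id: "m1", from: "user", to: "claude", text: "Run the tests." },
  { id: "m2", from: "claude", to: "shell", text: "npm test" },
  { id: "m3", from: "shell", to: "claude", text: "2 failures" },
]

// Branch at m2 and replay with a different shell response.
const branch = forkFrom(run, "m2")
```

The original log is untouched; a fork is a cheap slice, which is what makes "debug with time travel" practical.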
Portable context
Move a conversation between providers mid-flight. Start with Claude, hand off to a local model, resume with GPT. The context belongs to no one.
Composable without ceremony
An agent that uses a shell, an MCP server, and a memory store is three registered actors and a routing rule. No pipelines to wire. No abstractions to fight.
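The claim is about shape, not magic. Stripped of Verbum's real types, "three registered actors and a routing rule" reduces to roughly this — all names below are illustrative stand-ins, not the library's API:

```typescript
// An actor is anything that can receive a message and answer.
interface Actor {
  id: string
  receive(text: string): string
}

class MiniRouter {
  private actors = new Map<string, Actor>()

  register(actor: Actor) {
    this.actors.set(actor.id, actor)
  }

  // The routing rule: dispatch by recipient id.
  send(to: string, text: string): string {
    const actor = this.actors.get(to)
    if (!actor) throw new Error(`no actor registered as "${to}"`)
    return actor.receive(text)
  }
}

// Three participants, one interface: a shell, a memory, a tool server.
const mini = new MiniRouter()
mini.register({ id: "shell", receive: t => `ran: ${t}` })
mini.register({ id: "memory", receive: t => `stored: ${t}` })
mini.register({ id: "mcp", receive: t => `tool call: ${t}` })

mini.send("shell", "npm test") // "ran: npm test"
```

Swapping one participant for another never touches the others; only the registration line changes.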
Middleware out of the box
Logging, cost tracking, and rate limiting ship as built-in middleware. Write your own with a single function that wraps the dispatch chain.
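A sketch of what "a single function that wraps the dispatch chain" can look like. The signatures here are hypothetical (and simplified to synchronous dispatch); check Verbum's docs for the real middleware contract:

```typescript
type Msg = { from: string; to: string; text: string }
type Dispatch = (msg: Msg) => string
type Middleware = (next: Dispatch) => Dispatch

// Cost tracking: count hops as a stand-in for token accounting.
let hops = 0
const costTracking: Middleware = next => msg => {
  hops += 1
  return next(msg)
}

// Logging: observe every message on its way through the chain.
const logging: Middleware = next => msg => {
  console.log(`${msg.from} -> ${msg.to}: ${msg.text}`)
  return next(msg)
}

// The base dispatch stands in for the Router's real delivery step.
const base: Dispatch = msg => `echo: ${msg.text}`

// Composition: each middleware is one function wrapping the next.
const dispatch = logging(costTracking(base))

dispatch({ from: "user", to: "claude", text: "hi" }) // "echo: hi"
```

Because a middleware is just a function from dispatch to dispatch, composing them is plain function application, in the order you write it.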
The god-view of every
conversation your system is having.
A native Mac app that ingests streams from Verbum agents, Claude Code, Codex CLI, terminals, and custom sources, then renders them as one unified conversation surface.
Master conversation
Your main conversation with Verbum is the master thread. Spawn focused side conversations when you need them.
Claude + Codex companion
Claude task files, Codex exec runs, and terminal sessions all stream into the same typed message feed.
Typed custom sources
Bring your own JSONL source and render it beside Claude Code and Codex without inventing a one-off UI.
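At minimum, a custom source is newline-delimited JSON where each line parses to a typed message. One plausible shape, sketched below — this schema and the `my-ci-bot` source are made up for illustration; the app's actual contract lives in its docs:

```typescript
// A hypothetical typed message as one JSONL line.
interface FeedMessage {
  source: string                        // e.g. "my-ci-bot"
  role: "user" | "assistant" | "system"
  text: string
  ts: number                            // unix epoch, milliseconds
}

// Parse a JSONL stream into typed messages, skipping blank lines.
function parseJsonl(stream: string): FeedMessage[] {
  return stream
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as FeedMessage)
}

const feed = [
  '{"source":"my-ci-bot","role":"assistant","text":"build green","ts":1700000000000}',
  '{"source":"my-ci-bot","role":"system","text":"deploy queued","ts":1700000001000}',
].join("\n")

const msgs = parseJsonl(feed) // two typed messages, ready to render
```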
Stop wiring. Start talking.
MIT licensed. Built in public. The docs site is static. The Mac app is where the live integrations run.
