/
CASE STUDIES
/
2026
How we built an AI operational brain to reduce founder friction


/
TIMELINE
6 weeks
/
SERVICES
Agency
How do you get operational knowledge out of founders' heads and into a system that actually answers team questions automatically?
/
OVERVIEW
Agency founders spend hours answering the same questions because operational knowledge lives in their heads or is scattered across unread documents. When the team needs context on a project or an SOP for a task, they interrupt leadership, creating a bottleneck that prevents the agency from scaling.
We built Compendium as an operational memory and decision system. It continuously pulls hard data from ClickUp and qualitative context from Obsidian, creating a single source of truth. When the team asks a question in Telegram, the system's two-tier engine retrieves the exact rule and applies multi-factor analysis to return the right answer immediately.
/
SOLUTIONS
Two-tier decision engine
We built an engine that does more than just retrieve documents. It first executes an SOP lookup to find the exact rule, then runs a multi-factor analysis to apply that rule to the specific context, ensuring the answer is accurate and immediately useful.
ClickUp and Obsidian integration
The system connects ClickUp for hard data (tasks, deadlines, statuses) with Obsidian for the contextual layer (SOPs, qualitative notes, guidelines). It maintains a single source of truth and automatically syncs across all modules.
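A simplified version of the sync step looks like this. The field names and record shape are assumptions for illustration, not the actual ClickUp or Obsidian schemas; the idea is that hard task data and qualitative notes merge into one record per project.

```python
# Illustrative sync: merge ClickUp task data with Obsidian context notes
# into a single source of truth keyed by project.

def sync(clickup_tasks: list[dict], obsidian_notes: dict[str, str]) -> dict[str, dict]:
    """Build one merged record per project from both sources."""
    truth: dict[str, dict] = {}
    for task in clickup_tasks:
        project = task["project"]
        record = truth.setdefault(project, {"tasks": [], "context": ""})
        record["tasks"].append(
            {"name": task["name"], "status": task["status"], "due": task.get("due")}
        )
    # Attach the qualitative layer (SOPs, notes, guidelines) to each project.
    for project, record in truth.items():
        record["context"] = obsidian_notes.get(project, "")
    return truth
```

Running the sync on every change keeps both modules pointing at the same merged record, so an answer about a task always carries its surrounding context.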
Multi-interface access with multi-LLM fallback
The brain lives where the team already works. We deployed it across Telegram, an HTTP API, and a CLI. To maximize reliability, the system runs on multiple LLM providers with automatic fallback: if one provider struggles or goes down, requests instantly route to another.
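The fallback logic amounts to trying providers in priority order until one answers. The sketch below assumes each provider is a simple callable; provider names and the error type are placeholders for the real client libraries.

```python
# Minimal sketch of multi-LLM fallback. Each "provider" is assumed to be a
# callable taking a prompt and returning a string answer.

class ProviderError(Exception):
    """Raised by a provider stub when it is down or rate limited."""

def ask_with_fallback(prompt: str, providers: list) -> str:
    """Try each LLM provider in order; return the first successful answer."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            last_error = exc  # provider struggled or is down; route to the next
    raise RuntimeError("All providers failed") from last_error

# Stubbed providers for demonstration:
def flaky_provider(prompt: str) -> str:
    raise ProviderError("rate limited")

def healthy_provider(prompt: str) -> str:
    return f"answer to: {prompt}"
```

Because the fallback is transparent to the caller, the Telegram, HTTP, and CLI interfaces all share the same resilience without any interface-specific handling.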
/
RESULTS
Compendium extracted founder knowledge into structured data and automated responses to routine operational questions. The team now gets instant, contextual answers directly in Telegram without interrupting founders.
The resulting system eliminated repeat questions, established a single source of truth, and maintained 100% uptime through its multi-LLM fallback architecture, freeing leadership to focus on growth rather than operational queries.
100%
Knowledge synced
0
Repeat questions
100%
System uptime