Hive · process extraction for AI agents

Your agent isn't broken. Its knowledge base is.

The refund policy your support agent quoted last Tuesday lived in a Slack DM from March, not in Notion. Hive reads how your team actually works — Slack, Notion, Drive — and extracts the processes nobody ever wrote down.

Pre-launch · Taking 5 design partners
01 · The problem

The agent is reading a knowledge base that was never built for it.

why pilots stall

You shipped an agent. It works for 80% of tickets. Then it confidently quoted the wrong refund window to a customer who'd been with you for two years, and the deal team found out from the customer.

The right answer existed. It just lived in a Slack DM between your CX lead and a founder, written six months ago when you carved out an exception for annual plans. Notion still says 30 days, no exceptions. The Drive doc says escalate over $10k. Three sources, three answers, and your agent picked the most confidently worded one.

The processes your team actually runs were never written down as processes. They were decided in threads, refined in calls, contradicted in policy docs nobody updated. When an agent reads that as "knowledge," it hallucinates — not because the model is wrong, but because the source of truth doesn't exist yet.

#cx-team · pinned · 14 days ago

Maya Chen · 2:14 PM
@ben quick one — the Acme refund. They're at day 41 but it's an annual plan and they only finished onboarding three weeks ago. Do we extend?

Ben Park · 2:18 PM
Yeah do it. Annual plans, extend to 60 days if onboarding wasn't completed in the first 30. Same call we made for Cortex in February. I'll update the doc later.

Hive · extracted · 14d later
process: refund.eligibility · v3 · contradicts notion/policies/refunds#L12 · confidence 0.91 · awaiting human resolution
02 · Mechanism

Read the work. Extract the process. Hand the agent something executable.

step 01 · connect

Point Hive at where work actually happens.

Slack, Notion, Google Drive. Read-only. We index threads, docs, and decisions — not just the documents your team remembered to write.

Slack · Notion · Drive · Linear & GitHub coming soon
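A read-only connector setup for those sources might look like the sketch below. The keys, scope names, and `validate` helper are illustrative assumptions, not Hive's published API:

```python
# Hypothetical connector config — illustrative only, not Hive's actual schema.
SOURCES = {
    "slack":  {"scopes": ["channels:history", "channels:read"], "mode": "read_only"},
    "notion": {"scopes": ["read_content"], "mode": "read_only"},
    "drive":  {"scopes": ["drive.readonly"], "mode": "read_only"},
}

def validate(sources: dict) -> None:
    """Refuse any connector that asks for more than read access."""
    for name, cfg in sources.items():
        assert cfg["mode"] == "read_only", f"{name} must be read-only"

validate(SOURCES)
```

The invariant is the point: every connector is read-only by construction, so indexing can never mutate the workspace it's learning from.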
step 02 · extract & audit

Pull discrete processes out of conversations and contradictions.

We surface the rules implicit in how decisions get made, then flag every place sources disagree — so a human resolves the conflict once, not the agent every time.

claim · refund: 30d, no exceptions · notion
claim · refund: 60d, annual + onboarding · slack
claim · refund: escalate > $10k · drive
3 sources · 1 process · 1 conflict
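The audit step above reduces to grouping extracted claims by process key and flagging any key where sources disagree. A minimal sketch, assuming a hypothetical claim tuple shape (not Hive's actual extraction schema):

```python
from collections import defaultdict

# Hypothetical extracted claims: (process_key, rule, source)
claims = [
    ("refund.eligibility", "30d, no exceptions", "notion:/policies/refunds"),
    ("refund.eligibility", "60d, annual + onboarding", "slack:#cx-team"),
    ("refund.eligibility", "escalate > $10k", "drive:/cx/escalations"),
]

def audit(claims):
    """Group claims by process key; any key with more than one distinct
    rule is a conflict routed to a human for a one-time resolution."""
    by_process = defaultdict(list)
    for key, rule, source in claims:
        by_process[key].append((rule, source))
    return {
        key: rules
        for key, rules in by_process.items()
        if len({rule for rule, _ in rules}) > 1
    }

conflicts = audit(claims)
# 3 sources collapse into 1 process key with 1 conflict to resolve
```

Resolving that one conflict once is what keeps the agent from re-litigating it on every ticket.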
step 03 · output

Versioned workflows your agent can execute, verbatim.

Each process gets a version, a source trail, and a confidence score. Wire it into Decagon, Sierra, your in-house agent — anything that takes a structured workflow as input.

# refund.eligibility · v3
when request.refund:
  if plan == "annual" && onboarding.complete < 30d
    then approve until day(60)
    else approve until day(30)
  if arr > 10000 → escalate(founders)
# sources: slack:#cx-team/p1729, notion:/policies/refunds
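On the consuming side, a workflow like the one above becomes a deterministic function the agent calls instead of guessing from prose. A sketch of what executing it might look like — the field names and thresholds mirror the example above, not a published Decagon or Sierra interface:

```python
from dataclasses import dataclass

@dataclass
class RefundRequest:
    plan: str                  # "annual" or "monthly"
    day: int                   # days since purchase
    onboarding_done_day: int   # day onboarding was completed
    arr: float                 # account's annual recurring revenue

def refund_eligibility_v3(req: RefundRequest) -> str:
    """Mirrors the extracted workflow: large accounts escalate; annual
    plans with onboarding completed inside 30 days get a 60-day window;
    everyone else gets the standard 30."""
    if req.arr > 10_000:
        return "escalate:founders"
    window = 60 if req.plan == "annual" and req.onboarding_done_day < 30 else 30
    return "approve" if req.day <= window else "deny"

# The Acme case from the thread: day 41, annual, onboarding done ~day 20
print(refund_eligibility_v3(RefundRequest("annual", 41, 20, 8_000)))  # approve
```

Same input, same answer, every time — which is the property the prose version of the policy never had.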
Where Hive sits
documents · knowledge · processes

Glean indexes the documents you have. Notion AI answers from what's already written. Hive extracts the processes that were never documented in the first place — and turns them into something an agent can execute the same way every time.

03 · The wedge

The agent stack is missing a layer. We're building it.

Glean & co.
Index documents.
Search across what's already written down.
Decagon · Sierra · Ada
Act on knowledge.
Take an action given a known process.
Hive
Extract the processes.
Recover the rules that were never written down — from how the team actually works.
why this gap exists

The processes a company runs on are mostly tacit. They're decided in Slack threads, refined in customer calls, contradicted in policy docs nobody owns, and held in the heads of three or four people who joined before headcount 50. When teams adopt agents, they assume those processes are written down somewhere. They almost never are.

Indexing that mess produces a search engine over contradictions. Acting on that mess produces an agent that hallucinates the half it doesn't have. The missing layer is the one that reads the work itself — threads, edits, decisions — and produces the explicit, versioned process the other tools assume already exists. That's a structural gap, not an incidental one. It widens every quarter as teams add tools and shrink the share of work that's written down.

Founders building agents have to choose: build this layer themselves, or accept the hallucinations. Most are accepting them. We don't think they should have to.

get in touch

If you're running an agent pilot and it isn't landing, we want to talk.

We're picking five design partners before we ship. If you've already deployed an agent and the knowledge base is fighting it, send us a note — we'll come back within 24 hours with a sample audit of one process from your stack.

5 design partner slots · 24h response · read-only access