How to Track Decisions Across Tools When Your Team Uses Five of Them
How to track decisions across tools when they're scattered across Slack threads, Notion docs, Linear comments, and PR reviews – without decision logs.
By Ellis Keane · 2026-03-13
"Didn't we already decide this?"
Five people on the call. Nobody answers. Someone unmutes to say they think it came up in a Slack thread, maybe three weeks ago, possibly in #engineering but it could've been #backend. Our designer half-remembers a Notion comment. One of our engineers is already scrolling through Linear, looking for a comment on the issue that might've been closed. Or archived. Or moved to a different project.
The decision in question: which API versioning scheme we'd use going forward. Not a bet-the-company choice. Not an architectural cliff edge. Just a straightforward call – and, as it turned out, an object lesson in how to track decisions across tools. Or, more precisely, in how to find one specific decision that had definitely, provably, already been made, and that had now evaporated into the space between five tools that don't talk to each other.
Let me reconstruct the crime scene.
The forensic timeline of one lost decision
Here's what actually happened, pieced together after the fact (because of course I went and found it all eventually – took me the better part of an hour, which felt like a productive use of a Wednesday afternoon).
Day 1, 10:14am – One of our engineers drops a Notion doc titled "API Versioning Options" in #engineering. Three options laid out, pros and cons for each. Clean formatting. Good thinking. The kind of document that makes you feel like your team has its act together.
Day 1, 10:22am – Discussion starts in the Slack thread under the shared link. Six replies in the first twenty minutes. A genuine, useful conversation about backwards compatibility, client SDK implications, and whether header-based versioning is worth the debugging pain. Also, somewhere around reply four, a brief tangent about where to get lunch (which, honestly, produced a faster consensus than the versioning debate).
Day 1, 11:47am – Our designer, who'd been lurking, drops a one-liner: "path versioning keeps the API explorer readable, let's just do /v2/." Two thumbs-up reacts. No dissent. Decision made.
Day 1, 2:30pm – A teammate summarises the outcome in a Linear comment on the API refactor issue. Good instinct. Terrible discoverability, as it turns out, because Linear comments become functionally invisible once an issue gets closed.
Day 8 – The PR implementing /v2/ lands. The PR description references the Linear issue by number but says nothing about the versioning decision itself or the Slack thread where it actually happened. Perfectly normal. Nobody writes "by the way, here's the full genealogy of this decision" in a PR description, because nobody is a psychopath.
Day 43 – Someone new picks up a related ticket and asks: "Are we doing path versioning or header versioning?" The Notion doc? Never updated with the outcome. The Slack thread? Buried under six weeks of messages. The Linear comment? On a closed issue nobody thinks to search. The PR? You'd need to know it existed to find it.
And so five people sit on a call, staring at each other, re-deriving a decision that was already settled six weeks prior. Progress.
One Decision, Six Weeks, Five Tools

| When | Status | What happened |
|---|---|---|
| Day 1, 10:14 AM | OK | Notion doc "API Versioning Options" shared in #engineering; three options laid out |
| Day 1, 10:22 AM | OK | Slack discussion begins; productive debate on backwards compatibility and SDK implications |
| Day 1, 11:47 AM | OK | Decision reached: path versioning, /v2/ |
| Day 1, 2:30 PM | Amber | Decision summarised in a Linear comment; closed issue = poor discoverability |
| Day 8 | Amber | PR implementing /v2/ merges; description references the issue but not the decision |
| Day 43 | Missed | New developer asks "path or header versioning?" – the answer exists in four places; nobody can find it |
Where decisions go to die
The thing is, none of these tools failed. Slack did exactly what Slack does. Notion held the document beautifully. Linear tracked the issue. GitHub merged the code. Every tool performed flawlessly in isolation, which is the kind of observation that sounds like a compliment until you realise it's actually the diagnosis.
| Where it happened | Why it's unfindable later |
|---|---|
| Slack thread | You need to remember the exact words someone used six weeks ago. Good luck. |
| Linear comment | Comments on closed issues might as well be carved into the ocean floor |
| Notion doc | The doc exists, but nobody updated it with the outcome, because humans |
| GitHub PR | Conversations collapse post-merge – you'd need to know which PR to excavate |
| Meeting (verbal) | Gone entirely unless someone took notes AND put them somewhere findable |
| Email thread | Decent search, but only if you were on the right chain |
Six tools. Six search engines. Zero shared memory.
> Every tool performed flawlessly in isolation – which is the kind of observation that sounds like a compliment until you realise it's actually the diagnosis.
>
> — Chris Calo
The decision log: a beautiful corpse
If you've been nodding along, you've probably already had The Instinct. The one where you think: "Right, I'll create a Decision Log." Capital D, capital L. A gorgeous Notion database with columns for Date, Decision, Context, Stakeholders, and Status. You announce it in the team channel. You add the first three entries yourself, with meticulous detail, and it feels genuinely great.
I've built this exact artefact at three companies now (and yes, I'm aware that repeating the same failed experiment and expecting different results has a clinical name). Each time, I was absolutely certain it would stick. "This time we'll be disciplined!" We were not. You will not be either. I don't say this to be cruel – I say it because the failure mode is baked into the design.
Here's what happens. Week one: beautiful. Week two: mostly populated. Week three: someone makes a quick call in a Slack DM, and the log doesn't hear about it. Week four: a PR gets merged with an implicit architectural decision buried in a review comment, and nobody thinks to transcribe it. Week five: someone's on holiday, the remaining team decides something over lunch (the lunch tangent strikes again), and the log falls silent.
By week six your Decision Log is a memorial. A tasteful monument to good intentions, sitting in your Notion sidebar, untouched, gathering the digital equivalent of dust. I've got three of them. They're gorgeous. They're also completely useless.
Decision logs fail not because teams are undisciplined, but because they ask humans to recognise a moment as important while it’s happening, pause, context-switch to a documentation tool, and write it up with enough detail to be useful six weeks later. That’s an absurd thing to ask of people who are busy doing real work.
How to actually track decisions across tools
Manual logs fail because human nature. Per-tool search fails because fragmentation. What actually works is something that watches the full surface area of your tools and connects the dots without anyone needing to pause what they're doing.
In practice, that means four things:
Automatic ingestion. Every signal from your tools – Slack messages, Linear comments, PR reviews, Notion updates, meeting transcripts – gets captured without anyone lifting a finger. You keep working. The system keeps watching. Nobody has to pause mid-thought to open a spreadsheet and record what just happened (which, as we've established, nobody does anyway).
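To give a sense of what "capture without lifting a finger" looks like in code, here's a minimal sketch of a normalised signal – the common shape every tool's events get squashed into before anything downstream happens. The names and fields here are illustrative assumptions, not Sugarbug's actual schema; the Slack mapping assumes the standard Events API message payload.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Source(Enum):
    SLACK = "slack"
    LINEAR = "linear"
    GITHUB = "github"
    NOTION = "notion"

@dataclass
class Signal:
    """One normalised event, whatever tool it came from."""
    source: Source
    author: str
    timestamp: datetime
    text: str
    url: str  # deep link back to the original artefact

def from_slack_event(event: dict, permalink: str) -> Signal:
    """Map a raw Slack message event into the common shape."""
    return Signal(
        source=Source.SLACK,
        author=event["user"],
        timestamp=datetime.fromtimestamp(float(event["ts"]), tz=timezone.utc),
        text=event["text"],
        url=permalink,
    )
```

One adapter like this per tool, and everything downstream – classification, linking, retrieval – only ever has to deal with one shape.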
Classification. Not every message is a decision. Most are status updates, questions, or noise. The system needs to tell the difference between "should we use path or header versioning?" (a question), "let's just do /v2/" (a decision), and "/v2/ endpoint is deployed" (a status update). This is where an LLM classifier earns its keep – though it's not infallible. We had a memorable stretch where "yeah let's just do that" kept getting flagged as a major decision when it was really someone agreeing to grab coffee. Turns out you need temporal context and sender-role weighting to get the confidence score right.
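Here's a rough sketch of what that classification step might look like. The labels, prompt, and dampening heuristics are illustrative assumptions, and `llm_complete` stands in for whichever model client you'd actually use – this is the shape of the idea, not Sugarbug's implementation:

```python
import json
from datetime import timedelta

LABELS = ("decision", "question", "status_update", "noise")

PROMPT = """Classify this workplace message as one of: decision, question,
status_update, noise. Reply as JSON: {{"label": "...", "confidence": 0.0}}

Message: {text}
Sender role: {role}
Minutes since thread started: {age_min}"""

def classify(signal, thread_started, sender_role, llm_complete):
    """Classify one signal, then adjust confidence with cheap heuristics.

    `llm_complete` is any callable that takes a prompt string and returns
    the model's text completion. Assumes the model returns clean JSON,
    which in practice you'd validate and retry.
    """
    age = (signal.timestamp - thread_started) / timedelta(minutes=1)
    raw = llm_complete(
        PROMPT.format(text=signal.text, role=sender_role, age_min=int(age))
    )
    result = json.loads(raw)

    # Heuristic dampeners for the "yeah let's just do that" problem:
    # terse agreements, or agreements from someone who wasn't part of
    # the debate, are usually coffee plans rather than decisions.
    if result["label"] == "decision":
        if len(signal.text.split()) < 6:   # too terse to stand alone
            result["confidence"] *= 0.5
        if sender_role == "observer":      # wasn't in the discussion
            result["confidence"] *= 0.7
    return result
```

The interesting part isn't the prompt – it's the post-hoc weighting. The model sees one message; the heuristics see the thread's timeline and who's talking, which is exactly the context that separates a commitment from a coffee run.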
Linking. This is the bit that separates "better search" from actual decision tracking. When a decision in a Slack thread relates to a Linear issue that produced a GitHub PR – those connections need to exist because the system traced the references (issue IDs, PR numbers, thread URLs, temporal proximity), not because someone dutifully drew them by hand. The Notion doc, the Slack thread, the Linear comment, and the PR should all point at each other, automatically, because that's what happened.
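The explicit half of that tracing is unglamorous pattern matching over references people already type; the implicit half is temporal proximity. A sketch, with the Linear issue-key format as an assumption you'd tune per workspace:

```python
import re
from datetime import timedelta

# Patterns for the cross-references teams already leave behind.
LINEAR_ISSUE = re.compile(r"\b[A-Z]{2,5}-\d+\b")   # e.g. ENG-142
GITHUB_PR = re.compile(r"(?:pull/|#)(\d+)")        # e.g. #87, /pull/87
URL = re.compile(r"https?://\S+")

def explicit_links(signal):
    """References the author typed out: issue keys, PR numbers, URLs."""
    return {
        "issues": LINEAR_ISSUE.findall(signal.text),
        "prs": GITHUB_PR.findall(signal.text),
        "urls": URL.findall(signal.text),
    }

def temporally_close(a, b, window_hours=6):
    """Weak signal: two artefacts created near each other in time are
    candidates for an inferred edge, pending a shared reference."""
    return abs(a.timestamp - b.timestamp) <= timedelta(hours=window_hours)
```

An explicit reference alone is a strong edge; temporal proximity alone is a guess. The combination – "this PR mentions ENG-142, and that Slack thread linked the same issue two hours earlier" – is how the chain gets stitched together without anyone drawing it by hand.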
Retrieval. When you search for "API versioning decision", you get back the full trail – not just whichever tool you happened to search first. The Notion doc with the options, the Slack thread where the call was made, the Linear comment that summarised it, and the PR that implemented it. All connected. All without anyone having filed a single entry in a single log.
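To show the shape of that, here's a toy in-memory version. Real retrieval would match on meaning (embeddings) rather than substrings, and `DecisionGraph` is an invention for this post, not Sugarbug's implementation:

```python
from collections import defaultdict, deque

class DecisionGraph:
    """Toy graph: nodes are artefacts (Signals), edges are traced links."""

    def __init__(self):
        self.nodes = {}                # node_id -> Signal
        self.edges = defaultdict(set)  # node_id -> linked node_ids

    def link(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def trail(self, query):
        """Keyword-match any node, then walk its connections so the
        answer comes back as the whole chain, not one artefact."""
        hits = [nid for nid, s in self.nodes.items()
                if query.lower() in s.text.lower()]
        seen = set(hits)
        queue = deque(hits)
        ordered = list(hits)
        while queue:
            nid = queue.popleft()
            for neighbour in sorted(self.edges[nid] - seen):
                seen.add(neighbour)
                queue.append(neighbour)
                ordered.append(neighbour)
        return [self.nodes[nid] for nid in ordered]
```

With the versioning saga loaded, `graph.trail("versioning")` would hand back the Notion doc, the Slack thread, the Linear comment, and the PR as one connected result set – the whole chain from the timeline above, in one query.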
Two things you can try right now (genuinely, no strings attached):
- The #decisions channel. Create one in Slack and ask your team to drop a one-liner there whenever something gets decided elsewhere. It's manual, and it will decay within a month (I've established my credentials on this point), but even a partial, decaying log makes the pattern of fragmented communication painfully visible.
- The PR description habit. When you open a PR that implements a decision, add one line to the description: "Decision: [what was decided] – see [link to thread/doc]." This costs ten seconds, survives code review, and gives future-you something to actually search for. It won't catch the decisions that happen in Slack DMs or over lunch, but the ones it does catch are the ones that matter most – the ones that change the codebase.
What connected decision tracking actually changes
The archaeology becomes a query. That versioning hunt from the opening? With cross-tool indexing, it becomes a five-second search that returns every artefact in the chain, linked. Which would've saved me an embarrassing Wednesday afternoon.
Onboarding context that doesn't rot. New team members get the connected trail of why things are the way they are, instead of a wiki page last updated three months ago that everyone vaguely knows is wrong but nobody's bothered to fix. (You have one of these. Everyone has one of these.)
Fewer re-runs of the same debate. This surprised me. Once decisions are findable, "didn't we already decide this?" becomes answerable in seconds instead of dissolving into a ten-minute group hallucination where everyone remembers discussing it but nobody can confirm what was actually concluded.
Patterns you couldn't see before. When decisions are visible in aggregate, you start noticing which topics generate the longest debates and where decisions stall. Operational intelligence that no single tool can give you, because no single tool has the full picture.
How Sugarbug approaches this
The versioning hunt was roughly the final straw that pushed me to start building Sugarbug (well, that and the three dead Decision Logs weighing on my conscience). It's the system I described above – connects to your existing tools via API, feeds every signal into a knowledge graph, classifies and links automatically. The graph builds itself while your team works. Nobody documents anything, because capture is a side effect of the work itself.
We're still early (in production, pre-launch), and there are hard problems we haven't cracked – decisions that happen verbally in meetings where nobody took notes, or disambiguating "yeah, let's do that" from a genuine commitment (the coffee incident taught us a lot about confidence thresholds). But the time I spend hunting for past decisions has dropped from "regularly infuriating" to "occasionally mild," which feels like a reasonable trajectory.
Frequently Asked Questions
How do I find a decision that was made in a Slack thread weeks ago?
Without a cross-tool index, you're doing what I did – scrolling, trying different keywords, hoping you remember roughly when the conversation happened. Sugarbug ingests Slack messages alongside your other sources into a knowledge graph, so you can search by concept rather than exact keyword. It's not magic – the conversation still needs to have happened in text – but it beats the archaeological dig.
Does Sugarbug automatically track decisions across tools?
It does. Every signal from your connected tools gets classified – decisions, status updates, questions, blockers – and linked to the relevant people and tasks. When a decision surfaces in a Slack thread that relates to a Linear issue, the graph connects them without anyone having to copy-paste a link or update a log.
What's the difference between a decision log and a knowledge graph?
A decision log requires someone to write down what was decided, when, and by whom. A knowledge graph builds those connections automatically from the signals your tools are already producing – the Slack thread, the Linear issue, the PR that implemented it. One requires discipline (which, as I've thoroughly established, we're terrible at); the other requires a system.
Why do decision logs always fail?
Because the tax falls at exactly the wrong moment. You'd need to recognise a decision as important while it's happening, pause, switch to a different tool, write it up with enough context to be useful weeks later, and then get back to work without losing your thread. Every team I've seen try this abandons it within six weeks – not from laziness, but because the process fights against how people actually work.
Can small teams benefit, or is this only for large organisations?
Small teams get hit harder, in my experience. There's no dedicated PM maintaining documentation, and the "institutional memory" is one or two people who happen to have good recall. A five-person startup making dozens of micro-decisions a week across Slack, GitHub, and Notion has the same fragmentation problem as a fifty-person org – just fewer people to absorb the cost when something goes missing.
---
If you've ever sat on a call while five people try to reconstruct a decision that was already settled weeks ago, that's the exact problem we built Sugarbug to eliminate. Join the waitlist.