Cross-tool Search for Developers: Your Codebase Is the Wrong Place to Look
Most developer decisions live outside the code. Here's how to build cross-tool search across Slack, Linear, GitHub, and Notion.
By Ellis Keane · 2026-03-17
Your codebase is the least useful place to search for why a decision was made.
I know that sounds backwards. We spend years learning ripgrep flags, configuring IDE search, memorising regex patterns – and none of it helps when the question isn't "where is this function?" but "why did we choose this approach over the three alternatives we discussed?" The answer to that second question is almost never in the code. It's in a Slack thread from four months ago, a Linear comment that got buried under status updates, a Notion doc that someone started and never finished, and a PR review where the real debate happened in a reply to a reply to a reply.
That's the cross-tool search problem for developers – decision context is split across tools with no unified query path. We have search that works well within each tool – Slack's search is decent, GitHub's code search is excellent, Linear has filters for everything – but nothing that searches across them. The decisions that shaped your architecture live in five different places, and you're expected to remember which place to look.
Right, so – here's how to build cross-tool search with what you already have. No new tools required (well, almost – I'll mention one at the very end, but this works without it).
The Anatomy of a Scattered Decision
Let me walk through something specific. Last year, we were deciding whether to use BullMQ or Temporal for our job queue. Here's where that decision actually lived:
- Slack (#engineering): Three separate threads across two days. The first was a link someone shared to a Temporal blog post. The second was a debate about whether we needed durable execution. The third (a week later, different channel) was someone asking "wait, did we decide on the queue thing?"
- Linear: An issue titled "Evaluate job queue options" with six comments, including a comparison table that one of our engineers spent an afternoon writing.
- GitHub: A PR description for the BullMQ implementation that said "as discussed" with zero links to where it was discussed.
- Notion: A half-finished architecture decision record that covered Temporal's pros but never got updated with the final choice.
- Google Docs: Meeting notes from a call where we actually made the decision, buried in bullet points between two unrelated agenda items.
Five tools. One decision. And if you'd searched any single tool, you'd have found a fragment – never the full picture. The PR tells you what we chose. The Slack threads tell you what we considered. The Linear issue tells you the trade-offs. The Notion doc tells you half the reasoning. The meeting notes tell you the moment it was finalised.
This is not unusual. This is, somehow, the state of the art for how engineering teams track decisions in 2026. We have AI that generates code and search engines that index the entire internet, but finding out why your team chose BullMQ over Temporal requires checking five apps and hoping someone's memory holds up.
What Makes Cross-tool Search Hard for Developers
It's not an API problem – every tool we use has a perfectly good search API. The problem is weirder than that:
Different data shapes. Slack returns messages with timestamps and channel IDs. Linear returns issues with states and labels. GitHub returns commits, PRs, and code matches in completely different response formats. Merging these into a coherent timeline requires normalisation that nobody bothers to build (because, honestly, it's the kind of work that doesn't show up in sprint demos).
Context fragmentation. A Slack message saying "let's go with option B" is meaningless without the thread that defined options A, B, and C. But Slack's search returns individual messages, not conversation arcs. You find the conclusion without the reasoning.
Temporal drift. The decision process often spans days or weeks, with gaps where nothing happened because everyone was heads-down on other work. A keyword search might surface the beginning and the end of a conversation while missing the crucial middle, simply because different words were used at different stages.
In short: cross-tool search for developers is a context problem, not an API problem. Decisions are scattered across tools in incompatible shapes, fragmented into conversation arcs, and separated by temporal drift. Keyword search finds fragments; only connected context finds the full picture.
Building Cross-tool Search With What You Have
Here's the practical bit. For three or four tools with read-only search, expect half a day to get an MVP working – most of that spent on auth setup and response normalisation rather than the search logic itself.
Set Up API Access
You'll need tokens for each tool:
- Slack: A user token with search:read scope (Slack's search methods require user tokens, not bot tokens – create one via the Slack API apps page)
- Linear: A personal API key from Settings, then API
- GitHub: A fine-grained PAT with read access to your repos
- Notion: An internal integration token from Settings, then Connections
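Loading those tokens from the environment keeps them out of the script itself. A minimal sketch – the env var names (SLACK_USER_TOKEN and friends) are my own convention, not anything the tools mandate:

```typescript
type Env = Record<string, string | undefined>;

interface Tokens {
  slack: string;
  linear: string;
  github: string;
  notion: string;
}

function loadTokens(env: Env): Tokens {
  const required = {
    slack: env.SLACK_USER_TOKEN,   // user token (xoxp-) with search:read
    linear: env.LINEAR_API_KEY,    // personal API key
    github: env.GITHUB_TOKEN,      // fine-grained PAT
    notion: env.NOTION_TOKEN,      // internal integration token
  };
  // Fail fast on anything missing rather than 401-ing mid-search.
  const missing = Object.entries(required)
    .filter(([, value]) => !value)
    .map(([name]) => name);
  if (missing.length > 0) {
    throw new Error(`Missing tokens: ${missing.join(', ')}`);
  }
  return required as Tokens;
}
```

Call it once at startup with process.env so a forgotten token surfaces as a clear error instead of a cryptic 401 halfway through a search.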
The Fan-out Query Script
The basic pattern is embarrassingly simple – fire the same search query at every API and collect the results:
```typescript
interface SearchResult {
  source: 'slack' | 'linear' | 'github' | 'notion';
  title: string;
  snippet: string;
  url: string;
  timestamp: Date;
}

async function crossToolSearch(query: string): Promise<SearchResult[]> {
  const results = await Promise.all([
    searchSlack(query),
    searchLinear(query),
    searchGitHub(query),
    searchNotion(query),
  ]);

  return results
    .flat()
    .sort((a, b) => b.timestamp.getTime() - a.timestamp.getTime());
}
```
Each search* function wraps the respective API. For Slack, that's search.messages. For Linear, it's a GraphQL query against their search fields. For GitHub, it's the REST search endpoint. For Notion, it's the search endpoint with a query parameter.
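Here's one of those wrappers sketched out – searchSlack against Slack's search.messages method, assuming Node 18+ for the global fetch. The response fields used (matches, permalink, ts, channel.name) follow Slack's documented shape, but treat the details as approximate; the mapper is pulled out as a pure function so it's testable without the network:

```typescript
interface SearchResult {
  source: 'slack' | 'linear' | 'github' | 'notion';
  title: string;
  snippet: string;
  url: string;
  timestamp: Date;
}

// The subset of Slack's search.messages match object we care about.
interface SlackMatch {
  text: string;
  permalink: string;
  ts: string;                // Unix epoch seconds, as a string
  channel: { name: string };
}

// Pure mapper from Slack's shape to ours.
function slackMatchToResult(m: SlackMatch): SearchResult {
  return {
    source: 'slack',
    title: `#${m.channel.name}`,
    snippet: m.text.slice(0, 140),
    url: m.permalink,
    timestamp: new Date(parseFloat(m.ts) * 1000), // seconds -> ms
  };
}

async function searchSlack(query: string, token: string): Promise<SearchResult[]> {
  const url = `https://slack.com/api/search.messages?query=${encodeURIComponent(query)}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const body = await res.json();
  if (!body.ok) throw new Error(`Slack search failed: ${body.error}`);
  return (body.messages?.matches ?? []).map(slackMatchToResult);
}
```

The other three wrappers follow the same pattern: call the endpoint, map the tool-specific response into SearchResult.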
Normalise and Deduplicate
The tricky part isn't the search – it's making the results useful. You'll want to:
- Normalise timestamps across tools (Slack uses Unix epochs, Linear uses ISO strings, GitHub uses ISO with timezone offsets)
- Group related results – if the same Slack thread appears three times because three messages matched, collapse them into one result with the thread URL
- Rank by relevance – most APIs return their own relevance scores, but they're not comparable across tools. A simple heuristic: exact keyword matches in titles rank above body matches, and more recent results rank above older ones at equal relevance
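The first two steps can be sketched as a pair of small functions: parseTimestamp covering the three timestamp shapes, and dedupeByUrl collapsing results that share a URL (for Slack you'd first normalise message permalinks to their thread URL so siblings share a key – omitted here):

```typescript
interface SearchResult {
  source: string;
  title: string;
  snippet: string;
  url: string;
  timestamp: Date;
}

// Slack sends Unix epoch seconds (as strings); Linear and GitHub send ISO 8601.
function parseTimestamp(raw: string | number): Date {
  if (typeof raw === 'number' || /^\d+(\.\d+)?$/.test(raw)) {
    return new Date(Number(raw) * 1000); // epoch seconds -> ms
  }
  return new Date(raw); // ISO 8601, with or without offset
}

// Collapse results sharing a URL into one (keeping the earliest match),
// then sort newest-first to match the fan-out script.
function dedupeByUrl(results: SearchResult[]): SearchResult[] {
  const byUrl = new Map<string, SearchResult>();
  for (const r of results) {
    const existing = byUrl.get(r.url);
    if (!existing || r.timestamp < existing.timestamp) {
      byUrl.set(r.url, r);
    }
  }
  return Array.from(byUrl.values()).sort(
    (a, b) => b.timestamp.getTime() - a.timestamp.getTime(),
  );
}
```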
Wrap It in a CLI
I use Commander.js for this (mostly out of habit, but anything works):
```bash
$ cross-search "bullmq vs temporal"

Found 14 results across 4 tools:

[Slack] #engineering – 2025-11-14
  "I've been comparing BullMQ and Temporal for the job queue..."
  https://myteam.slack.com/archives/C0X.../p17318...

[Linear] ENG-342 – 2025-11-15
  "Evaluate job queue options – BullMQ vs Temporal"
  https://linear.app/myteam/issue/ENG-342

[GitHub] PR #289 – 2025-11-22
  "feat: implement BullMQ job queue (as discussed)"
  https://github.com/myorg/myrepo/pull/289

[Notion] Architecture Decisions – 2025-11-13
  "Job Queue Evaluation: Temporal vs BullMQ"
  https://notion.so/myteam/abc123...
```
Fourteen results, sorted by time, across four tools. You can see the full arc of the decision in one place: the Notion doc was started first, then the Slack discussion happened, then the Linear issue was created for tracking, and finally the PR landed a week later.
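The command itself is thin. A dependency-free sketch of the action body (in practice Commander supplies the argument parsing and help text; runSearchCommand and formatResult are my own names):

```typescript
interface SearchResult {
  source: string;
  title: string;
  snippet: string;
  url: string;
  timestamp: Date;
}

// Pure formatter matching the output layout above.
function formatResult(r: SearchResult): string {
  const date = r.timestamp.toISOString().slice(0, 10);
  return `[${r.source}] ${r.title} – ${date}\n  "${r.snippet}"\n  ${r.url}`;
}

// The body you'd hand to Commander's .action() – or call directly with
// process.argv.slice(2).join(' ') if you skip the dependency.
async function runSearchCommand(
  query: string,
  search: (q: string) => Promise<SearchResult[]>,
): Promise<string> {
  const results = await search(query);
  const header = `Found ${results.length} results:`;
  return [header, ...results.map(formatResult)].join('\n\n');
}
```

Taking the search function as a parameter keeps the CLI layer testable with a stub instead of four live APIs.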
Making It Actually Good
The basic version above works, but it has some frustrating edges. Here's how to improve it:
Thread expansion for Slack. When you find a matching message, fetch the entire thread with conversations.replies. The matching message might be "yeah, let's go with BullMQ" – not useful without the preceding 40 messages of debate. Display a snippet of the thread, not just the matching message.
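A sketch of that expansion, again assuming Node 18+ fetch: conversations.replies takes the channel ID and the thread's parent ts (the matching message's thread_ts, falling back to its own ts for top-level messages). The snippet helper is my own addition:

```typescript
// Fetch every message text in a Slack thread via conversations.replies.
async function fetchThread(
  token: string,
  channel: string,
  threadTs: string,
): Promise<string[]> {
  const url = `https://slack.com/api/conversations.replies?channel=${channel}&ts=${threadTs}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const body = await res.json();
  if (!body.ok) throw new Error(`Slack error: ${body.error}`);
  return body.messages.map((m: { text: string }) => m.text);
}

// Build a snippet from the start of the thread, not from the matching message.
function threadSnippet(texts: string[], maxMessages = 3): string {
  return texts.slice(0, maxMessages).join(' … ');
}
```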
PR review comments. GitHub's search API doesn't surface review comments when you search for PRs – you'll need a separate call to the pull request reviews endpoint to fetch them. That's where the real technical discussion lives.
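Fetching those review comments is one extra REST call per PR that the search surfaced – a sketch against GitHub's pull request comments endpoint, assuming Node 18+ fetch, with the response mapper kept pure so it's testable offline:

```typescript
interface ReviewComment {
  author: string;
  body: string;
}

// Pure mapper over the fields we use from GitHub's response shape.
function toComment(c: { user: { login: string }; body: string }): ReviewComment {
  return { author: c.user.login, body: c.body };
}

// GET /repos/{owner}/{repo}/pulls/{pr}/comments returns the inline
// review comments – the ones missing from PR search results.
async function fetchReviewComments(
  token: string,
  owner: string,
  repo: string,
  pr: number,
): Promise<ReviewComment[]> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/pulls/${pr}/comments`,
    {
      headers: {
        Authorization: `Bearer ${token}`,
        Accept: 'application/vnd.github+json',
      },
    },
  );
  const comments: { user: { login: string }; body: string }[] = await res.json();
  return comments.map(toComment);
}
```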
Backlinks. When you find a Linear issue, check whether any Slack messages contain that issue's URL. Slack's search supports has:link filters combined with keywords. This surfaces the informal discussion that happened around the formal tracking.
Caching. If your team generates a lot of content (and whose doesn't?), you'll hit rate limits quickly. Cache results locally with a TTL of 30 minutes – most historical decisions don't change that fast.
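The cache can be as small as a Map with expiry timestamps – a minimal sketch (the explicit now parameter exists so the eviction logic is testable without waiting on the clock):

```typescript
// In-memory TTL cache; default TTL matches the 30 minutes suggested above.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number = 30 * 60 * 1000) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now > entry.expires) {
      this.store.delete(key); // lazily evict stale entries on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expires: now + this.ttlMs });
  }
}
```

Wrap the fan-out call with it: check cache.get(query) first, run the searches on a miss, then cache.set(query, results).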
When Text Search Breaks Down
Here's where I'll be honest about the limitations. Keyword search across tools gets you surprisingly far, and then it hits a wall.
The wall is this: decisions evolve. The Slack thread about "job queues" might never mention "BullMQ" by name – instead, someone shared a link, someone else said "I like the Redis-backed option," and a third person said "agreed, let's go with that." Your search for "BullMQ" misses the entire thread because the word was never used. The people in the thread knew what "the Redis-backed option" meant. Your search doesn't.
This is fundamentally a graph problem, not a text problem. What you actually want is: "show me everything connected to the decision that led to PR #289." That means understanding that the PR references a Linear issue, which was created after a Slack discussion, which started because someone read a Notion doc. The connections are implicit – humans made them by copying URLs and saying "as discussed" – and a keyword search can't reconstruct them.
You can partially solve this by following links. Parse URLs out of Slack messages, PR descriptions, and Linear comments. Build a simple adjacency list: this Slack thread links to this Linear issue, which is referenced in this PR. Then when someone searches, you can expand results to include linked items even if they don't match the keyword.
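That link-following pass needs no graph library – extract URLs, build the adjacency list, and expand keyword matches by one hop in either direction. A sketch (the URL regex is deliberately crude; real permalinks need the normalisation mentioned earlier):

```typescript
const URL_RE = /https?:\/\/[^\s)>\]]+/g;

function extractUrls(text: string): string[] {
  return text.match(URL_RE) ?? [];
}

// Adjacency list: document URL -> set of URLs its text mentions.
function buildAdjacency(
  docs: { url: string; text: string }[],
): Map<string, Set<string>> {
  const adj = new Map<string, Set<string>>();
  for (const doc of docs) {
    const links = new Set(extractUrls(doc.text).filter((u) => u !== doc.url));
    adj.set(doc.url, links);
  }
  return adj;
}

// Expand a set of matching URLs by one hop, following links both ways:
// things a match points at, and things that point at a match.
function expandOneHop(
  matches: Set<string>,
  adj: Map<string, Set<string>>,
): Set<string> {
  const expanded = new Set(matches);
  for (const [from, tos] of adj) {
    for (const to of tos) {
      if (matches.has(from)) expanded.add(to);
      if (matches.has(to)) expanded.add(from);
    }
  }
  return expanded;
}
```

One hop is usually enough: a Slack thread linking a Linear issue linking a PR is two hops from end to end, and you can iterate expandOneHop if you want the whole chain.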
That adjacency-list approach is essentially a rudimentary knowledge graph – and it's where the real value of cross-tool search for developers lives. Not in finding individual messages, but in following the thread of a decision across every tool it touched. It's less "search" and more developer knowledge management – understanding how information flows between your tools so you can reconstruct context when you need it.
The Maintenance Problem (And a Shortcut)
The script approach works brilliantly for about three months, and then someone changes the Slack workspace, or Linear updates their GraphQL schema, or you add a new tool and nobody remembers to update the search script. I've built this exact thing twice and abandoned it twice (which probably says more about my commitment to maintenance than about the approach itself).
If you want cross-tool search for developers that stays current without babysitting, that's what tools like Sugarbug are built for – it maintains the knowledge graph automatically and keeps the connections alive as your tools change. But the DIY version above is genuinely useful if you're willing to maintain it.
Stop searching five tools separately. Sugarbug builds the knowledge graph so you can find any decision, discussion, or commit in one place.
Q: How do I search across multiple developer tools at once? A: You can build a lightweight cross-tool search by combining each tool's API – Slack's search.messages, Linear's issueSearch, and GitHub's code search endpoint – into a single script that fans out queries and merges results by timestamp. The code samples above will get you started in an afternoon. The main challenge isn't the search itself but normalising the different response formats into a coherent timeline.
Q: Does Sugarbug provide cross-tool search for developers? A: Yes. Sugarbug ingests signals from Linear, GitHub, Slack, Figma, Notion, and other tools into a knowledge graph, so you can search for a decision or discussion and find every connected thread, issue, and commit in one place. It handles the normalisation, deduplication, and link-following automatically – the bits that make the DIY approach fragile over time.
Q: Why can't I find architecture decisions in my codebase? A: Because most decisions happen in Slack threads, Linear comments, Notion docs, and PR reviews – not in the code itself. The code records the outcome of a decision (the function exists, the library was chosen), but the reasoning, trade-offs, and alternatives discussed live scattered across your communication tools. A git blame tells you who changed a line and when, but not why they chose that approach over the alternatives.
Q: Can Sugarbug replace ADR documents for decision tracking? A: Sugarbug doesn't replace ADRs, but it catches the decisions that never make it into an ADR. Most teams write ADRs for maybe 10% of their architectural choices – the rest dissolve into Slack threads and PR comments. Sugarbug surfaces those by connecting conversations to the code changes they produced, so you get decision tracking for the other 90% without changing anyone's workflow.