What a Knowledge Graph for Your Work Tools Actually Looks Like
A knowledge graph for work tools isn't Google's fact box. Here's what one actually looks like when it connects Linear, Slack, Figma, and the rest of your startup's tool stack.
By Ellis Keane · 2026-03-14
In 1876, Melvil Dewey had a problem that should sound familiar. Libraries were drowning in books, and every institution had its own idiosyncratic system for organising them – or, more often, no system at all. A patron who wanted to trace a line of thought across three related works had to already know those works existed, already know where each one lived, and have a free afternoon to physically walk between the shelves. Dewey's Decimal Classification wasn't brilliant because it sorted books (people had been doing that for centuries). It was brilliant because it encoded relationships between subjects – the idea that thermodynamics and metallurgy and steam engineering were connected, even though the books sat on different floors.
Fast forward 150 years, and we've somehow managed to build the disorganised pre-Dewey library all over again, except the shelves are SaaS products and the books are Slack messages. A knowledge graph for work tools is, at its core, an attempt to solve the same problem Dewey solved – encoding relationships – but for the chaotic, fragmented mess of modern team collaboration. Progress.
The term "knowledge graph" gets thrown around with the same reckless confidence as "AI-powered" and "blockchain-enabled" – which is to say, almost nobody using it means the same thing. Google has one (the box that tells you the capital of Luxembourg when you search for it). Neo4j has one. Your company's Notion wiki is emphatically not one, despite what the consultant who sold it to you may have claimed. And somewhere in the middle of all this category confusion, there's an actually useful idea that keeps getting lost: a knowledge graph for work tools. A living graph that maps the relationships between the things your team does across Figma, Slack, Linear, GitHub, and the rest of the menagerie.
Let me try to cut through the fog.
What "knowledge graph" actually means (and what it doesn't)
Here's where the category confusion really bites. When most people hear "knowledge graph," they picture Google's Knowledge Panel – that tidy sidebar that tells you Barack Obama is 6'2" and was born in Honolulu. That's a static web of facts. Encyclopaedia Britannica with better typography. Useful, sure, but it has almost nothing to do with what a knowledge graph for work tools does.
The myth goes something like this: a knowledge graph is a big database of facts, maybe with some fancy visualisation, where someone (or something) has carefully entered all the important information about your organisation. It's a wiki, basically, but with circles and lines instead of pages and links.
The mechanism is different. A workplace knowledge graph doesn't store facts – it stores relationships between signals. Every Slack thread, every Figma comment, every Linear status change, every merged PR is a signal. The graph's entire job is to remember how those signals connect to each other: this conversation informed that decision, which produced that ticket, which was implemented in that pull request, which was reviewed by the same person who raised the original concern in a design crit three weeks earlier.
The signals are the nodes. The connections are the edges. And the edges are the whole point – without them, you just have a search index.
"The edges are what makes this a graph and not a database. Without them, you can find individual messages – but you can't find the decision a message was part of, or the six other conversations that shaped it." – Chris Calo
(You already have a search index. It's called Slack search. We'll get to why that's not enough.)
The great Notion wiki graveyard
Before we go further into the mechanism, let me take a moment to honour the fallen.
Every startup I've ever worked with – every single one – has had a Notion wiki. And every single one followed the same lifecycle: someone (usually the most organised person on the team, bless them) sets it up over a weekend. It's gorgeous. For about three weeks, people actually use it.
Then reality sets in. The wiki requires someone to physically move information from where it naturally lives – Slack conversations, Figma comments, Linear tickets – to where the wiki says it should live. That's a manual copy-paste tax on every piece of context your team generates. And, let me tell you, nobody pays that tax consistently. Not even the organised person who built the thing, because they're now too busy doing actual work to maintain the monument they built to doing actual work.
Six months later: half the pages are outdated, a quarter are contradictory, and the rest are blank templates that someone was definitely going to fill in "when things calm down." (Things never calm down. That's the other myth.)
The knowledge management industry has been selling us this same broken promise for twenty years: if you just document everything, you'll never lose context. It's a lovely theory. It founders on the same rock every time – humans don't document things in real time, and by the time they get around to it, the context has already been lost, distorted, or superseded by a Slack message nobody can find anymore.
What a knowledge graph for work tools actually stores
Right, back to the mechanism. A work knowledge graph stores two things: nodes and edges.
Nodes (the things)
- Tasks – Linear issues, GitHub issues, Jira tickets. Anything with a status and an owner.
- Conversations – Slack threads, Figma comment threads, email chains. Not individual messages – threaded discussions as units of meaning.
- People – your team, external contacts, stakeholders. Each person has a profile the graph builds over time from their interactions. (Not a profile they fill in and forget about. An actual, living profile.)
- Decisions – the moments where a team chose Path A over Path B. These are almost always implicit, buried in a Slack reply that three people saw and eleven people needed to see, rather than explicit in any decision log. A good knowledge graph surfaces them anyway.
- Artefacts – PRs, design files, documents, meeting recordings. The things your team produces.
Edges (the relationships)
The graph also stores how nodes connect:
- This Slack thread informed this Linear issue
- This person participated in this decision
- This PR implements this task
- This Figma comment blocked this design review
- This meeting produced these three action items
The edges are what makes this a graph and not a database. Without them, you can find individual messages, sure – but you can't find the decision a message was part of, or the six other conversations that shaped it.
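To make the node/edge distinction concrete, here's a minimal sketch of what a property graph like this might look like in code. Everything here – the class names, the id formats, the relation labels – is illustrative, not any product's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative node/edge model for a workplace knowledge graph.
# All names and fields here are hypothetical, not a real schema.

@dataclass(frozen=True)
class Node:
    id: str          # e.g. "linear:ENG-142" or "slack:thread-881"
    kind: str        # "task", "conversation", "person", "decision", "artefact"
    label: str

@dataclass(frozen=True)
class Edge:
    src: str         # source node id
    dst: str         # target node id
    relation: str    # "informed", "implements", "participated_in", ...
    confidence: float = 1.0

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: Node):
        self.nodes[node.id] = node

    def link(self, src: str, dst: str, relation: str, confidence: float = 1.0):
        self.edges.append(Edge(src, dst, relation, confidence))

    def neighbours(self, node_id: str):
        # Traverse edges in both directions from a node.
        for e in self.edges:
            if e.src == node_id:
                yield e.dst, e.relation
            elif e.dst == node_id:
                yield e.src, e.relation

g = Graph()
g.add_node(Node("slack:thread-881", "conversation", "auth flow debate"))
g.add_node(Node("linear:ENG-142", "task", "Rework login redirect"))
g.link("slack:thread-881", "linear:ENG-142", "informed")
```

The detail worth noticing is the `confidence` field on the edge: connections inferred from fuzzy signals aren't stored as certainties, they're stored as weighted guesses.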
How signals become knowledge (without anyone documenting anything)
Here's where the myth and the mechanism diverge most sharply. The myth says: build a knowledge base and maintain it. The mechanism says: observe what's already happening and map it automatically.
A knowledge graph that you have to maintain manually is a wiki by another name. It'll last three weeks. (See above, re: the graveyard.)
So the graph has to be automatic. Here's roughly how that works – and I'm simplifying, but the bones are right:
1. Signals come in. Every webhook, poll, and scrape from your connected tools produces a signal – a Slack message, a Linear status change, a Figma comment. A team of ten using five or six tools generates hundreds of these a day. Most people don't realise how much ambient context their team produces; they just know they can never find it when they need it.
2. Signals get classified. Is this a new task? An update to an existing one? A decision being made? Background noise? Classification happens programmatically where possible – a GitHub PR that references a Linear issue number is unambiguous. For the fuzzier signals (a Slack message that might be about the project or might just be someone sharing a recipe for banana bread), the system uses entity extraction and vector embedding similarity to match against existing graph nodes. If the embedding of a Slack message lands close enough to an existing task cluster, the link gets created as a weighted edge in the graph – a property graph, if you want the formal term – with a confidence score attached. Below threshold? Filed as context. Not forced into a connection it doesn't deserve.
3. Signals get linked. The classified signal connects to existing nodes. If someone mentions a Linear issue in a Slack thread, those two are now linked. If the same person who commented on a Figma design also opened the PR that implements it, those connections form automatically. Nobody had to document anything. Nobody had to update the wiki. (This is the core of what we're building with Sugarbug – the linking happens in the background while your team just works.)
4. The graph gets smarter over time. As cross-tool references accumulate, the graph builds a richer picture of how your team actually works – who collaborates with whom, which tools carry which kinds of decisions, and where context reliably gets lost. (In our experience, it's almost always the handoff between design and engineering. Every time. You'd think we'd have solved that one by now.)
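Step 2 is the interesting one, so here's a toy sketch of how a signal might get routed: explicit references link immediately with full confidence, fuzzier signals match by embedding similarity against a threshold. The `embed` function here is a bag-of-words stand-in – a real system would use a proper embedding model – and the threshold and regex are invented for illustration:

```python
import math
import re

# Toy bag-of-words "embedding". A real pipeline would call an
# embedding model; this stand-in keeps the sketch self-contained.
def embed(text: str) -> dict:
    vec = {}
    for word in re.findall(r"[a-z']+", text.lower()):
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Unambiguous case: an explicit issue key like "ENG-142" in the text.
ISSUE_REF = re.compile(r"\b[A-Z]{2,5}-\d+\b")

def classify(signal_text: str, tasks: dict, threshold: float = 0.35):
    """Return (task_id, confidence), or (None, score) for unlinked context."""
    ref = ISSUE_REF.search(signal_text)
    if ref and ref.group() in tasks:
        return ref.group(), 1.0
    # Fuzzy case: nearest existing task by embedding similarity.
    sig = embed(signal_text)
    best_id, best = None, 0.0
    for task_id, description in tasks.items():
        score = cosine(sig, embed(description))
        if score > best:
            best_id, best = task_id, score
    if best >= threshold:
        return best_id, best   # create a weighted edge
    return None, best          # file as unlinked context

tasks = {"ENG-142": "rework the login redirect after OAuth callback"}
print(classify("the login redirect still loops after oauth", tasks))
print(classify("anyone want my banana bread recipe?", tasks))
```

The banana bread message scores near zero against every task, so it's filed as context rather than forced into a connection – exactly the "below threshold" behaviour described above.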
Why Slack search, Zapier, and dashboards aren't this
Let me briefly address the "but can't I just..." crowd. (I was in that crowd for years. I tried everything.)
Slack search is genuinely underrated, but "searchable" and "findable" are fundamentally different things. Slack search works when you know what you're looking for and roughly when it happened. It collapses when you're reconstructing a decision made across multiple channels over the course of a week. You're looking for a relationship between conversations, not a specific message, and Slack has no model for that.
Zapier and Make can wire basic connections – "when a Linear issue moves to Done, post in Slack" – but that's plumbing, not understanding. Zapier knows that something happened. It has no concept of why, or how it connects to what preceded it. (This is the fundamental tragedy of workflow automation tools: the people who need them most have the least time to configure them.)
Dashboards tell you the numbers: 47 open issues, 12 PRs merged this week. Useful for throughput measurement, useless for causality. A dashboard can tell you a PR merged; the graph can tell you why it exists – a Figma review surfaced a bug, originally reported in a Slack thread nobody else had seen. Numbers without narrative are decorations.
What this actually unlocks
A knowledge graph for work tools isn't a wiki you maintain – it's an automatic map of relationships that forms as your team works. The value isn't in storing information; it's in encoding the connections between signals that individual tools can't see.
With connected signals – and in practice, you start seeing useful connections within the first few days of ingestion, not months – you can do things none of these individual tools support:
Find the decision, not just the message. Pull up the Linear issue for a feature, see every conversation and decision that touched it, and trace the thread back to the Figma comment where the approach was first debated. What used to require interrogating three colleagues and a commit log becomes a straightforward traversal of connected nodes.
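That traversal really is straightforward – a breadth-first walk over the edge list. A toy sketch, with invented edge data:

```python
from collections import deque

# Toy edge list: (source, relation, target). Invented for illustration.
edges = [
    ("linear:ENG-142", "informed_by", "slack:thread-881"),
    ("slack:thread-881", "informed_by", "figma:comment-17"),
    ("github:pr-509", "implements", "linear:ENG-142"),
]

def context_for(node: str, edges: list, max_depth: int = 3) -> set:
    """Breadth-first walk collecting everything connected to a node."""
    adjacency = {}
    for src, _relation, dst in edges:
        adjacency.setdefault(src, []).append(dst)
        adjacency.setdefault(dst, []).append(src)  # traverse both directions
    seen, queue = {node}, deque([(node, 0)])
    while queue:
        current, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in adjacency.get(current, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return seen - {node}

print(sorted(context_for("linear:ENG-142", edges)))
```

Starting from the Linear issue, the walk reaches the Slack thread that informed it, the Figma comment that informed the thread, and the PR that implements it – the full history in one pass.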
Prepare for meetings without the archaeology. Before a one-to-one with an engineer, the graph can surface everything relevant – what they've shipped, what's stuck, what conversations they've been part of, what decisions are still hanging. Not a dashboard of velocity metrics (those are depressing for everyone involved), but a narrative of what's actually been happening. It's the difference between spending half an hour pulling context from four different tools and having it ready when you sit down.
Spot dropped context before it becomes a dropped ball. A Figma review requested three days ago with no response? The graph catches it. A Linear issue moved to "In Progress" a week ago with no commits since? Flagged. These aren't sophisticated automations – they're pattern recognition on connected data, and they only work because the graph knows which signals relate to which tasks.
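The pattern checks themselves can be almost embarrassingly simple once the signals are linked. A hypothetical sketch – field names, thresholds, and the task record are all invented:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical staleness rules over linked signals. The rules are
# trivial ONCE the graph knows which signals belong to which task.

NOW = datetime(2026, 3, 14, tzinfo=timezone.utc)

def stale_flags(task: dict, now: datetime = NOW) -> list:
    flags = []
    # Rule 1: a review was requested, nobody has responded in 3+ days.
    requested = task.get("review_requested_at")
    if requested and task.get("review_responses", 0) == 0 \
            and now - requested > timedelta(days=3):
        flags.append("review unanswered for 3+ days")
    # Rule 2: "In Progress" but no commits linked for over a week.
    if task.get("status") == "In Progress" \
            and task.get("last_commit_at") \
            and now - task["last_commit_at"] > timedelta(days=7):
        flags.append("in progress with no commits for a week")
    return flags

task = {
    "status": "In Progress",
    "review_requested_at": NOW - timedelta(days=4),
    "review_responses": 0,
    "last_commit_at": NOW - timedelta(days=9),
}
print(stale_flags(task))
```

Neither rule requires any intelligence. What they require is the joins – knowing that this review request, these responses, and those commits all belong to the same task – and that's precisely what the graph provides.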
Stop being the human glue. This is the one that gets me. In most startups, there's a person (often the founder, sometimes an unusually diligent PM) who functions as the team's connective tissue – the one who remembers that the conversation in #design-feedback was related to the ticket in the backlog which was blocked by the thing that came up in last week's standup. That person is doing the knowledge graph's job manually, in their head, all day. It's exhausting, it doesn't scale, and when they go on holiday, the whole team loses ten IQ points. A graph replaces that human routing layer with something that doesn't need a holiday.
That's why we built Sugarbug as a knowledge layer rather than another dashboard – not aggregating numbers from your tools, but mapping the relationships between the signals flowing through them. Each new connection makes existing connections more meaningful. Dewey would've approved. (Probably. He had some other views that haven't aged well, but the classification thing was solid.)
Stop relying on one person to hold the connections between your tools in their head. Sugarbug maps the relationships automatically.
Q: What happens to the graph when someone deletes a Slack message or resolves a Figma comment? A: Once a signal has been ingested and linked, the graph retains the relationship even if the source message is deleted or the comment is resolved. The original content may be gone from Slack or Figma, but the edge – "this conversation informed this decision" – persists. That's the whole point: the graph preserves context that individual tools discard.
Q: Does Sugarbug's knowledge graph handle private channels and DMs? A: Only data sources you explicitly connect are ingested. If you connect a private Slack channel, those signals enter the graph and are visible to anyone with access to the Sugarbug workspace. DMs are never scraped unless you specifically configure a channel for it. Data stays within your team's environment – Sugarbug doesn't share signals across organisations.
Q: How does the graph handle noisy signals – like off-topic Slack chatter? A: Classification uses a confidence threshold. Signals that match existing graph nodes above the threshold get linked; signals below it are filed as unlinked context rather than forced into a connection. Over time, as the graph accumulates more reference points, the classifier gets better at distinguishing project-relevant discussion from general chatter. In our experience, the noise-to-signal ratio drops noticeably after the first week or two.
Q: Can I query the knowledge graph directly, or is it only used behind the scenes? A: Sugarbug exposes the graph through its task views and meeting prep surfaces – you see the connected context without writing queries. But the underlying data is also accessible via Sugarbug's MCP server, so you can build custom integrations or use it from other tools if you want to go deeper.
Q: How long does it take for a new signal to appear in the graph? A: Webhook-driven sources (like GitHub and Linear) appear within seconds. Polled sources (like Figma and Notion) depend on the scrape interval – typically every 30 minutes to 2 hours depending on the source. In practice, by the time you sit down to look at a task, the relevant signals are already linked.