What Did My Team Do This Week?
Why the simplest management question is the hardest to answer, and how to build a system that answers it without interrupting anyone.
By Ellis Keane · 2026-03-27
Ship captains kept logs – not because they enjoyed writing, but because three weeks into a voyage, the only way to reconstruct what happened was to have a running record that was a byproduct of the work itself. Nobody held a meeting to produce the log.
Many engineering teams in 2026 have less visibility into what happened last week than a merchant vessel had into yesterday's weather. The question "what did my team do this week" shouldn't be hard, and yet every Monday, engineering managers and product leads find themselves either scheduling a meeting to ask it, or clicking through Linear boards, GitHub feeds, Slack threads, and Notion docs trying to assemble the answer manually. The information exists – it's scattered across tools that don't talk to each other, and nobody's job is to stitch it together.
"Many engineering teams in 2026 have less visibility into what happened last week than a merchant vessel had into yesterday's weather." – Ellis Keane
Why "What Did My Team Do This Week" Is So Hard to Answer
Every tool your team uses already tracks activity. Linear knows which issues moved to "Done." GitHub knows which PRs were merged. Slack knows which threads blew up. Each tool, in isolation, has a perfectly good record of what happened.
But none of them have the full picture, and the relationships between activities across tools are invisible. A PR that closes a Linear issue that was discussed in a Slack thread that references a Figma prototype – that's one unit of work, but it shows up as four separate events in four separate feeds. If you're trying to understand what your team accomplished, you're doing the graph traversal in your head, hopping between tabs, matching timestamps, and hoping you don't miss the thread where someone quietly resolved a blocker that unblocked three other people.
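To make the fragmentation concrete, here's a minimal sketch of the correlation a manager does in their head: grouping events from different tools by a shared issue key mentioned in their text. The event records and the "ENG-482" key are entirely hypothetical; none of this is a real tool's API.

```python
import re
from collections import defaultdict

# Hypothetical activity events, as four tools might report them.
# The issue key "ENG-482" is the only thread tying them together.
events = [
    {"tool": "linear", "text": "ENG-482 moved to Done"},
    {"tool": "github", "text": "PR #311 merged: fix auth flow (ENG-482)"},
    {"tool": "slack",  "text": "thread: blocker on ENG-482 resolved"},
    {"tool": "figma",  "text": "comment on login prototype"},  # no key: invisible
]

ISSUE_KEY = re.compile(r"\b[A-Z]+-\d+\b")

def group_by_issue(events):
    """Group events that mention the same issue key; the rest stay orphaned."""
    groups, orphans = defaultdict(list), []
    for ev in events:
        keys = ISSUE_KEY.findall(ev["text"])
        if keys:
            for key in keys:
                groups[key].append(ev["tool"])
        else:
            orphans.append(ev["tool"])
    return dict(groups), orphans

groups, orphans = group_by_issue(events)
print(groups)   # {'ENG-482': ['linear', 'github', 'slack']}
print(orphans)  # ['figma'] – related work with no shared key falls out
```

Even this toy version shows the failure mode: anything that doesn't carry the magic identifier – the Figma comment, the Slack thread that said "login refactor" instead – simply drops out of the picture.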
The weekly status meeting persists because no single tool can answer the question, and nobody has time to correlate the ones that can.
What "Visibility" Actually Means (and What It Doesn't)
Before we go further (and this is worth pausing on), "team visibility" has become one of those phrases that means whatever the person saying it wants it to mean, which is part of why so many attempts to solve it end up feeling like surveillance.
What most managers actually want when they ask "what did my team do this week" is something like: which projects moved forward, what shipped, what got stuck, and is there anything I should know about before it becomes a problem? They're not trying to count commits or measure hours – they're trying to stay informed enough to be useful without requiring everyone to stop working and write reports about working.
The distinction matters because most tools that claim to offer "team visibility" are really offering activity metrics – commit counts, ticket velocity, time-in-status breakdowns. Those are useful for throughput analysis, but weak for understanding decision context. Knowing that your team merged 47 PRs tells you roughly nothing about whether the important things got done, or whether a critical decision fell through the cracks somewhere between a Slack thread and a Linear comment.
The gap between "what your team accomplished this week" and "what your tools recorded" isn't a visibility problem – it's a connection problem. The information exists across your tools; the relationships between it don't.
A Week in Five Tools: Where the Answers Live
Say you manage a team of six engineers and you want to understand what happened this week without asking anyone. Here's what each tool actually gives you:
Linear has your issue board – filter by "completed this week" and you'll see which tickets closed. But Linear can't distinguish between a closure that involved three days of architectural work and one that was a two-minute config change, and it doesn't capture work that happened outside of tickets (and there's always work outside of tickets).
GitHub has PR activity – merges, reviews, comments. Cross-referencing with Linear gives you a richer picture, but doing that manually is tedious, and it still misses the context of why a particular approach was chosen or what tradeoffs were debated.
Slack is where most of the actual decision-making happens, whether we like it or not. The important conversations are buried in threads you'd have to know existed to find. Slack search works if you know the exact phrasing someone used, but if you're fishing for "did anyone discuss the auth migration this week?" and the thread used the word "login refactor" instead, you'll miss it entirely.
Figma captures design iterations, but unless you were tagged on the relevant comments, you'd need to browse file version histories to understand what changed and why.
Notion has meeting notes, specs, and decision records – assuming people updated them (hopefully they did, but in our experience the update rate drops sharply after the first month of any new doc structure).
A complete answer to "what did my team do this week" lives across all of them, and no single feed gives you the connected view.
Workarounds That Exist (and Where They Break)
Most teams solve this with ritual and manual effort. Here's what we've seen:
The standup recap. Some teams have the EM compile a weekly summary from standup notes. This works when standups are substantive – but if they've devolved into "same as yesterday, no blockers" (and let's be honest, many have), the recap is just a formatted summary of nothing.
The Friday update thread. A Slack channel where everyone posts what they shipped. This works surprisingly well when people do it, but in our experience, participation decays within a few weeks unless someone actively nudges. The updates also become formulaic – people list the visible work and omit the invisible coordination that consumed most of their time.
The automated prompt. Tools like Geekbot or DailyBot prompt people for updates and compile digests. Better than nothing, but you're still relying on self-reported data, which means you're getting what people remember to mention rather than what actually happened.
The custom dashboard. Retool or Notion databases pulling from GitHub and Linear APIs. Good for the quantitative side, but they miss the qualitative context entirely – the discussions, the pivots, the "we tried X but it didn't work" narratives that are usually the most important part of understanding a team's week.
Each of these bridges the same gap: your tools don't share context with each other, so humans compensate manually.
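The custom-dashboard approach above reduces to something like the following sketch. The rows are shaped roughly like what you'd pull from the GitHub and Linear APIs into a Retool or Notion view; the field names and people are illustrative, not real API schemas.

```python
from collections import Counter

# Hypothetical rows approximating API responses for "merged this week"
# and "closed this week" – the inputs to a typical homegrown dashboard.
merged_prs = [
    {"author": "maya",  "title": "Fix auth flow"},
    {"author": "jon",   "title": "Bump deps"},
    {"author": "maya",  "title": "Retry queue backoff"},
]
closed_issues = [
    {"assignee": "jon",   "title": "Login redirect loop"},
    {"assignee": "priya", "title": "Spec: rate limiting"},
]

def weekly_counts(prs, issues):
    """The entire output of most homegrown dashboards: counts per person."""
    counts = Counter(pr["author"] for pr in prs)
    counts.update(issue["assignee"] for issue in issues)
    return dict(counts)

print(weekly_counts(merged_prs, closed_issues))
# A number per person – and nothing about the approach abandoned on
# Wednesday or the blocker that quietly ate half the week.
```

That's the quantitative ceiling of the dashboard approach: it can tell you *how much* happened per person, but never *what* happened or *why*.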
Removing the Human From the Reporting Loop
We've tried most of these approaches ourselves (we're a small team, so you'd think it wouldn't matter at our scale – but it does, even at five people). Template-based approaches – weekly update docs, structured standup prompts, Friday reflection threads – all work for a while and then quietly die. Not because people don't care, but because writing about what you did always feels less urgent than doing the next thing.
What we've found actually works is removing the human from the reporting step entirely. Not from the work – from the act of describing the work after the fact.
Our current hypothesis – and honestly, we're still validating this – is that the gap between "activity feed" and "useful weekly summary" is a relationship-mapping problem. An activity feed tells you a PR was merged; a cross-tool linking system tells you that the PR closed this Linear issue, which was discussed in this Slack thread last Tuesday, which referenced a design decision from Figma, and the whole thing relates to a quarterly goal in Notion. That's the difference between a list of events and an understanding of what happened.
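The relationship-mapping idea can be sketched as a tiny graph: nodes are activity items, edges are the cross-tool links between them, and a weekly summary is what you get by walking the chains. Every node name here is a hypothetical illustration of the shape of the idea, not Sugarbug's actual data model.

```python
# A toy knowledge graph. Keys and values are "tool:item" node names;
# an edge means "this item references that one". All hypothetical.
edges = {
    "github:pr/311":     ["linear:ENG-482"],
    "linear:ENG-482":    ["slack:thread/9001"],
    "slack:thread/9001": ["figma:login-v3"],
    "figma:login-v3":    ["notion:q1-auth-goal"],
}

def trace(node, edges):
    """Follow the relationship chain from one event to everything it touches."""
    chain = [node]
    while node in edges:
        node = edges[node][0]  # toy graph: one outgoing edge per node
        chain.append(node)
    return chain

print(" -> ".join(trace("github:pr/311", edges)))
# github:pr/311 -> linear:ENG-482 -> slack:thread/9001
#   -> figma:login-v3 -> notion:q1-auth-goal
```

An activity feed shows you each of those five nodes in isolation, in five different tabs; the traversal is the part that turns them into one sentence about what actually happened.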
There are real limitations here: private Slack channels the system can't see, work that happens in tools you haven't connected, conversations that happened over a video call with no written trace, and the ever-present problem of false joins where two things share a keyword but aren't actually related. We don't pretend this captures everything – but it captures far more than any self-reported system, and it captures it without interrupting anyone.
When You Genuinely Don't Need This
If your team is three people in the same room, you already know what happened this week. The "what did my team do" problem tends to appear as teams grow past the point where ambient awareness covers everything – in our experience, somewhere around six to eight people, or earlier if you're remote, across time zones, or spanning multiple disciplines that each use different primary tools.
It also matters less if your team works on one thing at a time. If everyone is heads-down on a single project with a single board, Linear's "completed this week" filter gives you most of what you need for weekly progress tracking. It's when work fragments across multiple projects, tools, and stakeholders that the information gap becomes painful enough to warrant a real solution.
If you're spending more than a few minutes on Monday morning trying to piece together what happened last week, you've probably crossed the threshold where a manual approach stops scaling.
Stop clicking through five tools to answer one question. Sugarbug assembles the weekly picture automatically from the tools where the work already happened.
Q: How does Sugarbug answer "what did my team do this week" automatically? A: Sugarbug connects to your team's tools – Linear, GitHub, Slack, Figma, Notion – and builds a knowledge graph of activity across all of them. Instead of asking each person for updates, you get an auto-generated weekly pulse that shows completed work, active threads, and decisions made, pulled directly from the tools where the work happened.
Q: Can Sugarbug replace weekly status meetings? A: For many teams, partially or fully. Sugarbug surfaces the same information a status meeting would – who worked on what, what shipped, what's blocked – without anyone having to prepare slides or write updates. Some teams keep a shorter weekly sync for discussion but eliminate the status-reporting portion entirely.
Q: What tools does Sugarbug pull weekly progress data from? A: Sugarbug currently integrates with Linear, GitHub, Slack, Figma, Notion, email, and calendar tools. Each integration feeds into a shared knowledge graph, so progress on a GitHub PR is linked to the Linear issue it addresses and the Slack thread where it was discussed.
Q: Do I need to set up automations or write Zapier workflows for this? A: No. Sugarbug's knowledge graph approach is different from trigger-action automation. Once your tools are connected, Sugarbug continuously builds context about your team's work. There are no workflows to configure or maintain.
---
If you've ever spent your Monday morning clicking through five apps trying to reconstruct what your team did last week, that's the problem we built Sugarbug to solve. See how it works.