Dropped Balls Aren't a People Problem
Why dropped balls in project management aren't about discipline or memory, and when your team actually needs a system to catch them.
By Ellis Keane · 2026-03-17
If your team is small enough that you all eat lunch together (or at least could, hypothetically, if you were ever in the same city at the same time), you probably don't need to read this. Close the tab. Go build something. The dropped balls problem at your scale is a Wednesday afternoon Slack reminder away from being solved, and I mean that genuinely.
Still here? Right, so let's talk about dropped balls in project management – and, more specifically, about why the standard advice doesn't work once your team hits a certain size.
Before We Go Any Further
We build a tool that addresses this problem (Sugarbug – I'd be lying if I pretended otherwise), but the honest answer is that most teams reading this don't need what we're building. Not yet. Maybe not ever. What they need is to understand why balls get dropped in the first place, because the fix depends on the cause, and the cause is almost never what people think it is.
Why Balls Get Dropped
Ask most managers why balls get dropped and you'll hear a familiar list: someone forgot, someone wasn't paying attention, someone didn't follow the process. The fix, therefore, is better processes, more reminders, maybe a standup bot that nudges people every morning.
And look, sometimes that's genuinely the problem. If your one engineer forgot to update the Linear ticket and your PM didn't check before the sprint review, that's a human lapse and a process gap. Fair enough. Add a checklist. Move on.
But that's not the kind of dropped ball that keeps engineering managers up at night. The kind that really hurts is the one where everyone did their job, followed their process, updated their tools – and something still fell through the gap. Because the gap isn't between a person and their task. It's between one tool and another.
Here's what I mean. An engineer ships a PR that closes a GitHub issue. The issue was linked to a Linear ticket, and the ticket moves to "Done." Great. Except the original request came from a Slack conversation three weeks ago where the PM also mentioned a follow-up requirement that nobody ever logged as a separate task. That follow-up lives in a Slack thread from February. It's not in Linear. It's not in GitHub. It's not in anyone's sprint. It's technically a dropped ball, but no individual person dropped it – it fell through the structural gap between Slack and the task tracker.
This pattern shows up everywhere once you start looking. A designer leaves a comment in Figma flagging that an edge case contradicts the spec in Notion, but the engineer working on the feature never checks Figma and the PM never sees the comment because they're living in Linear. A customer success lead promises a feature in a call, summarises it in an email, and it never makes it into the engineering backlog because nobody bridges that particular gap. An incident post-mortem identifies three follow-up items, the doc gets shared in Slack, and two of the three items never become tracked tasks because the person who usually does that was out sick that week.
The most damaging dropped balls in project management happen in the gaps between tools, not in the gaps between people and their task lists.
The Process Fix (and Where It Stops Working)
I genuinely believe that good processes solve most of these problems for most teams. Here's what works, and I'm sharing this without any ulterior motive because, frankly, we're pre-launch and the best thing we can do right now is build trust by being useful.
The weekly sweep. One person, ideally the PM or engineering lead, spends 30 minutes every Friday going through Slack threads, meeting notes, and email to catch anything that was discussed but never tracked. Tedious? Absolutely. Effective? Surprisingly so, up to a point.
The decision log. Every decision that comes out of a Slack thread or a meeting gets pasted into a shared doc (Notion, Google Docs, doesn't matter) with the date, who decided, and what the follow-up is. This sounds simple, and it is, until you're making fifteen decisions a week across four channels and nobody remembers which ones got logged.
The linking discipline. Every PR references its Linear ticket. Every Linear ticket links to the Slack thread where the requirement was discussed. Every Notion spec links to its Linear epic. If someone breaks the chain (and someone will – it's not a matter of if), the visibility breaks with it.
These are all good practices. We use versions of all three ourselves. But they have a common failure mode: they depend on humans consistently doing a small, boring thing every single time, forever. And the research on that is not encouraging (not that I need to cite research – if you've managed a team of more than five people, you already know).
When the Process Fix Stops Scaling
There's a threshold, and I wish I could give you an exact number, but we haven't figured that out yet (honestly, it probably varies by team and by how disciplined your people are). Our working heuristic – and I want to be clear this is a heuristic, not benchmarked data – is that things start breaking somewhere around four or five tools, ten-plus people, and multiple parallel workstreams.
Not because anyone got lazy. Not because the process was bad. But because the volume of connections between tools outgrows any one person's ability to track them manually. The weekly sweep takes 90 minutes instead of 30, and the PM starts skimming. The decision log gets stale because the person maintaining it went on holiday and nobody picked it up. The linking discipline holds for Linear and GitHub but falls apart for Slack and Figma because those tools don't have the same kind of structured references.
This is (and I want to be clear about this) a scaling problem, not a discipline problem. I've watched genuinely excellent PMs and engineering leads struggle with this, people who run tight ships and care deeply about nothing falling through the cracks. At a certain scale, the problem outgrows the solution. That's not a failure of the person – it's a failure of the tooling ecosystem to provide connections between its own pieces.
"The reward for being sophisticated about your tooling is a more complex failure surface for dropped balls. I find that deeply ironic." – Ellis Keane
And here's the part that I think is genuinely unfair about this: the better your team is at using their tools, the more surface area you have for cross-tool gaps. A team that uses Linear religiously, keeps Notion specs up to date, has active Figma reviews, and communicates in well-organised Slack channels has more handoff points than a team that just uses email and a spreadsheet.
Why Your Tools Can't Help
Here's the thing that I find genuinely interesting about this whole problem, and that I don't think gets talked about enough: your tools are doing exactly what they were designed to do. Linear is excellent at tracking Linear issues. GitHub is excellent at tracking code changes. Notion is excellent at organising documents. Slack is excellent at... being Slack (for better or worse).
But none of them were designed to track the connections between each other. And work, real work, doesn't happen inside one tool – it flows across all of them. The handoff points between tools are where things disappear, and no amount of improving any individual tool fixes that. You can make Linear better at tracking issues, but that doesn't help when the issue should have been created in the first place based on a Slack conversation that happened in a channel the engineering lead doesn't monitor.
What Would Actually Fix This
I've been deliberately vague about product stuff in this post, and that's intentional – I wanted this to be useful whether or not you ever use anything we build. But since you've made it this far (and I appreciate that), let me be honest about what I think the actual fix looks like.
It's not a better task tracker. It's not a better process. It's not a standup bot or a weekly review or a shared spreadsheet. Those all help, and at small scale they're sufficient, but they're all treating the symptom.
The actual fix is something that watches the connections between your tools and flags when something doesn't add up. When a Slack decision doesn't become a ticket. When a GitHub PR closes an issue but there are unresolved comments. When a Notion spec references a requirement that's been deprioritised in a conversation the spec author never saw.
To make this concrete, let me walk through what that looks like. Say your system is watching both Slack and Linear. It sees a conversation in #engineering where someone says "we should also handle the case where the user hasn't verified their email" – that's a new requirement. If that requirement never shows up as a Linear ticket within, say, 48 hours, the system flags it. Not with a notification that screams at you (nobody needs more of those), but as an entry in a "decisions not yet tracked" view that the PM can review during their Friday sweep. Same idea for GitHub PRs that close Linear tickets but still have open review comments, or Notion specs that reference features which have been deprioritised since the spec was written.
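The core of that check is simpler than it sounds. Here's a minimal sketch of the "decisions not yet tracked" logic in Python – the data shapes, the 48-hour grace period, and the three-keyword-overlap matching rule are all illustrative assumptions of mine, not how any real product (ours included) does it:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SlackRequirement:
    channel: str      # e.g. "#engineering"
    text: str         # the message that stated the requirement
    seen_at: datetime

@dataclass
class LinearTicket:
    title: str
    created_at: datetime

def untracked_requirements(requirements, tickets,
                           grace=timedelta(hours=48), now=None):
    """Return requirements older than the grace period that have no
    plausibly matching ticket. Matching here is a deliberately crude
    keyword overlap; a real system would need something smarter."""
    now = now or datetime.utcnow()
    flagged = []
    for req in requirements:
        if now - req.seen_at < grace:
            continue  # still inside the grace period; don't nag yet
        req_words = set(req.text.lower().split())
        # "matched" if any ticket title shares at least three words
        matched = any(
            len(req_words & set(t.title.lower().split())) >= 3
            for t in tickets
        )
        if not matched:
            flagged.append(req)
    return flagged
```

Whatever this returns feeds the review view rather than firing notifications – the point is a quiet queue for the Friday sweep, not another alert channel.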
Whether you build that internally (some teams do, with webhooks, a message queue, and a modest amount of glue code), or use something off the shelf, or just accept the dropped balls as a cost of doing business – that's your call. We're building one version of this answer, but it's not the only version, and for a lot of teams it's not the right one yet.
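For teams who go the build-it-internally route, the glue code mostly amounts to normalising each tool's webhooks into one common event stream that a watcher can consume. A minimal sketch, assuming made-up payload shapes (the real Slack, Linear, and GitHub webhook bodies differ – treat every field name here as a placeholder):

```python
import queue
from dataclasses import dataclass, field

@dataclass
class WorkEvent:
    source: str    # "slack", "linear", "github", ...
    kind: str      # "message", "ticket_created", "pr_merged", ...
    ref: str       # an identifier we can later link on
    payload: dict = field(default_factory=dict)

# a single queue the gap-watcher consumes from
events: "queue.Queue[WorkEvent]" = queue.Queue()

def ingest_webhook(source: str, body: dict) -> None:
    """Normalise a raw webhook body into one common event record.
    Field names ("ts", "id", "number") are illustrative only."""
    if source == "slack":
        events.put(WorkEvent("slack", "message", body["ts"], body))
    elif source == "linear":
        events.put(WorkEvent("linear", "ticket_created", body["id"], body))
    elif source == "github":
        events.put(WorkEvent("github", "pr_merged", str(body["number"]), body))
```

The interesting (and hard) part isn't this plumbing – it's the matching logic downstream that decides when two events in the stream should have been connected and weren't.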
If you want to know when it might be the right one for you, here's my honest heuristic: if your weekly sweep takes more than 30 minutes and things are still falling through, you've outgrown the manual approach.
---
When the weekly sweep takes more than 30 minutes and things still fall through, you've outgrown the manual approach. Sugarbug watches the gaps automatically.
Q: How does Sugarbug prevent dropped balls in project management? A: Sugarbug builds a knowledge graph across your tools – Linear, GitHub, Slack, Figma, Notion – and tracks tasks, conversations, and decisions as they move between them. When something stalls or loses its connection to the original request, Sugarbug surfaces it before it falls through the gap. It's not a reminder system – it understands the relationships between items across tools and flags when those relationships break.
Q: Can Sugarbug catch tasks that get discussed in Slack but never logged? A: Yes. Sugarbug monitors Slack conversations and identifies when a decision or action item is discussed but never appears as a task in Linear or a ticket in GitHub. It flags the gap so someone can act on it. We're still refining how aggressively it should flag (nobody wants notification fatigue on top of everything else), but the core detection works.
Q: Do I need a tool to fix dropped balls, or is it a process problem? A: Honestly, it depends on scale. Small teams with two or three tools can usually fix this with better habits – a weekly review, a shared doc, a linking discipline. But once you're past four or five tools and ten-plus people, the manual approach stops scaling and you need something that tracks the connections automatically. The threshold varies by team, but you'll know when you've hit it.
Q: What's the difference between a task tracker and a signal intelligence system for project management? A: A task tracker records what you tell it. A signal intelligence system watches what's actually happening across your tools and flags when something doesn't add up – a task that's marked done but has unresolved comments, a decision that was made in Slack but never reflected in the spec. It catches the things humans forget to log, which, in our experience, is where most of these gaps actually originate.