Engineering Metrics Without Jira
You don't need Jira to measure what matters. Here's how to track engineering health from Git, CI, and the tools your team already uses.
By Ellis Keane · 2026-03-23
Small teams that get the best engineering visibility tend to be the ones that stopped chasing the metrics Jira wants them to track.
I realise that sounds like I'm just being contrarian for the sake of it, and honestly, maybe I am a little – but I spent the better part of three years faithfully maintaining sprint boards, grooming backlogs, and updating estimates on tickets that were already half-finished before anyone opened Jira that morning. Every two weeks, we'd sit in a room (well, a Zoom – it was 2023) and squint at a velocity chart that told us exactly nothing we didn't already know from talking to each other. Engineering metrics without Jira wasn't something I went looking for. It's what happened when I stopped pretending velocity charts were telling me anything useful.
If your team is small enough to sit around one table, you probably don't need Jira to know how you're doing. You need better signals from tools you're already using.
This isn't a "Jira is bad" piece. Jira is a fine tool for organisations that need Jira-shaped tracking (and at a certain scale, they genuinely do). But if you're an engineering manager at a 10-to-40-person startup, paying for Jira solely to produce velocity charts is a bit like buying an industrial oven to reheat your lunch.
What Jira metrics actually measure
Let me be blunt: most Jira metrics measure Jira usage, not engineering output. Story points measure the team's ability to estimate story points. Velocity measures how consistently the team fills sprints to roughly the same capacity. Burndown charts measure whether someone remembered to drag tickets across the board on Thursday afternoon.
None of these are useless, exactly. But they're process metrics dressed up as developer productivity metrics, and the gap between the two is where engineering managers lose the plot.
I've been the EM who spends the better part of an hour before a stakeholder meeting massaging Jira data into a slide deck that shows "we're on track." The stakeholder nods, asks one question about the login bug from last Tuesday, and the meeting ends. The hour was for the slide deck. The actual answer was "ask the engineer."
If your metrics require more maintenance than the work they measure, the metrics are the problem.
Cycle time from Git, not from ticket boards
For small product teams, cycle time is usually the highest-signal metric you can track. It measures the duration from first commit to production deploy, and you can derive it entirely from your Git history and CI pipeline – no ticket board required.
The components:
- First commit timestamp on a branch or PR
- PR merge timestamp
- Deploy timestamp (from your CI/CD – GitHub Actions, CircleCI, whatever you're running)
The delta between the first commit and the deploy is your cycle time. Break it into segments – coding time (first commit to PR open), review time (PR open to merge), and deploy queue (merge to production) – and you've got a diagnostic that tells you where work actually stalls.
When I first did this for our team, the numbers were genuinely surprising. I'd assumed review time was our bottleneck (everyone always assumes review time is the bottleneck, don't they?). Turns out our coding-to-PR phase was fine, reviews were fine, and we were losing about two days on average between merge and deploy because our staging environment was flaky and nobody had prioritised fixing it. A velocity chart would never have surfaced that.
How to measure it
If you're on GitHub, you can pull this with the CLI:
```bash
# Most recent 50 merged PRs, with the timestamps needed for cycle time
gh pr list --state merged --limit 50 --json number,createdAt,mergedAt,headRefName
```
For deploy timestamps, most CI systems expose this via API or webhook. Map PR merge SHAs to deploy events, and you've got cycle time segmented by phase.
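Once you have both sides, the join is mechanical. Here's a minimal sketch of the SHA-to-deploy mapping and the three-segment split – the field names (`firstCommitAt`, `mergeCommitSha`, and so on) are placeholders for whatever your PR export and deploy log actually provide:

```python
from datetime import datetime

def _parse(ts):
    # GitHub-style ISO-8601 timestamps, e.g. "2026-03-01T12:00:00Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def cycle_time_segments(pr, deploys_by_sha):
    """Split one PR's cycle time into coding, review, and deploy-queue hours.

    `pr` is a dict with firstCommitAt, createdAt, mergedAt, mergeCommitSha
    (illustrative keys -- adapt to your export). `deploys_by_sha` maps
    merge-commit SHAs to deploy timestamps from your CI.
    """
    deployed_at = deploys_by_sha.get(pr["mergeCommitSha"])
    if deployed_at is None:
        return None  # merged but not deployed yet
    first_commit = _parse(pr["firstCommitAt"])
    opened = _parse(pr["createdAt"])
    merged = _parse(pr["mergedAt"])
    deployed = _parse(deployed_at)
    hours = lambda delta: delta.total_seconds() / 3600
    return {
        "coding_h": hours(opened - first_commit),
        "review_h": hours(merged - opened),
        "deploy_queue_h": hours(deployed - merged),
        "total_h": hours(deployed - first_commit),
    }
```

Run this over a few weeks of merged PRs and take the median of each segment; the segment with the fattest median is where to look first.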
Tip: If your CI doesn't expose deploy timestamps cleanly, a dead-simple approach is a Slack bot that posts to a #deploys channel with the commit SHA. You get timestamps, a human-readable log, and – as a side effect – a channel that tells you how often you ship.
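The "bot" can be as small as a post to a Slack incoming webhook from the last step of your deploy job. A sketch, assuming you've created a webhook for #deploys (the URL and environment name are yours to supply):

```python
import json
import subprocess
import urllib.request

def deploy_message(sha, environment="production"):
    """Build the Slack payload for one deploy announcement."""
    return {"text": f"Deployed `{sha}` to {environment}"}

def announce_deploy(webhook_url, environment="production"):
    """Post the current commit SHA to #deploys.

    Call this as the final step of the deploy pipeline; the message
    timestamps in the channel become your deploy log.
    """
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(deploy_message(sha, environment)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```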
Code review throughput
Review throughput – the number of PRs reviewed per engineer per week, and the median time from PR open to first review – tells you more about team health than any sprint metric. It's underrated.
Why? Because review behaviour is a leading indicator. When review times creep up, it usually means engineers are overloaded, context-switching too heavily, or (and this is the uncomfortable one) avoiding each other's code. Any of those is worth knowing about before it shows up as a missed deadline two weeks later.
GitHub gives you this data through its API. The key fields are createdAt on the PR and submittedAt on the first review event.
The number I watch is median hours to first review. In our experience across a few small teams, healthy review cadence tends to sit below about 8 hours. When it climbs past a day consistently, something structural has shifted – and whatever it is, it's invisible in Jira.
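Computing that number from the fields above takes a few lines. A sketch, assuming you've already fetched PRs and attached each one's earliest review `submittedAt` under a `firstReviewAt` key (an illustrative name, not a GitHub field):

```python
import statistics
from datetime import datetime

def _parse(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def median_hours_to_first_review(prs):
    """Median hours from PR open to first review.

    `prs` is a list of dicts with createdAt and firstReviewAt (the
    submittedAt of the earliest review). PRs with no review yet are
    skipped rather than counted as zero.
    """
    waits = [
        (_parse(pr["firstReviewAt"]) - _parse(pr["createdAt"])).total_seconds() / 3600
        for pr in prs
        if pr.get("firstReviewAt")
    ]
    return statistics.median(waits) if waits else None
```

One design note: skipping unreviewed PRs understates the problem when reviews stall entirely, so it's worth also counting how many open PRs have waited longer than your threshold with no review at all.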
The meetings-to-decisions ratio
This isn't a traditional engineering metric, and I should be upfront: it's a rough heuristic, not a KPI. But for small teams, I've found it surprisingly revealing.
Count the meetings your team had this week. Count the concrete decisions that came out of them (not "we should look into X" – actual decisions with owners and next steps). Divide the latter by the former.
If fewer than half your meetings produced a decision, it's worth asking whether those meetings are earning their time. Some meetings exist to reduce risk or share context, and that's legitimate – not everything needs to end with a resolution. But when you start tracking this even informally (I literally kept a tally in my notebook), you develop a sense for which meetings are generative and which ones are just rituals nobody's questioned.
I tracked this for about a month, and it changed how I scheduled meetings more than any productivity article ever did. When you can see that your Monday standup has produced exactly zero decisions in three weeks running, cancelling it stops feeling radical.
Building engineering metrics without Jira: a lightweight dashboard
You don't need a BI tool for this. A Notion page, a Google Sheet, or a weekly Slack post with four numbers is enough:
- Median cycle time (from Git/CI) – are we shipping faster or slower?
- Median time to first review (from GitHub) – is the team reviewing promptly?
- Deploy frequency (from CI or #deploys channel) – how often are we shipping?
- Meetings-to-decisions ratio (manual tally) – are our meetings earning their keep?
Four numbers, all derivable from tools you already have, none of which require maintaining a Jira board. Update them weekly. If a number moves in the wrong direction for two consecutive weeks, investigate. If it holds steady, leave it alone.
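The weekly post and the two-week rule are both simple enough to script. A sketch – how you gather the four numbers is up to you, and the formatting is just one way to lay it out:

```python
def needs_investigation(history, higher_is_worse=True):
    """True if a metric moved the wrong way two consecutive weeks.

    `history` is the metric's weekly values, oldest first.
    """
    if len(history) < 3:
        return False
    a, b, c = history[-3:]
    if higher_is_worse:
        return c > b > a
    return c < b < a

def weekly_summary(cycle_time_h, review_h, deploys, meeting_ratio):
    """Format the four-number dashboard as a Slack-ready weekly post."""
    return (
        "Weekly engineering numbers\n"
        f"- Median cycle time: {cycle_time_h:.1f}h\n"
        f"- Median time to first review: {review_h:.1f}h\n"
        f"- Deploys: {deploys}\n"
        f"- Meetings-to-decisions: {meeting_ratio:.2f}"
    )
```

Note that `needs_investigation` flips its comparison for deploy frequency, where lower is the bad direction.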
The point isn't to build a surveillance system (please don't – your engineers will hate you, and they'll be right to). Engineering team visibility should come from the work itself, not from asking people to report on the work.
The best engineering metrics are cheap to collect, hard to game, and tell you something you can act on. Story points fail on all three counts.
When you actually do need a ticket board
I said this isn't a "Jira is bad" piece, and I meant it. There are legitimate situations where tracking metrics without a project management tool becomes genuinely irresponsible:
- Compliance-heavy industries where audit trails on task status are legally required
- Larger engineering orgs where cross-team dependencies make informal coordination untenable
- Organisations with dedicated project management functions that need a canonical source of truth across teams
If that's your situation, Jira (or Linear, or Shortcut) is the right call. What I'm arguing is that for small teams, maintaining a heavyweight tool solely for metrics is a bad trade. You end up optimising for the tool's model of work rather than your team's actual work.
And honestly? Even teams that use Jira would benefit from supplementing their board data with the Git-derived metrics above. Jira tells you what people say they're doing. Git tells you what they're actually doing. Both are useful, but only one is immune to status-update theatre.
If the metrics question keeps coming up at your startup, try the four-number dashboard for a month. Engineering metrics without Jira might sound like going without a safety net – but once you see how much signal lives in your Git history and CI pipeline, you'll wonder what the ticket board was adding.
Surface cycle time, stalled PRs, and review bottlenecks automatically – without custom scripts or a Jira board.
Q: Can you get meaningful engineering metrics without a project management tool? A: Yes. Cycle time (first commit to deploy), review throughput, and deploy frequency all live in your Git history and CI pipeline. For teams under about 40 engineers, these tend to be sharper and harder to game than anything a ticket board produces – and they don't require anyone to drag cards across a board to stay accurate.
Q: Does Sugarbug surface engineering metrics automatically? A: Sugarbug connects to GitHub, Linear, Slack, and calendars to build a knowledge graph of your team's activity. It flags patterns like stalled PRs, review bottlenecks, and decisions that went unresolved – which covers many of the signals described here without requiring you to write and maintain custom scripts against the GitHub API.
Q: How do I get buy-in to stop using Jira for metrics? A: Frame it as an experiment, not a revolt. Run Git-derived metrics alongside your existing Jira dashboards for four weeks, then compare which numbers prompted actual changes. If the Git metrics prove more actionable (and in our experience, they tend to), the case makes itself.
Q: What's the difference between process metrics and performance metrics? A: Process metrics (story points, velocity, burndown) measure how consistently a team follows a workflow. Performance metrics (cycle time, deploy frequency, review throughput) measure what the team actually ships and how quickly. Small teams almost always get more signal from the latter, because performance metrics are derived from the work itself rather than from manual status updates.