Five Scrum KPIs to Maintain Team Accountability During a Product Release

April 17, 2015 | by Mike Maheu | Posted In Agile

Since many software development and QA teams employ some adaptation of the scrum methodology, today's milestones tend to coincide with scrum ceremonies (i.e., sprint planning, daily standup, sprint review, sprint retrospective, and backlog grooming). Certified scrum master Robert Boyd insists that those milestones provide no guarantee of progress or success. As they stand, scrum metrics maintain neither individual nor team accountability.

If you want your product release to be a success, then it's out with the old, stagnant metrics and in with new, actionable KPIs (key performance indicators). What's the difference between the two? KPIs, driven by upper management, are actionable, quantifiable leading indicators that can change over time and that measure progress toward a defined goal. Metrics, on the other hand, are lagging indicators focused solely on activity and output.

There are many metrics but only a few KPIs. While all KPIs are metrics, not all metrics are KPIs. When in doubt, state the measurement and ask, "Who cares?" If the answer is "upper management," it's a KPI; if not, it's a metric.

Here are five measurable KPIs:

1) Stories completed vs. stories committed

Measure: This is measured by comparing the number of stories committed to during sprint planning with the number of stories marked as completed during the sprint review.

Action Required Indicators:

  • The team doesn’t have an applicable reference story for making relative estimates, or not every team member understands the reference story.
  • Customer requirements are either not properly defined (leading to feature creep) or not adequately communicated to the development team.
  • The team faces multiple disruptions or has overcommitted, working at a slower-than-expected pace.
  • A single team member is burdened with making all decisions regarding estimation, design, engineering, and implementation.
  • The product has bugs.
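As a rough sketch, the measure for this KPI can be expressed as a simple commitment ratio. The function name and sample numbers below are illustrative, not from the article:

```python
def commitment_ratio(committed, completed):
    """Share of stories committed at sprint planning that were
    marked complete at the sprint review."""
    if committed == 0:
        return 0.0
    return completed / committed

# Illustrative sprint: 8 stories committed, 6 completed.
ratio = commitment_ratio(committed=8, completed=6)
print(f"{ratio:.0%} of committed stories completed")  # 75%
```

A ratio that sits well below (or consistently above) 100% is the signal to check the indicators listed above.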

2) Technical debt management (the known problems and issues delivered at the end of each sprint)

Measure: For QA testers, this is usually determined by the number of bugs identified.

Action Required Indicators:

  • No customer input has been requested, or team members are building the product based on how they think it should work rather than listening to the customer’s requirements.
  • The product’s “fully completed” stage isn’t properly defined, or other gating factors (such as the introduction of bugs) aren’t acknowledged.
  • There is extensive pressure from outside forces (including upper management and the customer) for the early release of a product.
  • The team is compromising quality by working excessively on stories during the sprint.
  • The team isn’t documenting found or fixed problems.
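One minimal way to track this KPI, assuming the team records the known open bugs at the end of each sprint (the function name and figures are illustrative):

```python
def debt_trend(open_bugs_per_sprint):
    """Flag each sprint where the count of known open bugs grew
    compared with the previous sprint."""
    return [curr > prev
            for prev, curr in zip(open_bugs_per_sprint,
                                  open_bugs_per_sprint[1:])]

# Illustrative end-of-sprint bug counts over four sprints.
print(debt_trend([3, 5, 4, 7]))  # [True, False, True]
```

A run of `True` values means technical debt is accumulating faster than it is being paid down.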

3) Team velocity (the consistency of the team’s estimates from sprint to sprint)

Measure: This is calculated by comparing story points completed in the current sprint with story points completed in the previous sprint. (Aim for no higher than a 10-percent variation.)

Action Required Indicators:

  • There are variations in team size between sprints.
  • The team has very short release cycles or is constantly doing maintenance work.
  • The team doesn’t understand or is unable to gauge the extent of the work to be completed at the sprint’s inception.
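The 10-percent guideline above is a straightforward percent-change check. A minimal sketch (names and sample numbers are illustrative):

```python
def velocity_variation(previous_points, current_points):
    """Percent change in completed story points versus the previous sprint."""
    return abs(current_points - previous_points) / previous_points

def within_threshold(previous_points, current_points, threshold=0.10):
    """True when sprint-to-sprint velocity varies by no more than
    the threshold (10 percent, per the guideline above)."""
    return velocity_variation(previous_points, current_points) <= threshold

print(within_threshold(40, 43))  # 7.5% variation  -> True
print(within_threshold(40, 33))  # 17.5% variation -> False
```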

4) Retrospective process improvement (the team’s ability to revise its development process, making it more effective and enjoyable for the next sprint)

Measure: This can be measured by retrospectively counting the items identified, the items the team committed to addressing, and the items resolved by the end of the sprint.

Action Required Indicators:

  • The team is unable to stay organized or manage itself.
  • Feature stories are put first, and team self-improvement is put second.
  • Team members lack the measures to check or rate themselves with respect to internal and external factors during the retrospective.
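The three counts named in the measure can be turned into simple follow-through rates; this sketch and its sample numbers are illustrative, not from the article:

```python
def retrospective_followthrough(identified, committed, resolved):
    """From the retrospective counts: what share of identified items the
    team committed to, and what share of those it resolved by sprint end."""
    commit_rate = committed / identified if identified else 0.0
    resolve_rate = resolved / committed if committed else 0.0
    return commit_rate, resolve_rate

# Illustrative retrospective: 10 items identified, 4 committed to, 3 resolved.
commit_rate, resolve_rate = retrospective_followthrough(10, 4, 3)
print(f"committed to {commit_rate:.0%}, resolved {resolve_rate:.0%}")
```

Low rates on either measure point back to the indicators above.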

5) Team’s adherence to scrum rules and engineering practices

Measure: This is determined by counting the number of infractions that occur during each sprint. While scrum doesn't outright define the best engineering practices, companies usually have their own ideal practices for executing projects. The scrum team should not deviate from that defined set of expectations.

Action Required Indicator:

  • No one is leading the team or coaching it to be more productive and produce higher-quality products. (Keep in mind that a good process leads to good products.)

Refer to the Atlassian Agile Glossary for more detail.

Thanks to Robert Boyd, whose scrum insights were provided in the Fall 2014 issue of Pragmatic Marketer.