The “quantified self” movement is all the rage these days. People measure how much REM sleep they get, their weight, BMI, daily step count, resting heart rate, and so on. The net result drives people to constantly improve: walk more, sit less, sleep more. All great stuff.
People are even spending hundreds of dollars on this type of measurement. I have personally spent plenty of money on lactate threshold tests, VO2 max tests, BodPod sessions, and the like. I do it so I can track the effectiveness of my training and see how much I’m really improving.
As Gordon Weir (@gordweir) once told me, if I were *made* to do all of the above, or if my wife purchased a Fitbit for “encouragement” and to “track” my weight-loss progress, I would probably attach the Fitbit to a fan so I could game the system while eating a Snickers bar.
While the measurements and the instruments may be identical, the difference is profound: in one case I choose to measure myself so I can improve; in the other, the measurement is imposed on me.
Feature teams are meant to gather metrics (defects per release, burndown, velocity, and so on) as a way to measure where they are and improve, *not* as a mechanism to be beaten over the head with. The former is the only way to gather “real” metrics; the latter lends itself to gaming the system. Sprint metrics are gathered by the team, for the team, and are discussed during retrospectives.
The same applies to the product team, which should be gathering metrics around ROI, revenue, and the like. Metrics are the only way to validate assumptions: if we build (invest) X, we assume we will get Y (return). Again, these are tools to validate assumptions and continuously improve, not a noose.
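To make the build-X-expect-Y idea concrete, here is a minimal sketch of how a product team might record an assumption and check it against the measured return, using the result to learn rather than to punish. The names (`Assumption`, `roi`, `validated`) are purely illustrative, not a prescribed tool or process.

```python
# Illustrative sketch only: record a build-X-expect-Y assumption and check it
# against what was actually measured. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Assumption:
    feature: str
    investment: float        # X: what we spend to build it
    expected_return: float   # Y: what we assume we will get back

    def roi(self, actual_return: float) -> float:
        """Classic ROI: (return - investment) / investment."""
        return (actual_return - self.investment) / self.investment

    def validated(self, actual_return: float) -> bool:
        """The assumption holds if the measured return meets or beats Y."""
        return actual_return >= self.expected_return


# Example: we invested 50k expecting 80k back, and measured 65k in revenue.
a = Assumption("one-click checkout", investment=50_000, expected_return=80_000)
print(f"ROI: {a.roi(65_000):.0%}")                      # ROI: 30%
print(f"Assumption validated: {a.validated(65_000)}")   # False -> learn and adjust
```

The point of a check like this is the last line: a “False” is an invitation to revisit the assumption, not a stick to swing at the team.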
Metrics are necessary and important, but only when they are used as a means of improvement. Metrics forced on feature teams or product teams are far more likely to be gamed to produce the “right” answer.