Metrics
The misuse or misinterpretation of metrics is a common contributor to internal model risks. Let's dive into a specific example now: someone finds a useful new metric that helps in evaluating performance.
It might be:
- Source Lines Of Code (SLOC): the number of lines of code each developer writes per day or per week (a sketch of measuring this follows the list).
- Function Points: the number of function points a person on the team completes, each sprint.
- Code Coverage: the number of lines of code exercised by unit tests.
- Response Time: the time it takes to respond to an emergency call, say, or to go from a feature request to production.
- Release Cadence: the number of releases a team performs per month, say.
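
To make the first of these concrete, here is a minimal Python sketch that tallies lines added per author from `git log --numstat`. The repository path, the time window and the choice of "lines added" as a stand-in for SLOC are all assumptions made purely for illustration.

```python
import subprocess
from collections import Counter

def sloc_per_author(repo_path=".", since="1 week ago"):
    """Tally lines added per author over a period via `git log --numstat`.

    Illustration only: the period and "lines added" are crude stand-ins
    for "lines of code written".
    """
    out = subprocess.run(
        ["git", "log", "--since", since, "--numstat", "--pretty=format:@@%an"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout

    totals = Counter()
    author = None
    for line in out.splitlines():
        if line.startswith("@@"):        # commit header: author name
            author = line[2:]
        elif line.strip():               # numstat line: "added\tdeleted\tpath"
            added, _deleted, _path = line.split("\t", 2)
            if added != "-":             # "-" marks a binary file
                totals[author] += int(added)
    return totals

if __name__ == "__main__":
    for author, lines in sloc_per_author().most_common():
        print(f"{author}: {lines} lines added this week")
```
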
With some skill, they may be able to correlate this metric with some other, more abstract measure of success (see the sketch after these examples). For example:
- "quality is correlated with more releases"
- "user-satisfaction is correlated with SLOC"
- "revenue is correlated with response time"
Because the thing on the right is easier to measure than the thing on the left, it comes to be used as a proxy (or Map) for the thing they are really interested in (the Territory). At this point, it's easy to communicate this idea to the rest of the team, and the market value of the idea is high: it is a useful representation of reality, shown to be accurate at a particular point in time.