Potential Project Metrics
The purpose of metrics is to force a conversation, not to jump to a conclusion.
The ultimate purpose of metrics is to produce a change in behavior. But that change does not come from the metric directly; rather, the metric indicates that a problem *may* exist, and forces a conversation. That conversation then results in a change to the team’s behavior, to the metric itself, or potentially both.
Metrics aren’t free -- they cost team time and leadership time. Even if a tool collects them automatically, the team and management still need to take time to review and discuss the results periodically. For that reason, it is important that teams and projects track only the metrics that surface potential problems worth discussing. A team may track a particular metric while it is trying to create a change in behavior, and once that change is achieved, the metric may no longer need to be tracked.
Productivity and Focus
These metrics attempt to measure the output of the team. In addition to the raw numbers, it is also valuable to look at their volatility. For example, cycle time will tend to be more variable if the size of the work items is more variable.
Cycle Time
Definition: The average amount of time it takes an item to move from “in progress” to “complete”.
Measures: Team productivity, focus.
Affected by: Work in Progress, external dependencies, task size
Mechanics: Configure task tracking tool to collect this information.
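Most task trackers can export per-item timestamps, which makes the calculation simple. Below is a minimal sketch in Python, assuming a CSV export with hypothetical “started” and “completed” columns (real column names will vary by tool):

    import csv
    from datetime import datetime

    def average_cycle_time_days(export_path):
        # "started" and "completed" are assumed column names; adjust to match
        # whatever the tracker's export actually produces.
        durations = []
        with open(export_path, newline="") as f:
            for row in csv.DictReader(f):
                if not row["completed"]:
                    continue  # still in progress; no cycle time yet
                started = datetime.fromisoformat(row["started"])
                completed = datetime.fromisoformat(row["completed"])
                durations.append((completed - started).days)
        return sum(durations) / len(durations)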
Work In Progress
Definition: The number of items that have been started but not yet completed. This may include work that is awaiting external resources, etc.
Measures: Team focus
Affected by: External dependencies, team size, team decisions
Mechanics: PM records the number of tasks in progress daily; task tool tracks/enforces an upper limit.
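Where the tracker exposes start and completion dates, WIP on any given day is just a count. A sketch, assuming a hypothetical list of (started, completed) date pairs, with None marking unfinished work:

    from datetime import date

    def wip_on(day, items):
        # items: list of (started, completed) date pairs, where completed is
        # None for work that has not finished yet.
        return sum(
            1
            for started, completed in items
            if started <= day and (completed is None or completed > day)
        )

    # e.g. wip_on(date.today(), [(date(2024, 1, 2), None),
    #                            (date(2024, 1, 3), date(2024, 1, 8))])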
Throughput
Definition: The average number of issues completed per unit of time (day, week, etc.).
Measures: Team productivity
Affected by: Task size, team focus, work in progress
Mechanics: Configure task tracking tool to collect this information.
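As a sketch of the arithmetic, assuming a hypothetical list of completion dates pulled from the tracker, weekly throughput is the count of completions grouped by ISO week:

    from collections import Counter

    def average_weekly_throughput(completion_dates):
        # Group completed items by (ISO year, ISO week), then average the
        # per-week counts. Weeks with zero completions never appear in the
        # data, so fill those in if they should drag the average down.
        weeks = Counter(d.isocalendar()[:2] for d in completion_dates)
        return sum(weeks.values()) / len(weeks)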
Quality and Feedback
Testing is only one kind of feedback that leads to quality. Demonstrations to Product Owners can identify problems just as readily.
Testing and demonstration should be practices that happen continuously during development, not just at project end. These metrics help track whether that is happening.
Time to first demo
Definition: The amount of time from when the item moves to “in progress” until the first demonstration to someone in the Product Owner role.
Measures: Early feedback habits
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record this information manually in the task tracking tool (Jira, etc.).
Mean time between demos
Definition: The average number of days between demonstrations of any sort on the project.
Measures: Frequency of feedback
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record when demonstrations take place in task tracking tool, and calculate manually.
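The manual calculation is simply the average gap between consecutive demo dates. A sketch, assuming a hypothetical list of recorded dates:

    def mean_days_between_demos(demo_dates):
        # Sort the recorded dates, then average the gaps between neighbors.
        dates = sorted(demo_dates)
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        return sum(gaps) / len(gaps)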
Production Bug Count
Definition: The average number of bugs discovered in production, per release.
Measures: Pre-production quality processes
Affected by: quality definitions, testing efforts, regression suites
Mechanics: Record each bug in a bug tracker, and review each bug as it is reported to identify the release that introduced it.
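Once each bug record carries the release that introduced it, the metric is a grouped average. A sketch, assuming a hypothetical list of release identifiers, one per recorded bug:

    from collections import Counter

    def average_bugs_per_release(introducing_releases):
        # introducing_releases: one release identifier per production bug, as
        # assigned during bug review. Releases with zero bugs never appear
        # here; include the full release list if they should count.
        counts = Counter(introducing_releases)
        return sum(counts.values()) / len(counts)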
Time to first passing functional test
Definition: The average number of work hours until the first non-unit test is written and passing.
Measures: Test Driven Development, early testing habits
Affected by: System design for testability
Mechanics: Record manually via task tracking tool.
Code Coverage
Definition: The percentage of the code covered by the automated testing suite. Best when broken down by the cyclomatic complexity of the target code (simple code may go untested, but complex code should be well tested).
Measures: Testing completeness, change risk
Affected by: team habits, system design for testability
Mechanics: Configure CI to collect coverage; the team agrees to review and respond to the results.
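Coverage tools report the percentage directly; the complexity breakdown usually has to be assembled by hand. Below is a minimal sketch of flagging risky code, assuming hypothetical per-function (name, complexity, coverage) records gathered from whatever coverage and complexity tools are in use:

    def risky_functions(stats, complexity_threshold=10, min_coverage=0.8):
        # stats: list of (name, cyclomatic_complexity, coverage_fraction)
        # tuples. The thresholds here are illustrative, not standards; a team
        # should agree on its own.
        return [
            name
            for name, complexity, coverage in stats
            if complexity >= complexity_threshold and coverage < min_coverage
        ]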
Planning and Estimates
Estimated vs Actual time
Definition: A comparison of the estimate given before starting an item to the actual time elapsed.
Measures: Accuracy of estimates
Affected by: team familiarity, developer assignment, external dependencies
Mechanics: Record both values, then review and discuss the differences. Best when paired with ‘meterstick stories’ that can be used for comparison in lieu of raw hours.
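As a sketch of the comparison, assuming hypothetical (estimated hours, actual hours) pairs recorded per item, a simple ratio makes the discussion concrete:

    def estimate_accuracy_ratios(items):
        # items: list of (estimated_hours, actual_hours) pairs. A ratio above
        # 1.0 means the work took longer than estimated; exactly 1.0 means
        # the estimate was exact.
        return [actual / estimated for estimated, actual in items]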