Thursday, November 05, 2015

Convincing your team you haven't decided

One of the things I find myself saying a lot as a software architect is, "I don't have an opinion yet."  Often the team continues asking questions, ending with, "Is that what you want?" and I say again, "No, really -- I don't have an opinion yet!"  Since I am either the expert whose advice is being sought or the leader who is responsible for making the decision, this is surprising (and sometimes a little frustrating) to the team I'm working with.  I can't possibly be the only software leader with this challenge, so here are my insights about it.

There are two main reasons I profess not to have an opinion:


  1. I need more information before I can make the decision.
  2. I have an opinion, but I don't trust it.


One helpful thing for my team would be just to say which of these is going on.  I think the reason I don't is that the habit is subconscious, so writing a blog post like this helps me move it from a mental habit to an explicit thought process that I can change.

But sometimes I actually want the team to make the decision and review it with me in some way (see the 7 levels of delegation).  In that scenario, I really don't want to taint the results with my opinion in situation #2, but I also have to avoid making the team feel "set up" when I ask them to research something and then don't like their results.

Another part of the communication problem can be my tone of voice, coupled with a little bit of a love of philosophy.  When we find a good question, my intonation tends to end on a descending tone, as though the sentence ended with a period.  For example, "What is the test framework we will use here."  If you read it like a sentence, it sounds a lot like your teacher asking a review question.  And that tone of voice made the team feel like I already knew the answer, and was just 'testing' them to see if they could come up with it.  (Kudos to my friend Matt for this insight.)

Anyway, these are some things that happen to me as I work to lead a team of smart people.  Share or reblog your insights as well!

Team Metrics Roundup

Potential Project Metrics
The purpose of metrics is to force a conversation, not to jump to a conclusion.
Ultimately, a metric should result in a change in behavior.  But that change does not come from the metric directly; rather, the metric indicates that a problem *may* exist, and forces a conversation.  That conversation may then change both the team’s behavior and the metric itself.


Metrics aren’t free -- they cost team time and leadership time.  Even if a metric is collected automatically by a tool, the team and management need to take time to review and discuss the results periodically.  For that reason, it is important that teams and projects track only the metrics that surface problems worth discussing.  A team may track a particular metric while it is trying to create a change in behavior, and stop tracking it once that change is achieved.

Productivity and Focus

These metrics attempt to measure the output of the team.  In addition to the raw numbers (higher throughput and lower cycle time are better), it is also valuable to look at their volatility.  For example, cycle time will tend to be more variable when the size of the work items is more variable.

Cycle Time

Definition: The average amount of time it takes an item to move from “in progress” to “complete”.
Measures: Team productivity, focus.
Affected by: Work in Progress, external dependencies, task size
Mechanics: Configure task tracking tool to collect this information.
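
To make the mechanics concrete, here is a minimal sketch in Python that computes the average cycle time and its volatility from exported timestamps.  The dates and the export format are made up for illustration; a real task tool export would need its own parsing.

    from datetime import datetime
    from statistics import mean, stdev

    # Hypothetical export from the task tool: (started, completed) dates per item.
    tasks = [
        ("2015-10-01", "2015-10-05"),
        ("2015-10-02", "2015-10-09"),
        ("2015-10-06", "2015-10-08"),
    ]

    def days_between(start, end):
        fmt = "%Y-%m-%d"
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

    cycle_times = [days_between(s, e) for s, e in tasks]
    average = mean(cycle_times)
    volatility = stdev(cycle_times) / average  # coefficient of variation

    print("average cycle time: {:.1f} days (volatility {:.2f})".format(average, volatility))

Tracking the volatility alongside the average is what surfaces the variable-task-size problem mentioned above.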

Work In Progress

Definition: The number of items that have been started, but not completed.  This may include work that is awaiting external resources, etc.
Measures: Team Focus
Affected by: External dependencies, team size, team decisions
Mechanics: The PM records the number of tasks in progress daily; the task tool tracks and enforces an upper limit.
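
If the tool can export item states, the daily recording is easy to script.  A minimal sketch, with an illustrative state list and WIP limit (both made up):

    from datetime import date

    WIP_LIMIT = 5  # illustrative upper limit agreed to by the team

    # Hypothetical snapshot of item states exported from the task tool.
    states = ["in progress", "in progress", "blocked", "done", "in progress"]

    # Items awaiting external resources ("blocked") still count as WIP.
    wip = sum(1 for s in states if s in ("in progress", "blocked"))

    print("{}: WIP = {} (limit {})".format(date.today(), wip, WIP_LIMIT))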


Throughput

Definition: The average number of issues completed per unit of time (day, week, etc.).
Measures: Team productivity
Affected by: Task size, team focus, work in progress
Mechanics: Configure task tracking tool to collect this information.
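
A sketch of the throughput calculation, assuming the task tool can export completion dates (the dates here are made up):

    from collections import Counter
    from datetime import datetime

    # Hypothetical completion dates exported from the task tool.
    completed = ["2015-10-01", "2015-10-02", "2015-10-08", "2015-10-09", "2015-10-09"]

    # Group completions by ISO week number to get issues completed per week.
    per_week = Counter(datetime.strptime(d, "%Y-%m-%d").isocalendar()[1] for d in completed)

    for week, count in sorted(per_week.items()):
        print("week {}: {} issues completed".format(week, count))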


Quality and Feedback

Testing is only one kind of feedback that leads to quality.  Demonstrations to Product Owners can identify problems just as readily.

Testing and demonstration should happen continuously during development, not just at project end.  These metrics help show whether that is actually happening.

Time to first demo

Definition: The amount of time from when the item moves to “in progress” until the first demo to someone in a Product Owner role.
Measures: Early feedback habits
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record this information manually in the task tracking tool (Jira, etc.).

Mean time between demos

Definition: The average number of days between demonstrations of any sort on the project.
Measures: Frequency of feedback
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record when demonstrations take place in the task tracking tool, and calculate manually.
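
The manual calculation is just the average gap between consecutive demo dates.  A sketch, with made-up dates:

    from datetime import datetime

    # Hypothetical demo dates recorded in the task tool.
    demos = ["2015-09-01", "2015-09-10", "2015-09-12", "2015-09-30"]

    dates = sorted(datetime.strptime(d, "%Y-%m-%d") for d in demos)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]

    print("mean time between demos: {:.1f} days".format(sum(gaps) / len(gaps)))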

Production Bug Count

Definition: The average number of bugs discovered in production, per release.
Measures: pre-production quality processes
Affected by: quality definitions, testing efforts, regression suites
Mechanics: Record each bug in a bug tracker, review each bug as it is reported to identify the release that introduced the bug.
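
Once each bug has been traced back to a release, the roll-up is straightforward.  A sketch with made-up release numbers:

    from collections import Counter

    # Hypothetical bug records: the release each production bug was traced to.
    bug_releases = ["1.0", "1.0", "1.1", "1.2", "1.2", "1.2"]

    per_release = Counter(bug_releases)
    for release in sorted(per_release):
        print("release {}: {} bugs".format(release, per_release[release]))

    # Note: releases with zero reported bugs won't appear in this data, so
    # divide by the true number of releases if you track that separately.
    print("average: {:.1f} bugs per release".format(len(bug_releases) / len(per_release)))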

Time to first passing functional test

Definition: The average number of work hours until the first non-unit test is written and passing.
Measures: Test Driven Development, early testing habits
Affected by: System design for testability
Mechanics: Record manually via task tracking tool.

Code Coverage

Definition: The percentage of the code covered by the automated testing suite.  Best when broken down by the cyclomatic complexity of the target code (simple code may be untested, but complex code should be well tested).
Measures: Testing Completeness, Change Risk
Affected by: team habits, system design for testability
Mechanics: Configure CI to collect this metric; the team agrees to review and respond to changes in it.
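
The "broken down by cyclomatic complexity" part is the interesting bit, so here is a sketch of that rule.  The file names and thresholds are made up; most coverage tools can export per-file numbers that would feed something like this:

    # Hypothetical per-file numbers: (cyclomatic complexity, coverage %).
    files = {
        "billing.py": (25, 40.0),
        "utils.py": (3, 10.0),
        "parser.py": (18, 95.0),
    }

    COMPLEXITY_THRESHOLD = 10  # illustrative cutoff for "complex" code
    COVERAGE_FLOOR = 80.0      # illustrative minimum coverage for complex code

    # Simple, uncovered code (utils.py) is not flagged; complex, poorly
    # covered code (billing.py) is the change risk worth a conversation.
    for name, (complexity, coverage) in sorted(files.items()):
        if complexity > COMPLEXITY_THRESHOLD and coverage < COVERAGE_FLOOR:
            print("{}: complexity {}, coverage {:.0f}% -- discuss".format(
                name, complexity, coverage))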


Planning and Estimates

Estimated vs Actual time

Definition: Compare estimates given before starting to the actual time elapsed.
Measures: Accuracy of estimates
Affected by: team familiarity, developer assignment, external dependencies
Mechanics: Record both values, review and discuss differences.  Best when paired with ‘meterstick stories’ that can be used for comparison in lieu of raw hours.
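
A sketch of the comparison, with made-up estimate/actual pairs:

    # Hypothetical (estimated hours, actual hours) pairs for completed items.
    items = [(8, 12), (16, 14), (4, 9), (40, 52)]

    for estimated, actual in items:
        print("estimated {}h, actual {}h (ratio {:.1f})".format(
            estimated, actual, actual / estimated))

    # The point is the conversation, not the grade: a consistently high
    # ratio prompts discussion of task size, dependencies, or estimation habits.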