Tuesday, April 18, 2017

Debugging is a Smell

I practice TDD.  And I realized today that I'm irritated with something I'm writing (in Node with promises), because I'm constantly using the debugger.

Using the debugger is a sign that something in your environment is causing your code to be difficult to reason about.  The error you get from your tests doesn't tell you what to do next, so you have to go on a fact finding mission.

Ideally, this fact finding mission would be a new test somewhere else.  But I can't write tests for code that isn't mine: in this case, the promise-resolution parts of Node, or the routing in the framework I'm using (Restify).  And to be fair, some of the stink here is my grasp of these frameworks.
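For instance, here's a minimal sketch (names are made up for illustration) of the kind of promise bug that sends you to the debugger: the chain is never returned, so the result, and any error inside it, is invisible to the caller and to the test.

```javascript
// BUG: the promise is not returned, so callers (and tests) can never
// see the result or any rejection inside the chain.
function fetchTotal(db) {
  db.getOrders()
    .then(orders => orders.length);
}

// Returning the chain lets failures surface in the test's error message.
function fetchTotalFixed(db) {
  return db.getOrders()
    .then(orders => orders.length);
}

// A stub standing in for the real data layer:
const stubDb = { getOrders: () => Promise.resolve([{}, {}, {}]) };

console.log(fetchTotal(stubDb));                   // undefined -- result lost
fetchTotalFixed(stubDb).then(n => console.log(n)); // 3
```

A test against the buggy version just reports `undefined`, which tells you nothing about what to do next; hence the debugger.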

One of the things I like best about TDD is how it surfaces classic old problems early: highly coupled code is hard to test.  Lack of dependency injection is hard to test.  Modules that do too many things are hard to test.  And now: frameworks with bad error handling and detection are also hard to test.
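To make the dependency-injection point concrete, here's a hypothetical sketch: a handler that receives its data source as an argument instead of requiring it directly, so a test can hand it a stub and never touch a real database.

```javascript
// Hypothetical handler factory: the db dependency is passed in,
// not required at the top of the module.
function makeUserHandler(db) {
  return function getUser(id) {
    return db.findUser(id).then(function (user) {
      if (!user) throw new Error('user not found: ' + id);
      return { id: user.id, name: user.name };
    });
  };
}

// In a test, a stub replaces the real database:
const stubDb = {
  findUser: (id) => Promise.resolve({ id: id, name: 'Ada' })
};
const getUser = makeUserHandler(stubDb);

getUser(1).then(function (user) {
  console.log(user.name); // prints "Ada"
});
```

The coupled version (a `require`'d database client buried inside the module) would force every test through the real connection, which is exactly where the debugger sessions start.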

Friday, December 04, 2015

Node's Benefits Don't Matter. Use Node Anyway.

Systems that are maintained by a single team will coalesce to the paradigm of their least changeable component.  In web development, that's the front end, which is javascript.

One of the problems of being a "full stack web developer" is that you must master very disparate sets of abilities.  In the traditional form, that's HTML/CSS, front-end javascript, back-end languages, and SQL. And maybe devops tools on top of that.

What makes Node unique in terms of platform features is that it is designed to be functional and non-blocking from the ground up.  Each library is already set up to use some kind of callback pattern, and the platform architecture forces the developer to adhere.  There are benefits to this paradigm, but they don't really matter.

Yes, Node's nonblocking IO and other features are great in terms of scalability and performance, but these benefits don't really explain the uptake of Node as a first-implementation language.  I would bet that 90% of the applications written in Node never get to a volume where the performance considerations matter.  And those that do still end up needing custom performance features written in a compiled language.

But I hear the word "just" a lot, implying simplicity -- "just build it in Node".  So what's going on?

The real benefit: paradigm consistency.  All the different concepts (CSS, functional javascript, imperative Java, SQL, etc) are too much, and take time away from concentrating on the business problem.  So we skimp.  We write Javascript that looks a lot like Java, or vice-versa.  But what about Node?

Node is not only the same language as the front-end, but also the same programming concepts: event-based, callbacks, etc.  This avoids shifting into the paradigm of imperative style, with blocking multithreading, big classes, etc.  This, coupled with avoiding the specialized knowledge of build cycles and compilation and artifact deployment, makes it feel "just easier".  And it is.

What about no-sql?  Everything I said about Node is probably true for Mongo.

Writing to no-sql databases really looks very similar to the list/hash data structures that appear on the front end.  It avoids the specialized knowledge of DDL.  Data migration is done in the same functional/reactive style.
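A quick illustration (the document shape here is invented): the thing you store is the same nested list/hash structure the browser already works with, with no schema translation in between.

```javascript
// The same object the front end builds is the document you store:
const order = {
  customer: 'Ada Lovelace',
  items: [
    { sku: 'A-100', qty: 2 },
    { sku: 'B-200', qty: 1 }
  ]
};

// With a document store this object is saved essentially as-is
// (in the Mongo shell, roughly: db.orders.insertOne(order)).
// No CREATE TABLE, no column mapping; the same shape comes back out,
// ready to hand to the browser.
console.log(JSON.stringify(order));
```

Compare that to the relational path: an orders table, an order_items table, a foreign key, and mapping code in both directions.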

That's what makes the node/nosql stack so compelling -- it's all the same paradigm, and it matches the browser-side paradigm, which is the one you can't change.  This isn't bad, necessarily.  In fact, it may be the kind of thinking that unlocks Node for a lot of teams.

Thursday, November 05, 2015

Convincing your team you haven't decided

One of the things I find myself saying a lot as a software architect is, "I don't have an opinion yet."  Often the team continues asking questions, ending with, "Is that what you want?" and I say again, "No, really -- I don't have an opinion yet!"  Since I am either the expert whose advice is being sought, or the leader who is responsible for making the decision, this is surprising (and sometimes a little frustrating) for the team I'm working with.  I can't possibly be the only software leader with this challenge, so here are my insights about it.

There are two main reasons I profess not to have an opinion:

  1. I need more information before I can make the decision.
  2. I have an opinion, but I don't trust it.

One helpful thing for my team would be just to share these things.  I think the reason I don't is that it's subconscious, so writing a blog like this helps me move these things from mental habits to explicit thought processes that I can change.

But sometimes I actually want the team to make the decision, and review it with me in some way (see 7 levels of delegation).  In that scenario, I really don't want to taint the results with my opinion in situation #2, but I also have to avoid making the team feel "set up" when I ask them to research something and then don't like their results.

Another part of the communication problem can be my tone of voice, coupled with a little bit of a love of philosophy.  When we find a good question, my intonation tends to end on a descending tone, as though the sentence ended with a period.  For example, "What is the test framework we will use here."  If you read it like a sentence, it sounds a lot like your teacher asking a review question.  And that tone of voice made the team feel like I already knew the answer, and was just 'testing' them to see if they can come up with it.  (Kudos to my friend Matt for this insight)

Anyway, these are some things that happen to me as I work to lead a team of smart people.  Share or reblog your insights as well!

Team Metrics Roundup

Potential Project Metrics
The purpose of metrics is to force a conversation, not to jump to a conclusion.
The purpose of metrics is to ultimately result in a change in behavior. But that change does not come from the metric directly, rather the metric indicates that a problem *may* exist, and forces a conversation.  That conversation then results in a change to potentially both the team’s behavior and to the metric itself.

Metrics aren’t free -- they cost team time and leadership time.  Even if they are collected automatically by a tool, the team and management need to take time to review and discuss the results periodically.  For that reason, it is important that teams and projects choose to track only the metrics that represent potential problems worth discussing.  It may be that a team tracks a particular metric while they are trying to create a change in behavior, and after that change is achieved, the metric may no longer be tracked.

Productivity and Focus

These metrics attempt to measure the output of the team.  In addition to raw numbers (higher is better), it is also valuable to look at the volatility of the numbers.  For example, cycle time will tend to be more variable if the size of the work item is more variable.

Cycle Time

Definition: The average amount of time it takes an item to move from “in progress” to “complete”
Measures: Team productivity, focus.
Affected by: Work in Progress, external dependencies, task size
Mechanics: Configure task tracking tool to collect this information.
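If your task tool can't compute this directly, the calculation is simple enough to sketch (dates below are made up for illustration):

```javascript
// Cycle time per item: elapsed days from "in progress" to "complete".
const items = [
  { started: new Date('2015-11-02'), completed: new Date('2015-11-05') },
  { started: new Date('2015-11-03'), completed: new Date('2015-11-10') },
  { started: new Date('2015-11-09'), completed: new Date('2015-11-11') }
];

const DAY = 24 * 60 * 60 * 1000;
const cycleTimes = items.map(i => (i.completed - i.started) / DAY);
const average = cycleTimes.reduce((a, b) => a + b, 0) / cycleTimes.length;

console.log(cycleTimes); // [ 3, 7, 2 ]
console.log(average);    // 4
```

The spread of the individual values (3, 7, 2) is as interesting as the average: wide variation usually means widely varying task sizes.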

Work In Progress

Definition: The number of items that have been started, but not completed.  This may include work that is awaiting external resources, etc.
Measures: Team Focus
Affected by: External dependencies, team size, team decisions
Mechanics: PM records the number of tasks in progress daily; task tool tracks/enforces an upper limit.


Throughput

Definition: The average number of issues completed per (day/week/etc).
Measures: Team productivity
Affected by: Task size, team focus, work in progress
Mechanics: Configure task tracking tool to collect this information.

Quality and Feedback

Testing is only one kind of feedback that leads to quality.  Demonstrations to Product Owners can identify problems just as readily.

Testing and demonstration should be practices that happen continuously during development, not just at project end.  These metrics help track progress toward that goal.

Time to first demo

Definition: The amount of time from the item moving “in progress” until the first demo to a Product Owner role
Measures: Early feedback habits
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record this information manually in task tracking tool (Jira, etc)

Mean time between demos

Definition: the average number of days between a demonstration of any sort on the project.
Measures: Frequency of feedback
Affected by: team planning choices, Product Owner Availability, type of work done
Mechanics: Record when demonstrations take place in task tracking tool, and calculate manually.

Production Bug Count

Definition: The average number of bugs discovered in production, per release.
Measures: pre-production quality processes
Affected by: quality definitions, testing efforts, regression suites
Mechanics: Record each bug in a bug tracker, review each bug as it is reported to identify the release that introduced the bug.

Time to first passing functional test

Definition: The average number of work hours until the first non-unit test is written and passing.
Measures: Test Driven Development, early testing habits
Affected by: System design for testability
Mechanics: Record manually via task tracking tool.

Code Coverage

Definition: The percentage of the code covered by the automated testing suite.  Best when broken down by cyclomatic complexity of target code.  (simple code may be untested, but complex code should be well tested)
Measures: Testing Completeness, Change Risk
Affected by: team habits, system design for testability
Mechanics: configure CI to collect, team agrees to respond to these events.

Planning and Estimates

Estimated vs Actual time

Definition: Compare estimates given before starting to the actual time elapsed.
Measures: Accuracy of estimates
Affected by: team familiarity, developer assignment, external dependencies

Mechanics: record both values, review and discuss differences.  Best when paired with ‘meterstick stories’ that can be used for comparison in lieu of raw hours.
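One lightweight way to frame the review conversation (story names and hours below are invented) is to look at each item's ratio of actual to estimated time rather than raw differences:

```javascript
// Ratio of actual to estimated hours per story; anything far from 1
// is worth a conversation, in either direction.
const stories = [
  { name: 'login page', estimated: 8, actual: 12 },
  { name: 'password reset', estimated: 4, actual: 4 },
  { name: 'audit report', estimated: 16, actual: 28 }
];

const ratios = stories.map(s => s.actual / s.estimated);
const mean = ratios.reduce((a, b) => a + b, 0) / ratios.length;

console.log(ratios);           // [ 1.5, 1, 1.75 ]
console.log(mean.toFixed(2));  // 1.42
```

Ratios normalize across task sizes the same way meterstick stories do, so a small and a large story can be compared on equal footing.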

Friday, February 13, 2015

The Highway Beautification Committee

The story goes that an old farmer was sitting on his porch one morning. He was looking down the state highway that ran in front of his house, and saw a truck off in the distance. As he watched, one of the people in the truck got out, and picked up a shovel from the back of the truck.  They dug a hole by the highway, and got back in the truck.

A little bit of time passed, and the other person got out of the truck, and picked up the shovel.  They filled in the freshly-dug hole, and returned to the truck.  They pulled forward about 50 yards and did it all over again: dug a hole, waited, filled in the hole, and pulled forward.

The farmer watched these two all morning as they approached his house, until finally they were close enough for him to yell out, "Hey!  What are you two doing out there?"

The first person yelled back, "We're on the Highway Beautification Committee!"

Puzzled, the farmer said, "Beautification?  I don't think I understand."

"Oh, well, the guy who plants the trees is out sick today."

Wednesday, March 12, 2014

PragDave is wrong. And his advice is harmful.

Dave Thomas has posted that "Agile is dead, long live agility."  There's some credibility assumed there, as Dave is a signer of the original manifesto itself, at the famous Snowbird meeting that spawned it.
He asserts that Agile itself is corrupted, and we should reject the word and all its related practices, education, literature, trade groups, and conferences.

But he's wrong.

These terms (Eco, Natural, and Agile) aren't an excuse to turn off your brain.  You still have to know what Agile (or Eco or Natural) means in order to evaluate that claim.  Expecting everyone who uses a term to use it in exactly the same way is ridiculous.  Changing it from Agile to "agility" won't make any difference or distinction.  The same shady consultants and book writers that create "Doing Agile Right" books and courses will just write "Programming with Agility the Right Way."  The same companies that say "we are agile" while writing four-page user stories won't wake up and say, "oh, our practices don't exhibit agility", they'll continue to say it just the same.  And you'll still have to evaluate their claims, just the same.

But he's not just wrong, he's harmful.

Dave suggests that agile conferences, trainings, and apparently all literature on the subject is counter to the original spirit of the manifesto.  Maybe it is, I wasn't there so I wouldn't know.  But I know that they wrote about teams coming together regularly to tune and adjust their behavior.  If your conference doesn't feel like that, it's a bad conference.  If your training doesn't feel like someone sharing their experiences so that you can use what works, it's a bad training.  If you're ignoring that principle by requiring that your teams "do Scrum by the book", then changing from nouns to adjectives in titles and slides isn't going to fix that.

Rejecting the experiences of others isn't going to make you better at doing agile, being agile, or performing with agility.  His advice to "just do these four things, and build up your experience," while an accurate general guide (that I like), ignores the fact that he has, himself, taken at least fifteen years to get to this point.  Should we all just derive the fundamental principles of calculus ourselves as well?  Or would it be better to talk together, as a team or as an industry, about what works and what doesn't?

He talks about "protecting" the word agile -- there is no practical way to do that, unless you want to trademark the term and sue those who use it in a way that you don't approve of.  But I get the sense that Dave isn't a big fan of the Scrum Alliance either.

So what do we do?  We keep being leaders, we keep sharing what works, we keep pointing out when the emperor or alliance has no clothes.  What we don't do is mess around with meaningless semantics, and in the process reject an entire community and network who, on the whole, has changed the industry for the better and continues to push the envelope.

Sunday, August 25, 2013

How to get stakeholders addicted to attending demos

Does one of these characters sound like your business stakeholders?

Waterfall Warrior:  "I don't have time to look at mid-sprint demos, it's not like anything changes if I do."
Optimistic PO:  "I don't need to take time to look at the software mid-sprint, I already wrote down what I want."

The Waterfall Warrior is carrying the perception of non-agile projects -- all changes go through the Change Board, whose middle name is "Denial."  Attending early demos is just a waste of time, as the news (bad or good) is the same whether she hears it every week, or just once at the end of the sprint.

The Optimistic PO believes in your team's perfection -- maybe too much!  They don't yet see the problem that their original docs are full of ambiguity.  

Both of these stakeholders are stuck in a contract-negotiation model.  How do we get them to participate, and, as the Manifesto says, favor customer collaboration over contract negotiation?


Have your mid-sprint demos in a very warm room, and as they enter, secretly put a nicotine patch on them, and remove it when they leave.  Eventually, they will become trained that early demos deliver a mild euphoria.  If a few days pass with no demo, your stakeholders become anxious, and jittery.  They come to the Scrum Master and ask, "Hey, can you give me a demo?  I could really use a demo right about now."

There's one problem with this solution: it's felony assault.  A better idea is to find something more addictive, and easier to deliver. 


Saying "yes" to changes, with small or zero change in the cost, is like crack cocaine to stakeholders.  But how do we magically make changes cheap or free?  Timing.  Work that was recently created is typically the cheapest to change: it has nothing built upon it, no disruption cost, and if we give preliminary demos before costly activities like final QA, less rework cost.  Try to put yourself in a situation where you're often saying, "Yes, we can change that.  No, it's not a schedule impact, I just built it this morning."

It's the role of the Scrum Master to help the team move to this "right time", but also to highlight with some fanfare what just happened.  Enthusiastic personalities might say, "Wait, say that again?  We just found a change and are going to do it, with no project impact?  That's great!"  You could say "groovy" here, but that's probably stretching the metaphor.  Regardless of your personality, it's the role of the Scrum Master to be sure that the stakeholder realizes that this is different than it has been before, and it's not just luck.  It's a result of their attendance.  Done properly, a few days later the stakeholder will come to the Scrum Master and say:

"Hey man, you got any demos?"

Friday, December 21, 2012

Scalatron: The best language tutorial / learning environment ever made

Let me expand on how great the Scalatron initial experience has been.

Scalatron has this fantastic introduction in several steps.  Each step produces a bot that does something, and by pressing a couple of buttons in the in-browser UI, you can see it work as well as tweak it yourself.  Here's a screenshot:

On the left is the tutorial, in the middle is the code, and on the right is the sandbox that allows you to actually see your little 'bot running around, limited to what it's allowed to "see".  All this is regular in-browser stuff, no plugins.  And the highest praise I can give it is that it "just works".  Edit the code in the middle, press 'run in sandbox' and it does.  Compile errors?  The build step pops up an error box at the bottom with line numbers.  Smooth.

And then there's the interaction with the tutorial: every block in the tutorial has a "load into editor" button that drops the code directly into the editor pane.  Which means no copy/paste errors, and it works in blocks, so if you're working on the missile-launching section and you messed up the movement section with a half-baked idea, the tutorial lets you reset just one part.  This is really polished, catching use cases like, "you asked to load it into the editor, but you haven't saved the work there.  Save first?"

The tutorials were the foundation of the thoughts and opinions on the language that you can read in my previous post.  They take you through the major language features by introducing real problems you need to solve: how do I parse a string?  How do I re-use a function?  What are vars and vals?  None of it feels contrived just to make a point.

And this IDE has room for growth.  There's a little [<<] button on the tutorial that lets you take off those training wheels, and continue on with your bot development.  Save your work, give it a label, and load it into the Scalatron battle instance that loaded up when you started Scalatron.  Instant gratification.

One complaint I have is that it's not very clear from this IDE how to restart the tournament without restarting the Scalatron process; the same process that runs your IDE also runs the tournament.  Restarting doesn't always interrupt the IDE, but when it does...

My other complaint is that there's no way to do TDD with it.  That's a show stopper for me; if I write too much code without a test I start to get physically ill.  So now it's time to get a Scala environment up and running.  This has often been the stopping point for me in other languages/environments, as I really have very little patience for cobbling together build scripts or downloading disparate parts.  More on that in the next post.