Thursday, November 30, 2006

Always and Never

The first thing I have to explain to business people when they're trying to give me requirements is that there are two kinds of "never". There's the business/real-world never, and then there's developer never. What regular people usually mean when they say never is "not very often".

In order to get the point across, and to clarify what people want when they say "never", I explain the repercussions: "Okay, you say this never happens. Is it okay if the program asks the user to contact support and then exits if it does happen?" That tends to get us to the right answer pretty quickly.
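Here's a minimal sketch of what "developer never" turns into in code -- a fail-fast guard. (The negative-total check is a made-up example, not from any real requirement.)

public class OrderValidator {
    public void validate(double total) {
        // The business says a negative total "never" happens.
        if (total < 0) {
            // Fail fast and loud, instead of limping along with bad data.
            throw new IllegalStateException(
                    "Order total is negative -- please contact support.");
        }
    }
}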

Trickier is "always". It has the same problem of being far more precise in software than it is in English. What's worse is that it's often implied or inferred in an otherwise reasonable requirement. "The software will send an email when an order is placed." Is that an always statement? Asking this question can be very illuminating. Typical responses can be "Well, not if there's been a problem -- then we want to call." or "Yes, if they've given us an email address".
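And here's roughly what the answered "always" becomes: a plain conditional. (Mailer, CallQueue, and the order details are hypothetical stand-ins, not from the actual requirement.)

public class OrderNotifier {
    interface Mailer { void sendConfirmation(String emailAddress); }
    interface CallQueue { void add(String phoneNumber); }

    private final Mailer mailer;
    private final CallQueue callQueue;

    public OrderNotifier(Mailer mailer, CallQueue callQueue) {
        this.mailer = mailer;
        this.callQueue = callQueue;
    }

    public void orderPlaced(String email, String phone, boolean hadProblem) {
        if (hadProblem) {
            callQueue.add(phone);            // "then we want to call"
        } else if (email != null) {
            mailer.sendConfirmation(email);  // "if they've given us an email address"
        }
        // Otherwise the "always" turns out to be a "sometimes".
    }
}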

Friday, November 17, 2006

Do (our) users want broken features?

So, I've completely drunk the testing Kool-Aid. I've taken it as my personal goal to incorporate automated testing in my entire process, and have seen it work. I've had multiple releases go through QA with no functional bugs*. I preach test-first development to everyone that will listen.

The problem is that our users actually want buggy features.

This was pointed out to me by some of my coworkers in a code review where we discovered a class that was fairly complex, but had no unit tests. We talked about why this came to be, and the answer given was that if the project goes to QA but has bugs, nobody gets in trouble. If it goes to QA, has no bugs, but has fewer features than expected, then the project "slips", people panic, and we have PMs and above at our desks wringing their hands and asking "what went wrong." The pressure shifts from dev, who just have to pump stuff out, to QA, who have to give the approval. And I fall into this trap every time.

I've said "my users", but it isn't my users who are giving me pressure on the release dates. It's my project management. Maybe I should say:

My project management actually wants buggy features.

For those of you following along at home, this isn't specific to my current place of employment. Now that I consider it, I think I've seen this everywhere I've worked. The only time I didn't feel it was on the two projects where we had a reputation for quality in our QA releases. I feel like lightning has to strike in order to build up this kind of reputation, and I haven't been able to make a tall enough rod on my current projects.

What's worse is that we clearly increase the amount of time between when a bug is written and when it is discovered. That means the bugs take longer to fix, which ultimately means that the project takes longer, even though we've "hit our dates." This only works because there's an expectation of a long, hard-to-estimate stretch in QA.

I think the Agile folks would suggest that the problem is with "QA resources" in general. I have a tough time letting go of that specialization. Test plans are difficult to write well, and even more difficult to execute with consistency and an eye for detail.

Maybe the right answer for us (if this can't be addressed by talking to the various date-concerned entities) is to not expose a QA handoff date, but incorporate QA into the dev team itself as partners in the ultimate deliverable date. XP would seem to suggest this as the way to go. We would have more flexibility (agility?) in giving sections to QA when they're testable, and optimally we could even test things earlier in our cycle than we currently do.

I worry that some parts of the team leadership may hyperventilate at the idea that there isn't an official QA handoff date that they can track and put a checkmark next to, or, put less snarkily, that they'd have less information with which to schedule QA resources.

I still don't know if I can reconcile those needs, but I know that I don't like the behaviors that the current process encourages.

* functional bugs are what you get when QA says, "should it do this?" and Dev says, "oops, no." There are tons of other kinds of bugs, like usability issues or miscommunication issues. Most of those aren't addressed by automated testing.

Tuesday, November 14, 2006

Phidgets: the Beginning

So, I'm a total hardware n00b. I've written software for a decade, and I'm happy talking about inheritance, encapsulation, and polymorphism, and tossing around newer buzzwords like dependency injection and all that.

But I decided to start a hardware project. I'm not going to give away the ending, mostly because I'll probably never get there. Remind me to tell you the story about the coffee table I was going to make once.

Anyway, I bought an 8/8/8 Interface Kit from Phidgets.com. It's a smallish thing, about the size of a deck of cards, and about a hundred bucks delivered. It's designed to pass inputs and outputs through USB to a computer, where software running there actually makes the decision about what to do when. Plus, one of the hojillion languages they support is Java, so that feels right at home.

I've got the thing on my desk now, and am installing the software. Except it needs the .NET framework, blah blah blah. It's nice enough to redirect me to the download page, and after getting confused about my 64-bit proc but 32-bit operating system, I'm good. That done, their software installs fine. While .NET was .downloading I got Eclipse set up with their Java examples, and once the software was installed and the device connected, their InterfaceKit example ran right out of the box. Granted, I have nothing hooked up to it, so I don't know if it's doing anything, but it prints lots of stuff to the screen. I grabbed a wire and jammed it between the ground and the different inputs, and successfully made stuff print to the screen. Cool!
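For the curious, the listening side of their Java API looks something like this -- I'm going from their example and the Javadoc, so treat the details as approximate:

import com.phidgets.InterfaceKitPhidget;
import com.phidgets.event.InputChangeEvent;
import com.phidgets.event.InputChangeListener;

public class InputWatcher {
    public static void main(String[] args) throws Exception {
        InterfaceKitPhidget kit = new InterfaceKitPhidget();
        kit.addInputChangeListener(new InputChangeListener() {
            public void inputChanged(InputChangeEvent event) {
                // Fires whenever a wire bridges ground and an input.
                System.out.println("input " + event.getIndex()
                        + " -> " + event.getState());
            }
        });
        kit.openAny();           // grab the first attached InterfaceKit
        kit.waitForAttachment(); // block until the USB device shows up
        Thread.sleep(60000);     // watch for a minute, then quit
        kit.close();
    }
}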

Next up is outputs. For that I have an LED that I clipped off of a defunct printer screen. I'd have desoldered it, but Nicole just bought the desoldering iron, and I wanted her to be the first to use it. I hook it up, and nothing happens. So I hit their website, which has the "n00b manual for InterfaceKit", which is six pages long. I've read manuals that are 20 pages and have less useful information. There happens to be a section on "hooking up LEDs to your InterfaceKit", and by section I mean a page. It mentions anodes and cathodes, which I look up on Wikipedia; I turn my LED around and it works.

Sort of. I've written my own software to turn output #1 on for 2 seconds, then off. The LED responds by shining bright and steady in the "on" state, but instead of being off for the "off" state, it flashes. I vaguely remember that "digital" outputs have some kind of square wave mojo going on below a certain hobgoblin threshold or something, but all that information is eleven years old, and there is a lot of beer and parties standing between my CE 101 class and today's attempt. I also suspect that the answer is actually on the LED page of the manual, but again, too much inebriation between those symbols and the present day.

So, I've got a flashing LED. The software has been fairly straightforward to this point, and while they don't release the source code to their Java libraries, they do have a reasonably well-written Javadoc, and everything is named in a pretty straightforward manner. To turn the LED on and off, I've written these lines:

interfaceKit.setOutputState(1, true);   // LED on
Thread.sleep(2000);                     // hold it for two seconds
interfaceKit.setOutputState(1, false);  // LED off

So, I'm reasonably sure I haven't screwed that up. I even have System.out.printlns in between, which of course make me feel dirty, but I'm ignoring that for now.

I'm not the only hardware n00b connected to the global inter-tubes, so I sign up for their forums. I could complain about the way the forums insist on mailing me a generated password as though I'm going to be doing stock trading on this thing or something, but that's really just sniping at this point. I shut up about the password thing, and enter "flashing LED" into the forums.

No luck there. I now think that the problem has to do with the shoddy wire I'm using (cut out of an old phone cable, stripped with a kitchen knife, and not solid wire but twisted copper). I changed the LED to another output, and this time off was off, but on was flashing. I went to change it again, and dropped the LED off the end of the wire (I just had it wrapped, not soldered or anything), so I've decided to declare that to be the problem, and call it a night.

Next time: wire strippers and real wire!

Monday, October 30, 2006

Dead Horse: Checked Exceptions

I'm reading "Beyond Java" by Bruce Tate, and he complains, as many have, about checked exceptions. I've never understood this complaint, and I don't want to say that it's because I understand how to use them and all these smart people don't, but I'm about to.

I like checked exceptions; they keep me honest. Tate asserts in many places in his book that the few actual problems static typing catches would be caught by unit tests anyway, but this seems like a case where you can't make that argument anymore. In the unit tests I write and read, failure cases are the least well handled.

Tate (and others) asserts that checked exceptions are invasive, and goes on to say that most of the time you can't do anything with one but throw it, so you're always throwing them, and eventually you just ignore anything you read that has "exception" in it.

He's right about that part. The problem I see is that it's never led him to use checked and unchecked exceptions together. Here's how I evaluate whether to handle, re-throw, or convert an exception:


1. Can I handle it? Obviously, if I can, I do.
2. Will callers be able to handle it? Then let them.
3. If callers can't handle it, is it imperative (due to data corruption) that callers be aware of this situation? Then it stays checked.
4. If none of the above, turn it into a RuntimeException.

These questions are asked at integration points, like the barrier between a webapp and the database, a property loader and the filesystem, or a utility program and a webservice. If we had no checked exceptions, I feel pretty confident that I would fail to consider all the potential failures in these integration points, and have far less robust code as a result.
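To make that concrete, here's a minimal sketch of rule 4 at the property-loader/filesystem boundary (the class and message are invented for illustration):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {
    public static Properties load(String path) {
        Properties props = new Properties();
        InputStream in = null;
        try {
            in = new FileInputStream(path);
            props.load(in);
        } catch (IOException e) {
            // Rules 1-3 all came up empty: nobody up the stack can
            // recover from a missing config file, so don't force them
            // to declare it -- convert to unchecked.
            throw new RuntimeException("Could not load config: " + path, e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
        return props;
    }
}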

Friday, October 27, 2006

Selenium rocks the house

I'm so impressed with Selenium. The browser integration and the IDE make it very easy to set up and use. I literally got it working in about ten minutes. I had a couple of bugs from QA, so I used Selenium to record the steps that reproduce them in my dev environment, then switched a couple of attributes from "off" to "on" to represent what the code *should* do. And now I'm back to automated test-first programming.

I see that with the Selenium RC server, I could actually run these things as JUnit tests, or I can run them with the Selenium suite. I'm torn as to which way I want to go. Obviously, recording them via the Selenium IDE is the way to start, but being able to make modules in Java code sounds pretty powerful. Once you go to Java, though, you don't go back to the IDE, so I'm worried that will make it less flexible in terms of using Selenium tests to communicate with the QA department.
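For a taste of the Java route, a recorded script exported to JUnit comes out looking something like this (the URL and locators are invented, and it assumes a Selenium RC server running on the default port):

import com.thoughtworks.selenium.DefaultSelenium;
import junit.framework.TestCase;

public class OrderPageTest extends TestCase {
    private DefaultSelenium selenium;

    protected void setUp() {
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "http://localhost:8080/");
        selenium.start(); // launches the browser via the RC server
    }

    public void testOrderConfirmationShown() {
        selenium.open("/orders/new");
        selenium.type("quantity", "2"); // hypothetical field locator
        selenium.click("submit");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Order confirmed"));
    }

    protected void tearDown() {
        selenium.stop();
    }
}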

I'm sure that the QA department can learn to use this tool -- they've learned far more byzantine automation tools in the past. A frequent problem we have is in communicating with the folks that do our acceptance testing. They're a hand-picked subset of our actual users, and it's often difficult to communicate unusual situations. The bug reports we get are typical of non-technical bug reports: basically "got this error", without much information about what happened before the error appeared in the first place. That's not their fault -- they're likely not thinking about what they've done before, and in many cases the problem is so complicated or convoluted that they can't realistically be expected to remember.

The thing I'm really wondering is if Selenium is simple enough to be used by our acceptance testers to help articulate what they're seeing. Could we really ask them to just record their session and play back their experience? That would be pretty exciting.

Tuesday, October 24, 2006

So much less spaghetti code?

I talked to a Rails proponent today, who mentioned that he was very tired of all the spaghetti code he was working with in his current development language (Java), and said that working with Ruby and Rails was such a relief. He credited this to Ruby itself, claiming that it doesn't produce such spaghetti code.

I, the skeptic, suggested that the spaghetti just hasn't been cooked yet.

Wednesday, September 06, 2006

Rails and the SOA

In a nutshell, I'm told that one of the great things about Rails is that you don't have to do all this middleware work to access the database. It gens up objects and they Just Work (tm).

Other people talk about how great middleware is: create your Service Oriented Architecture as though the database doesn't even exist, and interact with it that way. All the ugly database stuff happens in the back end, behind the services.

Is the upshot of this that you should write middleware with Rails?

Wednesday, August 02, 2006

Please choose the site closest to you, because our site is stupid.

Why do websites and download tools continue to ask me to choose from a badly-sorted list of locations to decide where I should download from? Sourceforge and Eclipse both do this. They're completely capable of asking (and in at least Eclipse's case remembering) where I am, and choosing a site from there.

Otherwise, I'm left staring at a huge list of locations, some of which are ridiculous (Tel Aviv? I'm sure I've got a hot connection there), and others are unknowable (UMP? What does that mean?).

The whole idea of online updates is that it's easy. Adding these user-unfriendly steps kind of defeats that purpose.

Tuesday, June 27, 2006

Nice goal, what's your plan?

Scott Berkun writes today that, in a nutshell, personal goals are amorphous and weird. He points out that in order for them to be useful, they have to be measurable and specific. But that's not what goals are. If you ask someone for their life goals, they'll say things like "retire by 60" or "own my own home" or some such. That's not measurable until it's done. The illuminating follow-up question is, "What's your plan to reach that goal?" If you're talking about your life and you don't have a plan to reach that goal, then you're not gonna. The same is true for career goals.

Let's say your goal is to "master Hibernate". That's a nice goal, but not at all measurable. Plan items might be to:

  1. Read a particular book on Hibernate and give a presentation.
  2. Answer (correctly) at least 7 postings in the Hibernate forums.
  3. Apply skills above to map a bidirectional many-to-many relationship (sketched below).
  4. Research transaction options and give a presentation/recommendation.
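
To make item 3 concrete, here's roughly what such a mapping looks like with Hibernate Annotations (the entities are invented for illustration):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinTable;
import javax.persistence.ManyToMany;

@Entity
class Course {
    @Id @GeneratedValue
    private Long id;

    @ManyToMany
    @JoinTable(name = "course_student") // owning side defines the join table
    private Set<Student> students = new HashSet<Student>();
}

@Entity
class Student {
    @Id @GeneratedValue
    private Long id;

    @ManyToMany(mappedBy = "students") // inverse side points back
    private Set<Course> courses = new HashSet<Course>();
}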

Goals without a plan aren't goals, they're just hopes.