Thursday, November 30, 2006
The first thing I have to explain to business people when they're trying to give me requirements is that there are two kinds of "never". There's the business/real-world never, and then there's developer never. What regular people usually mean when they say never is "not very often".
In order to get the point across, and to clarify what people want when they say "never", I explain the repercussions: "Okay, you say this never happens. Is it okay if the program asks the user to contact support and then exits if it does happen?" That tends to get us to the right answer pretty quickly.
Trickier is "always". It has the same problem of being far more precise in software than it is in English. What's worse is that it's often implied or inferred in an otherwise reasonable requirement. "The software will send an email when an order is placed." Is that an always statement? Asking this question can be very illuminating. Typical responses are things like "Well, not if there's been a problem -- then we want to call." or "Yes, if they've given us an email address".
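To make the distinction concrete, here's roughly what the two interpretations turn into in code. This is just a sketch -- Order, callCustomer, and sendConfirmationEmail are names I made up for illustration, not anything from a real system:

interface Order {
    String getCustomer();      // null would be the "never" case
    boolean hasProblem();
    String getEmailAddress();  // null when they didn't give us one
}

public class OrderProcessor {
    public void process(Order order) {
        // Developer "never": if it happens anyway, fail loudly and
        // point the user at support instead of silently guessing.
        if (order.getCustomer() == null) {
            throw new IllegalStateException(
                    "Order has no customer; please contact support.");
        }
        // "Always send an email"... with the unless clauses made explicit:
        if (order.hasProblem()) {
            callCustomer(order);           // a problem means they want a call
        } else if (order.getEmailAddress() != null) {
            sendConfirmationEmail(order);  // only if we have an address
        }
    }

    private void callCustomer(Order order) { /* ... */ }
    private void sendConfirmationEmail(Order order) { /* ... */ }
}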
Friday, November 17, 2006
Do (our) users want broken features?
So, I've completely drunk the testing Kool-Aid. I've taken it as my personal goal to incorporate automated testing in my entire process, and have seen it work. I've had multiple releases go through QA with no functional bugs*. I preach test-first development to everyone that will listen.
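For anyone who hasn't seen it in action, test-first just means the test exists before the code it exercises does. A trivial sketch -- OrderTotaler is a made-up class for illustration, not real code from work:

import junit.framework.TestCase;

// The test comes first: it fails (or won't even compile) until the
// class below exists and the tax math is actually right.
public class OrderTotalerTest extends TestCase {
    public void testTotalIncludesTax() {
        OrderTotaler totaler = new OrderTotaler(0.05);  // 5% tax rate
        assertEquals(10.50, totaler.total(10.00), 0.001);
    }
}

// ...and the implementation gets written afterward, to make it pass.
class OrderTotaler {
    private final double taxRate;
    OrderTotaler(double taxRate) { this.taxRate = taxRate; }
    double total(double subtotal) { return subtotal * (1 + taxRate); }
}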
The problem is that our users actually want buggy features.
This was pointed out to me by some of my coworkers in a code review where we discovered a class that was fairly complex, but had no unit tests. We talked about why this came to be, and the answer given was that if the project goes to QA but has bugs, nobody gets in trouble. If it goes to QA, has no bugs, but has fewer than expected features, then the project "slips", people panic, and we have PMs and above at our desks wringing their hands and asking "what went wrong." The pressure shifts from dev, who are pushed to pump stuff out, to QA, who are pushed to give the approval. And I fall into this trap every time.
However, I've said "our users", but it isn't our users who are giving me pressure on the release dates. It's my project management. Maybe I should say:
My project management actually wants buggy features.
For those of you following along at home, this isn't specific to my current place of employment. Now that I consider it, I think I've seen this everywhere I've worked. The only time I didn't feel it was on the two projects where we had a reputation for quality in our QA releases. I feel like lightning has to strike in order to build up this kind of reputation, and I haven't been able to make a tall enough rod on my current projects.
What's worse is that we clearly increase the amount of time between when a bug is written and when it is discovered. By then the code is no longer fresh in anyone's head, so the bugs take longer to fix, which ultimately means the project takes longer, even though we've "hit our dates." The only reason this flies is that everyone already expects a long, poorly estimated stretch in QA.
I think the Agile folks would suggest that the problem is with "QA resources" in general. I have a tough time letting go of that specialization. Test plans are difficult to write well, and even more difficult to execute with consistency and an eye for detail.
Maybe the right answer for us (if this can't be addressed by talking to the various date-concerned entities) is to not expose a QA handoff date, but to incorporate QA into the dev team itself as partners in the ultimate deliverable date. XP would seem to suggest this as the way to go. We would have more flexibility (agility?) in giving sections to QA when they're testable, and ideally we could even test things earlier in our cycle than we currently do.
I worry that some parts of the team leadership may hyperventilate at the idea that there isn't an official QA handoff date they can track and put a checkmark next to -- or, put less snarkily, that they'd have less information with which to schedule QA resources.
I still don't know if I can reconcile those needs, but I know that I don't like the behaviors that the current process encourages.
* functional bugs are what you get when QA says, "should it do this?" and Dev says, "oops, no." There are tons of other kinds of bugs, like usability issues or miscommunication issues. Most of those aren't addressed by automated testing.
Tuesday, November 14, 2006
Phidgets: the Beginning
So, I'm a total hardware n00b. I've written software for a decade, and I'm happy talking about inheritance, encapsulation, and polymorphism, and tossing around newer buzzwords like dependency injection and all that.
But I decided to start a hardware project. I'm not going to give away the ending, mostly because I'll probably never get there. Remind me to tell you the story about the coffee table I was going to make once.
Anyway, I bought an 8/8/8 Interface Kit from Phidgets.com. It's a smallish thing, about the size of a deck of cards, and about a hundred bucks delivered. It's designed to pass inputs and outputs over USB to a computer, where the software actually makes the decisions about what to do when. Plus, one of the hojillion languages they support is Java, so that feels right at home.
I've got the thing on my desk now, and am installing the software. Except it needs the .NET framework, blah blah blah. It's nice enough to redirect me to the download page, and after getting confused about my 64-bit proc but 32-bit operating system, I'm good. That done, their software installs fine. While .NET was .downloading I got Eclipse set up with their Java examples, and once the software was installed and the device connected, their InterfaceKit example ran right out of the box. Granted, I have nothing hooked up to it, so I don't know if it's doing anything, but it prints lots of stuff to the screen. I grabbed a wire and jammed it between the ground and the different inputs, and successfully made stuff print to the screen. Cool!
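Wiring up your own input listener turns out to be only a few lines. Fair warning: I'm writing the method names from memory of their Javadoc, so treat this as a sketch rather than gospel:

import com.phidgets.InterfaceKitPhidget;
import com.phidgets.event.InputChangeEvent;
import com.phidgets.event.InputChangeListener;

public class InputWatcher {
    public static void main(String[] args) throws Exception {
        InterfaceKitPhidget kit = new InterfaceKitPhidget();
        kit.addInputChangeListener(new InputChangeListener() {
            public void inputChanged(InputChangeEvent e) {
                // Fires whenever a digital input changes -- e.g. when I
                // jam a wire between ground and one of the input terminals.
                System.out.println("Input " + e.getIndex()
                        + " is now " + e.getState());
            }
        });
        kit.openAny();            // grab the first InterfaceKit on USB
        kit.waitForAttachment();  // block until the device is attached
        Thread.sleep(60000);      // listen for a minute, then clean up
        kit.close();
    }
}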
Next up is outputs. For that I have an LED that I clipped off of a defunct printer screen. I'd have desoldered it, but Nicole just bought the desoldering iron, and I wanted her to be the first to use it. I hook it up, and nothing happens. So I hit their website, which has the "n00b manual for InterfaceKit", which is six pages long. I've read manuals that are 20 pages and have less useful information. There happens to be a section on "hooking up LEDs to your InterfaceKit", and by section I mean a page. It mentions anodes and cathodes, which I look up on Wikipedia; I turn my LED around, and it works.
Sort of. I've written my own software to turn output #1 on for 2 seconds, then off. The LED responds by shining bright and steady in the "on" state, but instead of being off for the "off" state, it flashes. I vaguely remember that "digital" outputs have some kind of square wave mojo going on below a certain hobgoblin threshold or something, but all that information is eleven years old, and there is a lot of beer and parties standing between my CE 101 class and today's attempt. I also suspect that the answer is actually on the LED page of the manual, but again, too much inebriation between those symbols and the present day.
So, I've got a flashing LED. The software has been fairly straightforward to this point, and while they don't release the source code to their Java libraries, they do have a reasonably well-written Javadoc. Moreover, everything so far is written in a pretty straightforward manner. To turn the LED on and off, I've written these lines:
interfaceKit.setOutputState(1, true);   // LED on
Thread.sleep(2000);                     // hold it for two seconds
interfaceKit.setOutputState(1, false);  // LED off
So, I'm reasonably sure I haven't screwed that up. I even have System.out.printlns in between, which of course make me feel dirty, but I'm ignoring that for now.
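For completeness, the whole blinky program is barely a dozen lines. Same caveat as before: the open-and-attach calls are from memory of their Javadoc, so double-check the names against the real thing:

import com.phidgets.InterfaceKitPhidget;

public class Blink {
    public static void main(String[] args) throws Exception {
        InterfaceKitPhidget kit = new InterfaceKitPhidget();
        kit.openAny();
        kit.waitForAttachment();
        kit.setOutputState(1, true);   // LED on...
        Thread.sleep(2000);
        kit.setOutputState(1, false);  // ...and off. In theory.
        kit.close();
    }
}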
I'm not the only hardware n00b connected to the global inter-tubes, so I sign up for their forums. I could complain about the way the forums insist on mailing me a generated password, as though I'm going to be doing stock trading on this thing or something, but that's really just sniping at this point. So I shut up about the password thing and search the forums for "flashing LED".
No luck there. I now think the problem has to do with the shoddy wire I'm using (cut out of an old phone cable, stripped with a kitchen knife, and not solid wire but twisted copper). I changed the LED to another output, and this time off was off, but on was flashing. I went to change it again, dropped the LED off the end of the wire (I just had it wrapped, not soldered or anything), and have decided to declare that to be the problem and call it a night.
Next time: wire strippers and real wire!