I'm reading "Beyond Java", by Bruce Tate, and he complains as many have about checked exceptions. I've never understood this complaint, and I don't want to say that it's because I understand how to use them and all these smart people don't, but I'm about to.
I like checked exceptions; they keep me honest. Tate asserts in many places in his book that the few actual problems static typing catches would be caught by unit tests anyway, but this seems like a case where that argument doesn't hold: in the unit tests I write and read, failure cases are the least well handled.
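To make that concrete, here's a minimal sketch (JUnit 3 style; the file path is made up) of the kind of failure-path test I rarely see. The checked FileNotFoundException on FileInputStream's constructor is exactly what forces this case into view:

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;

import junit.framework.TestCase;

// A sketch of a failure-path test. The compiler won't let you call
// FileInputStream's constructor without acknowledging the checked
// FileNotFoundException, which is what reminds you to test this case.
public class MissingFileTest extends TestCase {

    public void testMissingFileFailsLoudly() {
        try {
            new FileInputStream("/no/such/file.properties");
            fail("expected FileNotFoundException for a missing file");
        } catch (FileNotFoundException expected) {
            // the unhappy path is exercised, not just the happy path
        }
    }
}
```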
Tate (and others) asserts that checked exceptions are invasive: most of the time you can't do anything with one but re-throw it, so you're always throwing them, and eventually you just ignore anything you read that has "exception" in it.
He's right about that part. The problem I see is that it's never led him to use checked and unchecked exceptions together. Here's how I evaluate whether to handle, re-throw, or convert an exception:
1. Can I handle it? Obviously, if I can, I do.
2. Will callers be able to handle it? Then let them.
3. If callers can't handle it, is it imperative (due to data corruption) that callers be aware of this situation?
4. If none of the above apply, turn it into a RuntimeException.
I ask these questions at integration points, like the barrier between a webapp and the database, a property loader and the filesystem, or a utility program and a web service. If we had no checked exceptions, I'm pretty confident I would fail to consider all the potential failures at these integration points, and would have far less robust code as a result.
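Here's a minimal sketch of rule 4 at one of those boundaries, a property loader talking to the filesystem (the class name and resource path are hypothetical, not from any real project):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical property loader: the filesystem/classpath boundary is
// exactly the kind of integration point where I decide what to do
// with the checked IOException.
public class PropertyLoader {

    public static Properties load(String resourceName) {
        InputStream in = PropertyLoader.class.getResourceAsStream(resourceName);
        if (in == null) {
            throw new RuntimeException("Missing resource: " + resourceName);
        }
        Properties props = new Properties();
        try {
            props.load(in);
        } catch (IOException e) {
            // Rule 4: callers can't do anything useful about this, so
            // convert the checked exception into an unchecked one.
            throw new RuntimeException("Could not load " + resourceName, e);
        } finally {
            try {
                in.close();
            } catch (IOException ignored) {
                // nothing sensible to do if close fails
            }
        }
        return props;
    }
}
```

Callers just write PropertyLoader.load("/app.properties") with no throws clause to propagate, but the failure still surfaces loudly if the file is missing or unreadable.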
Friday, October 27, 2006
Selenium rocks the house
I'm so impressed with Selenium. The browser integration and the IDE make it very easy to set up and use. I literally got it working in about ten minutes. I had a couple of bugs from QA, so I used Selenium to record the steps to reproduce them in my dev environment, then switched a couple of attributes from "off" to "on" to represent what the code *should* do. And now I'm back to automated test-first programming.
I see that with the Selenium RC server, I could actually run these things as JUnit tests, or I can run them with the Selenium suite. I'm torn as to which way I want to go. Obviously, recording them via the Selenium IDE is the way to start, but being able to make modules in Java code sounds pretty powerful. Once you go to Java, though, you don't go back to the IDE, so I'm worried that would make it harder to use Selenium tests to communicate with the QA department.
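Here's a minimal sketch of what a recorded script might look like once translated into a JUnit test driven through the Selenium RC server (the application URL, locators, and test data are all invented for illustration):

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

import junit.framework.TestCase;

// A recorded Selenium IDE script rewritten as a JUnit test that talks
// to a Selenium RC server on localhost:4444. Everything app-specific
// (URL, locators, credentials) is made up.
public class LoginFlowTest extends TestCase {

    private Selenium selenium;

    protected void setUp() throws Exception {
        selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();
    }

    public void testLoginShowsWelcomePage() {
        selenium.open("/login");
        selenium.type("username", "qa-user");
        selenium.type("password", "secret");
        selenium.click("submit");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Welcome"));
    }

    protected void tearDown() throws Exception {
        selenium.stop();
    }
}
```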
I'm sure that the QA department can learn to use this tool -- they've learned far more byzantine automation tools in the past. A frequent problem we have is communicating with the folks who do our acceptance testing. They're a hand-picked subset of our actual users, and it's often difficult to communicate unusual situations. The bug reports we get are typical of non-technical bug reports: basically "got this error," without much information about what happened before the error occurred. That's not their fault -- they're likely not thinking about what they've done before, and in many cases the problem is so complicated or convoluted that they can't realistically be expected to remember.
The thing I'm really wondering is whether Selenium is simple enough to be used by our acceptance testers to help articulate what they're seeing. Could we really ask them to just record their session and play back their experience? That would be pretty exciting.
Tuesday, October 24, 2006
So much less spaghetti code?
I talked to a Rails proponent today who mentioned that he was very tired of all the spaghetti code he was working with in his current development language (Java), and said that working with Ruby and Rails was such a relief. He held this up as an attribute of Ruby itself: that it doesn't create such spaghetti code.
I, the skeptic, suggested that the spaghetti just hasn't been cooked yet.