So, I've completely drunk the testing Kool-Aid. I've made it my personal goal to incorporate automated testing into my entire process, and I've seen it work. I've had multiple releases go through QA with no functional bugs*. I preach test-first development to everyone who will listen.
The problem is that our users actually want buggy features.
This was pointed out to me by some of my coworkers in a code review where we discovered a class that was fairly complex but had no unit tests. We talked about how this came to be, and the answer was that if the project goes to QA with bugs, nobody gets in trouble. If it goes to QA with no bugs but fewer features than expected, then the project "slips," people panic, and we have PMs and above at our desks wringing their hands and asking what went wrong. The pressure shifts from dev, who just has to pump stuff out, to QA, who has to give the approval. And I fall into this trap every time.
However, I've said "my users", but it isn't my users that are giving me pressure on the release dates. It's my project management. Maybe I should say:
My project management actually wants buggy features.
For those of you following along at home, this isn't specific to my current place of employment. Now that I consider it, I think I've seen this everywhere I've worked. The only time I didn't feel it was on the two projects where we had a reputation for quality in our QA releases. I feel like lightning has to strike in order to build up this kind of reputation, and I haven't been able to make a tall enough rod on my current projects.
What's worse is that we clearly increase the amount of time between when a bug is written and when it is discovered. That means bugs take longer to fix, which ultimately means the project takes longer, even though we've "hit our dates." This happens because everyone expects a long QA phase that can't be estimated well.
I think the Agile folks would suggest that the problem is with "QA resources" in general. I have a tough time letting go of that specialization. Test plans are difficult to write well, and even more difficult to execute with consistency and an eye for detail.
Maybe the right answer for us (if this can't be addressed by talking to the various date-concerned entities) is to not expose a QA handoff date at all, but to incorporate QA into the dev team itself as partners in the ultimate deliverable date. XP would seem to suggest this as the way to go. We would have more flexibility (agility?) in handing sections to QA as they become testable, and ideally we could even test things earlier in our cycle than we currently do.
I worry that some parts of the team leadership may hyperventilate at the idea that there isn't an official QA handoff date they can track and put a checkmark next to, or, put less snarkily, that they'd have less information with which to schedule QA resources.
I still don't know if I can reconcile those needs, but I know that I don't like the behaviors that the current process encourages.
* Functional bugs are what you get when QA says, "should it do this?" and Dev says, "oops, no." There are tons of other kinds of bugs, like usability issues or miscommunication issues. Most of those aren't addressed by automated testing.