One of the best things I feel I took away from working on our 4th year project at school was software process. I think that was the whole point, so I feel good about my experience. Not only have I been using test-driven development for a while now, but I'm starting to use other "tricks" I've picked up along the way, like keeping two test suites: one that is always green, and one where developers can put tests they want to pass but don't have time to work on right now (say, a special case of a feature). This really comes from discussions and posts with Ryan (here, here, and here).
I just think I'm able to apply so much from this one "class" to what I'm doing in the real world. I've had too many classes where I don't think I'll ever use anything that was taught in them. I guess I just like practical things. ;-)
Listening to: Barenaked Ladies - Falling For The First Time
A and B test suites came from certain problems I found with XP that I don't think I ever explained in a blog post. Maybe I'll go into that in detail later. XP takes an idealized view of testing sometimes, so in "real life" you need to adapt it.
Like:
1. What if you have a test that fails that's not critical/stop-ship? XP doesn't address different classes of defects. You could demote this test from A to B and have a green build again.
2. Saying you always want to be green is psychologically bad for developers. In a system with one test suite, developers won't want to write failing tests because they will technically "break the build" (especially if they can't write a fix right away), and this is exactly what you WANT them to do: write more tests!
3. This also prevents QA from writing unit tests that expose failures without writing the fix. You can't put these unit tests in the main test suite because you'll "break the build".
The tricky part is the B-to-A promotion (or A-to-B demotion). It could be as informal or formal as you like. I don't think the A/B test suites worked out as well with AudioMan as I would have liked; we didn't use or keep an eye on B enough.
If you had a serious QA team on a project, though, the B suite could be their "turf" and it would be better organized, since they would deal with all of the pending defect fixes along with the project manager (PM). Bug numbers on tests also help with that.
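To make this concrete, here's a minimal sketch of how the A/B split could look with plain JUnit suites. The suite and test class names (PlaylistTest, UnicodeFilenameTest, etc.) are made up for illustration, not taken from AudioMan; the point is just that a promotion or demotion is nothing more than moving one addTestSuite() line between the two files.

// SuiteA.java - must always be green; the build box fails the build on any red test here.
import junit.framework.Test;
import junit.framework.TestSuite;

public class SuiteA {
    public static Test suite() {
        TestSuite suite = new TestSuite("A: always green");
        suite.addTestSuite(PlaylistTest.class);
        suite.addTestSuite(TagReaderTest.class);
        return suite;
    }
}

// SuiteB.java - tests we want to pass eventually (pending fixes, QA-reported defects).
// The build box runs and reports it, but a red B suite does not break the build.
public class SuiteB {
    public static Test suite() {
        TestSuite suite = new TestSuite("B: pending");
        // Demoting a test = move its line here from SuiteA; tag it with the bug number.
        suite.addTestSuite(UnicodeFilenameTest.class); // bug #42 (hypothetical)
        return suite;
    }
}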
Like Spolsky was talking about with social interfaces: you have to enable the behaviour you want to happen and discourage the behaviour you don't. On a project, you can do this with software and/or process. IMO, one test suite does not enable effective project testing.
On top of having a "B suite", I think it would be great to have an automated "feature" suite, more of a customer test. For certain releases, you have a bunch of customer tests that have to pass before release. If they do, then you stop coding and release.
Each new release can have its own suite (a sub-suite of the "customer tests").
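Nested JUnit suites would work for that too. This is only a sketch in the same JUnit 3 style as above, and the release and use-case class names are invented:

// CustomerTests.java - the umbrella suite of all automated customer tests.
import junit.framework.Test;
import junit.framework.TestSuite;

public class CustomerTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("Customer tests");
        suite.addTest(Release11Tests.suite()); // one sub-suite per release
        return suite;
    }
}

// Release11Tests.java - the customer tests that gate the 1.1 release.
public class Release11Tests {
    public static Test suite() {
        TestSuite suite = new TestSuite("Release 1.1 customer tests");
        // Each test class is one automated use case, e.g. "import a folder of MP3s".
        suite.addTestSuite(ImportFolderUseCaseTest.class);
        suite.addTestSuite(EditTagUseCaseTest.class);
        return suite;
    }
}

When everything in the release sub-suite goes green on the integration box, you stop coding and ship.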
Now this is only good if you have someone (the customer?) *actually* writing these tests in an automated form. Most of the time I think requirements and use cases don't get translated into something that can be run many times a day on an integration box.
I know this is what XP is all about, but I think that one of the hard parts about XP is 1) getting it through to people that it would help to have a person on-site and 2) getting that person to write automated tests.
Hell, *I* don't like writing automated tests against the UI, so why would the customer?
In XP I think these are called "acceptance tests", but they really mean the same thing as "customer tests". Yes, they are supposed to represent actual use cases but they don't have to be against the UI.
This is where "explicitly writing your code to be tested" comes in, which implicitly happens when you do TDD with acceptance tests as well as regular unit tests.
You can separate UI actions out into methods, and call those methods from (the UI or) a test in order to perform the actions that are part of a use case.
Sure, you're not actually clicking on buttons or filling in text boxes. But if you push as much code as you can out of the UI code and into the business logic, this can all be tested automatically.
Then for important builds you run through some UI test cases to make sure everything is connected properly, doing your manual testing.
The problem a lot of projects have is a lot of code stuck in UI handlers where it cannot be tested automatically! Encapsulate this in a method call, push it down to the business logic layer (e.g. JFace has Actions), and you can call it with automated tests.
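Here's roughly what I mean, as a sketch rather than real AudioMan or JFace code (Library and RemoveTrackAction are invented names). The UI handler shrinks to a call to run(), and the acceptance test drives the very same method without touching any widgets:

// Library.java - business-logic layer: no SWT/JFace imports, so it runs headless on the build box.
import java.util.ArrayList;
import java.util.List;

class Library {
    private final List tracks = new ArrayList();

    public void add(String track)    { tracks.add(track); }
    public void remove(String track) { tracks.remove(track); }
    public int  size()               { return tracks.size(); }
}

// What a JFace-style Action boils down to: run() just forwards to the business logic.
// The UI wires a button or menu item to run(); tests call it directly.
class RemoveTrackAction {
    private final Library library;
    private final String track;

    public RemoveTrackAction(Library library, String track) {
        this.library = library;
        this.track = track;
    }

    public void run() { library.remove(track); }
}

// RemoveTrackTest.java - the automated "acceptance" test exercises the use case with no UI.
import junit.framework.TestCase;

public class RemoveTrackTest extends TestCase {
    public void testRemovingATrackShrinksTheLibrary() {
        Library library = new Library();
        library.add("song.mp3");

        new RemoveTrackAction(library, "song.mp3").run();

        assertEquals(0, library.size());
    }
}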
As for the customer writing tests, this is not meant to be taken literally. They might write the use cases they want to work, and you translate them into code. They also need help from QA people to think of exceptions and error cases.
Some customers have reps that are coders and can do this. Some do not, and require your help to make the testing code.