
Tuesday, 6 October 2015

An Irrational Dislike of Test Cases


I don't like Test Cases.

I'm almost certainly the only one among my current teammates who winces every time I hear the term being used. And I hear it every day. (I deliberately avoid using the term myself - if forced to discuss Test Cases I might say "scenarios" instead ... which is pretty childish behaviour on my part.)

Or maybe I don't like a certain form of Test Cases.
It depends on what we consider a Test Case to be.

It doesn't have to imply scripted testing, and it doesn't necessarily have to be limiting.
I certainly like to have a number of scenarios, described at a high level in a sentence or two, which I have identified and will cover off during testing.

In the past, when I used my own version of Session-Based Test Management, I would often bullet-point, alongside the charter which set the scope of a test session, the specific tests I wanted to cover in that session (a sketch of what that looked like is below).
So maybe Test Cases are fine as long as they are a framework for exploratory testing and not used as a definition of "done".
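
For illustration, a charter with its bullet points might look something like this (an invented example, not from any real session):

    Charter: Explore the checkout flow with invalid card details to discover how payment errors are handled.
    - expired card
    - card number that fails the Luhn check
    - valid card with insufficient funds
    - submitting the same payment twice in quick succession

Each bullet is a prompt, not a script: a reminder of ground to cover, with the detail left to the session.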

But I definitely don't like detailed, step-by-step Test Cases.

Being tasked to execute them is tedious and frustrating, and it lets my mind wander to all the jobs I'd rather do than be a Tester.

In my current role it's quite usual to have regression cycles where a team of 3-4 Testers may spend 3-4 weeks working through Test Case sets which are:
- incomplete, even as a lightweight regression set (of course)
- out-of-date with changes in the product (of course)
- often unclear (no doubt the person who wrote them understood them at the time)
- sometimes wrong (the "expected result" is incorrect and doesn't seem like it could ever have been right)

I don't blame the previous incumbents for not doing a complete job - they probably realised what a futile task that was. They probably just wanted to make a checklist of stuff that it was important not to forget.

I sympathise because I know that having to write detailed Test Cases - as I am expected to do - can be even more of a grind.

Each time I write a Test Case, I'm painfully aware of the limitations of the form.

I'm thinking "this doesn't cover the possibilities".
Should I write out all the paths and variations I can think of?  It would use up a lot of time that might be better spent actually testing - but more importantly I won't think of everything.  There will be potential problems I cannot conceive of until I have the software in front of me. (And I still won't think of everything even then.)

So I find myself writing Test Cases which are often no more than a title, and replacing detailed steps with "use initiative to ...."

But in Test Case Peer Review meetings (yes, we have those) it's made clear to me that my approach won't do.

Or am I being cynical about Test Cases simply because I'm basically lazy and don't like having to do the boring parts of testing?

Others around me seem to have a belief in the magical, protective power of Test Cases: that if we have enough Test Cases, everything will be ok.
Writing Test Cases early and often seems more important than actual testing. And if all the Test Cases have been written, then there might be time for Exploratory Testing.

But if you do any Exploratory Testing then you have to make sure you write up the Test Cases from it afterwards.





