
Wednesday, 7 October 2015

Struggling to Convert the Test Case Believers (or: Apparently test cases ARE testing after all)

Frustrated by how attached to detailed static Test Cases my teammates seem to be, I recently presented to them on Session-Based Test Management. I wanted to show that test cases are not the only way to track and report on testing, and even give management those metrics that they love.

Talking about SBTM was also a jumping-off point for introducing other topics including "Testing vs Checking"; how exploratory testing is understood in Context-Driven Testing; and approaches to test design in ET, e.g. heuristics and personas.

There was nothing special about the presentation and I'm slightly hesitant to include it here.  Anyone who has worked in an organisation that takes ET and SBTM seriously will probably find it lacking.

To try and illustrate SBTM in action I also showed examples from a "home-brew" version of it that I had used in a previous job.  (I was a sole tester in my first testing job so, with no-one to guide me, I put an SBTM process together from blog posts and my own trial-and-error experiments.)

To be clear: although I am generally a fan, I claim neither to be a member of the Context-Driven Testing community, nor qualified to accurately represent their views.



My Exploratory Testing is better than your Exploratory Testing
My teammates certainly would argue that they do exploratory testing already but my impression is that it tends to be closer to the "ad hoc" approach that got ET an unfair reputation for being vague - and which SBTM was designed to overcome.

Also, in my team ET is definitely seen as a poor relation to test cases. And I have genuinely heard comments like "if there's time left after running all the tests in the spreadsheet we can do some exploratory testing".

Failing to create debate
I'm not delusional enough to think that one presentation from me would start a revolution, but I did hope we might at least have some debate about why they see so much value in test cases. And I hoped to hear some opinions/feedback on SBTM as a possible approach to delivering value through more thoughtful and skilled Exploratory Testing - with proper note-taking!

Sadly that just didn't happen. And what I thought would make a good blog post ended up being this one. Sorry.

It was almost as if what I was describing was a kind of interesting curiosity, but not relevant to them. Like when we watch a documentary about how people live in another culture.

Because they already do exploratory testing (as they see it). After all, exploratory testing is one of the things they use to come up with ideas for the real stuff - the Test Cases!

Now, of course, you'll be thinking that the failure to engage with my audience was down to my poor presentation skills. Maybe so.
Aware of that possibility, and of the fact that I was only skimming the surface of some big topics, I followed up the presentation by circulating links to articles by better communicators than me. (Reproduced below.)

I also offered to demo Rapid Reporter later to anyone who was interested; and even emailed them all their own copy of the Test Heuristics Cheat Sheet.

A couple of weeks later, not only has no-one shown any interest, I find myself being asked to write even more test cases (because apparently our problem is that we don't have enough), and even attending Test Case Peer Review meetings.

Not giving up
It has to be said that after moaning about my presentation's lack of impact on Slack, I got some encouragement and a couple of good suggestions there.

Damian Meydac Jean pointed out that in situations like these there's often more to be gained by working on newbies than on people who've worked there for a while.

And Vernon Richards made a good point about how it can be more powerful to relate new ideas like these to specific issues or pain-points facing the team.

Seeking the case for test cases
But maybe I'm the one who's wrong. Why should the rest of the team/business change what they do to fit in with a lot of fancy-Dan ideas I'm repeating from the Twitter-bubble?

If I try to ask colleagues why they find test cases so valuable it does seem to strike them as an odd question to ask. There is just a sense in which there have to be test cases because, you know, we're doing testing. (Although they tend to say we're doing "QA".)
But more specific reasons include:
  • they provide scripts which can then be automated 
    • this does strike me as a pretty good reason, although in reality we have little automation and only a fraction of the test cases written will ever be automated
  • they suggest test ideas 
    • by this it's meant that team members can read the test cases to get ideas for what tests to carry out on a feature/product
  • they serve as documentation
    • they can be useful as a way for testers to familiarise themselves with products they haven't worked on before
    • we have a situation where test cases are sometimes viewed as an oracle on "correct" product behaviour. Yikes.
  • they tell us when we're finished regression testing

Let's have the debate right here!
Now I couldn't resist unfairly undermining some of the reasons I just listed, but I would genuinely be interested to hear more about the positives regarding the test case approach and why it might provide more value than exploratory testing in certain contexts.

There is definitely a possibility that I am against them simply because I am too lazy to write them, and I find executing them boring.  (In which case, maybe I should just get out of testing - God knows I think about that a lot.)

So, if there's anyone reading this with a view on the value to be found in test cases please add a comment.  Despite my overall jokey tone here I really would like to be challenged on my cynical attitude.


-------------------------
For reference, below are the "further info" links I circulated, and added to our internal Wiki, after my presentation
-------------------------

Exploratory Testing

Why Scripted Testing Sucks, Steve Green

Exploratory Testing in a Regulated Environment, Josh Gibbs

Exploratory Testing, Michael Bolton
(Describes what ET is, and what it isn’t, according to Context-Driven Testing.  Also has a long list of links to further resources on ET)


Session-Based Test Management

Session-Based Test Management (overview), James Bach

Session-Based Test Management (practical details of using it to manage testing), Jonathan Bach

Managing Exploratory Testing, Rob Lambert

Learning to use Exploratory Testing in Your Organisation, Mike Talks

Benefits of session-based test management, Katrina Clokie


Exploratory Testing Frameworks

Generic Testing Personas, Katrina Clokie

Testing Tours, Mike Kelly

James Whittaker’s Exploratory Testing Tours
https://msdn.microsoft.com/en-us/library/jj620911.aspx

Tuesday, 6 October 2015

An Irrational Dislike of Test Cases


I don't like Test Cases.

I'm almost certainly the only one among my current teammates who winces every time I hear the term being used. And I hear it every day. (I deliberately avoid using it - if forced to discuss them I might say "scenarios" instead ... which is pretty childish behaviour on my part.)

Or maybe I don't like a certain form of Test Cases.
It depends on what we consider a Test Case to be.

It doesn't have to imply scripted testing, and it doesn't necessarily have to be limiting.
I certainly like to have a number of scenarios, described at a high level in a sentence or two, which I have identified and will cover off during testing.

In the past, when I used my own version of Session-Based Test Management, alongside a charter which set the scope of a test session I would often bullet point specific tests that I want to cover in that session. 
So maybe Test Cases are fine as long as they are a framework for exploratory testing and not used as a definition of "done".
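For what it's worth, my home-brew session sheets looked roughly like the sketch below. Everything in it is invented for illustration, and the layout is my own adaptation cobbled together from blog posts, not the canonical Bach session sheet format:

```
SESSION:  2015-10-07 / Tester: (me) / Duration: 90 mins

CHARTER:  Explore the order-history page with unusual account
          states, to discover sorting and pagination problems.

TESTS TO COVER:                 (specific tests bulleted under the charter)
- account with zero orders
- account with exactly one full page of orders (boundary)
- orders spanning a timezone/DST change

NOTES:                          (free-form notes taken while testing)
- pagination control vanished at exactly 20 orders - follow up

BUGS:
- BUG-1234: sort by date ignores the time component

T / B / S:  70 / 20 / 10        (rough % of session spent on test
                                 design+execution, bug investigation,
                                 and setup - the metric management
                                 can be given, per Bach's SBTM)
```

The charter sets the scope; the bulleted tests stop me forgetting things I'd identified up front; and the notes and T/B/S split are what make the session reportable afterwards.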

But I definitely don't like detailed, step-by-step Test Cases.

Being tasked to execute them is tedious and frustrating and lets my mind wander to think about all the jobs I'd rather do than be a Tester.

In my current role it's quite usual to have regression cycles where a team of 3-4 Testers may spend 3-4 weeks working through Test Case sets which are:
- incomplete, even as a lightweight regression set (of course)
- out-of-date with changes in the product (of course)
- often unclear. (No doubt the person who wrote them understood them at the time).
- sometimes wrong. (The "expected result" is incorrect and doesn't seem like it could ever have been right.)

I don't blame the previous incumbents for not doing a complete job - they probably realised what a futile task that was. They probably just wanted to make a checklist of stuff that it was important not to forget.

I sympathise because I know that having to write detailed Test Cases - as I am expected to do - can be even more of a grind.

Each time I write a test case, I'm painfully aware of the limitations of the form.

I'm thinking "this doesn't cover the possibilities".
Should I write out all the paths and variations I can think of?  It would use up a lot of time that might be better spent actually testing - but more importantly I won't think of everything.  There will be potential problems I cannot conceive of until I have the software in front of me. (And I still won't think of everything even then.)

So I find myself writing test cases which are often no more than a title, and replacing detailed steps with "use initiative to ...."

But in Test Case Peer Review meetings (yes, we have those) it's made clear to me that my approach won't do.

But am I being cynical about Test Cases simply because I'm basically lazy and don't like having to do the boring parts of testing?

Others around me seem to have a belief in the magical, protective power of Test Cases. That if we have enough Test Cases everything will be ok.
Writing Test Cases early and often seems more important than actual testing. And if all the Test Cases have been written, then there might be time for Exploratory Testing.

But if you do any Exploratory Testing then you have to make sure you write up the Test Cases from it afterwards.