
Wednesday 14 September 2016

On board the Golgafrinchan 'B' Ark

At one point in Douglas Adams' The Restaurant at the End of the Universe the main characters find themselves aboard a huge spaceship filled with frozen bodies.

The ship's captain tells them that his planet, Golgafrincham, was facing impending doom and that the decision was made to evacuate on three huge Ark ships.

'A' Ark would carry all the best thinkers and artists, 'C' Ark all the workers - those who actually make and do practical things.

The ship they are now on - 'B' Ark - carried all the others ... the "middlemen".
Ark B was sent off first, and although they've never heard from them they assume that Arks A and C are following on behind somewhere.

Of course, what the Captain is unable to recognise is that Arks A and C never left, and the remaining Golgafrinchans simply wanted to rid themselves of the third of their population they regarded as "useless".





Periodically - and somewhat bizarrely - being a tester reminds me of this story. Because I can feel a bit like I'm on 'B' Ark. Unable to convince others of my value, and seemingly with little future.

Of course, it's a personal view and I wouldn't want to claim that it's true of testers generally.


What makes testers valuable?

Some of the skills that attracted me to testing several years ago don't seem to be as valuable anymore. Things like communication skills; business knowledge; customer empathy; critical thinking; the ability - and desire - to learn.

Value in a tester - at least if you want to get hired - now seems to lie mainly in ready-made automation and tool-use skills.
Arguably this is right - those skills are definitely important and useful, and they are certainly in demand.  Personally I'm always impressed by testers with deep technical skills, and I'm always striving to improve my own.

"Experience" is good if it means knowledge of the right technologies/tools. (And the right ones vary.)

But "experience" is bad if it means you've been around a few years and you're in your 40s.

In the wider context of software development some hold the view that dedicated testers aren't needed anymore anyway.  Testing might be - but that can be automated, can't it?
Or informed by actual users after going live. And then fixed quickly via a Continuous Deployment / DevOps pipeline. (As an aside: for a craft that is at its best context-driven, there seems to be little recognition in testing these days of a software world that's outside of web or mobile.)

Within the testing community, particularly on Twitter, the impression sometimes given is that the testers with the most value are those who are able to go to, and especially speak at, conferences.


Conveying value

Of course in Adams' story the punchline is that included among those the Golgafrinchans have cut adrift (in a ship programmed to crash-land) were all their Telephone Sanitisers.

Which leads eventually to the two-thirds who stayed behind being wiped out by an epidemic which starts from a dirty telephone.

I doubt that the neglect of some testing skills will lead to any problems quite that dramatic.
But maybe there's something in the notion that actual value is not always recognised. And that conveying one's value - always a challenge - is harder when the prevailing opinions differ.

Wednesday 20 April 2016

Introducing Test Team Lean Coffee


Background
Our team has a weekly hour in the diary for a session called "Experiments and Learning." (It pre-dates my joining.) As you'd guess, it's intended to be an opportunity for knowledge-sharing.  The sad thing is that, at a guess, four out of five of those sessions get cancelled because no-one has anything to share.

One reason for that is that we tend to think of them as demonstrations or mini-lectures which would require preparation.  I've done a few myself, including on Inattentional Blindness and on Session-Based Test Management, and they took a lot of personal time to put together.

Rather than waste the opportunity we have every week, I proposed that we try Lean Coffee at one session. If we liked it, then we would always have the option to do Lean Coffee periodically after that.

Lean Coffee, of course, requires almost no preparation. And gets the whole team (or as many as want to be involved) actively involved in the session rather than passively listening.

After proposing the idea to our QA Manager and getting his buy-in, I emailed my teammates to introduce the concept and asked them to let me know if they were interested in trying it out.
I wasn't entirely surprised to find that the vast majority of the team didn't respond at all, but when it was brought up again in our team meeting it seemed that there was interest.
And so, more reassured that there wouldn't be an empty (or embarrassingly quiet) room, I went ahead.

To give everyone a starting point for ideas on what to discuss, I shared in advance the list of topics that had been suggested at TestBash 2016.


The topics on the day
At the session, after combining stickies suggesting the same topic, we ended up with the following on our "to be discussed" list:

- Motivation. Keeping testing interesting
- The tester's role. Are testers needed anymore?
- Security testing
- Real-world use cases and environments
- TDD/ATDD/BDD


It was interesting that the first thing we talked about was the one that had received the most votes and so - in theory - was the one most people were interested in.  Yet I think it was one of the shorter discussions. That may have been because everyone was a bit unsure/reticent about talking and hadn't got warmed up yet. (Of course, maybe people also felt they weren't able to talk about feeling bored in front of our manager - although he was one of the ones posing that question!)

It so happened that we covered all of the topics we had in our session with just a 5 minute overrun. This was probably a good thing.  Whilst in general there should be no pressure to get through all the suggested items, as this was the first Lean Coffee session it was nice that no one was left feeling they had wasted their time coming up with topics.

Reflections and lessons learned
- We didn't have the typical group configuration for Lean Coffee. There were 11 of us, and the room didn't allow us to split into smaller groups at separate tables.

This turned out ok because our group weren't the normal self-selecting Lean Coffee group and some of the people there weren't so comfortable talking.  Conversely, I felt I probably talked too much myself. This was partly because as the instigator of the session (and the reason everyone had found themselves sitting there) I felt a need to keep conversation flowing.


- Initially I had intended to take the vote on whether to continue discussing a topic or not after 5 minutes. But in practice that didn't feel long enough - it seemed silly to ask if we should continue with something we were only just getting into.

Second-guessing myself on that also meant that I didn't take a proper "Roman vote". I tended to ask the group if they wanted to continue, and then people seemed to be looking around to see what everyone else was doing before they would risk putting their own hand up. Getting everyone to give thumbs up/thumbs down at the same time would probably have overcome that.

I think next time I might experiment with taking a first vote at 10 mins, and then at 5 minute intervals after that.


- After the session I emailed everyone to ask for feedback, and for any criticism/improvements.  (I felt I was slightly more likely to get honest feedback via email than I was by asking face-to-face.)

I got two responses - both positive and interested in doing it again.

It's hard to tell how valuable the rest of the team really found it.  But now they know what Lean Coffee is, and they know that it's there as a future option any time we want to make use of it.  And it's so simple that I don't need to be there to "run" it.

Sunday 31 January 2016

Using 'hdparm' to reduce the size of a hard disk

Why on earth would you want to reduce the size of a hard disk? Surely we always want as much disk space as possible?

Well, in my current role one of our product streams is full disk encryption for Windows.
And one problem for testing is that large hard disks can take a long time to encrypt - many hours, even days in extreme circumstances.  (Size of the disk isn't the only factor - processor power is another.)  So, sometimes, we want the system under test to have the smallest hard disk possible.

Since we often need to test against specific models of laptop, rather than building our own small-disk test rigs, there can be frustrating delays before we get a system into the right state to give us the information we want.

A simple and quick solution, which one of our developers tipped us off to, is to use the hdparm command in Linux.  It allows you to view and set parameters of your hard disk, and one of its perhaps lesser-used features is the ability to force the disk to show as only having a certain number of sectors.

If you ever find yourself needing to do this, here are the simple steps which I have found to work for me.  (But, of course, this should be done with caution.  And it's not advised to try it on disks which already contain data you want to keep.)
You'll need to be able to boot your machine to Linux and for that I use an Ubuntu USB stick built from here.


1. Boot your system to the Ubuntu USB stick
You may need to go into the machine BIOS and select a temporary startup device. When it loads, choose "Try Ubuntu without installing" to run the OS directly from the stick.


2. Find out how Ubuntu labels the HDD that you're going to change.
Typically, the main system disk of a machine will have the "logical name"  /dev/sda - but do make certain you're working on the right disk before you attempt any changes.  One way to check:
Press Ctrl+Alt+T to open the Terminal program (you're going to need it to run hdparm anyway) and enter:

sudo -i  (this gives you the necessary rights to run the commands you need)

then enter:

lshw -class disk

Look through the information returned for disks found on the system and note the "logical name" of the HDD you want to change.  Check this carefully to be sure you have the right one.
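
For illustration only, the block for a typical internal drive looks something like this (the product name and sizes below are invented, not values from any real system):

  *-disk
       description: ATA Disk
       product: ExampleDisk 500GB
       logical name: /dev/sda
       size: 465GiB (500GB)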


Staying in the Terminal....


3. Find out how many sectors the disk has in total.

Run the following hdparm command:

hdparm -N [logical name of your disk]

You'll get a result something like this (depending on the disk size):
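
For a 500GB disk that hasn't been altered before, it will be roughly along these lines (the figures are illustrative - yours will differ):

/dev/sda:
 max sectors   = 976773168/976773168, HPA is disabled

The first number is how many sectors are currently visible; the second is the disk's native maximum. It's worth noting down that second number in case you ever want to restore the full size.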



4. Work out how many sectors will give you the disk size you want

Assuming you know the original capacity of the hard disk, you can either:
a) Query or calculate the individual sector size, and from that work out how many sectors add up to your desired capacity
b) Simply apply a rough percentage based on the amount of disk space you want remaining after the change.  E.g. if the disk size is 500GB but you want to reduce it to 50GB (10%), just divide the total number of sectors that hdparm reports by 10.
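
As a rough worked example of option a), assuming 512-byte logical sectors (a common value, but check what your disk actually reports - hdparm -I will tell you):

# 50GB target = 50 * 1000^3 bytes, divided by an assumed 512-byte sector size
echo $((50 * 1000 * 1000 * 1000 / 512))    # prints 97656250

Either route only needs to be approximate - the aim is simply a disk small enough to encrypt quickly.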


5. Set the number of visible sectors to the number you worked out at step 4

At this stage it's worth repeating that you should only do this with caution. The "-N" switch in hdparm is officially marked as "Dangerous"!

Enter the following hdparm command:

hdparm -N p[desired no of sectors] --yes-i-know-what-i-am-doing [logical name of your disk]
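
For instance, carrying on with the illustrative figures from step 4 (substitute your own sector count and logical name - these values are examples, not a recommendation):

hdparm -N p97656250 --yes-i-know-what-i-am-doing /dev/sda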

Note the "--yes-i-know-what-i-am-doing" string!  This is not a joke - you really have to enter that, and it's there with good reason.  Once you enter the command you won't be asked if you're sure - the change will be made.

The 'p' in front of the number of sectors indicates "permanent".  It means that the change to the number of sectors will stay in effect unless and until you repeat this process and set it to something else. (Yes - you can get your original disk size back again!)  If you don't include the 'p' the change you've made will be lost on a system restart.
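
So, to give the illustrative disk above its full size back, you would later run something along the lines of the following, using the native maximum that hdparm -N reported for your own disk:

hdparm -N p976773168 --yes-i-know-what-i-am-doing /dev/sda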

Once you shut down Ubuntu, remove the USB, and restart your system normally the change should be in effect.
At this stage we would either install a new copy of Windows, or restore an existing image for that machine - giving us a clean OS to work on with an unusually tiny HDD to encrypt.

I haven't used this extensively so I can't vouch for it working on all types of disk and systems. My understanding is that it won't work on removable USB hard disks, for example.

And, remember, some of hdparm's functions can be dangerous to your system.
If you're at all uncertain I'd strongly recommend reading more about hdparm before use.




Wednesday 7 October 2015

Struggling to Convert the Test Case Believers (or: Apparently test cases ARE testing after all)

Frustrated by how attached to detailed static Test Cases my teammates seem to be, I recently presented to them on Session-Based Test Management. I wanted to show that test cases are not the only way to track and report on testing, and even give management those metrics that they love.

Talking about SBTM was also a jumping-off point for introducing other topics including "Testing vs Checking"; how exploratory testing is understood in Context-Driven Testing; and approaches to test design in ET, e.g. heuristics and personas.

There was nothing special about the presentation and I'm slightly hesitant to include it here.  Anyone who has worked in an organisation that takes ET and SBTM seriously will probably find it lacking.

To try and illustrate SBTM in action I also showed examples from a "home-brew" version of it that I had used in a previous job.  (I was a sole tester in my first testing job so, with no-one to guide me, I put a SBTM process together from blog posts and my own trial-and-error experiments.)

To be clear: although I am generally a fan, I claim neither to be a member of the Context-Driven Testing community, nor qualified to accurately represent their views.



My Exploratory Testing is better than your Exploratory Testing
My teammates certainly would argue that they do exploratory testing already but my impression is that it tends to be closer to the "ad hoc" approach that got ET an unfair reputation for being vague - and which SBTM was designed to overcome.

Also, in my team ET is definitely seen as a poor relation to test cases. And I have genuinely heard comments like "if there's time left after running all the tests in the spreadsheet we can do some exploratory testing".

Failing to create debate
I'm not delusional enough to think that one presentation from me would start a revolution, but I did hope we might at least have some debate about why they see so much value in test cases. And I hoped to hear some opinions/feedback on SBTM as a possible approach to delivering value through more thoughtful and skilled Exploratory Testing - with proper note-taking!

Sadly that just didn't happen. And what I thought would make a good blog post ended up being this one. Sorry.

It was almost as if what I was describing was a kind of interesting curiosity, but not relevant to them. Like when we watch a documentary about how people live in another culture.

Because they already do exploratory testing (as they see it). After all, exploratory testing is one of the things they use to come up with ideas for the real stuff - the Test Cases!

Now, of course, you'll be thinking that the failure to engage with my audience was down to my poor presentation skills. Maybe so.
Aware of that possibility, and of the fact that I was only skimming the surface of some big topics, I followed up the presentation by circulating links to articles by better communicators than me. (Reproduced below.)

I also offered to demo Rapid Reporter later to anyone who was interested; and even emailed them all their own copy of the Test Heuristics Cheat Sheet.

A couple of weeks later, not only has no-one shown any interest, I find myself being asked to write even more test cases (because apparently our problem is that we don't have enough), and even attending Test Case Peer Review meetings.

Not giving up
It has to be said that after moaning on Slack about my presentation's lack of impact, I got some encouragement and a couple of good suggestions there.

Damian Meydac Jean pointed out that in situations like these there's often more to be gained by working on newbies than on people who've worked there for a while.

And Vernon Richards made a good point about how it can be more powerful to relate new ideas like these to specific issues or pain-points facing the team.

Seeking the case for test cases
But maybe I'm the one who's wrong. Why should the rest of the team/business change what they do to fit in with a lot of fancy-Dan ideas I'm repeating from the Twitter-bubble?

If I try to ask colleagues why they find test cases so valuable it does seem to strike them as an odd question to ask. There is just a sense in which there have to be test cases because, you know, we're doing testing. (Although they tend to say we're doing "QA".)
But more specific reasons include:
  • they provide scripts which can then be automated 
    • this does strike me as a pretty good reason, although the reality is we have little automation and only a fraction of the test cases written will ever be automated
  • they suggest test ideas 
    • by this it's meant that team members can read the test cases to get ideas for what tests to carry out on a feature/product
  • they serve as documentation
    • they can be useful as a way for testers to familiarise themselves with products they haven't worked on before
    • we have a situation where test cases are sometimes viewed as an oracle on "correct" product behaviour. Yikes.
  • they tell us when we're finished regression testing

Let's have the debate right here!
Now I couldn't resist unfairly undermining some of the reasons I just listed, but I would genuinely be interested to hear more about the positives regarding the test case approach and why it might provide more value than exploratory testing in certain contexts.

There is definitely a possibility that I am against them simply because I am too lazy to write them, and I find executing them boring.  (In which case, maybe I should just get out of testing - God knows I think about that a lot.)

So, if there's anyone reading this with a view on the value to be found in test cases please add a comment.  Despite my overall jokey tone here I really would like to be challenged on my cynical attitude.


-------------------------
For reference, below are the "further info" links I circulated, and added to our internal Wiki, after my presentation
-------------------------

Exploratory Testing

Why Scripted Testing Sucks, Steve Green

Exploratory Testing in a Regulated Environment, Josh Gibbs

Exploratory Testing, Michael Bolton
(Describes what ET is, and what it isn’t, according to Context-Driven Testing.  Also has a long list of links to further resources on ET)


Session-Based Test Management

Session-Based Test Management (overview), James Bach

Session-Based Test Management (practical details of using it to manage testing), Jonathan Bach

Managing Exploratory Testing, Rob Lambert

Learning to use Exploratory Testing in Your Organisation, Mike Talks

Benefits of session-based test management, Katrina Clokie


Exploratory Testing Frameworks

Generic Testing Personas, Katrina Clokie

Testing Tours, Mike Kelly

James Whittaker’s  Exploratory Testing Tours
https://msdn.microsoft.com/en-us/library/jj620911.aspx

Tuesday 6 October 2015

An Irrational Dislike of Test Cases


I don't like Test Cases.

I'm almost certainly the only one among my current teammates who winces every time I hear the term being used. And I hear it every day. (I deliberately avoid using it - if forced to discuss them I might say "scenarios" instead ... which is pretty childish behaviour on my part.)

Or maybe I don't like a certain form of Test Cases.
It depends on what we consider a Test Case to be.

It doesn't have to imply scripted testing, and it doesn't necessarily have to be limiting.
I certainly like to have a number of scenarios, described at a high level in a sentence or two, which I have identified and will cover off during testing.

In the past, when I used my own version of Session-Based Test Management, alongside a charter which set the scope of a test session I would often bullet-point specific tests that I wanted to cover in that session.
So maybe Test Cases are fine as long as they are a framework for exploratory testing and not used as a definition of "done".

But I definitely don't like detailed, step-by-step Test Cases.

Being tasked to execute them is tedious and frustrating and lets my mind wander to think about all the jobs I'd rather do than be a Tester.

In my current role it's quite usual to have regression cycles where a team of 3-4 Testers may spend 3-4 weeks working through Test Case sets which are:
- incomplete, even as a lightweight regression set (of course)
- out-of-date with changes in the product (of course)
- often unclear. (No doubt the person who wrote them understood them at the time).
- sometimes wrong. (The "expected result" is incorrect and doesn't seem like it could ever have been right.)

I don't blame the previous incumbents for not doing a complete job - they probably realised what a futile task that was. They probably just wanted to make a checklist of stuff that it was important not to forget.

I sympathise because I know that having to write detailed Test Cases - as I am expected to do - can be even more of a grind.

Each time I write a test case, I'm painfully aware of the limitations of the form.

I'm thinking "this doesn't cover the possibilities".
Should I write out all the paths and variations I can think of?  It would use up a lot of time that might be better spent actually testing - but more importantly I won't think of everything.  There will be potential problems I cannot conceive of until I have the software in front of me. (And I still won't think of everything even then.)

So I find myself writing test cases which are often no more than a title, and replacing detailed steps with "use initiative to ...."

But in Test Case Peer Review meetings (yes, we have those) it's made clear to me that my approach won't do.

But am I being cynical about Test Cases simply because I'm basically lazy and don't like having to do the boring parts of testing?

Others around me seem to have a belief in the magical, protective power of Test Cases. That if we have enough Test Cases everything will be ok.
Writing Test Cases early and often seems more important than actual testing. And if all the Test Cases have been written, then there might be time for Exploratory Testing.

But if you do any Exploratory Testing then you have to make sure you write up the Test Cases from it afterwards.