Tuesday, 6 October 2015

An Irrational Dislike of Test Cases


I don't like Test Cases.

I'm almost certainly the only one among my current teammates who winces every time the term is used. And I hear it every day. (I deliberately avoid using it - if forced to discuss them I might say "scenarios" instead ... which is pretty childish behaviour on my part.)

Or maybe I don't like a certain form of Test Cases.
It depends on what we consider a Test Case to be.

It doesn't have to imply scripted testing, and it doesn't necessarily have to be limiting.
I certainly like to have a number of scenarios, described at a high level in a sentence or two, which I have identified and will cover off during testing.

In the past, when I used my own version of Session-Based Test Management, alongside a charter which set the scope of a test session I would often bullet-point the specific tests I wanted to cover in that session.
So maybe Test Cases are fine as long as they are a framework for exploratory testing and not used as a definition of "done".

But I definitely don't like detailed, step-by-step Test Cases.

Being tasked to execute them is tedious and frustrating, and lets my mind wander to all the jobs I'd rather do than be a Tester.

In my current role it's quite usual to have regression cycles where a team of 3-4 Testers may spend 3-4 weeks working through Test Case sets which are:
- incomplete, even as a lightweight regression set (of course)
- out-of-date with changes in the product (of course)
- often unclear (no doubt the person who wrote them understood them at the time)
- sometimes wrong (the "expected result" is incorrect and doesn't seem like it could ever have been right)

I don't blame the previous incumbents for not doing a complete job - they probably realised what a futile task that was. They probably just wanted to make a checklist of stuff that it was important not to forget.

I sympathise because I know that having to write detailed Test Cases - as I am expected to do - can be even more of a grind.

Each time I write a test case, I'm painfully aware of the limitations of the form.

I'm thinking "this doesn't cover the possibilities".
Should I write out all the paths and variations I can think of?  It would use up a lot of time that might be better spent actually testing - but more importantly I won't think of everything.  There will be potential problems I cannot conceive of until I have the software in front of me. (And I still won't think of everything even then.)

So I find myself writing test cases which are often no more than a title, and replacing detailed steps with "use initiative to ...."

But in Test Case Peer Review meetings (yes, we have those) it's made clear to me that my approach won't do.

Or am I being cynical about Test Cases simply because I'm basically lazy and don't like having to do the boring parts of testing?

Others around me seem to have a belief in the magical, protective power of Test Cases. That if we have enough Test Cases everything will be ok.
Writing Test Cases early and often seems more important than actual testing. And if all the Test Cases have been written, then there might be time for Exploratory Testing.

But if you do any Exploratory Testing then you have to make sure you write up the Test Cases from it afterwards.






Thursday, 10 September 2015

Webinar: Getting Started in Security Testing by Dan Billing

Brief notes on this webinar, presented by Dan Billing and organised by the Ministry of Testing.
---------------------

Rumour has it that there are no testers in the world who didn't sit in on Dan Billing's intro to Security Testing webinar this evening.

But in the unlikely event that someone out there did miss it I can say that it's highly recommended.


Currently I work for a company whose business is secure technology, although leaning more towards endpoint/device security than web.  Our team doesn't actually tend to do the detailed security testing (because we're not expert Penetration Testers) but we obviously have security as a key point to keep in mind whilst doing functional testing. So the more I can learn about security the better.

For me this webinar dovetailed nicely with the Troy Hunt "Hack Yourself First" course which I've recently started working through. (With Dan himself using Troy Hunt's practice site for one of his examples and, like me, encountering the regular blasting of that site's database when wanting to demo something from it!)

What you'll learn

The webinar sets the context for the importance of security testing before giving an entertaining overview of recent high-impact security breaches.

Dan outlines the factors that define a good security testing approach, and a couple of helpful mnemonics for modelling potential threats to your application - including his own EXTERMINATE model.
And there's a list of areas presented which will typically be sources of security bugs in applications.

Inevitably the most fascinating part of the webinar is Dan's live demos of some of the key web vulnerabilities, eg. SQL injection, cross-site scripting (XSS), and how they can be easily exploited with no more than a browser and a bit of know-how.
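
To give a flavour of the SQL injection part, here is a minimal sketch of my own in Ruby against an in-memory SQLite database - not one of Dan's demos, and the table, data and payload are invented purely for illustration:

    require 'sqlite3'   # assumes the sqlite3 gem is installed

    db = SQLite3::Database.new(':memory:')
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'top secret')")

    # Attacker-controlled input: the classic "always true" payload
    input = "nobody' OR '1'='1"

    # Naive string interpolation lets the payload rewrite the query logic
    unsafe = db.execute("SELECT secret FROM users WHERE name = '#{input}'")
    puts "Interpolated query returns: #{unsafe.inspect}"    # leaks alice's secret

    # A parameterised query treats the same input as a literal value
    safe = db.execute("SELECT secret FROM users WHERE name = ?", [input])
    puts "Parameterised query returns: #{safe.inspect}"     # returns nothing

Typing that same style of payload into a vulnerable login or search field is essentially what the browser-only demos rely on.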

The reality today - on the web particularly - is that the tools for malicious attacks are widely, easily and freely available and therefore the threat is very real.

I certainly came away with a couple of new ideas for exploits I can attempt when I get into the office tomorrow.
As I said, highly recommended.


A recording of the webinar will be made available on the Ministry of Testing Dojo soon.

Dan has also previously provided insights into Security Testing for the Testing in the Pub podcast series and those are also recommended.



Wednesday, 9 September 2015

Maybe testing isn't for me

It seems like every 3 or 4 months I find myself questioning whether testing is really for me.

I consider myself an enthusiastic tester, and I'm always striving to be better at it. (That's what this blog is mostly about, after all.)  
But I'm not sure that I offer what testing needs. Maybe it's an unrequited attraction.

I've been learning testing for a number of years now across a couple of roles but I've yet to find it as fulfilling and enjoyable as I think it can be. Is that because the roles haven't been quite right for me?

It sometimes seems there's an interesting and rewarding testing world that I might hear about on Twitter, but day-to-day testing can be frustrating or boring.
If you're not already in that other world - not already exposed to the "right" technologies and techniques, or at least supported in learning them - then it seems hard to reach it.

I admit I'm picky about the kind of products/industries I want to work with, and about how much I'm prepared to commute. Nor am I looking to be an SDET. So, of course, all of this limits my options.

But even so, from an unscientific look at a sample of tester job ads on LinkedIn, I don't recognise myself:
- either they emphasise test scripts, documentation and following set processes. (And if that's testing then I definitely would prefer to do something else.)
- or they emphasise skills and experience in specific areas (usually tools) that I either haven't used, or don't feel confident I can offer to a good enough level when my knowledge is mostly from self-study.

"It's not you, it's me"
Increasingly, though, I think the wording of job ads isn't the problem. Rather, the key part of the previous paragraph is the acknowledgment that I "don't feel confident" - arising from uncertainty about my own value as a tester.

When Katrina Clokie tweeted the testing community with the simple question "How do you know you're a good tester?", I had to respond "I don't".



Gaining Confidence
Of course, personality is a factor here - I'm not a particularly confident or extroverted person generally. But that just means I might have to work a bit harder at it than others. That's ok.

It's all very well having a groaning Trello backlog for learning. Maybe I need to put some of that effort into a strategy for understanding my value as a tester, and not base that value mostly on being able to conquer a huge "to learn" list.

So how can I actually find the confidence, or at least the perspective, that my roles up to now and my continuous learning process aren't giving me? Some initial ideas are:

- Wider "experience"?
I've only worked in limited, and perhaps not typical, testing contexts. Can I find more resources like the upcoming New Voice Media webinar which give insight into the realities of being a tester in a spectrum of organisations?

- Find a mentor?
Some short-term mentoring could be a good way to get feedback on what I do, or don't, have to offer.  

- Talk it through?
Simply initiate conversations with other testers, or with hiring managers, to gain a picture of the wider "market" and how I compare to it?

Definitely some things to work on here.

Tuesday, 9 June 2015

Resisting the Tyranny of the Learning Backlog


Whilst working through The Selenium Guidebook I caught myself doing something that I know I'm sometimes guilty of: trying to power through learning to get it "done" and move on to the next thing on my list.

If a course/book outlines a concept or works through an example and then encourages the student to play around with that in their own time, too often I don't do it. I'd rather continue "making progress".

Why? Because as the title I chose for this blog suggests, I allow myself to feel under the pressure of "too much to learn".

That learning backlog in Trello, and the list of tweeted links that I favourite for further investigation, get longer every day. And learning is slow when I mostly have to fit it into late evenings or weekends.

Learning shouldn't be a numbers game
Because of some sort of underlying insecurity, perhaps reinforced by most job ads, I feel that there is too much I don't know to call myself a good Tester. I worry that the majority of Testers are more skilled, and better informed, than I am.

I tend also to beat myself up if it takes me a long time to "get" a topic or exercise in my learning. A "long time" being longer than I imagine other people would need.

But where's my evidence for either of those thoughts? I need to apply the critical thinking skills that I claim to value!

I'm in danger of playing a numbers game with learning. Of thinking it's about quantity not quality.

And yet I know that I'm more likely to absorb material if I spend additional time working and practicing on it myself beyond the given examples. Sometimes I do that, but too often I neglect it to move on to another subject area that I feel I need to know about.
It's not such a surprise then that I can find learning doesn't "stick".

Specialise or generalise - that old question
I've often mulled in the past whether I should narrow down an area of testing to specialise in. (And risk narrowing my opportunities in the process.)

Generally, I do focus on broadly "web"-related learning because that's where I got into testing and where my interests mostly lie. But that's still a big area - and it's not even what I currently do for a day job.

Whilst technical skills are where I feel most lacking, I wouldn't want to neglect studying what I believe to be at the core of a tester's responsibilities (even if you wouldn't get that impression from most job ads) - thinking skills.

So it can pretty quickly seem like there is "too much to learn" and that I need to touch all of it to be taken seriously.

Intellectually I know that I can't be good at every tool or technology and at all kinds of testing. But emotionally I worry that I always need to be good at more stuff than I am.

Having an overview of multiple topics is no doubt good - but is it better than being well-informed on a few? (Especially when you consider that knowledge of tools/technologies needs to be constantly kept on top of and 'upgraded'?)

The "generalist" T-shaped Tester
I would regard myself as sharing Rob Lambert's view of the value of the T-shaped Tester. And, having got into testing quite late in my working life, I have other skills and experience to bring to the horizontal bar of that T.

But if Rob sees "testing" as representing the vertical bar of the T, what I get hung up on is how far to generalise or to specialise within that "testing" bar.

Am I trying to be a kind of "Ŧ-shaped" Tester (Unicode character 0166)? (Not that that shape quite captures it either!) With a broad range of technical knowledge?

At the moment it feels like I have unrealistic expectations of my ability to learn.

Perhaps I need the confidence that not knowing something is ok providing you have the capacity and will to learn it when you need it. And that you always bring a set of core skills whatever the context.

Never stop learning
Learning is a continuous process, and it's a motivator for me. I wouldn't want to be in a situation where there was nothing new to learn.

But it shouldn't be stressful. Working through a learning backlog should be a source of pleasure and not a cloud hanging over me.

I need to make that mental shift, and maybe that requires narrowing my ambitions.

"The Selenium Guidebook" and Thoughts on my Learning Process


"The Selenium Guidebook: How To Use Selenium, successfully" by Dave Haeffner does not specifically set out to teach you how to automate elements on web pages with the WebDriver API. What it does set out to do - as the full title suggests - is show how you can practically implement WebDriver automation to add business value, avoiding common pitfalls in the process.

And in that vein, this post doesn't exactly review the book. It does a bit of that, but it's more about my personal experience of working through it, and reflects on how I might improve my learning process in the "code" domain going forward.

Some basics about the book
I worked on the Ruby version of the book. A version for Java is also now available.

Topics the book covers include:
  • Formulating an automation strategy
  • Understanding what constitutes the right approach to writing your checks
  • Using Page Objects/abstraction to make a more robust suite that your team will trust (see the sketch after this list)
  • Running your checks against multiple system and browser configurations on Sauce Labs
  • Parallelisation to keep large suites running in sensible time-frames
  • How to set up suites which run automatically as part of continuous integration
  • Multiple examples of how to approach specific automation challenges
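
To give a flavour of the Page Object idea mentioned above, here is a minimal sketch of my own in Ruby - the class, locators and URL are illustrative and not lifted from the book's examples:

    require 'selenium-webdriver'

    # A Page Object gathers one page's locators and actions in one place,
    # so checks can state what they want done rather than how
    class LoginPage
      USERNAME_FIELD = { id: 'username' }.freeze
      PASSWORD_FIELD = { id: 'password' }.freeze
      LOGIN_FORM     = { id: 'login' }.freeze

      def initialize(driver)
        @driver = driver
      end

      def visit
        @driver.get 'http://example.com/login'   # illustrative URL
      end

      def log_in_as(username, password)
        @driver.find_element(USERNAME_FIELD).send_keys username
        @driver.find_element(PASSWORD_FIELD).send_keys password
        @driver.find_element(LOGIN_FORM).submit
      end
    end

    driver = Selenium::WebDriver.for :firefox
    login  = LoginPage.new(driver)
    login.visit
    login.log_in_as('some_user', 'some_password')
    driver.quit

If the page's markup changes, only the locators inside the class need updating rather than every check that logs in.
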
What's great about the book:
  • Haeffner's style is ultra-clear
  • He provides not just a download of code from the book to compare yours with, but a real website to automate and practice on
  • There is also additional supporting material, eg. videos, available to purchase

There were points when I was working through The Selenium Guidebook that I felt frustrated - not with the book, which is very good - but with my own lack of progress.

The frustrations came when something wasn't working for me and I felt I didn't have the knowledge to understand why or fix it.  I tried to think about why I was allowing myself to get frustrated.

Coping with a lack of coding knowledge
First, a slightly boring aside to explain where my knowledge stood when I started the book. 

I had studied WebDriver using Alan Richardson's excellent Java-based course about a year before working through this book. However, in the intervening time I had taken a job that didn't involve web testing and so my knowledge had gone stale. In resurrecting it, I decided to go back to coding with Ruby - which I had been learning the basics of previously - because I felt the less intuitive syntax of Java hadn't helped my learning of WebDriver.

Haeffner advises that an ability to code is not necessarily needed upfront. Whilst that is certainly true, in my experience learning is slower if you don't have a lot of coding knowledge.

I think the biggest problem caused by my lack of coding experience was not always being able to make good guesses about the cause/nature of errors - and error messages - I encountered, and therefore struggling to correct them.

Troubleshooting hints for the examples in the book could be helpful, but are probably impractical given the variety of problems students might face.

It might have been useful to know exactly which versions of particular Ruby gems Dave had tested his examples with. I'm not sure if it was ever really the cause of a problem I hit (there are still a few examples that I haven't managed to get working) but I did wonder at times whether an issue might relate to my having installed the latest version of gem X whereas Dave possibly had been working with an earlier one.
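
For instance, pinning versions in the project's Gemfile would at least record which combination an example was last known to work with. A sketch (the gems and version numbers here are purely illustrative placeholders, not ones Dave has stated):

    # Gemfile - illustrative version pins, not the book's actual ones
    source 'https://rubygems.org'

    gem 'selenium-webdriver', '~> 2.47'
    gem 'rspec',              '~> 3.3'

With a Gemfile like that, running bundle install and then launching the checks via bundle exec keeps everyone on the same known-good versions.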

Putting in the hours
Dave Haeffner does very generously offer up his own contact details in the book to answer questions. I deliberately resisted that because I didn't think it was fair on the author, and because it wouldn't have helped me to have the answers provided too easily.

Mostly I got failing examples working by putting in the hours and using the routes you might expect:
- Google and StackOverflow to look up error messages
- seeing if the provided sample files ran on my setup and, if so, comparing that sample file with what I had written to find the difference.

And in one extreme case using a file comparison tool to try and find what the hell was different between my failing project and the provided one. (The difference turned out to be not in my Ruby code but in gems I was using not being listed in my Gemfile.)

Of course, this "pain" is actually good for learning and I need to remember that when the frustration bites. When I eventually managed to get the HTTP status codes example (with BrowserMob Proxy) working there was a real sense of achievement because I had had to do research/thinking/work on my own to get there.

By the time I had gone through all the material in the book I felt it had been a really good investment and I had stopped worrying about whether I should have been able to get through it more smoothly. I shall certainly be keeping The Selenium Guidebook on hand and coming back to it.

Finding the right IDE/Editor
Something practical that I think would have helped me, and that I still need to sort out, was either a better IDE or better knowledge of the one I was using.  (I suppose this too falls under the heading of a lack of coding experience.)

After looking for free Ruby IDEs, I went with Aptana Studio. Quite possibly I don't have Aptana set up correctly - I struggled even to find useful documentation - but I found it of limited use beyond picking up syntax errors.

For Alan Richardson's Java course I had used the suggested IDE, JetBrains' IntelliJ. And I missed its extensive code completion suggestions here, and its ability to pick up basic typos on top of syntax errors. Sadly, JetBrains' RubyMine IDE is not free.

I also found that running commands with Aptana Studio's built-in terminal (running Git Bash) wasn't always helpful. Like the time when I could not get my first Sauce Labs checks to run and wasted an hour or more trying to figure out the syntax/structural error that Aptana seemed to be reporting. When I ran the check in a plain Windows command prompt instead, I straight away saw more useful feedback: I simply had my Sauce Labs credentials wrong.
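
For what it's worth, the shape of what I had got wrong looks roughly like this - the Sauce Labs credentials travel inside the remote URL, so a typo there surfaces as an authentication failure rather than a code error. A rough sketch from memory, assuming the selenium-webdriver 2.x-era remote API, with illustrative capability values:

    require 'selenium-webdriver'

    # Credentials come from environment variables rather than being hard-coded
    username   = ENV['SAUCE_USERNAME']
    access_key = ENV['SAUCE_ACCESS_KEY']

    caps = Selenium::WebDriver::Remote::Capabilities.new(
      browser_name: 'internet explorer',
      platform:     'Windows 7'
    )

    driver = Selenium::WebDriver.for(
      :remote,
      url: "http://#{username}:#{access_key}@ondemand.saucelabs.com:80/wd/hub",
      desired_capabilities: caps
    )

    driver.get 'http://example.com'
    driver.quit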

Give myself a chance
But the simplest way I could have improved the process for me was to relax a bit. Not to feel I was working against the clock to get through the material. Not to overly criticise myself when I might spend a whole evening on a few pages but still not get the example working by bedtime.

And to give myself more credit when I eventually did chase down, and correct, my errors.

This is a generic flaw I sometimes show in my learning process - unrealistic expectations of how quickly and how fully I can process something.  It's something I will blog on separately in my next post .....

---------------
"The Selenium Guidebook: How to use Selenium, successfully" by Dave Haeffner is available now as an e-book in both Ruby and Java versions. A range of packages with extra supporting material are also available at the same link. 

Haeffner also produces a free weekly Selenium tips email.