
Wednesday 7 October 2015

Struggling to Convert the Test Case Believers (or: Apparently test cases ARE testing after all)

Frustrated by how attached my teammates seem to be to detailed, static Test Cases, I recently presented to them on Session-Based Test Management. I wanted to show that test cases are not the only way to track and report on testing - and that SBTM can still give management those metrics they love.

Talking about SBTM was also a jumping-off point for introducing other topics including "Testing vs Checking"; how exploratory testing is understood in Context-Driven Testing; and approaches to test design in ET eg. heuristics, personas.

There was nothing special about the presentation and I'm slightly hesitant to include it here.  Anyone who has worked in an organisation that takes ET and SBTM seriously will probably find it lacking.

To try and illustrate SBTM in action I also showed examples from a "home-brew" version of it that I had used in a previous job.  (I was a sole tester in my first testing job so, with no-one to guide me, I put a SBTM process together from blog posts and my own trial-and-error experiments.)

To be clear: although I am generally a fan, I claim neither to be a member of the Context-Driven Testing community, nor qualified to accurately represent their views.



My Exploratory Testing is better than your Exploratory Testing
My teammates certainly would argue that they do exploratory testing already but my impression is that it tends to be closer to the "ad hoc" approach that got ET an unfair reputation for being vague - and which SBTM was designed to overcome.

Also, in my team ET is definitely seen as a poor relation to test cases. And I have genuinely heard comments like "if there's time left after running all the tests in the spreadsheet we can do some exploratory testing".

Failing to create debate
I'm not delusional enough to think that one presentation from me would start a revolution, but I did hope we might at least have some debate about why they see so much value in test cases. And I hoped to hear some opinions/feedback on SBTM as a possible approach to delivering value through more thoughtful and skilled Exploratory Testing - with proper note-taking!

Sadly that just didn't happen. And what I thought would make a good blog post ended up being this one. Sorry.

It was almost as if what I was describing was a kind of interesting curiosity, but not relevant to them. Like when we watch a documentary about how people live in another culture.

Because they already do exploratory testing (as they see it). After all, exploratory testing is one of the things they use to come up with ideas for the real stuff - the Test Cases!

Now, of course, you'll be thinking that the failure to engage with my audience was down to my poor presentation skills. Maybe so.
Aware of that possibility, and of the fact that I was only skimming the surface of some big topics, I followed up the presentation by circulating links to articles by better communicators than me. (Reproduced below.)

I also offered to demo Rapid Reporter later to anyone who was interested; and even emailed them all their own copy of the Test Heuristics Cheat Sheet.

A couple of weeks later, not only has no-one shown any interest, but I find myself being asked to write even more test cases (because apparently our problem is that we don't have enough), and even attending Test Case Peer Review meetings.

Not giving up
It has to be said that after moaning about my presentation's lack of impact on Slack, I got some encouragement and a couple of good suggestions there.

Damian Meydac Jean pointed out that in situations like these there's often more to be gained by working on newbies than on people who've worked there for a while.

And Vernon Richards made a good point about how it can be more powerful to relate new ideas like these to specific issues or pain-points facing the team.

Seeking the case for test cases
But maybe I'm the one who's wrong. Why should the rest of the team/business change what they do to fit in with a lot of fancy-Dan ideas I'm repeating from the Twitter-bubble?

If I try asking colleagues why they find test cases so valuable, it seems to strike them as an odd question. There is just a sense in which there have to be test cases because, you know, we're doing testing. (Although they tend to say we're doing "QA".)
But more specific reasons include:
  • they provide scripts which can then be automated 
    • this does strike me as a pretty good reason, although the reality is that we have little automation and only a fraction of the test cases we write will ever be automated
  • they suggest test ideas 
    • by this it's meant that team members can read the test cases to get ideas for what tests to carry out on a feature/product
  • they serve as documentation
    • they can be useful as a way for testers to familiarise themselves with products they haven't worked on before
    • we have a situation where test cases are sometimes viewed as an oracle on "correct" product behaviour. Yikes.
  • they tell us when we're finished regression testing

Let's have the debate right here!
Now I couldn't resist unfairly undermining some of the reasons I just listed, but I would genuinely be interested to hear more about the positives regarding the test case approach and why it might provide more value than exploratory testing in certain contexts.

There is definitely a possibility that I am against them simply because I am too lazy to write them, and I find executing them boring.  (In which case, maybe I should just get out of testing - God knows I think about that a lot.)

So, if there's anyone reading this with a view on the value to be found in test cases please add a comment.  Despite my overall jokey tone here I really would like to be challenged on my cynical attitude.


-------------------------
For reference, below are the "further info" links I circulated, and added to our internal Wiki, after my presentation.
-------------------------

Exploratory Testing

Why Scripted Testing Sucks, Steve Green

Exploratory Testing in a Regulated Environment, Josh Gibbs

Exploratory Testing, Michael Bolton
(Describes what ET is, and what it isn’t, according to Context-Driven Testing.  Also has a long list of links to further resources on ET)


Session-Based Test Management

Session-Based Test Management (overview), James Bach

Session-Based Test Management (practical details of using it to manage testing), Jonathan Bach

Managing Exploratory Testing, Rob Lambert

Learning to use Exploratory Testing in Your Organisation, Mike Talks

Benefits of session-based test management, Katrina Clokie


Exploratory Testing Frameworks

Generic Testing Personas, Katrina Clokie

Testing Tours, Mike Kelly

James Whittaker's Exploratory Testing Tours
https://msdn.microsoft.com/en-us/library/jj620911.aspx

Tuesday 6 October 2015

An Irrational Dislike of Test Cases


I don't like Test Cases.

I'm almost certainly the only one among my current teammates who winces every time I hear the term being used. And I hear it every day. (I deliberately avoid using it - if forced to discuss them I might say "scenarios" instead ... which is pretty childish behaviour on my part.)

Or maybe I don't like a certain form of Test Cases.
It depends on what we consider a Test Case to be.

It doesn't have to imply scripted testing, and it doesn't necessarily have to be limiting.
I certainly like to have a number of scenarios, described at a high level in a sentence or two, which I have identified and will cover off during testing.

In the past, when I used my own version of Session-Based Test Management, alongside a charter which set the scope of a test session I would often bullet-point specific tests that I wanted to cover in that session.
So maybe Test Cases are fine as long as they are a framework for exploratory testing and not used as a definition of "done".
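
For illustration, here's roughly the shape of one of those home-brew session sheets. The charter and bullet points are invented for this example, and the layout only loosely borrows from the Bach brothers' SBTM session sheet format - mine was cruder:

    CHARTER
      Explore the account settings pages for problems with input validation

    AREAS
      Settings | Profile | Validation

    START: 10:00   DURATION: short (60 min)   TESTER: me

    SPECIFIC TESTS TO COVER
      - boundary values on the display-name field
      - pasting formatted/rich text into plain-text fields
      - saving with the network connection dropped mid-request

    TEST NOTES / BUGS / ISSUES
      (filled in as the session runs)

The point isn't the exact layout - it's that the bullet points give the session a backbone without dictating every step.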

But I definitely don't like detailed, step-by-step Test Cases.

Being tasked to execute them is tedious and frustrating and lets my mind wander to think about all the jobs I'd rather do than be a Tester.

In my current role it's quite usual to have regression cycles where a team of 3-4 Testers may spend 3-4 weeks working through Test Case sets which are:
- incomplete, even as a lightweight regression set (of course)
- out-of-date with changes in the product (of course)
- often unclear. (No doubt the person who wrote them understood them at the time).
- sometimes wrong. (The "expected result" is incorrect and doesn't seem like it could ever have been right.)

I don't blame the previous incumbents for not doing a complete job - they probably realised what a futile task that was. They probably just wanted to make a checklist of stuff that it was important not to forget.

I sympathise because I know that having to write detailed Test Cases - as I am expected to do - can be even more of a grind.

Each time I write a test case, I'm painfully aware of the limitations of the form.

I'm thinking "this doesn't cover the possibilities".
Should I write out all the paths and variations I can think of?  It would use up a lot of time that might be better spent actually testing - but more importantly I won't think of everything.  There will be potential problems I cannot conceive of until I have the software in front of me. (And I still won't think of everything even then.)

So I find myself writing test cases which are often no more than a title, and replacing detailed steps with "use initiative to ...."

But in Test Case Peer Review meetings (yes, we have those) it's made clear to me that my approach won't do.

But am I being cynical about Test Cases simply because I'm basically lazy and don't like having to do the boring parts of testing?

Others around me seem to have a belief in the magical, protective power of Test Cases. That if we have enough Test Cases everything will be ok.
Writing Test Cases early and often seems more important than actual testing. And if all the Test Cases have been written, then there might be time for Exploratory Testing.

But if you do any Exploratory Testing then you have to make sure you write up the Test Cases from it afterwards.






Thursday 10 September 2015

Webinar: Getting Started in Security Testing by Dan Billing

Brief notes on this webinar: presented by Dan Billing, and organised by Ministry of Testing
---------------------

Rumour has it that there are no testers in the world who didn't sit in on Dan Billing's intro to Security Testing webinar this evening.

But in the unlikely event that someone out there did miss it I can say that it's highly recommended.


Currently I work for a company whose business is secure technology, although leaning more towards endpoint/device security than web.  Our team doesn't actually tend to do the detailed security testing (because we're not expert Penetration Testers) but we obviously have security as a key point to keep in mind whilst doing functional testing. So the more I can learn about security the better.

For me this webinar dovetailed nicely with the Troy Hunt "Hack Yourself First" course which I've recently started working through. (With Dan himself using Troy Hunt's practice site for one of his examples and, like me, encountering the regular blasting of that site's database when wanting to demo something from it!)

What you'll learn

The webinar sets the context for the importance of security testing before giving an entertaining overview of recent high-impact security breaches.

Dan outlines the factors that define a good security testing approach, and a couple of helpful mnemonics for modelling potential threats to your application - including his own EXTERMINATE model.
And there's a list of areas presented which will typically be sources of security bugs in applications.

Inevitably the most fascinating part of the webinar is Dan's live demos of some of the key web vulnerabilities, eg. SQL injection, cross-site scripting (XSS), and how they can be easily exploited with no more than a browser and a bit of know-how.

The reality today - on the web particularly - is that the tools for malicious attacks are widely, easily and freely available and therefore the threat is very real.

I certainly came away with a couple of new ideas for exploits I can attempt when I get into the office tomorrow.
As I said, highly recommended.


A recording of the webinar will be made available on the Ministry of Testing Dojo soon.

Dan has also previously provided insights into Security Testing for the Testing in the Pub podcast series and those are also recommended.



Wednesday 9 September 2015

Maybe testing isn't for me

It seems like every 3 or 4 months I find myself questioning whether testing is really for me.

I consider myself an enthusiastic tester, and I'm always striving to be better at it. (That's what this blog is mostly about, after all.)  
But I'm not sure that I offer what testing needs. Maybe it's an unrequited attraction.

I've been learning testing for a number of years now across a couple of roles but I've yet to find it as fulfilling and enjoyable as I think it can be. Is that because the roles haven't been quite right for me?

It sometimes seems there's an interesting and rewarding testing world that I might hear about on Twitter, but day-to-day testing can be frustrating or boring.
If you're not already in that other world - not already exposed to the "right" technologies and techniques, or at least supported in learning them - then it seems hard to reach it.

I admit I'm picky about the kind of products/industries I want to work with, and about how much I'm prepared to commute. And nor am I looking to be an SDET. So, of course, all of this limits my options.

But even so, in an unscientific sample of tester job ads on LinkedIn I don't recognise myself:
- either they emphasise test scripts, documentation and following set processes. (And if that's testing then I definitely would prefer to do something else.)
- or they emphasise skills and experience in specific areas (usually tools) that I either haven't used, or don't feel confident I can offer to a good enough level when my knowledge is mostly from self-study.

"It's not you, it's me"
Increasingly, though, I think that wordings in job ads aren't the problem. Rather, the key part of the previous paragraph is the acknowledgment that I "don't feel confident" - arising from uncertainty of my own value as a tester.

When Katrina Clokie tweeted the testing community with the simple question "How do you know you're a good tester?", I had to respond "I don't".



Gaining Confidence
Of course, personality is a factor here - I'm not a particularly confident or extrovert person generally. But that just means I might have to work a bit harder at it than others. That's ok.

It's all very well having a groaning Trello backlog for learning. Maybe I need to put some of that effort into a strategy for understanding my value as a tester, and not base that value mostly on being able to conquer a huge "to learn" list.

So how can I actually find the confidence, or at least the perspective, that my roles up to now and my continuous learning process aren't giving me? Some initial ideas are:

- Wider "experience"?
I've only worked in limited, and perhaps not typical, testing contexts. Can I find more resources like the upcoming New Voice Media webinar which give insight into the realities of being a tester in a spectrum of organisations?

- Find a mentor?
Some short-term mentoring could be a good way to get feedback on what I do, or don't, have to offer.  

- Talk it through?
Simply initiate conversations with other testers, or with hiring managers, to gain a picture of the wider "market" and how I compare to it?

Definitely some things to work on here.

Tuesday 9 June 2015

Resisting the Tyranny of the Learning Backlog


Whilst working through The Selenium Guidebook I caught myself doing something that I know I'm sometimes guilty of.
Trying to power through learning to get it "done" and move on to the next thing on my list.

If a course/book outlines a concept or works through an example and then encourages the student to play around with that in their own time, too often I don't do it. I'd rather continue "making progress".

Why? Because as the title I chose for this blog suggests, I allow myself to feel under the pressure of "too much to learn".

That learning backlog in Trello, and the list of tweeted links that I favourite for further investigation, get longer every day. And learning is slow when I mostly have to fit it into late evenings or weekends.

Learning shouldn't be a numbers game
Because of some sort of underlying insecurity, perhaps reinforced by most job ads, I feel that there is too much I don't know to call myself a good Tester. I worry that the majority of Testers are more skilled, and better informed, than I am.

I tend also to beat myself up if it takes me a long time to "get" a topic or exercise in my learning. A "long time" being longer than I imagine other people would need.

But where's my evidence for either of those thoughts? I need to apply the critical thinking skills that I claim to value!

I'm in danger of playing a numbers game with learning. Of thinking it's about quantity not quality.

And yet I know that I'm more likely to absorb material if I spend additional time working and practicing on it myself beyond the given examples. Sometimes I do that, but too often I neglect it to move on to another subject area that I feel I need to know about.
It's not such a surprise then that I can find learning doesn't "stick".

Specialise or generalise - that old question
I've often mulled in the past whether I should narrow down an area of testing to specialise in. (And risk narrowing my opportunities in the process.)

Generally, I do focus on broadly "web"-related learning because that's where I got into testing and where my interests mostly lie. But that's still a big area - and it's not even what I currently do for a day job.

Whilst technical skills are where I feel most lacking, I wouldn't want to neglect studying what I believe to be the core responsibilities of the tester (even if you wouldn't get that impression from most job ads) - thinking skills.

So it can pretty quickly seem like there is "too much to learn" and that I need to touch all of it to be taken seriously.

Intellectually I know that I can't be good at every tool or technology and at all kinds of testing. But emotionally I worry that I always need to be good at more stuff than I am.

Having an overview of multiple topics is no doubt good - but is it better than being well-informed on a few? (Especially when you consider that knowledge of tools/technologies needs to be constantly kept on top of and 'upgraded'?)

The "generalist" T-shaped Tester
I would regard myself as sharing Rob Lambert's view of the value of the T-shaped Tester. And, having got into testing quite late in my working life, I have other skills to bring to that horizontal bar.

But if Rob sees "testing" as representing the vertical bar of the T, what I get hung up on is how far to generalise or specialise within that "testing" bar.

Am I trying to be a kind of "Unicode character 0166-shaped" Tester (Ŧ)?  (Not that that shape quite captures it either!)  With a broad range of technical knowledge?

At the moment it feels like I have unrealistic expectations of my ability to learn.

Perhaps I need the confidence that not knowing something is ok providing you have the capacity and will to learn it when you need it. And that you always bring a set of core skills whatever the context.

Never stop learning
Learning is a continuous process and learning is a motivator for me. I wouldn't want to be in a situation where there was nothing new to learn.

But it shouldn't be stressful. Working through a learning backlog should be a source of pleasure and not a cloud hanging over me.

I need to make that mental shift, and maybe that requires narrowing my ambitions.

"The Selenium Guidebook" and Thoughts on my Learning Process


"The Selenium Guidebook: How To Use Selenium, successfully" by Dave Haeffner does not specifically set out to teach you how to automate elements on web pages with the WebDriver API. What it does set out to do - as the full title suggests - is show how you can practically implement WebDriver automation to add business value, avoiding common pitfalls in the process.

And in that vein, this post doesn't exactly review the book. It does a bit of that, but it's more about my personal experience of working through it, and reflects on how I might improve my learning process in the "code" domain going forward.

Some basics about the book
I worked on the Ruby version of the book. A version for Java is also now available.

Topics the book covers include:
  • Formulating an automation strategy
  • Understanding what constitutes the right approach to writing your checks
  • Using Page Objects/abstraction to make a more robust suite that your team will trust
  • Running your checks against multiple system and browser configurations on Sauce Labs
  • Parallelisation to keep large suites running in sensible time-frames
  • How to set up suites which run automatically as part of continuous integration
  • Multiple examples of how to approach specific automation challenges

What's great about the book:
  • Haeffner's style is ultra-clear
  • He provides not just a download of code from the book to compare yours with, but a real website to automate and practice on
  • There is also additional supporting material, eg. videos, available to purchase

There were points when I was working through The Selenium Guidebook that I felt frustrated - not with the book, which is very good - but with my own lack of progress.

The frustrations came when something wasn't working for me and I felt I didn't have the knowledge to understand why or fix it.  I tried to think about why I was allowing myself to get frustrated.

Coping with a lack of coding knowledge
First, a slightly boring aside to explain where my knowledge stood when I started the book. 

I had studied WebDriver using Alan Richardson's excellent Java-based course about a year before working through this book. However, in the intervening time I had taken a job that didn't involve web testing and so my knowledge had gone stale. In resurrecting it, I decided to go back to coding with Ruby - which I had been learning the basics of previously - because I felt the less intuitive syntax of Java hadn't helped my learning of WebDriver.

Haeffner advises that an ability to code is not necessarily needed upfront. Whilst that is certainly true, in my experience learning is slower if you don't have a lot of coding knowledge.

I think the biggest problem caused by my lack of coding experience was not always being able to make good guesses about the cause/nature of errors - and error messages - I encountered, and therefore struggling to correct them.

Troubleshooting hints for the examples in the book could be helpful, but are probably impractical given the variety of problems students might face.

It might have been useful to know exactly which versions of particular Ruby gems Dave had tested his examples with. I'm not sure if it was ever really the cause of a problem I hit (there are still a few examples that I haven't managed to get working) but I did wonder at times whether an issue might relate to my having installed the latest version of gem X whereas Dave possibly had been working with an earlier one.

Putting in the hours
Dave Haeffner does very generously offer up his own contact details in the book to answer questions. I deliberately resisted that because I didn't think it was fair on the author; and not helpful to me to have the answers provided too easily.

Mostly I got failing examples working by putting in the hours and using the routes you might expect:
- Google and StackOverflow to look up error messages
- seeing if the provided sample files ran on my setup and, if so, comparing that sample file with what I had written to find the difference.

And in one extreme case using a file comparison tool to try and find what the hell was different between my failing project and the provided one. (The difference turned out to be not in my Ruby code but with gems I was using not being listed in my gemfile.)
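
If I were doing it again I'd be stricter about listing and pinning everything in the Gemfile from the start - something like this (the version numbers here are purely illustrative):

    # Gemfile - list every gem the project actually uses, with pinned versions
    source 'https://rubygems.org'

    gem 'selenium-webdriver', '~> 2.45'
    gem 'rspec',              '~> 3.2'

Then running everything via 'bundle exec' means the pinned versions are the ones actually used, and a comparison with someone else's working project only has to look in one place.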

Of course, this "pain" is actually good for learning and I need to remember that when the frustration bites. When I eventually managed to get the HTTP status codes example (with browsermob proxy) working there was a real sense of achievement because I had had to do research/thinking/work on my own to get there.

By the time I had gone through all the material in the book I felt it had been a really good investment and I had stopped worrying about whether I should have been able to get through it more smoothly. I shall certainly be keeping The Selenium Guidebook on hand and coming back to it.

Finding the right IDE/Editor
Something practical that I think would have helped me, and that I still need to sort out, was either a better IDE or better knowledge of the one I was using.  (I suppose this too falls under the heading of a lack of coding experience.)

After looking for free Ruby IDEs, I went with Aptana Studio. Quite possibly I don't have Aptana set up correctly - I struggled even to find useful documentation - but I found it of limited use beyond picking up syntax errors.

For Alan Richardson's Java course I had used the suggested IDE, Jetbrains' IntelliJ.  And I missed its extensive code completion suggestions here, and its ability to pick up basic typos on top of syntax errors.  Sadly, Jetbrains' "Rubymine" IDE is not free.

I also found that running commands with Aptana Studio's built-in terminal (running Git Bash) wasn't always helpful. Like the time when I could not get my first Sauce Labs checks to run and wasted an hour or more trying to figure out what the syntax/structural error was that Aptana seemed to report. When I ran the check in a simple Windows command prompt instead I straight away saw more useful feedback showing that I simply had my Sauce Labs credentials wrong.

Give myself a chance
But the simplest way I could have improved the process for me was to relax a bit. Not to feel I was working against the clock to get through the material. Not to overly criticise myself when I might spend a whole evening on a few pages but still not get the example working by bedtime.

And to give myself more credit when I eventually did chase down, and correct, my errors.

This is a generic flaw I sometimes show in my learning process - unrealistic expectations of how quickly and how fully I can process something.  It's something I will blog on separately in my next post .....

---------------
"The Selenium Guidebook: How to use Selenium, successfully" by Dave Haeffner is available now as an e-book in both Ruby and Java versions. A range of packages with extra supporting material are also available at the same link. 

Haeffner also produces a free weekly Selenium tips email.





Saturday 25 April 2015

Inattentional Blindness

Recently I presented to my teammates on Inattentional Blindness, having caught myself falling victim to it a couple of times. (Don't know how many times I fell victim and didn't catch myself, of course.)

Like a lot of testers I'm very interested in the cognitive factors which influence testing, and issues such as self-delusion. Very interested - but definitely not an expert. And a good way to consolidate what you know on a topic is to try explaining it to others.

Sharing with the team
For a while I had thought about leading some kind of presentation/discussion in the team. Not because I like presenting - I don't - but because I felt there were ideas which made testing interesting for me but that the team maybe weren't so familiar with.

I felt "IB" would be a good intro to the area of cognition and thinking skills. And I also saw an opportunity to talk about exploratory testing - doing that had been on my mind for a while and was given a good nudge by Karen Johnson's session at the TestBash Workshop Day 2015.

So I gave the guys my take on how Inattentional Blindness influences testing. And - whilst stressing that I wasn't claiming expertise - what techniques I thought we might use to reduce its impact.

The presentation
I tried summarising the content of the presentation in this post but it was going to be way too long - and not particularly original if you're familiar with the subject.  Instead I'll highlight some of the material I used to put my presentation together.
My slides are available on Slideshare for those with a morbid curiosity, and I'll embed them at the foot of this post.

I introduced the concept with this video (which I came across on the edX "Science of Everyday Thinking" course):


Inattentional Blindness is a pretty clunky term - not even easy to pronounce - but an example like that makes the concept clear to everyone. 

On the specifics of how we believe vision works, and how it really works, I used an extract from this Daniel Simons talk - "Seeing the World as it Isn't".  (The section I used is from 3:20 to 5:10 - but the whole video is only c 7mins and well worth watching)


And I have to make special mention of this cracking example of Inattentional Blindness in testing which Del Dewar (http://findingdeefex.com; @deefex) kindly sent me:




For a final piece of audience participation I used the subject of Focus/Defocus as an excuse to introduce the star of TestBash 3 - the spinning cat.

Some of the team had seen it before but it was still fun to find what works for different people to change the direction of spin they perceive.

Cut-price Derren Brown
I tried my own amateur experiment by having "Inattentional Blindness" as a footer on my slides - but on 2 or 3 of them changing it to "Inattentional Biscuits".  I was interested to see whether anyone spotted it and, if no-one did, whether I would have Derren Brown-ed the team into craving biscuits but not knowing why.

As it turned out my colleague Anna spotted the change at the first opportunity (to be fair, as she was sitting at the front she had the best chance) and collected the 50p prize. (Which I funded out of my own pocket, I'll have you know. Who says Scots are mean?)

Following up
It's hard for me to say if the presentation went well or not. I have the kind of personality that means I fixate on what I forgot to mention, or where I felt I should have been clearer.
The team seemed to find it interesting, though, and I've overheard a couple of references back to the themes since.

What was very encouraging was that in the morning before I had even given this presentation my manager had asked me to think about doing something further with the team on my experiences with Exploratory Testing.

Good news. But at the moment I'm feeling slightly overwhelmed working out where to start on that one ..... 

----------------------------------

Inattentional blindness from caltonhill



Wednesday 22 April 2015

Webinar: Practical Tips + Tricks for Selenium Test Automation (Dave Haeffner)

Notes on this webinar: presented by Dave Haeffner, and hosted by Sauce Labs
---------------------

Another Dave Haeffner webinar! This time rather than considering the broad why and how of introducing automation, he talks about what is certainly an area of expertise for him: specific ideas to extend, and/or overcome challenges in, Selenium browser automation.

The amount of good, free learning that Haeffner offers around Selenium is commendable. He also offers his own "the-internet" site/application which you can use to practice on - and for which the code is open-sourced on Github.

The format of the webinar was in line with Haeffner's free weekly Selenium tips email; and with the second half of his Selenium Guidebook.
(At least that's what I think the second half of the book does ... I'm still working through it!)

The learning here is not really about the details of using the WebDriver API to find, and interact with, elements in web pages. Rather it's about ways you can add value to your automation efforts.
My notes aren't extensive but here's what I took from it:

Headless automation
"Headless" automation, ie. performed without launching an actual browser instance, is especially useful for running on a Continuous Integration server providing feedback every time there is a build.
Haeffner mentioned:
-- Xvfb. Which is limited to running on Linux, and which he suggests offers no real speed benefits over launching a browser
-- Ghostdriver (PhantomJS).  Which is multi-platform and does offer faster performance.
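
For what it's worth, here's roughly how the PhantomJS route looked with the Ruby selenium-webdriver gem at the time I was learning it. This assumes the phantomjs binary is installed and on your PATH - treat it as a sketch rather than gospel:

    require 'selenium-webdriver'

    # No browser window is launched - PhantomJS renders the page headlessly
    driver = Selenium::WebDriver.for :phantomjs
    driver.get 'http://the-internet.herokuapp.com'
    puts driver.title          # quick sanity check that the page loaded
    driver.quit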

Visual Testing
I only became aware of the existence of "visual testing" recently and know very little about it. It sounded kind of counter-intuitive because checking for small changes in the UI was always exactly the kind of thing that automation couldn't sensibly do.  (I thought maybe this was just another example of over-promising by excitable automation tool vendors!)

However, there are now open-source tools that will work alongside WebDriver to look for problems like layout or font issues that WD on its own won't really handle. In effect giving "hundreds of assertions for a few lines of code".
This looked like interesting stuff although, without having tried it, it still sounds a little too good to be true and I would need convincing before I trusted it. As you'd expect, Haeffner does talk about the inherent complexity and that false positives are more of a problem than usual.  It seems like some human intervention is still necessary to confirm "failures" that the automation reports.

Proxy Tools
Configuring automation to run through a proxy tool (like Browsermob or Fiddler) opens up a range of extra options:
- Checking returned HTTP status codes at URLs (there's a rough sketch of this below)
- "Blacklisting". An idea which appealed to me from past experience. Here you manipulate the traffic so that third-party elements like ads - which are slow to load and so drag down your checks - can be excluded.
- Load testing by capturing traffic and then converting it into a Jmeter file
- Checking for broken images by examining the status code of all IMG elements.  (Haeffner also talks about other ways to check for broken images without needing a proxy.)
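
As a rough illustration of the status-code idea, based on my memory of the browsermob-proxy gem - so the details may not be exact, and the binary path and URL are placeholders:

    require 'selenium-webdriver'
    require 'browsermob/proxy'

    # Start the proxy server and create a dedicated proxy for this check
    server = BrowserMob::Proxy::Server.new('/path/to/browsermob-proxy')
    server.start
    proxy = server.create_proxy

    # Point Firefox at the proxy
    profile = Selenium::WebDriver::Firefox::Profile.new
    profile.proxy = proxy.selenium_proxy
    driver = Selenium::WebDriver.for :firefox, profile: profile

    # Capture traffic into a HAR and inspect the response status
    proxy.new_har 'status-codes'
    driver.get 'http://the-internet.herokuapp.com/status_codes/404'
    puts proxy.har.entries.first.response.status   # expect 404 here

    driver.quit
    proxy.close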

Forgotten Password Scenarios
Difficult to fully automate something like a forgotten-password workflow when it typically involves the generation and receipt of an email. At least I thought so.
But Haeffner describes how you can use Gmail's API to look for and scan an email rather than attempting the madness of automating your way in and out of the Gmail web front end.
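
Haeffner's approach uses Gmail's API; purely to illustrate the general idea - fetch the email in code instead of automating the Gmail front end - here's a crude sketch using Ruby's standard Net::IMAP against Gmail's IMAP endpoint. The credentials, subject line and link pattern are all placeholders, and IMAP access has to be enabled on the account:

    require 'net/imap'

    imap = Net::IMAP.new('imap.gmail.com', 993, true)   # host, port, SSL
    imap.login('me@example.com', 'app-specific-password')
    imap.select('INBOX')

    # Find the most recent message that looks like the reset email
    ids = imap.search(['SUBJECT', 'Reset your password'])
    abort 'No reset email yet' if ids.empty?

    body = imap.fetch(ids.last, 'BODY[TEXT]').first.attr['BODY[TEXT]']
    reset_link = body[%r{https?://\S+}]    # crude - grab the first link in the body
    puts reset_link

    imap.logout
    imap.disconnect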

A/B testing
Pesky A/B tests running on your site (Marketing! *shakes fist* ) can make automation checks fail because the page may not be as expected when the driver gets there. Haeffner shows ways to opt out of A/B tests when running your automation eg. by creating an opt-out cookie, or by using URLs which bypass the page variants.
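
The cookie approach might look something like this - the cookie name here is made up, and whatever your A/B testing tool actually respects is what matters:

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :firefox
    # You have to be on the domain before you can set a cookie for it
    driver.get 'http://example.com'
    driver.manage.add_cookie(name: 'ab_test_opt_out', value: 'true')

    # Subsequent visits should now get the default variant of each page
    driver.get 'http://example.com/page-under-test'
    driver.quit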

File Management Scenarios
Scenarios involving file upload buttons are tricky because they will generally involve a standard system/OS file management dialog - which WebDriver can't automate. But get WebDriver to submit the full file path and you may be able to bypass system dialogs entirely.
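
So instead of fighting the OS dialog, you just send the path straight to the file input - something like this against the upload page on Dave's practice site (element ids from memory, so double-check them):

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :firefox
    driver.get 'http://the-internet.herokuapp.com/upload'

    # Typing the full path into the file input sidesteps the system dialog
    driver.find_element(id: 'file-upload').send_keys('/full/path/to/some-file.txt')
    driver.find_element(id: 'file-submit').click
    driver.quit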

Additional Explanatory Output
Haeffner showed how to add explicit messaging around the assertions in your checks by highlighting located web elements using Javascript. Having captured the original styling you can add your own styling with Javascript - and revert back to the original when done.
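
A bare-bones version of the highlighting trick, with the locator and styling invented for the example:

    require 'selenium-webdriver'

    driver = Selenium::WebDriver.for :firefox
    driver.get 'http://the-internet.herokuapp.com'

    element = driver.find_element(css: 'h1')
    original_style = element.attribute('style')

    # Shout about the element with a loud border...
    driver.execute_script(
      "arguments[0].setAttribute('style', arguments[1]);",
      element, 'border: 3px solid red;')
    sleep 1   # long enough to see it, or to grab a screenshot

    # ...then put the original styling back
    driver.execute_script(
      "arguments[0].setAttribute('style', arguments[1]);",
      element, original_style || '')
    driver.quit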


There was, of course, more information - and more detail in the webinar itself.
A recording of it is available here: http://sauceio.com/index.php/2015/04/recap-practical-tips-tricks-for-selenium-test-automation/

Sunday 12 April 2015

Webinar: Mastering the Essentials of UI Test Automation


Brief notes on this Webinar: presented by Jim Holmes and Dave Haeffner, and hosted by Telerik
---------------------
This Webinar caught my eye because I'm working my way through co-presenter Dave Haeffner's Ruby-based version of "The Selenium Guidebook".  (A post or two to follow on that, I'm sure.)

This short webinar turned out to be a general "primer" for teams/organisations looking to adopt automation and didn't specifically discuss tools or techniques.  But it was no less interesting for that.
I understand that it accompanies a series of blog posts on the topic - see the link right at the end of this post - but I haven't read those as yet.

It was nice that, despite being organised by tool vendor Telerik, the webinar did not push their automation tool.  And Holmes and Haeffner are definitely not naive enough to believe the "automate everything" myths that some tool vendors seem to push. (Though they didn't specifically refer to it, I suspect Holmes and Haeffner get the "Testing vs Checking" thing too.)

The following are my notes from the webinar and don't necessarily reflect direct quotes from Holmes or Haeffner.


Why automate?
  • Have a reason for your team to adopt automation, not just because everyone else is supposedly doing it. Question if it really adds value in your context.
  • Use automation to solve business problems, not to cut the size of your testing team.
  • Have the right expectations of automation. Organisations who expect to automate everything or to eliminate manual testing are misguided.
  • Automate the key, happy path stuff.


Implications of automation for testers in the team

  • Automation should enable people to be re-purposed and time to be re-allocated rather than removing the former and reducing the latter. It should be a way to free up testing resource to do things computers can't do. Enables a move from repetitive scripted or regression checking to Exploratory Testing, questioning the product.
  • For testers, automation needn't be intimidating. Good testers with the right mindset can utilise automation without the need to try and become developers instead. They can still have focus on the thinking skills intrinsic to rewarding testing.
  • Encourage them that automation code is not as intimidating as application code. Only a small subset of programming knowledge is necessary to get to grips with automating. In his Selenium book Haeffner talks about just 8 key concepts that he thinks are necessary for test automation. Indeed, the most successful implementations have developers helping the testers.


Implementation

  • Holmes and Haeffner suggest running a time-boxed pilot of any automation implementation. This gives a chance to adapt processes and set internal expectations along the way.
  • There is a danger of building badly-implemented automation that the team doesn't trust.  Avoid this with good abstraction and robust tests. Use tools and techniques (eg. continuous integration) which make automation visible and accessible to the whole team. It shouldn't be running in isolation on someone's machine. Aim for constant feedback. Learn from failure and advertise success.
  • In Dave Haeffner's view, good automated checks in a suite (a minimal sketch follows this list):
    • are concise and specific. Checking one, clearly understood thing.
    • all run independently of each other
    • are descriptive
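
To make those three points concrete, here's a minimal sketch of what such a check might look like in Ruby with RSpec, against Dave Haeffner's practice site. The locators and credentials are the well-known ones from that site, but treat them as illustrative:

    require 'rspec'
    require 'selenium-webdriver'

    describe 'Login' do
      # A fresh browser per check keeps each one independent of the others
      before(:each) { @driver = Selenium::WebDriver.for :firefox }
      after(:each)  { @driver.quit }

      it 'lets a valid user in' do                     # descriptive name
        @driver.get 'http://the-internet.herokuapp.com/login'
        @driver.find_element(id: 'username').send_keys 'tomsmith'
        @driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
        @driver.find_element(css: 'button[type="submit"]').click

        # One concise, clearly understood assertion
        expect(@driver.find_element(css: '.flash.success').displayed?).to be true
      end
    end

Run it with 'rspec login_spec.rb' (or the equivalent in your project), and each check stands or falls on its own.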


A recording of the full webinar, and some Q&A that they didn't have time for, can be found here.



Sunday 29 March 2015

TestBash Workshop Day 2015

After attending the TestBash 3 conference in 2014 I found that I had enjoyed it as a source of inspiration - at the time I was wondering whether there was really a future for me in testing - but probably would have preferred something that offered more direct learning and interaction.

The conference environment seems great if you have lots of contacts within the industry to catch up with, or are just confident about networking with new people. Neither applied to me - which meant I felt a little bit on the outside of what was going on at times.

For me the highlight of TestBash 2014 came right at the start with the Lean Coffee session. At which I was privileged to find myself in a group with Rob Lambert, James Christie and Iain McCowatt amongst others. Great, at a time when I was questioning my place in testing, to be able to discuss topics that mattered to me with "thought-leaders" in the community.



I was really interested, therefore, to see the addition of a TestBash Workshop Day to the schedule in Brighton this year.  And, thanks to the wonderfully generous half-price offer to self-funders made by Rosie Sherry, I was able to attend - getting to 4 sessions on the day.

"Mapping Exploratory Testing Tours & Personas" by Karen Johnson

Karen Johnson came to this session through her experience as a Test Lead/Test Manager encountering testers who, when tasked to perform exploratory testing, don't seem to know where to start. Or don't know how to structure their ET and give meaningful feedback on what they've found.

Although I don't manage other testers, this resonated with me as I'm currently part of a team which, for my tastes, is overly test-case driven. And having gone in there as an enthusiastic exploratory tester I feel I'm reluctantly losing some of that sensibility.  (A future blog post, perhaps.)

Karen introduced different approaches - tours, personas and heuristics among them - which can be used to help structure exploratory testing and "unstick" testers used to scripted scenarios.
In groups we went on to frame an exploratory testing scenario and to discuss how helpful we found each of these for generating ideas, as well as any personal experiences we had in using them.
I was a little surprised how few people seemed to be familiar with, for example, Whittaker's "city tours" analogies or Elisabeth Hendrickson's heuristics cheat sheet. (Lest that sound arrogant I have no doubt there are numerous testing insights they have that I don't!)

The session was a timely reminder that I could be doing a lot more to inform and improve my ET skills.  And left me wondering how I can do more to encourage familiarity with these approaches, and exploratory testing in general, within my team.


"How to Break Your App: Best Practices in Mobile App Testing" by Daniel Knott

I've already praised Daniel Knott's book on this blog and was happy to revisit some of that material in a workshop situation.

Mobile testing absolutely has some unique considerations and challenges, and in groups we set about mind-mapping those based on our own thoughts and experiences.

Daniel then dug deeper and outlined some specific areas in which we might challenge a mobile app and "break" it. My scribbled notes (sorry but I'm not one of the cool sketch-note kids) included:

movement - sensors - storage - track battery usage - standby mode - notifications and interruptions - usability - updates - log files - communication (network packets etc) - permissions

Then the fun part, as in groups we set about attacking a mobile app of our choice using both mobile-specific and "general" test ideas of our choice.

I don't think any group had trouble in finding areas of concern. Perhaps reinforcing one of the striking statistics Daniel had shared with us - that only 45% of mobile apps are well-tested.


Supercharging your Bug Reports by Neil Studd

Neil Studd presented on a subject of almost universal relevance to testers - bug reporting.
And that strong familiarity meant a range of opinions and experience.

During an early group discussion on what a bug report should contain, or omit, I was struck by how some testers assumed that the way they recorded bugs reflected some kind of fundamental truth rather than their own context. Strong views that a bug report must always include "X" - because that's how they do it in their organisation.

Neil explained how a bug report is like a sales pitch not just for the bug itself but for us as testers. Our credibility in a team will be strongly influenced by the quality of our bug reporting.

The session included a lot of good material with examples of good and bad bug reporting; and thoughts on how to overcome objections which might mean your bug doesn't get taken as seriously as you think it should be.
We even got to evaluate an (as-yet) unreleased heuristic from Michael Bolton on what makes a good bug report.

There was a fascinating look at how bizarre, seemingly random and unreproducible, real-world bugs actually stemmed from an underlying pattern that needed some lateral thinking to get to.

With some background in digital marketing I was particularly interested in Neil's points on how ideas from Search Engine Optimisation can inform good bug logging. Something I hadn't consciously recognised before but is spot on. (To some extent this is dependent on which tool you use to log your bugs.)

This was definitely a session that could generate interest and ideas in my team. Especially as there's recently been some discussion about how we report bugs, and how closely that coincides with how our Engineering Director wants them reported!


Gamification of Software Testing by Nicola Sedgwick

The final - packed! - session of the day was largely relevant to the emergence of the crowd-sourced testing scene led by companies like uTest/Applause and Bugfinders.

I must admit that I was starting to tire by this point and, in the break beforehand, wondered if I would be able to maintain concentration for another two hours.
I needn't have worried. This was a session with a lot of in-group discussion time and I was lucky enough to find myself in a group that included Karen Johnson and Ron Werner. They brought the intellect - I was able to bring some real experience of working as a "crowd" tester.

A crowd tester typically has no personal investment in your product or your organisation. They find themselves in a frantic battle with other testers all over the world to find and report bugs in your application before anyone else does - because that's the only way they will get paid for their efforts.  And it's very easy to invest a lot of testing time for no reward.

It's in a crowd tester's interest, therefore, to ensure the best return on their efforts rather than ensure the quality of your product. For example, depending on how the testing mission has been framed and the reward payments are structured, this might mean reporting multiple minor bugs rather than one important one.

In our groups we identified how a testing mission might leave itself open to gaming and how to try and mitigate that.
More generally we looked at how to incentivise testing from outwith the team - be that by colleagues from other departments, users, or even friends/family.
Nicola has written about the topic here.


Overall it was a very full and useful day. One on which I met a number of friendly and interesting people - I just wish I'd been more organised about getting and exchanging contact details to stay in touch!

Of course, there were other interesting sessions that I couldn't attend.
I was particularly sorry not to have been able to get to John Stevenson's workshop on creative and critical thinking - which sounds like it would have been right up my street.  I'm planning to make amends by reading the material on his blog, and have just bought his book.



I would definitely consider attending a TestBash Workshop Day again next year.
  
And maybe try to arrange it so I don't have to head straight off afterwards with work the next morning, while everyone else is enjoying themselves at the TestBash meetup!