Saturday, 25 April 2015

Inattentional Blindness

Recently I presented to my teammates on Inattentional Blindness, having caught myself falling victim to it a couple of times. (Don't know how many times I fell victim and didn't catch myself, of course.)

Like a lot of testers I'm very interested in the cognitive factors which influence testing, and issues such as self-delusion. Very interested - but definitely not an expert. And a good way to consolidate what you know on a topic is to try explaining it to others.

Sharing with the team
For a while I had thought about leading some kind of presentation/discussion in the team. Not because I like presenting - I don't - but because I felt there were ideas which made testing interesting for me but that the team maybe weren't so familiar with.

I felt "IB" would be a good intro to the area of cognition and thinking skills. And I also saw an opportunity to talk about exploratory testing - doing that had been on my mind for a while and was given a good nudge by Karen Johnson's session at the TestBash Workshop Day 2015.

So I gave the guys my take on how Inattentional Blindness influences testing. And - whilst stressing that I wasn't claiming expertise - what techniques I thought we might use to reduce its impact.

The presentation
I tried summarising the content of the presentation in this post but it was going to be way too long - and not particularly original if you're familiar with the subject.  Instead I'll highlight some of the material I used to put my presentation together.
My slides are available on Slideshare for those with a morbid curiosity, and I'll embed them at the foot of this post.

I introduced the concept with this video (which I came across on the edX "Science of Everyday Thinking" course):


Inattentional Blindness is a pretty clunky term - not even easy to pronounce - but an example like that makes the concept clear to everyone. 

On the specifics of how we believe vision works, and how it really works, I used an extract from this Daniel Simons talk - "Seeing the World as it Isn't". (The section I used is from 3:20 to 5:10 - but the whole video is only about 7 minutes long and well worth watching.)


And I have to make special mention of this cracking example of Inattentional Blindness in testing which Del Dewar (http://findingdeefex.com; @deefex) kindly sent me:


For a final piece of audience participation I used the subject of Focus/Defocus as an excuse to introduce the star of TestBash 3 - the spinning cat.

Some of the team had seen it before but it was still fun to find what works for different people to change the direction of spin they perceive.

Cut-price Derren Brown
I tried my own amateur experiment by having "Inattentional Blindness" as a footer on my slides - but on 2 or 3 of them changing it to "Inattentional Biscuits".  I was interested to see whether anyone spotted it and, if no-one did, whether I would have Derren Brown-ed the team into craving biscuits but not knowing why.

As it turned out my colleague Anna spotted the change at the first opportunity (to be fair, as she was sitting at the front she had the best chance) and collected the 50p prize. (Which I funded out of my own pocket, I'll have you know. Who says Scots are mean?)

Following up
It's hard for me to say if the presentation went well or not. I have the kind of personality that means I fixate on what I forgot to mention, or where I felt I should have been clearer.
The team seemed to find it interesting, though, and I've overheard a couple of references back to the themes since.

What was very encouraging was that in the morning before I had even given this presentation my manager had asked me to think about doing something further with the team on my experiences with Exploratory Testing.

Good news. But at the moment I'm feeling slightly overwhelmed working out where to start on that one ...

----------------------------------

[Embedded slides: "Inattentional blindness" from caltonhill on SlideShare]



Wednesday, 22 April 2015

Webinar: Practical Tips + Tricks for Selenium Test Automation (Dave Haeffner)

Notes on this webinar: presented by Dave Haeffner, and hosted by Sauce Labs
---------------------

Another Dave Haeffner webinar! This time, rather than considering the broad why and how of introducing automation, he talks about what is certainly an area of expertise for him: specific ideas for extending Selenium browser automation, and for overcoming its challenges.

The amount of good, free learning that Haeffner offers around Selenium is commendable. He also offers his own "the-internet" site/application which you can use to practise on - and for which the code is open-sourced on GitHub.

The format of the webinar was in line with Haeffner's free weekly Selenium tips email; and with the second half of his Selenium Guidebook.
(At least that's what I think the second half of the book does ... I'm still working through it!)

The learning here is not really about the details of using the WebDriver API to find, and interact with, elements in web pages. Rather it's about ways you can add value to your automation efforts.
My notes aren't extensive but here's what I took from it:

Headless automation
"Headless" automation, i.e. automation performed without launching an actual browser instance, is especially useful for running on a Continuous Integration server, providing feedback every time there is a build.
Haeffner mentioned two options (there's a quick sketch of using one after the list):
- Xvfb, which is limited to running on Linux, and which he suggests offers no real speed benefit over launching a browser
- GhostDriver (PhantomJS), which is multi-platform and does offer faster performance
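
As I understand it, switching between a real browser locally and PhantomJS on the CI server can be as simple as changing the argument you hand to the driver. A minimal sketch in Ruby (the language of the Guidebook edition I'm reading) - it assumes PhantomJS is installed and on the PATH, and the CI environment variable is just an example switch:

```ruby
require 'selenium-webdriver'

# Use PhantomJS (via GhostDriver) on the CI server, a real browser locally.
# The CI environment variable is just an example switch.
browser = ENV['CI'] ? :phantomjs : :firefox
driver  = Selenium::WebDriver.for browser

driver.get 'http://the-internet.herokuapp.com'
puts driver.title # a trivial check, just to show the flow

driver.quit
```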

Visual Testing
I only became aware of the existence of "visual testing" recently and know very little about it. It sounded kind of counter-intuitive because checking for small changes in the UI was always exactly the kind of thing that automation couldn't sensibly do.  (I thought maybe this was just another example of over-promising by excitable automation tool vendors!)

However, there are now open-source tools that will work alongside WebDriver to look for problems like layout or font issues that WD on its own won't really handle. In effect giving "hundreds of assertions for a few lines of code".
This looked like interesting stuff although, without having tried it, it still sounds a little too good to be true and I would need convincing before I trusted it. As you'd expect, Haeffner does talk about the inherent complexity and that false positives are more of a problem than usual.  It seems like some human intervention is still necessary to confirm "failures" that the automation reports.
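
To make the idea concrete - and this is emphatically not one of the tools Haeffner mentioned, just my own crude sketch of the underlying concept - you could compare a screenshot of the page against a stored baseline image. The file names here are made up:

```ruby
require 'selenium-webdriver'
require 'digest'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://the-internet.herokuapp.com'

# Capture the current rendering and compare it to a stored baseline image.
driver.save_screenshot('current.png')
same = Digest::MD5.file('current.png').hexdigest ==
       Digest::MD5.file('baseline.png').hexdigest

# A real visual testing tool does perceptual comparison with tolerances;
# an exact match like this would flag every tiny rendering difference.
puts same ? 'matches baseline' : 'differs from baseline - needs a human look'

driver.quit
```

The dedicated tools are far smarter than this - which is exactly why the false-positive problem needs managing.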

Proxy Tools
Configuring automation to run through a proxy tool (like BrowserMob or Fiddler) opens up a range of extra options (there's a wiring sketch after the list):
- Checking the HTTP status codes returned by URLs
- "Blacklisting" - an idea which appealed to me from past experience. Here you manipulate the traffic so that third-party elements like ads, which are slow to load and so affect your checks, can be excluded.
- Load testing, by capturing traffic and then converting it into a JMeter file
- Checking for broken images by examining the status code of all IMG elements. (Haeffner also talks about other ways to check for broken images without needing a proxy.)
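
To use any of these, the browser first has to be pointed at the proxy. A minimal sketch of the wiring - the host and port are placeholders for wherever your BrowserMob or Fiddler instance is listening; the blacklisting, traffic capture and status-code checking then happen on the proxy side:

```ruby
require 'selenium-webdriver'

# Route all browser traffic through a local intercepting proxy.
# 'localhost:8080' is a placeholder for your proxy's actual address.
proxy = Selenium::WebDriver::Proxy.new(http: 'localhost:8080',
                                       ssl:  'localhost:8080')

profile = Selenium::WebDriver::Firefox::Profile.new
profile.proxy = proxy

driver = Selenium::WebDriver.for :firefox, profile: profile
driver.get 'http://the-internet.herokuapp.com'
driver.quit
```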

Forgotten Password Scenarios
It's difficult to fully automate something like a forgotten-password workflow when it typically involves the generation and receipt of an email. At least I thought so.
But Haeffner describes how you can use Gmail's API to look for and scan an email rather than attempting the madness of automating your way in and out of the Gmail web front end.
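
Haeffner's tip uses the Gmail API proper; as a simpler stand-in to show the shape of the idea, here's a sketch using plain IMAP from Ruby's standard library. The credentials and subject line are placeholders:

```ruby
require 'net/imap'

# Log in to the test account over IMAP. Credentials are placeholders -
# don't hard-code real ones.
imap = Net::IMAP.new('imap.gmail.com', 993, true)
imap.login('test.account@example.com', 'app-password')
imap.select('INBOX')

# Find the most recent password-reset email and pull out its body,
# which the check can then scan for the reset link.
ids = imap.search(['SUBJECT', 'Password reset'])
unless ids.empty?
  body = imap.fetch(ids.last, 'BODY[TEXT]').first.attr['BODY[TEXT]']
  puts body
end

imap.logout
imap.disconnect
```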

A/B testing
Pesky A/B tests running on your site (Marketing! *shakes fist*) can make automation checks fail because the page may not be as expected when the driver gets there. Haeffner shows ways to opt out of A/B tests when running your automation, e.g. by creating an opt-out cookie, or by using URLs which bypass the page variants.
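
The opt-out cookie version might look something like this - the cookie name and value are hypothetical, since the real ones depend on which A/B testing tool Marketing are using:

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox

# A cookie can only be set for the domain you're on, so visit it first.
driver.get 'http://the-internet.herokuapp.com'

# Hypothetical opt-out cookie - the real name/value depend on your A/B tool.
driver.manage.add_cookie(name: 'ab_test_opt_out', value: 'true')

# Subsequent page loads should now serve the default variant.
driver.get 'http://the-internet.herokuapp.com'
driver.quit
```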

File Management Scenarios
Scenarios involving file upload buttons are tricky because they generally involve a standard system/OS file dialog - which WebDriver can't automate. But if you get WebDriver to send the full file path directly to the file input element, you can bypass the system dialog entirely.
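
For example, the upload page on Haeffner's "the-internet" practice site can be driven like this - note that nothing ever clicks the file dialog open:

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://the-internet.herokuapp.com/upload'

# Send the full file path straight to the file input element, so the
# OS file dialog never opens. The path is a placeholder.
driver.find_element(id: 'file-upload').send_keys('/full/path/to/some_file.txt')
driver.find_element(id: 'file-submit').click

driver.quit
```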

Additional Explanatory Output
Haeffner showed how to add explicit messaging around the assertions in your checks by highlighting located web elements with JavaScript. Having captured the original styling, you can apply your own styling - and revert to the original when done.
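
A sketch of the idea - the locator is just an example, and the styling is deliberately garish:

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
driver.get 'http://the-internet.herokuapp.com'

element = driver.find_element(css: 'h1')

# Capture the element's original style attribute before touching it.
original = driver.execute_script(
  "return arguments[0].getAttribute('style');", element)

# Apply an eye-catching style while the assertion runs ...
driver.execute_script(
  "arguments[0].setAttribute('style', arguments[1]);",
  element, 'border: 2px solid red; background: yellow;')
sleep 1 # just long enough for a human watching to see it

# ... then revert to the original styling when done.
driver.execute_script(
  "arguments[0].setAttribute('style', arguments[1]);",
  element, original || '')

driver.quit
```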


There was, of course, more information - and more detail in the webinar itself.
A recording of it is available here: http://sauceio.com/index.php/2015/04/recap-practical-tips-tricks-for-selenium-test-automation/

Sunday, 12 April 2015

Webinar: Mastering the Essentials of UI Test Automation


Brief notes on this Webinar: presented by Jim Holmes and Dave Haeffner, and hosted by Telerik
---------------------
This Webinar caught my eye because I'm working my way through co-presenter Dave Haeffner's Ruby-based version of "The Selenium Guidebook".  (A post or two to follow on that, I'm sure.)

This short webinar turned out to be a general "primer" for teams/organisations looking to adopt automation and didn't specifically discuss tools or techniques.  But it was no less interesting for that.
I understand that it accompanies a series of blog posts on the topic - see the link right at the end of this post - but I haven't read those as yet.

It was nice that, despite being organised by tool vendor Telerik, the webinar did not push their automation tool.  And Holmes and Haeffner are definitely not naive enough to believe the "automate everything" myths that some tool vendors seem to push. (Though they didn't specifically refer to it, I suspect Holmes and Haeffner get the "Testing vs Checking" thing too.)

The following are my notes from the webinar and don't necessarily reflect direct quotes from Holmes or Haeffner.


Why automate?
  • Have a reason for your team to adopt automation, not just because everyone else is supposedly doing it. Question whether it really adds value in your context.
  • Use automation to solve business problems, not to cut the size of your testing team.
  • Have the right expectations of automation. Organisations that expect to automate everything or to eliminate manual testing are misguided.
  • Automate the key, happy-path stuff.


Implications of automation for testers in the team

  • Automation should enable people to be re-purposed and time to be re-allocated, rather than people being removed and time being cut. It should be a way to free up testers to do the things computers can't do, enabling a move from repetitive scripted or regression checking to Exploratory Testing - questioning the product.
  • For testers, automation needn't be intimidating. Good testers with the right mindset can utilise automation without needing to become developers instead. They can still focus on the thinking skills intrinsic to rewarding testing.
  • Reassure them that automation code is not as intimidating as application code. Only a small subset of programming knowledge is necessary to get to grips with automating. In his Selenium book Haeffner talks about just 8 key concepts that he thinks are necessary for test automation. Indeed, the most successful implementations have developers helping the testers.


Implementation

  • Holmes and Haeffner suggest running a time-boxed pilot of any automation implementation. This gives you the chance to adapt processes and set internal expectations.
  • There is a danger of building badly-implemented automation that the team doesn't trust. Avoid this with good abstraction and robust tests. Use tools and techniques (e.g. continuous integration) which make automation visible and accessible to the whole team - it shouldn't be running in isolation on someone's machine. Aim for constant feedback. Learn from failure and advertise success.
  • In Dave Haeffner's view, good automated checks in a suite (there's a sketch of one below):
    • are concise and specific, checking one clearly understood thing
    • all run independently of each other
    • are descriptive
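
For what it's worth, here's my own sketch of a check with those three properties, written against the login page of Haeffner's "the-internet" site using RSpec (the framework I believe his Ruby guidebook uses). The locators and credentials match that practice site:

```ruby
require 'rspec'
require 'selenium-webdriver'

describe 'Login' do
  # Each example gets its own browser, so checks run independently.
  before(:each) { @driver = Selenium::WebDriver.for :firefox }
  after(:each)  { @driver.quit }

  # A descriptive name, checking one clearly understood thing.
  it 'succeeds with valid credentials' do
    @driver.get 'http://the-internet.herokuapp.com/login'
    @driver.find_element(id: 'username').send_keys 'tomsmith'
    @driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
    @driver.find_element(css: 'button[type="submit"]').click

    expect(@driver.find_element(css: '.flash.success').displayed?).to be true
  end
end
```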


A recording of the full webinar, and some Q&A that they didn't have time for, can be found here.