Sunday 29 March 2015

TestBash Workshop Day 2015

After attending the TestBash 3 conference in 2014 I found that I had enjoyed it as a source of inspiration - at the time I was wondering whether there was really a future for me in testing - but I would probably have preferred something that offered more direct learning and interaction.

The conference environment seems great if you have lots of contacts within the industry to catch up with, or are just confident about networking with new people. Neither applied to me - which meant I felt a little bit on the outside of what was going on at times.

For me the highlight of TestBash 2014 came right at the start with the Lean Coffee session, at which I was privileged to find myself in a group with Rob Lambert, James Christie and Iain McCowatt amongst others. It was great, at a time when I was questioning my place in testing, to be able to discuss topics that mattered to me with "thought-leaders" in the community.



I was really interested, therefore, to see the addition of a TestBash Workshop Day to the schedule in Brighton this year.  And, thanks to the wonderfully generous half-price offer to self-funders made by Rosie Sherry, I was able to attend - getting to 4 sessions on the day.

"Mapping Exploratory Testing Tours & Personas" by Karen Johnson

Karen Johnson came to this session through her experience as a Test Lead/Test Manager, encountering testers who, when tasked to perform exploratory testing, don't seem to know where to start - or don't know how to structure their ET and give meaningful feedback on what they've found.

Although I don't manage other testers, this resonated with me as I'm currently part of a team which, for my tastes, is overly test-case driven. And having gone in there as an enthusiastic exploratory tester I feel I'm reluctantly losing some of that sensibility.  (A future blog post, perhaps.)

Karen introduced different approaches which can be used to help structure exploratory testing and "unstick" testers used to scripted scenarios.

In groups we went on to frame an exploratory testing scenario and to discuss how helpful we found each of these approaches for generating ideas, as well as any personal experiences we had in using them.
I was a little surprised how few people seemed to be familiar with, for example, Whittaker's "city tours" analogies or Elisabeth Hendrickson's heuristics cheat sheet. (Lest that sound arrogant I have no doubt there are numerous testing insights they have that I don't!)

The session was a timely reminder that I could be doing a lot more to inform and improve my ET skills.  And left me wondering how I can do more to encourage familiarity with these approaches, and exploratory testing in general, within my team.


"How to Break Your App: Best Practices in Mobile App Testing" by Daniel Knott

I've already praised Daniel Knott's book on this blog and was happy to revisit some of that material in a workshop situation.

Mobile testing absolutely has some unique considerations and challenges, and in groups we set about mind-mapping those based on our own thoughts and experiences.

Daniel then dug deeper and outlined some specific areas in which we might challenge a mobile app and "break" it. My scribbled notes (sorry but I'm not one of the cool sketch-note kids) included:

- movement
- sensors
- storage
- track battery usage
- standby mode
- notifications and interruptions
- usability
- updates
- log files
- communication (network packets etc.)
- permissions

Then the fun part, as in groups we set about attacking a mobile app of our choice using both mobile-specific and "general" test ideas of our choice.

I don't think any group had trouble in finding areas of concern - perhaps reinforcing one of the striking statistics Daniel had shared with us: that only 45% of mobile apps are well-tested.


"Supercharging your Bug Reports" by Neil Studd

Neil Studd presented on a subject of almost universal relevance to testers - bug reporting.
That strong familiarity meant a wide range of opinions and experiences in the room.

During an early group discussion on what a bug report should contain, or omit, I was struck by how some testers assumed that the way they recorded bugs reflected some kind of fundamental truth rather than their own context - strong views that a bug report must always include "X", because that's how they do it in their organisation.

Neil explained how a bug report is like a sales pitch not just for the bug itself but for us as testers. Our credibility in a team will be strongly influenced by the quality of our bug reporting.

The session included a lot of good material with examples of good and bad bug reporting; and thoughts on how to overcome objections which might mean your bug doesn't get taken as seriously as you think it should be.
We even got to evaluate an (as-yet) unreleased heuristic from Michael Bolton on what makes a good bug report.

There was a fascinating look at how bizarre, seemingly random and unreproducible real-world bugs actually stemmed from an underlying pattern that needed some lateral thinking to uncover.

With some background in digital marketing I was particularly interested in Neil's points on how ideas from Search Engine Optimisation can inform good bug logging. Something I hadn't consciously recognised before but is spot on. (To some extent this is dependent on which tool you use to log your bugs.)

This was definitely a session that could generate interest and ideas in my team. Especially as there's recently been some discussion about how we report bugs, and how closely that coincides with how our Engineering Director wants them reported!


"Gamification of Software Testing" by Nicola Sedgwick

The final - packed! - session of the day was largely concerned with the emerging crowd-sourced testing scene led by companies like uTest/Applause and Bugfinders.

I must admit that I was starting to tire by this point and, in the break beforehand, wondered if I would be able to maintain concentration for another two hours.
I needn't have worried. This was a session with a lot of in-group discussion time and I was lucky enough to find myself in a group that included Karen Johnson and Ron Werner. They brought the intellect - I was able to bring some real experience of working as a "crowd" tester.

A crowd tester typically has no personal investment in your product or your organisation. They find themselves in a frantic battle with other testers all over the world to find and report bugs in your application before anyone else does - because that's the only way they will get paid for their efforts.  And it's very easy to invest a lot of testing time for no reward.

It's in a crowd tester's interest, therefore, to ensure the best return on their efforts rather than ensure the quality of your product. For example, depending on how the testing mission has been framed and the reward payments are structured, this might mean reporting multiple minor bugs rather than one important one.

In our groups we identified how a testing mission might leave itself open to gaming and how to try and mitigate that.
More generally we looked at how to incentivise testing from outwith the team - be that by colleagues from other departments, users, or even friends/family.
Nicola has written about the topic here.


Overall it was a very full and useful day - one on which I met a number of friendly and interesting people. I just wish I'd been more organised about getting and exchanging contact details to stay in touch!

Of course, there were other interesting sessions that I couldn't attend.
I was particularly sorry not to have been able to get to John Stevenson's workshop on creative and critical thinking - which sounds like it would have been right up my street. I'm planning to make amends by reading the material on his blog, and have just bought his book.



I would definitely consider attending a TestBash Workshop Day again next year.
  
And maybe try to arrange it so I don't have to head straight off afterwards with work the next morning, while everyone else is enjoying themselves at the TestBash meetup!
