
Sunday, 29 March 2015

TestBash Workshop Day 2015

After attending the TestBash 3 conference in 2014 I found that I had enjoyed it as a source of inspiration - at the time I was wondering whether there was really a future for me in testing - but I would probably have preferred something that offered more direct learning and interaction.

The conference environment seems great if you have lots of contacts within the industry to catch up with, or are just confident about networking with new people. Neither applied to me - which meant I felt a little bit on the outside of what was going on at times.

For me the highlight of TestBash 2014 came right at the start with the Lean Coffee session, at which I was privileged to find myself in a group with Rob Lambert, James Christie and Iain McCowatt, amongst others. It was great, at a time when I was questioning my place in testing, to be able to discuss topics that mattered to me with "thought-leaders" in the community.



I was really interested, therefore, to see the addition of a TestBash Workshop Day to the schedule in Brighton this year.  And, thanks to the wonderfully generous half-price offer to self-funders made by Rosie Sherry, I was able to attend - getting to 4 sessions on the day.

"Mapping Exploratory Testing Tours & Personas" by Karen Johnson

Karen Johnson came to this session through her experience as a Test Lead/Test Manager encountering testers who, when tasked to perform exploratory testing, don't seem to know where to start, or don't know how to structure their exploratory testing and give meaningful feedback on what they've found.

Although I don't manage other testers, this resonated with me as I'm currently part of a team which, for my tastes, is overly test-case driven. And having gone in there as an enthusiastic exploratory tester I feel I'm reluctantly losing some of that sensibility.  (A future blog post, perhaps.)

Karen introduced different approaches which can be used to help structure exploratory testing and "unstick" testers used to scripted scenarios - touring metaphors, personas and heuristics among them.

In groups we went on to frame an exploratory testing scenario and to discuss how helpful we found each of these for generating ideas, as well as any personal experiences we had of using them.

I was a little surprised how few people seemed to be familiar with, for example, Whittaker's "city tours" analogies or Elisabeth Hendrickson's heuristics cheat sheet. (Lest that sound arrogant, I have no doubt they have numerous testing insights that I don't!)

The session was a timely reminder that I could be doing a lot more to inform and improve my own exploratory testing skills, and it left me wondering how I can do more to encourage familiarity with these approaches, and with exploratory testing in general, within my team.


"How to Break Your App: Best Practices in Mobile App Testing" by Daniel Knott

I've already praised Daniel Knott's book on this blog and was happy to revisit some of that material in a workshop situation.

Mobile testing absolutely has some unique considerations and challenges, and in groups we set about mind-mapping those based on our own thoughts and experiences.

Daniel then dug deeper and outlined some specific areas in which we might challenge a mobile app and "break" it. My scribbled notes (sorry but I'm not one of the cool sketch-note kids) included:

- movement
- sensors
- storage
- tracking battery usage
- standby mode
- notifications and interruptions
- usability
- updates
- log files
- communication (network packets etc.)
- permissions

Then the fun part: in groups we set about attacking a mobile app of our choice using both mobile-specific and "general" test ideas.

I don't think any group had trouble finding areas of concern - perhaps reinforcing one of the striking statistics Daniel had shared with us: that only 45% of mobile apps are well-tested.


"Supercharging your Bug Reports" by Neil Studd

Neil Studd presented on a subject of almost universal relevance to testers - bug reporting. That familiarity meant the room held a wide range of opinions and experience.

During an early group discussion on what a bug report should contain, or omit, I was struck by how some testers assumed that the way they recorded bugs reflected some kind of fundamental truth rather than their own context: strong views that a bug report must always include "X", simply because that's how it's done in their organisation.

Neil explained how a bug report is like a sales pitch not just for the bug itself but for us as testers. Our credibility in a team will be strongly influenced by the quality of our bug reporting.

The session included a lot of good material, with examples of good and bad bug reporting and thoughts on how to overcome objections that might mean your bug doesn't get taken as seriously as you think it should be.
We even got to evaluate an (as-yet) unreleased heuristic from Michael Bolton on what makes a good bug report.

There was a fascinating look at how bizarre, seemingly random and unreproducible real-world bugs actually stemmed from an underlying pattern that took some lateral thinking to uncover.

With some background in digital marketing I was particularly interested in Neil's points on how ideas from Search Engine Optimisation can inform good bug logging - something I hadn't consciously recognised before, but which is spot on. (To some extent this depends on which tool you use to log your bugs.)

This was definitely a session that could generate interest and ideas in my team. Especially as there's recently been some discussion about how we report bugs, and how closely that coincides with how our Engineering Director wants them reported!


"Gamification of Software Testing" by Nicola Sedgwick

The final - packed! - session of the day was largely concerned with the emerging crowd-sourced testing scene led by companies like uTest/Applause and Bugfinders.

I must admit that I was starting to tire by this point and, in the break beforehand, wondered if I would be able to maintain concentration for another two hours.
I needn't have worried. This was a session with a lot of in-group discussion time and I was lucky enough to find myself in a group that included Karen Johnson and Ron Werner. They brought the intellect - I was able to bring some real experience of working as a "crowd" tester.

A crowd tester typically has no personal investment in your product or your organisation. They find themselves in a frantic battle with other testers all over the world to find and report bugs in your application before anyone else does - because that's the only way they will get paid for their efforts.  And it's very easy to invest a lot of testing time for no reward.

It's in a crowd tester's interest, therefore, to ensure the best return on their efforts rather than ensure the quality of your product. For example, depending on how the testing mission has been framed and the reward payments are structured, this might mean reporting multiple minor bugs rather than one important one.

In our groups we identified how a testing mission might leave itself open to gaming and how to try and mitigate that.
More generally we looked at how to incentivise testing from outwith the team - be that by colleagues from other departments, users, or even friends/family.
Nicola has written about the topic here.


Overall it was a very full and useful day, and one on which I met a number of friendly and interesting people - I just wish I'd been more organised about getting and exchanging contact details to stay in touch!

Of course, there were other interesting sessions that I couldn't attend.
I was particularly sorry not to have been able to get to John Stevenson's workshop on creative and critical thinking - which sounds like it would have been right up my street. I'm planning to make amends by reading the material on his blog, and I have just bought his book.



I would definitely consider attending a TestBash Workshop Day again next year.
  
And maybe try to arrange it so I don't have to head straight off afterwards with work the next morning, while everyone else is enjoying themselves at the TestBash meetup!

Wednesday, 25 March 2015

A Ruby course to avoid

"Ruby Programming from Scratch - Beginner and Advanced" by EduCBA IT Academy

I'm cheating somewhat in reviewing this Udemy course, because I haven't finished it.  And I doubt that I ever will.  I finally realised I was falling victim to the sunk cost fallacy and gave up. 

That was in Lecture 78 (of 175), in which our tutor's uninspiring delivery is all but drowned out by what could be someone chasing electric cabling into a wall with a mallet and bolster. (Or - for a more attractive image - perhaps chiseling out a piece of fine marble sculpture.)

The list price for this course on Udemy is a jaw-dropping £233 (that's right - two hundred and thirty-three), which I imagine must make it one of the most expensive courses on the site.

As I said in an earlier post, you should never pay full price for a course on Udemy*. I paid $10 for this one, but regret even that. $10 seems like more than the creators spent on putting the course together.

Format

The format of the course (unless it changes in the sections I haven't watched) is to go through elements of Ruby, define them, and give a basic example of the relevant syntax. 
There is no encouragement to try out the constructs for yourself: no exercises, and no project running through the course.
I suppose it could serve as a look-up for a particular piece of syntax with an example of its use. But then you could just read the Ruby documentation yourself.

The presentation is flat and uninspiring. Worse, it seems to have been put together with contempt for the student.

Lazily produced

Aside from the basic spelling errors visible on the course page itself, the actual video content is poorly and lazily produced.

For example, within a single video lecture we start with a discussion of the second point on a slide (string interpolation) before the video jump-cuts back to the first point on the slide (explaining what a string is). It has clearly been edited in the wrong order.
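
(For anyone who hasn't met it, Ruby's string interpolation - the topic being jump-cut around - looks something like this. This is my own snippet, not one from the course:)

    name = "world"
    puts "Hello, #{name}!"   # double quotes interpolate the expression => Hello, world!
    puts 'Hello, #{name}!'   # single quotes don't interpolate          => Hello, #{name}!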

It gets worse in the section on comparison operators.

It seems odd that the first lecture in that section launches into discussing the second operator (!=) in a table of them.  "What about the first one?", you think.  Maybe == is so obvious it doesn't need explaining?

But, no, after a couple of minutes the video jump-cuts back to give a general introduction to the table we've already been looking at... and to describe the == operator. Then, at 5 minutes 40 into the same video, we jump again to what was obviously meant to be the start of the lecture - a basic definition of what a comparison operator is.
All confusing enough, but it gets worse when the next video explains the != operator to us again as if we'd never seen it before.
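
(For reference, the two operators in question are simple enough - again, my own example rather than the course's:)

    puts(1 == 1)          # => true  (equal)
    puts(1 != 2)          # => true  (not equal)
    puts("cat" == "dog")  # => false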

Quality control

I can only conclude that they didn't bother to make even basic checks of the course content before publishing it. And Udemy clearly don't apply any quality control of their own. I should have followed my own advice.

Before long, even the small things about this course were getting on my nerves: the poor audio with its various distracting background noises, and the fact that every time the tutor wants to run one of his Ruby files we have to watch him open a command prompt and "cd" four times to get to his working directory. If you can't organise your presentation a bit better, why not at least cd once with the full path?!

*takes deep breath*

There are, of course, a number of alternative Ruby courses on Udemy, including a couple which I've tried and would rate higher than this one.
Wait for the $10 per course offers, though!



*Except Alan Richardson's magnificent WebDriver course, but even that can be taken outside Udemy here.


Tuesday, 17 March 2015

"The Leprechauns of Software Engineering" by Laurent Bossavit


"The software engineering profession as a whole needs to feel ashamed that it will so readily repeat, as if it were a universal truth, something that is only a half-baked figure from a single project in a single business..."
- Laurent Bossavit


Laurent Bossavit's book questions how truisms in software development come into being without compelling evidence. And it specifically debunks a few of them - like the "10x" variation in productivity between developers; and the "fact" that defects are exponentially more expensive to fix the later in the development process they are identified.

Unfortunately, we accept and repeat claims that sound intuitively plausible like these without stopping to think that we don't even have universal agreement on how to measure "productivity", or on what exactly is the definition of a "defect".



Clearly decisions should be made based on evidence, not on hearsay or what "everyone else is doing". 
Organisations shouldn't adopt Agile practices, for example, just because they're fashionable. That wouldn't be rational.  And yet....

If I ask myself how this adoption of folklore happens, I can think of a couple of possible contributing factors. (Given the topic, I should stress that I have done no research, and can offer no evidence.)

How do these claims propagate?

As I read the book I was reminded how information-transfer often takes place in "punchy" presentations, documents and web pages. 

Points need to be made quickly, often visually. A few individual data points get represented as a more dramatic curve for "clarity".  If we have a point that we want to get across then the cliche (along with including a Wikipedia definition) is to Google for some - any - source that appears to back us up with statistics. 

We don't check any deeper for proof of what we already know. No-one really has time for detail anyway.

As Bossavit points out: "If you can cite a source for your claim then it is probably true" is a dangerous heuristic.

If someone tells us that "evidence shows" 71% of failed software projects suffer from poor requirements then, fair enough, we don't have the time or inclination to check that.
But that doesn't mean we have to believe it.
And why do we think that what applies to other people's projects applies cleanly to ours?

Why are these claims accepted?  

Well, personally I suspect I fall for something like the "argument from authority" fallacy.

I tend to assume that everyone (and I do mean everyone) in the software world knows more than I do, especially if they're confident enough to state those views publicly. 
But knowing about some things in software doesn't equate to being an expert on every topic, nor does it mean that they can be bothered to fully research the claims they repeat any more than I can.

My default position is too often a trusting "ah - I didn't know that" rather than "what's the evidence?". 

(Luckily the fact that I now have a blog myself should rid me of the belief that writing a blog automatically makes you a credible source for anything.)

Critical thinking

Bossavit wants to see a bit more of what Testers like me should already be used to applying - critical thinking.
He also believes that research into software engineering should be informed by the methods of the cognitive and social sciences, and not simply look to arrive at quotable metrics by taking shortcuts.

In leading us through exactly how he has uncovered the lack of evidence behind these "facts", Bossavit gives a run-down of how we can use Google for leprechaun hunting ourselves - and encourages us to practise on claims we find questionable.

I might try it on some of those statements made about the miraculous properties of test automation.


"The Leprechauns of Software Engineering" by Laurent Bossavit is available now as an e-book from Leanpub.

Wednesday, 4 March 2015

Testing in the Pub Podcast, Episode 15 - Pairing

Stephen Janaway and Dan Ashby's "Testing in the Pub" podcast is a nice source of free learning, and they deserve thanks for taking the time to put it together.

Having started early last year they're up to Episode 15 already with this one about pairing.

For years I worked as a sole Tester in an organisation where all the dev was outsourced. I used to dream of pairing (ok, not literally dream) if only to have a second set of eyes checking what I was doing.  Since then I've been fortunate to have the chance to do some informal pairing with other Testers and have really found it valuable.

In this podcast I found it particularly interesting to hear Janaway and Ashby talk about how pairing can mean more than just sharing a screen with another tester to tackle a challenging scenario; or with a developer to provide immediate feedback on what they're coding.

As Testers, pairing can also help us influence decision-making; raise our profile in the team; and enlighten those who don't fully understand the role of testing - or who believe the misconceptions. We can add value to a project by pairing with roles other than just test and dev. (I've had a valuable session pair-testing with a Project Manager in the past.)

Pairing is also a great way of learning. Dan Ashby talks about a nice set of simple question words he uses as a framework whilst pairing to gain information and understanding which will inform testing decisions.

As someone who finds Session Based Test Management a useful approach to Exploratory Testing, I'm embarrassed I hadn't already thought about applying a similar approach to pairing. 
Don't just spontaneously attempt to start a pairing session with a colleague. Get their agreement in advance, have a goal for the session and put a time-limit on it.

But before you do any of that, as the guys discuss, you might need to convince Management that pairing is a good use of resources.

Lots to think about and act on after listening to this discussion. And it can be easily digested during your commute!


You can access this episode, and all of the Testing in the Pub podcasts at: http://testinginthepub.co.uk/testinginthepub/