
Tuesday, 9 June 2015

"The Selenium Guidebook" and Thoughts on my Learning Process


"The Selenium Guidebook: How To Use Selenium, successfully" by Dave Haeffner does not specifically set out to teach you how to automate elements on web pages with the WebDriver API. What it does set out to do - as the full title suggests - is show how you can practically implement WebDriver automation to add business value, avoiding common pitfalls in the process.

And in that vein, this post doesn't exactly review the book. It does a bit of that, but it's more about my personal experience of working through it, and reflects on how I might improve my learning process in the "code" domain going forward.

Some basics about the book
I worked through the Ruby version of the book. A Java version is also now available.

Topics the book covers include:
  • Formulating an automation strategy
  • Understanding what constitutes the right approach to writing your checks
  • Using Page Objects/abstraction to make a more robust suite that your team will trust (see the sketch after this list)
  • Running your checks against multiple system and browser configurations on Sauce Labs
  • Parallelisation to keep large suites running in sensible time-frames
  • How to set up suites which run automatically as part of continuous integration
  • Multiple examples of how to approach specific automation challenges
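
To give a flavour of the Page Object idea, here is my own minimal sketch against Dave's practice site - not code from the book, and the element locators and login details are illustrative:

    require 'selenium-webdriver'

    # A bare-bones page object: the locators and actions for the login page
    # live in one class, so checks stay readable and a changed selector
    # only needs fixing in one place.
    class LoginPage
      USERNAME_INPUT = { id: 'username' }   # illustrative locators
      PASSWORD_INPUT = { id: 'password' }
      LOGIN_FORM     = { id: 'login' }

      def initialize(driver)
        @driver = driver
      end

      def visit
        @driver.get 'http://the-internet.herokuapp.com/login'
      end

      def log_in_as(username, password)
        @driver.find_element(USERNAME_INPUT).send_keys username
        @driver.find_element(PASSWORD_INPUT).send_keys password
        @driver.find_element(LOGIN_FORM).submit
      end
    end

    driver = Selenium::WebDriver.for :firefox
    login = LoginPage.new(driver)
    login.visit
    login.log_in_as('a-username', 'a-password')
    driver.quit

A check can then talk to the page purely through LoginPage, never through raw selectors.
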
What's great about the book:
  • Haeffner's style is ultra-clear
  • He provides not just a download of code from the book to compare yours with, but a real website to automate and practice on
  • There is also additional supporting material, e.g. videos, available to purchase
There were points while working through The Selenium Guidebook when I felt frustrated - not with the book, which is very good, but with my own lack of progress.

The frustrations came when something wasn't working for me and I felt I didn't have the knowledge to understand why or fix it.  I tried to think about why I was allowing myself to get frustrated.

Coping with a lack of coding knowledge
First, a slightly boring aside to explain where my knowledge stood when I started the book. 

I had studied WebDriver using Alan Richardson's excellent Java-based course about a year before working through this book. However, in the intervening time I had taken a job that didn't involve web testing and so my knowledge had gone stale. In resurrecting it, I decided to go back to coding with Ruby - which I had been learning the basics of previously - because I felt the less intuitive syntax of Java hadn't helped my learning of WebDriver.

Haeffner advises that an ability to code is not necessarily needed upfront. Whilst that is certainly true, in my experience learning is slower if you don't have a lot of coding knowledge.

I think the biggest problem caused by my lack of coding experience was that I couldn't always make good guesses about the cause or nature of the errors - and error messages - I encountered, and therefore I struggled to correct them.

Troubleshooting hints for the examples in the book could be helpful, but are probably impractical given the variety of problems students might face.

It might have been useful to know exactly which versions of particular Ruby gems Dave had tested his examples with. I'm not sure if it was ever really the cause of a problem I hit (there are still a few examples that I haven't managed to get working) but I did wonder at times whether an issue might relate to my having installed the latest version of gem X whereas Dave possibly had been working with an earlier one.
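
For what it's worth, pinning the versions in the project's Gemfile would at least make that variable controllable. A minimal sketch - the gem names are real, but the version numbers are purely illustrative, not ones the book was tested against:

    # Gemfile - pinning versions means "bundle install" will reproduce
    # the same environment later (version numbers are illustrative)
    source 'https://rubygems.org'

    gem 'selenium-webdriver', '2.45.0'
    gem 'rspec', '3.2.0'

Running the checks via bundle exec then guarantees those exact versions get used - and, as I found out the hard way (see below), every gem the code relies on needs listing here.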

Putting in the hours
Dave Haeffner very generously offers up his own contact details in the book to answer questions. I deliberately resisted using them because I didn't think it was fair on the author - and because having the answers provided too easily wouldn't have helped me learn.

Mostly I got failing examples working by putting in the hours and using the routes you might expect:
  • Google and Stack Overflow to look up error messages
  • seeing if the provided sample files ran on my setup and, if so, comparing the sample file with what I had written to find the difference.

And in one extreme case, using a file comparison tool to try to find what the hell was different between my failing project and the provided one. (The difference turned out to be not in my Ruby code but in gems I was using not being listed in my Gemfile.)

Of course, this "pain" is actually good for learning and I need to remember that when the frustration bites. When I eventually managed to get the HTTP status codes example (with BrowserMob Proxy) working, there was a real sense of achievement because I had had to do the research, thinking and work on my own to get there.
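
For anyone curious, the shape of that example is roughly as follows. This is my own sketch using the browsermob-proxy gem rather than Dave's exact code, and the binary path and site URL are placeholders:

    require 'selenium-webdriver'
    require 'browsermob/proxy'

    # Start a local BrowserMob Proxy server (the path to the binary
    # is a placeholder for wherever you unpacked it)
    server = BrowserMob::Proxy::Server.new('./browsermob-proxy/bin/browsermob-proxy')
    server.start
    proxy = server.create_proxy

    # Route browser traffic through the proxy so responses can be captured
    profile = Selenium::WebDriver::Firefox::Profile.new
    profile.proxy = proxy.selenium_proxy
    driver = Selenium::WebDriver.for :firefox, profile: profile

    proxy.new_har('status-codes')            # start recording traffic
    driver.get 'http://the-internet.herokuapp.com/'

    # Each HAR entry pairs a request with its response status code
    proxy.har.entries.each do |entry|
      puts "#{entry.response.status} #{entry.request.url}"
    end

    driver.quit
    proxy.close
    server.stop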

By the time I had gone through all the material in the book I felt it had been a really good investment and I had stopped worrying about whether I should have been able to get through it more smoothly. I shall certainly be keeping The Selenium Guidebook on hand and coming back to it.

Finding the right IDE/Editor
Something practical that I think would have helped me, and that I still need to sort out, was either a better IDE or better knowledge of the one I was using.  (I suppose this too falls under the heading of a lack of coding experience.)

After looking for free Ruby IDEs, I went with Aptana Studio. Quite possibly I don't have Aptana set up correctly - I struggled even to find useful documentation - but I found it of limited use beyond picking up syntax errors.

For Alan Richardson's Java course I had used the suggested IDE, JetBrains' IntelliJ. And I missed its extensive code-completion suggestions here, and its ability to pick up basic typos on top of syntax errors. Sadly, JetBrains' RubyMine IDE is not free.

I also found that running commands with Aptana Studio's built-in terminal (running Git Bash) wasn't always helpful. Like the time I could not get my first Sauce Labs checks to run and wasted an hour or more trying to figure out the syntax/structural error that Aptana seemed to report. When I ran the check in a simple Windows command prompt instead, I straight away saw more useful feedback: I simply had my Sauce Labs credentials wrong.
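
The fix itself was trivial. For the record, the usual pattern - and this is a generic sketch rather than the book's code - is to keep credentials in environment variables and build the remote URL from them, so a wrong username or access key is easy to spot and correct:

    require 'selenium-webdriver'

    # SAUCE_USERNAME and SAUCE_ACCESS_KEY come from the environment,
    # so credentials never get hard-coded into the suite
    caps = Selenium::WebDriver::Remote::Capabilities.internet_explorer(
      version:  '9',
      platform: 'Windows 7'
    )

    url = "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}" +
          '@ondemand.saucelabs.com:80/wd/hub'

    driver = Selenium::WebDriver.for :remote, url: url, desired_capabilities: caps
    driver.get 'http://the-internet.herokuapp.com/'
    puts driver.title
    driver.quit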

Give myself a chance
But the simplest way I could have improved the process for me was to relax a bit. Not to feel I was working against the clock to get through the material. Not to overly criticise myself when I might spend a whole evening on a few pages but still not get the example working by bedtime.

And to give myself more credit when I eventually did chase down, and correct, my errors.

This is a generic flaw I sometimes show in my learning process - unrealistic expectations of how quickly and how fully I can process something. It's something I will blog about separately in my next post...

---------------
"The Selenium Guidebook: How to use Selenium, successfully" by Dave Haeffner is available now as an e-book in both Ruby and Java versions. A range of packages with extra supporting material are also available at the same link. 

Haeffner also produces a free weekly Selenium tips email.

Tuesday, 17 March 2015

"The Leprechauns of Software Engineering" by Laurent Bossavit


"The software engineering profession as a whole needs to feel ashamed that it will so readily repeat, as if it were a universal truth, something that is only a half-baked figure from a single project in a single business..."
- Laurent Bossavit


Laurent Bossavit's book questions how truisms in software development come into being without compelling evidence. And it specifically debunks a few of them - like the "10x" variation in productivity between developers; and the "fact" that defects are exponentially more expensive to fix the later in the development process they are identified.

Unfortunately, we accept and repeat intuitively plausible claims like these without stopping to think that we don't even have universal agreement on how to measure "productivity", or on what exactly constitutes a "defect".



Clearly decisions should be made based on evidence, not on hearsay or what "everyone else is doing". 
Organisations shouldn't adopt Agile practices, for example, just because they're fashionable. That wouldn't be rational. And yet...

If I ask myself how this adoption of folklore happens, I can think of a couple of possible contributing factors. (Given the topic, I should stress that I have done no research, and can offer no evidence.)

How do these claims propagate?

As I read the book I was reminded how information-transfer often takes place in "punchy" presentations, documents and web pages. 

Points need to be made quickly, often visually. A few individual data points get represented as a more dramatic curve for "clarity". If we have a point we want to get across, the cliché (along with including a Wikipedia definition) is to Google for some - any - source that appears to back us up with statistics.

We don't check any deeper for proof of what we already know. No-one really has time for detail anyway.

As Bossavit points out: "If you can cite a source for your claim then it is probably true" is a dangerous heuristic.

If someone tells us that "evidence shows" 71% of failed software projects suffer from poor requirements then, fair enough, we don't have the time or inclination to check that.
But that doesn't mean we have to believe it.
And why do we think that what applies to other people's projects applies cleanly to ours?

Why are these claims accepted?  

Well, personally I suspect I fall for something like the "argument from authority" fallacy.

I tend to assume that everyone (and I do mean everyone) in the software world knows more than I do, especially if they're confident enough to state their views publicly.
But knowing about some things in software doesn't equate to being an expert on every topic. Nor does it mean they can be bothered to fully research the claims they repeat any more than I can.

My default position is too often a trusting "ah - I didn't know that" rather than "what's the evidence?". 

(Luckily the fact that I now have a blog myself should rid me of the belief that writing a blog automatically makes you a credible source for anything.)

Critical thinking

Bossavit wants to see a bit more of what testers like me should already be used to applying - critical thinking.
He also believes that research into software engineering should be informed by the methods of the cognitive and social sciences, and not simply look to arrive at quotable metrics by taking shortcuts.

In leading us through exactly how he has uncovered the lack of evidence behind these "facts", Bossavit gives a run-down of how we can use Google for leprechaun hunting ourselves - and encourages us to practise on claims we find questionable.

I might try it on some of those statements made about the miraculous properties of test automation.


"The Leprechauns of Software Engineering" by Laurent Bossavit is available now as an e-book from Leanpub.

Sunday, 22 February 2015

"Hands-on Mobile App Testing" by Daniel Knott

Daniel Knott has produced a great book, full of practical information.
Next time I have to undertake a mobile testing project this will be my starting point for gathering ideas.



Knott doesn't skimp - giving us everything from an overview of the history and growth of mobile technology to the factors you should consider to ensure a successful app launch.

Indeed, my overriding impression of this book is how practical and useful it is if, like me, you're not as familiar with mobile testing as you'd like to be. It made me realise how shallow some of the app testing I've done in the past has been.

As context to inform your test approach, the book describes the special challenges of mobile testing, e.g. device fragmentation and unique hardware factors like the use of sensors.
There are also comprehensive chapters on "How to Test Mobile Apps" and on mobile test automation. These chapters work, often literally, as ready-made checklists which you can put straight into action.

Having read through Hands-on Mobile App Testing once, I'll be going through it again to try out some of the practical test ideas and exercises Knott outlines.  For example his suggestion (which I won't detail here - buy the book!) on how to use the information app providers publish about their updates to improve your own testing and test design.

It feels like nothing is left out here, with discussion of topics like: understanding and profiling your user; usability; security - even how you might source and manage those ever-changing devices.  And if you need to go deeper on a subject Knott points you towards sources for that too.

You'll find information and advice relevant to whatever testing you do. Knott's section on planning a mobile testing strategy is helpful when compiling any test strategy; his chapter on skills mobile testers need is applicable to testers in any area.

If I have one tiny nitpick, it's that there are some "literals" in the book which might have been picked up with another proof-read. (They're not spelling errors that a spellcheck would catch.)
But this is understandable given the self-published approach through Leanpub; and as Knott has announced on his blog that a deal has been done to publish the book through a major publishing house, I'm sure those will be polished out.
In any case, it's a minor concern which didn't impact the quality of this book for me. I strongly recommend it.

Please note:
Since I wrote this review of the Leanpub version of "Hands-On Mobile App Testing" a revised and expanded edition has been published.