
Wednesday, 20 April 2016

Introducing Test Team Lean Coffee


Background
Our team has a weekly hour in the diary for a session called "Experiments and Learning." (It pre-dates my joining.) As you'd guess, it's intended to be an opportunity for knowledge-sharing. The sad thing is that, by my estimate, four out of every five of those sessions get cancelled because no one has anything to share.

One reason for that is that we tend to think of them as demonstrations or mini-lectures which would require preparation.  I've done a few myself, including on Inattentional Blindness and on Session-Based Test Management, and they took a lot of personal time to put together.

Rather than waste the opportunity we have every week, I proposed that we try Lean Coffee at one session. If we liked it, then we would always have the option to do Lean Coffee periodically after that.

Lean Coffee, of course, requires almost no preparation, and it gets the whole team (or as many as want to be involved) actively participating in the session rather than passively listening.

After proposing the idea to our QA Manager and getting his buy-in, I emailed my teammates to introduce the concept and asked them to let me know if they were interested in trying it out.
I wasn't entirely surprised that the vast majority of the team didn't respond at all, but when it was brought up again in our team meeting it seemed that there was interest.
And so, more reassured that there wouldn't be an empty (or embarrassingly quiet) room, I went ahead.

To give everyone a starting point for ideas on what to discuss I shared the list of topics that had been suggested at TestBash 2016 in advance.


The topics on the day
At the session, after combining stickies suggesting the same topic, we ended up with the following on our "to be discussed" list:

- Motivation. Keeping testing interesting
- The tester's role. Are testers needed anymore?
- Security testing
- Real-world use cases and environments
- TDD/ATDD/BDD


It was interesting that the first thing we talked about was the topic with the most votes - so, in theory, the one most people were interested in. Yet I think it was one of the shorter discussions. That may have been because everyone was a bit unsure or reticent about talking and hadn't warmed up yet. (Of course, maybe people also felt they couldn't talk about feeling bored in front of our manager - although he was one of the ones posing that question!)

It so happened that we covered all of the topics we had in our session with just a 5 minute overrun. This was probably a good thing.  Whilst in general there should be no pressure to get through all the suggested items, as this was the first Lean Coffee session it was nice that no one was left feeling they had wasted their time coming up with topics.

Reflections and lessons learned
- We didn't have the typical group configuration for Lean Coffee. There were 11 of us, and the room didn't allow us to split into smaller groups at separate tables.

This turned out OK because ours wasn't the usual self-selecting Lean Coffee group and some of the people there weren't so comfortable talking. Conversely, I felt I probably talked too much myself - partly because, as the instigator of the session (and the reason everyone had found themselves sitting there), I felt a need to keep conversation flowing.


- Initially I had intended to take the vote on whether to continue discussing a topic or not after 5 minutes. But in practice that didn't feel long enough - it seemed silly to ask if we should continue with something we were only just getting into.

Throwing myself into doubt about that also meant that I didn't take a proper "Roman vote". I tended to ask the group if they wanted to continue, and then people seemed to look around to see what everyone else was doing before they would risk putting their own hand up. Getting everyone to give a thumbs up/thumbs down at the same time would probably have overcome that.

I think next time I might experiment with taking a first vote at 10 mins, and then at 5 minute intervals after that.


- After the session I emailed everyone to ask for feedback, and for any criticism/improvements.  (I felt I was slightly more likely to get honest feedback via email than I was by asking face-to-face.)

I got two responses - both positive and interested in doing it again.

It's hard to tell how valuable the rest of the team really found it.  But now they know what Lean Coffee is, and they know that it's there as a future option any time we want to make use of it.  And it's so simple that I don't need to be there to "run" it.

Wednesday, 22 April 2015

Webinar: Practical Tips + Tricks for Selenium Test Automation (Dave Haeffner)

Notes on this webinar: presented by Dave Haeffner, and hosted by Sauce Labs
---------------------

Another Dave Haeffner webinar! This time rather than considering the broad why and how of introducing automation, he talks about what is certainly an area of expertise for him: specific ideas to extend, and/or overcome challenges in, Selenium browser automation.

The amount of good, free learning that Haeffner offers around Selenium is commendable. He also offers his own "the-internet" site/application which you can use to practice on - and for which the code is open-sourced on GitHub.

The format of the webinar was in line with Haeffner's free weekly Selenium tips email; and with the second half of his Selenium Guidebook.
(At least that's what I think the second half of the book does ... I'm still working through it!)

The learning here is not really about the details of using the WebDriver API to find, and interact with, elements in web pages. Rather it's about ways you can add value to your automation efforts.
My notes aren't extensive but here's what I took from it:

Headless automation
"Headless" automation, i.e. performed without launching an actual browser instance, is especially useful for running on a Continuous Integration server to provide feedback on every build.
Haeffner mentioned:
- Xvfb, which is limited to running on Linux, and which he suggests offers no real speed benefit over launching a browser
- GhostDriver (PhantomJS), which is multi-platform and does offer faster performance.

Visual Testing
I only became aware of the existence of "visual testing" recently and know very little about it. It sounded kind of counter-intuitive because checking for small changes in the UI was always exactly the kind of thing that automation couldn't sensibly do.  (I thought maybe this was just another example of over-promising by excitable automation tool vendors!)

However, there are now open-source tools that will work alongside WebDriver to look for problems like layout or font issues that WD on its own won't really handle. In effect giving "hundreds of assertions for a few lines of code".
This looked like interesting stuff although, without having tried it, it still sounds a little too good to be true and I would need convincing before I trusted it. As you'd expect, Haeffner does talk about the inherent complexity and that false positives are more of a problem than usual.  It seems like some human intervention is still necessary to confirm "failures" that the automation reports.

Proxy Tools
Configuring automation to run through a proxy tool (like Browsermob or Fiddler) opens up a range of extra options:
- Checking returned HTTP status codes at URLs
- "Blacklisting". An idea which appealed to me from past experience. Here you manipulate the traffic so that third-party elements like ads - which are slow to load and so drag out your checks - can be excluded.
- Load testing, by capturing traffic and then converting it into a JMeter file
- Checking for broken images by examining the status code of all IMG elements.  (Haeffner also talks about other ways to check for broken images without needing a proxy.)
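None of this is Haeffner's code, but the status-code idea for broken images can be sketched without a real proxy. Assume the captured traffic has already been boiled down to simple (url, content-type, status) tuples standing in for real HAR entries:

```python
# Sketch of the proxy status-code idea: scan captured traffic
# (represented here as plain tuples rather than a real HAR file)
# and report any image requests that returned an error status.

def broken_images(entries):
    """Return URLs of image responses with a 4xx/5xx status.

    entries: iterable of (url, content_type, status) tuples.
    """
    return [
        url
        for url, content_type, status in entries
        if content_type.startswith("image/") and status >= 400
    ]

captured = [
    ("https://example.com/logo.png", "image/png", 200),
    ("https://example.com/missing.jpg", "image/jpeg", 404),
    ("https://example.com/page", "text/html", 200),
]
print(broken_images(captured))  # -> ['https://example.com/missing.jpg']
```

A real implementation would read the same fields out of the HAR file that Browsermob (or Fiddler) exports.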

Forgotten Password Scenarios
It's difficult to fully automate something like a forgotten-password workflow when it typically involves the generation and receipt of an email. At least, I thought so.
But Haeffner describes how you can use Gmail's API to look for and scan an email rather than attempting the madness of automating your way in and out of the Gmail web front end.
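The Gmail API plumbing aside, the interesting step is scanning the fetched message for the reset link. A stdlib-only sketch of that part (the URL format here is invented for illustration):

```python
import re

# Once the Gmail API (or IMAP) has fetched the message body, finding
# the reset link is just text scanning. The URL pattern is made up
# for illustration - match whatever your application actually sends.

def extract_reset_link(body):
    """Return the first password-reset URL found in an email body, or None."""
    match = re.search(r"https://\S*reset\S*", body)
    return match.group(0) if match else None

email_body = """Hi,
Click the link below to choose a new password:
https://example.com/account/reset?token=abc123
"""
print(extract_reset_link(email_body))
# -> https://example.com/account/reset?token=abc123
```

The extracted link can then be fed straight back to WebDriver to complete the workflow.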

A/B testing
Pesky A/B tests running on your site (Marketing! *shakes fist*) can make automation checks fail because the page may not be as expected when the driver gets there. Haeffner shows ways to opt out of A/B tests when running your automation, e.g. by creating an opt-out cookie, or by using URLs which bypass the page variants.
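A rough sketch of both opt-out approaches (the cookie name and query parameter below are invented - a real site would document its own):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Both approaches boil down to a small bit of setup before the check
# runs. The cookie name and query parameter are hypothetical.

OPT_OUT_COOKIE = {"name": "optimizely_opt_out", "value": "true"}
# With a real driver: driver.add_cookie(OPT_OUT_COOKIE)

def with_variant_bypass(url, param="ab_variant", value="control"):
    """Append a query parameter that pins the page to one A/B variant."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append((param, value))
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_variant_bypass("https://example.com/signup"))
# -> https://example.com/signup?ab_variant=control
```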

File Management Scenarios
Scenarios involving file upload buttons are tricky because they will generally involve a standard system/OS file management dialog - which WebDriver can't automate. But get WebDriver to submit the full file path and you may be able to bypass system dialogs entirely.
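A minimal sketch of the path trick (the element locator in the comment is hypothetical):

```python
import os

# WebDriver can't drive the OS file-picker dialog, but it can type
# straight into an <input type="file"> element. The trick is to send
# the *absolute* path of the file, then submit the form as normal.

def upload_path(relative_path):
    """Return the absolute path WebDriver should send to a file input."""
    return os.path.abspath(relative_path)

# With a real driver this would be something like:
#   driver.find_element(By.ID, "file-upload").send_keys(upload_path("report.pdf"))
print(upload_path("report.pdf"))
```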

Additional Explanatory Output
Haeffner showed how to add explicit messaging around the assertions in your checks by highlighting located web elements using JavaScript. Having captured the original styling, you can apply your own styling with JavaScript - and revert to the original when done.
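A sketch of how those JavaScript snippets might be built on the Python side - the styling and attribute handling here are my assumption, not Haeffner's exact code:

```python
# Build the JavaScript strings that would be passed to
# driver.execute_script(). Capturing the original style first lets
# you restore it after the assertion or screenshot.

HIGHLIGHT_STYLE = "border: 2px solid red;"

def capture_style_js():
    return "return arguments[0].getAttribute('style');"

def apply_style_js(style):
    return f"arguments[0].setAttribute('style', '{style}');"

# Usage with a real driver (element located beforehand):
#   original = driver.execute_script(capture_style_js(), element)
#   driver.execute_script(apply_style_js(HIGHLIGHT_STYLE), element)
#   ... take screenshot / make assertion ...
#   driver.execute_script(apply_style_js(original or ""), element)

print(apply_style_js(HIGHLIGHT_STYLE))
```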


There was, of course, more information - and more detail in the webinar itself.
A recording of it is available here: http://sauceio.com/index.php/2015/04/recap-practical-tips-tricks-for-selenium-test-automation/

Sunday, 12 April 2015

Webinar: Mastering the Essentials of UI Test Automation


Brief notes on this Webinar: presented by Jim Holmes and Dave Haeffner, and hosted by Telerik
---------------------
This Webinar caught my eye because I'm working my way through co-presenter Dave Haeffner's Ruby-based version of "The Selenium Guidebook".  (A post or two to follow on that, I'm sure.)

This short webinar turned out to be a general "primer" for teams/organisations looking to adopt automation and didn't specifically discuss tools or techniques.  But it was no less interesting for that.
I understand that it accompanies a series of blog posts on the topic - see the link right at the end of this post - but I haven't read those as yet.

It was nice that, despite being organised by tool vendor Telerik, the webinar did not push their automation tool.  And Holmes and Haeffner are definitely not naive enough to believe the "automate everything" myths that some tool vendors seem to push. (Though they didn't specifically refer to it, I suspect Holmes and Haeffner get the "Testing vs Checking" thing too.)

The following are my notes from the webinar and don't necessarily reflect direct quotes from Holmes or Haeffner.


Why automate?
  • Have a reason for your team to adopt automation, not just because everyone else is supposedly doing it. Question if it really adds value in your context.
  • Use automation to solve business problems, not to cut the size of your testing team.
  • Have the right expectations of automation. Organisations that expect to automate everything or to eliminate manual testing are misguided.
  • Automate the key, happy path stuff.


Implications of automation for testers in the team

  • Automation should enable people to be re-purposed and time to be re-allocated, rather than people being removed and time being cut. It should be a way to free up testing resource to do things computers can't do, enabling a move from repetitive scripted or regression checking to Exploratory Testing: questioning the product.
  • For testers, automation needn't be intimidating. Good testers with the right mindset can utilise automation without the need to try and become developers instead. They can still have focus on the thinking skills intrinsic to rewarding testing.
  • Reassure them that automation code is not as intimidating as application code: only a small subset of programming knowledge is necessary to get to grips with automating. In his Selenium book Haeffner talks about just 8 key concepts that he thinks are necessary for test automation. Indeed, the most successful implementations have developers helping the testers.


Implementation

  • Holmes and Haeffner suggest running a time-boxed pilot of any automation implementation, which gives the chance to adapt processes and set internal expectations as necessary.
  • There is a danger of building badly-implemented automation that the team doesn't trust. Avoid this with good abstraction and robust tests. Use tools and techniques (e.g. continuous integration) which make automation visible and accessible to the whole team - it shouldn't be running in isolation on someone's machine. Aim for constant feedback. Learn from failure and advertise success.
  • In Dave Haeffner's view, good automated checks in a suite:
    • are concise and specific. Checking one, clearly understood thing.
    • all run independently of each other
    • are descriptive
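Those three qualities are easy to illustrate outside Selenium altogether. This toy example (mine, not the presenters') uses Python's built-in unittest: each check sets up its own data, asserts one clearly-named thing, and neither depends on the other:

```python
import unittest

# A toy "system under test" so the example is self-contained.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountChecks(unittest.TestCase):
    # Each test builds its own data (independent), asserts one thing
    # (concise and specific), and says what it verifies in its name
    # (descriptive).

    def test_ten_percent_discount_reduces_price(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(DiscountChecks)
)
print(result.wasSuccessful())  # -> True
```

The same shape applies to WebDriver checks: create your own test data, drive to one page, assert one thing, tear down.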


A recording of the full webinar, and some Q&A that they didn't have time for, can be found here.



Wednesday, 4 March 2015

Testing in the Pub Podcast, Episode 15 - Pairing

Stephen Janaway and Dan Ashby's "Testing in the Pub" podcast is a nice source of free learning, and they deserve thanks for taking the time to put it together.

Having started early last year they're up to Episode 15 already with this one about pairing.

For years I worked as a sole Tester in an organisation where all the dev was outsourced. I used to dream of pairing (ok, not literally dream) if only to have a second set of eyes checking what I was doing.  Since then I've been fortunate to have the chance to do some informal pairing with other Testers and have really found it valuable.

In this podcast I found it particularly interesting to hear Janaway and Ashby talk about how pairing can mean more than just sharing a screen with another tester to tackle a challenging scenario; or with a developer to provide immediate feedback on what they're coding.

As Testers, pairing can also help us influence decision-making; raise our profile in the team; and enlighten those who don't fully understand the role of testing - or believe the misconceptions. We can add value to a project pairing with other roles than just test and dev. (I've had a valuable session pair-testing with a Project Manager in the past.)

Pairing is also a great way of learning. Dan Ashby talks about a nice set of simple question words he uses as a framework whilst pairing to gain information and understanding which will inform testing decisions.

As someone who finds Session Based Test Management a useful approach to Exploratory Testing, I'm embarrassed I hadn't already thought about applying a similar approach to pairing. 
Don't just spontaneously attempt to start a pairing session with a colleague. Get their agreement in advance, have a goal for the session and put a time-limit on it.

But before you do any of that, as the guys discuss, you might need to convince Management that pairing is a good use of resources.

Lots to think about and act on after listening to this discussion. And it can be easily digested during your commute!


You can access this episode, and all of the Testing in the Pub podcasts at: http://testinginthepub.co.uk/testinginthepub/

Tuesday, 17 February 2015

Using Udemy for Online Courses

Udemy.com is a website offering online, video-based courses, a large number of which relate to technology and programming - and even specifically to testing.
I have bought a lot of online courses from Udemy. So many, in fact, that I always have a backlog of ones in progress or waiting to be started.  I expect many of the posts on this blog as it develops will relate to Udemy courses.

Courses can be taken on the website, of course, and there's also a mobile app which allows video lessons to be downloaded and watched offline. (I use the Android app - I assume the same is possible on iOS.)
The latter is very handy for learning during a commute, although some courses will work better than others if - like me - you don't have a tablet.  Even a large phone screen doesn't always give a clear view of what's going on in an IDE or command line.

Udemy don't produce learning content themselves but rather act as a kind of clearing-house for courses produced by "trainers" ranging from individuals to organisations; and from the highly professional to the decidedly amateur.
Sadly there is no guarantee that a course sold on Udemy will be of good quality - and you might not find that out until after you've bought it.  Udemy do offer a 30-day money back guarantee although personally I have no experience of trying to claim on it.

Here are a few points I follow when deciding whether or not to buy Udemy courses.

1. Never pay full price for a course

With a small number of exceptions, that course you're interested in on Udemy will soon be available at much less than the full price if you just wait a little while.

As I mentioned, I always seem to have a backlog of Udemy courses and that's mainly because I'm easily tempted to buy them in the regular sales and offers.  To give you an idea of what's possible here's an excerpt - just an excerpt, mind you! - of my Udemy purchase history.


Many instructors offer multiple courses on Udemy and these giveaways are "loss-leaders" so that they can market other courses to you.  But that doesn't mean you have to buy the other courses - and many of those will also be offered at a cut price if you wait a while.  

In fact, you can learn without paying a penny. There are a large number of free courses available on Udemy.

2. Sign up before you buy 

By signing up, and maybe wishlisting a few courses, you'll soon start to receive emails with the offers that feed my first point. Registering will also allow you to...


3. Preview the course before you buy

Once you're signed up to the site you can generally select a free preview of the course you're interested in. This amounts to a small number of sample "lessons" from the course which you can view. Of course, these sample lessons don't tend to give away any valuable content - they might just be an introduction, or cover how to install the software tools the course uses - but they give you the chance to gauge the quality of the production and the presenting style of the trainer. Do they speak clearly and engagingly, or do they have a monotonous delivery that you'll struggle to pay attention to?

4. Look at the course's "Discussions" thread

The main page for any course, which lists out the individual lessons, will also have a discussions panel available on the right.
Look here to see whether the instructor is engaging with their students and providing helpful feedback.  It's not uncommon to see discussion threads where seemingly straightforward questions placed by students weeks or months ago sit unanswered.  
On the other hand, the best instructors can be seen clearly taking the time and effort to provide extra value - and these are the kind of instructors who are most likely to add new material to the course over time or refresh parts that go out of date.  Remember, one of the benefits Udemy push is that you'll have lifetime access to the courses once purchased.


One more thing...
At the time of writing it looks like Udemy have now introduced pricing in sterling for UK customers - and it doesn't work in our favour.  A quick check suggests that courses priced at $99 have a sterling cost of £79 - when the current exchange rate would work out to around £65. 
So a disincentive for UK users, perhaps. Certainly another reason to always hold out for those cut-price offers.
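For what it's worth, the rough arithmetic behind that comparison (the $1.52/£ rate is my assumption for around the time of writing, not a quoted figure):

```python
# Rough check of the sterling premium on Udemy's UK pricing.
usd_price = 99
gbp_price = 79
usd_per_gbp = 1.52  # assumed early-2015 exchange rate

fair_gbp = usd_price / usd_per_gbp           # ~65.13
premium = (gbp_price - fair_gbp) / fair_gbp  # ~0.21, i.e. about 21%
print(round(fair_gbp), f"{premium:.0%}")
```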