
Showing posts with label Webinar. Show all posts

Thursday, 10 September 2015

Webinar: Getting Started in Security Testing by Dan Billing

Brief notes on this webinar: presented by Dan Billing, and organised by Ministry of Testing
---------------------

Rumour has it that there are no testers in the world who didn't sit in on Dan Billing's intro to Security Testing webinar this evening.

But in the unlikely event that someone out there did miss it, I can say that it's highly recommended.


Currently I work for a company whose business is secure technology, leaning more towards endpoint/device security than web. Our team doesn't tend to do the detailed security testing ourselves (we're not expert Penetration Testers), but security is obviously a key point to keep in mind whilst doing functional testing. So the more I can learn about security the better.

For me this webinar dovetailed nicely with the Troy Hunt "Hack Yourself First" course which I've recently started working through. (With Dan himself using Troy Hunt's practice site for one of his examples and, like me, encountering the regular blasting of that site's database when wanting to demo something from it!)

What you'll learn

The webinar sets the context for the importance of security testing before giving an entertaining overview of recent high-impact security breaches.

Dan outlines the factors that define a good security testing approach, and a couple of helpful mnemonics for modelling potential threats to your application - including his own EXTERMINATE model.
He also presents a list of areas that are typically sources of security bugs in applications.

Inevitably the most fascinating part of the webinar is Dan's live demos of some of the key web vulnerabilities, eg. SQL injection, cross-site scripting (XSS), and how they can be easily exploited with no more than a browser and a bit of know-how.
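As a toy illustration of why that first one is so easy to exploit (my own sketch, not one of Dan's demos), compare a string-built query with a parameterised one, here using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attack = "' OR '1'='1"

# Vulnerable: attacker input is concatenated straight into the SQL,
# so the quote breaks out of the string literal.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % attack).fetchall()
print(len(rows))  # prints 1 - the OR '1'='1' clause matches every row

# Safer: a parameterised query treats the input as plain data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attack,)).fetchall()
print(len(rows))  # prints 0 - nobody is literally named "' OR '1'='1"
```

The second form is the standard defence: the database driver never interprets the input as SQL.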

The reality today - on the web particularly - is that the tools for malicious attacks are widely, easily and freely available, so the threat is very real.

I certainly came away with a couple of new ideas for exploits I can attempt when I get into the office tomorrow.
As I said, highly recommended.


A recording of the webinar will be made available on the Ministry of Testing Dojo soon.

Dan has also previously provided insights into Security Testing for the Testing in the Pub podcast series, and those episodes are also recommended.



Wednesday, 22 April 2015

Webinar: Practical Tips + Tricks for Selenium Test Automation (Dave Haeffner)

Notes on this webinar: presented by Dave Haeffner, and hosted by Sauce Labs
---------------------

Another Dave Haeffner webinar! This time rather than considering the broad why and how of introducing automation, he talks about what is certainly an area of expertise for him: specific ideas to extend, and/or overcome challenges in, Selenium browser automation.

The amount of good, free learning that Haeffner offers around Selenium is commendable. He also offers his own "the-internet" site/application which you can use to practice on - and for which the code is open-sourced on GitHub.

The format of the webinar was in line with Haeffner's free weekly Selenium tips email; and with the second half of his Selenium Guidebook.
(At least that's what I think the second half of the book does ... I'm still working through it!)

The learning here is not really about the details of using the WebDriver API to find, and interact with, elements in web pages. Rather it's about ways you can add value to your automation efforts.
My notes aren't extensive but here's what I took from it:

Headless automation
"Headless" automation, ie. performed without launching an actual browser instance, is especially useful for running on a Continuous Integration server providing feedback every time there is a build.
Haeffner mentioned:
- Xvfb, which is limited to running on Linux, and which he suggests offers no real speed benefit over launching a browser.
- GhostDriver (PhantomJS), which is multi-platform and does offer faster performance.

Visual Testing
I only became aware of the existence of "visual testing" recently and know very little about it. It sounded kind of counter-intuitive because checking for small changes in the UI was always exactly the kind of thing that automation couldn't sensibly do.  (I thought maybe this was just another example of over-promising by excitable automation tool vendors!)

However, there are now open-source tools that will work alongside WebDriver to look for problems like layout or font issues that WebDriver on its own won't really handle. In effect this gives "hundreds of assertions for a few lines of code".
This looked like interesting stuff although, without having tried it, it still sounds a little too good to be true and I would need convincing before I trusted it. As you'd expect, Haeffner does talk about the inherent complexity and that false positives are more of a problem than usual.  It seems like some human intervention is still necessary to confirm "failures" that the automation reports.
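To get a feel for the idea - this is my own toy sketch, not how the real visual-testing tools work internally - a pixel-level comparison against a baseline screenshot amounts to something like:

```python
def diff_ratio(baseline, current):
    """Fraction of pixels that differ between two equal-sized screenshots,
    represented here as flat sequences of pixel values."""
    if len(baseline) != len(current):
        raise ValueError("screenshots must be the same size")
    changed = sum(1 for a, b in zip(baseline, current) if a != b)
    return changed / len(baseline)

# One comparison stands in for hundreds of per-element assertions -
# but a human still has to judge whether a non-zero diff is a real bug
# or just an acceptable rendering difference.
print(diff_ratio([0, 0, 0, 0], [0, 0, 255, 0]))  # prints 0.25
```

That last point is exactly where the false positives come from: the check can tell you *that* something changed, not whether the change matters.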

Proxy Tools
Configuring automation to run through a proxy tool (like Browsermob or Fiddler) opens up a range of extra options:
- Checking returned HTTP status codes at URLs
- "Blacklisting". An idea which appealed to me from past experience. Here you manipulate the traffic so that third-party elements like ads .... which are slow to load thus affecting your checks ... can be excluded.
- Load testing by capturing traffic and then converting it into a JMeter file
- Checking for broken images by examining the status code of all IMG elements.  (Haeffner also talks about other ways to check for broken images without needing a proxy.)
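For the broken-images case, the check over the proxy's captured traffic boils down to filtering entries by MIME type and status code. A sketch - the dict shape here is an assumption, not BrowserMob's exact output format:

```python
def broken_images(entries):
    """Return URLs of image responses with error status codes.

    entries: iterable of dicts like {"url": ..., "status": ..., "mime": ...}
    as a proxy such as BrowserMob might record them (shape assumed).
    """
    return [e["url"] for e in entries
            if e.get("mime", "").startswith("image/") and e["status"] >= 400]

captured = [
    {"url": "/logo.png", "status": 200, "mime": "image/png"},
    {"url": "/hero.jpg", "status": 404, "mime": "image/jpeg"},
    {"url": "/app.js",  "status": 500, "mime": "text/javascript"},
]
print(broken_images(captured))  # prints ['/hero.jpg']
```

The same filtering idea covers the plain status-code checks too - swap the MIME-type condition for whatever URLs you care about.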

Forgotten Password Scenarios
It's difficult to fully automate something like a forgotten-password workflow when that typically involves the generation and receipt of email. At least I thought so.
But Haeffner describes how you can use Gmail's API to look for and scan an email rather than attempting the madness of automating your way in and out of the Gmail web front end.
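Once the Gmail API has handed back the message body, extracting the reset link is just text scanning. A sketch - the URL pattern below is my assumption about what such an email contains:

```python
import re

def extract_reset_link(email_body):
    """Pull the first password-reset-looking URL out of an email body
    (pattern is an assumption; adjust to your app's actual emails)."""
    match = re.search(r"https?://\S*reset\S*", email_body)
    return match.group(0) if match else None

body = "Hi,\nClick https://example.com/reset?token=abc123 to continue."
print(extract_reset_link(body))  # prints https://example.com/reset?token=abc123
```

The hard part - polling the API until the email arrives - is just a retry loop around the fetch; the point is that no browser automation of Gmail itself is needed.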

A/B testing
Pesky A/B tests running on your site (Marketing! *shakes fist* ) can make automation checks fail because the page may not be as expected when the driver gets there. Haeffner shows ways to opt out of A/B tests when running your automation eg. by creating an opt-out cookie, or by using URLs which bypass the page variants.
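The cookie route might look like this sketch. The cookie name is a made-up placeholder - use whatever your A/B tool actually honours - and the driver is duck-typed, so any WebDriver (or a stub) works:

```python
def opt_out_of_ab_tests(driver, base_url, cookie_name="ab_opt_out"):
    """Visit the site, then set an opt-out cookie so subsequent pages
    serve the control variant. cookie_name here is hypothetical;
    substitute the name your A/B testing tool recognises."""
    driver.get(base_url)  # cookies can only be set for the current domain
    driver.add_cookie({"name": cookie_name, "value": "true"})
```

The URL-based alternative is even simpler: if the tool recognises a bypass query parameter or serves variant-free URLs, point your checks at those directly.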

File Management Scenarios
Scenarios involving file upload buttons are tricky because they generally involve a standard system/OS file management dialog - which WebDriver can't automate. But get WebDriver to submit the full file path to the file input element and you may be able to bypass the system dialog entirely.
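In WebDriver terms the trick is to send the absolute path straight to the `<input type="file">` element rather than clicking it. A sketch, with selector and path as placeholders:

```python
def upload_file(driver, css_selector, file_path):
    """Bypass the OS file dialog: type the absolute path straight into
    the <input type="file"> element instead of clicking it."""
    file_input = driver.find_element_by_css_selector(css_selector)
    file_input.send_keys(file_path)  # path must be absolute

# e.g. upload_file(driver, "input[type='file']", "/tmp/report.pdf")
```

This works because the browser, not the OS dialog, ultimately reads the path from the input element.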

Additional Explanatory Output
Haeffner showed how to add explicit messaging around the assertions in your checks by highlighting located web elements using JavaScript. Having captured the original styling, you can apply your own styling with JavaScript - and revert to the original when done.
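A minimal version of that highlighting helper might look like this (my sketch; the styling and pause are arbitrary, and the driver/element are duck-typed WebDriver objects):

```python
import time

def flash(driver, element,
          style="background: yellow; border: 2px solid red;", pause=0.3):
    """Briefly highlight an element via JavaScript, then restore its
    original inline style - handy for screenshots or watching a run."""
    original = element.get_attribute("style") or ""
    driver.execute_script(
        "arguments[0].setAttribute('style', arguments[1]);", element, style)
    time.sleep(pause)
    driver.execute_script(
        "arguments[0].setAttribute('style', arguments[1]);", element, original)
```

Capturing `original` first is the key step: without it the element would keep the debug styling for the rest of the check.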


There was, of course, more information - and more detail in the webinar itself.
A recording of it is available here: http://sauceio.com/index.php/2015/04/recap-practical-tips-tricks-for-selenium-test-automation/

Sunday, 12 April 2015

Webinar: Mastering the Essentials of UI Test Automation


Brief notes on this Webinar: presented by Jim Holmes and Dave Haeffner, and hosted by Telerik
---------------------
This Webinar caught my eye because I'm working my way through co-presenter Dave Haeffner's Ruby-based version of "The Selenium Guidebook".  (A post or two to follow on that, I'm sure.)

This short webinar turned out to be a general "primer" for teams/organisations looking to adopt automation and didn't specifically discuss tools or techniques.  But it was no less interesting for that.
I understand that it accompanies a series of blog posts on the topic - see the link right at the end of this post - but I haven't read those as yet.

It was nice that, despite being organised by tool vendor Telerik, the webinar did not push their automation tool.  And Holmes and Haeffner are definitely not naive enough to believe the "automate everything" myths that some tool vendors seem to push. (Though they didn't specifically refer to it, I suspect Holmes and Haeffner get the "Testing vs Checking" thing too.)

The following are my notes from the webinar and don't necessarily reflect direct quotes from Holmes or Haeffner.


Why automate?
  • Have a reason for your team to adopt automation, not just because everyone else is supposedly doing it. Question if it really adds value in your context.
  • Use automation to solve business problems, not to cut the size of your testing team.
  • Have the right expectations of automation. Organisations that expect to automate everything or to eliminate manual testing are misguided.
  • Automate the key, happy path stuff.


Implications of automation for testers in the team

  • Automation should enable people to be re-purposed and time to be re-allocated rather than removing the former and reducing the latter. It should be a way to free up testing resource to do things computers can't do. Enables a move from repetitive scripted or regression checking to Exploratory Testing, questioning the product.
  • For testers, automation needn't be intimidating. Good testers with the right mindset can utilise automation without the need to try and become developers instead. They can still have focus on the thinking skills intrinsic to rewarding testing.
  • Reassure them that automation code is not as intimidating as application code. Only a small subset of programming knowledge is necessary to get to grips with automating. In his Selenium book Haeffner talks about just 8 key concepts that he thinks are necessary for test automation. Indeed, the most successful implementations have developers helping the testers.


Implementation

  • Holmes and Haeffner suggest running a time-boxed pilot of any automation implementation. A pilot gives you the chance to adapt processes and set internal expectations.
  • There is a danger of building badly-implemented automation that the team doesn't trust. Avoid this with good abstraction and robust tests. Use tools and techniques (eg. continuous integration) which make automation visible and accessible to the whole team. It shouldn't be running in isolation on someone's machine. Aim for constant feedback. Learn from failure and advertise success.
  • In Dave Haeffner's view, good automated checks in a suite:
    • are concise and specific, checking one clearly understood thing
    • all run independently of each other
    • are descriptive


A recording of the full webinar, and some Q&A that they didn't have time for, can be found here.