Optimising Your Selenium Tests

Wednesday, 21 January 2009

When a team is using Selenium or WebDriver there are usually a few points in the delivery cycle where the team finds the suite of tests is running far too slowly and causing pain. Here are a few thoughts on how I go about solving these problems when they turn up.

Not all of these problems are technical, and they aren't unique to Selenium: they can apply to any software under test, particularly any set of acceptance tests. I've applied some of these steps when optimising the acceptance tests of some heavyweight C++ libraries.

Get a birds eye view

I tend to take a breadth-first look at the tests: take a quick scan and drill down where I think I can get the most gain. I do follow my gut a little and look for patterns, so your mileage may vary.

Where’s the party?

First I want to identify the cause. Is every test slow, or can you see some big offenders to optimise for the biggest gain from your hard work?

Selenium’s browser set up and tear down

The usual first pain point with Selenium is tests that tear down and set up the browser for each test. Keeping the browser open for a full run of tests really speeds up the run, though you need to ensure that you don't have browser windows piling up on your cruise box over time. You could also try lowering set-up costs by moving browser set-up onto separate threads, but measure it first: with that approach it can just as easily come out slower.
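A minimal sketch of the shared-browser idea. FakeBrowser is a hypothetical stand-in for a real Selenium driver (a real suite would use something like webdriver.Firefox()) so the example runs without a browser installed:

```python
import atexit

class FakeBrowser:
    """Stand-in for a real Selenium driver, so this sketch runs anywhere."""
    opened = 0  # counts how many browsers the run has started

    def __init__(self):
        FakeBrowser.opened += 1

    def quit(self):
        pass  # a real driver would close its window here

_browser = None

def get_browser():
    """Lazily start one shared browser and close it at the end of the run."""
    global _browser
    if _browser is None:
        _browser = FakeBrowser()
        atexit.register(_browser.quit)  # no windows piling up on the CI box
    return _browser

def test_login_page():
    get_browser()  # would drive pages here

def test_search_page():
    get_browser()  # reuses the same browser instance

test_login_page()
test_search_page()
print(FakeBrowser.opened)  # 1: one browser for the whole run, not one per test
```

Registering quit with atexit is what keeps the "windows piling up" problem at bay when the shared browser outlives any single test.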

Investment or debt

I only want to be running high-value tests. Are there tests for every story and every path, no matter how important, or only for the critical user journeys and regression guards?

The same thing doing the same thing

Check you are not testing the same thing again and again. Are there too many tests covering the same behaviour? Is there value in thinning them out?

Check the set-up

I take a look for intensive set-up and repetitive tests: are they all doing the same set-up? Are they splitting a user journey into little steps, each doing a tonne of set-up? Could you change the set-up?
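One way to cut repeated set-up is to do the expensive work once and hand each test the cached result. A sketch, with shared_setup standing in for hypothetical slow work such as seeding a database or walking a login journey:

```python
setup_runs = 0   # counts how often the expensive work actually happens
_fixture = None

def shared_setup():
    """Run the expensive set-up once per run and cache the result."""
    global setup_runs, _fixture
    if _fixture is None:
        setup_runs += 1  # stands in for slow DB seeding or a login journey
        _fixture = {"user": "test-user", "catalogue": ["widget", "gadget"]}
    return _fixture

def test_search():
    data = shared_setup()
    assert "widget" in data["catalogue"]

def test_profile():
    data = shared_setup()
    assert data["user"] == "test-user"

test_search()
test_profile()
print(setup_runs)  # 1: both tests shared one round of set-up
```

The trade-off is that tests now share state, so this only suits set-up data that is read-only or cheap to reset between tests.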

Inventive testing

Are there sleeps or waits inserted into the screen/clicking code by well-meaning developers to get round a timing problem? There might also be other funky test or screen code, written to get tests passing, that doesn't pan out well.
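Fixed sleeps have to be sized for the worst case, so every run pays the full cost. A polling wait (the same idea behind Selenium's explicit waits) pays only for the time actually needed. A self-contained sketch, with page_loaded simulating a hypothetical page that becomes ready on its third check:

```python
import time

def wait_until(condition, timeout=2.0, poll=0.05):
    """Poll until condition() is truthy instead of sleeping a fixed time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate a page that becomes ready on the third check.
state = {"checks": 0, "ready": False}

def page_loaded():
    state["checks"] += 1
    if state["checks"] >= 3:
        state["ready"] = True
    return state["ready"]

start = time.monotonic()
wait_until(page_loaded)
elapsed = time.monotonic() - start
print("ready after %d checks in %.2fs" % (state["checks"], elapsed))
```

Compare that with a pessimistic time.sleep(5) guarding the same check: the poll returns in a fraction of a second as soon as the condition holds.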

Error cases & external sources

Are there some tests exercising error behaviour that have to sit through long waits or time-outs? Are there some components that don't render, or that are included from an external source which might time out slowly on firewalled boxes (adverts or XML feeds on web apps)?
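For those external includes, one option is to stub the external call under test so a firewalled box never sits through a network time-out. A sketch; fetch_feed, real_fetch_feed, and the STUB_EXTERNAL switch are hypothetical names:

```python
import os

def real_fetch_feed(url):
    # In production this would hit the network; on a firewalled CI box
    # it would hang until the socket time-out fires.
    raise RuntimeError("would hit the network: " + url)

def stub_fetch_feed(url):
    return "<feed/>"  # canned response, returns instantly

def fetch_feed(url):
    """Route feed fetches through a stub when tests ask for it."""
    if os.environ.get("STUB_EXTERNAL") == "1":
        return stub_fetch_feed(url)
    return real_fetch_feed(url)

os.environ["STUB_EXTERNAL"] = "1"   # test configuration
print(fetch_feed("http://ads.example.com/feed.xml"))
```

The same switch also makes failure cases cheap to test: point the stub at a canned error response instead of waiting for a real time-out.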

No need for sloooow?

I take a look at the tests themselves: is the app slow, and is that the real problem? Are the tests setting the app too big a problem for the level of testing you want?

Do you need that browser? (from Sam Newman)

Testing in a fake browser can be significantly faster than testing with a real one. You could also reduce test time by setting up and testing the view and its wiring without the full supporting stack for some tests.
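That second point can be as simple as calling the view code directly and asserting on its output, with no browser or server in the loop. A sketch; render_greeting is a hypothetical view function:

```python
def render_greeting(name):
    """A trivial view: produce the markup the page would show."""
    return "<h1>Hello, %s</h1>" % name

def test_greeting_renders_name():
    # Asserting on the view's output directly needs no Selenium, no
    # server, and runs in microseconds.
    html = render_greeting("Alice")
    assert html == "<h1>Hello, Alice</h1>"

test_greeting_renders_name()
print("view test passed")
```

You lose coverage of the browser and wiring, so keep a thin layer of real-browser journeys and push the detailed cases down to this level.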

What do you do?
