Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, occasionally winning awards for them. Educated at Oxford University and Imperial College London, he is a Principal of Gerrard Consulting Limited, the host of the UK Test Management Forum and a business coach for Enterprising Macclesfield.
CLOSING KEYNOTE: Rethinking Test Automation
Test (execution) automation has been a goal since the earliest programs were written. The mechanics of automated tests have evolved with the technology used to build software, but the fundamental problems of test automation have not changed: establishing a consistent environment, creating dependable and re-usable test data, handling genuine failures as well as false negatives (and positives), and managing tear-down and clean-up. These are well-understood challenges. Developers and testers battle with flaky environments, test frameworks, and buggy software under test in much the same way they always have. There is little debate about these technical or logistical matters. The use of unit test frameworks to test low-level components and integrations is well understood and usually highly effective. But where the user interface is graphical, and/or where tests of larger, integrated systems are required, test automation is more challenging. These tests tend to be longer, slower, and more complex, and consequently they are harder to write, debug, and maintain. All in all, long-winded, complex tests are flaky and far less efficient and economical. Two models dominate people’s thinking in this area – the four-quadrant model and the test automation pyramid. They have some value, but practitioners and managers need something better to guide their thinking.
This is the “state of automation”, and it has been for many years.
In this talk, Paul sets out a way of thinking about testing and test automation that helps to answer the strategic questions: What does test automation actually do for us? When and how is automation the right choice? How do we justify automation? Can automation replace testers? What new tools and skills do we need to implement automation in the future?