Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn't realize they could solve. In 2006, he became co-author (with James Bach) of Rapid Software Testing (RST), a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Since then, he has flown over a million miles to teach RST in 35 countries on six continents.
Michael has over 30 years of experience testing, developing, managing, and writing about software. For over 20 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company's flagship products and directed project and testing teams both in-house and around the world.
Due to the current coronavirus situation, the TASSQ Board has decided to continue with monthly free online events until further notice, in support of the QA community!
Machines pushing virtual buttons at inhuman rates can appear impressive. Someone who doesn’t look too closely, or who doesn’t understand what testers do, could see the flashing, flickering screen and believe, “Lo, there be testing!”
Automated output checking at the GUI level can be tricky. Products and services intended for human users are sometimes not so friendly to machines. It can take a lot of time to write code that successfully works around the unfriendliness—for a while. To show some kind of progress, testers sometimes install a simple check that examines one or two simple factors in the output on a screen. When the check returns a happy result, everyone breathes a sigh of relief, then moves on to the next screen. Yet this might not be the greatest idea, because the time and effort required to program automated GUI checks may displace the search for problems that matter more to people than to machines—and those problems are where the business risk lives.
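To make that point concrete, here is a minimal, entirely hypothetical sketch (the screen contents, field names, and the `shallow_check` function are invented for illustration): a check that examines only one or two simple factors in the output can happily report success while problems that would matter to a human user go unnoticed.

```python
# A hypothetical rendered screen, represented as a simple dictionary.
# All field names and values here are invented for illustration.
screen = {
    "title": "Order Confirmation",
    "status_code": 200,
    # Problems a human would spot immediately, but the check never examines:
    "total_label": "Total: $-1,305.00",  # a negative order total
    "button_text": "Confrim Order",      # a misspelled button label
}

def shallow_check(screen):
    """A typical 'happy' GUI check: look at one or two simple factors
    in the output and declare success if they seem right."""
    return screen["title"] == "Order Confirmation" and screen["status_code"] == 200

result = shallow_check(screen)
print("PASS" if result else "FAIL")  # the check passes...
# ...even though the screen shows a negative total and a garbled button.
```

The sigh of relief the paragraph above describes is the `PASS`; the negative total and the misspelled button are the kinds of problems that matter more to people than to machines.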
Why not use the power of tools to investigate the product? How about combining the power of the machine with the human capacity for recognizing new problems?
How can we use tools effectively to lower cost, increase test coverage, and find problems that matter? What help might we need to get the job done more quickly, and how do we ask for it? In this talk, Michael Bolton will examine these questions and present some answers to them.