How Automation Works with a Human-Centered Testing Approach
We are human. The technical side of the world may be within our collective power to figure out, but we still make decisions without knowing everything about it. While speaking with GeePaw Hill (@geepawhill on Twitter), I felt our capacity for acting on incomplete data suddenly come into focus with one of his comments (paraphrased): “People ask a technical question; they learn that I ascribe my wins to the heart.” This, to me, is the essence of decision making: that gut instinct that tells you when something is correct. And part of that is trust: how much are we willing to trust someone or something else to do tasks, and still be confident of the results?
Our reliance on our gut instincts has only been complicated by adding machines into the mix: we now have a way to ensure that a specific set of actions is repeated, very precisely, over and over. And we have seen that repetition both help us and fail in very visible ways.
There is, in some areas, a decided difference of opinion about the usefulness of having a machine check what another machine does. (With all the flurry of words, concepts, and demonstrations, this is what automated testing boils down to.) Only a human can catch some of the “That’s odd” errors, but a human will also introduce errors of their own when they have to redo the same action many times. How much control are you willing to release, and in what areas?
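To make that trade-off concrete, here is a minimal sketch in Python of "a machine checking what another machine does". The format_price function and its expected values are hypothetical, invented purely for illustration; the point is what the automated check can and cannot see.

```python
# A minimal sketch of automated testing: one piece of code checking another.
# format_price() and its expected values are hypothetical examples.

def format_price(amount_cents: int) -> str:
    """Render a price in cents as a dollar string, e.g. 1999 -> "$19.99"."""
    return f"${amount_cents // 100}.{amount_cents % 100:02d}"

def test_format_price() -> None:
    # The machine re-checks the same expectations precisely, run after run...
    assert format_price(1999) == "$19.99"
    assert format_price(5) == "$0.05"
    assert format_price(100) == "$1.00"
    # ...but it only catches what someone thought to encode. A human glancing
    # at the screen might still say "that's odd" about spacing, layout, or a
    # negative price that these checks never anticipated.

if __name__ == "__main__":
    test_format_price()
    print("All automated checks passed.")
```

The design choice is visible in miniature here: the machine is perfect at repetition but blind to anything outside its assertions, while the human is the reverse.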
The baseline for testing software is always human, but moving away from that point is fraught with decisions, some of which may have to change over time. And that is never a comfortable position for a business to be in. Making choices about what to test with an automated system starts with the heart, and then the brain supplies reasons for the heart's choice. This emotional component, I think, is the cause of much of the discussion: it isn't that the brain's reasons are unclear; rather, it's that someone doesn't trust the heart's choice “in their gut”.
I have, however, noticed the pendulum swing in a direction I personally agree with: there are places to use both manual and automated testing tools, and worse and better times to switch between them. No one, I hope, would advocate for the exclusive use of one or the other. From spotting an error in a display format to catching an inconsistent calculation or answer, any of these checks can be done in a way that is inefficient and that frustrates the company, or even the teams.
Getting the best possible product to the customer is the priority, and that is where the focus should be. Verifying the output with the tools you choose, being confident of the results, and being willing to change those tools when your customers are not pleased with your product is good business practice. Humans are the end users, in all current cases, and humans are tool users. Finding the balance between automated and manual tests for your product should be an ongoing goal, evaluated on a regular basis. Your head, your heart, and your instincts should be satisfied that this is the best that can be done within your constraints. Make sure that your automation and your human inputs are providing what you actually want to achieve, and are giving the information and results that you expect.