Testers, or as some call us, "QAs" (quality assurance), get a bad rap. We're often the bearers of bad news, and that can be interpreted as criticism of the developers or processes upstream. Here are some misperceptions I've come across in my career.
1. We like breaking your product
We don't like it. Actually, we LOVE it.
Let me rephrase that. It gives us a great deal of satisfaction to perform our role. I hope that UX, designers, developers, BAs, PMs and everyone else in the SDLC (software development lifecycle) get that feeling too. Too few of us can say we genuinely love our work, and NOT loving your work is a sign of something far deeper going wrong. Please don't begrudge us QAs for taking pride in our work and enjoying what we do.
Your product's code is often already broken. Testers are merely exposing that for the benefit of the employer and client so that the product becomes better, more robust and more reliable. Mindful testing is conducted without judgment of the skills of the people involved. How can producing a better, more robust product be seen as a bad thing? The issue seems to be the perception that exposing the breakage is a personal criticism. It simply isn't.
Yes, testers can make errors of tact, and people react differently. Everyone has a different way they prefer to be approached, and not everyone knows everyone else's preferred method. Some people like tickets/bug reports, some prefer email, some IM, and some prefer to just be left alone (not a good sign!). Find an approach that works for your team. Sometimes a single approach works for a department or company, sometimes it doesn't. I know developers who refuse point blank to work with JIRA, so how do they know which bugs to tackle? Maybe that isn't the right company for them. Testers can't fix that, but above all else, please remember that reporting defects isn't personal.
There is a HUGE difference between loving my work and revelling in breaking someone's work. That would be schadenfreude (revelling in someone else's misfortune). We don't want anyone to have bad days (we all have them) or to feel that we are criticising their abilities. We can all be misled by poor or ambiguous requirements and by assumptions (always a risk factor), and we all make genuine mistakes. No-one breaks their own code on purpose.
I once heard the phrase "QA - because pobody's nerfect". I liked it so much that I put it on my business card (and I love a bit of wordplay).
2. QA is to blame for late delivery
A project ideally has to deliver a combination of "on time, on budget, with quality". The triangle has also been described as "on time, within budget, within scope". This is the Project Management Triangle.
Anyone who has ever worked on a project has probably heard the follow-up joke, "pick two". If you're really in trouble and/or cynical, "pick one".
Something has to "give" in order to deliver on time. That may mean descoping something, or getting more resources (with budget implications). When you start a project, the risks are usually not apparent, and these will usually be uncovered along the way, whether by testers or anyone else.
The tester's role is to call out risk, so that it can be mitigated (fixed) or accepted (and usually not fixed) in order for the original deadline to stand.
I once worked on a project where I joined the team three weeks before the go-live date. There was no working back end, and the templated front end wasn't yet finished, let alone integrated. The deadline was set in stone, ready for submission for an award in the first week of the following month (i.e. in three weeks' time). Suffice it to say, that didn't happen. It went live in stages, page by page, story by story, over the following four months. Did it cost them the award? Yes. But was it ever fit for purpose within the timeframe? No. Did testers cause that? No. We, the testers, were the bearers of bad news, not the authors of it. It was too risky to go live with no completed core functionality. Had we had one or more core stories working, we could have said, "Story 1 is ready, stories 2-99 are not; now it's your (the client services manager's) call whether this is an acceptable compromise."
Presented with the weight of evidence and our level of confidence, we didn't go live with anything on the initial delivery date, but we could focus on core journeys and deliver them in incremental batches over the next few months, with lesser journeys thereafter. Did QA break it? No, and it would be hugely unfair to blame us. The initial scope was too ambitious, and everything was priority 1 (always a bad sign). Many requirements were not in scope but were suddenly "urgent" and "blockers", even ahead of the functionality already in development.
3. Testers are sabotaging the project by adding requirements
As a follow-on from point 2, it would be infantile to blame testers for calling out the risks. QA didn't break it; the process leading up to what QA uncovers is sometimes broken. It could be a missed requirement, or one that isn't covered in enough detail.
If the requirements are inaccurate, ambiguous or there was the infamous scope creep, then you cannot reasonably expect more output in the same timeframe. You got what you asked for in the timeframe you wanted it. You just didn’t ask for what you wanted. If you add or clarify requirements and don’t deprioritise other activities, delivery will take longer, more things will need testing, the chance of breakage is higher, and so the risk of late delivery is higher.
If you then persist with these additional and/or clarified requirements without amending the deadline, people end up working more hours, get tired, then they make mistakes, get burnt out, and/or despise you for it. People are also a process, and they, like a process, can break, and if that happens too often, they will leave. Fix the process and the people won’t break, and nor will your deadlines be at risk.
If testers (individually or as a team) are blamed for exposing deficiencies in process, or exposing and escalating risk, then there is a wider attitude and blame culture issue upstream. If this is happening consistently without learning from mistakes, I would leave (and I have done).
Testing can (and should) be involved from the early stages of a project to mitigate these risks so they are scoped and prioritised. Testing is not just an optional bolt-on at the end of a project.
Testers should be able to ask questions, as early in a project as possible, free from intimidation, such as:
- Do we need to migrate data from the existing system?
- If yes, are we going to build a tool (or use an existing one) to do so, and prototype it so that we maintain formatting, images, etc.?
- Is the site going to be in more than one language, and if so, which ones are required for go-live?
- Which are the core journeys we should focus on in case we come to a tight timeline and have to deprioritise features? The MoSCoW method (Must have, Should have, Could have, Won't have) is useful in situations like this.
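As a minimal sketch of that last point (the story names here are purely illustrative, not from any real backlog), MoSCoW prioritisation can be as simple as tagging each backlog item so that, when the timeline tightens, everyone agrees which journeys testing and delivery focus on first:

```python
# A minimal, illustrative sketch of MoSCoW prioritisation.
# Each backlog item is tagged Must / Should / Could / Won't, so that when
# the deadline is at risk, the "Must" journeys are the agreed focus.

MOSCOW = {
    "User can log in": "Must",
    "User can check out a basket": "Must",
    "Site is available in French": "Should",
    "Dark mode theme": "Could",
    "Legacy data export": "Won't",
}

def core_journeys(backlog):
    """Return the items we cannot go live without."""
    return [story for story, priority in backlog.items() if priority == "Must"]

print(core_journeys(MOSCOW))
# ['User can log in', 'User can check out a basket']
```

The point isn't the code, of course; it's that the categorisation is written down and agreed before the crunch, so deprioritisation becomes a planned decision rather than a panic.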
Clarifying requirements comes in addition to the usual test activities such as functional and non-functional testing, documentation, etc. Without accurate requirements, each ambiguity could potentially block or delay project delivery. The earlier these are called out, the better, and the cheaper they will be to rectify.
"Failing to plan is planning to fail." Just don't shoot the messenger.
4. UAT (User acceptance testing): Testers delivered a deficient product
This is an age-old subject. The test team is (usually) small, often only one person. They can test some things extremely well, and sometimes many things in less depth. Edge cases can be missed, and changes along the way can (and do) introduce regressions which may or may not be found.
Usually the test environment is not hooked up to a live (or representative-of-live) system with live (or representative-of-live) data. How can a tester see what would happen in a live system without live data?
- UAT usually has more people available to conduct testing, and more people will find more issues. In UAT, a larger pool of users performs actions in more real scenarios than a tester could reasonably have thought of. An element of chaos from a wider pool of testers is good for testing. This has many names: "monkey testing", "dog-fooding" and many others. It's all just additional valuable input after a reasonable attempt by the testers.
- Testing is usually conducted against more representative data that wasn't available to testers.
- UAT has users with fresh eyes. Testers have been working on this for weeks/months/years. Often testers are so accustomed to the site or tool that small issues or lapses in functionality either go unnoticed or can be explained away, whereas fresh eyes can't rationalise or explain them, so they get called out, and rightly so, provided they aren't glitches in the test environment, e.g. timeout errors that "wouldn't happen in Production".
- API response times: Often there are API endpoints specifically for developers and testers which differ from those used in UAT. They may have different responses and/or response times, which testing in the QA environment won't have seen. Often this surfaces new requirements and fixes, for scenarios not originally envisaged.
If anything, UAT is MEANT to find these sorts of issues. We're all a team, trying to get the best tool we can delivered.
Imagine what UAT would be without testers having stopped dozens, hundreds or even thousands of other issues before it even arrived in UAT.
No amount of testing is ever "enough". UAT is just more, and that can only be a good thing.
5. Testers can assure quality. That's why it is called "quality assurance"
No. Testers can identify and suggest improvements to quality, but can't assure anyone of it. Note that testers don't improve it either; it's up to the developers to implement the improvements. There will always be something more: something unpredictable, some far-fetched edge case, servers that break down, or whatever. Sometimes bugs are found but deprioritised and released anyway. Standards change. Browsers change. Things that used to work sometimes no longer work. Flash is almost dead, and HTML5 arrived while IE8 (and, to some extent, later browsers) couldn't support it. There are many variables to consider, and more that may not even get considered.
Michael Bolton (not the singer) wrote a post about the mistitled role of "quality assurance" in 2010. I suggest you read it, and the comments too.
To quote Michael,
We are not the brains of the project. That is to say, we don't control it. Our role is to provide ESP—not extra-sensory perception, but extra sensory perception. We're extra eyes, ears, fingertips, noses, and taste buds for the programmers and the managers. We're extensions of their senses. At our best, we're like extremely sensitive and well-calibrated instruments—microscopes, telescopes, super-sensitive microphones, vernier calipers, mass spectrometers. Bomb-sniffing detectors[...] We help the programmers and the managers to see and hear and otherwise sense things that, in the limited time available to them, and in the mindset that they need to do their work, they might not be able to sense on their own.
All that we, testers, are doing is exposing risk where we can find it or foresee it, but it can never be entirely eliminated. We're in the de-risking game, we're not insurance policy salespeople.
You may have more anecdotes and myths to dispel that I have not mentioned. Here, at least, you have the benefit of some of mine.