As testers we can't remember everything, and as much as we would love to think so, we're not Superman, or any of his friends or relatives.
Here are some things I have forgotten to test from time to time.
This article is part 1 of a series. Read part 2 over here.
1. Accessibility testing

When I pick my battles, accessibility isn't always anywhere near the top of my list of things to do and test. However, here's some reasoning why it should be.
I was once interviewed by an agency specialising in accessibility. I was asked, "How do you impress on developers with no concept of accessibility why we should do it?" My answer was relatively simple. "Imagine you're in India with, say, 1.2bn people. Now imagine if 5% of users were disabled in a way that means they can't see your site. That's (after a quick calculation in my head) 60m users, or 3 New Delhis of people who can't use your site when they access it." That is a lot of people, and that's in 1 country, and 5% is a low estimate.
Here in the UK it is a legal requirement that a website provides accessibility for disabled users, though the law is untested in court. That doesn't mean, however, that you should skip the accessibility testing. If anything, as QA we should be fighting for accessibility's right to be treated equally alongside other requirements, not pushed to the sidelines, never to be seen again. You wouldn't do that to a disabled person, so why do it to the tools those people actually rely on?
Accessibility is also the one part of every project I've worked on that gets descoped or relegated to the backlog with a "we'll do that another time", a time that rarely arrives.
In case you're not familiar with them, there is a set of accessibility guidelines called "WCAG 2.0" (at the time of writing, though v2.1 is on the way), with 3 levels of conformance, each more onerous than the last. You should be aiming for at least "Level A" (the lowest standard).
To implement accessibility, HTML attributes called "ARIA tags" can be added to the HTML so that assistive technologies (e.g. screenreaders) can describe elements on a page as they receive focus, for example when the user tabs around or hovers over an element. ARIA stands for Accessible Rich Internet Applications. More information about ARIA is available here.
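As a small sketch (the element names and label text here are invented for illustration), ARIA attributes give a screenreader something meaningful to announce when an element takes focus:

```html
<!-- An icon-only button: without the label, a screenreader
     announces just "button" -->
<button aria-label="Search the site">
  <img src="search-icon.svg" alt="">
</button>

<!-- aria-expanded and aria-controls describe a collapsible
     menu's state and the region it toggles -->
<button aria-expanded="false" aria-controls="main-nav">Menu</button>
```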
Easy wins for accessibility are:
- All images have populated and meaningful ALT text, e.g. ALT="image_240x300.png" is not helpful, but ALT="Sunflowers by Van Gogh" is
- All CTAs and links have meaningful text that describes what the button/link does. Anything that says "Read more" or "Click here" is a poor experience for someone with a visual impairment. Think about what they would gain from clicking, e.g. "Read more about <the subject of the page you are going to>"
- All CTAs (buttons) have an appropriate ARIA attribute, such as aria-label
- Error messages appear next to the fields they relate to and are colour coded (usually red)
- All videos have subtitles, and their controls can be accessed
- No text on the site says things like "Click the green button". It's not helpful if you can't see the button, or can see it but are unable to distinguish green/red/blue or whichever colour is described.
- Input fields have an associated label, e.g. a <label> element or an aria-label attribute
- Text is high contrast compared to its background, e.g. not white on a light blue background. There are technical tests that can be run, but common sense is a good guide.
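A few of these easy wins sketched in markup form (the file names, links and copy are invented for illustration):

```html
<!-- Meaningful ALT text instead of the file name -->
<img src="sunflowers_240x300.png" alt="Sunflowers by Van Gogh">

<!-- Link text that describes the destination, not "Click here" -->
<a href="/delivery-options">Read more about our delivery options</a>

<!-- An input tied to its label, with the error message
     tied back to the field via aria-describedby -->
<label for="email">Email address</label>
<input type="email" id="email" aria-describedby="email-error">
<p id="email-error">Please enter a valid email address</p>
```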
I have used the NVDA screenreader in the past, but it gets very annoying very quickly, and I always have to relearn how to turn it off. Every action produces a Stephen Hawking-esque response, and it interrupts itself if you type quickly or perform many actions at a reasonable speed, even when you're not testing anything, e.g. while writing a JIRA ticket, clicking through NVDA's own menus, or reading the help page you just Googled. Often I just end up muting my computer entirely just... so... the... robot... voice... stops... annoying... me.
With NVDA I tab around the site (using the TAB key) to all of the clickable elements, or anything that can take the focus, and then activate a button using ENTER or SPACE. With other software you can click on a text block (even on non-clickable elements) and check that the text is recognised and that static images are described by their ALT text.
Please fight for accessibility to be on the table before launch. The site's SEO isn't helpful if the person who arrives at the site can't access the content in a meaningful way. You may not win the fight, but you'd be on the right side of website history (maybe, if anyone could be bothered to write such a thing).
WCAG guidelines: https://www.w3.org/TR/WCAG20/
WCAG 2.0 guidelines checklists:
More about ARIA: https://www.w3.org/TR/using-aria
2. SEO

So you're not an SEO expert. So what? Many places I've worked at had no SEO analysts, so SEO becomes something everyone, including QA, should consider. You don't have to be an expert, but you can toss a hand grenade into the mix and get some quick, useful results.
You're not so much creating and maintaining the SEO as ensuring that whatever SEO is done does what it should. That is to say, you're testing it, improving its quality, and calling out when it is missing.
If you request changes to the SEO, you also make the site more understandable from an accessibility perspective. If the site isn't logical and readable to a search engine, it isn't logical or understandable to a person using assistive technology either (and perhaps not to an able-bodied user, too!). Headings and logical text sequences aren't just for search engines. Nor are images: if a search engine doesn't know what an image represents (i.e. poor or no ALT text), a person who can't see it also can't interpret its purpose. If you improve the SEO, you improve the accessibility of the site too. 2 fixes for the price of 1!
Here is something I encountered recently on an assignment. We built a site for a mobile network provider, and the most common text on the homepage was "PDF" (because of the many PDF links in the global footer). The homepage copy never mentioned the name of the brand anywhere, nor described the product on offer (a SIM-only mobile phone deal without a contract). There were many pictures of SIM cards, and images contained the price, but no text said what the product actually was. Going only on the SEO, you'd think it was a site for a PDF reader. A huge, avoidable SEO miss that wasn't hard to fix. That's not a criticism of the site; we simply hadn't considered the copy from an SEO perspective.
Consider the meta description. This is the snippet of text a user sees when your site comes up in search engine results. Without it, your site or product's purpose won't be seen.
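As a sketch (the brand and copy here are entirely invented):

```html
<head>
  <title>Acme Mobile - SIM-only deals with no contract</title>
  <!-- The meta description is what search engines typically show
       beneath the page title in results -->
  <meta name="description"
        content="SIM-only mobile deals from Acme Mobile. No contract, keep your number, cancel any time.">
</head>
```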
SEO isn't just about copy. It's about structure and layout too.
A site I worked on had an H1 tag that didn't talk about the product or brand. It was cheery and playful with words, but not optimised for SEO at all. If something is the most important thing on the page, such as a strapline or product name, it should be an H1.
Other things I encountered were:
- No H1 tag on some pages
- More than one H1 tag on a page, no H2s, many H3s, many H4s, etc.
- H1/H2/H3/H4/H5 tags in no particular sequence down a page
What we want is only one H1, a small number of H2s, and as many H3s/H4s/H5s as are necessary. They should run logically down the page, just like code indentation: the H1 at the top before any other H tags, then H2s, with H3s nested under each H2, H4s nested under H3s, and H5s nested under H4s. Stylistically I don't mind how they look, but from an SEO perspective they should nest logically and run sequentially down the page.
Visual example of an expected sequence:
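Something like the following, with indentation added purely to show the nesting (the headings themselves are invented):

```html
<h1>SIM-only mobile deals</h1>
  <h2>Why go SIM-only?</h2>
    <h3>No contract</h3>
    <h3>Keep your number</h3>
  <h2>Our plans</h2>
    <h3>The 10GB plan</h3>
      <h4>What's included</h4>
```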
Made-up example of poor H-tag sequencing:
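For instance (again invented), headings used purely for their visual size, with no logical order:

```html
<h3>Hello sunshine!</h3>
<h1>Welcome</h1>
<h4>Our plans</h4>
<h1>Great coverage</h1>
<h2>Why choose us?</h2>
```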
So: think about the content on the page, the message in the meta description, and the sequence of the H tags on each page.
3. Cross browser testing
How could you possibly forget cross browser testing? Well, maybe you didn't forget about it, but here's how you can make life a bit easier... by choosing what to strategically "forget" to test!
You can spend a very long time doing cross browser testing for devices that realistically have almost no users, so that isn't a good use of your (and therefore your company's) time.
On one project, we found out that over 60% of our users were on iOS Safari, and over 30% were on PC Chrome. That was 90% or more of our users on 2 browsers. With usage stats like that, did we really want to invest the time on IE11 and Edge, Firefox and Mac Safari, or even Android, especially since IE11 needs a lot of polyfills to look vaguely passable?
Maybe you will choose to do cross browser testing, or maybe you'll skip it, but it needs to be an informed decision, backed up by usage stats if you have them.
If you are going to agree to cross browser testing, perhaps you should rank the browsers in groupings. Use stats to make your case. Of course, if your client uses, for example, IE11, you may have to prioritise it even if users would rarely use that browser in the real world.
Stats to support your argument can be found at http://gs.statcounter.com/browser-market-share (which you can refine by country).
Try grouping browsers and devices in batches. Move browsers and devices in or out of groups as needed, but only focus on the As and maybe Bs for go-live.
- Group A (must be tested as a priority): Chrome (PC) and iOS Safari
- Group B (should be tested but not a priority): Android Chrome, Samsung native browser, Firefox (PC), Mac Chrome, Mac Safari, Mac Firefox
- Group C (might get tested if time and budget allows): IE11, Edge, Opera etc.
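If the build uses tooling that reads a Browserslist config (e.g. Autoprefixer or Babel), a grouping like this can even be encoded there, so the same priorities drive which polyfills and prefixes get generated. The queries below are a sketch, with thresholds invented for illustration:

```text
# .browserslistrc (illustrative)
last 2 Chrome versions
iOS >= 12
> 1% in GB
not ie 11
```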
Here’s an example of where a minor browser can become a major blockage. I worked on a job where IE11 was the main browser requirement, as that was what the client used. The live (or test) API data was only available within the client’s network. They provided a locked-down PC with access to their network via VPN, but QA found that the machine only had IE10, and IT had locked out all upgrade paths. That was frustrating, as IE10 wasn’t even supported by Microsoft any more and wasn’t in the support matrix. We either had to go live without having tested the API feed (since we didn’t have access to it) or wait until the PC had been returned to the client for upgrading. My contract ended before I found out what happened, but suffice to say, we could not test with the expected browser and API data combination provided. The project was delayed because QA had called out the risks, as well we should have. Presumably the blockage was rectified and the site eventually went live as a more robust product.
That's part 1 of the things that I often park or forget when conducting testing. Stay tuned for more things that we as testers should remember but sometimes don’t.
There are likely to be many other topics that you would include instead of, or in addition to, these, and that's fine. Either way it's food for thought, and hopefully next time you'll remember (or decide what to forget) too!