Most test tools are easy enough to start up. Download a driver, call a code library,
new() up a browser or a phone simulator, feed it a web page, and have the tool start clicking. Three days, three weeks, or three months later, someone will ask why you didn't catch a bug that, as it turns out, only exists in Firefox. Or Safari. Or perhaps on a particular version of the iPhone. Management says no problem, just re-run the tests on all the pertinent browsers.
First question: Which browsers are "pertinent"? Second, does our tool even support them? Third, how are we ever going to get this to work?
Today we'll find answers to those questions, in the order that makes sense.
First, pick the right tool
This one seems obvious in hindsight. Some test tools support all the popular browsers; others support … less. Some can run fast and headless, while others can only drive a full, visible browser. By headless we mean the computer does not actually render the browser's graphics on the screen. That makes the run faster, allows tests to run on "machines" that have no display at all, such as Docker containers, and lets the user keep working while tests run instead of watching a screen cluttered with executing tests. On the other hand, without graphics, test failures can be harder to debug and re-run.
The same is true for native mobile applications. Appium, for example, can drive both iOS and Android. Even if the user interface varies slightly between the two, that can be handled with an object repository, which is essentially a lookup table: the code calls find_element with the locator string that is correct for the device currently running. That's a strategy for multi-platform support we'll discuss later.
The point for now is: either pick the right tool from the start, write the tests twice, or let the platform go untested.
Part of that will be picking what platforms to run.
Now pick the platforms (mobile devices, browsers, versions) to test on
There is an intricate dance here, as not all tools support all browsers. With less than 6% market share for Edge and barely 2% for Internet Explorer, you could decide simply not to support unpopular browsers and form factors. What matters more than the global pattern, though, is what your customers actually use. For example, testing for one luxury brand, we found that nearly all sales came through Safari on iOS. As it turned out, non-technical users who could afford a $700 coat adopted that platform almost exclusively. The people willing to purchase the phone were slightly different from the general browsing population a simple log search would suggest.
Before deciding what to test and support, take a look at the bugs that come in. If none of the defects that matter are unique to a platform, you might not need to test against it. Another option is "we fix blocker bugs on that platform but we don't develop test tools against it."
Finally, when it comes to multi-platform, consider that the time to run tests is going to multiply by the number of browsers. Consider whether the tool will need to run tests in parallel, especially on a commercial grid such as Sauce Labs or BrowserStack. The commercial tool vendors can provide a variety of browsers, mobile simulators, and real devices. Even then, testing on every combination can lead to management and measurement overhead.
Still, it does not seem right to have the test tool dictate which browsers are supported. Select the right tool, or combination of tools, to start things out on the right foot. Realistically, this might be a lesson for the next project, or your next company … or the mobile version of your application.
Once this process is over, you should know what browsers are pertinent and have a tool that can test them. Now to talk about how.
How to run tests multi-platform
Some patterns have evolved since Jason Huggins released Selenium as open source in 2004. One is to have a default browser but accept another through the command line. The code below is a sample of using Ruby's OptionParser library to get the browser name at run time. It is part of a complete, working program on GitHub that selects and launches the browser type at runtime.
```ruby
require 'optparse'

browser = nil # must start as nil, not 0: 0 is truthy in Ruby, so ||= below would never fire
OptionParser.new do |parser|
  parser.on("-b BROWSER", "type browser name: ff, safari or chrome") do |launchthis|
    browser = launchthis
    puts "browser is #{browser}"
  end
end.parse!

# If browser is not defined it'll be chrome
browser ||= "chrome"
```
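Once the browser name is parsed, it still has to become a running driver. Here is a minimal sketch, assuming the short names from the example above and the selenium-webdriver gem's `Selenium::WebDriver.for` factory; the mapping itself runs without any browser installed:

```ruby
# Map the short command-line names to the symbols Selenium's Ruby
# bindings expect; fail early on a name we do not support.
BROWSER_SYMBOLS = {
  "ff"     => :firefox,
  "safari" => :safari,
  "chrome" => :chrome
}.freeze

def browser_symbol(name)
  BROWSER_SYMBOLS.fetch(name) do
    raise ArgumentError, "unsupported browser: #{name}"
  end
end

# With the selenium-webdriver gem installed, launching would look like:
#   driver = Selenium::WebDriver.for(browser_symbol(browser))
```

Failing loudly on an unknown name beats silently falling back to a default, which can hide a misconfigured CI job for weeks.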
Launching the right browser is only part of the strategy. There are a few more pieces to consider. The common strategy is to separate functionality into tests that run in five seconds to perhaps one minute. The architect will create a mechanism to run sets of tests at a time. The simplest way to do that is separate text files that contain the names of the tests. The ability to "tag" a test, then run sets of tests by tag, makes it possible to build all kinds of suites. Assuming that functionality exists, here are a few things to consider for multi-platform tests.
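One hypothetical way to implement that tagging, sketched in Ruby; the test names and tags here are invented for illustration, and a real runner would load them from files or annotations rather than a constant:

```ruby
# Invented example data: each test name maps to its tags.
TESTS = {
  "login_happy_path" => [:smoke, :regression],
  "checkout_visa"    => [:regression, :payments],
  "search_unicode"   => [:regression],
  "mobile_login"     => [:smoke, :mobile]
}.freeze

# Return the names of all tests carrying the given tag,
# forming a suite that can be handed to the runner.
def suite_for(tag)
  TESTS.select { |_name, tags| tags.include?(tag) }.keys
end
```

A `:smoke` suite for every commit and a `:regression` suite overnight falls out of the same table with no extra bookkeeping.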
Locators and differences. Running a test on different browsers is, in general, easy. Follow the example above to change the string name of the browser on construction. Mobile applications, however, have an entirely different mapping of objects to locators. Here are a few examples:
```java
// Java example: find by ID using Selenium
driver.findElement(By.id("btnLogin")).click();

// Java iOS example: find a visible button whose value begins with "btnLogin"
String selector = "type == 'XCUIElementTypeButton' AND value BEGINSWITH[c] 'btnLogin' AND visible == 1";
driver.findElement(MobileBy.iOSNsPredicateString(selector));

// Java Android example: find by resource ID
driver.findElement(By.id("com.example.testapp:id/btnLogin"));
```
With a great deal of combined effort it might be possible to use similar identifiers, buttons, and workflows, so that one set of code works with the appropriate driver. In practice, it is more common to have a lookup table, or even a separate implementation of each business workflow per platform. That way the calling code is the same, but the object contains the workflow for that particular platform. When "login" is called, the login that is executed is specific to iOS, Android, or a classic web browser.
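A minimal sketch of such a lookup table in Ruby, reusing the locator strings from the Java examples above (the element names and table layout are invented for illustration): the test asks for the login button by name, and the table hands back the locator for whichever platform is running.

```ruby
# Invented object repository: one locator per element per platform.
LOCATORS = {
  web:     { login_button: "btnLogin" },
  android: { login_button: "com.example.testapp:id/btnLogin" },
  ios:     { login_button: "type == 'XCUIElementTypeButton' AND " \
                           "value BEGINSWITH[c] 'btnLogin' AND visible == 1" }
}.freeze

# fetch raises KeyError on an unknown platform or element,
# which surfaces repository gaps immediately.
def locator_for(platform, element)
  LOCATORS.fetch(platform).fetch(element)
end

# The calling test code stays the same on every platform:
#   driver.find_element(id: locator_for(platform, :login_button))
```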
Speed. Every time you add a browser or new platform, it will increase the time a test run takes. You could run the tests in parallel, but that will, at the very least, increase compute time. Also, if the tests catch more different bugs (the goal), then debug and fix time will go up. That calls for careful study of exactly which defects appear on which platforms, to run the minimum possible number of platform configurations. Another option is to run the dominant browser all the time, and the supported browsers overnight.
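The multiplication is worth making explicit. A back-of-the-envelope sketch, with made-up numbers:

```ruby
# Rough wall-clock estimate for a run: total test seconds,
# multiplied across platforms, divided among parallel workers.
def run_minutes(tests:, avg_seconds:, platforms:, workers:)
  (tests * avg_seconds * platforms) / (workers * 60.0)
end

# 300 tests at 30 seconds each, one platform, one worker: 2.5 hours.
# Add two more platforms and ten parallel workers: 45 minutes,
# at three times the compute cost.
```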
If most bugs break most platforms, one final option with a cloud partner is to rotate through the browsers, running one completely for each test run. This approach provides coverage, fast feedback, and a reasonable cost.
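Rotation can be as simple as picking the browser from the CI build number. A hypothetical sketch; the browser list and the source of the build number are assumptions:

```ruby
BROWSERS = %w[chrome firefox safari].freeze

# Cycle through the list, one browser per CI run; build_number
# would come from the CI system, e.g. ENV["BUILD_NUMBER"].to_i.
def browser_for_build(build_number)
  BROWSERS[build_number % BROWSERS.size]
end
```

Over three consecutive builds, every supported browser gets a full run, so a platform-specific regression goes undetected for at most a few runs rather than indefinitely.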
That leads to options that look like this:
| Option | Tradeoffs |
| --- | --- |
| Run all tests on one platform with every CI run | Less effort to manage; fastest possible results; could miss compatibility problems. Consider doing manual tests in a different browser. |
| Run all tests on one platform with every CI run, all browsers overnight | Catches compatibility problems at least overnight. Could come in to a great number of compatibility bugs; feedback delayed by a business day. |
| Rotate through platforms on each CI run | Highest amount of coverage. Could lead to delayed feedback in some instances, or confusing test results. |
| Run all the tests on some limited number of platforms on every CI run | Increases the coverage. Question of balance: how much is enough? |
The final choice of exactly how to implement multiple browser support for test automation is yours. Today, we just tried to give you some things to consider.
Where to go for more
Once tests are running multi-platform, there will be other problems, like how to map tests to a specific test server on a specific branch, how to manage the test data, and so on. Andrew Knight, the "Automation Panda", has been doing some pioneering work in this area to create a pattern language for testing. That language makes it possible to debate strategy and make decisions with clearly enumerated tradeoffs and without confusion. Andy's talk, "Managing the Test Data Nightmare", has not only slides on SlideShare, but video from his recent live presentation at SauceCon.