Anastasia

Automated vs manual cross-browser testing

In a perfect world, there would be only one browser and one device on which all programs run. In the real world, there are many browsers and platforms, each with its own requirements, which is why we need cross-browser testing to make sure a product stays usable for all users.

Cross-browser testing is a type of software testing whose primary goal is to confirm that an application works correctly across different browsers and browser/OS combinations. Professionals recommend running cross-browser tests after the system has already been checked for defects by other types of testing. Only then can you be confident that any incorrect behavior you uncover is caused by browser-specific quirks rather than by defects missed at earlier stages.

Usually, clients decide which web browsers a product should target. But as a QA engineer, you should analyze the product and recommend the best options to the client. According to Statcounter, the most popular web browser worldwide is Chrome with about 64% of users, followed by Safari (19%) and Firefox (almost 4%).

You may face four big challenges when conducting cross-browser testing:

  1. You can't test all combinations. Browsers depend on the operating systems on which they are installed. We deal with different OS versions, 32-bit and 64-bit processors, update levels, and so on. There are thousands of browser/OS combinations, and their number keeps growing as new browsers and OS versions are released. It is simply impossible to test the app on all of them.

  2. Auto-updates. Browsers no longer require users to download updates manually; updating happens automatically, often without the user's attention or interaction. Browsers update frequently, roughly every eight weeks, and each one follows its own schedule. A new browser update can introduce bugs or incorrect behavior in the product under test.

  3. Automation is hard. The two previous challenges can be addressed with automation, but it isn't easy to implement. First, most automation tools have limited capabilities. Second, writing automation code and preparing test cases takes deep knowledge and solid experience.

  4. Failures in browsers. Some browsers have bugs or incorrect implementations of new features, which can affect the websites or web apps under test.

There are two classic techniques applied to cross-browser testing.

Automation testing

Automation testing is an approach to software testing that relies on automated tools, scripts, and algorithms instead of manual checks.
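
The article doesn't tie itself to a specific tool, so purely as a minimal sketch of what an automated cross-browser check can look like, here is one way to set it up with Playwright (one popular option). The page URL and the #login-form selector are hypothetical.

```typescript
// playwright.config.ts -- run every spec against three browser engines
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox',  use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit',   use: { ...devices['Desktop Safari'] } },
  ],
});
```

```typescript
// login.spec.ts -- the same test is executed once per configured browser
import { test, expect } from '@playwright/test';

test('login form renders in every configured browser', async ({ page }) => {
  await page.goto('https://example.com/login');             // hypothetical URL
  await expect(page.locator('#login-form')).toBeVisible();  // hypothetical selector
});
```

Written once, this test runs three times, once per project, which is exactly the kind of repetition automation handles well.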

During automation testing, you can face many challenges such as:

  • Incorrect results. Sometimes a test run produces a false positive: it reports a problem even though there are no issues in the code. Such results mislead QA engineers, who then waste time hunting for nonexistent errors. The opposite, a false negative, happens when the system does have a failure but the automated checks don't catch it. This situation is more dangerous, because missed failures can cause new ones.

  • Wrong locators. Sometimes testers assign the wrong ID to a web element, or the ID is missing altogether. Automation scripts then can't find the element they need, which leads to failures (see the locator sketch after this list).

  • Automation in the cloud. One drawback of automation testing is that scripts normally run against browsers installed on your own machine, so you would need to install hundreds of browser versions locally, which is impractical. The solution is to use cloud platforms that can host up to 2,000 browsers (see the cloud sketch after this list).

  • What to automate. Many QA engineers don't understand which test cases should be automated and which shouldn't. Some try to automate as many test cases as possible; as a result, development costs grow while work efficiency doesn't. Others rely on luck and automate random test cases. Automation testing pays off only when you have a clear understanding of what is worth automating.
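
To make the locator problem concrete, here is a small sketch (Playwright again, with a hypothetical URL, ID, and button label). A locator tied to a wrong or missing id matches nothing and the test fails, while a role-based locator is more resilient to markup changes.

```typescript
import { test, expect } from '@playwright/test';

test('submit button can be located', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL

  // Brittle: this only works if the developer actually set id="submit-order".
  // If the ID is missing or renamed, the locator matches nothing and the test fails.
  await expect(page.locator('#submit-order')).toBeVisible();

  // More resilient: find the element by its accessible role and visible label.
  await expect(page.getByRole('button', { name: 'Place order' })).toBeVisible();
});
```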
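
And for the cloud option, instead of installing browsers locally, the script attaches to a browser hosted by a remote platform. The WebSocket endpoint below is hypothetical; real cloud vendors document their own connection URLs and capability parameters.

```typescript
import { chromium } from 'playwright';

async function runOnRemoteBrowser(): Promise<void> {
  // Attach to a browser running on a cloud grid instead of this machine.
  const browser = await chromium.connect('wss://cloud-grid.example.com/playwright'); // hypothetical endpoint
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
}

runOnRemoteBrowser().catch(console.error);
```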

Despite these challenges, automating cross-browser testing is a robust way to increase the speed, volume, and efficiency of your work. It brings several benefits to the testing process.

  1. You can use it as a form of integration testing. It can help reveal problems that were missed during unit testing, for example failures in complex interactions between code components, or breaking API changes that cause the code to fail.

  2. High efficiency of repetitive tests. Cross-browser test cases are typically repeated again and again for different browsers and operating systems, and the task only grows as the project expands. To avoid wasting testers' time and effort, repetitive tests should be automated. Just make sure the script covers all the areas under test.

  3. Faster regression testing. Regression testing ensures that new features don't break the existing system. For an already released application it is essential, because it lets you ship new functionality faster. Done manually, it can be exhausting and slow, and that delay can hurt your standing with users, who may switch to competitors that already offer the new features. With automation, you can shrink regression testing from weeks to a couple of hours (see the regression sketch after this list).

  4. Better test accuracy. Monotonous work tires engineers and eats up time, and tests that involve large amounts of data are more likely to produce incorrect results because of human error. Automation helps you avoid these problems: engineers can leave the monotonous work to algorithms and focus on the primary goals of the project. Automation tools can also generate reports that make it easier to write new test cases and analyze the situation.
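
As an illustration of the regression point, one common pattern (sketched with Playwright; the page, field names, and expected total are hypothetical) is to tag regression cases in the test title and let CI re-run them against every configured browser after each change, for example with `npx playwright test --grep @regression`.

```typescript
import { test, expect } from '@playwright/test';

test('applying a coupon recalculates the order total @regression', async ({ page }) => {
  await page.goto('https://example.com/cart');          // hypothetical URL
  await page.getByLabel('Coupon code').fill('SAVE10');  // hypothetical field and code
  await page.getByRole('button', { name: 'Apply' }).click();
  await expect(page.locator('#order-total')).toHaveText('$90.00'); // hypothetical expected value
});
```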

Manual testing

Manual testing is easier and cheaper to start with than automation, but it is time-consuming and limits testers' efficiency. Still, in cross-browser testing you sometimes can't do without it: there are areas where automation can't replace human judgment and perception.

Let's consider cases when manual testing is necessary:

  • To reveal unobvious failures. Finding some failures depends on the tester's experience and knowledge of the target system and browser. Some bugs also appear only under specific conditions that automated tests don't cover. Through exploratory testing, which is always manual, testers can find these atypical bugs and problems.

  • Check the visual environment. Automation can verify that visual elements are positioned correctly, but judging an app's appearance, the smoothness of animations, and overall usability is work for humans. Only through manual testing can you see how animations behave or how design components look in different browsers and under different conditions. HTML5 and CSS3 give developers new effects and elements, and some effects render even when JavaScript is disabled. But because HTML5 and CSS3 support is not uniform across browsers, these features may display incorrectly in some of them.

  • Check the UI. Design components must not only look good but also respond correctly. With manual functional testing, testers can check how fields, buttons, and forms behave in different browsers.

Summing up

We have considered two classic approaches to cross-browser testing, each with its own pros and cons. Automation speeds up the testing process, while manual testing reaches the goals in areas where automation is helpless.
The two are meant to complement each other and work side by side for the best results.

Top comments (1)

Cameron Pavey

An interesting point around Manual Testing being cheaper than automated. The upfront cost of automated testing is certainly higher, but I wonder how the comparison looks in the long term. Unfortunately, there is the ongoing cost of maintaining automated tests, but I would think that this is smaller in the long term than the comparatively high long term cost of manual testing, especially in larger systems.