

1. Web app test automation is crucial for companies

Companies asking themselves where they can save costs, for example by limiting the number of fat client licences they hold or by reducing rollout and maintenance costs, while still having to ensure that their applications run stably across a multitude of browsers and versions, should start thinking about web app test automation. This is because

  • web applications are becoming ever more important,
  • browsers are increasingly being used to deliver a wide range of applications, both on intranets and on the Internet, and
  • users are demanding an ever higher level of quality.

There are a variety of solutions on the market designed to overcome this exact challenge. Each of them has its own advantages and disadvantages, and cost is only one factor in deciding for or against a particular solution.

One option is to use a cloud-based testing platform such as Sauce Labs. This offers a way to test software on real devices and/or virtual machines with a variety of configuration options. With this type of test solution, fundamental infrastructure questions need to be clarified in addition to the costs: the application under test has to be made accessible to test devices that are not located in the local application environment, for example via SSL tunnels.

This approach to cross-browser and cross-device testing is a valid one, but there is a much easier way to do it.

To illustrate the problem, let’s imagine the following scenario: a new application is being created to replace an old portal in the banking industry. Like the old portal, the new one will be used by internal and external customers. It is not possible to specify which browsers are used. The frontend is developed with Angular 8, TypeScript and NodeJS, while the backend is developed with Java microservices and interfaces to other technologies. On top of this, two mobile apps are being developed, one for Android and the Google Play Store, the other for iOS and the App Store. Both serve as wrappers for the same website, but they also store data across sessions to improve usability.

Since this software is so important to the end users, it is essential that the application undergoes rigorous testing. In the worst case, errors can result in the bank or its customers losing money. It has already been decided that testing must not be done on devices outside the development landscape’s firewall, and costs must be kept as low as possible. Based on this scenario and the decisions made within it, there are three possible options:

1. Manual testing with real hardware
2. Automated testing with virtual machines or
3. Container-based testing

2. Virtual machines vs. containers

Before we define what we want to achieve in this scenario, we need to understand what containers are and how they make all of this possible. They are comparable to the better-known technology of virtual machines. Virtual machines (often called VMs) are primarily known as operating systems that run inside a program on a host operating system; VirtualBox from Oracle and VMware are examples of this type of software. It allows video gamers to run Windows XP for older games on their Windows 10 installation and lets developers run Linux on Windows for special requirements. For businesses, it provides a way to map different server instances onto a single piece of hardware in order to specialise software and roles. These are just a few examples of how virtual machines can be used. What these examples have in common is that the VMs are separate ‘computers’ running on a shared network with their host. They do not share critical OS-level software with the host and provide a complete environment for any type of use, but they require a considerable amount of storage (anywhere from 5 to 25 GB of disk space or more), which makes them harder to migrate from one disk to another.

Containers, on the other hand, are lightweight, can be used for virtually any task and can contain any type of software. However, they share important operating system files with the host. Container software is available from Docker, AWS and Google, to name just a few providers. It allows IT professionals to create an image of the software a system needs and then share this image with colleagues. Containers are created from these images, run for as long as their task requires and are then shut down. They range in size from a few KB to a few hundred MB.
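To make this image-to-container lifecycle a little more concrete, here is a minimal sketch in TypeScript using the dockerode client for Node.js. The library choice and the image name are assumptions made purely for illustration; the post itself does not prescribe any particular tooling.

// container_lifecycle.ts: minimal sketch of the image-to-container lifecycle,
// assuming the dockerode client and a locally available browser image
import Docker from 'dockerode';

const docker = new Docker(); // talks to the local Docker daemon via its default socket

async function runOnce(): Promise<void> {
  // A container is created from an image that is already present locally ...
  const container = await docker.createContainer({
    Image: 'selenoid/chrome:latest', // placeholder image name
  });

  await container.start();  // ... used for its task ...
  // (the actual task, for example serving a browser session, would happen here)
  await container.stop();   // ... and then shut down
  await container.remove(); // and discarded, freeing the space it occupied
}

runOnce().catch(console.error);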

3. Container-based testing with Selenoid and Docker

Selenoid is an open-source project that enables multiple Selenium tests to be run locally and in parallel. It allows tests to be run in any browser and any version for which an image has been created; Opera, Chrome, Firefox and others are available. Microsoft browsers present something of a problem because Docker is based on Linux, but they can still be run. Since Docker containers are used, there is no need to set up a local machine as a VM running a specific browser, nor does such a machine need to be present in the test network: the required image is downloaded on demand and removed when it is no longer needed. This is simplicity at its best. It removes the need to build complex Selenium Grid networks or use cloud-based Selenium Grid services, and it rules out the possibility of a browser being upgraded accidentally. Because Selenoid is open source, there are no licensing costs for the company, and since it runs on Linux, there are no operating system licensing costs for the machine on which the tests are run.
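To give an idea of how little wiring this requires, the following is a minimal sketch of a CodeceptJS configuration that sends its tests to a local Selenoid instance. The host name, application URL and browser version are placeholders, and the exact configuration keys may differ slightly between CodeceptJS versions.

// codecept.conf.ts: minimal sketch, assuming Selenoid listens on port 4444 of an internal host
export const config = {
  tests: './tests/**/*_test.ts',
  output: './output',
  helpers: {
    WebDriver: {
      url: 'https://portal.test.internal',   // application under test (placeholder)
      host: 'selenoid.test.internal',        // machine running Selenoid (placeholder)
      port: 4444,
      path: '/wd/hub',                       // Selenoid exposes the standard WebDriver endpoint here
      browser: 'chrome',
      desiredCapabilities: {
        browserVersion: '118.0',             // any version for which an image has been pulled
        'selenoid:options': {
          enableVNC: true,                   // allows the session to be watched live in the Selenoid UI
          enableVideo: false,
        },
      },
    },
  },
  name: 'portal-ui-tests',
};

Running Selenoid itself typically comes down to starting its container (or using the Aerokube configuration manager) and listing the desired browser images in a browsers.json file.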

The tests themselves are still Selenium tests that would most likely have been created anyway. Selenium makes it possible to write behaviour-based tests in programming languages such as Java, TypeScript/JavaScript (with Codecept.js), C#, Perl, Python, Ruby and many others. The tests can be further abstracted into human-readable text using Gherkin syntax, meaning that the authors of the tests don’t need programming skills; instead, they can rely on a few programmers to create abstract test steps that can be reused anywhere in the application.
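As an illustration of this division of labour, the following sketch shows what such abstract test steps could look like in CodeceptJS with its Gherkin integration enabled in the configuration. The feature wording, page paths and labels are invented for this example; only the Given/When/Then functions and the I.* calls reflect the actual CodeceptJS API.

// step_definitions/steps.ts: hypothetical step definitions behind Gherkin lines such as
// 'Given I am logged in as a retail customer', 'When I open the account overview',
// 'Then I see my current balance' (wording invented for this example)
const { I } = inject(); // inject() and Given/When/Then are provided as globals by CodeceptJS

Given('I am logged in as a retail customer', () => {
  I.amOnPage('/login');                           // paths, labels and data are placeholders
  I.fillField('Username', 'demo.customer');
  I.fillField('Password', 'not-a-real-password');
  I.click('Log in');
});

When('I open the account overview', () => {
  I.click('Accounts');
});

Then('I see my current balance', () => {
  I.see('Balance');
});

A test author only ever sees the Gherkin text; the TypeScript underneath is written and maintained by the test developers described in the next section.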

4. Example scenario with Selenoid

Let’s return to the scenario from the start of this post. A company wants to test its software internally without accessing external services. Costs play a role in this. The following plan is executed.

  • Developers will continue to write unit tests for all parts of the application (microservices, frontend, interfaces and so on). These unit tests are carried out as part of the CI/CD process.
  • A test infrastructure team is put together. This team is responsible for creating the test environment and has the following tasks:
    • Implement and configure Selenoid
    • Maintain the server on which Selenoid is running
    • Communicate with the infrastructure department
    • Support and train the testers

They are not responsible for the business logic or for performing the tests.

  • Test developers are responsible for writing the code that performs the steps in the browser. Every function these developers create is written to work across the entire website under test; a small sketch of such a shared function follows after this list. If need be, they inform the developers that certain changes need to be made to the website to enable these general functions (for example, adding an attribute or reporting that the user interface has not been implemented consistently). They support the test authors in using the existing functions and provide training where needed.
  • Test authors use Gherkin (Cucumber) to write tests that are executed via the UI. This has the advantage that the people writing the tests focus on the business logic and the user interface rather than on implementing code. These test documents also serve as living documentation of the application’s capabilities and behaviour.
  • Manual tests are kept to an absolute minimum. While they are specified and can be reused at a later date, they cannot be used for regression testing. Moreover, these tests are expensive because they have to be carried out by a person in real time.

Manual tests can be used in the development phase in an agile team to perform the checks required for a definition of done. As soon as the development ticket is closed, new tickets are opened for the development of automated tests.

  • Integration tests are carried out using industry standards and are automated.
  • Frontend tests
    • Are written with Selenium using the Codecept.js implementation, because it uses the same language as the primary UI (TypeScript).
    • Are executed in containers via Selenoid, which is installed on Linux machines.
    • Are described and tracked in JIRA. PDF files with details of each test step, including screenshots, are created and uploaded.
    • For the Windows-only browsers, there is also a Windows machine, which is switched off when nothing is being tested in order to keep costs down.
    • Between 5,000 and 10,000 tests are expected, which would take around three days to execute one after the other. Because the tests run in parallel, a full run actually takes between five and seven hours and is scheduled every night between 1 am and 8 am.
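As mentioned above, the test developers provide general-purpose functions that work across the entire website. The following is a small sketch of what such a shared function might look like when added to the CodeceptJS actor; the data-test attribute convention and the method names are assumptions made for this example, not part of the scenario itself.

// steps_file.ts: hypothetical custom actions shared by all test authors
// (the data-test attribute convention is an assumption for this example; it is
// the kind of change the test developers would ask the frontend team to make
// so that general-purpose functions work everywhere on the site)
export = function () {
  return actor({
    // Click any element that carries a stable data-test attribute.
    clickByTestId(testId: string) {
      this.click(`[data-test="${testId}"]`);
    },

    // Fill an input identified by its data-test attribute.
    fillByTestId(testId: string, value: string) {
      this.fillField(`[data-test="${testId}"]`, value);
    },
  });
};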

The reasons for preferring Selenoid over other Selenium Grid solutions, whether in-house or cloud-based, are:

  • The container-based architecture eliminates the overhead of building a complex network of test machines, in turn eliminating the cost of these machines, whether virtual or physical. Costs are reduced even further because the number of administrators required is also lower.
  • The container-based architecture increases the number of real-world testing opportunities by providing an easy way to increase the number of browsers and versions under which the tests are run.
  • The Selenoid developers have created a GUI that allows you to view tests being performed in real time. The test infrastructure team does not have to rely solely on screenshots in PDF files to diagnose potential problems.
  • Selenoid offers an in-house solution without expensive cloud-based subscriptions.
  • There is no need for complex changes to the infrastructure or SSH tunnels. What is behind the firewall stays behind the firewall.

5. Summary

There is no getting around the fact that software testing is a must. Web-based testing is becoming increasingly complex as the number of browsers in use increases and the number of versions that companies want and need to support continues to grow. However, complexity, cost and computing power must not get in the way of software testing. Tools such as Selenoid are becoming all the more important for keeping the cost of testing as low as possible. Saying ‘we have to run tests, but we have to limit the number of tests because it’s just too expensive’ should be a thing of the past and disappear from sprint retrospectives, because with Selenoid and container-based testing, cost is no longer an obstacle.

Would you like to learn more about exciting topics from the world of adesso? Then check out our latest blog posts.


Author Gregory Reeder

Gregory Reeder is a Senior Software Engineer in the Line of Business Insurance at adesso’s Hanover location. His focus has mainly been on web technologies, and he has worked on both intranet and customer-facing applications. In the process, he has gained experience in both Java and Microsoft technologies.
