Software test automation is the process of using dedicated tools to control and automate the execution of test cases. Actual execution results are automatically compared to expected results. This saves time and therefore money. In this article we build a deeper understanding of what test automation is, how to set up a test automation process, and how to choose the right test automation tool for your use case.
What is the difference between test automation and manual testing?
For smaller projects, testers usually do so-called “exploratory testing”.
With this informal approach, the tester does not follow a testing procedure, but instead explores the application’s user interface with as many of its functions as possible and uses information from previous tests to intuitively derive more tests.
This is done by clicking, dragging, entering text via keyboard, etc.
For bigger projects, it can be useful to have an executable plan on which parts of the application to test and in which order.
The plan can be a text document, a spreadsheet, or dedicated software for test management.
In contrast to manual testing, test automation instructs a computer to do that manual work automatically.
In most cases, the computer needs to be instructed on how to do this type of work. This is usually done with the help of programming languages such as Java or Python. But there are also tools that allow you to create automatable tests via a recorder or a graphical user interface.
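As a minimal illustration, here is what such a scripted test could look like in Python. The `login()` function is a hypothetical stand-in for the application under test; a real project would drive a UI or an API instead:

```python
# Hypothetical stand-in for the application logic under test.
def login(username, password):
    return username == "alice" and password == "secret"

# The script performs the steps a manual tester would do by hand
# and automatically compares actual results with expected results.
assert login("alice", "secret") is True, "valid credentials should succeed"
assert login("alice", "wrong") is False, "wrong password must be rejected"
```

The principle is the same for real tools: the script drives the application, then checks the actual outcome against the expected one.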
How to set up a test automation process?
Setting up a test automation process can require a lot of knowledge and is usually a matter of weeks rather than days. If you want to make sure you don’t miss anything, we recently posted a 6-step app testing checklist; make sure to check it out!
The complexity of the test automation setup usually correlates with the complexity of the software which needs to be tested.
Here is a top-down list of steps needed for setting up test automation:
- Specify which tests to automate
- Choose the right automation tool
- Develop the test cases
- Execute the test cases
- Fix the broken tests and repeat steps 4 and 5 until all tests pass
- Maintenance: Regularly run test cases and fix broken tests
Specify which tests to automate
Think about which parts of the software are crucial and MUST NOT fail.
For example: Think of the login. If users are not able to log in, they won’t be able to use any other part of the application. That makes the “log in” use case a very crucial part to test.
In contrast, if a username is displayed with the wrong font size, that might not be such a big deal.
The other important aspect is “ease of test automation”:
Some use cases are really hard to test automatically, so it will require huge efforts to develop automatic test cases for them.
For example: Think of a “forgot my password” button, which sends users a password reset link to their inbox. To test this, we would need to automate two completely separate apps: the app under test AND the email client. That makes things complicated.
So what test cases should be automated then?
A practical way is to create a spreadsheet with all use cases you can think of and then assign a score for “Importance” and “Ease of implementation”:
Here is an example:
| Use Case | Importance | Ease of Implementation | Score |
| --- | --- | --- | --- |
| Show profile image | 2 | 3 | 6 |
In the end, you can order your use cases by the product of both columns (see column “Score”). That gives you some idea of where to start with the implementation of your test cases.
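This ranking can be automated in a few lines of Python. The use cases and ratings below are made up for illustration; the scoring follows the example table, where 2 × 3 = 6, i.e. the two ratings are multiplied:

```python
# Hypothetical use cases with ratings from 1 (low) to 3 (high).
use_cases = [
    {"use_case": "Log in", "importance": 3, "ease": 3},
    {"use_case": "Show profile image", "importance": 2, "ease": 3},
    {"use_case": "Password reset e-mail", "importance": 3, "ease": 1},
]

# Score each use case and sort: the highest score comes first.
for uc in use_cases:
    uc["score"] = uc["importance"] * uc["ease"]
ranked = sorted(use_cases, key=lambda uc: uc["score"], reverse=True)

for uc in ranked:
    print(uc["use_case"], uc["score"])
```

Start automating at the top of the ranked list and work your way down.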
Choose the right Automation Tool
The selection of the right automation tool depends on the nature of the application under test. For example, Selenium is used for web-based applications, Repeato or Espresso for Android applications, and MS Coded UI for desktop applications. Tools come in all shapes and prices, and it can be difficult to find your way through the software jungle.
Developing the test cases
The following steps are involved in implementing the test automation:
- To make sure that tests will pass, it makes sense to do a manual test before test implementation
- For UI testing, make sure that the user interface of the application is reasonably stable. If it still changes a lot because it is under heavy development, this can cause a lot of additional work along the way, because test cases usually need to be edited whenever the UI changes.
- If you have somebody involved who has less technical knowledge but knows how to manually test the app, instruct that person to create a list of interactions for each test case. This can be easily done in a text document or a spreadsheet. With this test automation specification, it’s going to be easier for the developer to focus on the implementation of the tests.
- If you don’t have a developer at hand or the developers are busy with fixing bugs, take a look at no-code test automation tools such as Repeato, which can reduce the complexity and cost of test automation drastically.
Execution of test cases
In this phase, your test case implementations are executed. Depending on the tool you use, a more or less detailed report is going to be generated.
The execution can be done locally (on your own desktop computer) or you can embed your test automation setup into a so-called “Continuous Integration” flow. Continuous Integration takes care of running your tests automatically whenever it’s needed. This can be each time after a change in the codebase or before each release. You can read more about continuous integration here.
Sooner rather than later, features in the software under test will change and tests will (hopefully!) fail.
This will require you to adapt some of the tests to cover the new functionality.
Many companies make an initial effort to set up test automation but then fail to commit resources to test maintenance. This renders all initial efforts useless. Test automation is only useful when tests are run regularly, fixed, and refactored; they require just as much attention as the actual software product.
Which types of test automation are there?
While pretty much every type of test can be done manually, some types are typically automated and executed automatically.
Here is the list of testing types used in test automation:
Smoke testing describes an initial testing run to reveal critical failures. Its main goal is to ensure that major functionality works exactly as expected. The result of this testing is used to decide whether or not to proceed with further, more detailed testing.
It is also known as “Build Verification Testing”. Smoke Testing is usually used in Integration Testing, System Testing, and Acceptance Testing.
- It highlights integration issues.
- Smoke testing often can be implemented quickly because it focuses on the core functionality of the application
- It improves confidence that changes to the software have not affected the core functionality of the application.
Unit Testing is a type of software testing in which we test individual modules or components. It is done to ensure that each specific unit is working as expected. In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.
Unit Testing is a form of “White Box Testing”: the test has access to the smallest parts of the software. It can see inside the box, so to speak (in contrast to Black Box Testing, where the internals of the software are not visible to the test).
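A minimal unit test sketch using Python’s built-in `unittest` framework. The `apply_discount()` function is a made-up unit; the point is that each test exercises it in isolation:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # Expected result is computed independently of the implementation.
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```

Each test method is one self-validating check: it either passes or fails, with no log file to interpret.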
Advantages of Unit testing
- Easy to implement (compared to UI testing)
- You can implement it from the very start of the project. Even before the software is implemented (see Test-Driven Development “TDD”)
- Ensures quality of code
- Facilitates proper software architecture and component APIs
- Provides implicit documentation
- Simplifies debugging
Integration testing is a type of software testing in which individual units are combined and tested as a group. This type of testing mainly focuses on exposing bugs when units interact with each other during integration. Test drivers and test stubs are used to assist in Integration Testing.
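As a sketch of how a test stub stands in for a real dependency, here is an integration-style test using Python’s `unittest.mock`. `OrderService` and the payment gateway are hypothetical components:

```python
from unittest import mock

class OrderService:
    """Hypothetical component that depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # The interaction between the two units is what we test here.
        return "confirmed" if self.gateway.charge(amount) else "rejected"

# The stub replaces the real payment gateway for the test run.
stub_gateway = mock.Mock()
stub_gateway.charge.return_value = True

service = OrderService(stub_gateway)
assert service.place_order(49.99) == "confirmed"
stub_gateway.charge.assert_called_once_with(49.99)
```

The stub lets us verify the interaction between the units without depending on a real, slow, or unavailable external system.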
Advantages of integration testing
- The confidence level is higher (than with Unit Testing) because the interaction between components can be tested.
- Code coverage can be tracked quite easily
- Top-Down or Bottom-Up integration testing can be started at the early stage of the project and bugs can be caught early.
- Tests run faster than with end-to-end testing.
Functional testing is the type of testing that is carried out by testing against functional requirements. Features are tested by giving input values and examining output results. It ensures that functional requirements are satisfied as per the software requirement specification document. It is a result-driven approach, not a process-driven approach.
Functional testing is a form of Black Box Testing because the internal logic of the system being tested is most often not known by the tester.
Advantages of functional testing
- It provides high confidence in the full functionality of the application
- It ensures that requirements are met
- Like other testing types, it improves the quality of the product
End-to-end testing is applied after all components, including the user interface, database, etc. have been put together. Initially, end-to-end tests are typically executed manually. But the more mature the software becomes, the more benefits you will get from automating your end-to-end testing.
Advantages of end-to-end testing
- It provides the highest confidence in the full functionality of the application
- It ensures that requirements including the user interface are met
UI Testing deals with the User Interface of an application (GUI Testing with the Graphical User Interface). Often UI testing is performed after a good portion of an application is implemented. The reason is that the user interface of an application often requires data and the tester needs to be able to navigate between different parts of the app, so initially, there is little possibility to test.
This is the reason UI Testing is often mixed up with End-to-end testing: testing the UI of an almost finished application effectively tests it end to end.
Regression testing is the type of software testing that ensures that changes to the software have not affected it adversely. Bugfixes and new features should not impact the core functionality of the system. By running your regression tests regularly (test automation can be very handy in this case), a case where previously implemented features break because of new ones can be prevented.
Regression testing can be performed on each level of testing (Unit, Integration, System, or Acceptance) but it is most relevant during System Testing.
Advantages of regression testing
- It detects and eliminates bugs that could have detrimental effects on product functionality early in the deployment cycle.
- It helps to reduce the time, cost, and effort invested in resolving built-up defects.
- It accelerates the time-to-market of the software product.
Black Box Testing
Black box testing is the type of software testing in which the internal workings of the software being tested are unknown to the tester. This can be true for functional and non-functional tests.
Advantages of black box testing
- Tests are done from a user’s point of view and will help in exposing discrepancies in the specifications.
- The tester does not need to deal with the details of the implementation (programming language, framework, etc.)
- Tests can be conducted by staff independent from the developers, enabling an objective perspective and the avoidance of developer bias.
- Test cases can be designed as soon as the specifications are complete.
White Box Testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a software testing method that examines an application’s internal structures or workings, in contrast to black-box testing, which focuses mostly on functionality.
In white-box testing, test cases are created using an internal perspective of the system as well as programming skills, so white-box tests are usually written by developers.
The developer selects inputs to exercise code paths and identifies the expected outputs. In the software testing process, white-box testing can be used at the unit, integration, and system levels. Although white-box testing was conventionally thought of as being done at the unit level, it is now also commonly utilized for integration and system testing. During a system-level test, it can test paths within a unit, paths between units during integration, and paths between subsystems.
Though this technique of test design can find numerous faults or problems, it can overlook elements of the specification that have not been implemented or criteria that have not been met. However, when white-box testing is design-driven, that is, when it is driven only by agreed-upon specifications of how each component of the software should act (as in DO-178C and ISO 26262 processes), white-box test approaches can detect unimplemented or missing requirements.
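As a small white-box sketch in Python: the tester reads the code of a hypothetical `shipping_cost()` function and derives one input per path through its two decision points:

```python
def shipping_cost(weight_kg, express):
    """Hypothetical function with two decision points (four paths)."""
    base = 4.99 if weight_kg <= 2 else 9.99
    return round(base * 2, 2) if express else base

# One test input per code path, chosen by inspecting the implementation.
assert shipping_cost(1.0, express=False) == 4.99   # light, standard
assert shipping_cost(1.0, express=True) == 9.98    # light, express
assert shipping_cost(5.0, express=False) == 9.99   # heavy, standard
assert shipping_cost(5.0, express=True) == 19.98   # heavy, express
```

Note how the test inputs come from reading the code, not from the specification; a requirement the code never implements would produce no path and therefore no test.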
Non-functional testing is the process of evaluating a software program or system against its non-functional requirements, that is, how the system works rather than what specific behaviors it exhibits. Functional testing, on the other hand, tests against functional requirements that characterize the functionality of a system and its components. Because of the overlap in scope between distinct non-functional requirements, the names of several non-functional tests are frequently interchanged.
Software performance, for example, is a broad term that encompasses numerous specific requirements such as reliability and scalability.
Non-functional testing includes:
- Baseline testing
- Compliance testing
- Documentation testing
- Endurance testing or reliability testing
- Load testing
- Localization testing and Internationalization testing
- Performance testing
- Recovery testing
- Resilience testing
- Security testing
- Scalability testing
- Stress testing
- Usability testing
- Volume testing
- Accessibility testing
Keyword-driven testing separates the documentation of the test cases from the specification of how the test cases are executed. This divides the test creation process into two distinct phases: a design and development phase and an execution phase. The design phase includes requirements analysis and evaluation as well as data analysis, definition, and population.
This implies maintaining a list of keywords that are associated with the specific test steps performed on execution. Keyword-driven tests are usually documented using a table format. The first column holds the name of the keyword.
An example of keyword-driven testing
| Keyword | Parameter 1 | Parameter 2 | Parameter 3 |
| --- | --- | --- | --- |
| Input Userdata | clinton99 | Tom Clinton | szej_sz6 |
| Input Userdata | septocul | Lia Tomson | Xd83=8 |
As you can see, the same keyword can be used several times with different data. The implementation of the keyword, however, is kept in a different place.
There are high-level keywords that deal with a high-level view of the application. “Fill out the order form” could be such a keyword. And low-level keywords that deal with the implementation of the interactions. “Click in the address field” would be an example.
High-level keywords can be composed of low-level keywords.
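The separation between the keyword table and the keyword implementations can be sketched in Python as follows. The rows mirror the example table above; the column meanings (username, full name, password) are an assumption, and the implementation only records the data instead of driving a real UI:

```python
# Keyword table: keyword name in the first column, test data after it.
test_table = [
    ("Input Userdata", "clinton99", "Tom Clinton", "szej_sz6"),
    ("Input Userdata", "septocul", "Lia Tomson", "Xd83=8"),
]

filled_forms = []

def input_userdata(username, full_name, password):
    # Low-level keyword implementation; a real one would click
    # and type in the application's UI instead of recording data.
    filled_forms.append({"user": username, "name": full_name, "pw": password})

# Keywords are looked up by name at execution time.
keywords = {"Input Userdata": input_userdata}

for keyword, *data in test_table:
    keywords[keyword](*data)
```

Changing how a keyword is implemented (for example, after a UI redesign) only touches the function, not the table of test cases.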
Advantages of keyword-driven testing
- Test cases are easier to read
- Test cases are easier to understand for non-technical staff
- Changes in a program do often only require changes in low-level keywords but not in high-level keywords
What are the best Testing Practices?
The term FIRST describes a simple collection of criteria for a proper test automation setup:
- Fast: Tests should complete rapidly.
- Independent: Tests should be independent and isolated from one another.
- Repeatable: Every time you run a test, you should get the same results. External data providers or concurrency difficulties could cause intermittent failures.
- Self-validating: Create tests in an automated and self-validating way. Instead of relying on a programmer’s interpretation of a log file, the output should be either “pass” or “fail.”
- Timely: Depending on whether you do Unit Testing or UI Testing, try to develop your tests along with the features of the application. You can write Unit Tests even before you implement a feature. We call that test-driven development.
Like this article? There’s more where that came from.
- Testproject.io retired. What alternatives are there?
- Testing with Espresso: use, challenges, and alternatives
- Manual Testing vs Automation Testing: what is better and when to use it?
- How to solve “tool ‘xcodebuild’ requires Xcode, but active developer directory ‘/Library/Developer/CommandLineTools’ is a command line tools instance”
- How to report code coverage data from Flutter tests?