In this article, we will learn how to have seamless script execution planning and reporting for a successful automation project. Let’s get started.
So far, we have discussed several things related to test automation: how to start the automation process in your organization, how to select an automation tool, what types of frameworks we can develop, and how to create scripts in a maintainable manner.
Click here for a list of all the tutorials in this automation testing series.
Seamless Test Automation Script Execution
We will now discuss the execution plan of the scripts.
The most important thing about the execution plan is that it must be made before the creation of scripts. There are a few points you need to know before creating the scripts. For example, suppose the application under test is a web application.
You have to create some scripts to test the login feature of the website. At first glance, it seems simple to create 8 to 10 scripts covering the major aspects of the login feature. But once you try to execute those test cases, you will face many difficulties.
Before script development, it’s recommended to ask these questions.
Q #1) What is the execution environment?
This is the most important question. The environment here means where the test cases will be executed.
If you are testing a web application that is currently hosted in a local environment, you can create test cases keeping in mind that the website is hosted locally. But if this website is going to be hosted on a live server on the Internet in the future, we should know that beforehand.
Once we have that information, we can prepare for it by storing the editable values outside our scripts: for example, the URL of the website, the usernames, the passwords, the emails, and anything else that could change with a change of server.
We store these changeable values outside the script (in an Excel file, a database, or a configuration settings file) so that we can easily modify them without having to edit the script itself.
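As a minimal sketch of this idea, the snippet below keeps environment-specific values in an INI-style configuration file and reads them at start-up with Python's standard `configparser`. The section name, keys, and values are all illustrative, not from any specific framework:

```python
import configparser
import os
import tempfile

# A hypothetical settings file; section and key names are illustrative.
CONFIG_TEXT = """\
[environment]
base_url = http://localhost:8080
username = test_user
password = secret123
"""

def load_settings(path):
    """Read the editable values (URL, credentials, ...) from a config file."""
    parser = configparser.ConfigParser()
    parser.read(path)
    return dict(parser["environment"])

# Write the sample config to a temp file and load it back,
# the way a script would do at start-up.
path = os.path.join(tempfile.gettempdir(), "env_settings.ini")
with open(path, "w") as f:
    f.write(CONFIG_TEXT)

settings = load_settings(path)
print(settings["base_url"])
```

Moving the site to a live server then only requires editing the config file, not the scripts.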
Second, where will our tests be executed? Will they be executed on a VM (Virtual Machine) or on a physical machine? If you are going to use a VM, how many will be needed, and how much RAM and other processor requirements on each VM are required? All this needs to be answered.
The important thing to know is whether we can execute the test cases without installing the full tool on these VMs, because installing the tool on a VM means buying another license, which can be very expensive. For this purpose, some tools provide an execution engine: a stripped-down, less expensive version of the tool that is used only to execute test cases.
For example, Microsoft's Coded UI provides the "Test Agent" and TestComplete provides "TestExecute" for this purpose.
The advantage of using a VM is that you can start the execution, minimize the VM, and continue working on your physical machine, so the time the scripts take to execute can be spent creating new scripts.
The third point related to the environment is: on how many browsers should the scripts be executed? If we need to execute on Chrome, Firefox, and IE, we must make sure that the techniques we use to identify elements work on all three browsers. If we are using CSS techniques, we must make sure that those CSS attributes are supported on every browser for seamless execution.
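One common way to keep a suite browser-agnostic is a small factory that maps a browser name to the WebDriver that drives it. The sketch below only resolves names to Selenium driver class paths; it does not start a real browser, since that would require the browser and its driver binary to be installed. The name list and error handling are assumptions for illustration:

```python
# Map a human-readable browser name to the Selenium WebDriver class path
# that drives it (illustrative sketch; no browser is actually launched).
DRIVER_CLASSES = {
    "chrome": "selenium.webdriver.Chrome",
    "firefox": "selenium.webdriver.Firefox",
    "ie": "selenium.webdriver.Ie",
}

def resolve_driver(browser_name):
    """Resolve a browser name to its WebDriver class path, or fail loudly."""
    name = browser_name.strip().lower()
    if name not in DRIVER_CLASSES:
        raise ValueError(f"Unsupported browser: {browser_name}")
    return DRIVER_CLASSES[name]

for browser in ("Chrome", "Firefox", "IE"):
    print(browser, "->", resolve_driver(browser))
```

With a factory like this, the same test logic can be pointed at each target browser from a single configuration value.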
Q #2) When and by whom will these test cases be executed?
The “When” question seems easy to answer. “On every build”.
But that was not the intention of this question. Nowadays, companies are using Continuous Integration.
What is continuous integration? In simple words: as soon as a developer checks in their code, a routine runs that integrates all the parts of the application and releases the build automatically. In some cases, this build is deployed automatically and then tested automatically by the automated scripts.
The purpose of this question is to determine whether our automated scripts will be part of a continuous integration pipeline. If yes, we have to make sure that our scripts integrate properly with CI servers such as Jenkins, MSBuild, or Hudson.
Some companies use nightly builds. In such cases, we have to make sure that our scripts can be started and executed automatically without any human intervention.
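For illustration, a minimal declarative Jenkins pipeline that runs the suite nightly without human intervention could look like the sketch below. The stage name, the `pytest` command, and the file paths are assumptions; substitute your own test runner and report format:

```groovy
pipeline {
    agent any
    // Run every night unattended; 'H 2 * * *' picks a hashed minute
    // during the 2 AM hour to spread load across jobs.
    triggers { cron('H 2 * * *') }
    stages {
        stage('Run automated tests') {
            steps {
                // Hypothetical test command; swap in your runner of choice.
                sh 'pytest tests/ --junitxml=results.xml'
            }
        }
    }
    post {
        // Publish results so the CI server shows pass/fail per test case.
        always { junit 'results.xml' }
    }
}
```

The key point is that the trigger, execution, and report publishing all happen without anyone clicking a button.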
Some companies do not use any build server. They release the build manually; we then install the software on our systems and execute the automated test cases. In this scenario, the scripts are sometimes executed by the automation team.
In a few companies, manual testers run these scripts, or a dedicated resource is hired to execute scripts. It all depends on the scale of the application and the number of environments. If the number of environments is higher, I suggest hiring a dedicated resource for script execution.
Q #3) Should test cases be stopped on the first failure or should they continue execution?
Some applications are such that their flow is heavily dependent on previous actions.
Consider a situation where you are testing a payroll module from an ERP application. The tests are such that, in the first test case, you have to create an employee in the database. In the next test case, you have to print a salary check for that employee.
Now consider this situation:
If the employee is not created (due to a bug in the application), you cannot print a salary check for that employee. In this kind of scenario, if the first test case fails, there is no point in running the next one.
So before designing the test cases, we have to look at the dependencies between them. If they are dependent on each other, we should order them accordingly, stop the execution on the first failure, and report the bug.
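A stop-on-first-failure runner for dependent test cases can be sketched in a few lines. The payroll test names and the simulated bug below are illustrative, mirroring the employee example above:

```python
def run_dependent_suite(test_cases):
    """Run ordered, dependent test cases and stop at the first failure.

    `test_cases` is a list of (name, callable) pairs; a callable raises
    AssertionError when the test fails. All names here are illustrative.
    """
    results = []
    for name, test in test_cases:
        try:
            test()
            results.append((name, "Passed"))
        except AssertionError as err:
            results.append((name, f"Failed: {err}"))
            break  # later test cases depend on this one, so stop here
    return results

def create_employee():
    # Simulate the bug from the example: the employee is never created.
    raise AssertionError("employee record was not created")

def print_salary_check():
    pass  # would print the check for the employee created above

results = run_dependent_suite([
    ("Create employee", create_employee),
    ("Print salary check", print_salary_check),
])
print(results)  # the second test case never runs
```

Because the runner breaks out of the loop, the salary-check case is skipped rather than failing for a misleading reason.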
Some applications are such that they have a base state.
For example, a dashboard page from which all links can be opened. This dashboard page becomes the base state of our test cases: every test case starts from the base state and ends at the base state, so that the next test case can start without any problem. In this kind of scenario, we do not stop the execution on failures.
If any test case fails, we mark it as failed, return to the base state, and continue with the next test case (this is also called a recovery scenario). At the end of the execution, we have a report of how many test cases failed and how many passed.
We debug failed test cases and identify and report bugs. (The login page scenario, discussed at the beginning of the article, also contains a base state).
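The recovery-scenario pattern can be sketched as a runner that catches each failure, calls a recovery routine, and keeps going. The dashboard links and the simulated broken link below are illustrative:

```python
def run_with_recovery(test_cases, return_to_base_state):
    """Run every test case; on failure, recover to the base state and go on."""
    report = {"passed": [], "failed": []}
    for name, test in test_cases:
        try:
            test()
            report["passed"].append(name)
        except AssertionError:
            report["failed"].append(name)
            return_to_base_state()  # recovery scenario: back to the dashboard
    return report

def back_to_dashboard():
    pass  # placeholder: e.g. navigate the browser back to the dashboard URL

def open_profile():
    pass

def open_settings():
    raise AssertionError("settings link is broken")  # simulated failure

def open_reports():
    pass

report = run_with_recovery(
    [("Open profile", open_profile),
     ("Open settings", open_settings),
     ("Open reports", open_reports)],
    back_to_dashboard,
)
print(report)  # {'passed': ['Open profile', 'Open reports'], 'failed': ['Open settings']}
```

Unlike the dependent-suite case, one failure here does not block the remaining test cases, because each starts fresh from the base state.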
To carry out successful script development and seamless execution, automation engineers must ask the above questions and know the answers. A simple automated login script is easy to create, but execution becomes much tougher if we create scripts without considering the above points.
Reporting
Tips for Effective Reporting of Test Execution
If the scripts are great but the reporting is not, it is difficult to find bugs through automation.
Also Read => How to Report Test Execution Smartly for manual testing projects.
After completing script execution, clear and comprehensive reports assist us in reaching a conclusion.
The reporting formats are very different in each tool, but I will try to list the most important aspects that should be present in the report.
#1) Report for batch of scripts
If multiple test cases are included in a batch and that batch is executed, the important points to include in the report are:
- Total number of scripts.
- List of all test cases in tabular form.
- Test result (Passed or Failed) in front of every test case.
- Duration in front of every test case.
- Machine/environment name in front of every test case.
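The batch-level fields above can be rendered as a simple plain-text table. The field names, test case names, and machine names in this sketch are illustrative; a real framework would collect them during execution:

```python
def batch_summary(results):
    """Render the batch report fields as a plain-text table.

    `results` is a list of dicts; all field names are illustrative.
    """
    lines = [f"Total scripts: {len(results)}"]
    lines.append(f"{'Test Case':<25}{'Result':<10}{'Duration':<10}Machine")
    for r in results:
        mins, secs = divmod(r["duration_sec"], 60)
        duration = f"{mins:02d}:{secs:02d}"
        lines.append(f"{r['name']:<25}{r['status']:<10}{duration:<10}{r['machine']}")
    return "\n".join(lines)

demo = [
    {"name": "Login_ValidUser", "status": "Passed",
     "duration_sec": 75, "machine": "VM-01"},
    {"name": "Login_WrongPassword", "status": "Failed",
     "duration_sec": 42, "machine": "VM-01"},
]
print(batch_summary(demo))
```

Each row carries the result, duration, and machine name in front of the test case, matching the list above.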
#2) Detailed report for individual test cases
When you click on any test case, the detailed report for that individual test case should contain the following information:
- Name of the test case.
- ID of the test case, if it is connected to a test case repository.
- Duration of the test case (mm:ss).
- Status (Passed or Failed).
- Screenshots (on failure only, or at every step).
- Where it failed (the exact line number in the script where the failure occurred).
- Any other helpful logs written in the script, to be displayed in the report.
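Capturing the failing line number is straightforward with Python's `traceback` module. The sketch below builds a detailed report entry for a single test case; the field names, the test case ID, and the deliberately broken check are all illustrative:

```python
import sys
import time
import traceback

def run_and_report(name, case_id, test):
    """Build a detailed report entry for one test case (fields illustrative)."""
    entry = {"name": name, "id": case_id, "status": "Passed",
             "duration": None, "failed_at_line": None}
    start = time.monotonic()
    try:
        test()
    except AssertionError:
        entry["status"] = "Failed"
        # The last traceback frame holds the exact line that failed.
        frames = traceback.extract_tb(sys.exc_info()[2])
        entry["failed_at_line"] = frames[-1].lineno
    mins, secs = divmod(int(time.monotonic() - start), 60)
    entry["duration"] = f"{mins:02d}:{secs:02d}"
    return entry

def broken_check():
    assert 2 + 2 == 5, "arithmetic check failed"  # deliberately failing step

entry = run_and_report("Sample check", "TC-101", broken_check)
print(entry["status"], entry["failed_at_line"])
```

The same wrapper could also attach screenshots and custom log lines to the entry before it is written to the report.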
Useful Examples
An Example of a Batch Report (Selenium C# Web Driver and MSTest)
Example of a Detailed Report for a Passed Test Case
Example of Detailed Report for a Failed Test Case
Points To Remember
- Duration is an important factor. If test cases take too long to complete, they must be carefully debugged and optimized for speed. The quicker a test case completes, the better.
- Screenshots should only be taken on failure to speed up the execution.
- The Report should be exportable in a shareable format, such as a PDF or Excel File.
- Custom messages should be written for assertions and checkpoints so that when an assertion fails, we know exactly what went wrong just by looking at the report.
- A good practice is to log a line of text before every action. This log is displayed in the report, so even a layperson can tell on which action the test case failed.
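The last two points can be sketched together: a log line before every action, and a custom message on every assertion. The page title and login step below are stand-ins for real Selenium calls:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("automation")

def test_login():
    log.info("Opening the login page")   # one log line before every action
    page_title = "Login"                 # stand-in for reading the real page title
    assert page_title == "Login", "Login page did not open: unexpected title"

    log.info("Submitting valid credentials")
    logged_in = True                     # stand-in for the real login step
    assert logged_in, "Login failed for a valid user"

test_login()
```

If either assertion fires, the report shows both the last logged action and a message saying exactly what went wrong.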
Conclusion
Seamless execution and proper reporting are important factors in test automation.
I have tried my best to explain these factors in simple language, based on my experience. We faced a lot of difficulty when we did not have an execution plan in place, but on our other projects we put proper execution planning in place and it resulted in much less pain.
We would love to hear your point of view as well. Please post your opinion, feedback, and queries in the comments section below. Looking forward to hearing from you.