So far in this series, we have covered a lot of ground in test automation: how to start the automation process in your organization, how to select an automation tool, which types of frameworks we can develop, and how to create scripts in a maintainable manner.
Check here the list of all tutorials in this automation testing series.
We will now discuss the execution plan of scripts.
Execution Plan
Tips for a Successful Script Execution Plan:
The most important thing about an execution plan is that it has to be prepared before the scripts are created. There are a few points you need to know before you start creating scripts.
For example, consider the application under test is a web application. You have to create some scripts to test the login feature of the website.
At first glance, it seems simple to create 8 to 10 scripts that cover the major aspects of the login feature. But once you try to execute those test cases, you will face many difficulties. So it is better to ask a few questions before you start creating scripts.
The questions to ask include:
Q #1. What is the execution environment?
This is actually the most important question. The environment means where the test cases will be executed.
For example, suppose you are testing a web application that is hosted locally. You will create test cases keeping in mind that the website is hosted locally. But if, in the future, the website is to be hosted on a live server on the Internet, we should know that beforehand. Once we know it, we can prepare for it by keeping many values outside the script so they can be changed easily.
For example, the URL of the website, the usernames, the passwords, the emails, and many other things that could change when the server changes. We keep these changeable values outside of the script (in an Excel file, a database, or a configuration settings file) so that they can be changed easily without having to modify the script itself.
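As a rough sketch of this idea (not tied to any particular tool), the changeable values could live in a plain settings file that is read at the start of the run; the file name and keys below are purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static class TestSettings
{
    // Loads key=value pairs from a plain-text file kept outside the scripts, e.g.
    //   BaseUrl=http://localhost:8080
    //   Username=testuser
    //   Password=secret
    public static Dictionary<string, string> Load(string path = "settings.txt")
    {
        var settings = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            if (string.IsNullOrWhiteSpace(line) || !line.Contains("=")) continue;
            var parts = line.Split(new[] { '=' }, 2);
            settings[parts[0].Trim()] = parts[1].Trim();
        }
        return settings;
    }
}

// Usage inside a script: when the site moves from a local server to a live one,
// only settings.txt changes, not the script itself.
// var config = TestSettings.Load();
// driver.Navigate().GoToUrl(config["BaseUrl"]);
```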
Secondly, where will our tests be executed? Will they run on a VM (Virtual Machine) or on a physical machine? If you use VMs, how many are needed, and how much RAM and processing power does each one require? All of this needs to be answered.
The important thing to know is: can we execute the test cases without installing the tool on these VMs? Installing the full tool on every VM means buying another license, which can be very expensive. For this purpose, some tools provide an execution engine, a stripped-down and less expensive version of the tool that is only used to execute test cases.
For example, MS Coded UI offers “MS Test Agent” and TestComplete offers “TestExecute” for this purpose.
The advantage of using a VM is that you can start the execution, minimize the VM, and continue doing other work on your physical machine. The time taken by the scripts to execute can thus be used to create new scripts.
The third point related to the environment is: on how many browsers do you want the scripts to be executed? If we need to execute on Chrome, Firefox, and IE, we must make sure that the techniques we use to identify elements work on all three browsers. If we are using CSS selectors, we must make sure that those CSS attributes are supported in every browser for seamless execution.
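As an illustration, assuming Selenium WebDriver with C# (the same stack used for the report examples later in this article), a small factory can keep the browser choice in one place; the class and method names here are our own:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;

public static class DriverFactory
{
    // Creates a driver for the requested browser so the same scripts
    // can run against Chrome, Firefox and IE without modification.
    public static IWebDriver Create(string browser)
    {
        switch (browser.ToLowerInvariant())
        {
            case "chrome":  return new ChromeDriver();
            case "firefox": return new FirefoxDriver();
            case "ie":      return new InternetExplorerDriver();
            default: throw new ArgumentException("Unsupported browser: " + browser);
        }
    }
}

// Usage: the browser name can come from the same external settings file,
// so the execution environment decides where the scripts run.
// IWebDriver driver = DriverFactory.Create(config["Browser"]);
```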
Q #2. When and by whom will these test cases be executed?
The “When” question seems easy to answer. “On every build”.
But that is not the real intent of the question.
Nowadays, companies are using Continuous Integration.
What is continuous integration? In simple words: as soon as a developer checks in code, a routine is executed that integrates all the parts of the application and produces a build automatically. In some cases, this build is also deployed automatically and tested by automated scripts.
The purpose of this question is to determine whether our automated scripts will be part of continuous integration. If yes, we have to make sure that our scripts can integrate properly with CI servers such as Jenkins, MSBuild, and Hudson.
Some companies use nightly builds. In such cases, we have to make sure that our scripts can be started and executed automatically, without any human intervention.
Some companies do not use any build server. They release the build manually; from there, we install the software on our system and execute the automated test cases. In this scenario, the scripts are sometimes executed by the automation team.
In some companies, manual testers run these scripts, or a dedicated resource is hired to execute them. It all depends on the scale of the application and the number of environments. If the number of environments is high, I suggest hiring a dedicated resource for script execution.
Q #3. Should test cases be stopped on the first failure or should they continue execution?
Some applications are such that their flow is heavily dependent on previous actions.
For example, consider that you are testing the payroll module of an ERP application. In the first test case, you have to create an employee in the database. In the next test case, you have to print a salary check for that employee.
Now consider this situation:
If the employee is not created (due to a bug in the application), you cannot pay them any salary. So in this kind of scenario, if the first test case fails, there is no point in running the next one.
So before designing the test cases, we have to look at the dependencies between them. If they are dependent on each other, we should order them accordingly, stop the execution on the first failure, and report the bug.
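One simple way to express such a dependency, sketched here with MSTest, is a shared flag: if the prerequisite test did not pass, the dependent test is skipped rather than reported as a misleading failure. The payroll steps and method names are purely illustrative, and strict ordering itself is tool-specific (MSTest, for instance, offers ordered tests for that):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PayrollTests
{
    private static bool employeeCreated = false;

    [TestMethod]
    public void Test01_CreateEmployee()
    {
        // ... steps that create the employee in the application under test ...
        bool created = true; // hypothetical outcome of those steps
        Assert.IsTrue(created, "Employee was not created");
        employeeCreated = true; // only reached when the prerequisite passed
    }

    [TestMethod]
    public void Test02_PrintSalaryCheck()
    {
        if (!employeeCreated)
        {
            // Skip instead of reporting a failure with the wrong cause.
            Assert.Inconclusive("Skipped: employee creation failed, so no salary check can be printed.");
        }
        // ... steps that print the salary check for that employee ...
    }
}
```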
Some applications are such that they have a base state.
For example, a dashboard page from which all links can be opened. This dashboard page becomes the base state of our test cases: every test case starts from the base state and ends at the base state, so that the next test case can start without any problem. In this kind of scenario, we do not stop the execution on failures.
If any test case fails, we mark it as failed, return to the base state, and continue with the next test case (this is also called a recovery scenario). At the end of the execution, we have a report of how many test cases failed and how many passed. We then debug the failed test cases and identify and report bugs. (The login page scenario discussed at the beginning of this article also has a base state.)
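Sketched with MSTest and Selenium C#, the base state idea maps naturally onto setup and cleanup methods that run before and after every test, even when the test fails; the dashboard URL and link text below are hypothetical:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class DashboardTests
{
    private IWebDriver driver;

    [TestInitialize]
    public void GoToBaseState()
    {
        // Every test starts from the same base state: the dashboard page.
        driver = new ChromeDriver();
        driver.Navigate().GoToUrl("http://localhost/dashboard"); // hypothetical URL
    }

    [TestCleanup]
    public void ReturnToBaseState()
    {
        // Runs even when the test fails, so the next test is not affected
        // by whatever state the failed test left behind (the recovery scenario).
        driver.Quit();
    }

    [TestMethod]
    public void OpenReportsLink()
    {
        driver.FindElement(By.LinkText("Reports")).Click(); // hypothetical link text
        Assert.IsTrue(driver.Title.Contains("Reports"), "Reports page did not open from the dashboard");
    }
}
```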
Automation engineers must ask the above questions and know their answers in order to carry out successful script development and seamless execution. A simple automated login script is easy to create, but execution becomes a lot tougher if we create scripts without considering the points above.
Reporting
Tips for Effective Reporting of Test Execution:
If the scripts are great but the reporting is not, it is difficult to find bugs through automation.
See also => How to Report Test Execution Smartly for manual testing projects
Clear and comprehensive reports help us reach a conclusion after script execution is completed.
Reporting formats differ from tool to tool, but I will try to list the most important aspects that should be present in a report.
1) Report for a batch of scripts:
If multiple test cases are included in a batch and that batch is executed, then the important points to include in the report are the following (a small export sketch is shown after the list):
- Total Number of scripts.
- List of all test cases in tabular form
- Test Result (Passed or Failed in front of every test case)
- Duration (in front of every test case)
- Machine / Environment Name (in front of every test case)
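As a rough, tool-independent illustration of these fields, a batch summary can be collected into a simple structure and exported to a shareable CSV file (which also satisfies the "exportable report" tip further below); the class and file names are ours:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class BatchResultRow
{
    public string TestCase { get; set; }   // name of the test case
    public string Result { get; set; }     // Passed or Failed
    public TimeSpan Duration { get; set; } // time taken by the test case
    public string Machine { get; set; }    // machine / environment name
}

public static class BatchReport
{
    // Writes the batch summary to a CSV file that can be shared or opened in Excel.
    public static void Export(IEnumerable<BatchResultRow> rows, string path = "batch-report.csv")
    {
        var lines = new List<string> { "Test Case,Result,Duration,Machine" };
        lines.AddRange(rows.Select(r =>
            $"{r.TestCase},{r.Result},{r.Duration.ToString(@"mm\:ss")},{r.Machine}"));
        File.WriteAllLines(path, lines);
    }
}
```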
2) Detailed report for an individual test case:
When we click on any test case to see its details, the detailed report of an individual test case should contain the following information:
- Name of the test case
- The ID of the test case, if it is connected to a test case repository
- Duration of the test case (mm:ss)
- Status (Passed or Failed)
- Screenshots (only on failure, or on every step)
- Where it failed (the exact line number in the script where the failure occurred)
- Any other helpful logs that we have written in the script to be displayed in the report
Examples:
Example of Batch Report (Selenium C# WebDriver and MSTest):
Example of Detailed Report for a Passed Test Case:
Example of Detailed Report for a Failed Test Case
Some other points to remember:
- Duration is an important factor. If test cases take too long to complete, they must be carefully debugged and optimized for speed. The quicker a test case completes, the better.
- Screenshots should be taken on failure only, to speed up the execution.
- The report should be exportable in a shareable format such as a PDF or Excel file.
- Custom messages should be written on assertions and checkpoints so that, in case of a failed assertion, we know exactly what went wrong just by looking at the report.
- A good practice is to log a line of text before every action. This log is displayed in the report, so even a layperson can see on which action the test case failed (see the sketch after this list).
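Here is a minimal sketch of the last three tips combined, assuming Selenium C# with MSTest (as in the report examples above); the URL, element IDs, and credentials are hypothetical:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class LoginTests
{
    private IWebDriver driver;

    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void SetUp()
    {
        driver = new ChromeDriver();
    }

    [TestMethod]
    public void ValidUserCanLogIn()
    {
        Log("Opening the login page");
        driver.Navigate().GoToUrl("http://localhost/login"); // hypothetical URL

        Log("Entering the username and password");
        driver.FindElement(By.Id("username")).SendKeys("testuser"); // hypothetical ids
        driver.FindElement(By.Id("password")).SendKeys("secret");

        Log("Clicking the Login button");
        driver.FindElement(By.Id("login")).Click();

        // Custom message: the report alone tells us what went wrong.
        Assert.IsTrue(driver.Title.Contains("Dashboard"),
            "Login did not land on the dashboard - check the credentials or the login button");
    }

    [TestCleanup]
    public void CaptureEvidenceAndClose()
    {
        // Screenshot only on failure, so passing runs stay fast.
        if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
        {
            var file = TestContext.TestName + ".png";
            ((ITakesScreenshot)driver).GetScreenshot().SaveAsFile(file);
            TestContext.AddResultFile(file); // attach the screenshot to the test report
        }
        driver.Quit();
    }

    // A one-line log before every action shows up in the report,
    // so anyone can see which action the test case failed on.
    private void Log(string message) => TestContext.WriteLine(message);
}
```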
Conclusion
Seamless execution and proper reporting are among the most important factors in test automation.
I have tried my best to explain these factors in simple language, based on my own experience. We faced many difficulties when we did not have an execution plan in place, but in our next projects we put proper execution planning in place, and it resulted in much less pain afterwards.
We would love to hear your point of view as well. We will definitely learn something from your comments.
PREV Tutorial #5 | NEXT Tutorial #7