This is the concluding part of our “software testing training on a live project” series.
It covers defects, along with a few remaining topics that mark the completion of the Test Execution phase of the STLC.
In the previous article, while test execution was in progress, we encountered a situation where the expected result of a test case was not met. We also identified some unexpected behaviour during exploratory testing.
What happens when we encounter these deviations?
We obviously have to record and track these deviations to make sure they are handled and eventually fixed in the AUT (Application Under Test).
#1. These deviations are referred to as Defects/bugs/issues/incidents/errors/faults.
#2. All the following cases can be logged as defects:
#3. Defect recording is mostly done in Excel sheets or via a defect management tool. For information on how to handle defects via tools, try using the following links:
We will now see how to log the defects we encountered in the previous article in an Excel sheet. As always, choosing a standard format or template is important.
Typically, the following columns are a part of defect report:
These are some of the ‘must-have’ fields. This template can be expanded (e.g., to include the name of the tester who reported the issue) or contracted (e.g., to remove the module name) as needed.
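As a sketch, the ‘must-have’ fields above could also be captured programmatically when building a simple defect log. The field names and sample values below are illustrative assumptions, not a fixed standard; adapt them to your own template.

```python
import csv

# Illustrative 'must-have' defect report fields (names are assumptions,
# not a fixed standard -- adapt them to your own template).
DEFECT_FIELDS = [
    "Defect ID", "Module", "Summary", "Steps to Reproduce",
    "Expected Result", "Actual Result", "Severity", "Priority", "Status",
]

def write_defect_log(path, defects):
    """Write a list of defect dicts to a CSV file using the template fields."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=DEFECT_FIELDS)
        writer.writeheader()
        writer.writerows(defects)

# A hypothetical defect entry, loosely modeled on the deviation we found.
sample_defect = {
    "Defect ID": "D001",
    "Module": "Admin",
    "Summary": "Expected result not met during test execution",
    "Steps to Reproduce": "1. Log in  2. Open the Admin module  3. Observe the page",
    "Expected Result": "Page loads with the user list",
    "Actual Result": "An error message is displayed",
    "Severity": "Major",
    "Priority": "High",
    "Status": "New",
}
write_defect_log("defect_log.csv", [sample_defect])
```

A CSV produced this way opens directly in Excel, which keeps the log consistent with the spreadsheet-based approach described above.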
Following the above guidelines and using the template above, a sample Defect log/report could look like this:
Sample Defect Report for OrangeHRM Live project:
Below is the sample defect report created in the qTest test management tool: (Click on image to enlarge)
Defects are no good if we log them and keep them to ourselves. We have to assign them in the right order so that the concerned teams can act on them. The process (whom to assign and what order to follow) can also be found in the test plan document. It mostly looks similar to this: (Click on image to enlarge)
From the above process, it can be noted that bugs pass through different people and different decisions on their way from being identified to being fixed. To track this, and to establish transparency about exactly which state a certain bug is in, a “Status” field is used in the bug report. The entire process is referred to as the “Bug Life Cycle”. For more information on all the statuses and their meanings, please refer to this bug life cycle tutorial.
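As a sketch, the life cycle statuses and their allowed transitions can be modeled as a small state map. The statuses below are common illustrative values; the exact flow varies from team to team and should come from your own test plan.

```python
# A sketch of typical bug life cycle status transitions. The statuses
# and flow below are common illustrative values, not a universal standard.
TRANSITIONS = {
    "New":      ["Assigned", "Rejected", "Deferred", "Duplicate"],
    "Assigned": ["Open"],
    "Open":     ["Fixed"],
    "Fixed":    ["Retest"],
    "Retest":   ["Closed", "Reopened"],
    "Reopened": ["Assigned"],
}

def is_valid_transition(current, new):
    """Return True if moving a bug from `current` to `new` is allowed."""
    return new in TRANSITIONS.get(current, [])
```

A defect tracking tool typically enforces exactly this kind of rule, so a bug cannot jump from “Fixed” straight to “Closed” without a retest.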
For our live project, following the defect life cycle for Defect 1 would look like this:
This is where we employ what we call the “Exit Criteria”. These are pre-defined in the test plan document, simply in the form of a checklist that determines whether we conclude testing after Cycle 2 or go for one more cycle. Filled out with some hypothetical answers for the OrangeHRM project, it looks like the below:
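Programmatically, such an exit-criteria checklist boils down to a simple all-items-pass check. The criteria below are hypothetical examples, not the actual OrangeHRM test plan values.

```python
# A minimal sketch of evaluating exit criteria as a checklist. The
# criteria below are hypothetical examples, not actual test plan values.
exit_criteria = {
    "All planned test cases executed": True,
    "Test case pass rate meets the planned threshold": True,
    "No open critical/blocker defects": True,
    "All high-priority defects retested": True,
    "Test summary report prepared": True,
}

def can_sign_off(criteria):
    """Sign off only when every checklist item is satisfied."""
    return all(criteria.values())

print("Ready to sign off:", can_sign_off(exit_criteria))
```

If even one item is unmet, the answer flips to another test cycle rather than a sign-off, which mirrors how the filled-out checklist is read.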
Looking carefully at the above checklist, it mentions metrics and a sign-off that we have not discussed earlier. Let us talk about them now.
We have established that during the test execution phase, reports are sent out to all the other project team members to give a clear idea of what is happening in the QA execution phase. This information is important to everyone in order to validate the overall quality of the final product.
Imagine I report that 10 test cases passed or that 100 test cases were executed. These numbers are merely raw data and do not give a very good perspective on how things are going.
Metrics play a vital role in filling this gap. Metrics are, in simple words, intelligent numbers that the testing team collects and maintains. For example, saying that 90% of the test cases passed makes more sense than saying 150 test cases passed, doesn’t it?
Different kinds of metrics are collected during the test execution phase. Exactly which metrics are to be collected and maintained, and for what periods of time, can be found in the test plan document.
The following are the most commonly collected test metrics for most projects:
Check out the status report attached to this article to see how these metrics are used.
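As a sketch, execution metrics like these are simple percentages computed over the raw counts. The sample numbers below are made up for illustration and are not taken from the actual status report.

```python
# Illustrative computations for commonly collected execution metrics.
# The formulas are standard percentages; the sample counts are made up.
def pct(part, whole):
    """Percentage of `part` in `whole`, safe against division by zero."""
    return round(100.0 * part / whole, 2) if whole else 0.0

total_planned = 160   # hypothetical sample counts
executed      = 150
passed        = 135
failed        = 10
blocked       = 5

metrics = {
    "% Executed": pct(executed, total_planned),
    "% Passed":   pct(passed, executed),
    "% Failed":   pct(failed, executed),
    "% Blocked":  pct(blocked, executed),
}
print(metrics)
# {'% Executed': 93.75, '% Passed': 90.0, '% Failed': 6.67, '% Blocked': 3.33}
```

Note how “90% passed” falls straight out of the raw counts (135 of 150 executed), turning raw data into the kind of intelligent number described above.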
Just as we have to notify all the stakeholders that testing has begun, it is also the QA team’s duty to let everyone know that testing is complete and to share the results. So, typically, an email is sent from the QA team (usually the team lead/QA manager) indicating that the QA team has signed off on the product, attaching the test results and the list of open/known issues.
Sample Test Sign off Email:
To: Client, PM, Dev team, DB team, BA, QA team, Environment Team (and anyone else that needs to be included)
Email: Hello Team,
The QA team signs off on the OrangeHRM version 3.0 software after the successful completion of two cycles of functional testing of the website.
The test cases and their execution results are attached to the email. (Or mention the location where they are present. Or if using test management software, provide details regarding the same.)
The list of known issues is attached to the email too. (Again, any other references that make sense can be added.)
QA team lead.
Attachments: Final Execution Report, Final issue /defect report, Known issues list
Once the test sign-off email goes out from the QA team, we are officially done with the STLC process. This does not necessarily mark the completion of the “Test” phase of the SDLC; we still have UAT to finish for that to happen. Find more details about UAT testing here.
After UAT is done, the SDLC moves to the deployment phase, where the product goes live and becomes available to its customers/end users.
This has been our effort to bring the most realistic, live-project-like QA experience to our readers. Please let us know your comments and questions about this free online Software Testing QA training series.