Software QA Testing Checklists:
Today we revisit a quality tool that is so often underused that we thought we would rehash its details in the hope that it regains its lost glory: the checklist.
Definition: A checklist is a catalog of items/tasks that are recorded for tracking. This list could be either ordered in a sequence or could be haphazard.
Checklists are part and parcel of our daily lives. We use them in various situations from grocery shopping to having a to-do list for the day’s activities.
As soon as we get to the office, we make a list of things to do for that day or week.
As and when an item in the list is done, you strike it off, remove it from the list, or check it off with a tick to mark its completion. Isn’t it all too familiar?
However, is that all it can be used for?
Can we use checklists formally in our IT projects (specifically in QA), and if yes, when and how? That is what is covered below.
I personally advocate the use of checklists for the following reasons:
- It is versatile – can be used for anything
- Easy to create/use/maintain
- Analyzing results (task progress/completion status) is super easy
- Very flexible – you can add or remove items as needed
As is the general practice, we will talk about the “Why” and “How” aspects.
- Why do we need checklists? For tracking and assessing completion (or non-completion), and to make a note of tasks so that nothing is overlooked.
- How do we create checklists? It could not be simpler: write everything down, point by point.
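A checklist really is just that simple, even when scripted. Here is a minimal sketch in Python (the item names are hypothetical placeholders, not from any real project):

```python
# A minimal checklist: each item maps to a done/not-done flag.
checklist = {
    "Book meeting room": False,
    "Send status report": False,
    "Review test cases": False,
}

def mark_done(items, name):
    """Tick an item off the list."""
    items[name] = True

def pending(items):
    """Return the items that are still open."""
    return [name for name, done in items.items() if not done]

mark_done(checklist, "Send status report")
print(pending(checklist))  # prints the two items not yet ticked off
```

Adding or removing an item is just adding or removing a dictionary entry, which is exactly the flexibility that makes checklists so easy to maintain.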
Example checklists for QA processes:
As I mentioned above, there are some areas in the QA field where we can effectively put the checklist concept to work and get good results. Two of the areas that we will see today are:
- Test readiness review
- When to stop testing or Exit criteria checklist
Test Readiness Review:
This is a common activity performed by every QA team to determine whether it has everything it needs to proceed into the test execution phase. It is also a recurring activity before each cycle of testing in projects that involve multiple cycles. To avoid entering the execution phase prematurely and running into issues after testing begins, every QA project should conduct a review to confirm that it has all the inputs necessary for successful testing.
A checklist facilitates this activity perfectly. It lets you list the ‘things needed’ ahead of time and review each item sequentially. Once created, the sheet can even be reused for subsequent test cycles.
Additional info: The Test Readiness Review checklist is generally created, and the review performed, by the QA team representative. The results are shared with the PMs and the other team members to indicate whether the test team is ready to move into the test execution phase.
Below is a sample Test Readiness Review checklist:
Test Readiness Review (TRR) Criteria
| Criterion | Status |
| --- | --- |
| All the requirements finalized and analyzed | Done |
| Test plan created and reviewed | Done |
| Test case preparation done | |
| Test cases reviewed and signed off | |
| Test data available | |
| Sanity testing done | |
| Team aware of the roles and responsibilities | |
| Team aware of the deliverables expected of them | |
| Team aware of the communication protocol | |
| Team has access to the application, version control tools, and test management tools | |
| Technical aspects: server1 refreshed or not | |
| Defect reporting standards defined | |
Now, all you have to do with this list is mark each item Done or Not done.
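Because every TRR criterion is a simple yes/no item, the “are we ready?” decision reduces to asking whether all of them are marked Done. A hypothetical sketch (criteria abridged from the table above):

```python
# Test Readiness Review: ready for execution only when every criterion is Done.
trr = {
    "Requirements finalized and analyzed": True,
    "Test plan created and reviewed": True,
    "Test case preparation done": False,
    "Test data available": False,
}

def ready(criteria):
    """The team may enter test execution only if all criteria are met."""
    return all(criteria.values())

def open_items(criteria):
    """List the criteria that are still open, for reporting to the PM."""
    return [name for name, done in criteria.items() if not done]

if not ready(trr):
    print("Not ready. Open items:", open_items(trr))
```

The same dictionary can be reset and reused for the next test cycle, mirroring how the spreadsheet version is reused.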
Exit Criteria Checklist:
As the name indicates, this is a checklist that aids in the decision making of whether a testing phase/cycle should be stopped or continued.
Since a defect-free product is not possible, and we have to make sure we test to the best extent possible in the given amount of time, a checklist like the one below is created to track the most important criteria that need to be met before a testing phase can be deemed satisfactory.
| Criterion | Status |
| --- | --- |
| 100% of test scripts executed | Done |
| 95% pass rate of test scripts | |
| No open Critical and High severity defects | |
| 95% of Medium severity defects closed | |
| All remaining defects either cancelled or documented as Change Requests for a future release | |
| All expected and actual results captured and documented with the test script | Done |
| All test metrics collected based on reports from HP ALM | |
| All defects logged in HP ALM | Done |
| Test Closure Memo completed and signed off | |
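Unlike the TRR list, several exit criteria are thresholds rather than plain yes/no items, so a scripted version mixes booleans with percentages. A sketch under the thresholds from the table above (the function and parameter names are made up for illustration):

```python
def exit_criteria_met(executed_pct, pass_pct, open_critical_high,
                      medium_closed_pct, closure_signed_off):
    """Apply the exit-criteria thresholds from the checklist."""
    return (
        executed_pct >= 100            # 100% of test scripts executed
        and pass_pct >= 95             # 95% pass rate of test scripts
        and open_critical_high == 0    # no open Critical/High defects
        and medium_closed_pct >= 95    # 95% of Medium defects closed
        and closure_signed_off         # Test Closure Memo signed off
    )

print(exit_criteria_met(100, 96.5, 0, 97, True))   # testing can stop
print(exit_criteria_met(100, 92.0, 0, 97, True))   # pass rate too low
```

Expressing the criteria as code makes the stop/continue decision reproducible: the same metrics always yield the same verdict, with no room for on-the-spot negotiation.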
Are you about to start testing a new project? Don’t forget to consult this testing checklist at every step of your project life cycle. The list is largely equivalent to a test plan; it covers all quality assurance and testing standards.
1 Create System and Acceptance Tests [ ]
2 Start Acceptance test Creation [ ]
3 Identify test team [ ]
4 Create Workplan [ ]
5 Create test Approach [ ]
6 Link Acceptance Criteria and Requirements to form the basis of acceptance test [ ]
7 Use subset of system test cases to form requirements portion of acceptance test [ ]
8 Create scripts for use by the customer to demonstrate that the system meets requirements [ ]
9 Create test schedule. Include people and all other resources. [ ]
10 Conduct Acceptance Test [ ]
11 Start System Test Creation [ ]
12 Identify test team members [ ]
13 Create Workplan [ ]
14 Determine resource requirements [ ]
15 Identify productivity tools for testing [ ]
16 Determine data requirements [ ]
17 Reach agreement with data center [ ]
18 Create test Approach [ ]
19 Identify any facilities that are needed [ ]
20 Obtain and review existing test material [ ]
21 Create inventory of test items [ ]
22 Identify Design states, conditions, processes, and procedures [ ]
23 Determine the need for Code-based (white box) testing. Identify conditions. [ ]
24 Identify all functional requirements [ ]
25 End inventory creation [ ]
26 Start test case creation [ ]
27 Create test cases based on inventory of test items [ ]
28 Identify logical groups of business function for new system [ ]
29 Divide test cases into functional groups traced to test item inventory [ ]
30 Design data sets to correspond to test cases [ ]
31 End test case creation [ ]
32 Review business functions, test cases, and data sets with users [ ]
33 Get signoff on test design from Project leader and QA [ ]
34 End Test Design [ ]
35 Begin test Preparation [ ]
36 Obtain test support resources [ ]
37 Outline expected results for each test case [ ]
38 Obtain test data. Validate and trace to test cases [ ]
39 Prepare detailed test scripts for each test case [ ]
40 Prepare & document environmental setup procedures. Include backup and recovery plans [ ]
41 End Test Preparation phase [ ]
42 Conduct System Test [ ]
43 Execute test scripts [ ]
44 Compare actual result to expected [ ]
45 Document discrepancies and create problem report [ ]
46 Prepare maintenance phase input [ ]
47 Re-execute test group after problem repairs [ ]
48 Create final test report, include known bugs list [ ]
49 Obtain formal signoff [ ]
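A checklist of this length is easier to track when the [ ] markers are machine-readable. A hypothetical sketch that counts [x] (done) against [ ] (open) to report overall progress:

```python
# A few items from the project checklist, with [x] marking completed steps.
CHECKLIST = """\
Create System and Acceptance Tests [x]
Identify test team [x]
Create Workplan [ ]
Create test Approach [ ]
"""

def progress(text):
    """Return (done, total) counted from the [x] / [ ] markers."""
    done = text.count("[x]")
    total = done + text.count("[ ]")
    return done, total

done, total = progress(CHECKLIST)
print(f"{done}/{total} complete")
```

This is the same strike-it-off habit from the start of the article, just with the bookkeeping automated.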
If you answer yes to any of these questions, then your test should be seriously considered for automation.
#1. Can the test sequence of actions be defined?
Is it useful to repeat the sequence of actions many times? Examples of this would be Acceptance tests, Compatibility tests, Performance tests, and regression tests.
#2. Is it possible to automate the sequence of actions?
This may determine that automation is not suitable for this sequence of actions.
#3. Is it possible to “semi-automate” a test?
Automating portions of a test can speed up test execution time.
#4. Is the behavior of the software under test the same with automation as without?
This is an important concern for performance testing.
#5. Are you testing non-UI functionality?
Almost all non-UI functions can and should be automated tests.
#6. Do you need to run the same tests on multiple hardware configurations?
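Question #6 is where test frameworks shine: the same test can be run once per configuration instead of being copied. A hypothetical sketch using Python’s unittest with subTest (the configuration names and launcher are made up for illustration):

```python
import unittest

# Hypothetical list of hardware/OS configurations the product must support.
CONFIGS = ["win-x64", "linux-x64", "macos-arm64"]

def launch_app(config):
    # Placeholder for the real application launcher; always succeeds here.
    return {"config": config, "started": True}

class TestStartup(unittest.TestCase):
    def test_app_starts_on_all_configs(self):
        # subTest reports each configuration separately if one fails,
        # so a crash on one platform does not hide results for the others.
        for config in CONFIGS:
            with self.subTest(config=config):
                self.assertTrue(launch_app(config)["started"])
```

Adding a new configuration becomes a one-line change to the list rather than a new copy of the test.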
Also, run ad-hoc tests. (Note: Ideally, every bug should have an associated test case. Ad-hoc tests are best done manually. Try to imagine yourself in real-world situations and use your software as your customer would. As bugs are found during ad-hoc testing, new test cases should be created so that the bugs can be reproduced easily and so that regression tests can be performed when you reach the Zero Bug Build phase.)
An ad-hoc test is a test performed manually in which the tester attempts to simulate real-world use of the software product. It is during ad-hoc testing that the most bugs will be found. It should be stressed that automation can never be a substitute for manual testing.
Points to note:
- The above are examples that showcase the use of checklists in QA processes, but their usage is not limited to these areas.
- The items in each list also give readers an idea of what sort of items can be included and tracked; the lists can be expanded or trimmed as needed.
We hope the above examples have brought out the potential of checklists for QA and IT processes.
So, the next time you are in need of a simple tool that is semi-formal and efficient, give checklists a chance. Sometimes the simplest solution is the best.