Partway through your test cycle, do you often realize you do not have enough time to test? You had it all under control to begin with, but soon you are reaching the contingency plan’s “What to do when there isn’t enough time to test?” section.
I have been there too and it is not fun. :)
I thought about this long and hard. How can something that started so well go so badly, so quickly? Here is my analysis.
Where Did My Testing Time Go?
Firstly, why does this happen? There are many reasons, some of which are:
#1) Incorrect Estimation:
If you started with an inaccurate expectation, things are bound to fail. A good test estimate must take the following into account:
- Time for preparatory tasks – We are talking about tasks such as:
- Identifying and putting together a regression suite
- Creating Test data
- Time to determine test readiness (e.g., a smoke/sanity test), etc.
- Test case maintenance: Test cases are long-term assets and are sure to undergo minor updates during execution. For new products, it is recommended to allocate up to 30% of your test execution time for these minor maintenance tasks. Not all teams and projects need the full 30%, but do allocate some time and effort for this task.
- Ad-hoc/Exploratory testing – The number of scripted tests is the main driver of test estimation numbers. However, no test team will skip exploring the software, even when the approach is predominantly scripted, so budget time for it.
- Reporting/Communication – This includes triage/stand up meetings, updating work management tools etc.
- Contingency factor: Standards recommend a 25-30% buffer on your original estimates. Teams can rarely afford that much, but even so, leave a little breathing room when possible.
- Team and its capabilities: If you have a new team, or the team is using a tool for the first time, set some time aside for training. Tailor your estimates to the team you are working with.
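As a rough illustration, the components above can be rolled up into one number. This is a minimal sketch with hypothetical task durations in hours; the 30% maintenance share and the 25% contingency buffer follow the figures mentioned above.

```python
# Rough test-estimation roll-up (hypothetical figures, in hours).
execution = 80                    # scripted test execution
preparation = 16                  # regression suite, test data, readiness checks
maintenance = 0.30 * execution    # up to 30% of execution time for test case upkeep
exploratory = 12                  # ad-hoc/exploratory sessions
reporting = 8                     # triage, stand-ups, tool updates

subtotal = execution + preparation + maintenance + exploratory + reporting
estimate = subtotal * 1.25        # 25% contingency buffer on top

print(f"Subtotal: {subtotal:.0f} h, with contingency: {estimate:.0f} h")
```

The exact numbers will differ per project; the point is that execution time alone badly understates the real cycle.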
Recommended read => Check this for more information on test estimation success and methods
#2) Unstable builds and other technical problems:
- Smoke/Sanity test failure: When the basic tests on the AUT fail after deployment into the QA environment, there is little the QA team can do towards test execution. True, we can work on other tasks in the meantime, but that does not recover the lost cycle time, so this is a major contributor to wasted testing time.
- Test data unavailable: Production-like data is a must for every testing project, and not getting it into the QA environment on time is another blocking factor. Testers can sometimes work around this by creating and managing their own test data, but that is time-consuming and not always accurate.
- Environment issues – Failed build deployments, servers that keep timing out, and many similar issues eat away at your test cycle. This often stems from the fact that some companies (not all) underestimate the importance of a good, production-like environment for effective QA. They try to get away with low-capacity servers and makeshift setups. This is a short-term fix that does nobody any favors; it can cost them the quality of testing and valuable test time.
#3) Lack of agreement between all parties involved:
This might be a rare problem for teams following Agile or SAFe, given the close circles they work in, but many teams still suffer from disagreement or miscommunication about when Dev, Ops, and QA are supposed to receive deliverables from one another. Hence, delays.
To understand communication subtleties, check this => How Business, Development and QA Can Work Together to Get the Project Completed
Now that we know the problems, here are some ways to fix them.
How can testers get enough time for testing?
#1) Estimate accurately. When in doubt, overestimate by a reasonable margin, but never underestimate. Don’t forget to adjust your estimates based on your team, tools, and processes. When done, seek an official sign-off so everyone is aware and kept in the loop.
#2) Take historical data into consideration – The Test Management tool is your best friend.
- How long did the earlier release test cycles take?
- What kind of issues caused interruptions to the previous test cycle?
- How many runs did most test cases take before they passed?
- What defects were reported?
- What defects caused the testing to be interrupted?
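One way to put that history to work is to compute simple averages over past cycles exported from the test management tool. A minimal sketch, with made-up cycle records (the field names are illustrative, not from any specific tool):

```python
# Hypothetical cycle records exported from a test management tool.
past_cycles = [
    {"release": "1.0", "days": 12, "blocking_defects": 3},
    {"release": "1.1", "days": 9,  "blocking_defects": 1},
    {"release": "1.2", "days": 15, "blocking_defects": 4},
]

# Simple averages as a baseline for the next cycle's plan.
avg_days = sum(c["days"] for c in past_cycles) / len(past_cycles)
avg_blockers = sum(c["blocking_defects"] for c in past_cycles) / len(past_cycles)

print(f"Average cycle length: {avg_days:.1f} days")
print(f"Average blocking defects per cycle: {avg_blockers:.1f}")
```

Even a crude baseline like this catches estimates that are wildly out of line with what past cycles actually took.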
#3) Ask these questions and plan accordingly in crunch time:
- Which functionality is most important to the project?
- Which modules of the project are high-risk?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
Considering these points helps you prioritize, greatly reducing the risk to the release even when testing time is constrained.
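In crunch time, answers to questions like these can be turned into a simple weighted risk score that decides test order. A sketch under stated assumptions: the module names, the 1-5 ratings, and the weights are all hypothetical.

```python
# Hypothetical modules rated 1-5 on a few of the risk factors above.
modules = {
    "payments":  {"visibility": 4, "financial_impact": 5, "complexity": 4},
    "reporting": {"visibility": 3, "financial_impact": 2, "complexity": 2},
    "login":     {"visibility": 5, "financial_impact": 3, "complexity": 2},
}
# Assumed weights; tune per project (e.g., safety-critical work weights differently).
weights = {"visibility": 1.0, "financial_impact": 2.0, "complexity": 1.5}

def risk_score(ratings):
    """Weighted sum of the factor ratings for one module."""
    return sum(weights[f] * r for f, r in ratings.items())

# Test the highest-risk modules first.
order = sorted(modules, key=lambda m: risk_score(modules[m]), reverse=True)
print(order)
```

The scoring itself is trivial; its value is forcing the team to agree, explicitly, on what gets tested first when not everything can be.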
#4) Use a Test Management tool. This will significantly reduce the amount of preparation, reporting and maintenance time and effort.
=> For a list of the most popular test management tools, check here:
#5) There is not much we can do about broken builds and technical issues, but one thing that helps is looking at the unit test results. These tell us whether the build was a success and which kinds of tests it failed – so we don’t reinvent the wheel.
If your Test Management Tool supports CI integration, that information is available without any fuss, giving you a better understanding of the application’s stability.
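If the CI server publishes unit test results in the common JUnit XML format, a quick script can summarize build stability before the QA cycle starts. A minimal sketch; the report contents and the 95% threshold are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style report; most CI servers can emit files in this shape.
report = '<testsuite name="unit" tests="120" failures="5" errors="1" skipped="2"/>'

suite = ET.fromstring(report)
tests = int(suite.get("tests"))
broken = int(suite.get("failures")) + int(suite.get("errors"))
skipped = int(suite.get("skipped"))

# Share of tests that actually ran and passed.
pass_rate = (tests - broken - skipped) / tests
print(f"Unit test pass rate: {pass_rate:.1%}")
if pass_rate < 0.95:  # assumed stability threshold
    print("Build looks unstable - flag it before starting the full cycle")
```

A check like this at hand-off time turns “the build feels shaky” into a number the whole team can act on.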
#6) Measure your productivity and progress often. Don’t let status reports be a deliverable just for the benefit of the external teams. Make sure you are closely monitoring your daily targets and your ability to accomplish them.
Also, be sure not to fall into the classic ‘Velocity vs. Quality’ conundrum. When you report, say, 50 bugs a day, it might appear that you are being super productive, but if most of them come back as invalid, you have a problem.
So monitor, monitor and monitor a little more :)
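The velocity-vs-quality trap can be made concrete with one metric: the share of reported bugs that survive triage. A minimal sketch with hypothetical daily figures:

```python
# Hypothetical daily figures, split by triage outcome.
reported = 50
invalid = 32   # rejected as duplicate, not-a-bug, or cannot-reproduce

valid_rate = (reported - invalid) / reported
print(f"Valid-bug rate: {valid_rate:.0%}")

# A high raw count with a low valid rate signals a quality problem, not productivity.
if valid_rate < 0.5:  # assumed threshold
    print("Most reports are bouncing back - slow down and verify before filing")
```

Tracking this daily catches the 50-bugs-a-day illusion early, while there is still cycle time left to correct course.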
Finally, if despite all the precautions and measures you still find yourself crunched for time, ask for help.
Most teams are willing to participate in a war room session to get things back on track.
About the author: These helpful testing tips are provided by STH team member Swati S.
Now, what are your tricks to stay on time and deliver a quality testing service? Also, what points in the above article resonate with you?
We appreciate your feedback and cherish your readership. Thank you for reading!