When I started my career in QA, I worked for a company that offered its products as SaaS. Production releases were critical because there was always a possibility of affecting functionality for live clients.
As our client base grew, the QA team adopted the practice of post-release testing to manage the risk and minimize the impact of each release on live clients.
This was all new to me and I had so many questions and doubts in my mind:
- What is post-release testing?
- I tested everything thoroughly, so why do we need post-release testing?
- Do I test everything all over again? What exactly do I do in post-release verification?
- What happens if I find an issue?
I am happy to admit that I found all my answers within my first few production releases.
Here I am sharing that knowledge with you. I chose to write this article in a question-and-answer format to show you the way I discovered the answers.
What You Will Learn:
- What is Post Production Release Verification?
- What tasks and activities are included in post-release verification phase?
- Do I need to test everything all over again?
- How do I formulate post production release verification strategy?
- Who creates the post production release test plan?
- Who approves the post production release test plan?
- When do I create the post production release verification plan?
- I completed the post production release verification. What’s next?
- What happens if I find an issue?
- What else do I need to know about post production release verification process?
- Conclusion
- Recommended Reading
What is Post Production Release Verification?
By definition, Post means after, Production Release refers to deployment to the LIVE/production environment, and Verification means making sure the released features meet the requirements.
Recommended read => How to Effectively Prepare “Test Environment” before starting to test
The objective is to verify the release on production/LIVE environments.
But the questions then arise:
- Why do we need post production release testing when I already tested everything on the QA environment?
- Why do we anticipate issues to occur on production although we tested the release thoroughly on test environment?
There are many reasons why issues can appear on production even though we followed the complete quality assurance process (i.e. test planning, test plan review, test cycle, regression tests, etc.).
Reasons why we may still see issues on production:
1) Data Issue – The data on production and test environments may differ. This can cause corner-case issues to be missed on test environments.
2) Deployment issue – If your company has a manual build deployment process, your release may be more prone to deployment issues. Some common scenarios are: missing configuration or site settings, missing DB scripts, the order of deployment not being followed (code first, then DB, etc.), and dependencies installed incorrectly.
Also read => What QA Tester Should Know About Deployment process
3) Impact areas not identified – There can be some scenarios for which the impacted areas may not have been identified correctly and completely by the team.
For example, in a SaaS environment, the team may fail to identify the impact of a refactored DB table on a client still using the older table schema (e.g. data loss, or the need for data migration before release). This issue is less likely to happen for well-planned projects with precise requirements, but the possibility still exists.
4) Unknown Impact areas – This can occur if the scope and impacted areas of the release are not known. For example, in a company with several software products sharing common DB and architecture, even a small change can break the functionality of many products.
What tasks and activities are included in post-release verification phase?
Post production release tasks and activities generally include:
- Post production release verification
- Report verification results
- Reporting any issues found on production
- Post release verification data clean up
- Post release monitoring (if applicable)
Do I need to test everything all over again?
Not necessarily. This depends on the build to be released and the impact analysis.
Detailed testing should be done during the regular QA cycle. Post release verification should follow a post production release verification test plan, which should be a derivative of the full test plan for that release.
How do I formulate post production release verification strategy?
Post production release verification planning needs to be done in a similar way as your regular test planning.
The strategy should follow the same lines as the test flow used during the QA cycle. It is important to include the most important and critical steps that allow maximum functionality coverage.
A good post production release strategy should:
- Include steps to test new features as well as major existing features
- Verify major impact areas
- Allow maximum functionality coverage
- Optional: Include any critical bugs that were found in test environment
- Optional: Include priority of the test cases
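As an illustration of the prioritized strategy above, a post-release smoke suite can be driven by a small runner that executes critical checks first and stops as soon as one fails, so the rollback decision is raised before more of production is touched. This is a minimal sketch in Python; the check names and stub functions are hypothetical placeholders, not part of any real release.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Check:
    name: str
    priority: str               # "critical" or "major"
    run: Callable[[], bool]     # returns True when the check passes

def run_post_release_checks(checks: List[Check]) -> Dict[str, bool]:
    """Run critical checks first; stop early if a critical check fails
    so stakeholders can decide on a rollback immediately."""
    # Sort so priority == "critical" comes first (False sorts before True).
    ordered = sorted(checks, key=lambda c: c.priority != "critical")
    results: Dict[str, bool] = {}
    for check in ordered:
        passed = check.run()
        results[check.name] = passed
        if check.priority == "critical" and not passed:
            break               # abort remaining checks; escalate now
    return results

# Hypothetical checks: lambdas stand in for real probes of the live site.
checks = [
    Check("homepage loads", "critical", lambda: True),
    Check("sign-up flow works", "critical", lambda: True),
    Check("avatar renders", "major", lambda: False),
]
results = run_post_release_checks(checks)
```

A failing major check (like the avatar here) is recorded but does not halt the run; only a critical failure aborts early.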
Who creates the post production release test plan?
This will vary across companies and will depend on the organization structure.
Let’s take an example of a typical QA team organization, where the QA engineers working on a project report to a Test Lead, who in turn reports to the QA Manager.
In this scenario, the QA working on the specific project formulates the initial post production release test plan.
Who approves the post production release test plan?
This will vary across companies and will depend on the organization structure.
Again considering the same organization structure as shown in the previous question, the post production release test plan should be reviewed and approved by the Test Lead or QA Manager.
When do I create the post production release verification plan?
The post production release test plan can be created anytime during the software development cycle after the requirements, development scope and impact areas are identified and locked. It is usually easier for QA to create the post production release test plan midway into the sprint; that ensures there is enough time for review and approval.
It is a good practice to include this test plan along with any formal QA sign off documents before the project enters the deployment and release phase.
I completed the post production release verification. What’s next?
After the post release verification is complete, the next steps are:
1) Communication of verification results – The verification results should be communicated to the stakeholders including any issues that may have been found on production.
2) Reporting any issues found on production in the Defect Management tool – To facilitate root cause analysis and traceability.
3) Post release verification data clean up – The data cleanup needs to be done after verification is complete.
For example, consider a release for an eCommerce application and say you created a test order on production. This test order needs to be canceled after the verification is complete.
4) Post production release monitoring (if applicable) – Some releases require monitoring on production.
For example, if the team made changes to improve page load times in the application, this would need to be monitored over some period of time to confirm that the improvement was actually realized after the release. The person(s) responsible for monitoring should be clearly identified and informed.
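For the page-load example above, monitoring can be as simple as sampling the load time periodically and comparing the post-release median against a pre-release baseline. Below is a minimal sketch; the injected `fetch` callable and the baseline value are assumptions for illustration, not a real monitoring setup.

```python
import time
from statistics import median
from typing import Callable, List

def sample_load_times(fetch: Callable[[], None], samples: int = 5) -> List[float]:
    """Time several fetches of the page under observation."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fetch()                       # e.g. an HTTP GET of the live page
        timings.append(time.perf_counter() - start)
    return timings

def improved(baseline_median: float, post_release: List[float]) -> bool:
    """True if the post-release median load time beats the baseline."""
    return median(post_release) < baseline_median

# Illustrative run with a stub fetch; in practice fetch would hit the live URL.
post = sample_load_times(lambda: None, samples=3)
```

Injecting the fetch function keeps the measurement logic testable off production, in line with the limitations discussed later in this article.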
What happens if I find an issue?
Any issues should be reported in the defect management tool and communicated to the stakeholders. If any critical issues are found on production, the results should be communicated immediately, as a decision would need to be made on whether the build should be rolled back to investigate the issue further.
It is important that all the issues found are reported in the defect tracking tool. It is recommended that these be raised as a separate issue type (e.g. Post Production Bug) to distinguish them from regular QA cycle bugs. These issues can then be filtered out easily for the purpose of root cause analysis.
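For instance, if the team uses Jira and has added a "Post Production Bug" issue type (an assumption; your tracker, issue type names and fix version values may differ), a JQL filter along these lines can pull the post-release defects out for root cause analysis:

```
issuetype = "Post Production Bug" AND fixVersion = "<release-version>" ORDER BY created DESC
```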
What else do I need to know about post production release verification process?
Besides the actual post production release verification process, plan and strategy, below are some pointers:
- It is important to set clear expectations regarding the scope and purpose of post release verification. Stakeholders (internal and external) should be made aware of the following:
- The team cannot test everything on production
- The team cannot squeeze days’ worth of testing into the few hours set aside for post release verification
Therefore, testing on production is essentially based on the approved post production release test plan.
Limitations:
Due care should be taken while deciding the extent of post-production release testing. There are limitations to what and how much we can actually test on production. The production environment contains live client data and needs to be handled very carefully. Additional planning should be done for changes that involve data migration, updates, deletions, etc.
Example #1: For an eSurvey company, if testing involves answering and submitting a survey, QA would need to send a request to delete the test survey after verification so as to not impact the client’s survey collection data and statistics.
Example #2: For an e-commerce company, let’s assume a pricing update SQL job runs at midnight every day and uploads the finalized price to the website. We cannot run this SQL job on demand, multiple times, for the purpose of post-release verification, as this may push unfinalized data to production.
Moreover, it can increase the chances of DB deadlocks and high consumption of CPU and memory resources during peak business hours which can affect the client application performance.
- The effort required for post-release testing and all related activities should be inbuilt and included in the Project Plan. Depending on the business rules and project specifics, this can be considered as project overhead or included in QA cycle or included as part of the release management plan.
- For issues reported during post-release verification, root cause analysis should be conducted to find out why the issue was not caught earlier and what can be done better next time. Root cause analysis helps the team learn from past issues and fill gaps in the process. Depending on the organization structure, the Test Lead or QA Manager can complete the root cause analysis with input from the project team. Some common root causes are coding issues, requirements issues, design issues, data issues, 3rd party limitations, missing test scenarios, etc. Corresponding Corrective and Preventive Actions can then be created and tracked.
- Server logs can also be used to monitor the build after release. A server log may contain events or issues that are not visible to the customer but cause problems in the backend. This monitoring can be assigned as an action item to the Dev lead and DevOps team.
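As a sketch of what such log monitoring might look like, the snippet below counts error-level events in a batch of log lines; comparing post-release counts against a pre-release baseline flags backend regressions. The log format and severity keywords here are assumptions and would need adapting to your stack.

```python
import re
from collections import Counter
from typing import Iterable

# Severity keywords assumed for illustration; adjust to your log format.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")

def count_error_events(lines: Iterable[str]) -> Counter:
    """Tally error-level entries so post-release counts can be
    compared against the pre-release baseline."""
    counts: Counter = Counter()
    for line in lines:
        match = ERROR_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Hypothetical log excerpt from just after a release.
sample = [
    "2024-01-01 00:00:01 INFO request served",
    "2024-01-01 00:00:02 ERROR avatar upload failed",
    "2024-01-01 00:00:03 ERROR avatar upload failed",
    "2024-01-01 00:00:04 FATAL db connection lost",
]
counts = count_error_events(sample)
```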
An Example:
Project Overview:
The following changes need to be made to a social media application, specifically to the sign-up process:
- Remove the last name field validation, previously implemented as ‘Last name should have minimum 4 characters’ (improvement to an existing field)
- Implement toggle button next to email address so that user can set the privacy settings for email address to show on their profile (new feature request)
- User should be able to choose their avatar (new feature request)
- Reduce API calls during Sign up process to improve application performance (Improvement)
Post production release verification Plan:
S.No. | Description | Expected Result | Status | Comments |
---|---|---|---|---|
1 | Go to Livesiteurl | Website homepage should load successfully | Pass | |
2 | Click on Sign up as new user | User should be redirected to the registration/sign up page | Pass | |
3 | Fill in the required fields and click on the Register button. Note: enter last name as ‘Lee’; toggle the privacy button to Do Not Display; choose an avatar | User should be redirected to their Profile page after successful registration. User’s email address should not be shown on the profile. User-selected avatar should show | Partial Pass | Avatar is not rendering properly and shows as a broken image. Reported in JIRA as BUG-1088 |
4 | Monitoring - Verify if the application performance has improved after this release | Reduction of API calls during Sign up process should improve application performance | Ongoing | Action is on Dev Lead and Dev Ops team to monitor application for 24 hours |
5 | Post release cleanup | Delete the test account created | Done | |
Conclusion
With most of the software companies now adopting the Agile methodology, the number of production releases has increased.
For example, under the Waterfall model a team may have had a production release every 1.5 months; with the Agile process, the same team may now have a production release every 2-3 weeks.
With every production release, there is a possibility of knowingly or unknowingly impacting the functionality of live clients. Post production release verification performed immediately after release provides additional confidence in the release, while also providing a safety net: the release can be rolled back before live clients encounter the issues.
For high impact/risk projects, the post-production release verification plan can be structured based on the priority of the test scenarios. Critical priority tests can be executed first, with results and any issues communicated to stakeholders. If no critical issues are found, post production release verification can continue; otherwise, the rollback decision needs to be made quickly to minimize application downtime and impact to live clients.
Additionally, post-production release testing can be automated, and the test scripts run on demand after every release as a regression test. Again, due care should be taken when running automated test scripts on production, as they may affect live client data and functionality.
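One simple safeguard when reusing automated suites on production is an environment gate that refuses to run destructive test cases against live data. Below is a minimal sketch; the `destructive` tag name and the `TEST_ENV` variable are assumptions for illustration, not a standard convention.

```python
import os
from typing import Optional, Set

def allowed_on_env(tags: Set[str], env: Optional[str] = None) -> bool:
    """Return False for tests tagged 'destructive' when the target
    environment is production; allow everything elsewhere."""
    env = env or os.getenv("TEST_ENV", "qa")  # default target assumed: QA
    return not (env == "production" and "destructive" in tags)
```

A test runner can consult this gate before each case, so the same suite can run fully on QA but only its safe subset on the live environment.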
Post production release verification is the last line of defense for any software company. If we do not catch the issues, our customers will and this can be devastating to the reputation of any Software company.
In order to maintain the reliability of the product, it is essential that we verify the changes deployed to production immediately after deployment.
About the author: This helpful article is written by Neha B. She is currently working as a Quality Assurance Manager and specializes in leading and managing in-house and offshore QA teams.
Share your post-production release testing strategy / tips / experience with our readers.
Hi Neha,
What are your thoughts on having a Checklist or Automated Scripts to handle this type of Continuous Integration ?
Great article, so helpful. Thanks a lot.
Hello folks,
Recently I did post-production release testing. I created a test plan and listed the impact analysis points. Even after doing regression testing, I tested the whole system flow with positive data just to confirm everything was working as expected.
What happened was: just after I gave the green signal, the client came back with 2 critical bugs. After analysis, I found that one of the client’s testers had not understood the change and, without double-checking on their end, raised the 2 critical bugs. The client later realized the mistake.
Anyway, the main point is that we have to look at every change thoroughly while doing post-release testing.
Thanks, Neha, for a nice article.
Hi Rushl,
Can you please share the Test plan if possible that you created on my email id akshat.b.agarwal@accenture.com.
I have to create the approach doc for doing Post production release testing.
It would be a great help.
Thanks,
Akshat
Helpful Article , especially for junior testers.
Nice article with the right details and very well written. Thank you.
Very nice article; all points are important and should be implemented by all QAs.
Hi Amit
Automated scripts are a good way to allow maximum feature/test coverage and save effort at the same time. However, while running test scripts against production, the limitations of what and how much we can test on production should be kept in mind. Either we can design the scripts to run as independent modules so we can pick and choose what to run on production, or we can create separate scripts for QA and live environments. There are trade-offs with both approaches.
Could you please provide more information about the checklist? Do you mean manual documented checklists that the QA/QA lead can use as reference during release cycle?
Thanks
Neha
Thank you dear readers for your feedback :) .
This is very informative. Thanks for sharing your article
Firstly, thank you Neha B for this helpful article, so simple and so valuable. In my opinion, BVT (Build Verification Testing) is part of Post Release Testing. Do you agree with me?
Hi Ahmed
Glad you liked the article! Thank you for your feedback.
Regarding BVT: Build Verification Tests are usually run as first-pass tests for every new build, with the objective of ensuring the build is testable. This can be a set of manual or automated tests run every time a new build is generated and delivered by developers. These tests run on QA or Dev test environments and are considered part of the regular QA cycle.
Post release tests, on the other hand, are done on production as a final test before the production live status is communicated to clients/customers. The objective is to verify the release on the production/LIVE environment.
In terms of approach, both tests are run according to planned guidelines/plans; however, the purpose and objective of the two differ.
Hope this answers your question.
Thanks
Neha
How many times should we do a full testing of the application before deploying to production?
Hi RA
There are no set guidelines for the number of test cycles required. Regression tests (full testing) may need to be performed once or more depending on the project at hand. For simple projects, one regression test cycle closer to the release date should be sufficient.
However, if you end up finding lots of bugs during the regression test, chances are you will need to re-run the regression tests before the production release.
Hope this answers your question
Thanks
Neha
Hi Neha,
Thanks for sharing this article.
I would like to make sure I understood correctly: the environment used for this post release testing is the real client environment and not a copy of it?
If so, is it a new testing phase before User Acceptance Testing (UAT)?
Thanks,
Lara
Hi Lara
Yes, that is correct. The test environment in this case is the actual live production environment.
This is essentially the last testing phase, so it comes after UAT. It is the final test cycle run on production after the build is deployed there.
Thanks
Neha
Neha, it’s really a great article. I have come across a lot of the scenarios covered in this article.
Hi Prashanthi
Thank you for the feedback. Glad you liked the article :) !
Thanks
Neha
Hi Neha,
Nice article, how does this verification happen post release in production with automated delivery process in jenkins job?
Dharma
Hi Dharma
Glad you liked the article :) !
Automation tests can easily be run on production if they are designed in a way that won’t negatively impact the business and/or data in any way (I am assuming you are referring to functional tests that mimic user behaviour). However, one needs to be mindful of the limitations and challenges of the production environment and design the tests to work around them.
Thanks
Hi everyone,
How would you manage the ticket on a KanBan board when you have post-release tests to perform?
Kind regards,
Nice article, thanks. I work in an Agile environment and our PVT is actually performed by the Business, aptly titled Business Verification Testing (BVT).
After testing has been completed against any functionality, the QA team performs UAT with the business and then involves them in the release process so they can re-validate the change in production. They also perform a suite of manual regression scenarios (we have limitations on automation in our production env.) to verify it hasn’t broken any existing functionality.
This mitigates the additional effort on QA and provides greater involvement from the business in the release process.
Our order of events are as follows:
1. Developer performs deployment steps
2. Dev performs TVT against the deployed code
3. Go/No Go
4. Business Verification Testing
5. Go/No go
6. Close change
I really love to read from this platform. Thank you to everyone contributing.
I would like to join groups where I can learn more about Testing please. Telegram or whatsapp or newsletter. I wouldn’t mind.
Thanks.