This article explains 8 Key Performance Indicators for Quality Releases with the help of the Panaya Test Center end-to-end test solution:
It’s no secret that software quality managers face increasing pressure to deliver high-quality software at record-breaking speed.
The question that all of us often ask is – “how do we measure our success” in terms of software quality?
Speed-to-market is a much simpler calculation, but measuring our performance in delivering high-quality software depends on a multitude of factors such as the project methodology (waterfall, hybrid, agile), the complexity of the software, the level of technical debt involved, the number of interfaces, and much more.
In a nutshell, the number of variables that play into an acceptable level of high-severity defects should not be underestimated. Hence, in order to survive in this marketplace, we must evolve continuously, both in our opinions and our measuring sticks.
That’s why I’ve developed this list of the top 8 KPIs that you should add to your Quality Scorecard and start tracking right away to mitigate release risk, improve quality, and measure your success.
This is a measure of your overall regression testing effectiveness. It is calculated as the ratio of defects found prior to release to all defects found, both before release and after release by your customers.
Defects found after you release are typically known as “incidents” and are logged in a help desk system, whereas defects found during the testing phases (e.g., Unit, System, Regression, or UAT) are identified prior to release and documented with tools like Panaya Test Center.
To calculate this KPI properly, you should always record the software version in which each defect was identified, prior to release into your production environment.
The formula often used for DDE:

DDE = Number of Defects Identified Prior to Release /
(Number of Defects Identified Prior to Release + Escaped Defects Identified by End Users, i.e., Incidents)
Here’s a Simple Illustration:
Assume that 95 defects were found during your regression testing cycle on the last monthly SAP Service Pack and 25 defects were logged after the release. The DDE would be calculated as 95 / (95 + 25) = 79%.
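The arithmetic above can be sketched in a few lines. This is a minimal illustration, not any tool’s API; the helper name and numbers are mine.

```python
# Minimal sketch: computing Defect Detection Efficiency (DDE).
# The numbers mirror the example above; the function name is illustrative.

def dde(pre_release_defects: int, escaped_defects: int) -> float:
    """DDE = defects found before release / all defects found."""
    total = pre_release_defects + escaped_defects
    if total == 0:
        return 1.0  # no defects found anywhere; treat as 100%
    return pre_release_defects / total

print(f"DDE: {dde(95, 25):.0%}")  # prints "DDE: 79%"
```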
Keep in mind that the DDE should be monitored with a line chart that starts at 100% the day after releasing to production. As your internal end users and customers begin working with, say, your latest SAP Service Pack, they will inevitably log a few incidents.
It’s been my experience that a “feeding frenzy” occurs within the first few days after a Service Pack hits the production environment. That is when you’ll notice a quick drop from 100% to about 95% as incidents are logged. If your company is on a monthly Service Pack release cadence, then measure DDE over a 30-day period for each Service Pack.
On the other hand, if your company is only running four (4) major release cycles per year, then measure it for 90 days to see how it declines over that period of time.
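The declining trend line described above can be simulated with a running tally of incidents. The daily incident counts here are made-up numbers purely for illustration:

```python
# Illustrative sketch: DDE as a declining trend line after release.
# Starting at 100% on release day, each logged incident pulls DDE down.
# The incident counts per day are invented for this example.

pre_release_defects = 95
incidents_per_day = [0, 6, 8, 4, 3, 2, 1, 1]  # days 0..7 after release

trend = []
escaped = 0
for day, new_incidents in enumerate(incidents_per_day):
    escaped += new_incidents
    dde = pre_release_defects / (pre_release_defects + escaped)
    trend.append((day, round(dde * 100, 1)))

for day, value in trend:
    print(f"day {day}: DDE {value}%")
```

Plotting `trend` as a line chart gives exactly the declining curve to monitor over your 30- or 90-day window.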
What is considered a “good DDE”?
It’s much like blood pressure readings, in that every organization, like every person, evolves over time.
Though the medical community defines the “optimal” blood pressure reading to be 120/80 – it’s natural to see an increase in systolic blood pressure as we age. With DDE, industry practitioners and thought leaders have been known to say that 90% is commendable in most industries.
However, I have seen organizations achieve >95% DDE on a consistent basis by shifting left with change impact simulation tools such as Panaya’s Impact Analysis.
Have you ever encountered multiple defects that are associated with the same objects? Surely, you would have. It’s a common phenomenon that many test managers encounter.
Suddenly, you see a huge uptick in the number of bugs reported in a UAT cycle. I bet you’re the type who monitors defects every 15 minutes and manually “links” the duplicates together, or reads through every single description to discern the root cause yourself, right? Doubtful.
So, what are your options to manage the inevitable drama of “defect inflation?”
The drama ensues on that nightly recap call with the leadership at HQ: “Why such a sudden uptick in defects today?” (Pause… deep breath before responding) … “I’m in the process of working with our Functional Leads to perform a manual root cause analysis. We think that many of the issues relate to a common cause, but it hasn’t been identified yet.” Sound familiar?
My suggestion is that you begin tracking what Panaya calls “System-Wide Defects” (SWD). Tracking this manually takes forever – believe me, I’ve tried it many times. It’s also painful to do with legacy ALM tools, where all you’re left with is the ability to link the defects to one another and add a comment.
Wow, that really helped! (Sense the sarcasm?) But if you don’t have a choice in tools right now, then you’ll need to set aside the time to properly track System-Wide Defects so you can clearly explain why the bug trend line is moving upward toward the end of a testing cycle rather than down.
If you get a chance, check out Panaya Test Center; it has SWD detection built into the engine itself and automatically calculates System-Wide Defects for you on the fly.
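For readers without such a tool, the core idea behind System-Wide Defect detection, grouping defects by the object they touch and flagging shared objects as a likely common root cause, can be sketched roughly like this. The data shape and object names are my assumptions, not Panaya’s model:

```python
# Hedged sketch: flag a "System-Wide Defect" when several open defects
# touch the same application object. The records below are invented.
from collections import defaultdict

defects = [
    {"id": "D-101", "object": "SAPLMEGUI"},
    {"id": "D-102", "object": "SAPLMEGUI"},
    {"id": "D-103", "object": "RV60AFZZ"},
    {"id": "D-104", "object": "SAPLMEGUI"},
]

by_object = defaultdict(list)
for defect in defects:
    by_object[defect["object"]].append(defect["id"])

# Any object with more than one defect is a candidate common root cause.
system_wide = {obj: ids for obj, ids in by_object.items() if len(ids) > 1}
print(system_wide)  # prints {'SAPLMEGUI': ['D-101', 'D-102', 'D-104']}
```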
The Spider Web – residing within the ‘Risk Cockpit’ of this platform, this is a powerful yet simple representation of the 6 additional key performance indicators that round off the most important KPIs every quality, testing, and release manager should be tracking.
QA managers understand risk at a deeper level that can only be realized with code- or transport-level visibility rolled up to each requirement. This requires the right set of tools.
The Panaya tool answers the needs of SAP-run organizations seeking intelligent suggestions for unit tests and risk analysis based on transport activity.
This level of tracking is available within Panaya Release Dynamix (RDx).
We live in an era in which the customer is king, and this drives every organization’s digital transformation strategy. In this day and age, we can’t afford to be siloed in our thinking or in our organizational approach to software quality assurance and delivery.
Our traditional ALM models of yesteryear were not designed for today’s continuous delivery model. To combat this old way of thinking, QA and testing managers must embed themselves in the action of application development, which means having a pulse on the delivery of user stories.
It’s not enough to “sit and wait” for a user story to reach the done status. Rather we must follow the evolution of a user story, attend daily Scrum meetings, and talk openly about the risks unfolding with important changes being made to the application under test.
This is one of my favorite KPIs to track because I’m not relegated to tracking system, integration, regression, and UAT coverage alone.
In the true spirit of shifting left, I have started to advise on the importance of tracking unit testing coverage. Sounds crazy, right? It’s not, especially if you have the right tools, which not only make the execution of unit tests easy but also make capturing the actual results (evidence) easier.
With Panaya Test Center’s built-in test record-and-play capability, your participation in unit testing will skyrocket. You’ll not only be able to proudly display a Requirements Traceability Matrix showing end-to-end coverage, but you’ll also easily showcase the actual results to your audit department, from unit testing through to regression testing.
Risk is inherent in any change we make to an application under test, but we don’t always know whether we are testing the right things.
Many organizations have their own definition of what ‘change risk’ means to them. Within the ‘Risk Cockpit’ of Panaya’s Release Dynamix (RDx), you can take the guesswork out of tracking the changes with an Impact Analysis for your project or next release.
RDx systematically calculates the risk for each requirement and keeps you abreast of how it changes as you move further into the delivery lifecycle.
It is common for organizations to track KPIs like authored tests, passed tests, automated tests, and executed tests, but what about tracking the actual steps executed within each test?
Have you ever noticed that many of the popular ALM platforms don’t provide out-of-the-box reporting capabilities to track test ‘step’ execution progress? When you have many different ‘hand-offs’ occurring across a UAT cycle, it makes sense to track Test Execution Risk and status, not only at the test-level but also at the business process level.
Panaya Test Center does just that, out of the box.
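If your tooling exposes step counts at all, the step-level roll-up itself is simple. A sketch with made-up test data, showing how test-level progress can hide partially executed tests:

```python
# Sketch: progress at the test-step level vs. the test level.
# A test only counts as "executed" once every step is done, so test-level
# progress understates work in flight. All test names and counts are invented.

tests = {
    "Create Sales Order": {"steps_total": 12, "steps_done": 12},
    "Post Goods Issue":   {"steps_total": 8,  "steps_done": 3},
    "Create Invoice":     {"steps_total": 10, "steps_done": 0},
}

tests_done = sum(t["steps_done"] == t["steps_total"] for t in tests.values())
test_level = tests_done / len(tests)

steps_done = sum(t["steps_done"] for t in tests.values())
steps_total = sum(t["steps_total"] for t in tests.values())
step_level = steps_done / steps_total

print(f"test-level progress: {test_level:.0%}")  # prints "test-level progress: 33%"
print(f"step-level progress: {step_level:.0%}")  # prints "step-level progress: 50%"
```

The gap between 33% and 50% is exactly the hand-off visibility the article argues for.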
Tracking defects inherently has a negative connotation, too.
In addition to tracking active defects, defects fixed per day, rejected defects, and severe defects, we also suggest monitoring the resolution of defects as they relate to scoped-in requirements.
Many organizations do not take a requirements-driven view of defect resolution.
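One way to take such a requirements-driven view is to roll defect status up to the requirement each defect traces to. A small sketch with illustrative field names, not any specific ALM schema:

```python
# Sketch: requirements-driven view of defect resolution.
# Each defect traces to a scoped-in requirement; roll open/closed counts
# up per requirement. All IDs and statuses are invented for illustration.

defects = [
    {"id": "D-201", "requirement": "REQ-7", "status": "closed"},
    {"id": "D-202", "requirement": "REQ-7", "status": "open"},
    {"id": "D-203", "requirement": "REQ-9", "status": "closed"},
]

rollup = {}
for d in defects:
    counts = rollup.setdefault(d["requirement"], {"open": 0, "closed": 0})
    counts[d["status"]] += 1

# A requirement is "clear" only when every defect traced to it is closed.
clear = sorted(req for req, c in rollup.items() if c["open"] == 0)
print(clear)  # prints ['REQ-9']
```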
Why this solution for Testing?
With end-to-end traceability built into both Release Dynamix and Panaya Test Center, your organization can track the workflow of defect resolution from start to finish at the requirement level.
This is especially helpful for release, quality, and test managers seeking a bird’s-eye view of a project or release cycle.
Panaya accelerates the testing process for technical IT and business users, thereby reducing overall testing effort by 30-50%:
#1) Panaya Test Center is a SaaS solution, which means you gain seamless integration, frequent and painless upgrades, as well as monitoring of on-premise automation tools.
#2) Built-in collaboration features streamline testing cycles with notifications and communication tools.
Automatic handover of test steps to the next user eliminates idle time, relieves workload bottlenecks and ensures optimal workflows.
#3) Smart defect management enables users to centrally monitor defects, their resolution and the business processes affected by them.
When a defect is found, the system automatically identifies all the other tests affected by it and blocks them or sends notifications to testers until the main defect is resolved. The resolved defect is then automatically closed, eliminating defect backlog.
#4) With a business process-centric approach to UAT and SIT, cross-functional and geographically dispersed subject matter experts validate UAT cycles based on the actual business processes (packaged applications).
#5) Test Automation Connectors provide complete integration of Panaya Test Center with existing automation tools for effective regression cycles with minimum time and effort, along with holistic tracking and monitoring capabilities.
#6) Test Evidence Automation automates manual testing traditionally managed in Excel and Word.
It saves time by effortlessly documenting every test execution, including test evidence and a record of steps for test reproduction, while reducing the back-and-forth between developers and testers. Documentation is audit-ready and ensures compliance with all internal and external quality standards.
#7) Autonomous Testing℠ for SAP enables zero-touch test case creation and maintenance, so you no longer need to deal with the pains associated with business knowledge capture or with creating and maintaining manually engineered scripts.
Scripts are customizable while machine learning offers validation and suggestions based on crowd analysis.
#8) Automated business knowledge capture – Omega automatically creates real-life test cases based on business user activities seamlessly captured in production, using machine-learning algorithms (for SAP).
Using Panaya, software quality managers and all relevant stakeholders can meet their testing KPIs and drive more innovation while reducing effort by 30-50%, without compromising on scope or quality.
It standardizes the testing process and measures success as all stakeholders adopt the same testing methodology, gaining real-time visibility across all test cycles, including large-scale UAT.
For more information, you can explore Panaya Test Center.
Let us know your thoughts and queries in the comments below.