7 Principles of Software Testing: Defect Clustering and Pareto Principle

By Vijay

Updated March 7, 2024

Seven Principles of Software Testing: Including More Details about Defect Clustering, Pareto Principle and Pesticide Paradox.

I’m sure that everyone is aware of the “Seven Principles of Software Testing”.

These fundamental testing principles help testing teams use their time and effort to make the testing process effective. In this article, we will take a detailed look at Defect Clustering (along with the Pareto Principle behind it) and the Pesticide Paradox.


Seven Principles of Software Testing

Before taking an in-depth look at those two, let us briefly run through all seven principles of software testing.

Let’s Explore!!

#1) Testing Shows the Presence of Defects

Every application or product is released into production only after a sufficient amount of testing by different teams, across different phases such as System Integration Testing (SIT), User Acceptance Testing (UAT), and Beta Testing.

Yet have you ever heard a testing team claim that they have tested the software fully and there are no defects in it? Instead, every testing team confirms that the software meets all business requirements and functions as per the needs of the end user.

In the software testing industry, no one claims that software is defect-free, and rightly so: testing cannot prove that software is error-free.

However, the objective of testing is to find as many hidden defects as possible using different techniques and methods. Testing can reveal undiscovered defects, but finding no defects does not mean the software is defect-free.

Example 1:

Consider a banking application. It is thoroughly tested and goes through different phases of testing like SIT and UAT, and currently no defects are identified in the system.

However, in the production environment an actual customer might try a functionality that is rarely used in the banking system, one the testers overlooked or whose code the developers never touched, and only then does a defect surface.

Example 2:

We have all seen television advertisements for soaps, toothpaste, handwash, disinfectant sprays, etc.

Consider a handwash advertisement claiming that the product removes 99% of germs. This wording itself admits the product is not 100% germ-free. Likewise, in testing terms, we can say that no software is defect-free.

#2) Early Testing

Testers need to get involved at an early stage of the Software Development Life Cycle (SDLC), so that defects in the requirement analysis phase or in documentation can be identified early. The cost of fixing such defects is far lower than that of defects found during later stages of testing.

Consider the image below, which shows how the cost of fixing a defect increases as the software moves towards live production.

early testing - defect fixing cost

[image source]

The above image shows that the cost of fixing a defect found during Requirement Analysis is low and keeps increasing as we move towards the Testing and Maintenance phases.

Now the question is how early should the testing start?

Once the requirements are finalized, testers need to get involved. Testing should be performed on requirement documents, specifications, and other artifacts, so that incorrectly defined requirements can be fixed immediately rather than during the development phase.

#3) Exhaustive Testing is Not Possible

It is not possible to test all functionalities with every valid and invalid combination of input data. Instead, a subset of combinations is tested based on priority, using different techniques.

Exhaustive testing would take unbounded effort, and most of that effort would be ineffective; project timelines simply do not allow testing so many combinations. Hence it is recommended to sample the input data using techniques like Equivalence Partitioning and Boundary Value Analysis.

For example, suppose an input field accepts alphabets, special characters, and numbers from 0 to 1000 only. Imagine how many combinations that produces; it is not possible to test every combination for each input type.

The testing effort required would be huge and would impact the project timeline and cost. Hence it is always said that exhaustive testing is practically not possible.
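A quick back-of-the-envelope sketch makes this concrete. The field rules below are a simplified, hypothetical reading of the example above (single-character alphabets and specials, whole numbers 0 to 1000); the point is only how fast the case count grows and how Boundary Value Analysis and Equivalence Partitioning shrink it.

```python
import string

# Hypothetical field rules: any single letter, any single special
# character, or a whole number from 0 to 1000.
alphabets = list(string.ascii_letters)   # 52 values
specials = list(string.punctuation)      # 32 values
numbers = list(range(0, 1001))           # 1001 values

# Exhaustive testing of even this ONE field needs every value...
exhaustive = len(alphabets) + len(specials) + len(numbers)
print(f"Exhaustive inputs for one field: {exhaustive}")  # 1085

# Boundary Value Analysis: just below, at, and just above each boundary.
bva_numeric = [-1, 0, 1, 999, 1000, 1001]

# Equivalence Partitioning: one representative per class.
ep_samples = ["a", "#", 500]

print(f"Reduced cases: {len(bva_numeric) + len(ep_samples)}")  # 9
```

With multiple fields the exhaustive count multiplies, which is why sampling techniques are the practical choice.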

#4) Testing is Context-Dependent

There are several domains in the market, like Banking, Insurance, Medical, Travel, and Advertising, and each domain has many applications. The applications in each domain have different requirements, functions, testing purposes, risks, and techniques.

Different domains are tested differently; testing is thus driven by the context of the domain or application.

For example, testing a banking application differs from testing an e-commerce or advertising application. The risk associated with each type of application is different, so using the same methods, techniques, and testing types for all of them is not effective.

#5) Defect Clustering

During testing, it may happen that most of the defects found relate to a small number of modules. There can be multiple reasons for this: the modules themselves may be complex, the code behind them may be complicated, and so on.

This is the Pareto Principle of software testing where 80% of the problems are found in 20% of the modules. We will learn more about Defect clustering and Pareto Principle later in this article.

#6) Pesticide Paradox

The Pesticide Paradox principle says that if the same set of test cases is executed again and again over a period of time, those tests become incapable of identifying new defects in the system.

To overcome this "Pesticide Paradox", the test cases need to be regularly reviewed and revised. If required, new test cases can be added, and existing test cases can be deleted if they are no longer able to find defects in the system.
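The review step above can be partly automated. This is a minimal sketch, with entirely hypothetical test-case names and defect history, of a helper that flags Pesticide Paradox symptoms: test cases that have caught zero defects in their most recent runs and are therefore candidates for revision or replacement.

```python
# Hypothetical history: defects each test case caught in its last 5 runs.
history = {
    "TC_Overdraft_01": [3, 1, 0, 0, 0],
    "TC_Overdraft_02": [2, 0, 0, 0, 0],
    "TC_FundsTransfer_01": [1, 0, 2, 1, 0],
}

def stale_tests(history, recent=3):
    """Return test cases that caught zero defects in their last `recent` runs."""
    return [tc for tc, runs in history.items() if sum(runs[-recent:]) == 0]

# Both Overdraft cases found nothing in their last 3 runs: review them.
print(stale_tests(history))  # ['TC_Overdraft_01', 'TC_Overdraft_02']
```

A flagged test is not automatically deleted; it is queued for the manual review the principle calls for.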

#7) Absence of Error

If the software is tested thoroughly and no defects are found before release, we may be tempted to call it defect-free. But what if the software was tested against the wrong requirements? In such cases, even finding defects and fixing them on time does not help, because the testing was performed against requirements that do not match the needs of the end user.

For example, suppose an e-commerce application's "Shopping Cart or Shopping Basket" requirement was wrongly interpreted and tested against that wrong interpretation. Here, even finding and fixing many defects does not help move the application to the next phase or into the production environment.

These are the seven principles of Software Testing.

Now let’s explore Defect Clustering, Pareto Principle and Pesticide Paradox in detail.

Defect Clustering

While testing software, testers often come across a situation where most of the defects found relate to a few specific functionalities, while the remaining functionalities have comparatively few defects.

Defect clustering means that a small number of modules contain most of the defects. The defects are not distributed uniformly across the application; rather, they are concentrated in two or three functionalities.

This can happen because the application is complex, the coding is tricky, or a developer made a mistake that impacts one specific functionality or module only.

Defect Clustering is based on the "Pareto Principle", also known as the 80-20 rule: 80% of the defects found come from 20% of the modules in the application. The principle is named after the Italian economist Vilfredo Pareto, who first described it.

If testers look at 100 defects as a flat list, no underlying pattern is visible. But if those 100 defects are categorized by some specific criteria, the testers may find that a large number of them belong to just a few specific modules.

For example, consider the image below from the testing of a banking application: it shows that most of the defects relate to the "Overdraft" functionality, while the rest of the functionalities, like Account Summary, Funds Transfer, and Standing Instruction, have a limited number of defects.

Defect Clustering

[image source]

The above picture shows 18 defects in the Overdraft functionality out of 32 defects in total, which means roughly 56% of the defects were found in the "Overdraft" module.
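The percentage split is easy to verify. In the sketch below, the Overdraft figure (18 of 32) comes from the example above; the counts for the other three modules are hypothetical, chosen only so that they sum to the stated total of 32.

```python
# Defect counts per module. Overdraft's 18 and the total of 32 come from
# the example; the other three counts are hypothetical filler.
defects = {
    "Overdraft": 18,
    "Account Summary": 6,
    "Funds Transfer": 5,
    "Standing Instruction": 3,
}

total = sum(defects.values())  # 32

# Print modules from most to least defective, with their share.
for module, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    print(f"{module}: {count} defects ({count / total:.0%})")
# Overdraft alone accounts for 18/32, i.e. about 56% of all defects.
```

Categorizing defects this way is exactly what turns a flat list of 32 defects into a visible cluster.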

Hence, testers concentrate on this area during execution to find more defects there. At the same time, it is recommended that testers keep a similar focus on the other modules as well.

When the same code or module is tested again and again with the same set of test cases, the defect count is high during the initial iterations but drops significantly after a few iterations. Defect clustering indicates which defect-prone areas should be tested thoroughly during regression testing.

Pesticide Paradox

When one module is found to have more defects, the testers put additional effort into testing that module.

After a few iterations of testing, the quality of the code improves and the defect count starts dropping, as most defects are fixed by the development team, and the developers themselves become more cautious while coding the module where testers found the most defects.

Eventually, most of the defects are discovered and fixed, and no new defects are found in that module.

However, while being extra cautious with one particular module (in our case, the "Overdraft" module), the developer may neglect to code the other modules properly, or the changes made in that module may negatively impact other functionalities like Account Summary, Funds Transfer, and Standing Instruction.

When the testers keep using the same set of test cases on the module where most defects were found (Overdraft), those test cases are no longer effective at finding new defects once the developers have fixed them: the end-to-end flow of the Overdraft module has been tested thoroughly, and the developers have written its code cautiously.

It is necessary to revise and update these test cases. It is also a good idea to add new test cases so that new defects can be found in different areas of the software or application.

Preventive Methods of Pesticide Paradox

There are two options through which we can prevent the Pesticide Paradox:

a) Write a new set of test cases focused on different areas or modules of the software (other than the earlier defect-prone module, e.g. "Overdraft").

b) Prepare new test cases and add to the existing test cases.

With method (a), testers can find more defects in the other modules, which received less focus during the earlier testing and where the developers were not extra cautious while coding.

In our above example, testers can find more defects in Account Summary, Funds Transfer or Standing Instructions modules using the new set of test cases.

But the testers may then neglect the earlier module ("Overdraft") where most of the defects were found in the previous iteration. This is a risk, as that module might have picked up new defects from the coding done on the other modules.

With method (b), new test cases are prepared so that potential new defects can be found in the rest of the modules.

Here in our example, newly created test cases will be able to help in identifying defects in the modules like Account Summary, Funds Transfer and Standing Instruction. However, testers cannot ignore the earlier defect prone modules (Example: “Overdraft”) as these new test cases are merged with the existing test cases.

The existing test cases focus on the "Overdraft" module and the new ones on the other modules, so all test cases are executed at least once whenever a code change happens in any module. This ensures proper regression coverage, and any defect introduced by the change can be identified.

With this second approach, however, the total test case count grows significantly, requiring more effort and time for execution. This impacts the project timeline and, most importantly, the project budget.

Hence, to overcome this problem, the redundant test cases should be reviewed and removed. Many test cases become redundant after new test cases are added and existing ones are modified.

It is necessary to check which test cases have failed to identify any defect in the last few iterations (let's assume 5 iterations) and which test cases are less important. It may also be the case that a flow covered by a few single-flow test cases is already covered by an end-to-end test case; those single-flow test cases can then be removed.

This, in turn, will reduce the total test case count.

For example, suppose we have 50 test cases covering one particular module, and 20 of them have not detected a single new defect over the last few testing iterations (let's assume 5 iterations). These 20 test cases need to be reviewed thoroughly to judge how important they are, and a decision can then be made on whether to keep or remove them.

Before removing any test case, verify that the functionality flow it covers is also covered by another test case. Following this process across all modules significantly reduces the total test case count while still retaining 100% requirement coverage.
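The pruning rule above, remove a test only if everything it covers is covered elsewhere, can be sketched as a simple greedy pass. The test-case names and the test-to-requirement mapping below are hypothetical; a kept test must contribute at least one requirement that no already-kept test covers.

```python
# Hypothetical mapping of test cases to the requirements they cover.
coverage = {
    "TC_01_end_to_end": {"REQ1", "REQ2", "REQ3"},
    "TC_02_single_flow": {"REQ2"},   # fully covered by TC_01
    "TC_03_limits": {"REQ4"},
    "TC_04_single_flow": {"REQ3"},   # fully covered by TC_01
}

def prune(coverage):
    """Keep a test only if it covers something no kept test covers yet."""
    kept, covered = [], set()
    # Visit broad (end-to-end) tests first so narrow duplicates drop out.
    for tc, reqs in sorted(coverage.items(), key=lambda kv: -len(kv[1])):
        if not reqs <= covered:      # contributes at least one new requirement
            kept.append(tc)
            covered |= reqs
    return kept, covered

kept, covered = prune(coverage)
print(kept)     # ['TC_01_end_to_end', 'TC_03_limits']
print(covered)  # all four requirements still covered
```

The suite shrinks from four test cases to two, yet requirement coverage stays at 100%, which is exactly the "no compromise on quality" condition.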

It means that all the remaining test cases cover all the business requirements, hence there is no compromise on quality.

Conclusion

Software Testing is an essential step in the SDLC, as it verifies whether the software is working as per the end user's needs.

Testing identifies as many defects as possible. Hence, to perform testing effectively and efficiently, everyone should clearly understand the seven principles of software testing, as they are the pillars of testing.

Most of the testers have implemented and experienced these principles during actual testing.

Generally, the term principle means a rule or law that needs to be followed. Hence, everyone in the software testing industry should follow these seven principles; ignoring any of them can cost the project dearly.

Happy Reading!!
