Delivering the best possible product quality is the primary goal of test organizations.
With the help of an efficient quality assurance process, test teams try to find as many defects as possible during testing, so that the client or end user consuming the product does not encounter any abnormal behavior in their own computing environment.
Since finding defects is one of a tester's main goals, testers need to carefully design test scenarios to make sure the application or product performs the way it is supposed to.
While it is certainly important to verify that the software performs its basic functions as intended, it is equally or even more important to verify that the software can gracefully handle abnormal situations. Most defects arise from generating such situations with reasonable and acceptable creativity on the tester's part.
Most of us are already aware of several types of testing, such as functional testing, sanity testing, smoke testing, integration testing, regression testing, alpha and beta testing, accessibility testing, etc. However, whatever category of testing you perform, the entire testing effort can be generalized into two categories: positive testing paths and negative testing paths.
In the next sections, we discuss what positive and negative testing are, how they differ, and some examples of negative tests that can be performed while testing an application.
Positive testing, often referred to as "happy path testing", is generally the first form of testing that a tester performs on an application. It is the process of running the test scenarios that an end user would run in normal use. Hence, as implied, positive testing entails running a test scenario with only correct and valid data. If a test scenario doesn't need data, positive testing requires running the test exactly in the manner in which it's supposed to run, to ensure that the application meets its specifications.
Sometimes there may be more than one way of performing a particular function or task, either to give the end user more flexibility or for general product consistency. Testing these is called alternate path testing, which is also a kind of positive testing. In alternate path testing, the test is again performed to meet its requirements, but using a different route than the obvious path. The test scenario even consumes the same kind of data to achieve the same result.
It can be understood from a very generic example described below:
A is the starting point and B is the endpoint, and there are two ways to go from A to B. Route 1 is the generally taken route and Route 2 is an alternative route. Therefore, in such a case, happy path testing would traverse from point A to B using Route 1, and alternate path testing would take Route 2 to go from A to B. Observe that the result in both cases is the same.
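The idea above can be sketched in code. This is a minimal, hypothetical illustration (the two "routes" here are simply two ways of producing a sorted list, which is an assumption made for the example, not anything from the article's wizard):

```python
# Two valid routes from the same input (point A) to the same result (point B).
def route_1(items):
    # The obvious route: the built-in sorted() function.
    return sorted(items)

def route_2(items):
    # The alternate route: copy the list and sort it in place.
    copy = list(items)
    copy.sort()
    return copy

def test_happy_path():
    # Happy path: Route 1 with valid data reaches the expected endpoint.
    assert route_1([3, 1, 2]) == [1, 2, 3]

def test_alternate_path():
    # Alternate path: Route 2 consumes the same data and must
    # arrive at the same result as Route 1.
    assert route_2([3, 1, 2]) == route_1([3, 1, 2])

test_happy_path()
test_alternate_path()
```

Both tests are positive tests: each feeds only valid data and verifies that the endpoint is reached, just by different routes.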
Negative testing, commonly referred to as error path testing or failure testing, is generally done to ensure the stability of the application.
Negative testing is the process of applying as much creativity as possible to validate the application against invalid data. Its purpose is to check whether errors are shown to the user where they should be, and whether bad values are otherwise handled gracefully.
It is absolutely essential to understand why negative testing is necessary.
The functional reliability of an application or piece of software can be quantified only with effectively designed negative scenarios. Negative testing not only aims to bring out potential flaws that could seriously impact the consumption of the product as a whole, but is also instrumental in determining the conditions under which the application can crash. Finally, it ensures that sufficient error validation is present in the software.
Say, for example, you need to write negative test cases for a pen. The basic purpose of the pen is to write on paper.
Some examples of negative testing could be: trying to write on a surface other than paper, such as cloth or glass; attempting to write after the ink has run out; or writing while applying excessive pressure on the nib.
Let’s take an example of a UI wizard to create some policies. In the wizard, the user has to enter textual values in one pane and numerical values in another.
In the first one, the user is expected to give a name to the policy as shown below:
Let's also set some ground rules to make sure we design good positive and negative scenarios.
Now let's design the positive and negative test cases for this example.
Positive test cases: Below are some positive testing scenarios for this particular pane.
Negative test cases: Below are some negative testing scenarios for this particular pane.
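A sketch of such scenarios in code might look like the following. Note that the validation rules used here (a policy name of 1 to 64 characters, limited to letters, digits, hyphens, and underscores) are assumptions made purely for illustration; the actual wizard's ground rules would define the real boundaries:

```python
import re

# Hypothetical validator standing in for the policy-name pane.
# The rules (1-64 chars; letters, digits, hyphens, underscores)
# are assumed for this sketch, not taken from any real wizard.
def validate_policy_name(name):
    if not isinstance(name, str):
        return False
    return re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name) is not None

# Positive scenarios: names the pane should accept.
assert validate_policy_name("Policy1")
assert validate_policy_name("backup_policy-2")

# Negative scenarios: input the pane should reject gracefully.
assert not validate_policy_name("")                 # empty name
assert not validate_policy_name("a" * 65)           # exceeds maximum length
assert not validate_policy_name("name with spaces") # disallowed whitespace
assert not validate_policy_name("drop;table")       # special characters
```

The positive cases confirm the pane does what the specification says; the negative cases confirm it refuses what the specification forbids.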
In the second pane, the user is expected to put in only numerical values as shown below:
Let’s establish some ground rules here as well:
Therefore, here are some positive and negative test scenarios for this particular pane.
Positive test scenarios: Below are some positive testing scenarios for this particular pane.
Negative test scenarios: Below are some negative testing scenarios for this particular pane.
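In code, scenarios for a numeric-only pane might be sketched as follows. The accepted range of 1 to 100 is an assumption for this illustration; the real pane's ground rules would dictate the actual limits:

```python
# Hypothetical validator standing in for the numeric pane.
# The accepted range (whole numbers 1-100) is assumed for this sketch.
def validate_numeric_field(value):
    try:
        # Reject anything that cannot be parsed as a whole number.
        number = int(str(value).strip())
    except ValueError:
        return False
    return 1 <= number <= 100

# Positive scenarios: whole numbers inside the accepted range.
assert validate_numeric_field(1)
assert validate_numeric_field("100")

# Negative scenarios: out-of-range, non-numeric, and malformed input.
assert not validate_numeric_field(0)      # below the range
assert not validate_numeric_field(101)    # above the range
assert not validate_numeric_field("abc")  # textual input
assert not validate_numeric_field("12.5") # not a whole number
assert not validate_numeric_field("")     # empty input
```

Again, the negative scenarios are where most of the creativity lies: textual input, decimals, and empty values all probe how gracefully the pane fails.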
If you closely observe the examples above, you will notice that there can be many positive and negative scenarios. However, testing is effective when you optimize an otherwise endless list of positive and negative scenarios in such a way that you achieve sufficient coverage.
Also, in both cases you will see a common pattern in how the scenarios are devised: two basic parameters, or techniques, form the basis for designing a sufficient number of positive and negative test cases.
The two parameters are:
Boundary Value Analysis:
As the name itself implies, a boundary indicates the limits of something. Hence this involves designing test scenarios that focus on the boundary values and validate how the application behaves around them. Inputs supplied within the boundary values constitute positive testing, while inputs beyond the boundary values constitute negative testing.
For example, suppose a particular application accepts VLAN IDs ranging from 0 to 255. Here, 0 and 255 form the boundary values. Any input below 0 or above 255 is considered invalid and hence constitutes negative testing.
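The boundary-focused scenarios for this VLAN example can be sketched as follows, with a simple range check standing in for the application under test:

```python
# Stand-in for the application under test: accepts the article's
# VLAN ID range of 0-255.
def accepts_vlan_id(vlan_id):
    return 0 <= vlan_id <= 255

# Positive boundary tests: the limits themselves and their inner neighbours.
for valid in (0, 1, 254, 255):
    assert accepts_vlan_id(valid)

# Negative boundary tests: the values just outside each limit.
for invalid in (-1, 256):
    assert not accepts_vlan_id(invalid)
```

Boundary value analysis concentrates the test effort where off-by-one mistakes are most likely: at and immediately around each limit.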
Equivalence Partitioning:
In equivalence partitioning, the test data is segregated into various partitions, referred to as equivalence data classes. The various input data (where data can also be a condition) in each partition are assumed to behave the same way. Hence only one particular condition or situation needs to be tested from each partition: if one works, then all the others in that partition are assumed to work. Similarly, if one condition in a partition doesn't work, then none of the others will.
Therefore it is now very apparent that valid data classes (in the partitions) will comprise positive testing, whereas invalid data classes will comprise negative testing.
In the same VLAN example above, the values can be divided into two partitions.
The two partitions here would be: values from 0 to 255 (the valid partition), and values below 0 or above 255 (the invalid partition).
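A minimal sketch of equivalence partitioning for this VLAN example, where a few representative values stand in for each whole partition:

```python
# Representative values per partition. By the equivalence assumption,
# testing one value from each class stands for testing the whole class.
PARTITIONS = {
    "valid (0-255)": [0, 128, 255],
    "invalid (outside 0-255)": [-10, 300],
}

# Stand-in for the application under test.
def accepts_vlan_id(vlan_id):
    return 0 <= vlan_id <= 255

# Every representative of the valid partition should be accepted...
assert all(accepts_vlan_id(v) for v in PARTITIONS["valid (0-255)"])
# ...and every representative of the invalid partition should be rejected.
assert not any(accepts_vlan_id(v) for v in PARTITIONS["invalid (outside 0-255)"])
```

In practice this is how an endless input space is reduced to a handful of test cases without losing coverage of distinct behaviors.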
I have often faced situations where people believe that negative testing is more or less a duplication of positive testing, rather than accepting that it substantiates positive testing. My stand on these questions has always been consistent as a tester: those who understand and strive for high standards and quality will doubtless enforce negative testing as a mandatory part of the quality process.
While positive testing ensures that the business use case is validated, negative testing ensures that the delivered software has no flaws that can be a deterrent in its usage by the customer.
Designing precise and powerful negative test scenarios requires creativity, foresight, skill, and intelligence from the tester. Most of these skills can be acquired with experience, so hang in there and keep assessing your full potential time and again!
About the Author: This is a guest article by Sneha Nadig. She works as a Test Lead with over 7 years of experience in manual and automation testing projects.
Let us know your thoughts and experience about negative testing.