Software testing covers a wide range of activities for verifying and validating software functionality. Quite often, the non-functional aspects receive less attention than the functional ones, and in practice the two are rarely tested simultaneously.
This article explains the benefits to software quality when both functional and non-functional testing are carried out simultaneously at various points in the software testing life-cycle.
Why functional testing and performance testing should be done simultaneously
Functional testing is critical before any software release. It usually happens through verification and validation of actual results in a test environment that replicates production.
Defect leakage can become one of the greatest issues:
Testers carry more responsibility than developers for the quality of the product. Above all, they don’t want the tested product to suffer defect leakage, and to achieve this they generally tend to focus only on functional testing.
The following is a conversation between a Test Manager and a Tester:
(Test Manager is referred to as ‘TM’ and Tester as ‘TR’)
TM: Hey buddy… How are we doing in the product ‘A’ testing?
TR: Yep… We are making great progress.
TM: That’s fantastic… And what is our scope in terms of performance testing while functional testing is under execution?
TR: We aren’t covering that. Our deliverables are supposed to cover only the functional area, not the non-functional area. Also, the test environment we’re using is not an exact replica of production.
There are a few questions from the above conversation to be considered:
- Does functional testing depend on performance?
- What if the software’s performance is degraded, but the product is delivered without the performance being checked?
- Can performance testing co-exist within the functional testing process?
It has become a general practice for testers not to work on non-functional aspects unless they are requested to do so. It’s common to avoid non-functional testing until the client reports issues with the performance of the software under test.
So, there are 2 questions for you to consider:
- Does performance impact functional testing?
- Should performance testing remain a separate deliverable, even when it worries the client?
Performance testing is important!
Software is built on various architectures and models, including:
- Request-response models
- Transaction-based systems
- Load-based systems
- Data replication-based systems
In all of the above models, the functional testing behaviour depends on the performance of the system.
From an automation point of view as well, performance testing requires close attention.
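As a minimal sketch of the idea, a functional test can assert a response-time budget alongside its functional checks, so a performance degradation surfaces during functional execution rather than after release. The function names, values, and threshold below are illustrative, with a stub standing in for the real request-response call:

```python
import time

def get_account_balance(account_id):
    # Stub standing in for a real request-response call to the
    # system under test (e.g. an HTTP or message-based service).
    time.sleep(0.05)  # simulated service latency
    return {"account_id": account_id, "balance": 1500}

def test_balance_lookup(max_response_secs=2.0):
    start = time.perf_counter()
    result = get_account_balance("ACC-001")
    elapsed = time.perf_counter() - start

    # Functional check: the response content is correct.
    assert result["balance"] == 1500

    # Performance check piggybacked onto the same test execution.
    assert elapsed < max_response_secs, f"response took {elapsed:.2f}s"
    return elapsed

test_balance_lookup()
```

The same pattern scales well in automation: every regression run then doubles as a lightweight performance check at no extra cost.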
The following is a conversation between a client and the Test Manager.
(Client is referred to as ‘CL’ and Test Manager as ‘TM’)
CL: Coming to the solution we have requested, I expect there will be multiple iterations of the testing that is currently happening.
TM: Yes, that can be done. Since, as you say, iterative testing is highly likely, we would like to propose automation for the functional (regression) testing.
CL: OK, great. Please send us your approach so we can approve it. Automation will deliver much higher output with minimal effort.
TM: Exactly. We will work on the approach and get back to you with a Proof of Concept.
From the above conversation, it is clear that the client’s need is to optimize efficiency.
Company ABC is working on a project to develop Software A. Testing of Software A is being done by Company XYZ.
The contract between Company ABC and XYZ places some restrictions on their collaboration: any discussion between the two companies can happen only once a week or three times a month. The system works on a request-response model. The development phase has been completed by Company ABC.
Now it is time for Company XYZ to perform formal functional testing on Software A. XYZ starts testing, and after two cycles of testing gives the software a clean bill of health and the ‘Go’ for live implementation.
In spite of the quality certification from the testing team, the live implementation did not go well. There were lots of post-production bugs and a large number of issues faced by the clients, including broken functionality in end-to-end business processes.
So now what is the problem?
- Is it a problem with the restriction on collaboration between the development and testing teams?
- Is it that the requirements were not captured 100%?
- Is it that the product was not tested in a proper test environment?
- Or any other causes?
After careful research and analysis, the following were inferred:
- A few of the dependent and interdependent applications had performance issues while fetching responses.
- The test inputs used were not comprehensive.
- The robustness of the software had not been addressed.
- There were many sync issues between the multiple independent applications.
- Multiple rounds of re-work had been done during testing but were not accounted for.
After the remedial action planning team stepped in, the following actions were suggested:
- Interaction between the development team and the testing team has to be increased.
- All dependent applications need to be connected and included in system functional testing.
- The request and response timeout values need to be increased to give room to non-production environments.
- Inputs ranging from simple to complex have to be used in functional testing.
- Non-functional testing, especially performance and load testing, has to be done as advised by the remedial team.
- In addition to system testing, system integration testing has to be performed.
- A minimal time gap between any two testing iterations has to be provided, to allow re-testing of previously identified bugs.
- All bugs identified in previous iterations should be fixed in the current iteration.
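The timeout recommendation above can be captured in configuration rather than hard-coded values. The sketch below is a hypothetical per-environment timeout table (environment names and values are illustrative): non-production environments get more generous request/response timeouts so that slower test hardware does not produce false functional failures.

```python
# Hypothetical per-environment timeout table (values are illustrative).
TIMEOUTS_SECS = {
    "production": 5,
    "staging": 15,
    "qa": 30,
}

def timeout_for(environment):
    # Unknown test environments fall back to the most generous value,
    # on the assumption that they are the slowest.
    return TIMEOUTS_SECS.get(environment, max(TIMEOUTS_SECS.values()))
```

Keeping the values in one table makes the relaxation for test environments explicit and easy to review, instead of scattering magic numbers across test scripts.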
The testing team implemented all of the proposed actions, and a great number of defects were uncovered in a short time.
- The live implementation schedule of the software improved significantly, thanks to optimized test-cycle times.
- Software quality improved noticeably, leading to a tremendous decrease in post-implementation support tickets.
- Re-work decreased and was replaced by proper testing iterations, with clear quality improvements observed between iterations.
Performing non-functional testing during functional test execution is advantageous and adds to the overall software quality. It identifies performance bugs early (within the limits of the test environment and its dependencies) and thereby reduces situations where performance problems are mistaken for functional issues.
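Even a minimal load check can reuse an existing functional check under concurrency. The sketch below assumes illustrative values (user count, simulated latency, and the p95 budget) and a stub in place of the real call to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def lookup_order(order_id):
    # Stub for a functional call to the system under test.
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service latency
    return time.perf_counter() - start

def load_check(n_users=20, max_p95_secs=1.0):
    # Run the same functional check concurrently for n_users virtual users
    # and assert a budget on the 95th-percentile response time.
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        timings = sorted(pool.map(lookup_order, range(n_users)))
    p95 = timings[int(len(timings) * 0.95) - 1]
    assert p95 < max_p95_secs, f"p95 latency {p95:.2f}s exceeds budget"
    return p95

load_check()
```

This is not a replacement for a dedicated load-testing tool, but it gives functional test runs an early warning when response times degrade under even modest concurrency.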
Sufficient planning for performing both functional and non-functional testing (at least to a minimum level) has to be done in order to maintain a strong relationship with the other stakeholders of the project.
About the Author: This article was written by Nagarajan. He works as a test lead with over 6 years of testing experience, both manual and automation, in functional areas such as Banking, Airlines and Telecom.
What is your approach to doing functional and performance testing? Let us know in the comments below.