Performance Test Plan and Test Strategy were explained clearly in our previous tutorial.
If your company is involved in developing, testing and deploying software, then you’re already using more technologies and tools than can be counted.
The sheer quantity of technology in the enterprise is all the more reason to leverage the tools and products already in use to enhance your performance testing and deliver more value, faster, and at a lower cost.
Here are five ways to help supercharge your performance testing:
#1) Mix Real Client Functional Tests
When you load test your servers, your goal is to get a clear answer about the end user's experience – not just how the server itself behaves.
You're trying to characterize the entire round trip of a request that starts with a user's click in the browser, continues with the request to the server and its response, and ends with the response being rendered in the client browser.
Load testing is focused on creating mass load and therefore works at the protocol level. By simulating many clients it loads the server side, while the client itself gets less attention. Put differently, the client footprint is minimized to create an effective load on the server.
Nevertheless, it is only through the client side that you can truly evaluate end-user experience.
By combining Selenium browser automation test scripts (or other functional test scripts) into load testing you’ll get a complete picture of the behavior of your system during load. While you generate load in your load testing tool, you’ll run a few browsers from different locations and measure their activity.
The idea is to combine an effective load on one hand, with a few browsers measuring the real user experience on the other hand.
You do want to be able to analyze the combined results of the server-side load together with the Selenium results, similar to the graph below.
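To see what that combined view adds, here is a minimal Python sketch (with made-up numbers) of the gap between server-reported response times and the full page-load times a few Selenium-driven browsers might measure during the same load test window:

```python
from statistics import mean

# Hypothetical samples collected during the same load test window.
# Server-side: response times (seconds) reported by the load testing tool.
# Client-side: full page-load times (seconds) measured by a few real
# browsers driven by Selenium while the load was running.
server_response_times = [0.21, 0.25, 0.31, 0.45, 0.52, 0.60]
browser_page_load_times = [1.8, 2.1, 2.6, 3.9, 4.4, 5.0]

def client_overhead(server_samples, client_samples):
    """Average gap between what the server reports and what a real
    browser experiences: network latency plus client-side rendering."""
    return mean(client_samples) - mean(server_samples)

overhead = client_overhead(server_response_times, browser_page_load_times)
print(f"Average client-side overhead: {overhead:.2f}s")
```

The point of the exercise: a server that answers in half a second can still give users a five-second experience, and only the client-side measurement reveals that.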
WebLOAD provides a Selenium extension to collect performance statistics.
#2) Integrate Mobile Testing
We’re now past the mobile tipping point, and the number of mobile users has surpassed desktop users. Keeping this in mind, it’s unreasonable to exclude mobile from your performance tests.
Similar to the real browser testing discussed earlier, your goal is to measure the user experience on a real mobile device while generating load on your system.
- Use different mobile devices and networks to test mobile experience
- Analyze combined statistics of both backend servers under load and real mobile device
- Execute performance tests on real mobile devices from within your load testing
For example, WebLOAD has a Perfecto Mobile integration (and can be integrated with any mobile service that contains a proper API). While the load test is running, all front-end measurements from Perfecto Mobile are captured, letting you view the response time for each transaction from the mobile device to the backend servers and back.
You can also evaluate within WebLOAD mobile device metrics like CPU, memory and battery usage. Or you can view side-by-side measurements of device-side data and server-side information to fully evaluate the correlation between all test components.
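A minimal sketch of that side-by-side correlation analysis, using invented samples rather than real Perfecto Mobile data:

```python
from math import sqrt

# Hypothetical side-by-side samples captured at the same timestamps:
# device CPU utilization (%) from the mobile service, and the response
# time (seconds) of the same transaction from the load test report.
device_cpu = [35, 40, 55, 70, 85, 90]
response_time = [1.1, 1.2, 1.6, 2.3, 3.0, 3.4]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(device_cpu, response_time)
print(f"CPU vs response-time correlation: {r:.2f}")
```

A correlation close to 1.0 would suggest the slowdown tracks the device itself rather than the backend – exactly the kind of conclusion you can only reach with both data sets in hand.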
#3) Use APM Tools to Isolate Problems
A load testing tool will help you stress your system to surface problems, and in most cases will also provide server-side statistics to help identify the general location of the problem.
With some performance issues, however, you’ll need to drill down into the application in order to isolate and fix the code causing the slow response time. In such cases, combining your load testing tool with an Application Performance Monitoring (APM) product, such as Dynatrace, AppDynamics or NewRelic will speed up root cause analysis.
This will help you see the interrelationships between all components in real time – the web server, application server, database servers, cloud services and so on. You’ll be able to quickly locate the bottlenecks in your system by drilling down to the stack trace level and identifying the calls that are the most problematic.
A colleague of mine who manages performance testing at a software company estimates that his team has cut down root cause analysis by 75% with the integration between load testing and APM!
Ideally, you want your load testing tool and the APM tool to be integrated so you can easily switch between them and see events in the exact same context.
For instance, once you spot a problem in the report generated by your load testing tool, you should be able to switch easily to the APM tool that has been monitoring your servers, drill down into the exact context and timing of the load test, and conduct a deep analysis of the relevant events.
Using WebLOAD, for example, you can highlight a point in the graph where there is a spike in login response time. Right-clicking the spike lets you switch to an APM tool (in this example, Dynatrace) to continue the drill-down and isolate the problem.
Drilling down with the APM tool, you can trace the lengthy login primarily to slow database access.
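To illustrate what such a drill-down surfaces, here is a small Python sketch with an invented breakdown of a slow login transaction; the tier names and timings are hypothetical, not output from any real APM tool:

```python
# Hypothetical breakdown of a slow "login" transaction, similar to what
# an APM drill-down might show: time spent (ms) in each tier.
login_trace = {
    "web server": 40,
    "application code": 180,
    "database access": 2350,
    "network": 120,
}

def dominant_component(trace):
    """Return the tier consuming the largest share of the transaction."""
    total = sum(trace.values())
    tier = max(trace, key=trace.get)
    return tier, trace[tier] / total

tier, share = dominant_component(login_trace)
print(f"{tier} accounts for {share:.0%} of the login time")
```

In this invented trace the database dominates, which is the signal to stop tuning the web tier and go look at the query or the index instead.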
A tight integration between your load testing tool and APM tool can significantly cut root-cause identification time and help accelerate processes with agile teams and continuous delivery processes.
#4) Integrate with DevOps Processes
To keep up with agile development trends and faster deployments, you should extend performance testing to accommodate software delivery methods like Continuous Delivery and DevOps. It’s much easier to reach performance goals with multiple teams (R&D, QA, DevOps) working in cooperation and sharing data.
Companies that have adopted DevOps are doing 8x more frequent production deployments, have 2x higher success rates, and are fixing issues 12x faster when something goes wrong.
Your goal is to automatically and continuously verify the performance of each build and validate that it can continue its way to production.
Start by identifying the performance tests that should be automated. For example, AVG, which provides antivirus software and internet security services, uses WebLOAD to run load tests that verify the stability and response times of its revenue-critical pages.
While its performance tests cover other aspects too, it is the response times of business-critical areas that were automated and are run regularly alongside software releases.
Ellucian, an educational ERP software company, automates performance tests that validate the scalability of its production infrastructure as a service provider. The performance testing team has integrated load tests into DevOps processes and can automatically orchestrate them against every build in the pipeline.
Pass/fail criteria in performance tests:
Note that before you can automate anything, you need clear pass/fail definitions for each of your tests. The thing is – pass/fail criteria in performance tests are typically not as clear-cut as they are in functional testing.
Still, here are some testing goals you might consider. These will let you determine how changes in performance should affect the next step in the continuous delivery pipeline.
- Response time – A maximum response time per transaction, or an average response time for multiple transactions above which a test will be defined as failed.
- Error rate – An acceptable error rate threshold for transactions.
- Hits, or requests per second – A drop in hits per second indicates that the server is handling fewer requests, which you will want to investigate.
- Average throughput – The average number of bytes that virtual users receive from the server at any given second can help you spot server issues that result in sending more data than needed.
- Server CPU or memory – Ensuring that CPU or memory does not go up above a certain threshold helps avoid potential crashes and a slow response.
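The criteria above can be turned into an automated gate. A minimal Python sketch, with illustrative thresholds and invented result numbers (the values are placeholders, not recommendations):

```python
# Hypothetical thresholds based on the criteria above; the numbers are
# illustrative, not recommendations.
THRESHOLDS = {
    "avg_response_time_s": 2.0,   # fail above this average response time
    "error_rate": 0.01,           # fail above 1% failed transactions
    "min_hits_per_second": 500,   # fail below this request rate
}

def evaluate(results):
    """Return a list of failure messages; an empty list means the build passes."""
    failures = []
    if results["avg_response_time_s"] > THRESHOLDS["avg_response_time_s"]:
        failures.append("average response time too high")
    if results["error_rate"] > THRESHOLDS["error_rate"]:
        failures.append("error rate too high")
    if results["hits_per_second"] < THRESHOLDS["min_hits_per_second"]:
        failures.append("throughput too low")
    return failures

# Example results as they might be exported from a load test report.
results = {"avg_response_time_s": 1.4, "error_rate": 0.03, "hits_per_second": 620}
failures = evaluate(results)
for f in failures:
    print(f"FAIL: {f}")
exit_code = 1 if failures else 0  # in CI, sys.exit(exit_code) fails the build
```

Wrapped as a script, a non-zero exit code is all a Jenkins or Bamboo build step needs to stop a failing build from moving down the pipeline.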
The best way to trigger your tests automatically is using Jenkins, the open source automation platform, where you can use the ‘Build Step’ option as well as set thresholds to automatically alert on poor performance. Another option is Bamboo, which also lets you build continuous integration, deployment, and delivery processes.
#5) Extend analysis using external tools
Your performance testing tool may provide outstanding reporting and analytical capabilities, which may answer 95% of your needs. But reporting and analysis needs tend to be unique. You may need to view or present test data from a specific angle or in a way that is not supported by your tool.
To extend your performance testing reporting and analysis capabilities you should be able to export test data and then use external tools to further manipulate, slice and dice as needed.
This may be as simple as using Excel or your homegrown system to run some calculations and present your own insights, or else a more sophisticated business intelligence (BI) tool like Tableau.
You should be able to export test data using various methods, such as SQL queries run directly against the results database, or a RESTful API.
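As a minimal sketch, here is how a homegrown export might query a results database directly with SQL; the `transactions` table and its columns are invented for illustration, since every tool has its own schema:

```python
import sqlite3

# In-memory stand-in for a results database; real tools would expose
# their own (tool-specific) database or export files.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (name TEXT, response_time_s REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("login", 1.2), ("login", 3.4), ("search", 0.8), ("search", 1.0)],
)

# Slice the raw data per transaction - the kind of custom cut a built-in
# report might not offer - then hand it to Excel, a BI tool, and so on.
rows = conn.execute(
    "SELECT name, AVG(response_time_s), MAX(response_time_s) "
    "FROM transactions GROUP BY name ORDER BY name"
).fetchall()
for name, avg_t, max_t in rows:
    print(f"{name}: avg {avg_t:.2f}s, max {max_t:.2f}s")
```

From here the rows can go straight into a spreadsheet, a CSV for Tableau, or any other downstream analysis.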
We hope these points help you supercharge your performance testing and achieve your performance goals.
About the Author: This is a guest post by RadView VP Product David Buch. He has led R&D teams in several high-tech companies. Prior to RadView, David was VP R&D at Softlib and Brightinfo, R&D Manager at HP Software, and Director of R&D at Mercury Interactive. David holds a BA magna cum laude in computer science and economics from Bar Ilan University and is a graduate of MAMRAM (the Israeli Army Computer Corps).
How Do You Supercharge?
How do you supercharge your performance testing? Which of the above concepts work best for you? What other ideas do you use? Share your thoughts below!
Are you looking for a Cloud Performance Testing Guide? Please check our upcoming tutorial.