This Video Tutorial Explains the Features of the Gatling Reports Generated When a Simulation Is Executed Either Through the Install Location or Through a Maven-based Project:
In this tutorial, we will walk through the major sections of the feature-rich, out-of-the-box reports that are generated when a Gatling simulation is run through the Gatling script runner.
We will also see which configuration options are available in the Gatling configuration file to adapt these reports to our requirements. These options add a powerful dimension to the reports, as we can configure things like the response-time thresholds shown in the graphs, the percentiles to be calculated, and so on.
=> Visit Gatling Tutorial Series Here
Gatling Reports: Video Tutorial
Where Are Gatling Reports Generated?
Gatling scripts can generally be executed from 2 locations:
#1) If you have recorded the script directly using the Gatling recorder (bundled with the Gatling installer), scripts executed through the Gatling script runner store their results in the $GATLING_HOME/results folder ($GATLING_HOME is an environment variable that should be set to the Gatling install location).
#2) If the script is executed from within a Maven project (for example, via mvn gatling:test with the gatling-maven-plugin), the results are stored in a top-level folder named “target” within the project directory.
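As a rough sketch (the simulation name and timestamp below are illustrative), the generated report lands in locations like the following, with index.html as the entry point of the HTML report:

$GATLING_HOME/bin/gatling.sh (run from the bundle)
=> report at $GATLING_HOME/results/mysimulation-20250101120000/index.html

mvn gatling:test (run from a Maven project)
=> report at target/gatling/mysimulation-20250101120000/index.html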
Let’s see below what a Gatling report looks like. We will cover all its components in detail in the further sections of this tutorial.
Gatling Report Sections
At a broader level, Gatling reports have 2 sections, displayed as 2 tabs, namely Global and Details. We will discuss each of these tabs and their contents.
a) Global Section
This section shows global or aggregate-level information, such as the total number of requests/responses and the total failed requests/responses at the scenario or simulation level.
As displayed in the screenshot above, there are different subsections, which are discussed briefly below. (Please match the numbered markers in the screenshot against the descriptions that follow.)
#1) Scenario details: It includes the scenario name and overall simulation stats (like the start time and the total duration of the simulation execution).
#2) Distribution of response times: This denotes the response time distribution for all the requests executed as part of the scenario.
Please note that each request is counted individually for these metrics. For example, take a scenario like adding a product to the cart that involves 5 API calls; this section will show the count as the number of calls multiplied by the number of users for which the simulation was executed (e.g. 5 calls x 100 users = 500 requests).
#3) Number of requests: This section shows the split of failed vs successful requests. It’s also termed OK vs KO.
Here, the naming might be a little difficult for new users to understand: OK means a successful request, while KO means a failed one (for example, a request that received an HTTP error or failed one of its checks).
#4) Statistics: This section details statistics at the individual scenario level. It shows data at the request level, with metrics like total requests, total requests passed/failed, and percentile values such as the 50th/75th/95th/99th percentile.
The below image shows an example scenario that executes 2 requests and the statistics for the same:
You can see here that since the scenario has multiple requests, the statistics section details data or metrics at the individual request level.
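For orientation, here is a minimal sketch of what a simulation with such a two-request scenario could look like in the Gatling Scala DSL (the class name, URLs, and request names here are hypothetical; each named request gets its own row in the statistics table):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class TwoRequestSimulation extends Simulation {

  // Hypothetical system under test
  val httpProtocol = http.baseUrl("https://computer-database.gatling.io")

  val scn = scenario("Browse And Search")
    .exec(
      http("Home Page") // request #1 - shows as its own row in Statistics
        .get("/computers")
        .check(status.is(200)) // a failed check marks the request as KO
    )
    .pause(1.second)
    .exec(
      http("Search Computers") // request #2
        .get("/computers?f=macbook")
        .check(status.is(200))
    )

  // 100 users x 2 requests each = 200 requests counted in the Global tab
  setUp(scn.inject(atOnceUsers(100))).protocols(httpProtocol)
}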
b) Details Section
In the Details section, you can see more granular details per request type in the scenario.
You can navigate to the Details tab and select the request for which details are to be displayed (from the left side navigation against executed requests).
Depending on the selected request, various details like response time distribution and statistics for the selected request will be displayed on the right side.
Let’s see some more details/graphs that are shown as part of the details tab:
(i) Active users along the simulation: This graph shows the number of users actively present in the test at a particular instant/point in time. It shows values for the total active users as well as per scenario. In this case, since there was only one scenario, the two graphs overlap each other. The shape of this graph follows the injection profile defined in the simulation (see the sketch after this list).
(ii) Response time distribution: This displays the count of requests distributed over response time. In other words, it groups the requests by response time, which helps to find the average and outlier values simply by looking at the graph.
In an ideal scenario, there should not be much variance in the response time distribution. Since this is a test API, we are getting a wide distribution.
(iii) Requests per second: This chart is nothing but the throughput, i.e. the number of requests sent to the system under test per second.
(iv) Responses per second: Similar to requests per second, this represents the number of responses received per second. There is not much that can be inferred from this graph on its own, but it is useful for spotting failed responses. With the instantaneous values, we can find out at what time during the test there were more failures and vice versa.
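As noted for the active users graph, its shape is driven by the injection profile defined in the simulation’s setUp. As a hypothetical example (reusing the scn and httpProtocol values from the earlier sketch):

// Ramp from 0 to 100 users over 60 seconds, then keep injecting
// 10 new users per second for 2 minutes; the "Active users along
// the simulation" graph follows the shape of this profile.
setUp(
  scn.inject(
    rampUsers(100).during(60.seconds),
    constantUsersPerSec(10).during(2.minutes)
  )
).protocols(httpProtocol)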
Notes:
- The same graphs that appear in the Details section are available in the Global section as well. The only difference is that they represent all the data at a global level and not at the scenario or individual request level.
- All these graphs are rich JavaScript-based charts, and we can simply hover over any instant to see the details for that point in time.
Refer to the screenshot below for more details.
Customizing Gatling Graphs
In the previous sections of this tutorial, we saw what the Gatling report looks like and what charts/graphs/statistics are available for a user to refer to. Now, let’s see how we can configure some of these data points/values/metrics as per our requirements by making changes in the configuration file.
Gatling provides a configuration file named gatling.conf to modify a lot of settings as per user requirements; once updated, the changes are reflected in the reports.
An important point to note here is that for simulations run from the Gatling install directory itself, the gatling.conf file for updating the configuration settings is available at the following location:
$GATLING_HOME/conf/gatling.conf
For tests executed from a Maven project (through the engine class or the command line), the gatling.conf file is part of the project itself (typically under src/test/resources) and can be edited there. This file is project-specific, and multiple Gatling projects can each have their own Gatling configuration file.
A few use cases of configuring Gatling reports are as follows:
(i) Change the percentile values: The default values for the percentile indicators in Gatling are the 50th, 75th, 95th, and 99th percentiles.
Suppose you want to change these values to 80, 90, 92, and 94; you can go to the charting -> indicators section in the gatling.conf file and change the values as shown below.
Default values
Changed values
In the figure above, you can see that the statistics now show the newly configured percentile values.
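In text form, the change boils down to the following block in gatling.conf (assuming the standard layout of the file; the Gatling defaults are shown in the comments):

gatling {
  charting {
    indicators {
      percentile1 = 80 # default: 50
      percentile2 = 90 # default: 75
      percentile3 = 92 # default: 95
      percentile4 = 94 # default: 99
    }
  }
}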
(ii) Change response time bounds: Similarly, if you want to change the default response time bounds that are displayed in Gatling reports, you can again go to the charting -> indicators section in the gatling.conf file and update the lower-bound and higher-bound values to whatever values you desire.
Please note that the default values in Gatling reports for the upper and lower bounds are 1200 ms and 800 ms respectively.
These kinds of customizations can be useful in scenarios where you think 800 ms is too high a number for a lower bound; for example, if your API typically responds in, say, 50 ms, you could set 50 ms as the lower bound and treat anything greater than 100 ms as exceeding the higher bound.
Default values
Changed values
In the figure above, you can see that the newly configured time range bounds have started showing up.
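Again in text form, using the 50 ms/100 ms example from above (assuming the standard layout of gatling.conf):

gatling {
  charting {
    indicators {
      lowerBound = 50 # in ms, default: 800
      higherBound = 100 # in ms, default: 1200
    }
  }
}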
Conclusion
In this tutorial, we have walked through the reports that are generated when a simulation is executed either through the install location or through a Maven-based project.
Gatling provides a feature-rich, HTML-based report out of the box, which comes preconfigured with a lot of details about the scenario execution. It also provides the means to change a few settings, like percentile values and response time bounds, through a configuration file as per user requirements.