Report Metrics

Detailed examination and explanation of performance metrics
Almost every business wants its website to respond quickly with 100% stability. Your site or app can be fast, yet still mishandle a request under high load and, say, order the wrong shoe size for your customer. We want both: stability and 24/7 availability within acceptable time frames.
To measure an application's performance under load, we rely on a set of load test metrics.
Some of these metrics are:
  • Average Response Time,
  • Error Rate,
  • Average Latency,
  • Concurrent Users,
  • Requests per Second (RPS) and Throughput
[Image: Summary Report section of the Report page]

Average Response Time

Response time tells us how long the system takes to complete a specific request.
[Image: Overview section of the Report page]
If you are familiar with statistics, you will recognize the shape we want: in graphical analysis, response times should approximate a normal distribution, with most requests clustered around the mean and only a few falling in the outlier zone.
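As a minimal sketch (with made-up sample data), here is how the average could be computed and outliers flagged from a list of recorded response times in Python:

    import statistics

    # Hypothetical response times in milliseconds from a test run.
    response_times_ms = [120, 135, 128, 140, 119, 131, 380, 125, 133, 127]

    mean = statistics.mean(response_times_ms)
    stdev = statistics.stdev(response_times_ms)

    # Treat samples more than two standard deviations from the mean as outliers.
    outliers = [t for t in response_times_ms if abs(t - mean) > 2 * stdev]

    print(f"Average response time: {mean:.1f} ms")
    print(f"Outliers: {outliers}")  # the 380 ms sample stands out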

Errors

Some errors are to be expected when processing requests, especially under load. Oftentimes, errors start getting reported when the load exceeds the web application's capacity to deliver what is required, or when a malformed request is sent.
You can also see the detailed messages and response codes of these errors on Loadium.
[Image: Response Codes section of the Report page]
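As a minimal sketch, here is how an error rate could be computed from a response-code breakdown (all counts below are hypothetical):

    # Hypothetical (status code -> count) pairs, similar to what a
    # response-codes breakdown in a report might show.
    response_codes = {200: 4500, 201: 300, 404: 120, 500: 80, 503: 40}

    total = sum(response_codes.values())
    errors = sum(count for code, count in response_codes.items() if code >= 400)

    error_rate = errors / total * 100
    print(f"{errors} errors out of {total} requests ({error_rate:.2f}% error rate)")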

Average Latency

Latency is the time taken for information to reach its target and come back again: a round trip. Average Latency is the average of the latency values measured during the test period. Latency is often described as delay, which becomes a serious issue when working with remote data centers.
Data hops through network nodes on its way from the server, so the greater the distance, the greater the delay. Those extra hops increase the response time and can violate your service level agreements (SLAs).
That is also why latency is the hardest metric to deal with. JMeter measures latency from the moment the request is sent until the first byte is received, so Connect Time is included in JMeter's Latency value. There is also network latency, which expresses the time for a packet of data to get from one designated point to another.
You can follow the latency/response time/connection time values throughout the test period with the filters on the chart below.
You can also see this value on average in the summary report.
The difference between Response Time and Latency is:
  • Latency is measured from the moment the request is sent until the first byte of the response is received.
  • Response time is measured from the moment the request is sent until the whole response is received.
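To make the distinction concrete, here is a rough sketch that times both values for a single request using the third-party Python requests library (the URL is a placeholder; JMeter and Loadium measure these internally, this only illustrates the difference):

    import time
    import requests  # third-party: pip install requests

    url = "https://example.com/"  # placeholder endpoint

    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=10) as resp:
        # Latency: elapsed time until the first byte of the body arrives
        # (connection setup is included, as in JMeter).
        next(resp.iter_content(chunk_size=1))
        latency = time.perf_counter() - start

        # Response time: elapsed time until the whole body has been read.
        for _ in resp.iter_content(chunk_size=8192):
            pass
        response_time = time.perf_counter() - start

    print(f"Latency (first byte): {latency * 1000:.1f} ms")
    print(f"Response time (full body): {response_time * 1000:.1f} ms")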

Concurrent Users

Concurrent users is the number of users active on the system at the same time. It is not equal to RPS, because a single user can issue a large number of requests and not every Virtual User (VU) sends requests continuously.
At any given moment some VUs are actively making requests while many others are idle because of "think time" (the time a user waits between two tasks).
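One common back-of-the-envelope way to relate concurrent users to RPS is Little's Law; this is a general rule of thumb, not something specific to Loadium, and all numbers below are hypothetical:

    # Little's Law: concurrent_users = arrival_rate * time_in_system,
    # where time_in_system covers both response time and think time.
    target_rps = 100          # requests per second the test should generate
    avg_response_time = 0.5   # seconds spent waiting for each response
    think_time = 4.5          # seconds a virtual user pauses between tasks

    concurrent_users = target_rps * (avg_response_time + think_time)
    print(f"Virtual users needed: {concurrent_users:.0f}")  # 500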

Requests per Second (RPS) and Throughput

RPS is the number of requests the server receives per second. This metric varies with response time and the other metrics above, and it tells us how much load the application can handle each second.
Throughput is calculated based on response status: if 45 of the 50 requests complete successfully during the test, the Throughput is 45.
In short, RPS counts every request received by the server irrespective of response status, while Throughput counts only the successful responses sent by the server.
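A minimal sketch of both calculations over a hypothetical request log:

    # Hypothetical per-request records: (timestamp in seconds, status code).
    requests_log = [
        (0.1, 200), (0.4, 200), (0.9, 500), (1.2, 200),
        (1.6, 404), (1.8, 200), (2.3, 200), (2.7, 200),
    ]

    test_duration = 3.0  # seconds (assumed)

    rps = len(requests_log) / test_duration  # every request counts, pass or fail
    throughput = sum(1 for _, code in requests_log if code < 400)  # successes only

    print(f"RPS: {rps:.1f}")                                 # 8 / 3.0 -> 2.7
    print(f"Throughput: {throughput} successful responses")  # 6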
Enjoy load testing!
If you don't see the answer to your question here, please reach out to us to let us know! We're always improving our documentation.