Understanding Test Reports (Performance Test Reporting)
Loadium supports real-time reporting, allowing you to monitor performance metrics in tables and graphs.
There are eleven different tabs that show performance metrics.
This screen allows you to monitor the Response Time, Hits & Errors, Received Data, and Sent Data graphs in real time, and presents basic test metrics such as Max User Number, Average Throughput, Total Error Number, Average Response Time, Average Received Bytes, and Average Sent Bytes.
This screen shows KPIs such as Total Hits, Average Response Time, Max Response Time, Min Response Time, Percentage Error, Total Throughput, Average Connect Time, Average Latency, and Total Error Hits.
The KPIs shown in the table are defined as follows:
Label: Name of the request in JMeter.
Total Hits: Total number of requests sent to all services during the test.
Avg Throughput/RPS: The number of requests per second.
Avg Response Time/Sec: The average response time.
%85 LINE (85th Percentile): 85% of the samples had a response time smaller than or equal to this value.
%90 LINE (90th Percentile): 90% of the samples had a response time smaller than or equal to this value.
%95 LINE (95th Percentile): 95% of the samples had a response time smaller than or equal to this value.
%99 LINE (99th Percentile): 99% of the samples had a response time smaller than or equal to this value.
Max Response Time/Sec: Maximum response time for a user during the test.
Min Response Time/Sec: Minimum response time for one user during the test.
Total Error Hits: The number of all requests that received errors during the test.
Avg. Connect Time/Sec: Average time spent establishing a connection to the server during the test.
Avg. Latency/Sec: Average latency (the time from sending the request until the first byte of the response is received) during the test.
Sent KB/Sec: The amount of data sent per second.
Received KB/Sec: The amount of data received per second.
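To make the percentile KPIs concrete, the sketch below computes a few of the table's metrics from a list of raw response times. This is an illustrative example with hypothetical data, not Loadium's actual implementation; the function name and the nearest-rank percentile method are assumptions for demonstration.

```python
# Sketch: computing report-style KPIs from raw sample response times (ms).
# The data here is hypothetical; Loadium derives these from JMeter results.

def percentile_line(samples, pct):
    """Smallest response time such that pct% of samples are <= it (nearest-rank)."""
    ordered = sorted(samples)
    rank = max(1, -(-pct * len(ordered) // 100))  # ceil(pct/100 * n)
    return ordered[rank - 1]

response_times = [120, 95, 300, 150, 110, 500, 130, 105, 90, 250]

kpis = {
    "Total Hits": len(response_times),
    "Avg Response Time": sum(response_times) / len(response_times),
    "Min Response Time": min(response_times),
    "Max Response Time": max(response_times),
    "%90 LINE": percentile_line(response_times, 90),
    "%95 LINE": percentile_line(response_times, 95),
}
print(kpis)
```

With the sample data above, the %90 LINE is 300 ms: nine of the ten samples (90%) have a response time of 300 ms or less.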
This screen presents performance metrics on a timeline graph, so users can observe KPI changes over time. Users may correlate different KPIs (Hits, Errors, Response Time, Virtual Users, Latency Time, Received Bytes, and Sent Bytes) and visualize them on one graph.
Select the desired KPI and request in the Timeline KPI Selection area to display the corresponding graph.
This screen shows the total number of response codes received during a performance test grouped by request.
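The grouping this view performs can be thought of as counting response codes per request label. The following is a minimal sketch under that interpretation; the sample data and variable names are hypothetical:

```python
from collections import Counter

# Hypothetical (label, response_code) pairs collected during a test run.
samples = [
    ("Login", 200), ("Login", 200), ("Login", 500),
    ("Search", 200), ("Search", 404), ("Search", 404),
]

# Count response codes grouped by request label.
codes_by_request = {}
for label, code in samples:
    codes_by_request.setdefault(label, Counter())[code] += 1

print(codes_by_request)
```

Here "Login" ends up with two 200s and one 500, and "Search" with one 200 and two 404s, mirroring the per-request grouping shown on the screen.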
Engine Health: This screen shows the status of the AWS servers, along with Average CPU, Memory, Network Send/MB, and Network Recv/MB.
You can see the status of the machines graphically on the Engine Health report screen as shown above.
The Failure Criteria feature allows you to set your test's pass/fail criteria for various metrics, such as response time, errors, hits/s, and test duration.
Column 1 - Specify here if you want to use this rule on a particular label from your script. It's set to "ALL" (all labels) by default.
Column 2 - Select the specific metric you'd like to apply a rule for. Click the down arrow on the right side of the field to open a drop-down menu and review available metrics to monitor.
Column 3 - The comparison operator for this rule: "Less than", "Greater than", "Equal to", or "Not Equal to". Click the down arrow on the right side of the field to open a drop-down menu.
Column 4 - The numeric value you want this rule to apply to.
Stop Test - If this box is checked, the test stops immediately when the criterion fails; otherwise, the test continues running uninterrupted.
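Conceptually, each failure-criteria row combines a label, a metric, a comparison operator, and a threshold value. The sketch below shows how such a rule could be evaluated; the dictionary structure and function names are illustrative assumptions, not Loadium's API:

```python
import operator

# Illustrative mapping for the four comparison options in Column 3.
OPS = {
    "Less than": operator.lt,
    "Greater than": operator.gt,
    "Equal to": operator.eq,
    "Not Equal to": operator.ne,
}

def rule_fails(rule, measured_value):
    """Return True when the measured metric violates the rule.

    The rule PASSES when the comparison holds (e.g. measured < threshold
    for "Less than"), so a failure is the negation of the comparison.
    """
    compare = OPS[rule["operator"]]
    return not compare(measured_value, rule["value"])

# Example rule: average response time must stay below 800 ms for all labels.
rule = {"label": "ALL", "metric": "Avg Response Time",
        "operator": "Less than", "value": 800}

print(rule_fails(rule, 650))   # comparison holds -> rule passes -> False
print(rule_fails(rule, 1200))  # threshold exceeded -> rule fails -> True
```

When a rule fails and Stop Test is checked, the run would be halted; otherwise the failure is only recorded against the test's pass/fail result.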
Users can download JMeter and Loadium logs on this page.
HTTP response codes other than HTTP 200 are shown on this screen. Users can observe the total number of errors per HTTP status code and the related transaction.
If your JMeter scripts contain assertions, this page is populated whenever an assertion fails. Each failed transaction also shows its reason for failure.
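In other words, the page filters sample results down to failed assertions and surfaces each failure message. A minimal sketch of that filtering, with entirely hypothetical result records (the field names are assumptions, not Loadium's data model):

```python
# Hypothetical sample results carrying assertion outcomes.
results = [
    {"label": "Login", "assertion": "Response Assertion",
     "failed": False, "message": ""},
    {"label": "Checkout", "assertion": "Duration Assertion",
     "failed": True, "message": "Operation lasted longer than 500 ms"},
]

# Keep only failed assertions, together with their failure reasons.
failed = [(r["label"], r["assertion"], r["message"])
          for r in results if r["failed"]]

for label, name, reason in failed:
    print(f"{label}: {name} failed - {reason}")
```

Only the Checkout transaction would appear on the page here, along with the Duration Assertion's failure message.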