Understanding Test Reports


Last updated 1 month ago


Loadium supports real-time reporting, allowing you to monitor performance metrics in tables and graphs.

Overview

This screen allows you to monitor the Response Time, Hits & Errors, Received Data, and Sent Data graphs in real time, and presents basic test metrics such as Max User Number, Average Throughput, Total Error Number, Average Response Time, Average Received Bytes, and Average Sent Bytes.

Summary Report

This screen shows KPIs such as:

  • Total Hits, Average Response Time, Max Response Time, Min Response Time, Percentage Error, Total Throughput, Average Connect Time, Average Latency, Total Error Hits.

The KPIs shown in the table are:

  • Label: Name of the request in JMeter.

  • Total Hits: Total number of requests sent to all services during the test.

  • Avg Throughput/RPS: The average number of requests per second.

  • Avg Response Time/Sec: The average response time.

  • 85% LINE - 85th Percentile: 85% of the samples had a response time smaller than or equal to this value.

  • 90% LINE - 90th Percentile: 90% of the samples had a response time smaller than or equal to this value.

  • 95% LINE - 95th Percentile: 95% of the samples had a response time smaller than or equal to this value.

  • 99% LINE - 99th Percentile: 99% of the samples had a response time smaller than or equal to this value.

  • Max Response Time/Sec: Maximum response time observed for a user during the test.

  • Min Response Time/Sec: Minimum response time observed for a user during the test.

  • Total Error Hits: The number of all requests that received errors during the test.

  • Avg. Connect Time/Sec: Average time to establish a connection to the server during the test.

  • Avg. Latency/Sec: Average latency (time until the first byte of the response) during the test.

  • Sent KB/Sec: The amount of data sent per second.

  • Received KB/Sec: The amount of data received per second.
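The percentile and throughput KPIs above can be sketched in a few lines. This is an illustrative computation from raw response times, not Loadium's actual implementation; the function and field names are assumptions.

```python
import math


def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that pct% of
    all samples are smaller than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]


def summarize(response_times, duration_sec):
    """Derive a few of the summary-report KPIs from raw response times
    (in seconds) and the total test duration."""
    return {
        "total_hits": len(response_times),
        "avg_response_time": sum(response_times) / len(response_times),
        "min_response_time": min(response_times),
        "max_response_time": max(response_times),
        "p85": percentile(response_times, 85),
        "p90": percentile(response_times, 90),
        "p95": percentile(response_times, 95),
        "p99": percentile(response_times, 99),
        # Avg Throughput/RPS: requests per second over the whole test
        "avg_throughput_rps": len(response_times) / duration_sec,
    }
```

For example, 100 samples collected over 50 seconds yield an average throughput of 2 requests per second, and the 99% LINE is the response time that 99% of samples stayed at or below.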

Timeline Report

This screen presents performance metrics on a timeline graph, so users can observe how any KPI changes over time. Users may correlate different KPIs (Hits, Errors, Response Time, Virtual Users, Latency Time, Received Bytes, and Sent Bytes) and visualize them on a single graph.

Select the desired KPI and request in the Timeline KPI Selection area, and the corresponding graph is displayed.

Response Codes

This screen shows the total number of response codes received during a performance test grouped by request.

You can see the status of the test machines graphically on the Engine Health report screen, as shown above.
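The per-request grouping on the Response Codes screen amounts to counting status codes per request label. A minimal sketch, assuming each sample is a (label, status code) pair; the data shape is an assumption, not Loadium's API:

```python
from collections import Counter, defaultdict


def group_response_codes(samples):
    """Group response codes by request label.

    samples: iterable of (label, status_code) pairs.
    Returns {label: Counter({status_code: count})}.
    """
    grouped = defaultdict(Counter)
    for label, code in samples:
        grouped[label][code] += 1
    return grouped


# Hypothetical samples from a test run
samples = [
    ("Login", 200),
    ("Login", 200),
    ("Login", 500),
    ("Search", 404),
]
report = group_response_codes(samples)
```

Here `report["Login"]` would show two 200 responses and one 500, matching how the screen tallies codes per request.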

Failure Criteria

The Failure Criteria feature allows you to set your test's pass/fail criteria for various metrics, such as response time, errors, hits/s, and test duration.

Column 1 - Specify here if you want to use this rule on a particular label from your script. It's set to "ALL" (all labels) by default.

Column 2 - Select the specific metric you'd like to apply a rule for. Click the down arrow on the right side of the field to open a drop-down menu and review available metrics to monitor.

Column 3 - The binary comparison operator for this rule: "Less than", "Greater than", "Equal to", or "Not Equal to". Click the down arrow on the right side of the field to open a drop-down menu.

Column 4 - The numeric threshold the rule compares the metric against.

Stop Test - If this box is checked, the test stops immediately when the criterion fails; otherwise, the test continues running uninterrupted.
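The four columns plus the Stop Test flag can be modeled as a small rule structure. This is a hypothetical sketch of how such a rule could be evaluated; the rule fields and metric names are assumptions, only the operator labels come from the drop-down described above.

```python
import operator

# Map the drop-down operator labels to Python comparison functions
OPERATORS = {
    "Less than": operator.lt,
    "Greater than": operator.gt,
    "Equal to": operator.eq,
    "Not Equal to": operator.ne,
}


def rule_fails(rule, metrics):
    """Return True when the observed metric violates the rule.

    rule:    dict with "label", "metric", "op", "value", "stop_test".
    metrics: {label: {metric_name: observed_value}} for the running test.
    """
    observed = metrics[rule["label"]][rule["metric"]]
    return OPERATORS[rule["op"]](observed, rule["value"])


# Example rule: fail (and stop the test) if the average response time
# across ALL labels exceeds 2 seconds.
rule = {
    "label": "ALL",
    "metric": "avg_response_time",
    "op": "Greater than",
    "value": 2.0,
    "stop_test": True,
}
```

With an observed average response time of 3.5 s the rule fails, and because `stop_test` is set, the test would be stopped immediately; at 1.2 s it passes and the test continues.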

Logs

Users can download JMeter and Loadium logs on this page.

Errors

Responses with HTTP status codes other than 200 are shown on this screen. Users can observe the total number of errors per HTTP status code and the related transaction.

Assertions

If your JMeter scripts contain any assertions, this page is populated whenever an assertion fails. Any failed transaction also shows its reason for failure.

APM Reports

You can review the results of your APM integration on this tab.

Engine Health: This screen shows the status of the AWS servers along with Average CPU, Memory, Network Send/MB, and Network Recv/MB.