Test creation

The main feature of each test is the script, which is written using Selenium scripting in Nightwatch.js syntax. Additionally, there are a few more general test parameters.


The test script is a frontend automation script that mocks participant actions. It must be written in Nightwatch.js syntax as a function that accepts a single parameter (usually named browser or client).
Check the example below:

function(browser) {
    // your code goes here
}

Successful script execution is one of the factors taken into consideration to determine whether the test has passed.
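A minimal test script might look like the sketch below. The URL and CSS selectors are hypothetical placeholders, not part of Loadero; the stub object at the bottom only stands in for the real Nightwatch browser object that Loadero injects, so the call pattern can be exercised outside a test run.

```javascript
// Hypothetical Nightwatch-style test script, as Loadero expects:
// a single function taking the injected browser object.
function testScript(browser) {
  browser
    .url('https://example.com')              // open the service under test (placeholder URL)
    .waitForElementVisible('body', 10000)    // wait until the page has loaded
    .pause(5000);                            // stay on the page for 5 seconds
}

// Minimal stand-in for the injected Nightwatch browser object;
// each method records the call and returns `this` to allow chaining.
const actions = [];
const browserStub = {
  url(u) { actions.push(`url ${u}`); return this; },
  waitForElementVisible(sel, t) { actions.push(`wait ${sel}`); return this; },
  pause(ms) { actions.push(`pause ${ms}`); return this; },
};

testScript(browserStub);
console.log(actions.join(' | ')); // → url https://example.com | wait body | pause 5000
```

In a real test the browser object is provided by the test runner, so only the function itself is written in Loadero.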


Test Mode

Loadero supports these test modes:

  • Performance Test - provides all available metrics (WebRTC dump, browser logs, Selenium logs, machine statistics), but is very limited in terms of allowed participant count (max 50)
  • Load Test - although fewer metrics are available than in performance tests, load test mode allows significantly more participants
  • Session Recording - provides a screen recording of the session for each participant, along with a few of the metrics (browser logs, Selenium logs), but is extremely limited in terms of allowed participant count (max 10). This test mode can be useful to reduce time spent on script debugging


When running up to 50 participants, use performance test mode to collect more statistics without any drawbacks.

Easy Scalability

Leveraging participant groups, it is possible to use your existing performance tests as load tests as well. Simply setting a higher group or participant count will add more participants to the test in a matter of minutes.


A performance test with a single group that contains 50 participants can easily be scaled by switching the test mode to load test and changing the group count to 10. That will result in a 500-participant test using the same script.

Increment strategy

Increment strategy specifies how participants are distributed over the start interval when test execution begins. This is useful because 10k participants joining at the same time would create a DDoS attack rather than a load test, which is why we advise using an incremental load.

  • Linear - all participants join the test in linear order, creating a continuous and monotonic increment

  • Random - all participants join the test in random order


Both the increment strategy and start interval parameters are applied per group, not per participant! So having one group with 200 participants means that these 200 participants will still join the test at the same time.

Start interval

Describes the total amount of time within which all participants start their tests and access the service. Closely related to increment strategy.


Having a large start interval may cause some participants to finish the test before other participants have joined at all.
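The linear strategy can be illustrated with a simple model. This is an assumed sketch of evenly spaced start offsets, not Loadero's exact scheduler:

```javascript
// Sketch: distribute group start times evenly across the start interval
// (linear increment strategy, assumed model).
function linearOffsets(groupCount, startIntervalSec) {
  // group i starts at i/groupCount of the way through the interval
  return Array.from({ length: groupCount },
    (_, i) => (i * startIntervalSec) / groupCount);
}

console.log(linearOffsets(5, 100)); // → [ 0, 20, 40, 60, 80 ]
```

With a random strategy the same offsets would instead be drawn randomly from the interval; either way, all participants within one group share the same offset.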

Participant timeout

Specifies the time limit after which the test will be terminated. This is necessary to prevent overly long test execution, which may happen when defined commands freeze or require longer execution times. The maximum participant timeout is 8 hours.


Keep in mind that Nightwatch commands usually take longer to execute than their predefined timeouts.

Participant configuration

A participant is a single client instance with an isolated environment that executes the previously defined script.

Participant groups

When testing peer-to-peer applications, it is often important to provide parallel communication between participants while performing different actions within the same test. Loadero's participant-group structure allows the user to easily define groups of participants that use the service under test, enabling the creation of different flows in the test script.


In a service that provides conference calls, it may be necessary that only hosts activate microphones. By editing the test script it is possible to provide different flows for specific groups, allowing one group to join as hosts and activate microphones, while other groups join as listeners.

Group count

Group count is the number of identical copies of the specific group to be added to the test. Each copy will contain exactly the same participant configurations.


Consider a group consisting of 3 participants: User A and User B from US, User C from EU. With a single group, there would be a total of 3 participants in the test (given that this is the only group). If group count is increased to 3, then there will be total of 9 participants joining the test - 6 from US and 3 from EU.
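The arithmetic of the example above can be sketched as follows (the region labels are just the ones from the example):

```javascript
// One group's participant locations: User A and User B from US, User C from EU.
const participants = ['US', 'US', 'EU'];
const groupCount = 3; // three identical copies of the group

// Total participants = participants per group × group count.
const total = participants.length * groupCount;

// Per-region totals: each participant is duplicated once per group copy.
const perRegion = {};
for (const region of participants) {
  perRegion[region] = (perRegion[region] || 0) + groupCount;
}

console.log(total, perRegion); // → 9 { US: 6, EU: 3 }
```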


Each participant runs its own browser instance. We use unmodified browser instances to simulate any overhead that could be created by the browser binaries. Currently two browsers are available:

  • Google Chrome
  • Mozilla Firefox

Each browser has multiple versions available. If there is a need for any other browsers please contact support.


Each participant is launched in the specified location. These locations correspond to physical data centers, so the simulation is very close to a real-life scenario. Available locations:

  • AP East - Hong Kong
  • AP Northeast - Tokyo
  • AP Northeast - Seoul
  • AP South - Mumbai
  • AP Southeast - Sydney
  • EU Central - Frankfurt
  • EU West - Ireland
  • EU West - Paris
  • SA East - São Paulo
  • US East - North Virginia
  • US East - Ohio
  • US West - Oregon


Launching performance tests with up to 2 participants located in US West - Oregon will result in a significantly faster test launch.

Network Conditions

Loadero has a built-in network conditioner that can be used to simulate different network settings. This allows testing app behavior and asynchronous communication under poor network conditions such as 3G, or networks with high packet loss, jitter, etc.

The table below summarizes the specific settings of each network configuration and serves as a reference point for choosing appropriate settings. If network settings differ for incoming/outgoing flow, the incoming setting is given first. Where no value is given, that parameter is not limited.

networkMode is the parameter used by the custom Nightwatch command updateNetwork to update network conditions during the test.
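The sketch below shows how updateNetwork might be called mid-script; the exact command signature is assumed from the parameter name above, and the URL and timings are placeholders. The stub at the bottom only exercises the call pattern outside Loadero:

```javascript
// Assumed usage: switch network conditions partway through the test.
function testScript(client) {
  client
    .url('https://example.com')   // open the service under test (placeholder URL)
    .pause(30000)                 // run 30 s on the default (unlimited) network
    .updateNetwork('3g')          // switch to the 3G profile from the table below
    .pause(30000);                // run another 30 s under 3G conditions
}

// Chainable stub standing in for the injected client object,
// recording each call so the sequence can be inspected.
const log = [];
const stub = new Proxy({}, {
  get: (_, name) => (...args) => {
    log.push(`${String(name)}(${args.join(',')})`);
    return stub;
  },
});

testScript(stub);
console.log(log.join(' '));
```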

Network mode (networkMode)     Bandwidth             Packet loss   Latency   Jitter
Default (default)
4G (4g)                        100mbps / 50mbps      0.2%          45ms      5ms / 15ms
3.5G/HSDPA (hsdpa)             20mbps / 10mbps       0.5%          150ms     10ms
3G (3g)                        1200kbps              0.5%          250ms     50ms
GPRS (gprs)                    80kbps / 20kbps       1%            650ms     100ms
Edge (edge)                    200kbps / 260kbps     1%            650ms     100ms
Asymmetric (asymetric)         500kbps / 1000kbps                  50ms      10ms
Satellite phone (satellite)    1000kbps / 256kbps                  600ms
5% packet loss (5packet)                             5%
10% packet loss (10packet)                           10%
20% packet loss (20packet)                           20%
50% packet loss (50packet)                           50%
100% packet loss (100packet)                         100%
High latency (latency)                                             500ms     50ms
High jitter (jitter)                                               200ms     100ms


One of the main features of Loadero is the ability to supply fake media feeds during tests. This feature mainly concerns services that require a webcam or microphone to fully cover app logic. By default this functionality is not provided by Selenium or Nightwatch, so it requires some custom actions to accomplish.

Loadero offers a fake media feed out of the box for all tests running through Loadero. The media type can be selected during participant configuration. This media feed will be used to simulate the webcam and microphone that a physical machine could have.

Available media types:

  • 1080p Audio+Video feed
  • 720p Marked Video + DTMF Audio feed
  • 720p Audio+Video feed
  • 480p Audio+Video feed
  • 360p Audio+Video feed
  • 240p Audio+Video feed


Media types that supply only audio or only video are not provided - these scenarios should be handled in the web app itself!


Media selection is applied to the Google Chrome browser only. In order to use Google Chrome's built-in media feed, set the participant's media type to Default. For the Mozilla Firefox browser, the built-in fake media feed will always be used.

Post run assertions

Asserts allow checking statistic values for an individual participant after Selenium script execution has finished. Asserts are automatically calculated for each participant to check whether the given values are within allowed thresholds.

Available assert paths can be divided into two categories: machine statistics and webRTC statistics.

These categories contain parts of the full assert path; in most cases additional path parameters are present which indicate what aggregation function will be used to retrieve the final value of the assert.

Machine statistics asserts:

Name (pathValue)                           Description
CPU (cpu)                                  CPU usage (percentage)
RAM (ram)                                  RAM usage (in bytes)
Network bytes (network/bytes)              Network bytes (total), incoming and outgoing data
Network bitrate (network/bitrate)          Network bitrate per second, incoming and outgoing data
Network packets (network/packets)          Network packets (total), incoming and outgoing data
Network packet loss (network/packetsLost)  Network packets lost (percentage), incoming and outgoing data
Network errors (network/errors)            Network errors (total), incoming and outgoing data

webRTC statistics asserts:

Name (pathValue)              Description
Bitrate (bitrate)             Actual bitrate of media in kilobits
Packets lost (packetsLost)    Number of packets lost during test time
Packets (packets)             Number of packets overall (per second)
Jitter (jitter)               Media jitter in milliseconds
Jitter buffer (jitterBuffer)  Incoming media jitter buffer size in milliseconds
Audio volume (level)          Audio volume in absolute values
Round trip time (rtt)         Data round-trip time in milliseconds
Bytes (bytes)                 Total number of bytes transmitted
Codec (codec)                 Codec of the stream as a string value


Packet metrics measure “packets per second” instead of “total packets”.
To assert total packets use ../packets/../total


webRTC assertions currently work only on the Chrome browser, but support for Firefox is coming soon.


All media tracks are merged together for each participant to assert the total incoming/outgoing stats. The WebRTC dump is available for download to perform custom assertions.

Aggregator functions

Metrics are collected once per second and must be aggregated before they can be compared to a single value. We have several aggregation functions defined:

  • Minimum
  • Maximum
  • Average
  • Standard deviation
  • Relative standard deviation*
  • Percentile

*Relative standard deviation asserts offset relative to average value


Relative standard deviation gives more context by reflecting the offset while keeping the scale of the value. For example, if the standard deviation is only 1 byte, it does not give any indication of the significance of the fluctuations. Relative standard deviation instead returns the fluctuation as a percentage, immediately giving perspective on the fluctuations without the need to check the average value.
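The relationship between the two aggregations can be sketched as: relative standard deviation = (standard deviation / mean) × 100%. Computed over hypothetical per-second samples of a metric:

```javascript
// Relative standard deviation of a set of per-second metric samples,
// expressed as a percentage of the mean (population standard deviation).
function relStdDev(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return (Math.sqrt(variance) / mean) * 100;
}

console.log(relStdDev([10, 10, 10, 10]));          // → 0 (no fluctuation)
console.log(relStdDev([8, 12, 8, 12]).toFixed(1)); // → "20.0" (sd 2 around mean 10)
```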
