Automated load testing

A solution for the modern engineering team that wants to performance regression test as part of their test automation pipelines.

The challenges of automating performance testing

Deciding how often and for how long to run performance tests can be tricky. It depends on the type of performance test and what you're testing. To integrate performance testing into an automation pipeline you also need to define and codify your success criteria: how will test success or failure be determined automatically, so that the pipeline can be halted upon a performance regression? Beyond that, the actual integration with your CI tool needs to be done, test environments must be made available, and notifications for failed load tests must be set up.

How it works

Automating your load testing comes down to a few high-level steps that you need to go through.


First, you need a plan. What are you performance regression testing, what needs to be tested, and how often do you need to trigger the automated tests? (See the FAQ below.)

Defining goals

To automate, you need to define your success criteria: what you consider a successful or failed test run. You do that with metric thresholds. If you have SLAs with your customers, those are a good starting point; otherwise, start out with a baseline test.

Create test scripts

When you have a plan and your success criteria, you're ready to create your test scripts.

Integration with CI tool

The final step is to integrate the load test into your CI tool. This means we need a way to trigger a load test run from a build step in your CI tool. See below for details on how to accomplish that.
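As a sketch of what that build step can look like (assuming k6 is installed on the build agent; the script name loadtest.js is a placeholder), k6 exits with a non-zero status when a threshold in the script is breached, which most CI tools interpret as a failed build step:

```shell
#!/bin/sh
# Hypothetical CI build step: run the load test with cloud
# output and fail the build on a threshold breach.
# The API token is expected to be stored as a secret in the
# CI tool (see "Get your API token" below).
k6 run -o cloud loadtest.js
# No explicit exit-code handling is needed: k6 returns a
# non-zero status when a threshold fails, so the CI tool
# marks this step as failed automatically.
```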

Integrating with your favorite CI tool

Follow the four general steps below for any CI tool integration, or check out our guides for some common CI tools.



Define success criteria with thresholds

The first step is to add thresholds, your success criteria or goals, to your test script. You do that in the options section of the script.


Get your API token

To trigger test runs from your CI tool you need a LoadImpact API token, which is used together with k6, our open source load testing tool. You can find your token within the app.
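A sketch of two common ways to make the token available to k6 (the `k6 login cloud` subcommand and the `K6_CLOUD_TOKEN` environment variable name should be verified against the k6 documentation for your version; YOUR_API_TOKEN and loadtest.js are placeholders):

```shell
# Option 1: authenticate once on the machine running the tests
k6 login cloud --token YOUR_API_TOKEN

# Option 2: pass the token as an environment variable,
# typically stored as a secret in your CI tool
export K6_CLOUD_TOKEN=YOUR_API_TOKEN
k6 run -o cloud loadtest.js
```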


Download k6

Download k6, our open source load testing tool, to trigger test runs from your CI tool/service of choice. You can opt to execute the tests from the CI server itself, or use the LoadImpact cloud execution functionality to generate the traffic from cloud servers managed by us.
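A few common installation routes, sketched below; check the k6 documentation for the method matching your CI environment, as package names and image tags may change:

```shell
# macOS, via Homebrew
brew install k6

# Docker, useful in containerized CI pipelines
docker pull loadimpact/k6

# ...or download a prebuilt binary from the k6 releases page
# and place it on the build agent's PATH.
```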


Set up notifications

The final step, to complete the automation workflow, is to hook up the load test notifications to your chat tool. We support Slack and webhooks.


import { sleep } from "k6";
import http from "k6/http";

export let options = {
    // Add a soft (non-aborting) and a hard (aborting) threshold
    thresholds: {
        // Set a threshold measuring against the 95th percentile
        // of all response time metric data points
        "http_req_duration": ["p(95)<500"],

        // When specifying thresholds, we can limit the scope of
        // the metric data points being considered by using tags,
        // in this case we look at the URL tag automatically set
        // for all requests.
        "http_req_duration{url:}": [
            { threshold: "p(99)<5000", abortOnFail: true }
        ]
    }
};

export default function() {
    // Replace with the URL(s) under test
    http.get("https://test.k6.io/");
    sleep(3);
}

$ k6 run -o cloud scriptWithThresholds.js

          /\      |‾‾|  /‾‾/  /‾/
     /\  /  \     |  |_/  /  / /
    /  \/    \    |      |  /  ‾‾\
   /          \   |  |‾\  \ | (_) |
  / __________ \  |__|  \__\ \___/ .io

  execution: local
     output: cloud
     script: scriptWithThresholds.js

    duration: -,  iterations: -
         vus: 1, max: 10

    done [==========================================================] 3m0s / 3m0s

    checks.....................: 100.00% ✓ 792  ✗ 0
    data_received..............: 7.1 MB  40 kB/s
    data_sent..................: 135 kB  749 B/s
    http_req_blocked...........: avg=2.8ms      min=2.33µs     med=3.73µs     max=189.49ms   p(90)=4.98µs     p(95)=5.51µs
    http_req_connecting........: avg=948.06µs   min=0s         med=0s         max=91.92ms    p(90)=0s         p(95)=0s
  ✓ http_req_duration..........: avg=286.36ms   min=190.65ms   med=322.67ms   max=571.36ms   p(90)=353.58ms   p(95)=382.54ms
    http_req_receiving.........: avg=4.74ms     min=59.13µs    med=1.46ms     max=55.09ms    p(90)=6.1ms      p(95)=37.2ms
    http_req_sending...........: avg=24.47µs    min=14.84µs    med=22.26µs    max=113.77µs   p(90)=30.44µs    p(95)=34.22µs
    http_req_tls_handshaking...: avg=1.77ms     min=0s         med=0s         max=97.48ms    p(90)=0s         p(95)=0s
    http_req_waiting...........: avg=281.59ms   min=189.36ms   med=317.06ms   max=564.64ms   p(90)=348.91ms   p(95)=364.31ms
    http_reqs..................: 528     2.93333/s
    iterations.................: 273     1.516665/s
    vus........................: 1       min=1  max=10

Check out our CI integration guides




Frequently asked questions

What types of performance tests are good for automation?

In short, performance regression tests, where you want to make sure that a component or end-to-end system does not regress performance-wise as new code is pushed towards production.

How often should I run load tests?

As a general rule of thumb: if the user scenario duration is short (less than 30 seconds, say, if you're testing a single API endpoint or a microservice), we recommend running the tests on every commit, since the test won't have to run for long to gather enough samples for meaningful results. If the user scenario duration is longer (say, for end-to-end tests with more complex flows), we recommend running on the order of once a day, or whatever makes sense given how often you deploy to the pre-production environment where load tests are run.