Automated load testing

A solution for the modern engineering team that wants to run performance regression tests as part of its test automation pipeline.

The challenges of automating performance testing

Deciding how often and for how long to run performance tests can be tricky; it depends on the type of performance test and what you're testing. To integrate performance testing into an automation pipeline you also need to define and codify your success criteria: how will test success or failure be determined automatically, so that the pipeline can be halted upon a performance regression? Beyond that, the actual integration with your CI tool needs to be built, test environments must be made available, and notifications for failed load tests must be set up.

How it works

Automating your load testing comes down to a few high-level steps that you need to go through.

Planning

First, you need a plan: what are you performance regression testing, which components or flows need to be covered, and how often should the automated tests be triggered? (See the FAQ below.)

Defining goals

To automate, you need to define your success criteria: what you consider a successful versus a failed test run. You do that with metric thresholds. If you have SLAs with your customers, those are a good starting point; otherwise, start out with a baseline test to establish your normal performance.
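
For example, thresholds on the metrics shown in the sample terminal output further down might look like this (illustrative values only; the exact metric names and limits depend on your test configuration):

user load time: 95th percentile below 1000 ms
failure rate: below 1%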

Create scripts and test configuration

When you have a plan and your success criteria, you're ready to create your user scenario scripts and test configuration.
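
User scenarios in Load Impact are written in Lua. As a minimal sketch, a scenario that fetches one page and simulates think time might look like this, using the http, log and client APIs available to scenario scripts (the URL is a placeholder; a real scenario would model an actual user flow):

-- Fetch the page and log the response status. The URL is a placeholder.
local response = http.get("https://your-app.example.com/")
log.info("Status code: " .. response.status_code)
-- Simulate user think time before the next iteration
client.sleep(math.random(5, 15))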

Integration with CI tool

The final step is to integrate the load test into your CI tool: you need a way to trigger a load test run from a build step. See below for details on how to accomplish that.

Integrating with your favorite CI tool

Follow four general steps for any CI tool integration, or check out our guides for some common CI tools. 

Define success criteria with thresholds

The first thing you need to do is create a test configuration with thresholds: the success criteria or goals you defined above.

Get your API token

To trigger test runs from your CI tool you need a Load Impact API token, which is used together with our CLI. You can find your token within the app.
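
Once you have a token, make it available to the CLI in your CI environment, for example via an environment variable. A minimal sketch (the token value is a placeholder, and the exact variable or config file the CLI reads should be confirmed in its documentation):

Terminal

$ export LOADIMPACT_API_TOKEN=your-api-token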

Download our CLI to your CI tool

Download the Load Impact CLI in your CI tool's environment and use it to interact with the platform, triggering test runs based on code commits from CI.
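
As a sketch, a build step might install the CLI and trigger a test like this (the test ID is a placeholder; check the CLI's help output for the exact arguments your version expects, and make the step fail the build when the run breaches your thresholds):

Terminal

$ pip install loadimpact-cli
$ loadimpact test run <your-test-id>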

Set up notifications

The final step, to complete the automation workflow, is to hook up the load test notifications to your chat tool. We support Slack, HipChat and Webhook.
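
If you use Slack, the integration is typically configured with an incoming-webhook URL. One way to sanity-check such a URL before pasting it into the notification settings (the URL below is a placeholder):

Terminal

$ curl -X POST -H 'Content-type: application/json' \
    --data '{"text":"Load test notification test"}' \
    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX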

[Screenshot: test configuration with thresholds]
[Screenshot: API token in the app]
Terminal

$ pip install loadimpact-cli
$ loadimpact test run 
TEST_RUN_ID:
123456789
Initializing test ...
TIMESTAMP:                VUs [1]:         reqs/s [1]:      bandwidth [1]:   user load time [1]: failure rate [1]:
2018-04-20 17:32:23+00:00 1.0              1.65880228503    444675.79207     -                   -
2018-04-20 17:33:00+00:00 2.0              1.65655724996    444309.858371    -                   -
2018-04-20 17:33:03+00:00 2.0              1.65174480411    442789.175918    150.41              -
2018-04-20 17:33:36+00:00 2.0              3.31532339156    889063.643269    150.595             -
2018-04-20 17:34:03+00:00 2.0              1.65779745031    444464.780093    124.19              -
2018-04-20 17:34:06+00:00 3.0              1.65459748111    443768.339145    119.52              -
...
[Screenshot: notification settings]

Check out our CI integration guides

Frequently asked questions

What types of performance tests are good for automation?

In short: performance regression tests, where you want to make sure that a component or an end-to-end system does not regress performance-wise as new code is pushed towards production.

How often should I run load tests?

As a general rule of thumb: if the user scenario duration is short (less than 30 seconds, say when you're testing a single API endpoint or a microservice), we recommend running the tests on every commit; the test won't have to run for long to gather enough samples to extract meaningful value from the results. If the user scenario duration is longer (say, for end-to-end tests with more complex flows), we recommend running on the order of once a day, or whatever makes sense given how often you deploy to the pre-production environment where your load tests run.
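
For the once-a-day case, the simplest trigger is a scheduled job: either your CI tool's built-in build scheduler or a plain cron entry on a machine with the CLI installed and the API token configured. A sketch of the cron variant (the test ID is a placeholder):

# crontab entry: trigger the load test every night at 02:00
0 2 * * * loadimpact test run <your-test-id>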