A solution for the modern engineering team that wants to run performance regression tests as part of their test automation pipelines.
First you need to have a plan: what are you performance regression testing, what needs to be tested, and how often do you need to trigger the automated tests (see the FAQ below)?
To automate, you need to define your success criteria: what you consider a successful test run and what counts as a failed one. You do that with metric thresholds. If you have SLAs with your customers, they're a good starting point; otherwise, start out with a baseline test.
When you have a plan and your success criteria, you're ready to create your user scenario scripts and test configuration.
The final step is to integrate the load test into your CI tool. This means we need a way to trigger a load test run from a build step in your CI tool. See below for details on how to accomplish that.
The first thing you need to do is create a test configuration with thresholds: your success criteria or goals.
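The platform evaluates thresholds for you, but the pass/fail logic is simple enough to sketch. A minimal example in shell, assuming a hypothetical metric (95th-percentile user load time) and a made-up 500 ms budget; derive your own numbers from an SLA or a baseline run:

```shell
#!/bin/sh
# Minimal pass/fail sketch: compare a measured p95 user load time (in ms)
# against a threshold. The 500 ms budget is a made-up example value.
THRESHOLD_MS=500

check_threshold() {
  # $1 = measured p95 in milliseconds
  if [ "$1" -le "$THRESHOLD_MS" ]; then
    echo "PASS: p95 ${1}ms <= ${THRESHOLD_MS}ms"
  else
    echo "FAIL: p95 ${1}ms > ${THRESHOLD_MS}ms"
    return 1
  fi
}

check_threshold 420  # prints "PASS: p95 420ms <= 500ms"
```

The nonzero return code on failure is what lets a CI tool break the build automatically.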
To trigger test runs from your CI you need a Load Impact API token that we'll use together with our CLI. You get your token from within the app.
Download the Load Impact CLI and interact with the platform to trigger test runs based on code commits from CI.
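As a sketch of what the build step could look like: the wrapper below runs the CLI and breaks the build on a nonzero exit code. It assumes the CLI is installed and your API token is available in the environment; the test id 123456789 is a placeholder, and the exact arguments may differ for your CLI version, so check its help output. The command lives in a variable so the step can be reused or dry-run:

```shell
#!/bin/sh
# Sketch of a CI build step that gates the pipeline on a load test.
# Assumptions: the loadimpact CLI is on PATH, the API token is set in
# the environment, and 123456789 is a placeholder test id.
: "${LOAD_TEST_CMD:=loadimpact test run 123456789}"

run_load_test_step() {
  if $LOAD_TEST_CMD; then
    echo "load test passed, continuing pipeline"
  else
    echo "load test failed, breaking the build"
    return 1
  fi
}
```

Call `run_load_test_step` from your CI tool's build script; most CI tools fail the build step when the script exits nonzero.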
The final step, to complete the automation workflow, is to hook up the load test notifications to your chat tool. We support Slack, HipChat and Webhook.
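For the generic Webhook option, a notification is essentially an HTTP POST with a JSON body. A hedged sketch using Slack's incoming-webhook payload shape (a JSON object with a `text` field); the run id, result, and `$SLACK_WEBHOOK_URL` are placeholders, and in practice you configure the destination URL in the app rather than posting yourself:

```shell
#!/bin/sh
# Sketch: build a Slack incoming-webhook message for a finished test run.
# The {"text": ...} payload shape is Slack's incoming-webhook format;
# the run id and result values here are placeholders.
build_payload() {
  # $1 = test run id, $2 = result (e.g. passed/failed)
  printf '{"text": "Load test run %s finished: %s"}' "$1" "$2"
}

build_payload 123456789 passed
# To actually send it:
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d "$(build_payload 123456789 passed)" "$SLACK_WEBHOOK_URL"
```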
$ pip install loadimpact-cli
$ loadimpact test run
TEST_RUN_ID: 123456789
Initializing test ...
TIMESTAMP:                  VUs : reqs/s :       bandwidth :    user load time : failure rate :
2018-04-20 17:32:23+00:00   1.0   1.65880228503  444675.79207   -                -
2018-04-20 17:33:00+00:00   2.0   1.65655724996  444309.858371  -                -
2018-04-20 17:33:03+00:00   2.0   1.65174480411  442789.175918  150.41           -
2018-04-20 17:33:36+00:00   2.0   3.31532339156  889063.643269  150.595          -
2018-04-20 17:34:03+00:00   2.0   1.65779745031  444464.780093  124.19           -
2018-04-20 17:34:06+00:00   3.0   1.65459748111  443768.339145  119.52           -
...
In short: performance regression tests, where you want to make sure that a component or end-to-end system does not regress performance-wise as new code is pushed towards production.
As a general rule of thumb: if the user scenario duration is short (less than 30 s, say if you're testing a single API endpoint or a microservice), we recommend running the tests on every commit, since the test won't have to run for long to gather enough samples to extract meaningful value from the results. If the user scenario duration is longer (say, for end-to-end tests with more complex flows), we recommend on the order of once a day, or whatever makes sense given how often you deploy to the pre-production environment where load tests are run.
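The rule of thumb above can be sketched as a tiny helper a pipeline script could consult. The 30-second cut-off comes from the recommendation; the returned schedule labels are made-up names:

```shell
#!/bin/sh
# Sketch of the scheduling rule of thumb: short user scenarios run on
# every commit, longer ones on a daily schedule. The 30 s cut-off is
# the guideline above; "every-commit" and "daily" are made-up labels.
schedule_for() {
  # $1 = user scenario duration in seconds
  if [ "$1" -lt 30 ]; then
    echo "every-commit"
  else
    echo "daily"
  fi
}

schedule_for 10   # prints "every-commit" (single API endpoint)
schedule_for 600  # prints "daily" (end-to-end flow)
```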