Developer-centric load testing defined

We love fast apps, APIs and websites, and we know your users do too. We also love open source, and at LoadImpact we have a strong idea of what developer-centric load testing in the era of DevOps should look like. This drives everything we do. 

Load testing for developers and DevOps teams

In today's DevOps environments, load testing should start with the people who know the application best: the developers. It must be part of a continuous testing methodology. Developers, performance engineers and QA teams must collaborate on performance testing throughout the software development lifecycle.


We believe that developer tools should be open source to allow for a community to form and drive the project forward through discussions and contributions. This is why we built k6, the load testing tool we've always wanted ourselves!

Developer ergonomics is important

Local environment

As developers we love our own local setup. We spend a lot of time and effort making sure our favorite code editor and command-line shell are exactly the way we want them; anything else is subpar, a hindrance to our productivity. The local environment is king. It’s where we should be writing our load test scripts, and from where we should initiate our load tests.

Everything as code

DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs and tests. It’s all just “code”.


We check our code in at the entry point of the pipeline, version control (Git and GitHub in our case), and it’s then taken through a series of steps aimed at assuring quality and lowering the risk of releases. Automation helps us keep these steps out of our way while maintaining control through fast feedback loops (context switching is our enemy). If any step of the pipeline breaks (or fails), we want to be alerted in our communication channel of choice (in our case Slack), and it needs to happen as quickly as possible, while we’re still in the right context.


Our load testing scripts are no exception. We believe load test scripts should be plain code to get all the benefits of version control. There is no need for version-control-hostile, unreadable, tool-generated XML (yes, I’m looking at you, JMeter).


Load testing can then easily be added as another step in the pipeline, picking up the load test scripts from version control for execution. Truth be told though, if any step in the pipeline takes too long, it’s at risk of being “temporarily” turned off (“just for now, promise”). Whether it’s that Docker image build taking forever, or that 30-minute load test ;-)


Yes, load tests are naturally in the risk zone of being axed in the name of this-step-is-taking-too-long. Load tests need time to ramp up the simulated traffic to gain stable measurements that can actually be acted on. This is why we usually don't recommend running load tests on every code commit, but rather about once a day: when merging code into a release branch, or perhaps as a nightly build, so that you can have your snazzy load test report with your morning coffee before you’ve settled into your zone! There are, of course, exceptions. If you're running load tests against a very limited set of API endpoints, maybe a microservice, your tests might not need to run for more than a few minutes to produce enough data points to derive value from.
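As a concrete illustration of keeping the longer-running load test off the commit path, a scheduled CI job can run it nightly instead. This is a hypothetical GitLab-CI-style fragment; the job name, Docker image and script path are assumptions, not part of the original:

```yaml
# Hypothetical CI job: run the load test only on a scheduled (e.g. nightly)
# pipeline, not on every commit. Job name, image and path are assumptions.
load-test:
  stage: test
  image: loadimpact/k6:latest
  script:
    - k6 run loadtests/checkout.js
  only:
    - schedules
```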

Load testing against pre-production environments


Load testing should happen pre-production. Testing in production puts our income at risk. In production we monitor, for infrastructure failures and edge-case code issues.


Using an APM product in production is not a reason to skip load testing. An APM will not tell you how scalable your system is; it's just deeper monitoring.


The pre-production environment should mimic the production environment as closely as possible, code, data and all. Getting a representative data layer can be especially tricky, for a number of reasons:

  • Strict separation between production and pre-production environments can make database dumps and restores infeasible, and in some cases prohibited by regulation
  • Scrubbing of data sources can be non-trivial, and full coverage is hard to verify. We don’t want to end up sending thousands of emails to real customers just because we failed to scrub the data properly!
  • Today’s systems quickly turn complex, with many moving parts. When load testing we need to make sure that our system does the right thing(tm) with third-party integrations like credit card processors and email delivery services, which might need mock or pre-production values in the data layer

Simple testing is better than no testing

Load tests should mimic real-world users/clients as closely as possible, whether it’s the distribution of requests to your API or the steps of a user moving through a purchase flow. But let’s be pragmatic for a second: the 80/20 rule says that you get 80% of the value from 20% of the work, and a couple of simple tests are vastly better than no tests at all. Start small and simple, make sure you get something out of the testing first, then expand the test suite and add complexity until you feel you’ve reached the point where more effort spent on realism will not give enough return on your invested time.


There are two types of load tests that you could run. The “unit load test” and the “scenario load test”.


A unit load test tests a single unit, like an API endpoint, in isolation. You might primarily be interested in how the endpoint’s performance trends over time, and in being alerted to performance regressions. This type of test tends to be less complex than the scenario load test (described below), and so is a good starting point for your testing effort. Build a small suite of unit load tests first; then, if you feel you need more realism, move on to scenario testing.


A scenario load test is for testing a real-world flow of interactions: a user logging in, starting some activity, waiting for progress, and then logging out. The goal here is to make sure that the most critical flows through your app are performant. A scenario load test can favorably be composed from unit load tests.


import { check } from "k6";
import http from "k6/http";

export function testLogin(params) {
    let data = params || { username: "admin", password: "test" };
    let res = http.post("", data); // target URL elided in the original
    check(res, {
        "is status 200": (r) => r.status === 200
    });
}

export default function() {
    testLogin();
}

import { group } from "k6";
import { testLogin } from "./units/login.js";

// Load and parse the CSV of test users once, in the init context
let users = open("users.csv").trim().split("\n").map(function(line) {
    let parts = line.split(",");
    return { username: parts[0], password: parts[1] };
});

export default function() {
    group("Login", function() {
        let user = users[Math.floor(Math.random() * users.length)];
        testLogin({ username: user.username, password: user.password });
    });
}

Load testing should be goal oriented

A prerequisite for successful load testing is having goals. We might have formal SLAs to test against, or we might simply want our API, app or site to respond instantly (<=100 ms according to Jakob Nielsen); we all know how impatient we are as users waiting for an app or site to load.


Specify performance goals: above what level is a response time unacceptable, and what is an acceptable request failure rate? It is also good practice to make sure that your load test is functionally correct. Codify the performance and functional goals using thresholds and checks (which work like asserts).


For load testing to be automated, the tool needs to be able to signal to the runner, usually a CI server, whether the test passed or failed.


With your goals defined, this is straightforward: on failure, your thresholds will cause k6 to return a non-zero exit code.


import { check } from "k6";
import http from "k6/http";

export let options = {
    thresholds: {
        // URL tag value elided in the original
        "http_req_duration{url:}": ["p(95)<100"]
    }
};

export function testLogin(params) {
    let data = params || { username: "admin", password: "test" };
    let res = http.post("", data); // target URL elided in the original
    check(res, {
        "is status 200": (r) => r.status === 200
    });
}

export default function() {
    testLogin();
}
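That exit code can then gate a CI step. Here is a minimal sketch of the pattern; a stand-in function is used in place of the actual `k6 run` invocation (which would require k6 to be installed) so the snippet is self-contained:

```shell
# Sketch of CI gating on the load test's exit code.
run_load_test() {
    # In a real pipeline this would be: k6 run loadtest.js
    # Here we simulate a failed threshold with a non-zero exit code.
    return 99
}

if run_load_test; then
    result="passed"
else
    result="failed"
fi
echo "load test $result"   # a real CI server would fail the build here
```

The CI server needs no knowledge of k6 at all; it only inspects the process exit status, which is what makes this pattern portable across CI systems.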

Our commitment

We are committed to building the best load testing tool, k6, and to developing it in the open with the community; read our document on stewardship of the k6 project. We believe this is the necessary foundation for great developer tooling.


We aim to codify our 20 years of performance testing knowledge into algorithms that bring you smart performance analysis through our commercial offerings, like Insights, relieving you of much of the work that has traditionally been the performance engineer's responsibility: interpreting the vast amounts of data generated by a load test.


Let us be the performance engineering experts, that step in your automation pipeline that quietly ensures the performance of your systems is in check and screams loudly when it is not, so that you can focus on your application code!