
Performance Testing a Spring Boot Application with Gatling

In this blog, we'll explore Gatling, a powerful load-testing tool, by testing a simple Spring Boot application. We'll set up a performance test for a sample REST API endpoint and walk step by step through integrating Gatling with the project and configuring a load scenario.


What is Gatling?

Gatling is a highly performant open-source load-testing tool. It helps simulate high-traffic scenarios for your APIs, ensuring your application can handle the expected (or unexpected) load efficiently.


1. Setting Up the Spring Boot Project

We'll create a Spring Boot REST API with a simple /search endpoint that accepts two query parameters: query and category.

Step 1.1: Create the REST Controller

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api")
public class SearchController {

    @GetMapping("/search")
    public ResponseEntity<String> search(
            @RequestParam String query,
            @RequestParam String category) {
        // Simulate a simple search response
        String response = String.format("Searching for '%s' in category '%s'", query, category);
        return ResponseEntity.ok(response);
    }
}


Step 1.2: Add Dependencies in pom.xml

Make sure your project has the following dependencies:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>


Start the Spring Boot application and ensure the /api/search endpoint is reachable, e.g., http://localhost:8080/api/search?query=phone&category=electronics.
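Before wiring up Gatling, it can help to verify the request/response shape of the endpoint in isolation. The sketch below is a hypothetical smoke test, not part of the application: it stands in for the Spring endpoint with the JDK's built-in com.sun.net.httpserver, mirrors the controller's String.format response, and calls it once with java.net.http.HttpClient.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashMap;
import java.util.Map;

public class SearchSmokeTest {

    // Spin up a stand-in for /api/search, call it once,
    // and return "<status> <body>" for inspection.
    static String smokeTest() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/search", exchange -> {
            // Parse query parameters, e.g. "query=phone&category=electronics"
            Map<String, String> params = new HashMap<>();
            for (String pair : exchange.getRequestURI().getQuery().split("&")) {
                String[] kv = pair.split("=", 2);
                params.put(kv[0], kv[1]);
            }
            // Mirror the controller's response format
            byte[] body = String.format("Searching for '%s' in category '%s'",
                    params.get("query"), params.get("category")).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "http://localhost:" + port + "/api/search?query=phone&category=electronics")).build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode() + " " + response.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(smokeTest()); // 200 Searching for 'phone' in category 'electronics'
    }
}
```

Once the real application responds the same way, it is ready to be put under load.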


2. Setting Up Gatling

Step 2.1: Add Gatling Plugin to Your Project

If you use Maven, add the Gatling plugin to your pom.xml:


<build>
    <plugins>
        <plugin>
            <groupId>io.gatling</groupId>
            <artifactId>gatling-maven-plugin</artifactId>
            <version>4.0.0</version>
            <executions>
                <execution>
                    <id>gatling-test</id>
                    <phase>integration-test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Note that since gatling-maven-plugin 3.0 the goal is named test (the older execute goal no longer exists). You will also need the Gatling libraries on the test classpath, e.g. the io.gatling.highcharts:gatling-charts-highcharts artifact with test scope, so that your simulations compile.


The Gatling Maven plugin does not scaffold a directory structure for you, so create the folders by hand:

mkdir -p src/test/scala src/test/resources

By convention, Scala simulation scripts live under src/test/scala, and feeder files go on the test classpath under src/test/resources.


3. Writing a Gatling Simulation

Step 3.1: Write the Simulation Class

Create a file SearchSimulation.scala under src/test/scala with the following content:


import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class SearchSimulation extends Simulation {

  // Base URL for your Spring Boot app
  val httpProtocol = http
    .baseUrl("http://localhost:8080/api") // Change this as per your app
    .acceptHeader("application/json")    // Accept JSON response

  // CSV Feeder to supply data for queries
  val searchFeeder = csv("search_terms_with_categories.csv").circular

  // Scenario definition
  val scn = scenario("Search Simulation")
    .feed(searchFeeder) // Attach the feeder
    .exec(
      http("Search Request")
        .get("/search") // Endpoint path
        .queryParam("query", "#{query}")       // Dynamically inject "query"
        .queryParam("category", "#{category}") // Dynamically inject "category"
        .check(status.is(200)) // Ensure the response is 200 OK
    )

  // Load simulation setup
  setUp(
    scn.inject(
      rampUsers(100).during(30.seconds) // Gradually add 100 users over 30 seconds
    )
  ).protocols(httpProtocol)
}

Step 3.2: Create a CSV Feeder

Create a file search_terms_with_categories.csv under src/test/resources:

query,category
phone,electronics
laptop,computers
book,education
shoes,footwear


This feeder will provide dynamic data for the simulation.
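To make the feeder semantics concrete, here is a minimal sketch (plain JDK Java, not Gatling's implementation) of what csv(...).circular does: each virtual user pulls the next CSV row as a map of header to value, and the feeder wraps around to the first row instead of running dry.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CircularFeederSketch {

    // Minimal sketch of a circular CSV feeder: cycle through data rows forever.
    static Iterator<Map<String, String>> circularFeeder(List<String> csvLines) {
        String[] headers = csvLines.get(0).split(",");
        List<String> rows = csvLines.subList(1, csvLines.size());
        return new Iterator<>() {
            int i = 0;

            public boolean hasNext() {
                return true; // circular: never exhausted
            }

            public Map<String, String> next() {
                String[] values = rows.get(i % rows.size()).split(",");
                i++;
                Map<String, String> record = new LinkedHashMap<>();
                for (int c = 0; c < headers.length; c++) {
                    record.put(headers[c], values[c]);
                }
                return record;
            }
        };
    }

    public static void main(String[] args) {
        var feeder = circularFeeder(List.of(
                "query,category",
                "phone,electronics",
                "laptop,computers"));
        // Three users feed three records; the third wraps back to the first row
        for (int user = 0; user < 3; user++) {
            System.out.println(feeder.next());
        }
    }
}
```

In the simulation, each record's values become session attributes, which is why #{query} and #{category} resolve to a different row per user.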

4. Running the Gatling Test

Run the simulation using the following Maven command:

mvn gatling:test


Once the test completes, Gatling generates a detailed HTML report in the target/gatling folder. Open the report to see performance metrics like response time, throughput, and error rates.

5. Key Concepts Explained

  1. Scenario: The scenario in Gatling defines user behavior (e.g., feeding data, sending HTTP requests).

  2. Feeder: Feeds dynamic data (from a CSV, JSON, or database) to requests. In this case, search_terms_with_categories.csv feeds values for query and category.

  3. Load Simulation: The setUp block determines how many users execute the scenario and at what rate (e.g., 100 users ramping up over 30 seconds).

  4. HTTP Protocol: Defined globally using http.baseUrl() for all requests. You can also customize headers, timeouts, etc.
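To see what rampUsers(100).during(30.seconds) means in concrete numbers, the sketch below (plain Java arithmetic, an approximation of Gatling's open-model injection rather than its actual scheduler) computes when each virtual user starts under an even ramp.

```java
public class RampUsersSketch {

    // rampUsers(n).during(d) starts users at an even pace:
    // roughly one new user every d / n.
    static int startTimeMillis(int userIndex, int users, int durationMillis) {
        return userIndex * (durationMillis / users);
    }

    public static void main(String[] args) {
        int users = 100;
        int durationMillis = 30_000; // 30 seconds
        System.out.println("gap between user starts: " + durationMillis / users + " ms");
        for (int u = 0; u < 3; u++) {
            System.out.println("user " + (u + 1) + " starts at t="
                    + startTimeMillis(u, users, durationMillis) + " ms");
        }
    }
}
```

So with this profile a new virtual user arrives roughly every 300 ms, which is gentler on the application than injecting all 100 users at once.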






