Java: How to run component tests with code coverage, using Gradle and Docker

Joe Honour · Published in Level Up Coding · 18 min read · Oct 22, 2020

Tech stack: Java 14, Spring, Docker, Gradle

Figure 1: the guide's finished solution. Start by unit testing business functionality, collecting code coverage. After this, start the application in Docker with Jacoco reporting enabled. Run the component tests, collecting code coverage from the application. Once the tests pass, extract the coverage report. This means we get all the benefits of component testing, while keeping the coverage and reporting that unit testing offers.

When it comes to test approaches for modern micro-service based architectures, I often feel detached from the norm. In most projects I find that a heavy unit test approach is adopted, producing a test pyramid similar to Figure 2.

Figure 2: a test pyramid showing how unit tests form the most common base of testing, followed by component tests, then integration and end to end tests.

Now, this approach does have its benefits; unit tests can:

  • provide quick feedback on errors or problems introduced by changes.
  • provide a structure to help you write maintainable and testable code. For instance, the fact you can test a class in isolation means its dependencies are clear and well separated.
  • provide easy to monitor code coverage metrics, allowing you to identify edge cases you have missed in testing.

However, how realistic are these benefits when we turn to a micro-service based architecture?

Micro-services allow the delivery of separate pieces of well isolated functionality, providing a well known public contract to interact with them (think REST and OpenAPI, gRPC and Protobuf, etc). With this approach we expect each service to contain minimal business logic, with a lot of services simply providing a clean abstraction over some data storage. This means most micro-services spend their time either:

  • composing request/responses to other services in order to bring a set of services together to form a full business process.
  • storing data to some data layer, for instance providing a CRUD abstraction over a database.

Having worked on a number of micro-service based projects, I usually observe that most bugs now occur around integration and storage, rather than within the individual pieces of business functionality. Let me give you a few examples of the most common issues I find:

  • when composing services together, the wrong request structure is sent, e.g. a required field is missing.
  • the service gives a response that the calling service does not know how to handle, e.g. we miss the fact that the downstream service can return a 400 and don’t account for this case.
  • the service stores the wrong thing in the database. I know this sounds arbitrary, and it is not exclusive to micro-services; it just becomes more prevalent when you work on a service that only stores data.
  • the service is not configured properly to actually talk to a downstream service or database. For example, a connection string is missing, or a library hasn’t been configured correctly (e.g. JPA repositories), causing the service to fail at startup.

With this in mind, let’s think about why unit tests may not be well placed to solve these issues. A unit test normally tests at the class level, mocking dependent classes. This is usually fine when you are spending most of your time on business logic, as there’s very little to mock. However, in this new micro-service world, most of our issues are at the integration points. Therefore, we don’t get the same level of value anymore, as our tests spend more time mocking things than they do asserting that the production code behaves correctly.

For example, a unit test that asserts we call a mocked ‘save to db’ function gives us almost no value compared to actually storing data to a real database and asserting we can retrieve it. The mocked test won’t catch any bugs to do with connection details, library setup or assumptions, or whether the method we are calling even does the thing we want it to!
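
To illustrate the point, here is a hypothetical sketch (not from the guide's codebase, all names are illustrative) of a hand-rolled mock repository. The "test" passes, yet nothing real is exercised: no connection details, no schema, no actual persistence.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical repository abstraction, as a unit test would see it.
interface UserRepository {
    void save(String user);
}

// A mock that only records calls; nothing is actually persisted.
class MockUserRepository implements UserRepository {
    final List<String> savedUsers = new ArrayList<>();

    @Override
    public void save(String user) {
        savedUsers.add(user);
    }
}

public class MockedSaveExample {
    public static void main(String[] args) {
        MockUserRepository repo = new MockUserRepository();
        repo.save("alice");

        // The assertion passes, but it proves nothing about the real
        // database, connection strings, or library configuration.
        if (!repo.savedUsers.contains("alice")) {
            throw new AssertionError("save was not recorded");
        }
        System.out.println("mocked save asserted: " + repo.savedUsers);
    }
}
```

A component test replaces this entire class of assertion with a real round-trip through the service's public API and data layer.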

So how can we get the best value out of our tests, while still trying to keep the benefits of unit testing?

Component tests.

A component test, for the purpose of this guide, will be defined as:

An external test against the public API of a service. A real data layer, if required, should be used. Calls to downstream applications should be sent to external services, that are controllable via the test, to provide the correct response for a specific use case.
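
To make "controllable via the test" concrete, a downstream stub can be as small as the JDK's built-in HTTP server. This is a hypothetical sketch, not part of the guide's codebase: the stub plays the downstream dependency, and the test decides exactly what it responds with.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StubDownstreamExample {

    public static String callStubbedDownstream() throws IOException, InterruptedException {
        // start a stub on a random free port, returning a canned payload
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/users/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"stubbed\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        stub.start();

        try {
            // in a real component test the service under test would make this
            // call; here we make it directly to show the stub in action
            int port = stub.getAddress().getPort();
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/users/42")).build(),
                    HttpResponse.BodyHandlers.ofString());
            return response.body();
        } finally {
            stub.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callStubbedDownstream());
    }
}
```

In practice you would more likely reach for a dedicated tool, but the principle is the same: the test owns the downstream response for each use case.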

With this approach, a component test allows the service to run in a production-like mode, enabling our test suite to provide the most accurate level of assurance against the public API of the service. This gives some great benefits:

  • your tests prove the libraries, dependencies, configuration and deployment of your service are correct and repeatable.
  • your tests help guide your consumers as to how to call your service, and what to expect for each request/response, as the tests act like any other consumer.
  • if a bug is noticed during integration, you can add a component test with the request and assert the expected response. You are then in a strong position to prove any fix you make, acting as the reporter of the bug did, before you start looking into the codebase as to where the problem might lie.
  • automating the configuration and running of your service becomes everyone’s job, as you now need to do this every time you run the component tests.
  • you can easily refactor code within your service, assuming it doesn’t change the public API, without needing to rewrite all your tests. I have lost count of the number of times I’ve changed a single class only to find it’s involved in test setup across hundreds of unit tests that now don’t even compile. With component tests, you are validating behaviour from an external point, so which classes used to meet that behaviour becomes irrelevant.
  • if you do refactor something, and a component test breaks, you know you have broken an assumption currently relied upon by your consumers. This means you can accurately look to see the effects of your change, and whether you need to inform consumers of the change before this reaches a deployed environment.

Component tests certainly sound like they solve a lot of problems, so why have we traditionally done less of them than unit tests?

Well, this comes back to the benefits of unit testing. Until recently, starting a service and its database in a reliable way has been difficult, and often slow. You don’t want to rely on a test pack that takes 10 minutes to start, only works on some people’s machines, and has flaky or unreliable tests. However, with the introduction of containers we can now deploy multiple services together locally, in a controlled and reliable way. Not only that, Docker provides networking alongside orchestration, meaning we gain control of the entire service’s environment. This makes component tests easier and faster to implement. With these benefits in mind, I personally now think the original test pyramid shown at the beginning of this guide should mutate into something like what’s shown in Figure 3.

Figure 3: shows the altered test pyramid. Component tests now form the base of the test approach. However, we still fall back to unit testing when needing to assure pure business functionality.

As you can see, we drastically reduce the number of unit tests, but we don’t remove them altogether. Unit tests are still incredibly valuable, but only for testing the business process elements of services, not everything a service does. For example, you may find you have a few core calculations that are a great candidate for unit testing, but the results are stored to a database, which would be better tested at the component level.
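
For instance, a pure calculation like the hypothetical discount function below is a great unit test candidate; persisting its result would be better covered by a component test (the class name and logic here are illustrative only, not part of the guide's service):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DiscountCalculator {

    // Pure business logic: no I/O, no mocks needed, fast to unit test.
    public static BigDecimal applyDiscount(BigDecimal price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price.multiply(BigDecimal.valueOf(100 - percent))
                .divide(BigDecimal.valueOf(100), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // 25% off 20.00 -> 15.00
        System.out.println(applyDiscount(new BigDecimal("20.00"), 25));
    }
}
```

Storing that 15.00 to the database, and reading it back through the public API, is exactly the part a component test should own.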

With this driving force behind wanting to focus more on component testing, we need to find a way to reliably implement, and execute, component tests alongside unit tests.

The rest of this guide will show the implementation of component tests for a simple REST service, using Gradle, Docker, and Spring.

Technologies needed

In order to run the project, the following will need to be installed on your machine:

  • Java 14: latest and greatest version of Java at the time of writing (though you can adjust the version of Java needed for your application).
  • Gradle: the project uses the Gradle wrapper, so no separate installation is required.
  • Docker: the latest versions of docker and docker-compose are required to run the service.

This article extends the following guide on building a contract first Spring service. The instructions will expect this project setup, but you can happily add the code snippets to your own Gradle project, filling in any blanks from the previous guide where needed. If you would like to jump to the finished code sample, you can find it here.

1. Adding Component Tests

The first step to getting component tests running against the service, is providing a location for them to be stored. The project structure we will end up with is:

  • src/main: production source code
  • src/test: unit tests
  • src/componentTest: component tests.

Step 1: add an initial component test

If you have the codebase from the previous guide, we will have a single GET endpoint that returns a Hello World string. In order to test this, add the following class:

/src/componentTest/java/HelloWorldComponentTest.java

The class should contain the following:

import org.junit.jupiter.api.Test;

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class HelloWorldComponentTest {

    @Test
    public void shouldReturnHelloWorld() {
        // Arrange

        // Act
        var result = makeGetRequest("http://localhost:4000/hello");

        // Assert
        assertTrue(result.contains("Hello World!"));
    }

    private String makeGetRequest(String uri) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(uri))
                .build();

        try {
            return client
                    .send(request, HttpResponse.BodyHandlers.ofString())
                    .body();
        } catch (IOException | InterruptedException e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }
}

The test sends a GET request, expecting the service to be running, parses the response to a String, and then asserts the response contains ‘Hello World’.

You will notice that, if you are using IntelliJ or another IDE, this file is not picked up correctly or runnable. This is because we haven’t added the /src/componentTest/java directory to the project. In order to do this, we need to add the following to the build.gradle file:

sourceSets {
    componentTest {
        java.srcDir "$projectDir/src/componentTest/java"
        resources.srcDir "$projectDir/src/componentTest/resources"
        compileClasspath += main.output + test.output
        runtimeClasspath += main.output + test.output
    }
}

The componentTest source set specifies the directories for Gradle to include during the build process. You can see we configure this to the newly created src/componentTest/java directory. With the directory now part of the project’s sources, we need to make sure it has access to any dependencies it needs:

configurations {
    componentTestImplementation.extendsFrom testImplementation
    componentTestRuntime.extendsFrom testRuntime
}

This allows the component test execution targets to extend the equivalent test targets, giving them access to the same dependencies as the unit tests. Now we can build the component tests. The final thing to add is the task itself to run the tests:

// run the componentTest/** tests
task componentTest(type: Test) {
    description = 'Runs component tests'
    group = 'verification'
    testClassesDirs = sourceSets.componentTest.output.classesDirs
    classpath = sourceSets.componentTest.runtimeClasspath
    outputs.upToDateWhen { false }
}

// print tests out to console as they run
def testLogging = {
    afterTest { desc, result ->
        logger.quiet "Test ${desc.name} [${desc.className}] with result: ${result.resultType}"
    }
}

test {
    configure testLogging
    useJUnitPlatform()
}

componentTest {
    configure testLogging
    useJUnitPlatform()
}

This adds a componentTest task, which uses the component test source set when searching for tests. As an added extra, I have also added some logging to both test targets to aid in debugging any failed tests.

With these additions to the build.gradle file, if you refresh your IntelliJ config, or run:

./gradlew clean build

you will now see the componentTest directory correctly imported and usable.

Step 2: running the test

In order to run the component tests we need to start the application. To do this, from a command line, run the following:

./gradlew bootRun

You should see Spring start logging to stdout, and the service will now be listening on port 4000.

With the service running, from another command prompt, run:

./gradlew componentTest

This will execute the component test task we added, and in the logs you should see a message similar to:

Test shouldReturnHelloWorld() [org.guardiandev.helloservice.HelloWorldComponentTest] with result: SUCCESS

At this point we have a way to run component tests through Gradle. However, we don’t want to manually start and stop the service ourselves every time we want to run them. In an ideal world, the execution of the component tests would also incorporate any setup and teardown needed. To achieve this, we will integrate Docker into the build.

2. Running the service in Docker using Gradle

In order to provide docker support for the application, we will need to add the following files to the root of the project:

  • Dockerfile : contains the Docker build instructions
  • .env : contains environment variables we can use in docker compose files.
  • docker-compose.yml : will contain any dependencies the application needs, for instance a database.
  • docker-compose.override.yml : will contain the configuration to run the service.

The reason we have the service in an override file, rather than all in the docker-compose.yml file, is so that we can start the dependencies without turning the service on. This means we can then run/debug the service in IntelliJ if we need to, while still using docker to run anything external the service relies upon.
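
Assuming the standard docker-compose file-merging behaviour, the two modes look like this:

```shell
# start only the dependencies; the service itself stays off,
# so you can run/debug it from the IDE
docker-compose -f docker-compose.yml up -d

# start everything: docker-compose.override.yml is merged in
# automatically when no -f flags are given
docker-compose up -d
```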

As the application does not require any external dependencies currently, the docker-compose.yml file only needs to contain:

version: '3.7'

networks:
  default:
    name: contract-first-service

This will simply name the default network, making it easier to identify this service if we need to debug any Docker compose problems.

With the initial docker-compose file implemented, let’s add the docker-compose.override.yml file, which we use to start the application.

version: '3.7'

services:
  contract-first-service:
    image: ${SERVICE_GROUP}/${SERVICE_NAME}:${SERVICE_TAG}
    ports:
      - 4000:4000
    environment:
      - SPRING_PROFILE=${SPRING_PROFILE}

This override file, which is picked up by default when running docker-compose up, references environment variables to get the docker image name, tag, and profile. This means the following .env file is required:

SERVICE_TAG=latest
SERVICE_GROUP=com.guardiandev
SERVICE_NAME=helloservice
SPRING_PROFILE=default

You can see we currently have default values for the variables we want to reference in docker-compose. However, we are about to add Gradle tasks to interact with Docker, where these variables will be replaced.

To support building things efficiently with Docker, we need a way to download all the dependencies the application needs. To start with, add the following config to your build.gradle file:

Note: with things like the configurations block, you should add the contents of the snippet into the existing configurations (don’t define configurations twice). I’m just showing you the bits of configuration necessary for the current task, not the entire configuration each time.

configurations {
    downloadDependenciesConfig.extendsFrom implementation, testImplementation, testRuntime
}

// pull dependencies needed by the application
task downloadDependencies(type: Exec) {
    configurations.downloadDependenciesConfig.files
    commandLine 'echo', 'Downloaded all dependencies'
}

This provides us with a task called downloadDependencies which, when we execute it, will force the resolution of the implementation, testImplementation, and testRuntime dependencies.

The next step is to add a task to build the service within Docker:

task buildDockerimage(type: Exec) {
    workingDir "$projectDir"
    commandLine 'docker', 'build', '.', '-t', "$serviceGroupId/$serviceApplicationName:$serviceVersion"
}

This will take the Dockerfile (which I will show you in the next part of the guide) and build the application image. We tag the image with the same groupId, name, and version as the JAR, making it easy to line up what version of the code is running inside a container.

The next step is to add support for composing the application and its dependencies up, and down:

task composeUp(type: Exec) {
    dependsOn buildDockerimage
    workingDir "$projectDir"
    environment << [SERVICE_TAG: "$serviceVersion", SERVICE_GROUP: "$serviceGroupId", SERVICE_NAME: "$serviceApplicationName", SPRING_PROFILE: "docker"]
    commandLine 'docker-compose', 'up', '-d'
}

task composeDependenciesUp(type: Exec) {
    workingDir "$projectDir"
    commandLine 'docker-compose', '-f', 'docker-compose.yml', 'up', '-d'
}

task composeDown(type: Exec) {
    workingDir "$projectDir"
    commandLine 'docker-compose', 'down', '-t', '60', '-v'
}

These 3 tasks use the docker-compose files we have in the root of the project to:

  • composeUp: turn on the service, ensuring we have built it first, along with its dependencies.
  • composeDependenciesUp: turn on just the dependencies the service needs to run.
  • composeDown: turn off all resources currently running in docker compose.

With this, we now have the ability to use Gradle to orchestrate the turning on, and off, of the service. The final step is to hook this into the component test task, so we can:

  • start the service and its dependencies
  • wait for the service to be ready
  • run the component tests
  • cleanup any resources we started

To achieve this we add the final piece of configuration:

task waitForService {
    doLast {
        def responseCode = null

        while (responseCode != 200) {
            sleep 1000
            try {
                def req = "http://localhost:4000/actuator/health".toURL().openConnection()
                responseCode = req.getResponseCode()
                logger.log(LogLevel.INFO, "Response returned from service $responseCode")
            }
            catch (Exception e) {
                logger.log(LogLevel.INFO, "Failed to connect to service")
            }
        }
    }
}

task componentTestDocker {
    dependsOn componentTestClasses, composeUp, waitForService
    doLast {
        componentTest.executeTests()
    }
    finalizedBy composeDown
}

In this snippet, we add a task that continues to poll the service, waiting for a 200 response from the health endpoint. We then wire all of the tasks together into the componentTestDocker task, which:

  • compiles the component tests
  • calls composeUp to turn the service and dependencies on
  • calls waitForService to make sure we don’t execute the tests before the service has started
  • executes the component tests
  • tidies up by calling composeDown, once the tests have finished

These are all the changes we need to add to the build.gradle file, in order to make use of docker.

The final step is to implement the Dockerfile, as shown below:

# build custom JRE
FROM openjdk:14-alpine AS jre-build
WORKDIR /app

RUN jlink --verbose \
--compress 2 \
--strip-java-debug-attributes \
--no-header-files \
--no-man-pages \
--output jre \
--add-modules java.base\
,java.logging\
,java.xml\
,jdk.unsupported\
,java.sql\
,java.naming\
,java.desktop\
,java.management\
,java.security.jgss\
,java.instrument

# build stage: reuse the JDK image (the Gradle wrapper will download Gradle itself)
FROM jre-build AS build
WORKDIR /app

# copy gradle only files over
COPY gradlew gradlew
COPY gradle/ gradle/
RUN ./gradlew --version

# copy project build files over
COPY build.gradle build.gradle
COPY settings.gradle settings.gradle
COPY gradle.properties gradle.properties

# download dependencies only
RUN ./gradlew downloadDependencies

# copy full solution and build
COPY . .
RUN ./gradlew build

# take a smaller runtime image for the final output
FROM alpine:latest

COPY --from=jre-build /app/jre /jre
COPY --from=build /app/build/libs/helloservice-0.0.1.jar /app.jar

ENV SPRING_PROFILE=default
ENV JAVA_TOOL_OPTIONS=

EXPOSE 4000
ENTRYPOINT /jre/bin/java -Dspring.profiles.active=$SPRING_PROFILE -jar app.jar

The docker build works as follows:

  1. jre-build uses the Java module system (via jlink) to build a smaller JRE, only including the parts we need to run the Spring application.
  2. the next step copies the Gradle configuration files into the image, before running the downloadDependencies task. This creates a cacheable layer in the image with all the dependencies downloaded. Thus, if we don’t edit the Gradle files, we won’t have to download any packages when rebuilding the image.
  3. with the dependencies downloaded, the next step copies all the files in the solution over, and builds the JAR.
  4. finally, we take an alpine runtime environment (to create the smallest possible runtime image), copy over the JRE and the application JAR, and setup the entry-point to run the application on startup.

With this all implemented, you should now be able to run the following task to run the component tests:

./gradlew componentTestDocker

If you have struggled with anything here, please refer to the reference implementation, as I know I have just outlined a lot of configuration :)

Given you have this working, you now have:

  • a test approach that can run both unit tests and component tests
  • an application running in docker, with docker-compose being used to orchestrate dependencies for local development and test.
  • Gradle being used as the tool to control the build process of the entire service, keeping config in a single place.

This is a great point to be at when delivering REST-based micro-services, and I would not blame you if you stopped here and called it a day. However, there is one final thing we can do to really provide the best test approach for the service: adding code coverage reporting to both the component and unit tests.

3. Collecting code coverage with Jacoco

Jacoco is a plugin we can use with Gradle to capture code coverage of a running JVM, and then generate a report. In order to collect code coverage with Jacoco, we need to collect a coverage report for both the unit tests and the component tests. The unit tests are pretty simple, and we will get this for free when we add the Jacoco plugin. However, getting code coverage out of the docker image, for when we run the component tests, will be slightly trickier.

Before we get stuck into the implementation, let me give you a quick overview of how Jacoco works and how we will make use of it.

Jacoco comes in 2 parts:

  1. Jacoco supplies a Java agent which attaches to the JVM of the running application and records all the classes/lines executed. Once the JVM terminates, the agent generates an exec file, which contains the coverage report of everything executed in the JVM’s lifetime.
  2. the Jacoco plugin takes the coverage report and generates a human readable report, with a filter to exclude any libraries or dependencies.
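
For reference, attaching the agent to a JVM by hand (outside of Docker) would look something like this; the JAR name and paths here assume the setup from later in this guide:

```shell
# attach the Jacoco agent and write coverage to an exec file on shutdown
java -javaagent:build/jacoco/org.jacoco.agent-runtime.jar=destfile=build/jacoco/manual.exec \
     -jar build/libs/helloservice-0.0.1.jar
```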

Therefore, in order to collect coverage from the running application, when we execute the component tests we will:

  • provide a Gradle task to download the Jacoco agent to a known location, so we can mount it within the docker container.
  • provide a new docker-compose.jacoco.override.yml file in order start the application with the Jacoco agent attached. This means when we run the container, the Jacoco agent will collect all the code executed.
  • add Gradle tasks to run the service in this coverage mode, using the new docker compose override file. With the service running, we will then execute the component test. Once they are complete, we will generate a report by reading the Jacoco coverage report.

So… armed with the knowledge of how Jacoco works, let’s get stuck in. First, let’s add the new docker-compose.jacoco.override.yml file to the root of the project:

version: '3.7'

services:
  contract-first-service:
    image: ${SERVICE_GROUP}/${SERVICE_NAME}:${SERVICE_TAG}
    ports:
      - 4000:4000
    volumes:
      - type: bind
        source: ./build/jacoco
        target: /jacoco
    environment:
      - JAVA_TOOL_OPTIONS=-javaagent:/jacoco/org.jacoco.agent-runtime.jar=destfile=/jacoco/componentTest.exec
      - SPRING_PROFILE=${SPRING_PROFILE}

This override file turns the service on, same as before. However, you can see we do 2 new things:

  1. we bind the build/jacoco folder into the container at the /jacoco location. This means, if we place the Jacoco agent in the build/jacoco location, we can access it when we start the application. Also, if we write the coverage report to this location, we will have access to it outside of the container.
  2. we override JAVA_TOOL_OPTIONS to attach the Jacoco agent to the application’s JVM, and output the coverage report to /jacoco/componentTest.exec. This means the coverage report, due to the volume binding in step 1, will end up in build/jacoco when the service shuts down.

Now that we have a way to run the service with Jacoco enabled, let’s look at what we need to add to the build.gradle file to enable this.

First, we need a way to download the Jacoco agent.

configurations {
    downloadJacoco
}

dependencies {
    downloadJacoco "org.jacoco:org.jacoco.agent:0.8.5:runtime"
}

task copyJacocoAgent(type: Copy) {
    from configurations.downloadJacoco
    into "$buildDir/jacoco"

    // strip version number out of agent jar
    configurations.downloadJacoco.allDependencies.each {
        rename "-${it.version}", ""
    }
}

You can see from the above snippet, we add a new configuration called downloadJacoco. We then register the Jacoco agent as a dependency of this configuration. The copyJacocoAgent task, when executed, simply downloads and copies the agent JAR from Maven to the build/jacoco directory, while stripping out any version numbers in its name.
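
To make the rename concrete, here is a small sketch (the class name is hypothetical) of the string transformation: with agent version 0.8.5, the downloaded JAR becomes org.jacoco.agent-runtime.jar, the version-free name the docker-compose override file references.

```java
public class AgentRenameExample {

    // mirrors the Gradle rename: strip "-<version>" out of the agent JAR name
    public static String stripVersion(String jarName, String version) {
        return jarName.replace("-" + version, "");
    }

    public static void main(String[] args) {
        String renamed = stripVersion("org.jacoco.agent-0.8.5-runtime.jar", "0.8.5");
        System.out.println(renamed); // org.jacoco.agent-runtime.jar
    }
}
```

Keeping the path version-free means the docker-compose file never needs updating when you bump the Jacoco version.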

With the ability to download the Jacoco agent, the next thing we need to do is start the service with docker, but using the coverage override configuration.

task composeUpJacoco(type: Exec) {
    dependsOn buildDockerimage, copyJacocoAgent
    workingDir "$projectDir"
    environment << [SERVICE_TAG: "$serviceVersion", SERVICE_GROUP: "$serviceGroupId", SERVICE_NAME: "$serviceApplicationName", SPRING_PROFILE: "docker"]
    commandLine 'docker-compose', '-f', 'docker-compose.yml', '-f', 'docker-compose.jacoco.override.yml', 'up', '-d'
}

task componentTestCoverageDocker {
    dependsOn componentTestClasses, composeUpJacoco, waitForService
    doLast {
        componentTest.executeTests()
    }
    finalizedBy composeDown
}

With the configuration above, composeUpJacoco builds the image and downloads the Jacoco agent (via copyJacocoAgent), which, if you remember, we mount in the docker compose override file. The componentTestCoverageDocker task then uses this docker configuration to turn on the service, runs the component tests against the container, and then cleans up the environment.

note: as Jacoco only generates the full report on graceful shutdown of the JVM, you need to allow time for this during your compose-down call. This is why, if you look at the composeDown task, we provide a timeout of 1 minute.

After running componentTestCoverageDocker you should see, in the build/jacoco directory, a componentTest.exec report, which contains the coverage statistics of the service after all the component tests have run.

The final step is to now take this report, combine it with the unit test report, and generate a human readable output that lets us fully see the code coverage of the service.

// adding jacoco test reporting
jacoco {
    toolVersion = "$jacocoToolVersion"
}

task fullCoverageReport(type: JacocoReport) {
    dependsOn test, componentTestCoverageDocker
    executionData tasks.withType(Test)
    sourceSets sourceSets.main
    reports {
        html.enabled = true
        html.destination file("$buildDir/jacoco-reports")
    }
}

The snippet above is the final piece of the puzzle. It adds a fullCoverageReport task, which runs both the unit tests and the component tests. The task then looks for executionData (the exec files generated) from both test tasks. Finally, it filters the report to only include classes in the main source set, and generates the output as HTML (Figure 4).

./gradlew fullCoverageReport

Figure 4: the full coverage report generated by Jacoco. This combines the coverage of both the unit and component tests. You can click through to any class to see the exact line-by-line coverage.

The report is found in: build/jacoco-reports/index.html.

If you want to see a working, fully implemented example, you can go to my github.

Conclusion

At the end of this guide, assuming all went well, we now have the ability to:

  • reliably unit test the service.
  • reliably component test the service, using docker to orchestrate the running of the application and its dependencies.
  • collect code coverage from both the unit and component tests, generating a single unified report.

I hope you can see the benefit of prioritising component tests in a micro-service based environment, and that, with a small amount of upfront effort, you can make them just as quick and effective to work with as unit tests.

If you have any questions over this approach, feel free to reach out to me.
