Testing complex mono projects

Vadym Barylo · Published in Level Up Coding · 7 min read · Jul 8, 2022


You cannot mandate productivity, you must provide the tools to let people become their best.

Steve Jobs

Monorepo is a great solution for managing independent, standalone components of complex projects inside a single codebase.

This dramatically reduces the time needed to propagate code changes between layers and independent components, which has a positive effect on developer productivity and reduces the pain of managing cross-dependencies.

But adopting this approach only as a code-structuring pattern is like proving once again the quote “If my only tool is a hammer, then every problem is a nail”.

Collecting single-purpose projects and components into a single manageable workspace gives additional benefits: more comprehensive and robust verification of all parts of the workspace, as well as of the full workspace as a complete solution.

A very common example of single-purpose components and projects living in one workspace: a REST API service application composed of company-standardized common libraries, and a client SPA that consumes this REST API and shares common elements with it, such as transfer objects and authentication/security components.

Multi-projects solution

Having all individual components and the main consumers of these components (the APP and API projects) in the same monorepo is an example of efficient code structuring. Each individual component can have its own personalized build context to produce self-sufficient artifacts for later use.
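For example, each NX project carries its own build context in its project.json. A minimal sketch of apps/api/project.json (executor names vary between NX versions, so treat this as illustrative):

```json
{
  "root": "apps/api",
  "sourceRoot": "apps/api/src",
  "targets": {
    "build": {
      "executor": "@nrwl/node:webpack",
      "options": { "outputPath": "dist/apps/api" }
    },
    "test": {
      "executor": "@nrwl/jest:jest",
      "options": { "jestConfig": "apps/api/jest.config.ts" }
    }
  }
}
```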

But what about the global build context? With only individual components tested, can we ensure that the overall flow (pressing a button in the UI produces the proper grid data after reload) also works as expected?

Keeping the root projects as part of a single workspace gives much more power in verifying full E2E behaviors, as we have all the pieces logically connected together to initiate user clicks, process API requests, and bind responses to the user form.

So maybe we can also consider a monorepo not only as a code structure pattern but also as a context of logically connected components that serve a single intent? And the “global build context” as a way to manage and manipulate all projects and modules as a complete business unit?

Multi-project solution with targets

Even with a full set of automation in place for all dependent components in this tree, we are still struggling with the complete end-to-end flow: the API project needs database access to store its data, and the APP project needs a working API server to consume.

And if all the required pieces are mocked during this check, it doesn’t give us full confidence about the overall quality state.

So the next step in adopting the monorepo pattern is not only to enforce solution structure for better code re-use, but also to ensure that all components and root projects can be managed and verified as a single platform that covers full end-to-end user scenarios.

Hands-on experience

Given: a REST API service and an SPA project (a consumer of this API) are part of a single monorepo. The demo solution can be found on GitHub.

Requirements: Ensure a complete E2E check (from browser to database) is part of CI verification and doesn’t require separate deployments as test prerequisites.

Toolset:

  • NX — monorepo build framework that orchestrates builds, tests, and task dependencies
  • NestJS — REST API service framework
  • Angular — front-end SPA framework
  • Jest — test runner
  • testcontainers — on-demand Docker containers for tests
  • Docker/PostgreSQL — runtime dependencies for the E2E flow

Design takeaways:

  • each application in the monorepo has its own build context, complete enough to ensure application quality as a fully independent quantum; this build context is also sufficient to generate a deployable artifact.
  • there is an aggregated common build context whose main purpose is to check the end-to-end flow for all projects connected together in this monorepo; it is likewise complete and sufficient to ensure solution quality without any mocking strategies applied.

Implementation notes:

The NX framework allows managing many connected pieces using a single build flow; all we need is to properly define the dependency order and the task execution order. We have 3 projects in the solution (a rough workspace layout follows the list):

  • API — a NestJS REST API service that exposes a users endpoint to load and store user payloads in the Postgres database
  • APP — an Angular front-end project that consumes the REST API and renders the stored users
  • APP-E2E — a test project to verify all connected pieces as a complete unit
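Assuming the standard NX layout, the workspace looks roughly like this (a sketch; the demo repo’s exact layout may differ):

```
apps/
  api/        # NestJS REST API service
  app/        # Angular front-end
  app-e2e/    # end-to-end test project
libs/         # shared libraries (transfer objects, auth, etc.)
nx.json
package.json
```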

Let us keep the API and APP projects independent and make APP-E2E dependent on these two:
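In NX terms this can be declared with implicitDependencies in apps/app-e2e/project.json (a sketch; the demo repo may express it slightly differently):

```json
{
  "root": "apps/app-e2e",
  "implicitDependencies": ["api", "app"]
}
```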

Also, the complete E2E flow will require build artifacts to be ready before the tests start, so let’s put that as an explicit requirement for the test target as well:
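A sketch of the test target in apps/app-e2e/project.json; dependsOn with "^build" tells NX to run the build target of all dependency projects first (the exact syntax varies between NX versions):

```json
{
  "targets": {
    "test": {
      "executor": "@nrwl/jest:jest",
      "dependsOn": ["^build"],
      "options": { "jestConfig": "apps/app-e2e/jest.config.ts" }
    }
  }
}
```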

Once ready, run the npm test command to see the execution sequence:
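Here npm test is assumed to delegate to NX, for example via a script like this in the root package.json (the demo repo’s actual script may differ):

```json
{
  "scripts": {
    "test": "nx run-many --target=test --all"
  }
}
```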

Execution sequence

As we can see — the dependency projects (API and APP) are tested first. If all is good, the build artifacts are prepared and the complete E2E flow verification is performed. All is good for now.

The next step is to describe the basic verification scenario: store users in a real database and show them on the users page. And most importantly — it should be a complete E2E test (from the API call down to storage in the database).

For this purpose, we can use testcontainers — a utility that helps provision Docker containers as part of a single test execution. We can do it as a test pre-initialization phase in Jest:

apps/app-e2e/jest.config.ts
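The key part of this config is the globalSetup/globalTeardown pair; a minimal sketch (the globalTeardown file is my assumption, added to stop the containers after the run):

```ts
// apps/app-e2e/jest.config.ts
export default {
  displayName: 'app-e2e',
  preset: '../../jest.preset.js',
  testEnvironment: 'node',
  // provision the Docker containers once, before the whole test run
  globalSetup: '<rootDir>/tests/globalSetup.ts',
  // stop the containers after all tests have finished
  globalTeardown: '<rootDir>/tests/globalTeardown.ts',
  testMatch: ['**/*.e2e-spec.ts'],
};
```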

and the setup to provision the required components:

apps/app-e2e/tests/globalSetup.ts
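A sketch of such a setup built on the testcontainers API. The environment variable names (DB_HOST, API_URL, etc.) are assumptions, and method names follow recent testcontainers-node releases (older versions use withEnv()/withNetworkMode() instead of withEnvironment()/withNetwork()):

```ts
// apps/app-e2e/tests/globalSetup.ts
import { GenericContainer, Network } from 'testcontainers';

export default async () => {
  // shared network so the API container can reach Postgres by alias
  const network = await new Network().start();

  // 1. start the PostgreSQL container using the official "postgres" image
  const postgres = await new GenericContainer('postgres')
    .withEnvironment({
      POSTGRES_USER: 'test',
      POSTGRES_PASSWORD: 'test',
      POSTGRES_DB: 'users',
    })
    .withNetwork(network)
    .withNetworkAliases('postgres')
    .withExposedPorts(5432)
    .start();

  // 2. build the API image from the local Dockerfile and start it,
  //    promoting the Postgres access info through environment variables
  const api = await (await GenericContainer.fromDockerfile('apps/api').build())
    .withEnvironment({
      DB_HOST: 'postgres',
      DB_PORT: '5432',
      DB_USER: 'test',
      DB_PASSWORD: 'test',
      DB_NAME: 'users',
    })
    .withNetwork(network)
    .withNetworkAliases('api')
    .withExposedPorts(3000)
    .start();

  // expose the mapped API address to the tests and keep the container
  // handles around so globalTeardown can stop them
  process.env.API_URL = `http://${api.getHost()}:${api.getMappedPort(3000)}`;
  (globalThis as any).__CONTAINERS__ = [api, postgres, network];
};
```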

Taking a closer look, we can see that the boot script:

  • starts the PostgreSQL container using the official “postgres” image
  • builds the “ApiContainer” from the local Dockerfile and then starts it
  • promotes the Postgres access info to the API service container through environment variables

The final step is to cover the scenario by managing users through the E2E test:
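A sketch of such a test, assuming the API exposes a /api/users endpoint and using axios as the HTTP client (the endpoint path and payload shape are assumptions, not the demo repo’s exact contract):

```ts
// apps/app-e2e/tests/users.e2e-spec.ts — a sketch of the full-stack check
import axios from 'axios';

describe('users end-to-end', () => {
  // the service is already running as a container; configure the request object
  const api = axios.create({ baseURL: process.env.API_URL });

  it('stores users and reads them back from Postgres', async () => {
    // store a couple of users through the API and check the operations succeed
    const alice = await api.post('/api/users', { name: 'Alice' });
    const bob = await api.post('/api/users', { name: 'Bob' });
    expect(alice.status).toBe(201);
    expect(bob.status).toBe(201);

    // read all users back: they come from the live Postgres behind the API
    const all = await api.get('/api/users');
    expect(all.status).toBe(200);
    expect(all.data).toEqual(
      expect.arrayContaining([
        expect.objectContaining({ name: 'Alice' }),
        expect.objectContaining({ name: 'Bob' }),
      ])
    );
  });
});
```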

The test is self-explanatory:

  • the service is started as a container, and we configure the request object against it
  • we store a couple of users by calling the API endpoints, then check that the operations succeeded
  • finally, we check that all stored users can be requested back from the persistent storage (Postgres)

The next step is to cover the UI part and check that the stored users are rendered successfully — this includes the following verifications under the hood (see the sketch after the list):

  • the APP service is up and running as a Docker container instance
  • the APP service is properly configured to access the API service (which is also started as a Docker container instance)
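Extending the globalSetup sketch above, the APP container can be provisioned the same way. The Dockerfile path and the API_URL variable are assumptions here, and the SPA is assumed to read the API address from its container environment (e.g. via nginx config templating):

```ts
// apps/app-e2e/tests/globalSetup.ts (continuation of the sketch above)
const app = await (await GenericContainer.fromDockerfile('apps/app').build())
  .withNetwork(network)
  // the browser runs on the host, so point the SPA at the host-mapped API URL
  .withEnvironment({ API_URL: process.env.API_URL! })
  .withExposedPorts(80)
  .start();

// the E2E test (or a human) can now open the app on the mapped port
process.env.APP_URL = `http://${app.getHost()}:${app.getMappedPort(80)}`;
```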

Let's run the test and check the result by directly navigating to the URL of the provisioned Docker instance (e.g. http://localhost:49164/ in my case).

We can see the following picture: after the test execution, all users are displayed on the page. These users were requested from the live API service and stored in the live Postgres database. And all of this was checked in complete isolation as part of a test run, without physical promotion to live environments.

This solution helps to perform much more robust and complete verification before even deploying services into the cluster, using as many live dependencies as we need during test runs.
