You’ve crafted elegant unit tests and smoke tests for your Infrahub automation, but are you truly confident that everything will work seamlessly in a real-world environment? Don’t leave it to chance! This comprehensive guide will equip you with the knowledge and tools to master integration testing in Infrahub, ensuring your schema, data, and Git workflows are rock-solid before they ever touch production. We’ll show you how to spin up isolated Infrahub instances on the fly, automate schema and data loading, and catch those sneaky integration issues that unit tests can miss. Get ready to elevate your Infrahub automation to the next level!

Why Integration Testing Matters in Infrahub

Testing individual components in isolation is a good start, but integration testing is where the rubber meets the road. It validates how multiple components of your Infrahub system work together as expected, mimicking real-world conditions. This type of testing catches issues that only surface when all the pieces are connected, such as:

  • Broken Interactions: Unexpected errors when different parts of your Infrahub setup communicate.
  • Incorrect Assumptions: Mismatched expectations between components, leading to data inconsistencies.
  • Real-World Data Issues: Problems that only arise when using actual data in a running Infrahub instance.

Skipping integration testing is like building a race car without ever testing how the engine, transmission, and wheels work together. You might have excellent individual components, but the final result could be a spectacular failure on the track!

Integration Testing in Infrahub: The High-Level View

Integration tests in Infrahub involve verifying that your resources (schemas, data, Git repositories) function as expected when interacting with a live Infrahub instance. But we’re not talking about your precious production instance! Instead, we’ll leverage the power of Testcontainers and Docker to spin up a temporary, fully functional Infrahub environment specifically for testing.

Think of it this way: each integration test gets its own pristine Infrahub sandbox, guaranteeing isolation and repeatability. After the test is complete, the sandbox is automatically destroyed, leaving no mess behind.


This approach allows you to confidently test changes to your Infrahub configuration – such as schema modifications – and ensure they play nicely with your existing data before deploying them to production.

Setting the Stage: Prerequisites and Setup

Before we dive into the code, make sure you have the following in place:

  • Basic Python and pytest Knowledge: Familiarity with Python syntax and the pytest testing framework will be helpful (but not strictly required).
  • Docker: Docker installed and running on your system.
  • Python Environment: A Python 3.9+ virtual environment (recent versions of the infrahub-sdk require Python 3.9 or later).

Here’s how to set up your environment:

  1. Clone the DevNet Git Repository (Example): We’ll use the OpsMill DevNet repository as an example:

     git clone https://github.com/opsmill/devnet-live-2025.git
     cd devnet-live-2025

  2. Create a Virtual Environment:

     python3 -m venv venv
     source venv/bin/activate    # On Linux/macOS
     venv\Scripts\activate       # On Windows

  3. Install Dependencies (the quotes around the extras specifier keep shells like zsh from expanding the brackets):

     pip install 'infrahub-sdk[all]' infrahub-testcontainers pytest-asyncio

  4. Configure infrahubctl (Optional, for Local Testing): If you want to interact with a local Infrahub instance, set these environment variables:

     export INFRAHUB_USERNAME=admin
     export INFRAHUB_PASSWORD=infrahub
     export INFRAHUB_ADDRESS=http://localhost:8000
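The data-loading and repository tests later in this guide are written as async test methods, so pytest-asyncio usually needs its asyncio mode configured. A minimal pyproject.toml fragment that does this might look like the following (the testpaths value assumes you keep tests under tests/; adjust to your project layout):

```toml
# pyproject.toml (fragment) -- lets pytest-asyncio pick up async test methods automatically
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```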

Introducing Testcontainers: Your Secret Weapon for Integration Testing

Testcontainers is an open-source library that dramatically simplifies integration testing by allowing you to spin up lightweight, disposable instances of real services within Docker containers.

It provides simple APIs for bootstrapping dependencies like databases, message brokers, web browsers, or any service that can run in a container. This means you can use the same services in your tests that you rely on in production, without the complexity of managing real infrastructure.

Testcontainers automatically handles the creation and cleanup of containers, ensuring a clean and repeatable test environment every time.
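To make that lifecycle concrete, here is a rough stdlib-only sketch of what such a library guarantees for you (the docker CLI is assumed to be on PATH; the class and function names are illustrative, not Testcontainers’ actual API):

```python
import subprocess


def start_container(image: str) -> str:
    """Start a disposable container with all exposed ports published; return its id."""
    result = subprocess.run(
        ["docker", "run", "--rm", "-d", "-P", image],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()


def stop_container(container_id: str) -> None:
    """Force-remove the container; --rm ensures its filesystem is cleaned up too."""
    subprocess.run(["docker", "rm", "-f", container_id],
                   check=True, capture_output=True)


class DisposableContainer:
    """Context manager mirroring the start-before / clean-up-after guarantee
    that Testcontainers provides: the container exists only inside the block."""

    def __init__(self, image: str) -> None:
        self.image = image
        self.container_id = None

    def __enter__(self) -> "DisposableContainer":
        self.container_id = start_container(self.image)
        return self

    def __exit__(self, *exc) -> None:
        if self.container_id:
            stop_container(self.container_id)
```

Testcontainers layers much more on top of this (port mapping helpers, readiness waits, log streaming), but the core contract is exactly this scoped start/cleanup.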

infrahub-testcontainers: Simplifying Infrahub Testing

To further streamline integration testing with Infrahub, OpsMill provides the infrahub-testcontainers Python library. Built on top of the popular testcontainers library, it makes it incredibly easy to launch temporary Infrahub instances for testing. This library handles the setup and teardown of the container automatically, allowing you to focus on writing your tests.

Writing Your First Infrahub Integration Test

Let’s create a basic integration test to verify that we can load a schema into a temporary Infrahub instance.

  1. Create the Test File: Create a file named test_end2end.py in the tests/integration directory of your project.
  2. Define conftest.py: Create a conftest.py file in the tests directory to define shared fixtures:

     # tests/conftest.py
     import pytest
     from pathlib import Path


     @pytest.fixture(scope="session")
     def root_dir() -> Path:
         return Path(__file__).parent / ".."


     @pytest.fixture(scope="session")
     def schema_dir(root_dir) -> Path:
         return root_dir / "schemas"


     @pytest.fixture(scope="session")
     def data_dir(root_dir) -> Path:
         return root_dir / "data"
  3. Write the Integration Test (test_end2end.py):

     # tests/integration/test_end2end.py
     import logging
     from pathlib import Path

     import pytest

     from infrahub_sdk.client import InfrahubClient, InfrahubClientSync
     from infrahub_sdk.testing.docker import TestInfrahubDockerClient
     from infrahub_sdk.yaml import SchemaFile

     logger = logging.getLogger(__name__)


     class TestServiceCatalog(TestInfrahubDockerClient):
         @pytest.fixture(scope="class")
         def schema_definition(self, schema_dir: Path) -> list[SchemaFile]:
             return SchemaFile.load_from_disk(paths=[schema_dir])

         def test_schema_load(
             self, client_sync: InfrahubClientSync, schema_definition: list[SchemaFile]
         ):
             """
             Load the schema from the schema directory into the infrahub instance.
             """
             logger.info("Starting test: test_schema_load")
             client_sync.schema.load(schemas=[item.content for item in schema_definition])
             # Add an assertion to check if schema loaded successfully (replace with actual check)
             assert True  # Replace with a real assertion

Let’s break down this code:

  • TestServiceCatalog(TestInfrahubDockerClient): This class inherits from TestInfrahubDockerClient, which automatically manages the temporary Infrahub Docker container.
  • schema_definition fixture: This fixture loads the schema files from your schemas directory.
  • test_schema_load method: This test method loads the schema into the running Infrahub instance using the client_sync (synchronous client) and then asserts that the schema was loaded successfully.

Running Your Integration Test

To run the test, use the following command in your terminal:

pytest -v tests/integration

pytest will automatically discover and execute your test. You’ll see output indicating whether the test passed or failed.

Adding More Robust Tests: Data Loading and Git Repository Validation

Let’s expand our integration test to include data loading and Git repository validation:

# tests/integration/test_end2end.py (Continued)
    from infrahub_sdk.spec.object import ObjectFile
    from infrahub_sdk.testing.repository import GitRepo
    from infrahub_sdk.protocols import CoreGenericRepository

    class TestServiceCatalog(TestInfrahubDockerClient):
        # ... (Previous code remains here) ...

        async def test_data_load(self, client: InfrahubClient, data_dir: Path):
            """
            Load the data from the data directory into the infrahub instance.
            """
            logger.info("Starting test: test_data_load")

            # Assuming your schema is already loaded (e.g., from test_schema_load)

            object_files = sorted(ObjectFile.load_from_disk(paths=[data_dir]), key=lambda x: x.location)

            for idx, file in enumerate(object_files):
                file.validate_content()
                schema = await client.schema.get(kind=file.spec.kind)
                for item in file.spec.data:
                    await file.spec.create_node(
                        client=client, position=[idx], schema=schema, data=item
                    )

            countries = await client.all(kind="LocationCountry")
            assert len(countries) == 3

        async def test_add_repository(
            self, client: InfrahubClient, root_dir: Path, tmp_path: Path
        ) -> None:
            """
            Add the local directory as a repository in the infrahub instance in order to validate the import of the repository
            and have the generator operational in infrahub.
            """
            remote_repos_dir = tmp_path / "remote_repos"
            remote_repos_dir.mkdir()

            repo = GitRepo(name="devnet-live-2025", src_directory=root_dir, dst_directory=remote_repos_dir)
            await repo.add_to_infrahub(client=client)
            in_sync = await repo.wait_for_sync_to_complete(client=client)
            assert in_sync

            repos = await client.all(kind=CoreGenericRepository)
            assert repos

Key improvements:

  • test_data_load method: This test loads sample objects into the live instance and verifies that the expected number of objects (countries) exists after loading.
  • test_add_repository method: This test adds the local project directory as a Git repository inside Infrahub and checks that it’s in sync.
  • tmp_path fixture: pytest’s built-in temporary-directory fixture gives the test an isolated location to stage the repository, so no leftover files interfere between test runs.
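For reference, the object files consumed by test_data_load are declarative YAML documents. A minimal, illustrative example follows; the exact attribute names must match your schema, and this sketch assumes a LocationCountry node with name and shortname attributes (three entries, matching the assertion in the test):

```yaml
# data/countries.yml (illustrative example)
---
apiVersion: infrahub.app/v1
kind: Object
spec:
  kind: LocationCountry
  data:
    - name: Netherlands
      shortname: NL
    - name: Germany
      shortname: DE
    - name: France
      shortname: FR
```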

Seeing a Test Fail: The Power of Integration Testing

To illustrate the value of integration testing, let’s intentionally introduce an error into our schema. Suppose we rename the shortname attribute to shortnames in one of our schema files.

This seemingly small change could have significant consequences. Because our data files still use the old attribute name (shortname), the data loading process will fail. As a result, when the test_data_load test checks the number of countries, it won’t find any, and the test will fail.

This failure highlights the importance of integration testing in catching issues that unit tests might miss.
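To make the failure mode tangible, here is a simplified schema fragment of the kind the rename would affect (node and attribute names are illustrative, not taken from the example repository):

```yaml
# schemas/locations.yml (illustrative fragment)
---
version: "1.0"
nodes:
  - name: Country
    namespace: Location
    attributes:
      - name: shortname   # renaming this to "shortnames" invalidates every data
        kind: Text        # file that still sets a "shortname" value
      - name: name
        kind: Text
```

A unit test that only parses this file in isolation would still pass after the rename; only a test that loads the schema and the data against a live instance catches the mismatch.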

Why Mount a Git Repository During Testing?

The test_add_repository test demonstrates another powerful feature: mounting our local Git repository directly into the test Infrahub instance. This allows us to skip the usual steps of setting up a remote repository, pushing changes, and configuring access. This makes testing changes to the repository much faster and more convenient during development.

Conclusion

We’ve explored how integration tests work in Infrahub and how to leverage infrahub-testcontainers to spin up temporary test instances. By loading schemas, adding sample data, and registering local Git repositories, you can gain much greater confidence in your automation workflows. If you’ve already got smoke and unit tests in place, adding integration tests is a logical next step. Integration testing provides invaluable insight into the reliability and effectiveness of your automation processes, ensuring that your Infrahub environment is ready to meet the demands of production. Embrace the power of integration testing and unlock the full potential of your Infrahub automation!

Did you enjoy this article? Feel free to share it on social media and subscribe to our newsletter so you never miss a post!

And if you'd like to go a step further in supporting us, you can treat us to a virtual coffee ☕️. Thank you for your support ❤️!
