Integrating in an API-first world
A high-level introduction to how integration testing changes and evolves with the ever-increasing complexity of a services- and API-driven software development landscape, and how you can deal with it
If you are responsible for shipping software that is built from multiple components, you want to ensure that all these pieces work as expected when a user relies on them in real life. In any such distributed systems context, the complexity and challenge of testing your stack grow rapidly with the number of components that make it up.
There can be multiple databases, multiple services talking to each other, third-party APIs that your applications integrate with, and so on. If you are writing a monolith, if your software stack consists of only one component, or if you are at a very early stage of your startup, these concerns may not apply to you.
A single server, connecting to a single database, with maybe a single frontend, can be tested using the age-old wisdom of the Testing Pyramid. But times change, practices evolve, customer specifications get more convoluted and users expect richer experiences. If we look at the biggest trends in today's software development landscape, it is full of services and apps having elaborate conversations with each other.
The new world order, as a service
In a recent article, Joyce talked about how software development is evolving in an API-first world. Her article discusses how software is built today, and how it can be built, by designing APIs efficiently and treating each component as an extensible interface that others can build on top of. If you are like me and have been trying to get a clearer view, that article will shine a big halogen lamp on this whole landscape.
Building on what Joyce said about designing APIs in a sustainable way, how can you then establish practices for testing your software suites, particularly integration testing, where you are working with distributed systems?
You could be developing your software using any of the popular architectural patterns, like microservices, serverless, or a hybrid of these approaches. Your end goal is to guarantee that all these pieces function together as expected when they are put in production. This is where integration testing comes in.
It is all about knowing that all the moving pieces that make up the product in question play well with each other before moving to production. It is teamwork: you want to ensure every member of the stack is a top-notch performer AND a team player.
A checklist for establishing best practices
There are many well-known approaches to integration testing for services that have been discussed at length by experts in the field of software testing. A team of 2 people will face issues very different from those of a team of 20 working on a product feature. But the overarching challenges share some common patterns.
Here is a collated checklist that you can use when choosing your APIs’ integration testing strategy to avoid some common pitfalls:
- Do not replicate the whole services stack on developers' local machines. Use mocks and functional tests wherever possible (the mock sketch after this list shows a minimal example of substituting a dependent service). If you are a small team or just getting started with microservices, you may feel the constant urge to test everything together, like building a Vagrant image that pulls all services together and runs them on a dev machine. This does not scale with the growing complexity of your stack. You will end up, as Cindy Sridharan put it in the article “Testing Microservices, the sane way”, with a “full stack in a box”.
- Create a single source of truth for all your API documentation, or a single workflow that every team follows to publish API documentation for the consumers of that API. Your consumers could be other teams writing microservices, teams building a frontend, or even developers external to your company. In any case, you and your APIs’ consumers should not fight over who was right when there is a mismatch between documentation and behaviour.
- Do not ignore that I/O rates differ between systems. If your testing setup does not take this into account, you may hit unexpected errors and race conditions when you test in production. You want disk and network I/O to be part of the equation, which brings you closer to production conditions, but remember that you are still only asserting against simulations of your production infrastructure.
- Ensure your services do not throw unhandled errors. Handle all errors and log them properly. There should be no unhandled errors spewing out attack-worthy details, not in your test environment and definitely not in production (the error-handling sketch after this list shows one way to sanitize errors at the API boundary). See Amber Race's talk on API testing from the last POST/CON for more ideas on this.
- Chart out service dependencies. You will have to plan how the systems under test interact with each other. Based on which service talks to which, you can generate a dependency chain. Every service that returns data should provide an API for other services to connect to. With clearly defined API surface areas and a charted dependency chain, you can then mock each service. Unit tests in this case upgrade from testing each function in isolation to testing against mocks of the dependent services.
- Test your service contracts. Include contract testing as part of the integration tests of your microservices. Ideally, service mocks should be generated from the contracts and kept up to date, so you can enforce that every service provides an updated contract for its consumers to use (the contract-validation sketch after this list checks a response against such a contract).
- Do not be a blocker for other teams. In a world where you want to reach the market as early as possible, you do not want your frontend teams waiting until the backend team delivers the completed API. If you take enough care to keep the API descriptions and mocks of each service up to date, you remove many bottlenecks for other teams.
- Transition away from manual testing towards automated testing as much as possible. Integration testing suites can be large. Look into setting up continuous integration pipelines with test automation so that all checked-in code goes through these tests. You then free up your QA resources for areas where human intelligence is better used, like exploratory testing.
- Do not spend too much time creating an elaborate setup that becomes too hard to replicate and, in turn, makes onboarding new members a terrible experience. If you are a small company, ensure that you are not spreading your developer resources too thin.
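To make the mocking advice more concrete, here is a minimal TypeScript sketch. Everything in it is hypothetical: the OrderService, the InventoryClient interface and the 50 ms delay are invented for illustration. The point is that when the service under test depends on an interface rather than on a running neighbour, a test can substitute an in-memory mock, and an artificial delay makes network I/O timing at least part of the equation instead of being instantaneous.

```typescript
// Hypothetical interface for a downstream service the code under test depends on.
interface InventoryClient {
  reserve(sku: string, quantity: number): Promise<boolean>;
}

// The service under test only knows about the interface, not the real HTTP client.
class OrderService {
  constructor(private inventory: InventoryClient) {}

  async placeOrder(sku: string, quantity: number): Promise<string> {
    const reserved = await this.inventory.reserve(sku, quantity);
    return reserved ? "CONFIRMED" : "REJECTED";
  }
}

// An in-memory mock with an artificial delay, so network I/O timing is at least
// approximated rather than instantaneous.
const mockInventory: InventoryClient = {
  async reserve(sku, quantity) {
    await new Promise((resolve) => setTimeout(resolve, 50)); // simulated latency
    return quantity <= 10; // pretend only 10 units are in stock
  },
};

async function run() {
  const service = new OrderService(mockInventory);
  console.assert((await service.placeOrder("sku-123", 2)) === "CONFIRMED");
  console.assert((await service.placeOrder("sku-123", 999)) === "REJECTED");
}

run();
```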
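For the error handling item, a common pattern is to log the full error internally while returning only a generic message to the caller, so stack traces and connection strings never leak through the API. The sketch below uses a hypothetical Express route; it assumes express and its type definitions are installed and is not specific to any particular framework.

```typescript
import express, { NextFunction, Request, Response } from "express";

const app = express();

app.get("/orders/:id", (_req, _res) => {
  // Imagine a lookup that can throw, e.g. a database call.
  throw new Error("connection refused at db.internal:5432"); // attack-worthy detail
});

// Central error handler: log the full error internally, return a sanitized response.
app.use((err: Error, req: Request, res: Response, _next: NextFunction) => {
  console.error(`[${req.method} ${req.path}]`, err); // goes to your logs, not the client
  res.status(500).json({ error: "Internal server error" });
});

app.listen(3000);
```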
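And for the contract testing item, a small sketch of the idea: validate whatever a service (or its mock) returns against the published contract and fail the run when they drift apart. This assumes the ajv JSON Schema validator is installed; the schema and the sample response are invented for illustration.

```typescript
import Ajv from "ajv";

// A fragment of a published contract: what /users/{id} promises to return.
const userSchema = {
  type: "object",
  required: ["id", "name", "email"],
  properties: {
    id: { type: "integer" },
    name: { type: "string" },
    email: { type: "string" },
  },
};

// In a real test this would come from the running service or its mock.
const response = { id: 42, name: "Ada", email: "ada@example.com" };

const ajv = new Ajv();
const validate = ajv.compile(userSchema);

if (!validate(response)) {
  // Surface exactly which part of the contract was broken, then fail the test run.
  console.error(validate.errors);
  process.exit(1);
}
console.log("Response matches the contract");
```

In a CI pipeline, the same check can run against both the mock and the deployed service, which keeps the contract, the mock and the implementation honest with each other.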
This checklist makes you look at API development from a more humane point of view. It is not only about services interacting with each other, but also about the people building those services. Once you include humans in this equation, you end up choosing workflows and toolchains that suit you and your team's needs and preferences. You would then want to couple this with other strategies, like a solid approach to testing in production or a staging area before you push to production. In the end, not only do your software components integrate better, your teams also collaborate in a clearer, better-defined way.
Integration testing for microservices can be simpler
As a Developer Advocate who is still less than a month old at Postman, I have found that Postman’s core philosophy of making things simple resonates strongly with my personal goal of making complexities easier to understand and adopt. While I understand that distributed systems are chaotic by nature and that canonical approaches to integration testing fail when applied to them, testing them does not have to be difficult. Given the right tools and the right workflow, any such difficult problem can be broken down into a series of smaller, solvable tasks.
In that spirit, the Developer Relations team at Postman is working on recipes that show how such workflows can be simplified, even while integrating with your existing stack. We will have recipes for you whether you are a designer, a developer, a test automation engineer, a QA engineer doing manual testing, or a manager handling teams of such people. We will keep sharing those recipes under this publication, so watch out for more articles here soon!
If you are already using Postman in a similar scenario, do write about it in the comments or send us a tweet at @postmanclient. We always love to hear from you.