I have a question around testing! Right now I'm in a somewhat unique situation: the software I'm writing sends data out to a third-party vendor for storage. Writing a test to confirm that the data is posted successfully would mean hitting a production endpoint... which I doubt is a smart way to do things.

In this situation, no one seems to have a good solution for ensuring the endpoint works other than asking the third party for a dummy/test setup. I was wondering if anyone has a better idea of how to handle this sort of interaction. How do you write an efficient contract test when you don't OWN a bit of your codebase?

Please and thank ya :D

Drew L. Facchiano
  • Your question is currently too broad. Can you show an example to better explain your situation and narrow the scope of what you are trying to ask? [mcve] – Nkosi Aug 08 '16 at 17:48
  • Sure! So, integration tests: I'm used to the idea of mocking things out, not using real data. That, however, is not applicable here, as we are testing an endpoint and we want to know that this endpoint returns "200 OK" given a certain payload. The DB is not one that we own, so setting up test data seems to be the only solution. It seems strange to me, and I was wondering if there is an agreed-upon alternative to "testing" other than automating a manual test. – Drew L. Facchiano Aug 10 '16 at 18:56
  • You could create a proxy: one that forwards the call to the live endpoint in production, but can be configured to return what you want for integration tests. – Nkosi Aug 10 '16 at 19:05
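The proxy idea from that comment could be sketched roughly like this. This is a minimal in-process stand-in (a real proxy would be a separate server process), and the routes and responses are made up for illustration; the forwarder is injected so the sketch runs without a network:

```python
# Configurable proxy sketch: serve canned responses for stubbed routes,
# forward everything else to the live endpoint via an injected callable.
class ConfigurableProxy:
    def __init__(self, forward):
        self.forward = forward   # callable that would hit the live endpoint
        self.stubs = {}          # (method, path) -> canned response

    def stub(self, method, path, response):
        self.stubs[(method, path)] = response

    def handle(self, method, path, body=None):
        if (method, path) in self.stubs:
            return self.stubs[(method, path)]
        return self.forward(method, path, body)

# In tests, stub the vendor route; unstubbed routes still forward.
proxy = ConfigurableProxy(forward=lambda m, p, b: {"status": 502})
proxy.stub("POST", "/records", {"status": 200, "body": {"stored": True}})

stubbed = proxy.handle("POST", "/records", {"id": 1})
forwarded = proxy.handle("GET", "/other")
```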

2 Answers


There are different alternatives to deal with this situation, depending on the level of collaboration you can get from the third party vendor:

Contract testing:

Pact is a fairly mature framework that lets you write unit tests that make HTTP requests against a mock of the third-party provider service; those interactions are persisted into a document (the Pact file) that can be shared with the vendor. If the vendor also uses Pact, they can use the Pact file to run the tests against the real provider service. The consumer (your service) documents the provider endpoints it uses along with the expected responses, and the provider validates those against itself, verifying the integration.

For this approach to work, your third party vendor has to be open to fetching your Pact file and running your consumer contract tests against their service. Contract testing allows your tests to run totally independently of the provider service, as your service and the provider's service are never connected together while testing.
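In spirit, a consumer-driven contract looks something like the sketch below. The interaction, endpoint, and payload shapes are invented for illustration (a real Pact file has a richer format); the point is that your tests run against a mock built from the contract, and the same document is what the vendor would verify against their real service:

```python
import json

# Hypothetical consumer-side contract, in the spirit of a Pact file.
CONTRACT = {
    "consumer": "my-service",
    "provider": "vendor-api",
    "interactions": [
        {
            "description": "store a data record",
            "request": {"method": "POST", "path": "/records",
                        "body": {"id": 1, "value": "abc"}},
            "response": {"status": 200, "body": {"stored": True}},
        }
    ],
}

def mock_provider(method, path, body):
    """Serve responses straight from the contract, so consumer tests
    never touch the real vendor service."""
    for interaction in CONTRACT["interactions"]:
        req = interaction["request"]
        if (req["method"], req["path"], req["body"]) == (method, path, body):
            return interaction["response"]
    return {"status": 404, "body": None}

# Consumer test: our client code exercises the mock provider.
resp = mock_provider("POST", "/records", {"id": 1, "value": "abc"})
assert resp["status"] == 200

# The serialized contract is what you would hand to the vendor,
# who replays each interaction against their real implementation.
pact_file = json.dumps(CONTRACT, indent=2)
```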

Record and replay:

The idea behind this approach is to write tests that, in their initial run, make requests against real services and record their responses. The next runs of these tests will not reach real services but instead operate against the recorded responses from the first run. VCR is a good example of a library that enables this kind of testing.

For this approach to work, you don't need any cooperation from the third party vendor. You would make requests to the real service from time to time (to keep your sample responses fresh), which will be subject to the availability of the provider service.

Test environment:

As you mentioned in your question, asking the provider for a test environment/account is also a possibility. With such a resource, you could write end-to-end tests that reach a realistic provider service, and you would have access to the environment itself to make assertions about its state as part of your tests.

The challenge with this approach is around the maintenance of this test environment: how can you be sure its version is the same as the one you are integrating against in production? Who looks after this environment's availability? Who creates data in this environment? Is this data realistic and representative of the real universe?

Semantic monitoring:

A final option would be to write a test that makes a sanity check of the integration between your service and the provider's in the production environment. This test could run after every deployment on your end, or even on a regular basis outside of deployment windows.

For this approach to work you don't need any collaboration from the third party vendor, but this alternative doesn't scale very well if you have a lot of integration use cases, as these tests tend to be slow to run, flaky (since they depend on real networks and system availability) and pollute real environments. It's better to keep these tests at the very top of the testing pyramid, focused on very critical use cases. Additionally, you won't normally be able to test anything beyond happy paths, as you don't have control over the provider service to put it in any specific state beyond the "normal" one.
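A production sanity check of this kind can stay very small. In the sketch below the endpoint URL and payload are placeholders, and the HTTP transport is injected as a callable so the check itself can be exercised without a network; the commented-out wiring shows roughly how it might attach to a real client:

```python
# Hypothetical semantic-monitoring smoke check for the vendor integration.
def smoke_check(post, url="https://vendor.example/records"):
    """Return True if a minimal, harmless request round-trips with 200."""
    try:
        status = post(url, {"ping": True})
    except Exception:
        return False           # network error counts as a failed check
    return status == 200

# Wiring to a real HTTP client might look like (illustrative only):
#   import requests
#   ok = smoke_check(lambda u, b: requests.post(u, json=b, timeout=5).status_code)

healthy = smoke_check(lambda url, body: 200)
degraded = smoke_check(lambda url, body: 503)
```

Run after every deployment (or on a schedule), a failing check tells you the integration is broken before your users do.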

Mauri Edo

Short answer to your question

How do you write an efficient contract test when you don't OWN a bit of your codebase?

It is not possible without cooperation from the third party. Contract testing requires both parties to adhere to the contract: the consumer by leveraging the contract as an API mock (or vice versa, leveraging an API mock to create the contract), and the provider by running the contract as a test.

However, with third party vendors, a possible option is to request that they share an API specification, such as an OpenAPI document, for their service, which you can leverage to mock their API. In a way, you are treating the API spec itself as the contract, one to which they may already be adhering.

There are tools that can help you mock the third party API if you have their OpenAPI Specification.

However, the effectiveness of this approach in identifying issues depends, among other things, on the third party vendor's API specification being a reliable representation of their actual implementation.
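The core of spec-driven mocking is just "look up the documented example response for a route and serve it". A minimal sketch, using a made-up OpenAPI fragment (real tools handle schemas, parameters, and content negotiation far more thoroughly):

```python
# Illustrative OpenAPI fragment; not any real vendor's API.
SPEC = {
    "paths": {
        "/records": {
            "post": {
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {"example": {"stored": True}}
                        }
                    }
                }
            }
        }
    }
}

def mock_from_spec(spec, method, path):
    """Return (status, body) from the spec's documented example response."""
    op = spec["paths"].get(path, {}).get(method.lower())
    if op is None:
        return 404, None                     # route not in the spec
    status = sorted(op["responses"])[0]      # first documented status code
    example = op["responses"][status]["content"]["application/json"]["example"]
    return int(status), example

status, body = mock_from_spec(SPEC, "POST", "/records")
```

Your integration tests then point at this mock instead of the production endpoint; if the vendor keeps the spec accurate, the mock stays representative.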

HariKrishnan