Leveraging mocks for automated E2E testing

Published on March 27, 2025

Imagine you run your end-to-end (E2E) tests and find them failing because an external API is slow, unstable, or entirely unavailable. Frustrating, right?

This is where mocks come into play — turning unpredictable dependencies into controlled elements of your test infrastructure.

Understanding mocks

Mocks are simulated services or responses that replace actual external dependencies during testing. They help teams:

  • Reduce flakiness caused by unstable external resources.
  • Accelerate test execution, improving overall efficiency.
  • Validate scenarios that are challenging or impossible to reproduce naturally, such as third-party outages or custom error handling.

Types of mocks

Here’s a quick breakdown of different mocking strategies:

  • Static mocks: consistently return the same response, independent of test conditions.
  • Selective (dynamic) mocks: generate varied responses based on the incoming request or scenario context.
  • Proxy mode: flexibly routes requests either to actual external services or mocks, depending on test requirements.

Choosing the right mocking strategy

  • Static mocks work well for routine happy-path scenarios, where exercising the third-party integration itself is not the goal.
  • Selective mocks are good when your test suite covers numerous variations or edge cases, especially across multiple external dependencies.
  • Proxy mode offers the flexibility to switch between mocked responses and live external services, perfect for pre-release verification of real-world contracts.

How to implement mocks

We use a straightforward mock service called mmock, hosted on our infrastructure (Nomad).

In the test environment, the microservice that interacts with third-party services is configured to point directly at the mock service instead of the real APIs.
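A sketch of what that environment-specific configuration might look like. The hostnames and the environment-variable name are assumptions, not our actual values; the point is simply that only the test environment resolves the third-party base URL to the mock service.

```ruby
# Hypothetical environment-specific configuration: in the test environment,
# the third-party base URL resolves to the mock service instead of the real API.
def third_party_base_url(env = ENV.fetch("APP_ENV", "production"))
  if env == "test"
    "http://mmock.internal:8083"          # mock service endpoint (placeholder)
  else
    "https://api.third-party.example.com" # real external service (placeholder)
  end
end
```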

Static mocks are stored in a dedicated directory within our test framework and always return positive responses, ensuring stability across all tests. For instance, here’s how an expectation file looks in mmock:

{
  "description": "Simple GET request",
  "request": {
    "method": "GET",
    "path": "/hello",
    "headers": {
      "X-Custom-Header": ["test-value"]
    },
    "queryStringParameters": {
      "user": ["~^user_[0-9]+$"]
    }
  },
  "response": {
    "statusCode": 200,
    "headers": {
      "Content-Type": ["application/json"]
    },
    "body": "{\"message\": \"Hello, user!\"}"
  }
}
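Note the "~" prefix on the user query parameter in the expectation above: it marks the value as a pattern rather than a literal string, so any user ID of that shape matches the mock. A quick Ruby check of the same pattern shows which values would be accepted:

```ruby
# The same pattern as in the expectation file above, checked in plain Ruby
# to illustrate which "user" query values the mock would match.
USER_PATTERN = /^user_[0-9]+$/

["user_42", "user_007", "user_", "admin_1"].each do |value|
  puts "#{value}: #{value.match?(USER_PATTERN) ? "match" : "no match"}"
end
```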

Selective mocks are implemented with ERB templating, since our tests are written in Ruby. Other languages have equivalent options: Handlebars or EJS for TypeScript, FreeMarker or Mustache for Java. The expectation files look similar but include templated parameters (<%= @user_param %>):

{
  "description": "GET with templated user param, error response",
  "request": {
    "method": "GET",
    "path": "/hello",
    "headers": {
      "X-Custom-Header": ["test-value"]
    },
    "queryStringParameters": {
      "user": ["<%= @user_param %>"]
    }
  },
  "response": {
    "statusCode": 400,
    "headers": {
      "Content-Type": ["application/json"]
    },
    "body": "{\"error\": \"Invalid user parameter: <%= @user_param %>\"}"
  }
}

From there, it’s just a matter of technique — you write a helper method that resolves your template values and uploads the mock file to the mock service.
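A minimal sketch of such a helper, assuming the mock service exposes an HTTP endpoint for uploading expectation files. The console URL and the upload path are assumptions and depend on how your mmock instance is deployed.

```ruby
require "erb"
require "net/http"
require "uri"

# Assumed mock-service console URL; adjust to your deployment.
MOCK_CONSOLE_URL = ENV.fetch("MOCK_CONSOLE_URL", "http://mmock.internal:8082")

# Evaluation context so placeholders like <%= @user_param %> resolve.
class ExpectationContext
  def initialize(params)
    params.each { |key, value| instance_variable_set("@#{key}", value) }
  end

  def expectation_binding
    binding
  end
end

# Resolve templated values inside an ERB expectation template.
def render_expectation(template, params)
  ERB.new(template).result(ExpectationContext.new(params).expectation_binding)
end

# Render the template file and upload the result to the mock service.
# The "/api/mapping/..." path is a placeholder for your console's upload route.
def upload_mock(file_name, template_path, params)
  body = render_expectation(File.read(template_path), params)
  uri = URI("#{MOCK_CONSOLE_URL}/api/mapping/#{file_name}")
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.put(uri.path, body, "Content-Type" => "application/json")
  end
end
```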

Additionally, we use an after hook that cleans up all mock files created during tests.
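One way to structure that cleanup is a small registry: each test tracks the mock files it uploads, and the after hook deletes them all. This is a sketch, not our exact implementation; the RSpec wiring and the DELETE endpoint in the comment are assumptions.

```ruby
# Hypothetical cleanup registry: tests register every uploaded mock file,
# and an after hook deletes them all from the mock service.
class MockRegistry
  def initialize
    @files = []
  end

  def track(file_name)
    @files << file_name
  end

  # Yields each tracked file to a deleter (e.g. an HTTP DELETE), then resets.
  def cleanup
    @files.each { |file_name| yield file_name }
    @files.clear
  end

  def tracked
    @files.dup
  end
end

# Example after hook (RSpec shown; the endpoint path is an assumption):
#
#   config.after(:each) do
#     REGISTRY.cleanup do |file_name|
#       uri = URI("#{MOCK_CONSOLE_URL}/api/mapping/#{file_name}")
#       Net::HTTP.start(uri.host, uri.port) { |http| http.delete(uri.path) }
#     end
#   end
```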

Conclusion

Mocks significantly simplify and stabilize your E2E tests. They allow you to verify complex scenarios, eliminate dependencies on unstable external services, and speed up your testing process.

Implement mocks gradually: start with simple static mocks and add more complex selective mocks as the complexity of your system demands.

Remember, mocks alone won’t confirm if your service integrates correctly with third-party dependencies. Always maintain a dedicated smoke test suite using real third-party services to ensure full system integrity.

If you want to implement a similar approach yourself or have any questions about this process, feel free to write to me on LinkedIn.