Introduction to Automated Testing for Microservices

October 30, 2018 

What are Microservices?

Microservices - also called the Microservice Architecture - is an architectural style that structures an application as a collection of loosely coupled services, each delivering a complete business capability. The Microservice architecture permits the Continuous Delivery/Deployment of large, complex applications.

It additionally enables an organization to evolve its technology stack. Microservices are increasingly used to create larger, more complex applications that are better developed and managed as a combination of smaller services working cohesively together to deliver the larger, application-wide functionality.

Big and complicated applications are composed of simpler, independent programs that are executable on their own. These small programs are then combined to deliver all the functionality of the big, monolithic app.

What is Microservices Testing?

Any testing strategy employed should aim to cover each layer of the service, and the interactions between layers, while staying lightweight. Microservices testing requires a different methodology: the test team should plan its approach to testing Microservices in the design phase itself. The test team's early involvement with the design/architecture group, to understand the functionality, its usage, and the exposed interfaces, will prove helpful. In addition, testers should ensure that all interfaces are generic, so that other systems/services can consume them without hurdles.

Since there is a demand to automate everything, make use of Microservices test automation tools. These tools help to verify the functionality of each independent service unit and also perform integration tests by combining more than one of these Microservices.

Levels of Automated Testing for Microservices

Unit Testing - This tests the internal workings of an individual Microservice unit. These tests can be automated at the programming-language level using a unit testing framework.

Contract Testing - This tests that each Microservice unit adheres to the functionality promised in its established contract. Each service component is tested individually as a black box. In contract testing, a service should return the same results for the same given input, even if the implementation of the service changes. This keeps each service in the Microservice architecture working robustly over a longer period and guarantees that new functionality can be added to a service without affecting its existing consumers.

Integration Testing - This checks that two or more services communicate with each other and produce the results specified in the service-level functional document. This can be an integration test of the overall Microservices architecture or of just a sub-area of the architecture.

UI Functional Testing - Here the services are integrated with a UI, and testing is done through that UI: the inputs needed by the Microservices are supplied via the UI, and the desired output is verified via the UI.

Automated testing can be carried out for all of these test types.

For unit testing, use unit testing frameworks such as NUnit or JUnit and automate the tests with minimal QA involvement.
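As a minimal sketch of what such a test might look like with JUnit 5, the example below checks a hypothetical PriceCalculator class; the class and its discount rule are stand-ins for whatever logic your service owns.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical domain class owned by a "pricing" Microservice.
class PriceCalculator {
    // Applies a flat 10% discount to orders of 100.00 or more.
    double total(double subtotal) {
        return subtotal >= 100.0 ? subtotal * 0.9 : subtotal;
    }
}

class PriceCalculatorTest {

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    void appliesDiscountToLargeOrders() {
        assertEquals(90.0, calculator.total(100.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUnchanged() {
        assertEquals(50.0, calculator.total(50.0), 0.001);
    }
}
```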

For contract testing, QA test automation engineers are involved. The test is performed on each service unit by isolating it and hitting the individual URI of the service. The functions given in the contract are tested using a set of automation scripts within a test automation framework.

Integration testing is automated with the same set of tools used in contract testing. The only difference is that more than one service unit is involved: the automation scripts trigger the functionality that drives the communication between these services, and the desired output is verified. The automated test should also verify the communication message formats and any databases shared between the services.
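As a rough illustration, the sketch below uses Java's built-in HttpClient and JUnit 5 to exercise two cooperating services through their REST endpoints. The order-service and inventory-service URLs, endpoints, and JSON fields are assumptions made for the example, not real APIs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class OrderInventoryIntegrationTest {

    private final HttpClient http = HttpClient.newHttpClient();

    // Hypothetical base URLs for two services deployed in a test environment.
    private static final String ORDER_SERVICE = "http://localhost:8081";
    private static final String INVENTORY_SERVICE = "http://localhost:8082";

    @Test
    void placingAnOrderReservesStock() throws Exception {
        // Trigger the functionality that makes order-service call inventory-service.
        HttpRequest placeOrder = HttpRequest.newBuilder(URI.create(ORDER_SERVICE + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-1\",\"quantity\":2}"))
                .build();
        HttpResponse<String> orderResponse = http.send(placeOrder, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, orderResponse.statusCode());

        // Verify the downstream effect: inventory-service should report the reservation.
        HttpRequest checkStock = HttpRequest.newBuilder(URI.create(INVENTORY_SERVICE + "/stock/ABC-1")).GET().build();
        HttpResponse<String> stockResponse = http.send(checkStock, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, stockResponse.statusCode());
        assertTrue(stockResponse.body().contains("\"reserved\":2"),
                "expected the reservation to be visible in inventory-service");
    }
}
```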

UI functional testing is automated using UI test automation tools such as UFT, Selenium, or any other UI-based automation tool.
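A hedged sketch of such a UI-level check with Selenium WebDriver follows: it drives a hypothetical search page that is backed by the Microservices and asserts on what the UI renders. The URL, element IDs, and expected text are illustrative assumptions.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SearchUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical front end that calls the product Microservice behind the scenes.
            driver.get("http://localhost:3000/search");
            driver.findElement(By.id("query")).sendKeys("laptop");
            driver.findElement(By.id("search-button")).click();

            // Wait for the first result rendered from the Microservice response.
            String firstResult = new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector(".result-title")))
                    .getText();

            if (!firstResult.toLowerCase().contains("laptop")) {
                throw new AssertionError("Unexpected first result: " + firstResult);
            }
        } finally {
            driver.quit();
        }
    }
}
```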

When automating Microservice testing, more than one tool or framework can be integrated. It is also good practice to integrate the API test automation framework with the UI-based test automation framework. This is where test automation is heading: many organizations use a global hybrid test automation framework rather than maintaining separate frameworks.


How Does Automated Testing for Microservices Work?

Test each service individually

Test automation is a tool for testing discrete Microservices. It is easy to create a simple test harness that repeatedly calls the service and compares a known set of inputs against the expected outputs. On its own this won't get you very far, but it will free up the test team to concentrate on more complex testing.
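A minimal sketch of such a harness, assuming a hypothetical GET /convert endpoint: it loops over a table of known inputs and expected outputs and flags any mismatch.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class ConversionHarness {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // Known inputs mapped to the outputs the service is expected to return (illustrative).
        Map<String, String> cases = Map.of(
                "/convert?celsius=0",   "{\"fahrenheit\":32.0}",
                "/convert?celsius=100", "{\"fahrenheit\":212.0}");

        for (Map.Entry<String, String> testCase : cases.entrySet()) {
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://localhost:8080" + testCase.getKey()))
                    .GET().build();
            String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

            System.out.printf("%s -> %s [%s]%n", testCase.getKey(), body,
                    body.equals(testCase.getValue()) ? "PASS" : "FAIL, expected " + testCase.getValue());
        }
    }
}
```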

Test the different functional pieces of the application

Having identified the key functional elements of the application, you should seek to test them in much the same manner as traditional integration testing. Here the advantage of test automation is clear: you can rapidly build test scripts that run each time one of the Microservices is updated. By comparing the outputs of the new code with the previous outputs, you can quickly establish whether anything has changed.

Don’t test in a small setup

There is a tendency among some managers to keep the testing group short of resources, but with Microservices-based applications this is counterproductive. Rather than attempting to create small local staging environments to test code in, look to leverage cloud-based testing, where you can dynamically allocate resources as tests need them and free them up when the tests have completed. As such, test automation won't directly help here.

Try to test across different setups

It is advisable to use multiple environments to test code, similar to cross-browser testing for web applications. The idea is to expose the code to any minor variations in things like library versions, underlying hardware, etc. that may affect it when it is deployed to production. One approach is to create staging environments on the fly. Using Kubernetes, it is possible to create a test environment, populate it with data from a known source, load the code, and then run the tests. The beauty is that the environment is recreated each time, automatically exposing the code to any differences that might exist. Of course, the flip side is that it becomes harder to diagnose the underlying cause of any bugs.

Use Canary testing as much as possible

Canary testing is a methodology where a small set of users is exposed to the code changes and their experience is compared with the experience of users still running the old code. This methodology is particularly useful for testing Microservices. It uses monitoring to assess the impact of the change: error rates, service load, responsiveness, and similar metrics can tell whether the new code has a negative impact. By adopting a strategy where one service instance is updated at a time, you can quickly and automatically do canary testing.
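The comparison itself can be a simple calculation. The sketch below contrasts the error rate of the canary instances with that of the baseline fleet and flags a regression when the canary is noticeably worse; the 1.5x threshold and the request/error counts are illustrative assumptions, and the numbers would normally come from your monitoring system.

```java
public class CanaryCheck {

    // Returns true when the canary's error rate is more than 1.5x the baseline's
    // (an arbitrary threshold chosen for illustration).
    static boolean canaryLooksUnhealthy(long baselineRequests, long baselineErrors,
                                        long canaryRequests, long canaryErrors) {
        double baselineRate = (double) baselineErrors / baselineRequests;
        double canaryRate = (double) canaryErrors / canaryRequests;
        return canaryRate > baselineRate * 1.5;
    }

    public static void main(String[] args) {
        // Example figures standing in for metrics pulled from monitoring.
        boolean unhealthy = canaryLooksUnhealthy(100_000, 120, 5_000, 25);
        System.out.println(unhealthy
                ? "Canary error rate elevated: roll back the new version"
                : "Canary healthy: continue the rollout");
    }
}
```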

AI-powered testing

AI, or Artificial Intelligence, can be used to fully automate canary testing of a Microservices application. AI methodologies such as Deep Learning recognize changes and issues triggered by the new code. A few users are moved over to the new version, and the AI compares their experience with that of the existing users. Since this can happen automatically, it replaces the human in the loop.

Debuggable code

Writing debuggable code includes the ability to ask questions of it later on, which in turn involves -

Correct code instrumentation.

Knowing the observability setup of choice (be it metrics, logs, exception trackers, traces, or a mix of these) and its pros and cons.

The ability to pick the best observability setup given the requirements of the given service, the operational quirks of its dependencies, and good engineering intuition.
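As one possible illustration of correct code instrumentation, the sketch below uses SLF4J with MDC to attach a correlation ID to every log line a service emits, so questions can be asked of it later in production. The OrderHandler class, the handleOrder method, and the field names are hypothetical.

```java
import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderHandler {

    private static final Logger log = LoggerFactory.getLogger(OrderHandler.class);

    // Hypothetical entry point for one request to the order Microservice.
    public void handleOrder(String customerId, String sku) {
        // Attach a correlation ID so every log line for this request can be found later.
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            log.info("Order received customerId={} sku={}", customerId, sku);
            // ... business logic would go here ...
            log.info("Order accepted sku={}", sku);
        } catch (RuntimeException e) {
            log.error("Order failed customerId={} sku={}", customerId, sku, e);
            throw e;
        } finally {
            MDC.clear();
        }
    }
}
```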


Benefits of Automated Testing for Microservices

Testing Microservices offers the following benefits -

  • It incentivizes better isolation between services and the design of better systems.
  • It applies a certain design pressure on programmers to structure the API in a way that’s easy to consume.
  • Tests act as fantastic documentation for the API exposed by an application.
  • Test each service individually.
  • Test the different functional pieces of the application.
  • Monitor to assess the impact of the change.
  • Monitor the ongoing performance of your application.

Why Does Automated Testing for Microservices Matter?

Microservices testing matters for the following reasons -

Decoupling

Each piece of functionality is loosely coupled, supporting a SaaS/SOA architecture. Microservices are scattered across platforms over the network and integrated through REST over HTTP.

Maintainability

Every service can be maintained, upgraded, and tested independently, which is an essential requirement for a SaaS architecture. This makes Microservices a necessary enabler of Continuous Delivery, supporting frequent releases while offering high system availability and stability.

Scalability

Each Microservice can be scaled autonomously according to usage, and each service can be deployed on its own hardware according to its resource requirements, which is not possible in the traditional monolithic design approach.

Availability

Every Microservice can be autonomously designed and deployed for failover and fault tolerance. For example, issues like memory and CPU usage are handled locally while the other services continue to work as usual.


How to Adopt Automated Testing for Microservices?

There are five strategies used to approach testing Microservices successfully.

The documentation-first strategy

Follow a documentation-first approach: the majority of the documentation is markdown in Git, and the API documents stay open source, so it's all public. Then, before anybody writes any API change or any new API, the documentation is updated first, and that change is reviewed to ensure that it aligns with the documented API conventions and standards, that no breaking change is present, and that it fits with naming conventions and so forth as well.

The full stack in-a-box strategy

The full stack in-a-box strategy entails replicating a Cloud environment locally and testing everything in a single Vagrant instance (“$ vagrant up”).

The AWS testing strategy

This third method involves spinning up an Amazon Web Services (AWS) environment to deploy and run tests on. This is a more adaptable alternative to the full stack in-a-box strategy. Some people call this a personal deployment [strategy], where everybody has their own AWS account. The code on a workstation can be pushed up into AWS in around ten minutes and run just like in a real system.

The shared testing instances strategy

The fourth strategy is a hybrid between full stack in-a-box and AWS testing. It involves working from your own local workstation while pointing the local environment at a separate, shared instance of a Microservice during testing. Some teams run a separate instance of a Microservice just to be used for testing local builds; for example, the local build might point to a test image parser that is running in the Google infrastructure.

The stubbed services strategy

The mocks or ‘stubs’ of Microservices behave like the real service and advertise themselves in service discovery as the real service, but are dummy imitations. For example, a test may require that the service under test becomes aware that a user has carried out a set of tasks. With stubbed services, you can pretend that those user tasks have occurred without the typical complexity that comes with them. This approach is much more lightweight than running the services in their entirety.
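A hedged sketch of such a stub using WireMock's Java DSL follows (check class and method names against the WireMock version you use); the /users/42/tasks endpoint, the port, and the JSON body are invented for illustration.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class StubbedTaskService {
    public static void main(String[] args) {
        // Start a stub on port 8089 and register it wherever service discovery expects the real service.
        WireMockServer stub = new WireMockServer(8089);
        stub.start();

        // Pretend the user has already completed their tasks, without the real workflow running.
        stub.stubFor(get(urlEqualTo("/users/42/tasks"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"task\":\"verify-email\",\"status\":\"done\"}]")));

        System.out.println("Stubbed task service listening on " + stub.baseUrl());
        // stub.stop() would shut it down when the test run finishes.
    }
}
```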


Best Practices for Automated Microservices Testing

Testing in isolation

Microservices are tricky to test since you have a collection of independent services communicating with other independent services in many (usually unanticipated) ways. An excellent place to begin test automation efforts is to test the functionality of a specific Microservice directly, in isolation. Generally this is quickly done by using a REST API to talk to the service, plus some mocking, so that the service can be tested alone without any integration with other services.
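One common way to get that isolation is to mock the service's collaborators. The sketch below uses Mockito to replace a hypothetical InventoryClient so a hypothetical OrderService can be tested without any other service running; both types are invented for the example.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

class OrderServiceIsolationTest {

    // Hypothetical collaborator that would normally call the inventory Microservice over REST.
    interface InventoryClient {
        boolean reserve(String sku, int quantity);
    }

    // Hypothetical service under test.
    static class OrderService {
        private final InventoryClient inventory;
        OrderService(InventoryClient inventory) { this.inventory = inventory; }
        boolean placeOrder(String sku, int quantity) {
            return inventory.reserve(sku, quantity);
        }
    }

    @Test
    void placesOrderWhenStockCanBeReserved() {
        InventoryClient inventory = mock(InventoryClient.class);
        when(inventory.reserve("ABC-1", 2)).thenReturn(true);

        OrderService service = new OrderService(inventory);

        assertTrue(service.placeOrder("ABC-1", 2));
        verify(inventory).reserve("ABC-1", 2);
    }
}
```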

Make a contract

It's almost impossible to know all the ways consumers might use your services. With a consumer-driven contract, the consumer provides a suite of tests that determine what types of interactions are required and in which format. The service then agrees to the contract and ensures that it is not broken. This removes dependencies on other services. This methodology also lets you verify that the contract is fulfilled at build time. Tools like Pact give a better understanding of how this type of functionality can be achieved when developing and testing Microservices. Once a consumer-driven contract process is in place, the next step in testing Microservices is to shift-right into the previously forbidden world of production.
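A hedged sketch of a consumer-driven contract test using Pact's JUnit 5 consumer DSL follows; the package and method names match recent Pact JVM releases but may differ in other versions, and the user-service provider, the /users/42 endpoint, and the response fields are assumptions made for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "user-service")
class UserServiceContractTest {

    // The consumer states the interaction it depends on; Pact records it as the contract.
    @Pact(consumer = "web-frontend")
    RequestResponsePact userById(PactDslWithProvider builder) {
        return builder
                .given("user 42 exists")
                .uponReceiving("a request for user 42")
                    .path("/users/42")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(new PactDslJsonBody().numberType("id", 42).stringType("name", "Alice"))
                .toPact();
    }

    // The test runs against a Pact mock server; the recorded pact is later verified against the real provider.
    @Test
    void fetchesUserAccordingToContract(MockServer mockServer) throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(mockServer.getUrl() + "/users/42")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```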

Shift-Right into production

In a DevOps, Microservices world, testing in production becomes a necessary part of the overall quality plan. With the fluid, shifting relationships that a Microservices-based architecture creates, how can you be sure about the ways the many services are going to scale? How can you know in advance how the services might behave? How do you test this uncertainty? The answer is to begin testing in production.

Monitoring and alerts

Having a solid monitoring and alerting system in place, along with tracing in production, is critical. It reveals immediately if one of the services goes down or becomes unresponsive. By recognizing an issue in production with the help of monitoring, you can often simply roll back to the last known good version of the service before users even know there is an issue.
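A very small sketch of the idea, assuming each service exposes a hypothetical /health endpoint: poll it on a schedule and raise an alert the moment a service stops answering. A real setup would use a monitoring platform rather than a hand-rolled loop.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

public class HealthWatcher {
    public static void main(String[] args) throws InterruptedException {
        HttpClient http = HttpClient.newBuilder().connectTimeout(Duration.ofSeconds(2)).build();
        // Hypothetical health endpoints for the services being monitored.
        List<String> services = List.of("http://localhost:8081/health", "http://localhost:8082/health");

        while (true) {
            for (String url : services) {
                try {
                    HttpResponse<Void> response = http.send(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.discarding());
                    if (response.statusCode() != 200) {
                        System.err.println("ALERT: " + url + " returned " + response.statusCode());
                    }
                } catch (Exception e) {
                    System.err.println("ALERT: " + url + " unreachable: " + e.getMessage());
                }
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}
```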


Best Automated Testing Tools

  • Hoverfly - simulate API latency and failures.
  • Vagrant - build and maintain portable virtual software development environments.
  • VCR - record and replay HTTP interactions for use in tests.
  • Pact - a framework for consumer-driven contract testing.
  • Apiary - API documentation tool.
  • API Blueprint - design and prototype APIs.
  • Swagger - design and prototype APIs.