Agile and DevOps Testing Solutions - XenonStack

What are Microservices?

Microservices is an architectural style in which a service or application is divided into smaller parts or components; these components are also called microservices. It focuses on building single-function modules with well-defined interfaces and operations. The components are structured so that coupling stays as loose as possible, which lets them be developed and deployed independently. Splitting a service into microservices makes it highly maintainable and easily testable, and enables continuous delivery even for complex applications. This article gives an overview of the role of test automation in microservices and DevOps.

Why Microservices Architecture?

Earlier, organizations used a monolithic architecture to develop services, but because of its drawbacks the microservices architecture is now preferred. The problems with adopting a monolithic architecture are as follows:

  • The focus is on building a single autonomous unit.
  • Making changes is hard and slow.
  • It is usually written in a single language.
  • The codebase becomes huge.
  • It is hard for developers to understand.
  • Modifying one part may require building and deploying a whole new version of the service.
  • Scaling a single function means scaling the whole service.

To solve these issues, a new architecture was introduced, called microservices.

How Do Microservices Work?

In a microservices architecture, the application is split into a set of smaller, interconnected parts or components. Each distinct feature or piece of functionality is implemented as a separate service. Each microservice is exposed through an API Gateway, along with its UI and adapters. These APIs can be consumed by other microservices or by the application's clients. The API Gateway is responsible for concerns such as load balancing, caching, access control, API metering, and monitoring. Each microservice can be implemented in a different language. Because they are developed, deployed, and provisioned independently, microservices are easy to unit test, but the number of test cases needed can be considerable.
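As a minimal sketch of what such a single-function service can look like, the snippet below exposes one REST endpoint using only the JDK's built-in HttpServer. The service name, port, path, and JSON body are illustrative assumptions, not part of any particular framework.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A single-function "pricing" microservice: one well-defined endpoint,
// independently deployable, reachable directly or through an API gateway.
public class PricingService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/price", exchange -> {
            byte[] body = "{\"productId\":\"42\",\"price\":19.99}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body); // other microservices or clients consume this over HTTP
            }
        });
        server.start();
    }
}
```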

Test Automation in Microservices Architecture

In a microservices-based application there can be a vast number of test cases, and of course we want the service to be bug-free. Checking all of them by hand would be an enormous effort, so we use automation tools. The levels of automation are:

  • Unit Testing/ Testing in Isolation
  • Integration Testing
  • End-To-End Testing
  • Contract Testing

Unit Testing / Testing in Isolation

Here the main aim is to test the behaviour of a single module. We observe the interactions between objects and their dependencies, and the changes in their state. Usually this is done quickly by using a REST API to interact with the service.

Because we have many independent services interacting with each other, testing is obviously tricky. Testing here needs not only solitary unit testing (making the module completely independent and supplying its few dependencies as test doubles) but also sociable unit testing (letting the module interact with its real dependencies). Solitary unit testing usually requires mocking. Sociable unit testing matters even more than solitary unit testing, because we want not only the service itself to work correctly but also to make sure that the clients or APIs consuming it do not start behaving wrongly. At the same time, we have to watch whether testing a particular behaviour of the service is constraining its implementation.
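A minimal sketch of a solitary unit test is shown below. It assumes JUnit 5 and Mockito are on the classpath; OrderService and PaymentClient are hypothetical names used only to illustrate replacing a collaborator with a test double.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Solitary unit test: the collaborator (PaymentClient) is replaced by a test double,
    // so only OrderService's own behaviour is exercised.
    @Test
    void placesOrderWhenPaymentSucceeds() {
        PaymentClient payment = mock(PaymentClient.class);
        when(payment.charge("cust-1", 100)).thenReturn(true);

        OrderService service = new OrderService(payment);

        assertTrue(service.placeOrder("cust-1", 100));
    }

    // Hypothetical collaborator and unit under test, inlined so the sketch is self-contained.
    interface PaymentClient { boolean charge(String customerId, int amount); }

    static class OrderService {
        private final PaymentClient payment;
        OrderService(PaymentClient payment) { this.payment = payment; }
        boolean placeOrder(String customerId, int amount) { return payment.charge(customerId, amount); }
    }
}
```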

Integration Testing

Up to this point we have covered unit testing. Still, the crucial part for microservices is how they interact with each other, i.e., whether inter-service communication works correctly. According to Fowler, an integration test “exercises communication paths through the subsystem to check for any incorrect assumptions each module has about how to interact with its peers.” An integration test checks for defects in the interaction between microservices, or between a microservice and other services.
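The sketch below shows one possible integration test against a running peer service, using the JDK's HttpClient and JUnit 5. The URL, query parameter, and expected JSON field are assumptions carried over from the earlier pricing-service example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class OrderToPricingIntegrationTest {

    // Integration test: checks the calling service's assumptions about the pricing
    // service's response against a real (or virtualized) running instance.
    @Test
    void pricingServiceReturnsPriceForKnownProduct() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/price?productId=42")) // assumed test endpoint
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"price\""), "response should expose a price field");
    }
}
```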

End-To-End Testing

As the name suggests, end-to-end testing checks the whole service. It includes all the services, databases, and other systems linked to our service. It is more error-prone and more time-consuming, and it is hard to prepare all the test cases, because one has to come up with every way an end user might use the service. According to Fowler, "end-to-end tests may also have to account for asynchrony in the system, whether in the GUI or due to asynchronous backend processes between the services." It is recommended not to rely on this approach alone, but to use it as a final stamp of approval after unit and integration testing.
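A sketch of what such a test can look like is shown below, assuming Selenium 4 WebDriver and a ChromeDriver binary are available. The test-environment URL, element IDs, and the confirmation text are hypothetical; the explicit wait illustrates the point about asynchrony.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class CheckoutEndToEndTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Drive the system the way an end user would, through the UI,
            // with all services and databases behind it actually running.
            driver.get("https://shop.example.test/");             // assumed test environment
            driver.findElement(By.id("add-to-cart-42")).click();  // assumed element ids
            driver.findElement(By.id("checkout")).click();

            // Explicit wait absorbs asynchrony in the GUI or in backend processes.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.textToBePresentInElementLocated(
                            By.id("order-status"), "Confirmed"));
        } finally {
            driver.quit();
        }
    }
}
```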

Contract Testing

Here the tester’s aim is to test each service against the contract (the documented interface) provided with it: in every way consumers might use the service, it should behave as its contract promises. In most cases, contract testing is performed in combination with other strategies such as unit testing, integration testing, and load testing.
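Dedicated consumer-driven contract tools such as Pact exist for this; purely as an illustration of the idea, the sketch below hand-rolls a contract check with JUnit 5 and the JDK HttpClient. The endpoint, content type, and required fields are assumptions standing in for whatever the service's documented contract actually promises.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class PricingContractTest {

    // The "contract" here is the promise the pricing service documents for its consumers:
    // a JSON content type and a body containing productId and price fields.
    @Test
    void responseHonoursTheDocumentedContract() throws Exception {
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8081/price?productId=42")) // assumed endpoint
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.headers().firstValue("Content-Type").orElse("").contains("application/json"));
        assertTrue(response.body().contains("\"productId\""), "contract promises a productId field");
        assertTrue(response.body().contains("\"price\""), "contract promises a price field");
    }
}
```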

Human exploratory testing also helps assure quality. If we use a standard specification when designing the APIs of our microservices, a lot of tooling already exists that can auto-generate tests against that specification. Where test auto-generation is not possible, there are other environments that support HTTP calls, e.g., SoapUI and JMeter.

Service Virtualization

Suppose we want to test our complete service, but some of the sub-services are not ready yet. In this situation we use service virtualization to create a stand-in (a dummy) for the incomplete services so that testing can proceed.
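One way to do this, assuming the WireMock library is on the classpath, is sketched below: a stub stands in for a payment service that is not ready yet. The port, path, and response body are illustrative.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;

public class PaymentServiceVirtualization {
    public static void main(String[] args) {
        // Stand-in for a payment service that is not ready yet: it answers the
        // calls that the service under test would make to the real thing.
        WireMockServer paymentStub = new WireMockServer(WireMockConfiguration.options().port(9090));
        paymentStub.start();

        paymentStub.stubFor(get(urlEqualTo("/payments/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"paymentId\":\"42\",\"status\":\"PAID\"}")));

        // ... run the tests that depend on the payment service, then:
        // paymentStub.stop();
    }
}
```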

Challenges of Testing Microservices

Having all services available for testing is hard

Suppose our service is composed of ten microservices. At the start we may be waiting for three of them to be completed, and by the time they are done, three other microservices have gone back into review or their team has started working on improvements. Finding a moment when all the microservices are available at once is very rare.

Complicated and heavy work

Microservices are parts that are developed and deployed independently, yet the interaction between them is also a crucial part of the architecture: the services are loosely coupled, but they still interact. So each microservice has to be tested both in isolation and as part of the integrated service.

Knowledge Gaps between Testers and Developers

Whoever is responsible for testing a microservice should have complete knowledge of how that particular microservice works in order to create test cases. The tester should also be able to explain to the developer where performance falls short of expectations, or what additional functionality could be implemented within the microservice.

Pagero's Journey from Monolith to Microservices Architecture

At first their service was based on a monolithic architecture, and as they moved from monolith to microservices there were obstacles along the way. The part where they improved their structure to increase overall performance is the significant one.

Initially, their pipeline was structured so that any change in any microservice always led to rebuilding and re-testing the whole system, and even after that they sometimes still had bugs in services. So, rather than rechecking the entire system, they grouped the services into features and assigned a testing team to each feature they provide.

With this, they no longer had to run all the services on the same system; when testing a particular feature, only a subset of services was started, and they could pinpoint faults more precisely than before.

But if there was a fault, they still could not tell who was responsible for it or who should fix it. And having all the services available at the same time so they could be tested was rare. So they assigned a team to each service. Assume services ‘foo’ and ‘bar’ are assigned to team ‘A’ and service ‘baz’ to team ‘B’, and that ‘foo’ and ‘bar’ combine to form feature ‘A’; then if any fault is found in feature ‘A’, team ‘A’ is responsible for fixing that bug.

Still, if team ‘A’ wanted to test feature ‘A’, a new ‘update-release-candidate’ had to be built before testing could start. Because of this, some versions were either never tested or did not pass the tests. So they decided to test each service first and only then build an ‘update-release-candidate’.

With this approach, whenever a service is updated, the team responsible for it tests the corresponding feature, and only if the feature passes is an ‘update-release-candidate’ generated. So if team ‘A’ updates one of its services and its feature passes the tests, a release candidate is assembled together with the last tested version of feature ‘B’; no candidate is generated until it has been thoroughly tested.

Enterprise DevOps Explained

In the traditional approach, there used to be a team of developers whose work was to code the functionality and deliver it in executable form to a separate team of operators, whose work was to deploy that functionality in the required environment. This whole process was very time-consuming. Suppose it takes a developer four days to plan the application, code it, and test it. Some days later, the operations team finds a bug or a fault, or runs into environment issues, and feedback is sent back to the developers. The developer then needs about three more days to remove the fault and have it tested. The extra time is not the only problem: while the development team is working on the failure, the buggy version of the application is still deployed, so the application's clients do not get proper service, which hurts the application's reputation. The old way of handling this was to stop the service or application completely until the bugs were removed, but that also causes the company a huge loss; a well-known example is Netflix, which was once down for an hour and reportedly lost $200,000. Out of this came the idea of DevOps.

What is DevOps?

The idea of DevOps is hidden in its name: it is an amalgamation of the two words ‘Development’ and ‘Operations’, i.e., bringing the development team and the operations team together so that they work in the same space, develop a common mindset, and share ideas. It can also be viewed as a union of the development team and the operations team. This way, as the development team builds a block of code, the operations team checks it simultaneously; one team does not have to wait for the other's work to complete. Things can be sped up even further by introducing automated tools at each step. This way they can deliver the application sooner, in far fewer iterations, and with much less manual work. Implemented well, the DevOps approach opens a whole new world of possibilities, both in time and in quality.

Overview of DevOps Architecture

The development team plans the structure of the system, the modules, and the algorithms that the application will use, and then codes the planned system. Here the ‘git’ tool comes in very handy: repositories can be created and versions of the application maintained automatically. Next, the code gets built, and tools like ‘Gradle’ can be used for this stage. After that, the application reaches the testing phase, where bugs are removed; the automation tool most popularly used here is ‘Selenium’. Up to this step the primary responsibility lies with the development team; after this, the tested application is handed over to the operations team, which deploys it in the required environment.

The deployed product is continuously configured to the desired state; this step is called Operate. ‘Puppet’ and ‘Docker’ are the automation tools most companies use in this step.

The running application is then monitored continuously, where a tool such as ‘Nagios’ can automate the step, and the reports from monitoring go back to the development team as feedback. The last step, the core of DevOps, is to integrate this whole scenario: results from the testing stage are checked, the deployable application is handed over to the operations team, and feedback is sent back to the development team. An automation tool exists for this too: ‘Jenkins’.

Test Automation in DevOps

In DevOps the main emphasis is on continuous delivery, and the application is updated much more frequently. For every change to the application, a test run should check the correctness of the update and the application's compatibility with the previous version. Manual testing would be very time-consuming and would slow the whole process down, so we mostly use automation tools for testing.

Types of Test Automation in DevOps

Unit Testing

Unit testing can be very fast because there are no external dependencies, no calls to the database, and so on. Automated unit tests verify the behaviour of a particular part of the system; in our case, the part introduced or changed by the update. Unit tests focus on the internal paths of the code in the updated or newly introduced segment. NUnit and JUnit are some of the tools that can automate unit testing. Unit tests mainly cover boundary cases, standard cases, and the like.
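As a small sketch of such boundary tests with JUnit 5, consider the following; the discount rule and the DiscountCalculator class are hypothetical and inlined only so the example compiles on its own.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Fast, dependency-free unit tests around the boundaries of the updated code path.
    @Test
    void noDiscountJustBelowThreshold() {
        assertEquals(0, DiscountCalculator.discountPercent(99));
    }

    @Test
    void discountStartsExactlyAtThreshold() {
        assertEquals(10, DiscountCalculator.discountPercent(100));
    }

    @Test
    void negativeAmountsAreRejected() {
        assertThrows(IllegalArgumentException.class, () -> DiscountCalculator.discountPercent(-1));
    }

    // Hypothetical unit under test.
    static class DiscountCalculator {
        static int discountPercent(int orderTotal) {
            if (orderTotal < 0) throw new IllegalArgumentException("total must be non-negative");
            return orderTotal >= 100 ? 10 : 0;
        }
    }
}
```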

Integration Testing

Integration testing verifies the paths and interactions between packages or modules. It is most valuable when there are database calls or interactions with other services or APIs. In simple words, integration testing checks the behaviour of one module together with another module. It is much slower than unit testing, because there is more processing and there are more tests. Interactions usually open more doors into the application, and testers try to go through all of them to confirm validity, checking combinations of the modules' inputs.
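To illustrate the database-call case, the sketch below uses JUnit 5 with an in-memory H2 database over plain JDBC; it assumes the H2 driver is on the classpath, and the table and data are illustrative.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class CustomerRepositoryIntegrationTest {

    // Integration test: unlike a unit test, this exercises a real database call,
    // so it is slower but catches wrong assumptions about schema and SQL.
    @Test
    void savedCustomerCanBeReadBack() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100))");
            }
            try (PreparedStatement insert =
                         conn.prepareStatement("INSERT INTO customers (id, name) VALUES (?, ?)")) {
                insert.setInt(1, 1);
                insert.setString(2, "Ada");
                insert.executeUpdate();
            }
            try (PreparedStatement query =
                         conn.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
                query.setInt(1, 1);
                try (ResultSet rs = query.executeQuery()) {
                    rs.next();
                    assertEquals("Ada", rs.getString("name"));
                }
            }
        }
    }
}
```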

Functional Testing

Functional testing checks the functionality of the application: can I do this, and is it done correctly and efficiently? It is mostly driven through the user interface, which makes it a much slower step. Because it is slow, it is mainly used to validate a few very high-value pieces of functionality; applying it to low-value functionality would slow the whole process down. Other kinds of test automation can also be applied:

  • Performance
  • Security
  • Load

In DevOps, testing is not the duty of the tester alone; it is shared between the tester, the developer, and the operator. One tool, Selenium, is mentioned as useful across these testing strategies and is suitable for both microservices and DevOps.
