Understanding Cloud Native Applications
Before reading this article, we should be familiar with the term pattern. In layman's terms, a pattern is the art of capturing a good solution to a recurring problem in a conceptual form, so that the solution can then be applied to new use cases.
Types of Cloud-Native Architecture Patterns
- Foundation patterns: lay the groundwork for reactive, asynchronous communication between components in a cloud-native system.
- Boundary patterns: deal with the system's boundaries, i.e., the places where the system interacts with external actors such as humans or other systems.
- Control patterns: deal with the control flow for inter-component collaboration between boundary components.
In this blog, we are going to explore the Foundation Patterns required for creating isolated, bounded components: eliminating synchronous inter-component communication in favor of asynchronous communication, replication, and eventual consistency.
Why bounded isolated component?
A cloud-native system must recover quickly from human error (which can occur during development or deployment). To achieve this, we need to isolate components from each other, so that a fault in one component does not affect the functionality of the others, and so that during remediation we only have to address the particular faulty component.
Why Asynchronous communication?
With this mechanism, the calling service posts its request or data and then continues with other work instead of waiting for a response. Eliminating the wait decouples the execution of the two (or more) services.
The benefits of asynchronous communication:
- Better balancing of service capacity.
- Less risk of cascading failure.
- Better decoupling.
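A minimal sketch of this decoupling, using Python's asyncio (the queue and message contents here are purely illustrative, not from the article): the producer posts its request to a queue and carries on with other work, while the consumer picks the message up whenever it is ready.

```python
import asyncio

async def producer(queue: asyncio.Queue) -> list:
    """Post a request and keep working without waiting for a reply."""
    log = []
    await queue.put({"order_id": 42})   # fire-and-forget: no response awaited
    log.append("request posted")
    log.append("doing other work")      # the caller is not blocked on the consumer
    return log

async def consumer(queue: asyncio.Queue) -> dict:
    """Pick the message up whenever this component is ready."""
    return await queue.get()

async def main():
    queue: asyncio.Queue = asyncio.Queue()
    producer_log = await producer(queue)  # producer finishes before consumer runs
    msg = await consumer(queue)
    return producer_log, msg

producer_log, msg = asyncio.run(main())
print(producer_log)
print(msg)
```

The key point is that the producer's log is complete before the consumer ever runs: neither service's execution depends on the other being available at call time.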
Types of Cloud Native Foundation Patterns
Cloud-Native Databases per Component
As the name describes, we maximize the autonomy of each component by giving it its own dedicated database, ensuring a proper bulkhead. The database type may vary from microservice to microservice.
Why do we need Cloud-Native Databases per Component?
To support modern, high-performance, horizontally scalable systems and achieve global scalability. We also know that particular technologies work better with certain types of databases, so why not give each component the type of database that suits it best?
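As a toy illustration (the service and data names are invented, not from the article), database-per-component means each service owns its store outright, and other services can only reach that data through the owner's API, never through a direct database connection or join:

```python
class OrderService:
    """Owns its own store; no other component touches it directly."""
    def __init__(self):
        self._db = {}  # e.g. a document store, suited to order data

    def place_order(self, order_id, item):
        self._db[order_id] = {"item": item, "status": "placed"}

    def get_order(self, order_id):
        # the ONLY way other components can read order data
        return self._db.get(order_id)

class BillingService:
    """Keeps a separate store; collaborates via the owner's API."""
    def __init__(self, orders: OrderService):
        self._db = {}  # e.g. a relational store, suited to invoices
        self._orders = orders

    def invoice(self, order_id):
        order = self._orders.get_order(order_id)  # an API call, not a DB join
        self._db[order_id] = {"amount": 10, "for": order["item"]}
        return self._db[order_id]

orders = OrderService()
orders.place_order(1, "book")
billing = BillingService(orders)
inv = billing.invoice(1)
print(inv)
```

Because each `_db` is private to its service, one component's schema changes or failures cannot leak into another: that is the bulkhead.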
Event Streaming Architecture
What is Event Streaming?
It is a message-driven, publish-and-subscribe mechanism that leverages a fully managed streaming service to establish asynchronous communication between components: upstream components hand off processing to downstream components by publishing domain events, which are consumed downstream.
Note: An event-based system must be an append-only, distributed, shared database.
Why should we focus on Event Streaming?
To build an elastic, responsive, resilient, message-driven system efficiently. And in today's world of big data, event streaming changes the order of the whole analytics procedure: the stream is stored once, and queries and analyses can run over it in parallel. Two of the most popular tools for event streaming are Kafka and MapR Streams.
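A minimal in-memory stand-in for a managed stream such as Kafka (a real deployment would use a client library; the `Topic` class here is purely illustrative): publishers append domain events to an ordered, append-only log, and subscribed downstream handlers consume them.

```python
class Topic:
    """Append-only event stream: publishers append, subscribers consume."""
    def __init__(self):
        self.log = []            # append-only: events are never updated in place
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, event):
        self.log.append(event)            # the durable, ordered record
        for handler in self._subscribers: # push to downstream components
            handler(event)

# an upstream component publishes a domain event...
topic = Topic()
received = []
topic.subscribe(received.append)  # ...a downstream component consumes it
topic.publish({"type": "OrderPlaced", "order_id": 7})
print(received)
```

Note that the upstream publisher never calls the downstream component directly; it only appends to the topic, which matches the "append-only, distributed, shared database" framing above.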
Introduction to Event Sourcing
What is Event Sourcing?
This is an architectural pattern that ensures all changes to application state are stored as a sequence (stream) of events. With it, we not only store and query events, but can also use the event log to reconstruct past states.
There are basically two approaches to event sourcing:
- Event-first: in this approach, the command does not write any data to the database. Instead, it wraps the data in a domain event and publishes it to exactly one stream.
- Database-first: in this approach, the command writes the data to exactly one cloud-native database.
We can follow either approach, as our requirements dictate. The primary benefit of this solution is that it leverages the asynchronous mechanisms of value-added cloud services to accurately chain together atomic operations, achieving eventual consistency in near real time.
What is Data Lake?
It is generally a huge collection of all the data a particular company or organization gathers about its customers, operations, transactions, and more. The collected data can be structured or unstructured; the primary goal is to receive data regardless of its source or structure.
Benefits of Data Lake over Data warehouses
- Data Lakes Retain All Data
- Data Lakes Support All Data Types
- Data Lakes Support All Users
- Data Lakes Adapt Easily to Changes
- Data Lakes Provide Faster Insights
How can we use a Data Lake?
The use of and demand for Data Lakes are increasing day by day at the enterprise level, and understandably so: a Data Lake collects a huge amount of data on which we can run whatever processing we want. Some Data Lake uses are:
- Storage: store raw data regardless of their type.
- Analytics: It can be used by data scientists, data developers, and business analysts to access data with their choice of analytic tools and frameworks.
- Visualization: Analyzing is not enough we need to visualize the data for better understanding.
- Machine Learning: We can use the available data and train our machine to perform a specific task.
Stream Circuit Breaker
It keeps the performance of microservices stable by continuously checking for failures and providing an alternative service or an error message when one occurs.
Why do we need a Stream Circuit Breaker?
On the Internet, software systems routinely make remote calls to software running in different processes, often on different machines across a network. Sometimes a remote call fails or hangs without any response, which may result in cascading failures across multiple systems.
What should we do to overcome this problem?
This is where the stream circuit breaker comes in. We wrap a protected function call in a circuit breaker object, which continuously monitors it for failures.
Different States of Circuit Breaker:
- Closed: the circuit breaker remains in this state while everything is normal and calls pass through.
- Open: calls return an error immediately, without the function being executed.
- Half-Open: after a timeout period, the circuit switches to this state to test whether the underlying issue still exists.
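The three states can be sketched as a small wrapper class (a simplified illustration, assuming a failure-count threshold and an injectable clock for testability; real circuit-breaker libraries add more policy than this):

```python
import time

class CircuitBreaker:
    """Wraps a call and tracks failures: closed -> open -> half-open."""
    def __init__(self, max_failures=3, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # set when the breaker trips

    @property
    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.reset_timeout:
            return "half-open"  # timeout elapsed: allow one trial call
        return "open"

    def call(self, fn, *args):
        if self.state == "open":
            raise RuntimeError("circuit open: failing fast without calling")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures or self.state == "half-open":
                self.opened_at = self.clock()  # trip (or re-trip) the breaker
            raise
        self.failures = 0
        self.opened_at = None  # a successful call closes the circuit
        return result

# walk through the state transitions with a fake clock
now = [0.0]
cb = CircuitBreaker(max_failures=2, reset_timeout=10.0, clock=lambda: now[0])

def flaky():
    raise ConnectionError("remote call failed")

for _ in range(2):
    try: cb.call(flaky)
    except ConnectionError: pass
print(cb.state)            # open: further calls fail fast
now[0] += 10.0
print(cb.state)            # half-open: one trial call is allowed
result = cb.call(lambda: "ok")
print(cb.state)            # closed again after the successful trial
```

Failing fast in the open state is what stops a hung downstream service from tying up every caller and cascading the failure upstream.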
Since we are building a reactive, cloud-native system composed of bounded, isolated components that rely on event streaming for inter-component communication, a different approach to interface design is required. Each component needs to publish multiple interfaces:
- An asynchronous API for publishing events, such as when the component's state changes.
- A synchronous API for processing commands and queries.
- An asynchronous API for consuming the events emitted by other components.
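These three interfaces can be sketched together in one small example (all class and event names here are illustrative, not from the article): an upstream component exposes a synchronous command/query API and an asynchronous publishing API, while a downstream component exposes an asynchronous consuming API.

```python
class Bus:
    """Tiny in-memory stand-in for an event stream."""
    def __init__(self):
        self.handlers = []
    def subscribe(self, handler):
        self.handlers.append(handler)
    def publish(self, event):
        for handler in self.handlers:
            handler(event)

class Inventory:
    """Upstream component: synchronous command/query API plus event publishing."""
    def __init__(self, bus):
        self.bus = bus
        self.stock = {}

    # 1. synchronous API: commands and queries
    def restock(self, item, qty):                     # command
        self.stock[item] = self.stock.get(item, 0) + qty
        self._emit({"type": "Restocked", "item": item, "qty": qty})
    def level(self, item):                            # query
        return self.stock.get(item, 0)

    # 2. asynchronous API: publish events when state changes
    def _emit(self, event):
        self.bus.publish(event)

class Analytics:
    """Downstream component: consumes events emitted by other components."""
    def __init__(self, bus):
        self.seen = []
        bus.subscribe(self.on_event)

    # 3. asynchronous API: consume other components' events
    def on_event(self, event):
        self.seen.append(event["type"])

bus = Bus()
analytics = Analytics(bus)
inventory = Inventory(bus)
inventory.restock("widget", 5)
print(inventory.level("widget"))  # answered synchronously from local state
print(analytics.seen)             # learned about the change asynchronously
```

Note that `Analytics` never queries `Inventory` directly; it builds its own view from the event stream, which is exactly the bounded-isolated-component collaboration described above.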