
FinOps for Data Mesh and Data Fabric

Dr. Jagreet Kaur Gill | 30 August 2024

A FinOps Guide to Data Mesh and Data Fabric

Introduction

Data mesh is a data architecture that emphasizes the management of data by the teams that utilize it. In this approach, data ownership is decentralized to various business domains, such as finance, marketing, and sales.
The term "mesh" signifies the seamless integration and combination of data and data products from different domains. This architecture ensures that all the necessary infrastructure, software, and metadata are in place to deliver the required data to Data Consumers, thereby creating Data Products (DP).

Principles of Data Mesh

The core principles of data mesh fall into four categories:
  • Decentralized data ownership and architecture: Business domains manage their own analytical and operational data services.

  • Data as a product: The domain teams must think of other domains in the organization as consumers and support their needs.

  • Self-service data infrastructure as a platform: A dedicated infrastructure engineering team provides the tools each domain needs to consume data from other domains.

  • Federated governance: While organizations have a centralized data governance authority, they should also embed governance within the processes of each domain.
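
To make the first two principles concrete, here is a minimal sketch in Python of a domain-owned data product descriptor; all names (`DataProduct`, the ports, the policy labels) are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned data product, illustrating 'data as a product'."""
    name: str
    owner_domain: str  # decentralized ownership: finance, marketing, sales...
    output_ports: list = field(default_factory=list)       # interfaces other domains consume
    governance_policies: list = field(default_factory=list)  # federated-governance hooks

# The finance domain publishes a product that other domains can consume
revenue = DataProduct(
    name="monthly_revenue",
    owner_domain="finance",
    output_ports=["sql_view:revenue_v1", "parquet:s3://finance/revenue/"],
    governance_policies=["pii_masking", "cost_label:finance"],
)
print(revenue.owner_domain)  # each domain manages its own data services
```

The point of the sketch is that ownership and governance metadata travel with the product itself, rather than living in a central team's backlog.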


Working Example of a Data Mesh

  • The service domain receives data from various sources, including transactional and supply chain data.

  • Using this data, the team develops data models that are utilized by the Services team for analysis purposes.

  • If these models are made available by the Services team, other teams can also utilize them.

  • Multiple data models can be combined to support a specific use case, such as business intelligence, and a single model can be used for multiple use cases.

Why use Data Mesh?

Here are some of the benefits of using data mesh. Note that data mesh is not for everyone: organizations with small-scale data needs may not need it at all.
  • Faster data processing: Distributing your data pipeline across domains reduces the technical strain.

  • Compliance and security: The decentralized mesh model requires that access control be applied for each domain-oriented data product.

  • Cost efficiency: Mesh architectures are typically built on cloud infrastructure, which lowers operational costs.

What is Data Fabric?

Data Fabric is an advanced data integration framework that leverages metadata assets to harmonize, consolidate, and regulate data ecosystems. By harnessing continuous analytics on existing and inferred metadata assets, a data fabric enables the creation of a unified layer that seamlessly integrates data operations.

Working of Data Fabric

The working of a data fabric can be broken down into six components:


  • Augmented data catalog: Analyze various metadata types for contextual information.
  • Knowledge Graph: Create a formal depiction of data entity connections.
  • Metadata activation: Shift to automated metadata management using machine learning.
  • Recommendation engine: AI/ML algorithms provide data integration recommendations.
  • Data prep & ingestion: Support for ETL, application integration, and data virtualization.
  • DataOps: Ensure infrastructure meets IT and business user needs.
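
Two of these components can be illustrated with a toy sketch: an augmented catalog entry carrying contextual metadata, and a knowledge graph recording connections between data entities. All entity names and relations here are hypothetical:

```python
# Augmented data catalog: metadata records carrying contextual information
catalog = {
    "orders": {"domain": "sales", "format": "parquet", "tags": ["transactional"]},
    "customers": {"domain": "marketing", "format": "table", "tags": ["master-data"]},
}

# Knowledge graph: a formal depiction of data entity connections
edges = [("orders", "customers", "references")]

def related(entity):
    """Return entities connected to `entity` in the knowledge graph."""
    return ([(dst, rel) for src, dst, rel in edges if src == entity]
            + [(src, rel) for src, dst, rel in edges if dst == entity])

print(related("orders"))  # [('customers', 'references')]
```

In a real fabric, the catalog and graph would be populated automatically from harvested metadata; the sketch only shows the data structures the later components (recommendation engine, metadata activation) would reason over.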

Differences and similarities between Data Mesh and Data Fabric

Data mesh and data fabric share the goal of making data accessible across the organization, but they differ in emphasis: data mesh is an organizational approach that decentralizes data ownership to business domains, while data fabric is a technology-centric approach that builds a unified, metadata-driven integration layer.

FinOps in Data Mesh/Data Fabric and Cloud Environments

Organizations commonly lack automated governance: most internal regulation is a mix of guidelines without real enforcement, and the direct consequence is that resources and spending go unmonitored and uncontrolled.
FinOps aims to be a comprehensive financial discipline to manage cloud resources and associated costs.

Measuring Costs in the Data Mesh

As Data Mesh is the adoption of domain-driven ownership of data products, it is reasonable to collect metrics at the Data Product, Data Domain, and Data Platform levels.
This allows Data Owners and Data Domains to determine their own spending and highlight dependencies to determine chargeback policies.
The Data Platform team should have access to everyone’s costs and provide cost management capabilities for Data Domains and Data Products.

Decomposing Data Product’s costs

A data product’s cost can be broken down into the cost of building it and the cost of running it; the cost of running can be further broken down into the cost of production and the cost of consumption.

1. Cost of production
A live Data Product’s workloads generate data. The cost of production falls entirely on the Data Product Owner.

2. Cost of consumption
Data Consumers access data from resources owned by another Data Product or shared with the platform. A consumer Data Product must be charged back for its share of that consumption.

3. Cost attribution

Governance must ensure that resources carry specific labels so that costs can be aggregated by organization tag. Rolling up the costs of data products should then directly yield the cost per domain.

4. Economic Units

Resources owned by a specific data product can be attributed to it directly. Shared resources, however, must be charged back, which requires unit economics.

  • Cost attribution means that a data product can refer directly to the cost consumption provided by the cloud provider.

  • Chargeback implies that we must collect information regarding resource consumption and formulate a link between resources and costs.
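
The decomposition above can be sketched in a few lines of Python. The billing records, labels, and consumption figures are hypothetical: resources with a domain label are attributed directly, while a shared resource is charged back in proportion to each domain's consumption:

```python
from collections import defaultdict

# Hypothetical cloud billing records: (resource, cost, label)
billing = [
    ("warehouse-finance", 120.0, "domain:finance"),  # owned -> direct attribution
    ("pipeline-sales", 80.0, "domain:sales"),
    ("shared-query-engine", 100.0, "shared"),        # shared -> chargeback needed
]

# Consumption metrics for the shared resource (e.g. query-hours per domain)
consumption = {"finance": 30, "sales": 10}

costs = defaultdict(float)
for resource, cost, label in billing:
    if label.startswith("domain:"):
        costs[label.split(":", 1)[1]] += cost       # cost attribution via tags
    else:
        total = sum(consumption.values())
        for domain, units in consumption.items():   # unit-economics chargeback
            costs[domain] += cost * units / total

print(dict(costs))  # {'finance': 195.0, 'sales': 105.0}
```

Finance is charged 120 directly plus 75 (30 of 40 query-hours) of the shared engine; sales is charged 80 plus the remaining 25.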

Observability and monitoring

Resource monitoring is of the utmost importance in data platforms. In cost monitoring, we must ensure we collect all the metrics necessary to map costs to resources.

This requires modeling the right instrumentation of the services, meaning that the platform team must make shared platform services observable.

The monitoring architecture must be integrated with a cost management tool. In the context of FinOps, this is called Data Ingestion.



The following points explain the methodology for monitoring and ensuring FinOps in the case of data mesh.

  • Collect resource consumption metrics

  • Compute cost consumption from these metrics; this applies to shared resources as well

  • Cost management tools can ingest data and compute the right cost attribution starting from Resource Efficiency Unit Costs
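
The three steps above can be sketched with hypothetical numbers: collect consumption metrics per data product, then derive costs from per-unit rates (the Resource Efficiency Unit Costs) in a form a cost management tool could ingest:

```python
# Step 1: collected resource consumption metrics (units per data product)
metrics = {
    "dp_orders": {"cpu_hours": 40, "gb_stored": 500},
    "dp_forecast": {"cpu_hours": 10, "gb_stored": 50},
}

# Step 2: unit costs per metric; these rates apply to shared resources as well
unit_costs = {"cpu_hours": 0.05, "gb_stored": 0.02}

# Step 3: cost attribution computed from the metrics and unit costs
attribution = {
    product: sum(units * unit_costs[metric] for metric, units in usage.items())
    for product, usage in metrics.items()
}
print(attribution)  # {'dp_orders': 12.0, 'dp_forecast': 1.5}
```

The essential point is that the platform only needs to instrument consumption once; multiplying by unit costs turns any metric stream into an attributable cost stream.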

Reporting and analytics

Cost analytics can be divided by domains and sub-domains. The data platform team should build these analytics as Data Products, enabling reasoning about cost management concerns.

Data Product Analytics

Data Products can rely on multiple existing Data Products. A cost data model can likewise draw on information associated with a Data Product for Shared Services and a Data Product for Domains.

Data Platform Analytics

The Data Platform Team can analyze a lot of information such as Spending by tools, Spending by domains, Spending by resource type, and Forecast of cost spending. Through this analysis, several optimizations can be addressed.

  • Procurement: They could negotiate better rates with cloud and service providers, or adopt better or equivalent tools.

  • Finance: They could be informed through KPIs that determine whether costs are growing at the same pace as the value delivered by the data, benchmarking cost efficiency.

  • Engineering: They should be able to optimize internal costs, correlate resource utilization with cost consumption, and act on improvements.
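
As an illustration of such platform analytics, hypothetical spend records can be aggregated along any dimension, and a naive linear trend gives a first forecast of cost spending (all figures invented for the sketch):

```python
from collections import defaultdict

# Hypothetical spend records for the current month: (domain, tool, cost)
spend = [
    ("finance", "warehouse", 300.0),
    ("sales", "warehouse", 150.0),
    ("sales", "bi-tool", 50.0),
]

def spending_by(dimension):
    """Aggregate spend by 'domain' (field 0) or 'tool' (field 1)."""
    idx = {"domain": 0, "tool": 1}[dimension]
    out = defaultdict(float)
    for record in spend:
        out[record[idx]] += record[2]
    return dict(out)

print(spending_by("tool"))  # {'warehouse': 450.0, 'bi-tool': 50.0}

# Naive forecast: extrapolate the average month-over-month change
history = [400.0, 450.0, 500.0]  # total spend over the last three months
delta = (history[-1] - history[0]) / (len(history) - 1)
print(history[-1] + delta)  # 550.0
```

A real platform would use proper time-series forecasting; the sketch only shows that the same spend records can answer the procurement, finance, and engineering questions above.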

Why chargeback?

Charging back triggers a mechanism of cost optimization and negotiation that improves the data value chain.
Consumers will fight to spend less, forcing producers to design cost-effective architecture. Thus, producers and consumers will try to minimize misuse.
When it comes to FinOps, while showback is necessary to create awareness, chargeback is the real essence of data value chain optimization. This is because showback does not involve moving accountabilities, while chargeback does.


This new tension between the IT department and the data domains strikes the right balance between spending money and keeping the system efficient.

Data Governance

The Data Mesh initiative incorporates federated governance, in which cost consumption is divided across domains.


A Data Mesh Platform should provide monitoring over cloud spending policies.


The Data Mesh Platform should embed a federated computational governance platform, which can, for instance:

  • Establish the maximum cost of each use case

  • Prevent expensive data products from being deployed (cost avoidance)

  • Use Everything-as-Code practices (e.g. Infrastructure as Code) to automatically estimate cost consumption

  • Detect unexpected behaviors at run-time
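
A minimal sketch of such a computational-governance check: an estimated cost (as could be produced from Everything-as-Code templates) is compared against the use case's maximum budget before deployment is allowed. The use-case names and thresholds are hypothetical:

```python
# Hypothetical per-use-case budgets established by federated governance
MAX_COST_PER_USE_CASE = {"business-intelligence": 500.0, "ml-forecasting": 1000.0}

def approve_deployment(use_case, estimated_monthly_cost):
    """Cost-avoidance gate: block data products whose estimated cost exceeds budget."""
    budget = MAX_COST_PER_USE_CASE.get(use_case)
    if budget is None:
        return False, "no budget established for this use case"
    if estimated_monthly_cost > budget:
        return False, f"estimated cost {estimated_monthly_cost} exceeds budget {budget}"
    return True, "approved"

print(approve_deployment("business-intelligence", 450.0))     # (True, 'approved')
print(approve_deployment("business-intelligence", 800.0)[0])  # False
```

In practice this check would run in the deployment pipeline, with run-time monitoring covering the last bullet (detecting unexpected behavior after deployment).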

Conclusion

In a data mesh or data fabric, managing costs involves collecting metrics at different levels, such as data product, domain, and platform. Analyzing data product costs includes examining expenses for development and operations. FinOps relies on observability and monitoring to track resource consumption, calculate costs, use cost management tools, and generate reports. Chargeback in a data mesh optimizes the value chain and improves system efficiency. Implementing FinOps practices and strong data governance are crucial for cost management in a data mesh environment.
