Enterprise AI Platform on Kubernetes

Introduction

In this blog post, we discuss how Kubernetes can practically address the challenges of adopting artificial intelligence in the enterprise. Enterprise AI platforms are gaining popularity among organizations that want their AI efforts to run smoothly. Machine-learning algorithms have progressed rapidly, especially through the development of deep learning and reinforcement learning based on neural networks, and container technologies have matured alongside them. Yet a gap remains between containers and ML algorithms, leading to the following challenges.

  • Intelligent ETL and Workflows
  • Security
  • Data Access Management
  • I/O Processes
  • Cost of operations
  • Productionizing the ML/DL Models
  • Continuous Delivery for ML/DL

Kubernetes has become the de facto standard for managing containerized architectures at cloud-native scale, and AI and Kubernetes are a natural match. Defining one standard pipeline and toolkit for orchestrating containers gives an organization much-needed clarity.

The goal of Kubernetes for AI is to make scaling models and deploying them to production as easy as possible.

Requirements for Integrating Enterprise AI Infrastructure

The versatility in how models are developed, and in the architectures behind them, makes scaling a challenging job: everyone does everything in a different way. Models are not interoperable, and they are built with numerous libraries in many different versions. There is a need for one standard approach that is agnostic of:

  • Framework
  • Language
  • Infrastructure
  • Library

Enterprise AI Infrastructure Platform Capabilities

Software Engineering Practices

  • Test-driven development is as important in AI as anywhere else.
  • Health checks and resiliency, as per SRE principles (see the probe sketch after this list).
  • Framework interoperability, using packaging tools like ONNX and MLeap.
  • Addressing the storage and disk I/O challenges caused by data ingestion, outdated technology, or virtualized storage.
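
As a minimal sketch of the health-check practice above, the Deployment fragment below adds liveness and readiness probes to a hypothetical model-serving container; the image name, port, and endpoint paths are assumptions, not something the post prescribes:

```yaml
# Hypothetical model-serving Deployment; image, port, and paths are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: model-server
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
        - name: model-server
          image: registry.example.com/model-server:1.0   # assumed image
          ports:
            - containerPort: 8080
          livenessProbe:             # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:            # keep the pod out of load balancing until the model is loaded
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```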

Build Your Own DataOps Platform using Kubernetes

Enterprises are protective of their data, and for them security is paramount, so becoming an AI-powered enterprise requires a custom-made data platform. Moreover, every enterprise has its own kind of data and its own needs around it. The number of teams involved makes it difficult to keep security and compliance in place while still giving teams the freedom to run experiments and discover data. An enterprise data strategy should spell out how data will be shared securely across teams. Big data applications are evolving and have become much friendlier to Kubernetes.

Data Access Management

Capturing all events happening across the data and AI platform means capturing everything from infrastructure events to application events. A data lineage service should be able to capture events from ETL jobs, querying engines, and data-processing jobs. There should also be Data Lineage as a Service, which any other service that needs lineage can integrate with.
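
To make the idea concrete, here is a minimal sketch of what a lineage event emitted by an ETL job to such a service might look like. Every field name here is an illustrative assumption rather than a defined schema (projects such as OpenLineage define a real one):

```yaml
# Illustrative lineage event payload; all field names are assumptions.
eventType: JOB_COMPLETE
eventTime: "2019-06-01T12:00:00Z"
producer: etl/customer-ingest        # the ETL job emitting the event
inputs:
  - dataset: raw.customer_events
    version: "42"
outputs:
  - dataset: curated.customer_profile
    version: "43"
run:
  id: 7f3c9a                         # correlates lineage records for one execution
  engine: spark
```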

Data Catalog

With multi-cloud and cloud bursting in mind, stored data should be under version control, and version control for data should be as good as version control for software code. Data version control should be built on top of object storage that can be integrated with existing services. MinIO is one such service: it is S3-compatible and provides the same APIs.
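
As a sketch, MinIO can be run on the cluster itself. The manifest below is deliberately minimal; the credentials and in-container storage are assumptions, and a real deployment would use Secrets and PersistentVolumes:

```yaml
# Minimal MinIO sketch; use Secrets and PersistentVolumes in practice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: ["server", "/data"]
          env:
            - name: MINIO_ROOT_USER         # assumed demo credentials only
              value: admin
            - name: MINIO_ROOT_PASSWORD
              value: change-me
          ports:
            - containerPort: 9000           # S3-compatible API endpoint
---
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  selector:
    app: minio
  ports:
    - port: 9000
      targetPort: 9000
```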

Multi-Cloud AI Infrastructure

This infrastructure should meet the following high-level infrastructure requirements:

  • Large compute power to train models in less time
  • High-performance storage to handle large datasets
  • Seamlessly and independently scale compute and storage
  • Handle varying types of data traffic
  • Optimize costs

Choosing Kubernetes for your data science stack lets you lift and shift across multiple cloud vendors, so there is no vendor lock-in in your next AI project. An ideal multi-cloud setup determines which cloud provider can run a model most efficiently in terms of cost and performance.

Federated Kubernetes on Multi-Cloud

Models as Cloud Native Containers

Microservices in general are very beneficial when a complex stack runs workloads from a variety of business domains that talk to each other in a service mesh, and they suit the model development and operations lifecycle particularly well; the benefits are immense.

Portability

Kubernetes eases the pain of configuration management. Once the ML system is containerized, the infrastructure can be written as code that runs on Kubernetes, and Helm can be used to create curated applications and version-controlled infrastructure code.
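
For example, a containerized model server can be packaged as a Helm chart so that the whole release is version-controlled. The chart name, registry, and values below are assumptions for illustration:

```yaml
# Chart.yaml -- a hypothetical chart for a model-serving microservice
apiVersion: v2
name: model-server
description: Curated chart for serving an ML model
version: 0.1.0        # chart version, tracked in Git alongside the code
appVersion: "1.0.0"   # model/image version this chart deploys
---
# values.yaml -- environment-specific knobs kept as version-controlled code
image:
  repository: registry.example.com/model-server   # assumed registry
  tag: "1.0.0"
replicaCount: 3
resources:
  limits:
    cpu: "1"
    memory: 2Gi
```

With this in place, `helm install` and `helm upgrade` turn every environment change into a reviewable diff in Git.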

Agile Training at Scale

Kubernetes provides agile training at scale: training jobs run as containers that can be scheduled onto a variety of node types, including GPU-accelerated ones, and Kubernetes manages the scheduling of these workloads.
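
A hedged sketch of such a training job: the Job below requests a GPU through the NVIDIA device plugin resource and pins itself to GPU nodes with a node selector. The image, command, and node label are assumptions:

```yaml
# Training job sketch; assumes the NVIDIA device plugin runs on GPU nodes.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model
spec:
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        accelerator: nvidia-gpu      # assumed label on the GPU node pool
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest   # assumed training image
          command: ["python", "train.py"]
          resources:
            limits:
              nvidia.com/gpu: 1      # schedule onto a node with a free GPU
```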

Experimentation & Cost of Operations

The dynamic nature of artificial intelligence requires a lot of research, and if running experiments costs too much, development lags behind. Running Kubernetes on commodity hardware on-premises makes very low-cost experiments possible.

Cloud Bursting for AI

Cloud bursting is an application deployment model that is especially relevant to AI workloads, which require a lot of experimentation. In cloud bursting, a workload deployed in an on-premises data center scales out into a cloud whenever there is a spike.
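
On Kubernetes, the in-cluster half of this pattern is ordinary autoscaling. As a sketch: a HorizontalPodAutoscaler adds replicas during a spike, and a cluster autoscaler (not shown) can satisfy the extra pods by joining cloud nodes to the cluster. The names and thresholds below are assumptions:

```yaml
# HPA sketch: scale the workload out on CPU spikes; cloud nodes absorb the burst.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: experiment-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: experiment-api        # assumed deployment name
  minReplicas: 2                # baseline served on-premises
  maxReplicas: 20               # burst capacity, backed by cloud nodes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```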

Leverage your On-Prem for Experimentation

Customized Hardware

Customized hardware is needed for running deep learning and machine learning systems in production because, on virtualized storage or compute, performance degradation can have a huge impact, and important anomalies may be missed due to performance issues.

Serverless For AI on Kubernetes using Knative

Knative on Kubernetes aims to automate the complex parts of the build, train, and deploy pipeline. It is an end-to-end framework for the following:

Promoting an Eventing Ecosystem in the Build Pipeline

Knative Eventing is the framework that pulls external events from various sources such as GitHub, GCP Pub/Sub, and Kubernetes events. Once the events are pulled, they are dispatched to event sinks. A commonly used sink is a Knative Service, an internal endpoint running an HTTP(S) service.
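
As a minimal sketch of this source-to-sink flow, the manifest below uses a PingSource rather than the GitHub or Pub/Sub sources named above, since it needs no external credentials; the payload and the sink Service name are assumptions:

```yaml
# Knative eventing sketch: a source fires CloudEvents and dispatches them to a sink.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: periodic-trigger
spec:
  schedule: "*/10 * * * *"            # fire every 10 minutes
  data: '{"job": "refresh-features"}' # assumed event payload
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler             # assumed Knative Service acting as the sink
```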

Using Functions to Deploy Models

Functions are useful for handling independent events without maintaining a complex unified infrastructure: you focus on a single task that can be executed and scaled automatically and independently. Knative can be used to deploy simple models this way. The Pub/Sub eventing source, for example, can use APIs to pull events periodically, post them to an event sink, and delete them.
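
A hedged sketch of deploying a simple model as a Knative Service follows; the container image and environment variable are assumptions. Knative scales the revision with traffic, including down to zero when it is idle:

```yaml
# Knative Service sketch for a simple model; image and env are assumptions.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sentiment-model
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/sentiment:0.1   # assumed model image
          ports:
            - containerPort: 8080                     # HTTP prediction endpoint
          env:
            - name: MODEL_PATH                        # assumed configuration knob
              value: /models/sentiment
```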

AI Assembly Lines with Knative

There are three main stages in the pipeline and workflow for models: build, train, and serve. Knative supports the full assembly line from build to deploy, embracing current CI tools and making them cloud- and framework-agnostic. It also routes and manages traffic with blue/green or canary deployments and binds running services to the eventing ecosystem.

Build

Model development happens in this stage. Data scientists must have the liberty to choose their own workbench for building the model. With Kubernetes, provisioning workbenches for data scientists becomes straightforward: workbenches like Jupyter, Zeppelin, or RStudio can be installed on the Kubernetes cluster rather than on a laptop. This makes ad hoc experimentation much faster and can even support GPU acceleration during model building.
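
For instance, a disposable Jupyter workbench can be provisioned as a pod on the cluster. This is a minimal sketch using the public jupyter/minimal-notebook image; a real setup would add authentication, persistent storage, and optionally a GPU request:

```yaml
# Disposable data-science workbench sketch; add auth and storage in practice.
apiVersion: v1
kind: Pod
metadata:
  name: workbench-alice               # assumed per-user naming scheme
spec:
  containers:
    - name: jupyter
      image: jupyter/minimal-notebook # public community image
      ports:
        - containerPort: 8888         # Jupyter's default port
      resources:
        limits:
          cpu: "2"
          memory: 4Gi
          # nvidia.com/gpu: 1         # uncomment for GPU-accelerated experiments
```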

Train

Model training is an essential stage of any project and the most time-consuming one. With the help of Kubernetes, frameworks like TensorFlow provide great support for running distributed training jobs as containers, avoiding wasted time and speeding up training. Online training should also be supported efficiently by the architecture.
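
A hedged sketch of a distributed TensorFlow training job, assuming the Kubeflow training operator is installed on the cluster; the image and replica counts are assumptions:

```yaml
# Distributed training sketch; requires the Kubeflow TFJob operator.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: distributed-train
spec:
  tfReplicaSpecs:
    PS:                              # parameter servers hold shared model state
      replicas: 1
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: tensorflow       # the operator expects this container name
              image: registry.example.com/tf-train:0.1   # assumed image
    Worker:                          # workers compute gradients in parallel
      replicas: 3
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: tensorflow
              image: registry.example.com/tf-train:0.1
```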

Serve

In terms of serving the model, Kubernetes makes it possible to add the following essential features to ML/DL serving:

  • Logging
  • Distributed Tracing
  • Rate Limiting
  • Canary Updates (sketched below)
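
Canary updates, for example, map directly onto Knative's traffic splitting. The sketch below sends 10% of requests to a newly deployed model revision; the revision name and image are assumptions:

```yaml
# Canary rollout sketch: Knative splits traffic between two revisions.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sentiment-model
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/sentiment:0.2   # assumed new model version
          ports:
            - containerPort: 8080
  traffic:
    - revisionName: sentiment-model-00001   # assumed name of the stable revision
      percent: 90
    - latestRevision: true                  # the new canary revision
      percent: 10
```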

CD4ML – Continuous Delivery for Machine Learning

CD4ML with a GitOps flow is Kubernetes-friendly for continuously deploying microservices. Models are stored as Docker images in an image registry, which keeps a tagged image for every code change and checkpoint. The registry thus acts as an artifact repository in which all images, and therefore the models themselves, are version controlled.
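
The post does not name a specific GitOps tool, so as one hedged example, an Argo CD Application can continuously sync the model-serving manifests from Git to the cluster; the repository URL, path, and namespaces are assumptions:

```yaml
# GitOps sketch with Argo CD; repo URL and path are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: model-serving
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/ml-deployments   # assumed manifests repo
    targetRevision: main
    path: serving                                        # assumed manifest directory
  destination:
    server: https://kubernetes.default.svc
    namespace: ml-prod
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```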

What Next?

To assist you on your Enterprise AI Platform on Kubernetes journey of discovery, we recommend reading our use case “AIOps for Monitoring Kubernetes and Serverless”.

To learn more about the Enterprise AI Platform, get in touch with us.

