Machine Learning Operations (MLOps): Deploy at Scale
Alex Cattle
on 10 September 2019
Tags: artificial intelligence, devops, Kubeflow, kubernetes, machine learning, Ubuntu

Artificial Intelligence and Machine Learning adoption in the enterprise is exploding, from Silicon Valley to Wall Street, with use cases ranging from analysing customer behaviour and purchase cycles to diagnosing medical conditions.
Following on from our ‘Getting started with AI’ webinar, this session dives into what success looks like when training and deploying machine learning models at scale. The key topics are:
- Automatic workflow orchestration
- ML pipeline development
- Kubernetes / Kubeflow integration
- On-device machine learning, edge inference and model federation
- On-prem to cloud, on-demand extensibility
- Scale-out model serving and inference
The webinar details recent advancements in each of these areas and offers actionable insights that viewers can apply to their own AI/ML efforts.
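To give a flavour of the ML pipeline development and Kubeflow integration topics above, here is a minimal sketch of a two-step pipeline written with the Kubeflow Pipelines SDK (kfp, v1-style API). The container images, script names and output path are placeholder assumptions for illustration, not material from the webinar.

```python
# A minimal Kubeflow Pipelines (kfp v1) sketch: train a model in one
# container, then hand the result to a serving step. Images and scripts
# below are placeholders.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-and-serve",
    description="Train a model, then push it to a model server.",
)
def train_and_serve(epochs: int = 5):
    # Step 1: training runs as its own Kubernetes pod.
    train = dsl.ContainerOp(
        name="train",
        image="example.com/ml/train:latest",       # placeholder image
        command=["python", "train.py"],
        arguments=["--epochs", epochs],
        # The training script writes the model URI to this file; kfp
        # exposes the file's contents as the step's "model" output.
        file_outputs={"model": "/tmp/model_uri.txt"},
    )

    # Step 2: serving starts only after training succeeds, because it
    # consumes the training step's output.
    dsl.ContainerOp(
        name="serve",
        image="example.com/ml/serve:latest",       # placeholder image
        command=["python", "deploy.py"],
        arguments=["--model-uri", train.outputs["model"]],
    )


if __name__ == "__main__":
    # Compile to a workflow definition that can be uploaded to a
    # Kubeflow Pipelines instance running on Kubernetes.
    kfp.compiler.Compiler().compile(train_and_serve, "train_and_serve.yaml")
```

Once uploaded, Kubeflow schedules each step as a pod on the cluster, which is what makes the same pipeline portable from an on-prem Kubernetes deployment to a public cloud.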