Sara Gawlinski

Fine-Tuning LLMs On-Prem with Union and Platform9 at KubeCon 2023

It’s KubeCon season, and cloud-native enthusiasts will flock to Chicago November 6-9 to hear about the latest innovations in all things Kubernetes.

Not surprisingly, one of the major themes at this year’s event is AI. Attendees interested in the topic can attend the co-located event, Kubernetes AI + HPC, as well as sessions that cover AI from different angles, including GenAI, AI workload optimization, AI ethics and the future of AI. 

Union and Platform9 team up for one-click AI pipelines

This year, Union is excited to share that we are collaborating with Platform9 to showcase our joint technology in a new demo. Platform9 offers a managed Kubernetes solution; paired with Union, it gives you one-click deployment of a Kubernetes cluster plus a fabric for building declarative, reproducible and reliable AI pipelines. This integration can reduce your operational costs and accelerate data processing and model development, all while running securely within your on-prem infrastructure.

Bringing fine-tuning on-prem 

The KubeCon demo will show how you can quickly provision a Kubernetes cluster with Platform9, then build a fine-tuning workflow that publishes Large Language Models (LLMs) to the Hugging Face Hub (for online inference) and a workflow on Union (for batch inference). While you can use Union by itself to run these workflows in AWS or GCP, your organization may choose to keep models and data on-prem due to data privacy and latency requirements. This is where the integration between Union and Platform9 can help.
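
To give a sense of what such a pipeline looks like, here is a minimal sketch of a Flyte-style fine-tuning workflow of the kind the demo describes. This is not the demo code itself: the task names, parameters, and the example Hugging Face repo ID are hypothetical, and the training step is elided.

```python
# Minimal sketch of a fine-tune-and-publish workflow using flytekit and
# huggingface_hub. Task names, parameters, and the repo ID are illustrative.
from flytekit import task, workflow


@task
def fine_tune(base_model: str, dataset_path: str) -> str:
    """Fine-tune the base model on the dataset and return a local model directory."""
    # Training logic (e.g. a LoRA/PEFT run on on-prem GPUs) would go here.
    output_dir = "/tmp/finetuned-model"
    return output_dir


@task
def publish_to_hub(model_dir: str, repo_id: str) -> str:
    """Push the fine-tuned model to the Hugging Face Hub for online inference."""
    from huggingface_hub import HfApi

    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)
    api.upload_folder(folder_path=model_dir, repo_id=repo_id)
    return repo_id


@workflow
def finetune_and_publish(base_model: str, dataset_path: str, repo_id: str) -> str:
    model_dir = fine_tune(base_model=base_model, dataset_path=dataset_path)
    return publish_to_hub(model_dir=model_dir, repo_id=repo_id)
```

Because the workflow is declarative, the same definition can run against a Platform9-managed cluster on-prem or a cloud cluster, which is what makes the keep-your-data-local scenario straightforward.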

If you want to catch the demo, stop by Platform9’s KubeCon booth (B32) to check out the schedule and ask how you can accelerate your AI projects with Union and Platform9 Managed Kubernetes.
