/ better AI pipelines by design

Infrastructure for AI, ML & Data

For developers managing AI, ML, and data workflows in production, the challenges extend well beyond scheduling and orchestrating DAGs. Union.ai addresses these complexities by offering a comprehensive infrastructure management platform designed for the nuances of such environments.

Union optimizes resources across teams and implements cost-effective strategies that can reduce expenses by up to 66%. Moreover, it's engineered to fit within your own cloud ecosystem, ensuring a robust and tailored infrastructure that scales with your technical demands.

View product
/ Union: just bring your compute, we bring Flyte

Powerful DAGs, observability & cost-efficient engineering

Union is a fully-managed Flyte platform deployed in your VPC that provides a single-endpoint workflow orchestration and compute service to engineers building data and ML products.

Get built-in dashboards, live logging, and task-level resource monitoring to pinpoint resource bottlenecks and simplify debugging, for optimized infrastructure and faster experimentation.

Get a demo
/ from engineers for engineers

AI engineering for engineers

Union is an open AI orchestration platform that simplifies AI infrastructure so you can develop, deploy, and innovate faster. Unlike popular—but simple—AI engineering orchestrators, Union wrangles the infrastructure setup and management as well.

Write your code in Python, collaborate across departments, and enjoy full reproducibility and auditability. Union lets you focus on what matters.

Explore docs
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

from flytekit import task, workflow

@task
def get_data() -> pd.DataFrame:
    return load_digits(as_frame=True).frame

@task
def train_model(data: pd.DataFrame) -> MLPClassifier:
    features = data.drop("target", axis="columns")
    target = data["target"]
    return MLPClassifier().fit(features, target)

@workflow
def training_workflow() -> MLPClassifier:
    data = get_data()
    return train_model(data=data)

Write your Python code locally, execute it remotely

Enjoy the freedom to write Python code that runs both locally and remotely in your Kubernetes cluster. Take full advantage of parallelization across all Kubernetes nodes without writing Dockerfiles or YAML.

1. Run Local
2. Scale Remote
3. Deploy
import transformers as tr
from datasets import load_dataset

from flytekit import task
from flytekit.types.directory import FlyteDirectory

@task
def train(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
) -> FlyteDirectory:

    # authenticate

    # load the dataset, model, and tokenizer
    dataset = load_dataset(dataset_id, dataset_name)
    model = tr.AutoModelForCausalLM.from_pretrained(model_id, ...)
    tokenizer = tr.AutoTokenizer.from_pretrained(model_id, ...)

    # prepare dataset
    dataset = dataset["train"].shuffle().map(tokenizer, ...)

    # define and run the trainer
    trainer = tr.Trainer(model=model, train_dataset=dataset, ...)
    print("Training model")
    trainer.train()

    # save and return model directory
    output_path = "./model"
    print("Saving model")
    trainer.save_model(output_path)
    return FlyteDirectory(path=output_path)
import transformers as tr
from datasets import load_dataset

from flytekit import task, ImageSpec, Resources
from flytekit.types.directory import FlyteDirectory

image_spec = ImageSpec(
    env={"VENV": "/opt/venv"},
    # ...
)

@task(
    container_image=image_spec,
    requests=Resources(mem="100Gi", cpu="32", gpu="8"),
)
def train(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
) -> FlyteDirectory:
    ...
import huggingface_hub as hh

from flytekit import task, workflow
from flytekit.types.directory import FlyteDirectory

@task
def train(...) -> FlyteDirectory:
    ...

@task
def deploy(model_dir: FlyteDirectory, repo_id: str) -> str:
    # upload readme and model files
    api = hh.HfApi()
    repo_url = api.create_repo(repo_id, exist_ok=True)
    readme = "..."
    # ...
    return str(repo_url)

@workflow
def train_and_deploy(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
    repo_id: str,
) -> str:
    model_dir = train(
        model_id=model_id,
        dataset_id=dataset_id,
        dataset_name=dataset_name,
    )
    return deploy(model_dir=model_dir, repo_id=repo_id)
$ pyflyte run llm_training.py train \
    --model_id EleutherAI/pythia-70m \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name sample

Running Execution on local.
Map: 100%|████████████| 1050391/1050391
Training model
{'train_runtime': 4.5401, ...}
100%|███████████████████| 100/100
Saving model

$ ls /var/folders/4q/frdnh9l10h53gggw1m59gr9m0000gp/T/flyte-f2qjyme6/raw/a888e295fefbdae4023ec2b35e53edcb

$ pyflyte run --remote llm_training.py train \
    --model_id meta-llama/Llama-2-7b-hf \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name default

Running Execution on Remote.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.

[✔] Go to https://org.unionai.cloud/console/projects/flytesnacks/domains/development/executions/fe661d1127e84438bb8e to see execution in the console.
$ pyflyte run --remote llm_training.py train_and_deploy \
    --model_id meta-llama/Llama-2-7b-hf \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name default \
    --repo_id unionai/Llama-2-7b-hf-finetuned

Running Execution on Remote.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.

[✔] Go to https://org.unionai.cloud/console/projects/flytesnacks/domains/development/executions/fe661d1127e84438bb8e to see execution in the console.
/ the better replacement for Airflow and Kubeflow

Purpose-built for lineage-aware pipeline orchestration

Bring your own Airflow code (BYOAC) and take advantage of modern AI orchestration features, out of the box! Get full reproducibility, auditability, experiment tracking, cross-team task sharing, compile-time error checking, and automatic artifact capture.

Explore features

Versioning

Easily experiment and iterate in isolation with versioned tasks and workflows.


Multi-tenancy

A centralized infrastructure for your team and organization enables multiple users to share the same platform while maintaining their own distinct data and configurations.

Type checking

Strongly typed inputs and outputs simplify data validation and surface incompatibilities between tasks, making it easier to identify and troubleshoot errors before launching the workflow.
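The idea can be sketched in plain Python (a hypothetical check, not Flyte's type engine): each task declares a typed interface, and mismatched wiring is flagged before anything runs.

```python
from typing import get_type_hints

# two hypothetical tasks with typed interfaces
def get_data() -> dict:
    return {"target": [0, 1]}

def train_model(data: list) -> str:  # expects a list: a wiring mistake
    return "model"

def check_compatible(producer, consumer, param: str) -> bool:
    # compare the producer's return annotation to the consumer's parameter annotation
    out_t = get_type_hints(producer)["return"]
    in_t = get_type_hints(consumer)[param]
    return out_t is in_t or (isinstance(out_t, type) and issubclass(out_t, in_t))

# the mismatch surfaces before any task executes
print(check_compatible(get_data, train_model, "data"))  # False
```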


Caching

Caching the output of task executions can accelerate subsequent executions and prevent wasted resources.

Data lineage

As a data-aware platform, Union tracks the lineage of every artifact, simplifying rollbacks and error tracking.
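A minimal illustration of data awareness (hypothetical helper names, not Union's API): record the inputs and an output digest for every task run, so any artifact can be traced back to what produced it.

```python
import hashlib
import json

lineage: list = []

def run_tracked(task_name: str, fn, **inputs):
    # hypothetical tracker: log inputs and an output digest per task run
    output = fn(**inputs)
    digest = hashlib.sha256(
        json.dumps(output, sort_keys=True, default=str).encode()
    ).hexdigest()
    lineage.append({"task": task_name, "inputs": inputs, "output_sha256": digest})
    return output

raw = run_tracked("extract", lambda: [1, 2, 3])
clean = run_tracked("transform", lambda xs: [x * 2 for x in xs], xs=raw)

# every artifact now points back to the task and inputs that produced it
print([entry["task"] for entry in lineage])  # ['extract', 'transform']
```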


Immutability

Immutable executions help ensure reproducibility by preventing any changes to the state of an execution.
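The guarantee is analogous to a frozen record in Python; this toy `Execution` class is illustrative only, not Union's data model.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Execution:
    # hypothetical record of a run: frozen, so its state can never be edited
    workflow: str
    version: str

run = Execution(workflow="training_workflow", version="v3")

try:
    run.version = "v4"       # any attempt to mutate the record fails
except FrozenInstanceError:
    print("immutable")       # prints "immutable"
```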


Recover from failures

Rerun only the failed tasks in a workflow to save time and resources, and debug more easily.


Human-in-the-loop

Enable human intervention to supervise, tune, and test workflows, resulting in improved accuracy and safety.

Intra-task checkpointing

Checkpoint progress within a task execution in order to save time and resources in the event of task failure.
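A toy sketch of the idea (not Flyte's checkpoint API): persist loop progress outside the task body so a retry resumes where the failure occurred instead of starting over.

```python
# progress persisted outside the task body (in Flyte this would live in object storage)
checkpoint = {"step": 0, "total": 0}

def long_task(n: int, fail_at: int = -1) -> int:
    # resume from the last checkpointed step rather than from zero
    for step in range(checkpoint["step"], n):
        if step == fail_at:
            raise RuntimeError("simulated failure")
        checkpoint["total"] += step
        checkpoint["step"] = step + 1   # checkpoint after every step
    return checkpoint["total"]

try:
    long_task(10, fail_at=7)   # fails partway through
except RuntimeError:
    pass

print(long_task(10))   # the retry resumes at step 7 and finishes: prints 45
```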


Dependency isolation

With every task versioned and every dependency captured, it's easy to share workflows across teams and reproduce results.

/ supporting innovation across industries

We manage the infrastructure so you can build what matters

Union is the AI orchestration and infrastructure platform of choice for many top data and ML teams globally. Esteemed companies such as Woven Planet and AbCellera have transitioned their workflows from Airflow or Kubeflow to Union.

Union is up to 66% more cost-efficient with your compute resources, solves complex infrastructure challenges, and is built for rapid iteration across teams.

View case studies

Globally trusted & tested


Join our developer community

“Flyte has this concept of immutable transformation — it turns out the executions cannot be deleted, and so having immutable transformation is a really nice abstraction for our data-engineering stack.”

Jeev Balakrishnan, Software Engineer at Freenome

“When you write Python scripts, everything runs and takes a certain amount of time, whereas now for free we get parallelism across tasks. Our data scientists think that's really cool.”

Dylan Wilder, Engineering Manager at Spotify

“One thing that I really like compared to my previous experience with some of these tools: the local dev experience with pyflyte and the sandbox are super, super nice to reduce friction between production and dev environment.”

Krishna Yeramsetty, Principal Data Scientist at Infinome

“We’re going to have 10,000-plus CPUs that we plan to use every day to process the raw data. There’ll be 30 different targets approximately that we’re collecting data on every day. That’s about 200 GB of raw data and probably 2 TB or so on the output — a lot of data process. We’re leaning heavily on Flyte to make that happen.”

Nicholas LoFaso, Senior Platform Software Engineer at MethaneSAT

“We get a lot of reusable workflows, and Flyte™ makes it fairly easy to share complex machine learning and different dependencies between teams without actually having to put all the dependencies into one container.”

Bernhard Stadlbauer, Data Engineer at Pachama

“One of the biggest reasons we picked Flyte™ was because it is ideologically aligned with what we think all MLOps systems should have: strong lineage guarantees.”

Jake Neyer, Software Engineer at Striveworks

“You can say, ‘Give me imputation’ and [Flyte™ will] launch 40 spot instances that are cheaper than your on-demand instance that you're using for your notebook and return the results back in memory.”

Calvin Leather, Staff Engineer and Tech Lead at Embark Veterinary

“Workflow versioning is quite important: When it comes to productionizing a pipeline, there are only a few platforms that provide this kind of versioning. To us, it's critical to be able to roll back to a certain workflow version in case there is a bug introduced into our production pipeline.”

Pradithya Aria Pura, Principal Software Engineer at Gojek

“Our provisions time is at least two times faster on average. The model execution time, we have three times faster, and the cost, which we can actually probably further optimize with minimal configuration and optimization, is at least three times cheaper. And so over time, we’re expecting to save even more.”

Katrina Palen, Staff ML Platform Engineer at Stash