/ better AI pipelines by design

Infrastructure for AI, ML & Data

For developers managing AI, ML, and data workflows in production, the challenges extend well beyond scheduling and orchestrating DAGs. Union.ai addresses these complexities by offering a comprehensive infrastructure management platform designed for the nuances of such environments.

Union optimizes resources across teams and implements cost-effective strategies that can reduce expenses by up to 66%. Moreover, it's engineered to fit within your own cloud ecosystem, ensuring a robust and tailored infrastructure that scales with your technical demands.

View product
/ Union: just bring your compute, we bring Flyte

Powerful DAGs, observability & cost-efficient engineering

Union is a fully-managed Flyte platform deployed in your VPC that provides a single-endpoint workflow orchestration and compute service to engineers building data and ML products.

Get built-in dashboards, live logging, and task-level resource monitoring. These help users pinpoint resource bottlenecks and simplify debugging, resulting in optimized infrastructure and faster experimentation.

Get a demo
/ from engineers for engineers

AI engineering for engineers

Union is an open AI orchestration platform that simplifies AI infrastructure so you can develop, deploy, and innovate faster. Unlike popular—but simple—AI engineering orchestrators, Union wrangles the infrastructure setup and management as well.

Write your code in Python, collaborate across departments, and enjoy full reproducibility and auditability. Union lets you focus on what matters.

Explore docs
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

from flytekit import task, workflow

@task
def get_data() -> pd.DataFrame:
    return load_digits(as_frame=True).frame

@task
def train_model(data: pd.DataFrame) -> MLPClassifier:
    features = data.drop("target", axis="columns")
    target = data["target"]
    return MLPClassifier().fit(features, target)

@workflow
def training_workflow() -> MLPClassifier:
    data = get_data()
    return train_model(data=data)

Write your Python code locally, execute it remotely

Enjoy the freedom to write Python code that runs both locally and remotely in your Kubernetes cluster. Take advantage of full parallelization and utilization of all Kubernetes nodes without writing Dockerfiles or YAML.

1. Run Local
2. Scale Remote
3. Deploy
import transformers as tr
from datasets import load_dataset

from flytekit import task
from flytekit.types.directory import FlyteDirectory

@task
def train(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
) -> FlyteDirectory:

    # authenticate

    # load the dataset, model, and tokenizer
    dataset = load_dataset(dataset_id, dataset_name)
    model = tr.AutoModelForCausalLM.from_pretrained(model_id, ...)
    tokenizer = tr.AutoTokenizer.from_pretrained(model_id, ...)

    # prepare dataset
    dataset = dataset["train"].shuffle().map(tokenizer, ...)

    # define and run the trainer
    trainer = tr.Trainer(model=model, train_dataset=dataset, ...)
    print("Training model")
    trainer.train()

    # save and return model directory
    output_path = "./model"
    print("Saving model")
    trainer.save_model(output_path)
    return FlyteDirectory(path=output_path)
import transformers as tr
from datasets import load_dataset

from flytekit import task, ImageSpec, Resources
from flytekit.types.directory import FlyteDirectory

image_spec = ImageSpec(
    env={"VENV": "/opt/venv"},
    ...
)

@task(
    container_image=image_spec,
    requests=Resources(mem="100Gi", cpu="32", gpu="8"),
)
def train(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
) -> FlyteDirectory:
    ...
import huggingface_hub as hh

from flytekit import task, workflow
from flytekit.types.directory import FlyteDirectory

@task
def train(...) -> FlyteDirectory:
    ...

@task
def deploy(model_dir: FlyteDirectory, repo_id: str) -> str:
    # upload readme and model files
    api = hh.HfApi()
    repo_url = api.create_repo(repo_id, exist_ok=True)
    readme = "..."
    return str(repo_url)

@workflow
def train_and_deploy(
    model_id: str,
    dataset_id: str,
    dataset_name: str,
    repo_id: str,
) -> str:
    model_dir = train(
        model_id=model_id,
        dataset_id=dataset_id,
        dataset_name=dataset_name,
    )
    return deploy(model_dir=model_dir, repo_id=repo_id)
$ pyflyte run llm_training.py train \
    --model_id EleutherAI/pythia-70m \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name sample

Running Execution on local.
Map: 100%|████████████| 1050391/1050391
Training model
{'train_runtime': 4.5401, ...}
100%|███████████████████| 100/100
Saving model

$ ls /var/folders/4q/frdnh9l10h53gggw1m59gr9m0000gp/T/flyte-f2qjyme6/raw/a888e295fefbdae4023ec2b35e53edcb

$ pyflyte run --remote llm_training.py train \
    --model_id meta-llama/Llama-2-7b-hf \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name default

Running Execution on Remote.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.

[✔] Go to https://org.unionai.cloud/console/projects/flytesnacks/domains/development/executions/fe661d1127e84438bb8e to see execution in the console.
$ pyflyte run --remote llm_training.py train_and_deploy \
    --model_id meta-llama/Llama-2-7b-hf \
    --dataset_id togethercomputer/RedPajama-Data-V2 \
    --dataset_name default \
    --repo_id unionai/Llama-2-7b-hf-finetuned

Running Execution on Remote.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.
Image ghcr.io/unionai-oss/llm_training:5quiCD_S3VoDsP0Sr3ZWIA.. found. Skip building.

[✔] Go to https://org.unionai.cloud/console/projects/flytesnacks/domains/development/executions/fe661d1127e84438bb8e to see execution in the console.
/ the better replacement for Airflow and Kubeflow

Purpose-built for lineage-aware pipeline orchestration

Bring your own Airflow code (BYOAC) and take advantage of modern AI orchestration features—out of the box! Get full reproducibility, auditability, experiment tracking, cross-team task sharing, compile-time error checking, and automatic artifact capture.

Explore features

Versioning

Easily experiment and iterate in isolation with versioned tasks and workflows.


Multi-tenancy

A centralized infrastructure for your team and organization enables multiple users to share the same platform while maintaining their own distinct data and configurations.

Type checking

Strongly typed inputs and outputs simplify data validation and highlight incompatibilities between tasks, making it easier to identify and troubleshoot errors before launching the workflow.


Caching

Caching the output of task executions accelerates subsequent executions and prevents wasted resources.

Data lineage

As a data-aware platform, Union simplifies rollbacks and error tracking.


Immutability

Immutable executions help ensure reproducibility by preventing changes to the state of an execution.


Recoverability

Rerun only the failed tasks in a workflow to save time and resources and to debug more easily.


Human-in-the-loop

Enable human intervention to supervise, tune, and test workflows, resulting in improved accuracy and safety.

Intra-task checkpointing

Checkpoint progress within a task execution in order to save time and resources in the event of task failure.


Reproducibility

With every task versioned and every dependency captured, it is easy to share workflows across teams and reproduce results.

/ supporting innovation across industries

We manage the infrastructure so you can build what matters

Union is the AI orchestration and infrastructure platform of choice for many top data and ML teams globally. Esteemed companies such as Woven Planet and AbCellera have transitioned their workflows from Airflow or Kubeflow to Union.

Union is up to 66 percent more cost-efficient with your compute resources, solves complex infrastructure challenges, and is built for rapid iteration across teams.

View case studies

Globally trusted & tested


Join our developer community

“We got over 66% reduction in orchestration code when we moved to Flyte™ — a huge win!”

Seth Miller-Zhang, Senior Software Engineer at ZipRecruiter

“With Flyte™, we want to give the power back to biologists. We want to stand up something that they can play around with different parameters for their models because not every … parameter is fixed. We want to make sure we are giving them the power to run the analyses.”

Krishna Yeramsetty, Principal Data Scientist at Infinome

“To my great surprise, the migration to Flyte™ was as smooth and easy as the development of our initial active learning pipeline in Airflow had been painful: It literally took just a few weeks to revamp our platform’s main pipeline entirely, to the delight of users and developers alike.”

Jennifer Prendki, Founder and CEO of Alectio

“Because we are a spot-based company, a lot of our workflows run into the majority of issues. Thankfully, with Flyte™, we can debug and do quick iterations.”

Varsha Parthasarathy, Senior Software Engineer at Woven Planet

“It’s not an understatement to say that Flyte™ is really a workhorse at Freenome!”

Jeev Balakrishnan, Software Engineer at Freenome

“We're going to have 10,000-plus CPUs that we plan to use every day to process the raw data. There'll be 30 different targets approximately that we're collecting data on every day. That's about 200 GB of raw data and probably 2 TB or so on the output — a lot of data process. We're leaning heavily on Flyte™ to make that happen.”

Nicholas LoFaso, Senior Platform Software Engineer at MethaneSAT

“The multi-tenancy that Flyte™ provides is obviously important in regulated spaces where you need to separate users and resources and things like amongst each other within the same organization.”

Jake Neyer, Software Engineer at Striveworks

“We’ve migrated about 50% of all training pipelines over to Flyte™ from Kubeflow. In several cases, we saw an 80% reduction in boilerplate between workflows and tasks vs. the Kubeflow pipeline and components. Overall, Flyte™ is a far simpler system to reason about with respect to how the code actually executes, and it’s more self-serve for our research team to handle.”

Rahul Mehta, ML Infrastructure/Platform Lead at Theorem LP

“Flyte™’s scalability, data lineage, and caching capabilities enable us to train hundreds of models on petabytes of geospatial data, giving us an edge in our business.”

Arno, CTO at Blackshark.ai