Native app integrations

An LLM-optimized bundle of this entire section is available at section.md. This single file contains every page in the section, formatted for use as context with AI coding agents.

Flyte ships with a set of pre-built AppEnvironment integrations that wrap popular frameworks and serving runtimes, so you can deploy common app types without writing the integration glue yourself. Each integration provides a ready-to-use environment class — just configure your app, image, resources, and scaling, and Flyte handles the rest.
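For example, deploying a FastAPI service with the native integration typically amounts to defining the app and describing its environment. The sketch below is illustrative only: the import path, the `FastAPIAppEnvironment` class name, and its `image`, resource, and replica parameters are assumptions rather than the exact API; see the FastAPI app page for the real interface.

```python
# Illustrative sketch only: the environment class, its import path, and its
# parameters are hypothetical stand-ins for the native FastAPI integration.
# Consult the FastAPI app page for the actual names and signatures.
from fastapi import FastAPI

# Hypothetical import path for the integration's environment class.
from flyte_fastapi_integration import FastAPIAppEnvironment

app = FastAPI()


@app.get("/health")
def health() -> dict:
    # A trivial endpoint; your real app logic goes here.
    return {"status": "ok"}


# Hypothetical environment configuration: the integration wraps the app
# object, the container image, compute resources, and replica scaling.
env = FastAPIAppEnvironment(
    name="my-api",
    app=app,
    image="ghcr.io/example/my-api:latest",  # container image (illustrative)
    cpu="1",
    memory="1Gi",
    min_replicas=1,
    max_replicas=3,
)
```

Once the environment is defined, you deploy it like any other app environment; each integration page lists the exact import path, parameters, and deployment steps.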

If you’re new to apps in Flyte, start with Introducing apps for an overview, then see Build apps to learn how to build custom app environments from scratch.

When to use a native integration

Use a native integration when your app fits one of the supported frameworks and you want:

  • A minimal, opinionated setup — sensible defaults for the framework, no boilerplate
  • First-class support — features like model streaming, OpenAI-compatible APIs, and passthrough auth wired in for you
  • Faster time-to-deploy — focus on your app logic, not on packaging and serving plumbing

For app types not covered here, build a custom AppEnvironment using the patterns in the Build apps section.

Available integrations

| Integration | Framework | Typical use case |
| --- | --- | --- |
| Streamlit app | Streamlit | Interactive dashboards and data apps |
| FastAPI app | FastAPI | REST APIs, webhooks, and backend services |
| vLLM app | vLLM | High-throughput LLM inference with an OpenAI-compatible API |
| SGLang app | SGLang | Structured generation and LLM serving with an OpenAI-compatible API |
| Flyte webhook | FastAPI | Pre-built HTTP endpoints for common Flyte control-plane operations |

Next steps

  • Streamlit app: Build interactive Streamlit dashboards
  • FastAPI app: Create REST APIs and backend services
  • vLLM app: Serve large language models with vLLM
  • SGLang app: Serve LLMs with SGLang for structured generation
  • Flyte webhook: Pre-built webhook for common Flyte operations