# Workflows
> This bundle contains all pages in the Workflows section.
> Source: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows ===

# Workflows

> **📝 Note**
>
> An LLM-optimized bundle of this entire section is available at [`section.md`](https://www.union.ai/docs/v1/union/user-guide/core-concepts/section.md).
> This single file contains all pages in this section, optimized for AI coding agent context.

So far in our discussion of workflows, we have focused on top-level workflows decorated with `@union.workflow`.
These are, in fact, more accurately termed [**standard workflows**](./standard-workflows), to differentiate them from the other types of workflows that exist in Union.ai: [**subworkflows and sub-launch plans**](./subworkflows-and-sub-launch-plans), [**dynamic workflows**](./dynamic-workflows), and [**imperative workflows**](./imperative-workflows).
In this section, we will delve deeper into the fundamentals of all of these workflow types, including their syntax, structure, and behavior.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/standard-workflows ===

# Standard workflows

A standard workflow is defined by a Python function decorated with the `@union.workflow` decorator.
The function is written in a domain-specific language (DSL), a subset of Python syntax that describes the directed acyclic graph (DAG) that is deployed and executed on Union.ai.
The syntax of a standard workflow definition can only include the following:

* Calls to functions decorated with `@union.task` and assignment of variables to the returned values.
* Calls to other functions decorated with `@union.workflow` and assignment of variables to the returned values (see [Subworkflows](./subworkflows-and-sub-launch-plans)).
* Calls to [`LaunchPlan` objects](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans) (see [When to use sub-launch plans](./subworkflows-and-sub-launch-plans#when-to-use-sub-launch-plans)).
* Calls to functions decorated with `@union.dynamic` and assignment of variables to the returned values (see [Dynamic workflows](./dynamic-workflows)).
* The special [`conditional` construct](https://www.union.ai/docs/v1/union/user-guide/programming/conditionals).
* Statements using the [chaining operator `>>`](https://www.union.ai/docs/v1/union/user-guide/programming/chaining-entities).

## Evaluation of a standard workflow

When a standard workflow is [run locally in a Python environment](https://www.union.ai/docs/v1/union/user-guide/development-cycle/running-your-code) it is executed as a normal Python function.
However, when it is registered to Union.ai, the top level `@union.workflow`-decorated function is evaluated as follows:

* Inputs to the workflow are materialized as lazily-evaluated promises which are propagated to downstream tasks and subworkflows.
* All values returned by calls to functions decorated with `@union.task` or `@union.dynamic` are also materialized as lazily-evaluated promises.

The resulting structure is used to construct the DAG and deploy the required containers to the cluster.
The actual evaluation of these promises occurs when the tasks (or dynamic workflows) are executed in their respective containers.
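To build intuition for how deferred promises let the DAG be recorded before any task runs, here is a toy sketch. This is an illustrative model only, not the actual flytekit `Promise` or node classes:

```python
# Toy model of deferred evaluation (illustrative only; not the real SDK classes).
class Node:
    def __init__(self, task_name, inputs):
        self.task_name = task_name
        self.inputs = inputs  # values may be Promises, which become DAG edges

class Promise:
    """Placeholder for the output of a node that has not executed yet."""
    def __init__(self, node, output_name="o0"):
        self.node = node
        self.output_name = output_name

# "Compiling" a two-task workflow: no task code runs, only structure is recorded.
n1 = Node("double", inputs={"x": 3})
n2 = Node("increment", inputs={"x": Promise(n1)})  # edge: double -> increment

# The Promise input on n2 is exactly the dependency edge used to build the DAG.
upstream = [v.node.task_name for v in n2.inputs.values() if isinstance(v, Promise)]
print(upstream)  # prints ['double']
```

The real SDK follows the same principle: recording which node each promise came from is what yields the dependency edges of the DAG.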

## Conditional construct

Because standard workflows cannot directly include Python `if` statements, a special `conditional` construct is provided that allows you to define conditional logic in a workflow.
For details, see [Conditionals](https://www.union.ai/docs/v1/union/user-guide/programming/conditionals).
<!-- TODO: Add link to API -->
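A toy sketch (illustrative only, not the SDK's actual behavior) shows why a plain `if` cannot work at compile time: a task output is only a placeholder at that point, so there is no concrete value to branch on.

```python
# Toy illustration (not the real SDK): branching on a placeholder fails,
# because the actual value does not exist until the task runs in its container.
class Promise:
    def __bool__(self):
        raise TypeError("cannot branch on a promise during workflow compilation")

p = Promise()  # stands in for a task output while the workflow is being compiled

try:
    if p:  # what a plain Python `if` in a workflow body would attempt
        pass
except TypeError as err:
    message = str(err)

print(message)
```

The `conditional` construct sidesteps this by recording the branch structure itself in the DAG, deferring the actual comparison to runtime.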

## Chaining operator

When Union.ai builds the DAG for a standard workflow, it uses the passing of values from one task to another to determine the dependency relationships between tasks.

There may be cases where you want to define a dependency between two tasks that is not based on the output of one task being passed as an input to another.
In that case, you can use the chaining operator `>>` to define the dependencies between tasks.
For details, see [Chaining Union.ai entities](https://www.union.ai/docs/v1/union/user-guide/programming/chaining-entities).
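As a rough sketch of what `>>` records (a toy model, not the SDK implementation), the operator simply registers an execution-order edge with no data flowing along it:

```python
# Toy model of the chaining operator (illustrative only; not the SDK classes).
class Node:
    def __init__(self, name):
        self.name = name
        self.upstream = []  # nodes that must finish before this one starts

    def __rshift__(self, other):
        # t1 >> t2 records "t2 runs after t1" even though no data is passed
        other.upstream.append(self)
        return other  # returning `other` allows chains like t1 >> t2 >> t3

setup, train, cleanup = Node("setup"), Node("train"), Node("cleanup")
setup >> train >> cleanup

print([n.name for n in cleanup.upstream])  # prints ['train']
```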

## Workflow decorator parameters

The `@union.workflow` decorator can take the following parameters:

* `failure_policy`: Use the options in [`flytekit.WorkflowFailurePolicy`](https://www.union.ai/docs/v1/union/api-reference/flytekit-sdk).
<!-- TODO: Add link to API -->

* `interruptible`: Indicates if tasks launched from this workflow are interruptible by default. See [Interruptible instances](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-hardware-environment/interruptible-instances).

* `on_failure`: Invoke this workflow or task on failure. The workflow specified must have the same parameter signature as the current workflow, with an additional parameter called `error`.

* `docs`: A description entity for the workflow.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/subworkflows-and-sub-launch-plans ===

# Subworkflows and sub-launch plans

In Union.ai it is possible to invoke one workflow from within another.
A parent workflow can invoke a child workflow in two ways: as a **subworkflow** or via a [**sub-launch plan**](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans/running-launch-plans).

In both cases the child workflow is defined and registered normally, exists in the system normally, and can be run independently.

But, if the child workflow is invoked from within the parent **by directly calling the child's function**, then it becomes a **subworkflow**.
The DAG of the subworkflow is embedded directly into the DAG of the parent and effectively becomes part of the parent workflow execution, sharing the same execution ID and execution context.

On the other hand, if the child workflow is invoked from within the parent [**by calling the child's launch plan**](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans), this is called a **sub-launch plan**. It results in a new top-level workflow execution being invoked with its own execution ID and execution context.
It also appears as a separate top-level entity in the system.
The only difference is that it happens to have been kicked off from within another workflow instead of from the command line or the UI.

Here is an example:

```python
import union

@union.task
def t(a: int, b: int) -> int:
    return a + b

@union.workflow
def sub_wf(a: int, b: int) -> int:
    return t(a=a, b=b)

# Get the default launch plan of sub_wf, which we name sub_wf_lp
sub_wf_lp = union.LaunchPlan.get_or_create(sub_wf)

@union.workflow
def main_wf():
    # Invoke sub_wf directly.
    # An embedded subworkflow results.
    sub_wf(a=3, b=4)

    # Invoke sub_wf through its default launch plan, here called sub_wf_lp.
    # A separate top-level execution (a sub-launch plan) results.
    sub_wf_lp(a=1, b=2)
```

## When to use subworkflows

Subworkflows let you manage parallelism between a parent workflow and the workflows it launches, since a subworkflow executes within the same context as its parent.
Consequently, all nodes of a subworkflow adhere to the overall constraints imposed by the parent workflow.

<!-- TODO: a diagram of the above example. -->

Here's an example illustrating the calculation of slope, intercept and the corresponding y-value.

```python
import union

@union.task
def slope(x: list[int], y: list[int]) -> float:
    sum_xy = sum([x[i] * y[i] for i in range(len(x))])
    sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
    n = len(x)
    return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)

@union.task
def intercept(x: list[int], y: list[int], slope: float) -> float:
    mean_x = sum(x) / len(x)
    mean_y = sum(y) / len(y)
    intercept = mean_y - slope * mean_x
    return intercept

@union.workflow
def slope_intercept_wf(x: list[int], y: list[int]) -> tuple[float, float]:
    slope_value = slope(x=x, y=y)
    intercept_value = intercept(x=x, y=y, slope=slope_value)
    return (slope_value, intercept_value)

@union.task
def regression_line(val: int, slope_value: float, intercept_value: float) -> float:
    return (slope_value * val) + intercept_value  # y = mx + c

@union.workflow
def regression_line_wf(val: int = 5, x: list[int] = [-3, 0, 3], y: list[int] = [7, 4, -2]) -> float:
    slope_value, intercept_value = slope_intercept_wf(x=x, y=y)
    return regression_line(val=val, slope_value=slope_value, intercept_value=intercept_value)
```

The `slope_intercept_wf` computes the slope and intercept of the regression line.
Subsequently, the `regression_line_wf` triggers `slope_intercept_wf` and then computes the y-value.
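Since tasks run as ordinary Python functions locally, the arithmetic above can be sanity-checked with plain Python (a standalone sketch with no Union.ai dependency; the function names here are illustrative):

```python
# Plain-Python check of the regression math used in the tasks above.
def slope_fn(x, y):
    n = len(x)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x_squared = sum(xi ** 2 for xi in x)
    return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)

def intercept_fn(x, y, m):
    return sum(y) / len(y) - m * (sum(x) / len(x))

x, y = [-3, 0, 3], [7, 4, -2]
m = slope_fn(x, y)          # -1.5
c = intercept_fn(x, y, m)   # 3.0
print(m * 5 + c)            # y at val=5 -> -4.5
```

These are the values `regression_line_wf` produces with its default inputs.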

It is possible to nest a workflow that contains a subworkflow within yet another workflow.
Workflows can be constructed from other workflows even when those workflows also function as standalone entities.
For example, the workflow below wraps `regression_line_wf` (which itself invokes a subworkflow), yet every workflow involved can still exist and run independently:

```python
import union

@union.workflow
def nested_regression_line_wf() -> float:
    return regression_line_wf()
```

## When to use sub-launch plans

Sub-launch plans can be useful for implementing exceptionally large or complicated workflows that can't be adequately implemented as [dynamic workflows](../workflows/dynamic-workflows) or [map tasks](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-types).
Dynamic workflows and map tasks share a single context and a single set of underlying Kubernetes resource definitions.
Workflows invoked through sub-launch plans do not share this context.
They are executed as separate top-level entities, allowing for better parallelism and scale.

Here is an example of invoking a workflow multiple times through its launch plan:

```python
import union

@union.task
def my_task(a: int, b: int, c: int) -> int:
    return a + b + c

@union.workflow
def my_workflow(a: int, b: int, c: int) -> int:
    return my_task(a=a, b=b, c=c)

my_workflow_lp = union.LaunchPlan.get_or_create(my_workflow)

@union.workflow
def wf() -> list[int]:
    return [my_workflow_lp(a=i, b=i, c=i) for i in [1, 2, 3]]
```

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/dynamic-workflows ===

# Dynamic workflows

A workflow whose directed acyclic graph (DAG) is computed at run-time is a [`dynamic`]() workflow. <!-- TODO: add link to API -->

The tasks in a dynamic workflow are executed at runtime using dynamic inputs. A dynamic workflow is similar to a standard [`workflow`]()<!-- TODO: add link to API -->, in that it uses a Python-like domain-specific language to declare dependencies between tasks or define new workflows.

A key distinction is that a dynamic workflow is evaluated at runtime. This means that the inputs are first materialized and forwarded to the dynamic workflow, just as they would be for a task. However, the value returned from a dynamic workflow is a [`Promise`]() <!-- TODO: add link to API --> object, which can be materialized by subsequent tasks.

Think of a dynamic workflow as a combination of a task and a workflow: it is used to decide the parameters of a workflow at runtime, and it is both compiled and executed at runtime.

Dynamic workflows become essential when you need to do the following:
- Handle conditional logic
- Modify the logic of the code at runtime
- Change or decide on feature extraction parameters on the fly

## Defining a dynamic workflow

You can define a dynamic workflow using the `@union.dynamic` decorator.

Within the `@union.dynamic` context, each invocation of a [`task`]() <!-- TODO: add link to API --> or a derivative of the [`Task`]() <!-- TODO: add link to API --> class leads to deferred evaluation using a Promise, rather than the immediate materialization of the actual value. While nesting other `@union.dynamic` and `@union.workflow` constructs within this task is possible, direct interaction with the outputs of a task/workflow is limited, as they are lazily evaluated. If you need to interact with the outputs, we recommend separating the logic in a dynamic workflow and creating a new task to read and resolve the outputs.

The example below uses a dynamic workflow to count the common characters between any two strings.

We define a task that returns the index of a character, where A-Z/a-z is equivalent to 0-25:

```python
import union

@union.task
def return_index(character: str) -> int:
    if character.islower():
        return ord(character) - ord("a")
    else:
        return ord(character) - ord("A")
```

We also create a task that updates a 26-element frequency list, incrementing the count at the given index:

```python
@union.task
def update_list(freq_list: list[int], list_index: int) -> list[int]:
    freq_list[list_index] += 1
    return freq_list
```

We define a task to calculate the number of common characters between the two strings:

```python
@union.task
def derive_count(freq1: list[int], freq2: list[int]) -> int:
    count = 0
    for i in range(26):
        count += min(freq1[i], freq2[i])
    return count
```

We define a dynamic workflow to accomplish the following:

1. Initialize two 26-element frequency lists (one per string) to be passed to the `update_list` task.
2. Iterate through each character of the first string (`s1`) and populate its frequency list.
3. Iterate through each character of the second string (`s2`) and populate its frequency list.
4. Determine the number of common characters by comparing the two frequency lists.

The looping process depends on the number of characters in both strings, which is unknown until runtime:

```python
@union.dynamic
def count_characters(s1: str, s2: str) -> int:
    # s1 and s2 are materialized here and can be used as ordinary strings

    # Initialize empty lists with 26 slots each, one per letter of the alphabet (case-insensitive)
    freq1 = [0] * 26
    freq2 = [0] * 26

    # Loop through characters in s1
    for i in range(len(s1)):
        # Calculate the index for the current character in the alphabet
        index = return_index(character=s1[i])
        # Update the frequency list for s1
        freq1 = update_list(freq_list=freq1, list_index=index)
        # index and freq1 are not accessible as they are promises

    # looping through the string s2
    for i in range(len(s2)):
        # Calculate the index for the current character in the alphabet
        index = return_index(character=s2[i])
        # Update the frequency list for s2
        freq2 = update_list(freq_list=freq2, list_index=index)
        # index and freq2 are not accessible as they are promises

    # Count the common characters between s1 and s2
    return derive_count(freq1=freq1, freq2=freq2)
```

A dynamic workflow is modeled as a task in the Union.ai backend, but the body of the function is executed to produce a workflow at runtime. In both dynamic and static workflows, the outputs of tasks are `Promise` objects.

Union.ai executes the dynamic workflow within its container, resulting in a compiled DAG, which is then accessible in the UI. It uses the information acquired during the dynamic task's execution to schedule and execute each task within the dynamic workflow. Visualization of the dynamic workflow's graph in the UI is only available after it has completed its execution.

When a dynamic workflow is executed, it generates the entire workflow structure as its output, termed the *futures file*.
This name reflects the fact that the workflow has yet to be executed, so all subsequent outputs are considered futures.

> [!NOTE]
> Local execution works when the `@union.dynamic` decorator is used because Union.ai treats it as a task that runs with native Python inputs.

Finally, we define a standard workflow that triggers the dynamic workflow:

```python
@union.workflow
def start_wf(s1: str, s2: str) -> int:
    return count_characters(s1=s1, s2=s2)
```

You can run the workflow locally as follows:

```python
if __name__ == "__main__":
    print(start_wf(s1="Pear", s2="Earth"))
```
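For reference, a plain-Python equivalent of the same logic (no Union.ai dependency; `count_common` is an illustrative name) shows what the local run should compute:

```python
# Plain-Python equivalent of the dynamic workflow's character-counting logic.
def count_common(s1: str, s2: str) -> int:
    freq1, freq2 = [0] * 26, [0] * 26
    for ch in s1:
        freq1[ord(ch.lower()) - ord("a")] += 1
    for ch in s2:
        freq2[ord(ch.lower()) - ord("a")] += 1
    # A character is common once per matched occurrence in both strings
    return sum(min(a, b) for a, b in zip(freq1, freq2))

print(count_common("Pear", "Earth"))  # prints 3 (the common characters e, a, r)
```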

## Advantages of dynamic workflows

### Flexibility

Dynamic workflows streamline the process of building pipelines, offering the flexibility to design workflows
according to the unique requirements of your project. This level of adaptability is not achievable with static workflows.

### Lower pressure on `etcd`

The workflow Custom Resource Definition (CRD) and the states associated with static workflows are stored in `etcd`,
the Kubernetes database. This database maintains Union.ai workflow CRDs as key-value pairs, tracking the status of each node's execution.

However, `etcd` has a hard limit on data size, encompassing the workflow and node status sizes, so it is important to ensure that static workflows don't excessively consume memory.

In contrast, dynamic workflows offload the workflow specification (including node/task definitions and connections) to the object store. Still, the statuses of nodes are stored in the workflow CRD within `etcd`.

Dynamic workflows help alleviate some pressure on `etcd` storage space, providing a solution to mitigate storage constraints.

## Dynamic workflows vs. map tasks

Dynamic workflows incur overhead for large fan-outs because they store metadata for the entire generated workflow.
In contrast, [map tasks](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-types) are efficient for such extensive fan-out scenarios since they do not store this metadata, resulting in noticeably less overhead.

## Using dynamic workflows to achieve recursion

Merge sort is a classic example of how to achieve recursion using dynamic workflows.
Note that Union.ai limits the depth of recursion to prevent misuse and potential impacts on the overall stability of the system.

```python
import union

@union.task
def split(numbers: list[int]) -> tuple[list[int], list[int]]:
    length = len(numbers)
    return (
        numbers[0 : int(length / 2)],
        numbers[int(length / 2) :]
    )

@union.task
def merge(sorted_list1: list[int], sorted_list2: list[int]) -> list[int]:
    result = []
    while len(sorted_list1) > 0 and len(sorted_list2) > 0:
        # Compare the current element of the first array with the current element of the second array.
        # If the element in the first array is smaller, append it to the result and increment the first array index.
        # Otherwise, do the same with the second array.
        if sorted_list1[0] < sorted_list2[0]:
            result.append(sorted_list1.pop(0))
        else:
            result.append(sorted_list2.pop(0))

    # Extend the result with the remaining elements from both arrays
    result.extend(sorted_list1)
    result.extend(sorted_list2)

    return result

@union.task
def sort_locally(numbers: list[int]) -> list[int]:
    return sorted(numbers)

@union.dynamic
def merge_sort_remotely(numbers: list[int], threshold: int) -> list[int]:
    split1, split2 = split(numbers=numbers)
    sorted1 = merge_sort(numbers=split1, threshold=threshold)
    sorted2 = merge_sort(numbers=split2, threshold=threshold)
    return merge(sorted_list1=sorted1, sorted_list2=sorted2)

@union.dynamic
def merge_sort(numbers: list[int], threshold: int = 5) -> list[int]:
    if len(numbers) <= threshold:
        return sort_locally(numbers=numbers)
    else:
        return merge_sort_remotely(numbers=numbers, threshold=threshold)
```

By adding the `@union.dynamic` decorator, the `merge_sort_remotely` function becomes a plan of execution,
generating a workflow with four distinct nodes. These nodes run remotely on potentially different hosts,
with Union.ai ensuring proper data reference passing and maintaining execution order with maximum possible parallelism.

`@union.dynamic` is essential in this context because the number of times `merge_sort` needs to be triggered is unknown at compile time. Here, one dynamic workflow (`merge_sort`) calls another (`merge_sort_remotely`), which in turn calls `merge_sort` again,
creating a recursive and flexible execution structure.
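Run locally, the recursion behaves like ordinary Python. A standalone sketch of the same threshold logic (plain Python, no Union.ai dependency, illustrative names) produces identical results:

```python
# Plain-Python version of the recursive merge sort shown above.
def split_halves(numbers):
    half = len(numbers) // 2
    return numbers[:half], numbers[half:]

def merge_lists(a, b):
    result = []
    while a and b:
        # Append the smaller head element, mirroring the merge task above
        result.append(a.pop(0) if a[0] < b[0] else b.pop(0))
    return result + a + b

def merge_sort_local(numbers, threshold=5):
    if len(numbers) <= threshold:
        return sorted(numbers)  # the "sort_locally" branch
    left, right = split_halves(numbers)
    return merge_lists(merge_sort_local(left, threshold),
                       merge_sort_local(right, threshold))

print(merge_sort_local([5, 4, 3, 2, 1, 9, 8, 7, 6], threshold=3))
```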

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/imperative-workflows ===

# Imperative workflows

Workflows are commonly created by applying the `@union.workflow` decorator to Python functions.
During compilation, this involves processing the function's body and utilizing subsequent calls to
underlying tasks to establish and record the workflow structure. This is the *declarative* approach
and is suitable when manually drafting the workflow.

However, in cases where workflows are constructed programmatically, an imperative style is more appropriate.
For instance, if tasks have been defined already, their sequence and dependencies might have been specified in textual form (perhaps during a transition from a legacy system).
In such scenarios, you want to orchestrate these tasks.
This is where Union.ai's imperative workflows come into play, allowing you to programmatically construct workflows.

## Example

To begin, we define the `slope` and `intercept` tasks:

```python
import union

@union.task
def slope(x: list[int], y: list[int]) -> float:
    sum_xy = sum([x[i] * y[i] for i in range(len(x))])
    sum_x_squared = sum([x[i] ** 2 for i in range(len(x))])
    n = len(x)
    return (n * sum_xy - sum(x) * sum(y)) / (n * sum_x_squared - sum(x) ** 2)

@union.task
def intercept(x: list[int], y: list[int], slope: float) -> float:
    mean_x = sum(x) / len(x)
    mean_y = sum(y) / len(y)
    intercept = mean_y - slope * mean_x
    return intercept
```

Create an imperative workflow:

```python
from flytekit import Workflow

imperative_wf = Workflow(name="imperative_workflow")
```

Add the workflow inputs to the imperative workflow:

```python
imperative_wf.add_workflow_input("x", list[int])
imperative_wf.add_workflow_input("y", list[int])
```

> If you want to assign default values to the workflow inputs, you can create a [launch plan](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans).

Add the tasks that need to be triggered from within the workflow:

```python
node_t1 = imperative_wf.add_entity(slope, x=imperative_wf.inputs["x"], y=imperative_wf.inputs["y"])
node_t2 = imperative_wf.add_entity(
    intercept, x=imperative_wf.inputs["x"], y=imperative_wf.inputs["y"], slope=node_t1.outputs["o0"]
)
```

Lastly, add the workflow output:

```python
imperative_wf.add_workflow_output("wf_output", node_t2.outputs["o0"])
```

You can execute the workflow locally as follows:

```python
if __name__ == "__main__":
    print(f"Running imperative_wf() {imperative_wf(x=[-3, 0, 3], y=[7, 4, -2])}")
```

You also have the option to provide a list of inputs and
retrieve a list of outputs from the workflow:

```python
wf_input_y = imperative_wf.add_workflow_input("y", list[str])

# `some_task` is a placeholder for any task that accepts a list input
node_t3 = imperative_wf.add_entity(some_task, a=[imperative_wf.inputs["x"], wf_input_y])

imperative_wf.add_workflow_output(
    "list_of_outputs",
    [node_t1.outputs["o0"], node_t2.outputs["o0"]],
    python_type=list[str],
)
```

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/launching-workflows ===

# Launching workflows

From the [individual workflow view](./viewing-workflows#workflow-view) (accessed, for example, by selecting a workflow in the [**Workflows** list](./viewing-workflows#workflows-list)) you can select **Launch Workflow** in the top right. This opens the **New Execution** dialog for workflows:

![New execution dialog settings](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/launching-workflows/new-execution-dialog-settings.png)

At the top you can select:

* The specific version of this workflow that you want to launch.
* The launch plan to be used to launch this workflow (by default it is set to the [default launch plan of the workflow](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans)).

Along the left side the following sections are available:

* **Inputs**: The input parameters of the workflow function appear here as fields to be filled in.
* **Settings**:
  * **Execution name**: A custom name for this execution. If not specified, a name will be generated.
  * **Overwrite cached outputs**: A boolean. If set to `True`, this execution will overwrite any previously-computed cached outputs.
  * **Raw output data config**: Remote path prefix to store raw output data.
    By default, workflow output will be written to the built-in metadata storage.
    Alternatively, you can specify a custom location for output at the organization, project-domain, or individual execution levels.
    This field is for specifying this setting at the workflow execution level.
    If this field is filled in it overrides any settings at higher levels.
    The parameter is expected to be a URL to a writable resource (for example, `http://s3.amazonaws.com/my-bucket/`).
    See [Raw data store](https://www.union.ai/docs/v1/union/user-guide/data-input-output/task-input-and-output).
  * **Max parallelism**: Number of workflow nodes that can be executed in parallel. If not specified, project/domain defaults are used. If 0 then no limit is applied.
  * **Force interruptible**: A three-valued setting for overriding the workflow's interruptible setting for this particular execution.
    If not set, the workflow's interruptible setting is used.
    If set and **enabled**, then `interruptible=True` is used for this execution.
    If set and **disabled**, then `interruptible=False` is used for this execution.
    See [Interruptible instances](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-hardware-environment/interruptible-instances).

  * **Service account**: The service account to use for this execution. If not specified, the default is used.

* **Environment variables**: Environment variables that will be available to tasks in this workflow execution.
* **Labels**: Labels to apply to the execution resource.
* **Notifications**: [Notifications](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans/notifications) configured for this workflow execution.

* **Debug**: The workflow execution details for debugging purposes.

Select **Launch** to launch the workflow execution. This will take you to the [Execution view](./viewing-workflow-executions).

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/viewing-workflows ===

# Viewing workflows

## Workflows list

The workflows list shows all workflows in the current project and domain:

![Workflows list](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflows/workflows-list.png)

You can search the list by name and filter for only those that are archived.
To archive a workflow, select the archive icon ![Archive icon](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflows/archive-icon.png).

Each entry in the list provides some basic information about the workflow:

* **Last execution time**:
The time of the most recent execution of this workflow.
* **Last 10 executions**:
The status of the last 10 executions of this workflow.
* **Inputs**:
The input type for the workflow.
* **Outputs**:
The output type for the workflow.
* **Description**:
The description of the workflow.

Select an entry in the list to go to the [workflow view](#workflow-view) for that workflow.

## Workflow view

The workflow view provides details about a specific workflow.

![Workflow view](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflows/workflow-view.png)

This view provides:
* A list of recent workflow versions:
  Selecting a version will take you to the [workflow versions list](#workflow-versions-list).
* A list of recent executions:
  Selecting an execution will take you to the [execution view](./viewing-workflow-executions).

### Workflow versions list

The workflow versions list shows all versions of this workflow along with a graph view of the workflow structure:

![Workflow version list](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflows/workflow-versions-list.png)

### Workflow and task descriptions

Union.ai enables the use of docstrings to document your code. Docstrings are stored in the control plane and displayed on the UI for each workflow or task.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/core-concepts/workflows/viewing-workflow-executions ===

# Viewing workflow executions

The **Executions list** shows all executions in a project and domain combination.
An execution represents a single run of all or part of a workflow (including subworkflows and individual tasks).
You can access it from the **Executions** link in the left navigation.

![Executions list](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflow-executions/executions-list.png)

## Domain Settings

This section displays any domain-level settings that have been configured for this project-domain combination. They are:

* Security Context
* Labels
* Annotations
* Raw output data config
* Max parallelism

## All Executions in the Project

For each execution in this project and domain you can see the following:

* A graph of the **last 100 executions in the project**.
* **Start time**: Select to view the [execution view](#execution-view).
* **Workflow/Task**: The [individual workflow](./viewing-workflows) or [individual task](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/viewing-tasks) that ran in this execution.
* **Version**: The version of the workflow or task that ran in this execution.
* **Launch Plan**: The [Launch Plan](https://www.union.ai/docs/v1/union/user-guide/core-concepts/launch-plans/viewing-launch-plans) that was used to launch this execution.
* **Schedule**: The schedule that was used to launch this execution (if any).
* **Execution ID**: The ID of the execution.
* **Status**: The status of the execution. One of **QUEUED**, **RUNNING**, **SUCCEEDED**, **FAILED** or **UNKNOWN**.
* **Duration**: The duration of the execution.

## Execution view

The execution view appears when you launch a workflow or task or select an already completed execution.

An execution represents a single run of all or part of a workflow (including subworkflows and individual tasks).

![Execution view - nodes](https://www.union.ai/docs/v1/union/_static/images/user-guide/core-concepts/workflows/viewing-workflow-executions/execution-view-nodes.png)

> [!NOTE]
> An execution usually represents the run of an entire workflow.
> But, because workflows are composed of tasks (and sometimes subworkflows) and Union.ai caches the outputs of those independently of the workflows in which they participate, it sometimes makes sense to execute a task or subworkflow independently.

The top part of the execution view provides detailed general information about the execution.

The bottom part provides three tabs displaying different aspects of the execution: **Nodes**, **Graph**, and **Timeline**.

### Nodes

The default tab within the execution view is the **Nodes** tab.
It shows a list of the Union.ai nodes that make up this execution (a node in Union.ai is either a task or a (sub-)workflow).

Selecting an item in the list opens the right panel showing more details of that specific node:

![](../../../_static/images/user-guide/core-concepts/workflows/viewing-workflow-executions/execution-view-node-side-panel.png)

The top part of the side panel provides detailed information about the node as well as the **Rerun task** button.

Below that, you have the following tabs: **Executions**, **Inputs**, **Outputs**, and **Task**.

The **Executions** tab gives you details on the execution of this particular node as well as access to:

* **Task level monitoring**: You can access the [task-level monitoring](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-hardware-environment/task-level-monitoring) information by selecting **View Utilization**.

* **Logs**: You can access logs by clicking the text under **Logs**. See [Logging](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/viewing-logs).

The **Inputs** and **Outputs** tabs display the data that was passed into and out of the node, respectively.

If this node is a task (as opposed to a subworkflow) then the **Task** tab displays the Task definition structure.

### Graph

The Graph tab displays a visual representation of the execution as a directed acyclic graph:

![](../../../_static/images/user-guide/core-concepts/workflows/viewing-workflow-executions/execution-view-graph.png)

### Timeline

The Timeline tab displays a visualization showing the timing of each task in the execution:

![](../../../_static/images/user-guide/core-concepts/workflows/viewing-workflow-executions/execution-view-timeline.png)

