# Administration
> This bundle contains all pages in the Administration section.
> Source: https://www.union.ai/docs/v1/union/user-guide/administration/

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/administration ===

# Administration

> **📝 Note**
>
> An LLM-optimized bundle of this entire section is available at [`section.md`](section.md).
> This single file contains all pages in this section, optimized for AI coding agent context.

This section covers the administration of Union.ai.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/administration/resources ===

# Resources

Select **Resources** in the top right of the Union.ai interface to open a view showing the overall health and utilization of your Union.ai installation.

![Resources link](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/resources-link.png)

Three tabs are available: **Executions**, **Resource Quotas**, and **Compute**.

## Executions

![Usage Executions](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/resources-executions.png)

This tab displays information about workflows, tasks, resource consumption, and resource utilization.

### Filter

The drop-downs at the top let you filter the charts below by project, domain, and time period:

![](../../_static/images/user-guide/administration/resources/filter.png)

* **Project**: Dropdown with multi-select over all projects. Making a selection recalculates the charts accordingly. Defaults to **All Projects**.
* **Domain**: Dropdown with multi-select over all domains (for example, **development**, **staging**, **production**). Making a selection recalculates the charts accordingly. Defaults to **All Domains**.
* **Time Period Selector**: Dropdown to select the period over which the charts are plotted. Making a selection recalculates the charts accordingly. Defaults to **24 Hours**. All times are expressed in UTC.

### Workflow Executions in Final State

This chart shows the overall status of workflows at the project-domain level.

![](../../_static/images/user-guide/administration/resources/workflow-executions-in-final-state.png)

For all workflows in the selected project and domain which reached their final state during the selected time period, the chart shows:

* The number of successful workflows.
* The number of aborted workflows.
* The number of failed workflows.

See [Workflow States](/docs/v1/flyte/architecture/content/workflow-state-transitions#workflow-states) for the precise definitions of these states.

### Task Executions in Final State

This chart shows the overall status of tasks at the project-domain level.

![](../../_static/images/user-guide/administration/resources/task-executions-in-final-state.png)

For all tasks in the selected project and domain which reached their final state during the selected time period, the chart shows:

* The number of successful tasks.
* The number of aborted tasks.
* The number of failed tasks.

See [Task States](/docs/v1/flyte/architecture/content/workflow-state-transitions#task-states) for the precise definitions of these states.

### Running Pods

This chart shows the absolute resource consumption for

* Memory (MiB)
* CPU (number of cores)
* GPU (number of devices)

You can select which parameter to show by clicking on the corresponding button at the top of the chart.
You can also select whether to show **Requested**, **Used**, or both.

![Running Pods](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/running-pods.png)

### Utilization

This chart shows the percent resource utilization for

* Memory
* CPU

You can select which parameter to show by clicking on the corresponding button at the top of the chart.

![Utilization](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/utilization.png)

## Resource Quotas

This dashboard displays the resource quotas for projects and domains in the organization.

![Resource Quotas](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/resources-resource-quotas.png)

### Namespaces and Quotas

Under the hood, Union.ai uses Kubernetes to run workloads. To deliver multi-tenancy, the system uses Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). In AWS-based installations, each project-domain pair is mapped to a namespace; in GCP-based installations, each domain is mapped to a namespace.

Within each namespace, a [resource quota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) is set for each resource type (memory, CPU, GPU). This dashboard displays the current point-in-time quota consumption for memory, CPU, and GPU. Quotas are defined as part of the set-up of the instance types in your data plane. To change them, talk to the Union.ai team.
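For illustration, a Kubernetes `ResourceQuota` of the kind described above might look like the following. This is a hypothetical sketch; the actual object names and values in your data plane are configured by Union.ai:

```yaml
# Hypothetical sketch of a namespace resource quota; real names and
# values are set up by Union.ai as part of your data plane.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: flytesnacks-development
spec:
  hard:
    limits.cpu: "32"
    limits.memory: 64Gi
    requests.nvidia.com/gpu: "4"
```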

### Examples

Resource requests and limits are set at the task level like this (see [Customizing task resources](https://www.union.ai/docs/v1/union/user-guide/core-concepts/tasks/task-hardware-environment/customizing-task-resources)):

```python
import union

@union.task(requests=union.Resources(cpu="1", mem="1Gi"),
            limits=union.Resources(cpu="10", mem="10Gi"))
def my_task():
    ...
```

This task requests 1 CPU and 1 gibibyte of memory. It sets a limit of 10 CPUs and 10 gibibytes of memory.

If a task requesting the above resources (1 CPU and 1Gi) is executed in a project (for example **cluster-observability**) and domain (for example, **development**) with 10 CPU and 10Gi of quota for CPU and memory respectively, the dashboard will show that 10% of both memory and CPU quotas have been consumed.

<!-- TODO: Add screenshot
[Resource Quotas 10%](https://www.union.ai/docs/v1/union/user-guide/_static/images/user-guide/administration/resources/resources-resource-quotas-10.png)
-->

Likewise, if a task requesting 10 CPU and 10 Gi of memory is executed, the dashboard will show that 100% of both memory and CPU quotas have been consumed.

<!-- TODO: Add screenshot
[Resource Quotas 100%](https://www.union.ai/docs/v1/union/user-guide/_static/images/user-guide/administration/resources/resources-resource-quotas-100.png)
-->


### Quota Consumption

For each resource type, the sum of all the `limits` parameters set on all the tasks in a namespace determines quota consumption for that resource. Within a namespace, a given resource’s consumption can never exceed that resource’s quota.
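As a sketch of that rule (a hypothetical illustration; Union.ai performs this accounting for you):

```python
# Sketch: in a namespace, quota consumption for a resource is the sum of
# the `limits` values of all running tasks. Figures here are hypothetical.
def cpu_quota_consumed(task_cpu_limits: list[float], quota: float) -> float:
    """Fraction of the CPU quota consumed by the given task limits."""
    consumed = sum(task_cpu_limits)
    # Within a namespace, consumption can never exceed the quota.
    if consumed > quota:
        raise ValueError("limits exceed the namespace quota")
    return consumed / quota

# Two tasks with CPU limits of 1 and 4 against a 10-CPU quota:
print(cpu_quota_consumed([1.0, 4.0], quota=10.0))  # 0.5
```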

## Compute

This dashboard displays information about configured node pools in the organization.

![Resources compute](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/resources-compute.png)

### Configuring Resource Quotas

> [!NOTE]
> Self serve quota configuration is currently only available to AWS customers. To configure quotas on other cloud providers (GCP, Azure, etc.) please reach out to the Union team.

To configure resource quotas for a given project-domain (for example, `flytesnacks-development`), start by navigating to that project-domain's Dashboard in the UI.

![Dashboard](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/dashboard.png)

Next, click the gear icon beside the "Resource Quotas" section which is located at the bottom of the right sidebar on the Dashboard page.
From here, you can enter your desired memory, CPU, and GPU quotas and select "Save."

![Resource Quotas Configuration](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/resources/quotas.png)


Union.ai schedules tasks on a node pool that meets each task's requirements (as defined by the `requests` and `limits` parameters in the task definition) and scales these node pools horizontally between their minimum and maximum configured sizes. This dashboard shows all currently configured node pools, whether they are interruptible, their labels and taints, their minimum and maximum sizes, and their allocatable resources.

The allocatable resource values account for the compute required by Union.ai services, which is why they may be slightly lower than the values quoted by the cloud provider. They do not, however, account for overhead consumed by third-party services such as Ray.

### Information displayed

The dashboard provides the following information:

* **Instance Type**: The type of instance/VM/node as defined by your cloud provider.
* **Interruptible:** A boolean; true if the instance is interruptible.
* **Labels:** Node pool labels which can be used to target tasks at specific node types.
* **Taints:** Node pool taints which can be used to avoid tasks landing on a node if they do not have the appropriate toleration.
* **Minimum:** Minimum node pool size. Note that if this is set to zero, the node pool will scale down completely when not in use.
* **Maximum:** Maximum node pool size.
* **Allocatable Resources:**
    * **CPU**: The maximum CPU you can request in a task definition after accounting for overheads and other factors.
    * **Memory**: The maximum memory you can request in a task definition after accounting for overheads and other factors.
    * **GPU**: The maximum number of GPUs you can request in a task definition after accounting for overheads and other factors.
    * **Ephemeral Storage**: The maximum storage you can request in a task definition after accounting for overheads and other factors.
    * Note that these values are estimates and may not reflect the exact allocatable resources on any node in your cluster.

### Examples

In the screenshot above, there is a `t3a.xlarge` with `3670m` (3670 millicores) of allocatable CPU, and a larger `c5.4xlarge` with `15640m` of allocatable CPU. In order to schedule a workload on the smaller node, you could specify the following in a task definition:

```python
import union

@union.task(requests=union.Resources(cpu="3670m", mem="1Gi"),
            limits=union.Resources(cpu="3670m", mem="1Gi"))
def fits_on_small_node():
    ...
```

In the absence of confounding factors (for example, other workloads fully utilizing all `t3a.xlarge` instances), this task will spin up a `t3a.xlarge` instance and run the execution on it, taking all available allocatable CPU resources.

Conversely, if a user requests the following:

```python
import union

@union.task(requests=union.Resources(cpu="4000m", mem="1Gi"),
            limits=union.Resources(cpu="4000m", mem="1Gi"))
def needs_larger_node():
    ...
```

The workload will schedule on a larger instance (like the `c5.4xlarge`) because `4000m` exceeds the allocatable CPU on the `t3a.xlarge`, despite the fact that this instance type is [marketed](https://instances.vantage.sh/aws/ec2/t3a.xlarge) as having 4 CPU cores. The discrepancy is due to overheads and holdbacks introduced by Kubernetes to ensure adequate resources to schedule pods on the node.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/administration/cost-allocation ===

# Cost allocation

Cost allocation allows you to track costs and resource utilization for your task and workflow executions.

It provides the following breakdowns by project, domain, workflow/task name, and execution ID:

* **Total Cost**: An estimate of the total cost. Total cost is composed of estimated costs of each container's allocated memory, CPU, and GPU, plus that container's proportion of unused compute resources on nodes it occupies.
* **Allocated Memory**: The aggregate allocated memory (gigabyte-hours) across all containers in the selection. Allocated memory is calculated as max(requested memory, used memory).
* **Memory Utilization**: The aggregate used memory divided by the aggregate allocated memory across all containers in the selection.
* **Allocated CPU**: The aggregate allocated CPU (core-hours) across all containers in the selection. Allocated CPU is calculated as max(requested CPU, used CPU).
* **CPU Utilization**: The aggregate used CPU divided by the aggregate allocated CPU across all containers in the selection.
* **Allocated GPU**: The aggregate allocated GPU (GPU-hours) across all containers in the selection (allocated GPU equals requested GPU).
* **GPU SM Occupancy**: The weighted average SM occupancy (a measure of GPU usage efficiency) across all GPU containers in the selection.

Additionally, it provides a stacked bar chart of the cost over time grouped by workflow/task name.
The height of each bar is the sum of costs across each 15-minute interval.
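The bar heights can be reproduced with a simple bucketing computation. This is a hypothetical sketch of the grouping (cost figures in cents; the real chart is built server-side):

```python
from collections import defaultdict
from datetime import datetime

# Sketch: sum costs into 15-minute buckets per workflow/task name.
# The data points (timestamp, name, cost in cents) are hypothetical.
def bucket_costs(points):
    buckets = defaultdict(int)
    for ts, name, cost in points:
        # Truncate the timestamp to the start of its 15-minute interval.
        start = ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)
        buckets[(start, name)] += cost
    return dict(buckets)

points = [
    (datetime(2024, 1, 1, 12, 3), "train_model", 10),
    (datetime(2024, 1, 1, 12, 14), "train_model", 5),
    (datetime(2024, 1, 1, 12, 20), "etl", 2),
]
print(bucket_costs(points))
```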

## Suggested Usage

Cost allocation is designed to show where costs are being incurred and to highlight opportunities for cost reduction through right-sizing resource requests. All tables are sorted in descending order of total cost, so users can scan across the rows to quickly identify expensive workloads with low memory, CPU, or GPU utilization. Steps can then be taken to reduce the resource requests for particular workflows. Union.ai's task-level monitoring functionality can be used to view granular resource usage for individual tasks, making this exercise straightforward.

## Accessing Cost Data

Cost data is accessed by selecting the **Cost** button in the top right of the Union.ai interface:

![Cost link](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/cost-allocation/cost-link.png)

The **Cost** view displays three top level tabs: **Workload Costs**, **Compute Costs**, and **Invoices**.

### Workload Costs

This tab provides a detailed breakdown of workflow/task costs and resource utilization, allowing you to filter by project, domain, workflow/task name, and execution ID.
It offers views showing total cost, allocated memory, memory utilization, allocated CPU, CPU utilization, allocated GPU, and average GPU SM occupancy.
Additionally, the time series shows total cost per Workflow/Task in a stacked bar format with 15-minute bars.

![Workload costs 1](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/cost-allocation/workload-costs-1.png)

![Workload costs 2](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/cost-allocation/workload-costs-2.png)

### Compute Costs

This tab provides a summary of the cluster's overall compute costs.
It includes information on total cost of worker nodes, total uptime by node type, and total cost by node type.

![Compute costs](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/cost-allocation/compute-costs.png)

### Invoices

This tab displays the total cost of running workflows and tasks in your Union.ai installation broken out by invoice.

![Invoices](https://www.union.ai/docs/v1/union/_static/images/user-guide/administration/cost-allocation/invoices.png)

## Data collection and cost calculation

The system collects container-level usage metrics, such as resource allocation and usage, node scaling information, and compute node pricing data from each cloud provider.
These metrics are then processed to calculate the cost of each workflow execution and individual task.

## Total cost calculation

The total cost per workflow or task execution is the sum of allocated cost and overhead cost:

  * **Allocated Cost**: Cost directly attributable to your workflow's resource usage (memory, CPU, and GPU).
    This is calculated based on the resources requested or consumed (whichever is higher) by the containers running your workloads.

  * **Overhead Cost**: Cost associated with the underlying cluster infrastructure that cannot be directly allocated to specific workflows or tasks.
    This is calculated by proportionally assigning a share of the unallocated node costs to each entity based on its consumption of allocated resources.

## Allocated cost calculation

The cost of CPU, memory, and GPU resources is calculated using the following approach:

* **Resource consumption**: For CPU and Memory, the system determines the maximum of requested and used resources for each container.
GPU consumption is determined by a container’s allocated GPU resources.
Resource consumption is measured every 15 seconds.

* **Node-level cost**: Hourly costs for CPU, memory, and GPU are calculated using a statistical model based on a regression of node prices on their resource specs.
These hourly costs are converted to a 15-second cost for consistency with the data collection interval.
For node costs, the total hourly cost of each node type is used.

* **Allocation to Entities**: The resource costs from each container are then allocated to the corresponding workflow or task execution.
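Putting those points together, the per-sample cost of a single container can be sketched as follows. The hourly rate here is hypothetical; the real per-resource rates come from the pricing regression described above:

```python
# Sketch of the allocated-cost rule: price max(requested, used) per
# resource at a per-15-second rate. The hourly rate is hypothetical.
SAMPLES_PER_HOUR = 3600 // 15  # one sample every 15 seconds

def sample_cost(requested: float, used: float, hourly_rate: float) -> float:
    """Cost of one 15-second sample of one resource for one container."""
    allocated = max(requested, used)  # CPU/memory; GPU uses its allocation directly
    return allocated * hourly_rate / SAMPLES_PER_HOUR

# A container requesting 2 CPUs but using only 0.5 still pays for 2:
print(sample_cost(requested=2.0, used=0.5, hourly_rate=0.048))
```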

## Overhead Cost Calculation

Overhead costs represent the portion of the cluster's infrastructure cost not directly attributable to individual workflows or tasks.
These costs are proportionally allocated to workflows/tasks and applications based on their use of allocated resources. Specifically:

* The total allocated cost per node is calculated by summing the allocated costs (memory, CPU, and GPU) for all entities running on that node.

* The overhead cost per node is the difference between the total node cost and the total allocated cost on that node.

* The overhead cost is then proportionally allocated to each entity running on that node according to its share of the total allocated cost on that node.
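The three steps above can be sketched in a few lines (dollar figures are hypothetical):

```python
# Sketch of per-node overhead allocation, following the three steps above.
def allocate_overhead(node_cost, allocated):
    """Return each entity's total cost: allocated cost plus overhead share."""
    total_allocated = sum(allocated.values())  # step 1: sum allocated costs
    overhead = node_cost - total_allocated     # step 2: unallocated remainder
    return {                                   # step 3: proportional split
        entity: cost + overhead * (cost / total_allocated)
        for entity, cost in allocated.items()
    }

# A $1.00/interval node running two entities with $0.60 and $0.20 allocated:
print(allocate_overhead(1.00, {"wf_a": 0.60, "wf_b": 0.20}))
```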

## Limitations

The system currently assumes that all nodes in the cluster are using on-demand pricing.
Therefore, cost will be overestimated for spot and reserved instances, as well as special pricing arrangements with cloud providers.

Overhead cost allocation is an approximation and might not perfectly reflect the true distribution of overhead costs.
In particular, overhead costs are only evaluated within the scope of a single 15-second scrape interval.
This means the system can fail to attribute the cost of nodes that are left running after a given execution completes.

Union.ai services and fees such as platform fees are not reflected in the dashboards.
Cost is scoped to nodes that have been used in running executions.
The accuracy of cost allocation depends on the accuracy of the underlying resource metrics as well as per-node pricing information.

This feature limits lookback to 60 days; you can select any time range within that window to assess cost.

## Future Enhancements

Future enhancements may include:

* Longer lookback period (for example, 90 days)
* Customizable pricing per node type
* Data export
* Per-task cost allocation granularity

If you have an idea for what you and your business would like to see, please reach out to the Union.ai team.

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/administration/user-management ===

# User management

Union.ai comes with role-based access control management out of the box.
This authorization system is based on the following concepts:

* **Action**: An action that can be performed by a **user** or **application**.
For example, `register_flyte_inventory` is the action of registering tasks and workflows.
* **Role**: A set of **actions**.
The system includes built-in roles out of the box (see below) and also enables administrators to define custom roles.
* **Policy**: A set of bindings between a **role** and an **organization**, **project**, **domain**, or **project-domain pair**.
* **User** or **application**: An actor to which **policies** can be assigned.
Through the assigned policies, the user or application acquires permission to perform the specified **actions** on the designated resources.
A user is a person, registered and identified by **email address**.
An application is an automated process (a bot, service, or other type of program), registered and identified by **application ID**.
* **Organization**: A set of projects associated with a company, department, or other organization.
* **Project**: A set of associated workflows, tasks, launch plans, and other Union.ai entities.
* **Domain**: Categories representing the standard environments used in the development process: **development**, **staging**, and **production**.
* **Project-domain pair**: The set of projects is divided orthogonally by the three **domains**.
The result is a set of project-domain pairs.
For example: `flytesnacks/development`, `flytesnacks/staging`, and `flytesnacks/production`.
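The concepts above can be sketched as data structures (a hypothetical illustration of the model, not the actual Union.ai schema):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the authorization model; not the Union.ai schema.
@dataclass(frozen=True)
class Role:
    name: str
    actions: frozenset  # e.g. {"view_flyte_inventory", ...}

@dataclass(frozen=True)
class Binding:
    role: Role
    project: str  # a project name, or "*" for the whole organization
    domain: str   # "development", "staging", or "production"

@dataclass
class Policy:
    name: str
    bindings: list = field(default_factory=list)

viewer = Role("Viewer", frozenset({"view_flyte_inventory", "view_flyte_executions"}))
policy = Policy("Dev Viewer", [Binding(viewer, "flytesnacks", "development")])
```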

## Actions

The following is the full list of actions available in the Union.ai system:

* `administer_project`: Permission to archive and update a project and manage customizable resources.
* `manage_permissions`: Permission to manage user and machine applications and their policy assignments.
* `create_flyte_executions`: Permission to launch new flyte executions.
* `register_flyte_inventory`: Permission to register workflows, tasks, and launch plans.
* `view_flyte_executions`: Permission to view historical flyte execution data.
* `view_flyte_inventory`: Permission to view registered workflows, tasks, and launch plans.

## Built-in policies

Union.ai ships with three built-in policies: **Admin**, **Contributor**, and **Viewer**.

* An **Admin** has permission to perform all actions (`administer_project`, `manage_permissions`, `create_flyte_executions`, `register_flyte_inventory`, `view_flyte_executions`, `view_flyte_inventory`) across the organization (in all projects and domains).
In other words:
  * Invite users and assign roles.
  * View the **Monitoring** and **Billing** dashboards.
  * Do everything a **Contributor** can do.
* A **Contributor** has permission to perform the actions `create_flyte_executions`, `register_flyte_inventory`, `view_flyte_executions`, and `view_flyte_inventory` across the organization (in all projects and domains). In other words:
  * Register and execute workflows, tasks and launch plans.
  * Do everything a **Viewer** can do.
* A **Viewer** has permission to perform the actions `view_flyte_executions` and `view_flyte_inventory` across the organization (in all projects and domains).
In other words:
  * View workflows, tasks, launch plans, and executions.

## Multiple policies

Users and applications are assigned to zero or more policies.
A user or application with no policies will have no permissions but will not be removed.
For example, users with no policies will still appear in the list of users in the user management interface (see **Managing users in the UI** below).
A user or application with multiple policies will have the logical union of the permission sets of those policies.

> [!NOTE]
> The default roles that come out of the box are hierarchical.
> The **Admin** permission set is a superset of the **Contributor** permission set, and the **Contributor** permission set is a superset of the **Viewer** permission set.
> This means, for example, that if you make a user an **Admin**, then additionally assigning them **Contributor** or **Viewer** will make no difference.
> But this is only the case due to how these particular roles are defined.
> In general, it is possible to create roles where assigning multiple ones is meaningful.
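The "logical union" rule can be sketched with plain Python sets (using action names from the list earlier on this page):

```python
# Sketch: effective permissions are the union of the permission sets of
# all assigned policies; no policies means no permissions.
def effective_permissions(policy_permission_sets):
    effective = set()
    for actions in policy_permission_sets:
        effective |= actions
    return effective

viewer = {"view_flyte_inventory", "view_flyte_executions"}
contributor = viewer | {"create_flyte_executions", "register_flyte_inventory"}

# Assigning both policies yields the larger (Contributor) set:
print(sorted(effective_permissions([viewer, contributor])))
```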

## Custom roles and policies

It is possible to create new custom roles and policies.
Custom roles and policies can, for example, be used to mix and match permissions at the organization, project, or domain level.

Roles and policies are created using the [`uctl` CLI](https://www.union.ai/docs/v1/union/api-reference/uctl-cli) (not the [Union CLI](https://www.union.ai/docs/v1/union/api-reference/union-cli)).
Make sure you have the [`uctl` CLI installed and configured to point to your Union.ai instance](https://www.union.ai/docs/v1/union/api-reference/uctl-cli).

### Create a role

Create a role spec file `my_role.yaml` that defines a set of actions:

```yaml
# my_role.yaml
name: Workflow Runner
actions:
- view_flyte_inventory
- view_flyte_executions
- create_flyte_executions
```

Create the role from the command line:

```shell
$ uctl create role --roleFile my_role.yaml
```

### Create a policy

Create a policy spec file `my_policy.yaml` that binds roles to project/domain pairs.
Here we create a policy that binds the **Contributor** role to `flytesnacks/development` and binds the **Workflow Runner** role (defined above) to `flytesnacks/production`:

```yaml
# my_policy.yaml
name: Workflow Developer Policy
bindings:
- role: Workflow Runner
  resource:
    project: flytesnacks
    domain: production
- role: contributor # built-in system role
  resource:
    project: flytesnacks
    domain: development
```

Create the policy from the command line:

```shell
$ uctl create policy --policyFile my_policy.yaml
```

Any user or application to which this policy is assigned will be granted **Contributor** permissions in `flytesnacks/development` and the more restrictive **Workflow Runner** permissions in `flytesnacks/production`.

### Assign the policy to a user

Once the policy is created you can assign it to a user in the UI (see **Managing users in the UI** below) or from the command line:

```shell
$ uctl append identityassignments \
       --user "bob@contoso.com" \
       --policy "Workflow Developer Policy"
```

Similarly, you can assign the policy to an application through the command line (there is currently no facility to assign policies to applications in the UI):

```shell
$ uctl append identityassignments \
       --application "contoso-operator" \
       --policy "Workflow Developer Policy"
```

## Initial onboarding

The initial Union.ai onboarding process will set up your organization with at least one **Admin** user who will have permission to invite teammates and manage their roles.

## Managing users in the UI

You can also manage users and their policy assignments through the Union.ai UI.
Navigate to **Settings > User Management** to access the user management interface.

To manage users you must have the **Admin** policy (or a custom policy that includes the `manage_permissions` action).

From the user management interface you can:

* **View users**: See the list of all users and their assigned policies. You can search the list and filter by policy.
* **Add a user**: Add a new user by specifying their name, email, and the policies to assign. The new user will receive an email invite from Okta. They should accept the invite and set up a password, at which point they can access the Union.ai UI.
* **Change assigned policies**: Select a user to edit their assigned policies.
* **Remove a user**: Select a user and choose the option to remove them.

You can also assign policies to users from the command line using `uctl append identityassignments` (see **Assign the policy to a user** above).

=== PAGE: https://www.union.ai/docs/v1/union/user-guide/administration/applications ===

# Applications

A Union.ai application is an identity through which external systems can perform actions in the system.
An application can be bound to policies and granted permissions just like a human user.

Applications are managed through the [`uctl` CLI](https://www.union.ai/docs/v1/union/api-reference/uctl-cli).

## List existing apps

```shell
$ uctl get apps
```

Output:

```text
 -------------------- -------------------- ---------------- -----------------------------------------
| ID (4)             | CLIENT NAME        | RESPONSE TYPES | GRANT TYPES                             |
 -------------------- -------------------- ---------------- -----------------------------------------
| contoso-flyteadmin | contoso flyteadmin | [CODE]         | [CLIENT_CREDENTIALS AUTHORIZATION_CODE] |
 -------------------- -------------------- ---------------- -----------------------------------------
| contoso-uctl       | contoso uctl       | [CODE]         | [AUTHORIZATION_CODE]                    |
 -------------------- -------------------- ---------------- -----------------------------------------
| contoso-operator   | contoso operator   | [CODE]         | [CLIENT_CREDENTIALS AUTHORIZATION_CODE] |
 -------------------- -------------------- ---------------- -----------------------------------------
```

> [!NOTE]
> These three apps are built into the system.
> Modifying them by editing, deleting, or recreating them will disrupt the system.

## Exporting the spec of an existing app

```shell
$ uctl get apps contoso-operator --appSpecFile app.yaml
```

Output:

```yaml
clientId: contoso-operator
clientName: contoso operator
grantTypes:
  - CLIENT_CREDENTIALS
  - AUTHORIZATION_CODE
redirectUris:
  - http://localhost:8080/authorization-code/callback
responseTypes:
  - CODE
tokenEndpointAuthMethod: CLIENT_SECRET_BASIC
```

## Creating a new app

First, create a specification file called `app.yaml` (for example) with the following contents (you can adjust the `clientId` and `clientName` to your requirements):

```yaml
clientId: example-operator
clientName: Example Operator
grantTypes:
- CLIENT_CREDENTIALS
- AUTHORIZATION_CODE
redirectUris:
- http://localhost:8080/authorization-code/callback
responseTypes:
- CODE
tokenEndpointAuthMethod: CLIENT_SECRET_BASIC
```

Now, create the app using the specification file:

```shell
$ uctl create app --appSpecFile app.yaml
```

The response should look something like this:

```text
 ------------------ ------------------- ------------- ---------
| NAME             | CLIENT NAME       | SECRET      | CREATED |
 ------------------ ------------------- ------------- ---------
| example-operator |  Example Operator | <AppSecret> |         |
 ------------------ ------------------- ------------- ---------
```

Copy the `<AppSecret>` to an editor for later use.
This is the only time that the secret will be displayed.
The secret is not stored by Union.ai.

## Update an existing app

To update an existing app, edit its specification file as desired, leaving the `clientId` unchanged (it identifies which app to update), and then run:

```shell
$ uctl apply app --appSpecFile app.yaml
```

## Delete an app

To delete an app use the `uctl delete app` command and specify the app by ID:

```shell
$ uctl delete app example-operator
```

