# Automation Execution & Process Model

{% hint style="success" %}
**Audience:** Administrators, Developers, Solution Architects

**Purpose:** Explains the runtime mechanics of how automations execute in Kizen so readers can design automations that behave predictably and debug them confidently when they do not.
{% endhint %}

## Overview

Knowing how to configure an automation is only part of what it takes to build one that works reliably in production. The async, queue-based execution model that powers Kizen automations has specific implications for ordering, data consistency, and timing that affect how automations should be designed, and how unexpected behavior should be diagnosed.

***

## Asynchronous Processing Model

Kizen automations do not execute synchronously. When a trigger condition is met, the automation is queued for processing rather than running immediately. There is an inherent delay between when the triggering event occurs and when execution begins. This is normal, expected, and by design. Automation logic that assumes immediate or real-time execution will produce unreliable results.
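
The queue-based model can be sketched in a few lines. This is an illustrative sketch only, since Kizen's internals are not public; names such as `trigger_event` and `worker` are hypothetical. The point is the shape of the model: triggers enqueue work and return immediately, and a worker picks the work up some time later.

```python
import queue
import threading
import time

execution_queue: "queue.Queue" = queue.Queue()
processed: list[str] = []

def trigger_event(record_id: str) -> None:
    """Queue an execution instead of running it inline."""
    execution_queue.put({"record_id": record_id, "queued_at": time.time()})
    # Control returns immediately -- the automation has NOT run yet.

def worker() -> None:
    while True:
        job = execution_queue.get()
        if job is None:           # shutdown sentinel
            execution_queue.task_done()
            break
        # There is always some nonzero gap between trigger and execution.
        processed.append(job["record_id"])
        execution_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
trigger_event("contact-42")       # returns instantly; nothing has run yet
execution_queue.put(None)
execution_queue.join()            # the actual work happens asynchronously
assert processed == ["contact-42"]
```

Any design that assumes `trigger_event` and the work it queues happen at the same instant will misread what the system actually does.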

### Processing Priority

Automations can be assigned one of two processing priority levels, which affects how quickly an execution moves through the queue.

| Priority               | When to Use                                                                                                                                                                                                                                      |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Regular**            | Standard operational automations that should process with low latency in response to user or system activity                                                                                                                                     |
| **Low / Data Seeding** | Bulk operations or data migration scenarios where slower processing is acceptable. Executions at this priority level process separately from Regular priority executions, so large bulk jobs do not slow down day-to-day operational automations |

### Priority Inheritance

When one automation starts another automation, the child automation inherits the processing priority of the parent. A bulk low-priority automation that starts a chain of child automations will result in all executions in that chain processing at low priority. This is an important consideration when designing multi-automation chains that mix operational and bulk logic.
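
Both behaviors, separate queues per priority and inheritance from parent to child, can be sketched as follows. This is hypothetical code, not Kizen's implementation; the names are chosen for illustration.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

REGULAR, LOW = "regular", "low"
queues = {REGULAR: deque(), LOW: deque()}  # separate queues per priority

@dataclass
class Execution:
    automation: str
    priority: str
    parent: Optional["Execution"] = None

def enqueue(automation: str, priority: Optional[str] = None,
            parent: Optional[Execution] = None) -> Execution:
    # A child started by another automation inherits the parent's priority.
    if parent is not None:
        priority = parent.priority
    ex = Execution(automation, priority or REGULAR, parent)
    # Bulk work lands in its own queue, so it cannot block Regular work.
    queues[ex.priority].append(ex)
    return ex

bulk = enqueue("bulk-import", priority=LOW)
child = enqueue("send-welcome", parent=bulk)    # inherits LOW from its parent
ops = enqueue("new-lead-routing")               # defaults to REGULAR

assert child.priority == LOW
assert [e.automation for e in queues[REGULAR]] == ["new-lead-routing"]
```

Note that `send-welcome` ends up at low priority even though nothing in its own configuration asked for that, which is exactly the chain effect to watch for when mixing operational and bulk logic.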

***

## Execution Lifecycle

An execution moves through a defined set of states from initiation to completion. Understanding what each state means is essential for monitoring automation health and diagnosing unexpected behavior.

| State         | What It Means                                                                                                                                                                                                                                                                                                                                   |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Active**    | The execution is currently being processed. Steps are running or the execution is waiting in the queue for the next step to be picked up.                                                                                                                                                                                                       |
| **Paused**    | The execution has stopped processing and is waiting before it can continue. Common causes include a delay step, a goal step waiting for a condition to be met, a step that errored with a pause-on-error configuration, or a manual pause. Paused executions remain associated with the record and resume when the pause condition is resolved. |
| **Completed** | The execution has finished processing and reached a terminal state. Completed does not mean every step succeeded; it means the execution ran to its end. Step-level outcomes must be inspected in execution history to understand what actually happened.                                                                                       |
| **Canceled**  | The execution has been terminated and will not resume. An execution can be canceled manually by a user, by a Stop Execution action step, or automatically when the record it is operating on is archived.                                                                                                                                       |
| **Retrying**  | The execution has encountered a step failure and the system is attempting to re-run that step. This state persists until the retry succeeds or the system exhausts its retry attempts and transitions to another state.                                                                                                                         |
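
The table above can be read as a small state machine. The transition map below is an assumption inferred from the state descriptions, not a published specification, but it captures the key facts: Completed and Canceled are terminal, Paused resumes to Active, and Retrying resolves either back to Active or to termination.

```python
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    PAUSED = "paused"
    RETRYING = "retrying"
    COMPLETED = "completed"
    CANCELED = "canceled"

# Assumed transitions, inferred from the state descriptions above.
ALLOWED = {
    State.ACTIVE:    {State.PAUSED, State.RETRYING, State.COMPLETED, State.CANCELED},
    State.PAUSED:    {State.ACTIVE, State.CANCELED},   # resumes, or is canceled
    State.RETRYING:  {State.ACTIVE, State.CANCELED},   # retry succeeds, or retries exhaust
    State.COMPLETED: set(),                            # terminal
    State.CANCELED:  set(),                            # terminal: never resumes
}

def transition(current: State, nxt: State) -> State:
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt

state = transition(State.ACTIVE, State.PAUSED)   # e.g. a delay step begins
state = transition(state, State.ACTIVE)          # pause condition resolved, resumes
state = transition(state, State.COMPLETED)       # ran to its end (terminal)
```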

***

## Save and Activation Behavior

The rules governing how automations behave when saved, activated, or deactivated have direct implications for in-flight executions. These behaviors should be understood before making changes to a live automation.

* **Deactivating stops listening:** Deactivating an automation immediately stops its triggers from listening for new events. In-flight executions that are already running or paused are not canceled — they continue until complete, canceled, or manually managed.
* **No backlog on reactivation:** Reactivating an automation does not retroactively process events or records that occurred while it was inactive. The automation begins listening fresh from the moment of reactivation.
* **Atomic save model:** There is no versioning of automation configurations. When saved, the latest configuration becomes active immediately. Steps already executed are recorded as they ran. Steps not yet evaluated will reflect the updated configuration when they run.
* **Soft-deleted steps retained in history:** Deleting a step does not remove its execution history. If a past execution ran that step, the record is preserved. Execution history that references a step no longer in the active configuration is expected behavior, not a data integrity issue.

***

## Step Execution Model

Each step in an automation is processed independently. Understanding the isolation and ordering characteristics of step execution is essential for designing automations that behave predictably, particularly in complex or branching workflows.

* **Each step runs in its own transaction:** If a step fails, only that step fails. Prior steps are not rolled back and their actions are not undone. An automation can partially complete, meaning some steps succeed while others do not, and this should be accounted for in your automation's design.
* **Isolation between steps:** A failure in one step does not directly affect other steps that are not dependent on it. However, because steps can modify shared data such as field values on the context record, one step's actions can indirectly affect a subsequent step that reads the same data.
* **Ordering is guaranteed only linearly:** Along a single sequential path, steps execute in order. Across parallel branches, the order in which branches complete is not deterministic. Ordering is also not guaranteed across multiple executions triggered simultaneously by separate events on the same record.
* **Branch timing considerations:** When execution splits into parallel branches, the order in which branches run is not deterministic. If two branches both write to the same field, a race condition is possible and the outcome may not be what you intended.
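
The branch race described above is easy to demonstrate with two threads standing in for two parallel branches. Whichever branch writes last wins, and that order depends on scheduling, not on the order the branches appear in the automation.

```python
import threading

record = {"status": "new"}
lock = threading.Lock()

def branch(value: str) -> None:
    # Each write is its own isolated "transaction", but nothing
    # guarantees which branch's write lands last.
    with lock:
        record["status"] = value

a = threading.Thread(target=branch, args=("qualified",))
b = threading.Thread(target=branch, args=("disqualified",))
a.start(); b.start()
a.join(); b.join()

# The final value is either "qualified" or "disqualified" -- which one
# depends on scheduling, and may differ from run to run.
assert record["status"] in ("qualified", "disqualified")
```

If a single final value matters, the write should happen once, after the branches rejoin, rather than inside two branches that can race.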

***

## Execution Consistency and Data Mutation Caveats

Because each step runs in its own transaction and steps are not processed instantaneously, the state of a record at the time one step executes may be different from the state it was in when the previous step executed. The following explains the consistency limitations of the execution model and how to design around them:

* **No guaranteed consistency between steps:** Field values can change between steps. Data read or evaluated in an early step may not remain unchanged by the time a later step runs. Automations that assume a stable data state across steps may produce unexpected results in environments where records are actively being modified.
* **Competing automations may alter data mid-execution:** Multiple automations can run concurrently against the same record. An action taken by one automation mid-execution can change the data that a concurrently running automation reads or evaluates in a subsequent step. This is expected behavior in a concurrent async system, but it has significant implications for automation design in high-activity environments.
* **Conditions may evaluate against changed data:** Because data can change between the time a trigger fires and the time a condition step evaluates, a condition may produce a different result than intended. The condition evaluates against the current value of a field at the moment it runs and not the value that existed when the trigger fired. Automations that depend on the state of data at trigger time should use variables to capture that state early rather than relying on conditions to read it later.
* **Variables as mitigation:** Automation variables are the primary mechanism for managing data consistency risks. By capturing field values into variables at trigger time or early in the execution, subsequent steps can operate against a known, stable snapshot of the data rather than reading live field values that may have changed. This is the recommended design pattern for any automation where data consistency across steps is important. For more information, see Automation Variables **(Coming Soon)**.
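
The snapshot pattern from the last bullet can be sketched as follows. The names are illustrative, not a Kizen API: the idea is simply that early steps copy field values into variables, and later steps read the copy, not the live record.

```python
live_record = {"stage": "demo_booked", "owner": "alice"}

def run_automation(record: dict) -> str:
    # Step 1: capture variables early, while the trigger-time state still holds.
    snapshot = {"stage": record["stage"], "owner": record["owner"]}

    # ...meanwhile, a competing automation mutates the live record...
    record["stage"] = "closed_lost"

    # Later step: reads the stable snapshot, not the (changed) live value.
    return f"Notify {snapshot['owner']}: follow up on {snapshot['stage']}"

message = run_automation(live_record)
assert "demo_booked" in message              # snapshot value, unaffected by the mutation
assert live_record["stage"] == "closed_lost" # the live record really did change
```

Had the later step read `record["stage"]` directly, the message would reflect the competing automation's write rather than the state that triggered the execution.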

***

## Retry Behavior

Step failures fall into two categories that are handled differently. Understanding the distinction helps you interpret execution history and design automations that fail gracefully.

* **Expected failures pause immediately, with no retry:** Expected failures occur when something is wrong with a step's configuration, for example, attempting to modify a field that has been deleted or referencing a variable that cannot be resolved. These failures pause the execution on the failing step; the step configuration must be corrected before the execution can be restarted.
* **Unexpected failures trigger automatic retries:** Unexpected failures occur when something goes wrong at the infrastructure level, for example, a database connection issue, a deadlock, or an unexpected constraint violation. The system retries the step four times with increasing delays between attempts: 30 seconds, 90 seconds, 8 minutes, and 62.5 minutes. After the fourth retry, the step is marked as failed and the execution is terminated.
* **Conditions do not support error handling:** Condition steps do not have configurable error handling and do not support retries. If a condition errors, execution pauses and requires manual intervention to select a path and resume. For full detail, see Automation Conditions **(Coming Soon)**.
* **Triggers do not fail:** Trigger evaluation does not produce failures. The trigger either fires and creates an execution, or it does not fire. There is no trigger failure state. Webhook triggers receive the inbound request regardless of whether the payload is valid. Any failure related to missing or invalid webhook data surfaces at the variable evaluation stage, not at the trigger. For more information, see Automation Triggers **(Coming Soon)**.
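
The retry schedule for unexpected failures can be sketched as a simple loop over the four documented delays. The sleep is injected so the schedule can be observed without waiting; distinguishing expected from unexpected failures is omitted here for brevity.

```python
import time

# Documented delays: 30s, 90s, 8 minutes, 62.5 minutes.
RETRY_DELAYS_SECONDS = [30, 90, 8 * 60, int(62.5 * 60)]

def run_step_with_retries(step, sleep=time.sleep):
    try:
        return step()
    except Exception:
        for delay in RETRY_DELAYS_SECONDS:
            sleep(delay)          # increasing backoff between attempts
            try:
                return step()
            except Exception:
                continue
        # Four retries exhausted: the step fails and the execution terminates.
        raise RuntimeError("retries exhausted: step failed, execution terminated")

# Simulated flaky step that succeeds on its third attempt overall.
attempts = {"n": 0}
def flaky_step():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient infrastructure failure")
    return "ok"

slept: list[int] = []
result = run_step_with_retries(flaky_step, sleep=slept.append)
assert result == "ok"
assert slept == [30, 90]   # two backoff delays were applied before success
```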

***

## Throttling

Throttling controls how frequently a trigger can initiate executions for a given record within a defined time window. Understanding how throttling works, and what it does not do, is critical for using it correctly.

* **Debounce behavior:** Throttling functions as a quiet time or mute window. When throttling is configured on a trigger, the first event fires normally, and any further trigger events for the same record within the defined window are suppressed entirely. They do not create new executions and they are not queued for later processing. Throttling is not a spacing or delay mechanism: executions are never held and released after the window expires. Without this distinction in mind, suppressed events can look like missing executions rather than intentional behavior.
* **Trigger-time vs. queue-time nuance:** Because trigger evaluation is asynchronous, there is a gap between when a trigger event occurs and when it is evaluated from the queue. The throttle window interacts with this gap in ways that can affect behavior in high-frequency event environments where events may be queued and evaluated with a delay. The event timestamp and the evaluation timestamp are distinct, and both are accessible in execution history.
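
The mute-window behavior can be sketched as follows. This is hypothetical code, not Kizen's: the key property it demonstrates is that a suppressed event is dropped outright, never held and released after the window expires.

```python
class Throttle:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.last_fired: dict[str, float] = {}

    def should_fire(self, record_id: str, event_time: float) -> bool:
        last = self.last_fired.get(record_id)
        if last is not None and event_time - last < self.window:
            return False          # suppressed: not queued, not delayed -- gone
        self.last_fired[record_id] = event_time
        return True

t = Throttle(window_seconds=60)
assert t.should_fire("contact-1", event_time=0) is True    # first event fires
assert t.should_fire("contact-1", event_time=30) is False  # inside window: suppressed
assert t.should_fire("contact-1", event_time=61) is True   # window expired: fires again
```

The event at `event_time=30` never produces an execution at any point, which is why a misread throttle window looks like lost executions rather than delayed ones.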

***

## What's Next

With a complete understanding of how executions are created, processed, and managed, the next step is understanding how delays work within that execution model, including static and variable-based delays, how delays interact with goals, and how business calendars and timezones affect time-based execution behavior.

Continue to Delays and Time-Based Behavior **(Coming Soon)** if you are designing automations that use delays, scheduled logic, or time-sensitive workflows.

<details>

<summary><strong>Related Topics</strong></summary>

* Automation Conditions **(Coming Soon)**
* Automation Triggers **(Coming Soon)**
* Automation Variables **(Coming Soon)**
* Delays and Time-Based Behavior **(Coming Soon)**

</details>
