# Feature Flags

{{< glossary_definition "feature_flag" >}} {{< badge "addon" >}}

## About Feature Flags

The format of a Feature Flag is a conditional *if* statement you add to your app or website code. It contains your flag name and any properties and wraps around the code you want the flag to control. Airship provides the flag as a code snippet for your developer to add to your app or website.
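
As a sketch of that wrapping pattern (the flag shape and property name here are illustrative assumptions, not the exact snippet Airship generates for your project):

```typescript
// Illustrative only: the real flag object comes from the code snippet
// Airship generates in the dashboard; this shape is an assumption.
interface Flag {
  isEligible: boolean;
  properties: Record<string, unknown>;
}

// The flag wraps the feature code in a conditional.
function renderBanner(flag: Flag): string {
  if (flag.isEligible) {
    // Flagged path: show the new feature, using a flag property.
    return "Banner: " + String(flag.properties["title"] ?? "Default title");
  }
  // Fallback path: existing behavior, unchanged.
  return "No banner";
}
```

Eligible users get the flagged experience; everyone else falls through to the code that was already there.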

Set up Feature Flag experiments in two steps:

1. **Define the flag** — Set the flag's name, description, and properties that can be used by your app or website code within the flag.

1. **Create one or more Configurations for the flag** — Determine the audience, schedule, and property values for each Configuration. Configuration types:

   * [**A/B tests**](#ab-tests) — Compare audience behaviors when a feature is hidden or present, or experiment with distinct feature experiences, such as new home screen designs, by setting different property values for each variant. Reports provide detailed data for evaluating engagement and the overall success of a feature based on your [Goals](https://www.airship.com/docs/reference/glossary/#goals).

   * [**Rollouts**](#rollouts) — Release a feature to a targeted audience and/or a percentage of an audience, then monitor interaction event counts or other concerns, such as support capacity. In addition to experimentation, you can use rollouts to present different content versions to separate audiences. For example, for a loyalty program, individual rollouts can control which content your Gold and Silver users see.
      
   Configurations can be open-ended or time-bound: they can start immediately or at a scheduled date and time, and end manually or at a scheduled date and time. Arrange Configurations in priority order to determine which one applies to a user who is included in multiple Configuration audiences. Each flag can have up to 10 active Configurations.

Manage a Configuration's audience, schedule, and properties from the Airship dashboard. If something unexpected happens with the feature, or if you have reason to end access before its scheduled end time, you can easily disable it for all users. For apps, this eliminates the need to release an app update and wait for users to install the new version.

You can also [use Feature Flags to determine a messaging audience or trigger automation](#using-feature-flags-with-messaging).

> **Tip:** You can also create rollouts using [Sequence Control Groups](https://www.airship.com/docs/guides/experimentation/control-groups/) and [Scenes](https://www.airship.com/docs/guides/features/messaging/scenes/rollouts/).


### Audience

When creating a flag Configuration, set your audience to members of a [Test Group](https://www.airship.com/docs/reference/glossary/#preview_test_groups). When you are ready to go live, select **All Users** for your entire audience, or select **Target Specific Users** and set conditions. Then set the percentage of that audience that can access the feature determined by the flag. For A/B tests, the percentage is divided evenly between variants by default, or you can set your own values. Set your audience according to the purpose of your A/B test or rollout.

Audience members are randomly selected. Any user included in the set percentage is considered *eligible*, meaning they have access to the feature. For A/B tests, you have the option to hide the feature from the control variant.

Setting a percentage helps you limit the audience so you can effectively manage feedback or limit exposure to potential bugs. For a rollout, gradually increase the percentage to expand your audience. For example, you could set a condition where only users who have freshly installed your app will be able to access the flagged feature. If you set a percentage of 10%, only 10% of users who meet the condition will be able to access the feature.
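
Airship performs this selection for you. As a rough illustration of how percentage rollouts are commonly implemented (this is not Airship's actual algorithm), a deterministic hash can place each stable user ID into a bucket from 0 to 99, and the rollout includes users whose bucket falls below the percentage:

```typescript
// Illustration of percentage bucketing, not Airship's implementation.
// Hash a stable user ID into a bucket in the range 0-99.
function bucketFor(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

// A user is in the rollout when their bucket is below the percentage.
// Because the hash is deterministic, the same user stays in (or out)
// as the percentage only ever increases.
function isInRollout(userId: string, percentage: number): boolean {
  return bucketFor(userId) < percentage;
}
```

This is why gradually raising the percentage expands the audience without churning users who already had access.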

For flags with multiple Configurations, if a user falls into more than one Configuration's audience, only the one with the highest priority will be active for that user. By default, each new Configuration is set to the lowest priority. See [Set priority order](#manage-configurations) in *Manage Configurations* below.

For more about audience and eligibility, see [Rollout example implementation](#rollout-example-implementation) below.

#### Conditions

For the Target Specific Users audience option, see [Targeting Specific Users](https://www.airship.com/docs/guides/audience/segmentation/target-specific-users/) for the list of conditions you can set.

Additionally, you can use the Feature Flag access condition to include or exclude users who are part of one or more specified flag audiences. Using this condition enables coordinated experiences across multiple features during phased rollouts or A/B tests. Run layered or mutually exclusive experiments, chain flags together, or gate sub-features behind primary ones.

For exclusive experiments, use the Feature Flag access condition to make sure users in one experiment are not also in an experiment running for a different flag.

To roll out sub-features that add to another flagged feature, use the Feature Flag access condition to make sure the sub-features are only made available to users who are part of the initial feature's audience. For a retail app, sub-features for a new checkout flow could be an in-store pickup option or AI-powered product recommendations. 

Feature Flag access condition requirements, behavior, and restrictions:

* **Evaluation** — The condition evaluates users who are members of all Configurations for a specified flag. You cannot select an individual Configuration.
* **Configurations** — All users who are members of the Active, Scheduled, and Ended Configuration audiences for a specified flag are included in (or excluded from, according to the condition settings) the condition audience.
   * The specified flag must have at least one currently Active, Scheduled, or Ended Configuration.
   * When you archive an Ended Configuration, its audience is no longer included in (or excluded from, according to the condition settings) the condition audience.
* **Ineligible flags** — Flags that contain a Configuration that uses the Segments condition cannot be selected for the Feature Flag Access condition.
* **Scenes targeting a Configuration audience** — When [configuring a Scene's audience](https://www.airship.com/docs/guides/messaging/in-app-experiences/scenes/create/#audience), you cannot select a Configuration that uses the Feature Flag access condition.

Supported channels and SDK minimums for each condition:

| Condition | Supported Channels |
| --- | --- |
| **App version** | App |
| **Device tags** | App, Web |
| **Feature Flag access** | App [iOS SDK 19.4+](/docs/docs/developer/sdk-integration/apple/ios-changelog/#19.4.0) [Android SDK 19.7+](/docs/docs/developer/sdk-integration/android/changelog/#19.7.0), Web [Web SDK 2.7+](/docs/docs/developer/sdk-integration/web/changelog/#v2.7.0) |
| **Locale** | App, Web |
| **Location opt-in status** | App, Web |
| **New users** | App, Web |
| **Platforms** | App, Web |
| **Push opt-in status** | App, Web |
| **Segments** | App |

### Properties

You can add properties that can be used by your app's or website's code within a Feature Flag, bypassing the need for traditional code changes and release processes. The flag code you pass on to your development team includes references to the properties. Once implemented, edit the flag Configuration's properties in the dashboard to make immediate changes to your app or website, like variables that can be updated remotely. As a general example, you could create properties for a promotion's title, description, and button URL, then change their values when the promotion ends and a new one launches. You can override flag properties per Configuration. For A/B tests, you can set property overrides for each variant.

When creating or editing a flag, set a name, type, and default value for each property. Properties can be a string, number, boolean, or JSON. You can create up to 50 properties per flag.

Properties use cases:

* **Coffee mobile ordering app** — Create a flag with properties for controlling the promotions and rewards for loyalty membership. Using just the Airship dashboard, you can transition from pumpkin spice promotions to holiday themes in sync with seasonal campaigns. Celebrate special limited time milestones, such as the app's 10th anniversary, by offering "10x rewards" points.

* **Music streaming app** — Create a flag with properties to introduce a new premium subscription tier. Launch the feature to 25% of the audience with flag properties "Price Point" and "Trial Period Duration," then gauge engagement data and user feedback as users respond to the new tier. Update the properties to fine-tune the subscription offer, and roll out the feature to 100% of users once you land on the right details. You can also use a "Promotional Messaging" property to periodically update the copy promoting the new subscription.
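
A sketch of how app code might consume such properties, with typed fallbacks so a missing flag or property degrades gracefully (the property names come from the examples above; the accessor functions are hypothetical, not Airship APIs):

```typescript
// Hypothetical helpers; property names are from the examples above.
type Properties = Record<string, unknown>;

// Read a property with a typed fallback so the app behaves sensibly
// if the flag or an individual property is missing.
function stringProp(props: Properties, name: string, fallback: string): string {
  const value = props[name];
  return typeof value === "string" ? value : fallback;
}

function numberProp(props: Properties, name: string, fallback: number): number {
  const value = props[name];
  return typeof value === "number" ? value : fallback;
}

const props: Properties = { "Price Point": 9.99, "Trial Period Duration": 30 };
const price = numberProp(props, "Price Point", 12.99);
// "Promotional Messaging" is not set, so the fallback copy is used.
const promoCopy = stringProp(props, "Promotional Messaging", "Try Premium today");
```

Editing a property value in the dashboard then changes `price` or `promoCopy` on the next flag refresh, with no code release.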

### Interaction events

Track interaction with the flagged feature by generating an event from the SDK. Your developer must explicitly call the event tracking method. See [Implement the code](#implement-the-code) below.

While it is called an "interaction" event, what you track is up to you and depends on the feature. Some examples of how to implement different use cases:

* **Tracking when a user encounters a change** — For a flag that changes a button's color from blue to green or adds a new button to a screen, track when a user visits the screen containing the button, since it is a visible change.

* **Tracking when a user interacts with a change** — For a flag that changes a button's destination, track when the user selects the button, since it is a non-visible change.

The events have a flag ID and flag name, which identify which flagged feature a user interacted with. They also have a boolean `eligible` field, which indicates whether or not the user was in the Feature Flag audience and had access to the feature. The `variant_id` is the UUID of the A/B test variant. This ID is listed for each variant in [A/B test reports](#ab-test-reports-and-technical-overview). See also [Feature Flag Interaction Event](https://www.airship.com/docs/developer/rest-api/connect/schemas/events/#feature-flag-interaction) in the [Real-Time Data Streaming](https://www.airship.com/docs/reference/glossary/#rtds) API reference.
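
A sketch of the fields described above (field names follow the text; see the linked schema for the authoritative payload shape):

```typescript
// Field names follow the description above; the linked Feature Flag
// Interaction Event schema is the authoritative reference.
interface FeatureFlagInteraction {
  flag_id: string;
  flag_name: string;
  eligible: boolean;
  variant_id?: string; // UUID of the A/B test variant, when applicable
}

// Example of branching on eligibility when consuming the event stream.
function describe(event: FeatureFlagInteraction): string {
  return event.eligible
    ? `User saw the flagged feature for ${event.flag_name}`
    : `User saw the default experience for ${event.flag_name}`;
}
```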

Deciding what you are tracking is especially important when [using the flag to trigger a message](#using-feature-flags-with-messaging), since you can trigger based on whether or not the user is part of the Feature Flag audience.

### Draft Configurations

You can add flag code to your app or website even while a Configuration is in Draft state, and then make it active later. For apps, make it active after delivering your new code to devices in an app update.

### Workflow

The following is the general workflow for using Feature Flags:

1. [Create a flag in the dashboard](#create-feature-flags) and copy the code snippets and Mobile docs link. Code is provided for Web, Android (Kotlin and Java), iOS, Cordova, Flutter, and React Native. You can also access the code after saving.

1. Give the code snippets and docs links to your developer so they can [add the flag to your app or website](#implement-the-code).

1. [Create at least one Configuration](#create-configurations), setting the audience to members of a [Test Group](https://www.airship.com/docs/reference/glossary/#preview_test_groups). For A/B tests, all variants are distributed randomly to Test Group users by default, or you can specify which variant to make available to them.
   
   After you update your website with the feature and flag code, the feature or A/B test will be available to the configured audience the next time they visit the website, according to the Configuration's schedule. For apps, the same is true after users install the version of your app that contains the updated code.

1. After verifying the feature or A/B test works as intended with your Test Group, change the Configuration audience to All Users or Target Specific Users and set the percentage and conditions. [Manage the Configuration](#manage-configurations) from the Airship dashboard. Repeat this step for each Configuration.

1. [View reports](#view-reports) and evaluate performance. For A/B tests, roll out the winning variant to all test audience members.

1. After the flag has served its purpose, [archive it](#manage-feature-flags) and remove the flag code from your app or website.

## Rollouts

Use rollouts for experimentation and for controlling content versions for different audiences. Common use cases:

* **Resource management** — Release features to segments of your audience over time to prevent a strain on resources. Increase the audience according to database query volume, support ticket volume, or limited initial product supply.
* **Content testing** — Test features with a small segment of your audience before releasing the feature to a broader audience.
* **Time-limited promotions** — Turn on and off time-restricted features, either manually or according to an automated schedule, such as displaying a promotional banner only during a sale weekend.
* **Premium features** — Provide premium feature access to paid users only, based on membership tiers.
* **Holiday promotions** — Create a flag for promotional banners in your app. Launch the banners to 100% of your U.S. audience after Thanksgiving and to 100% of the E.U. audience in early November. This method ensures that each region receives the promotion at the optimal time, maximizing engagement and driving campaign success.
* **Retail app loyalty program** — Create a flag to launch a new loyalty program in your retail app. Based on observed differences in user behavior, release the program to your most loyal and lowest-tier users at different rates: create an individual Configuration for each audience segment under the same flag, rolling out the experience to 50% of your most loyal users and 10% of your lowest-tier users. You can also use properties to customize the promotional text and display different content for each segment.

### Rollout example implementation

The following example is for introducing a redesigned Settings screen in a mobile app. To let all new users experience the new Settings screen:

1. Create a Feature Flag with any relevant properties and default values.
1. Create a rollout Configuration with these Audience settings:
   1. Select **Target Specific Users**.
   1. Set the Configuration audience percentage to `100`.
   1. Add the condition **New users**.
1. In your app code, set the Feature Flag interaction event to occur when users view the Settings screen.

100% of users who have freshly installed your app will be able to see the redesigned Settings screen. They are *eligible* users. For each [interaction event](#interaction-events):

* When `eligible` has a value of `true`, the screen was viewed by a user who **is** in the Configuration audiences for the Feature Flag. The user experienced the redesigned Settings screen.

* When `eligible` has a value of `false`, the screen was viewed by a user who **is not** in the Configuration audiences for the Feature Flag. The user saw the old version of the Settings screen.

However, if you're concerned about the potential for bugs in the redesigned screen, you would want to limit how many new users could see it. Keep all the settings the same except the percentage, which you would set to `10`. 10% of users who have freshly installed your app will be able to see the redesigned Settings screen.

Once you determine the feature is ready for a wider audience, increase the audience percentage. Keep adjusting until you reach 100% or the acceptable threshold determined by your planning.

## A/B tests

Requires iOS SDK 19+ and Android SDK 19+.

Use A/B tests to compare audience behaviors when a feature is hidden or present. You can also experiment by presenting different experiences by setting specific [property values](#properties) for each variant. The [audience percentage](#audience) is divided evenly between variants by default, or you can set your own values. A/B tests contain a control variant and support up to 25 additional variants.

A/B test use cases:

* **Evaluating engagement of new designs** — Create an experiment to test the effectiveness of your new home screen design with new users. Display the new design to 50% of new users and the current home screen to the other 50%, set a goal such as a purchase, and track which version of the home screen leads to more conversions. If the old design still outperforms, you can stop the experiment, and if the new one wins, you can create a new rollout from the winning variant.

* **Optimizing loyalty programs** — Create an experiment to test different reward structures for your new loyalty program. Create an experiment with two variations of the program: one offering discounts on future orders and another offering free delivery credits, and set a goal to track repeat orders. Reporting data reveals a 20% increase in repeat orders for the delivery credit variant, providing the team with concrete evidence to present to leadership on which program structure performs best.
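
The loyalty-program experiment above could map variants to property overrides like this (variant and property names are invented for illustration; Airship's dashboard overrides play the role of the hard-coded values here):

```typescript
// Invented variant and property names, for illustration only.
interface Variant {
  name: string;
  properties: Record<string, unknown>;
}

const control: Variant = { name: "Control", properties: {} };
const discount: Variant = { name: "Discount", properties: { reward: "10% off next order" } };
const credits: Variant = { name: "Delivery credits", properties: { reward: "3 free delivery credits" } };

// The app reads the same property regardless of variant; the per-variant
// override determines what each cohort actually sees.
function rewardCopy(variant: Variant): string {
  const reward = variant.properties["reward"];
  return typeof reward === "string" ? reward : "Join our loyalty program";
}
```

Because all variants read one property, rolling out the winner later means promoting its override, not changing app code.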

To prepare for your tests, see [About A/B testing](https://www.airship.com/docs/guides/experimentation/a-b-tests/about/).

### Goals and reports

[Goals](https://www.airship.com/docs/reference/glossary/#goals) are the events you want to measure in your A/B tests and are required to declare a winner and generate reports. You can select from project-level Goals or create new ones. If you create Goals while setting up the A/B test, you can reuse them for other A/B test Configurations for the same flag. Maximum 10 goals per test.

You can create Goals based on [Custom or Predefined Events](https://www.airship.com/docs/guides/audience/events/events/#event-types) or for a number of Default Events. For the list of Default Events, see [Goals](https://www.airship.com/docs/guides/reports/goals/).

Reporting does not include events attributed to [Named Users](https://www.airship.com/docs/reference/glossary/#named_user) that are not associated with a platform and [Channel ID](https://www.airship.com/docs/reference/glossary/#channel_id).

View reports to see how each variant performs. You can select each Goal to update the reports with data for that Goal only. After enough data is available and time has elapsed, Airship declares a winning variant, which you can then roll out to your entire A/B test audience.

If there is no significant difference between variant performance, you may want to consider your test variables and audience. Even with significant differences, this data can help you understand what your audience responds to.

For more information, see [A/B test reports and technical overview](#ab-test-reports-and-technical-overview).

## Create Feature Flags

1. Go to **Experiments**, then **Feature Flags**.
1. Select **Create Feature Flag**.
1. Configure the flag:
   | Field or section | Description | Steps |
   | --- | --- | --- |
   | **Display name** | The dashboard label for the flag | Enter text. |
   | **Flag name** | The name used for reference by the SDK. Must be unique. Automatically generated based on the display name, but you can change it. The name can contain letters, numbers, and underscores only, and it must start with a letter and end with a letter or number. You cannot change the flag name after making the flag active. | Enter text. |
   | **Description** | Describes what the flag controls | Enter text. |
   | **Properties** | Optional. String, number, boolean, or JSON properties that can be used by your app or website code within the Feature Flag. 50 properties maximum. | Select **Add property**, and then enter a name, select a type, and configure a value. Select **Add property** for additional properties. |
   | **Reference image** | Optional. An image to help identify what the flag controls. The image is displayed when [viewing the list of all flags](#manage-feature-flags) and when [viewing its Configurations](#manage-configurations). Supported file types: JPG, PNG, GIF. Maximum file size: 5 MB. | Select **Choose File**, and then select a file to upload. |
1. Select **Save and continue**.
1. Copy the code snippets and docs link for your developer. The code snippet is the same in all Configurations for a flag, so you only need to provide it to your developer once.
1. Select **Close**.

Your flag is now saved, and you can [create a Configuration](#create-configurations) at any time.

## Add events and create Goals for A/B tests

You must [add Custom and Predefined Events](https://www.airship.com/docs/guides/audience/events/manage/) to your project before you can select them for Goals. You do not need to add Default Events to your project before selecting them for Goals.

If you want to use project-level Goals in an A/B test Configuration, you must first create them in your project settings. See [Goals](https://www.airship.com/docs/guides/reports/goals/). Otherwise, you can create Goals as you create an A/B test.

## Create Configurations

Set up Configurations for a Feature Flag. If you just [created a flag](#create-feature-flags), start at step 3. If you just [duplicated a Configuration](#manage-configurations), start at step 4.

A/B test requirements: iOS SDK 19+ and Android SDK 19+.

1. Go to **Experiments**, then **Feature Flags**, and then select **View** to access a flag's Configurations.

1. Select **Create Configuration** and then select **Feature rollout** or **Feature A/B test**.

1. Select **Definition** to continue, and then configure the following for the Configuration:
   | Field | Description | Steps |
   | --- | --- | --- |
   | **Rollout or A/B test name** | The dashboard label for the Configuration | Enter text. |
   | **Description** | Describes the purpose of the Configuration | Enter text. |
1. (For [A/B tests](#ab-tests) only) Select **Goals** to continue, and then search for and select Goals or create them. The winner and detailed reports do not generate without at least one Goal.<p>To create a Goal, enter a Goal name in the search field, then select <strong>Create Goal</strong> and configure fields:</p>
<table>
  <thead>
      <tr>
          <th>Field</th>
          <th>Description</th>
          <th>Steps</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Goal name</strong></td>
          <td>Used for identification within the experiment</td>
          <td>Enter text.</td>
      </tr>
      <tr>
          <td><strong>Description</strong></td>
          <td>Additional information about the Goal</td>
          <td>Enter text.</td>
      </tr>
      <tr>
          <td><strong>Event</strong></td>
          <td>The event you want to measure in the experiment</td>
          <td>Search for and select an event. If the event does not have a category assigned, select from the list or select <strong>Custom category</strong> and enter a category name.</td>
      </tr>
  </tbody>
</table>
   To move a secondary Goal to primary, select the drag handle icon for a Goal, then drag and drop to the first position.

1. Select **Properties** or **Variants** to continue, then configure property values to override the displayed defaults.

   * The Properties step and options do not appear if the flag does not contain properties.
   * Property overrides are optional and apply to the current Configuration only.
   
   For A/B tests, two variants appear by default: **Control variant** and **Variant A**. Select **Add variant** to add up to 25 variants in addition to the control. You can edit each variant's name and property values.
   
   The flagged feature is available to all variants, but you can disable it for users with access to the control variant. Disable **Display flagged feature** for the control to experiment on the feature's value by comparing experiences with and without it.
   
   Select **Delete variant** to remove a variant. You cannot delete the control or the last remaining additional variant.

1. Select **Audience** to continue, then set up your audience:
   1. Choose and configure users:
      <div class="table-scroll-wrapper">
      <table width="100%" class="reference-table">
         <col style="width:20%">
         <col style="width:40%">
         <col style="width:40%">
      <thead>
      <tr>
         <th>Option</th>
         <th>Description</th>
         <th>Steps</th>
      </tr>
      </thead>
      <tbody>
      <tr>
         <td>All Users</td>
         <td>Makes the feature or A/B test available to a percentage of your total app or web audience. Users are randomly selected.</td>
         <td>Under <b>Audience allocation</b>, limit the selected audience to your specified percentage.</td>
      </tr>
      <tr>
         <td>Target Specific Users</td>
         <td>Makes the feature or A/B test available to a percentage of users who meet specified conditions.</td>
         <td>Select and configure one or more conditions. See <a href="#conditions">Conditions</a> above for the list of conditions and their requirements and restrictions. Then, under <b>Audience allocation</b>, limit the selected audience to your specified percentage. Users are randomly selected from those who qualify.<p>For the <b>Feature Flag access</b> condition, search for a flag and then specify whether or not users must be in the selected flag's audience. You can select multiple flags.</p><p>For all other conditions, follow the steps in <a href='https://www.airship.com/docs/guides/audience/segmentation/target-specific-users/'>Targeting Specific Users</a>.</p></td>
      </tr>
      <tr>
         <td>Test Users</td>
         <td>Makes the feature or A/B test available to users in a <a href='https://www.airship.com/docs/guides/audience/preview-test-groups/'>Test Group</a>.</td>
         <td>Select a Test Group.</td>
      </tr>
      </tbody>
      </table>
      </div>
   1. (Optional, for A/B tests only) Override the default variant distribution:
      * **All Users** and **Target Specific Users** — The audience percentage is divided evenly between variants. To change it, enable **Allow uneven allocations**. Then, under **Variant allocation**, edit the percentage for each variant.
      * **Test Group** — All variants are distributed randomly to Test Group users. To change it, select **Specific variant only** and select the control or other variant.

1. Select **Schedule** to continue and then schedule the period when the Configuration will be active. For specific times and dates, also specify the time zone. The UTC conversion displays below the settings and updates as you make changes.

1. Select **Review** to continue and then review your Configuration's settings.

1. Select **Launch** to make the Configuration active or **Exit** to save it as a draft. See the status information in [Manage Configurations](#manage-configurations).

## Implement the code

This section describes implementation for the mobile SDKs. For web implementation, see [Web Feature Flags](https://www.airship.com/docs/developer/sdk-integration/web/feature-flags/) and also [contact Support](https://support.airship.com/).

You can return to the dashboard to get the code snippets at any time:

1. Go to **Experiments**, then **Feature Flags**.
1. Select **View** to access a flag's Configurations.
1. Select **</> Code snippet**.
1. Copy the code snippet for each platform.
1. Select **Close**.

### Access flags

The Airship SDK refreshes Feature Flags when the app is brought to the foreground. If a Feature Flag is accessed before the foreground refresh completes, or after the foreground refresh has failed, flags are refreshed during flag access. Feature Flags are updated at most once per session and persist for the duration of the session.

Once [defined in the dashboard](#create-feature-flags), a Feature Flag can be accessed by its name in the SDK after `takeOff`.


#### Android Kotlin



The SDK provides asynchronous access to Feature Flags using Kotlin suspend functions, which are intended to be called from a coroutine. For more information, see the [Coroutines Overview guide](https://kotlinlang.org/docs/coroutines-overview.html).

```kotlin
// Get the FeatureFlag result
val result: Result<FeatureFlag> = FeatureFlagManager.shared().flag("YOUR_FLAG_NAME")

// Check whether the user is eligible for the flag
if (result.getOrNull()?.isEligible == true) {
    // Do something with the flag
} else {
    // Disable feature or use default behavior
}
```



#### Android Java


```java
// Get the FeatureFlag 
FeatureFlag featureFlag = FeatureFlagManager.shared().flagAsPendingResult("YOUR_FLAG_NAME").getResult();

// Check whether the user is eligible for the flag
if (featureFlag != null && featureFlag.isEligible()) {
    // Do something with the flag
} else {
    // Disable feature or use default behavior
}
```



#### iOS Swift



The SDK provides asynchronous access to Feature Flags using an async method, which is intended to be called from a Task or a function that supports concurrency. For more information, see the [Concurrency guide](https://docs.swift.org/swift-book/documentation/the-swift-programming-language/concurrency/).

```swift
// Get the FeatureFlag
let flag: FeatureFlag? = try? await Airship.featureFlagManager.flag(name: "YOUR_FLAG_NAME")

// Check whether the user is eligible for the flag
if (flag?.isEligible == true) {
    // Do something with the flag
} else {
    // Disable feature or use default behavior
}
```



#### iOS Objective-C


```objectivec
// Not supported
```


#### React Native


```ts
const flag = await Airship.featureFlagManager.flag("YOUR_FLAG_NAME");
if (flag.isEligible) {
    // Do something with the flag
} else { 
    // Disable feature or use default behavior
}
```



#### Flutter


```dart
var flag = await Airship.featureFlagManager.flag("my-flag");
if (flag.isEligible) {
    // Do something with the flag
} else {
    // Disable feature or use default behavior
}
```



#### Cordova


```js
Airship.featureFlagManager.flag("YOUR_FLAG_NAME", (flag) => {
    if (flag.isEligible) {
        // Do something with the flag
    } else {
        // Disable feature or use default behavior
    }
});
```



#### Capacitor


```js
const flag = await Airship.featureFlagManager.flag("YOUR_FLAG_NAME")
if (flag.isEligible) {
    // Do something with the flag
} else {
    // Disable feature or use default behavior
}
```



#### .NET MAUI


```csharp
// Not supported
```



#### Xamarin


```csharp
// Not supported
```



#### Titanium


```js
// Not supported
```



#### Unity


```csharp
// Not supported
```




### Track interaction

To generate the [Feature Flag Interaction Event](https://www.airship.com/docs/developer/rest-api/connect/schemas/events/#feature-flag-interaction), you must manually call `trackInteraction` with the Feature Flag. Analytics must be enabled. See [Privacy Manager](https://www.airship.com/docs/reference/data-collection/sdk-data-collection/#privacy-manager) in Mobile *Data Collection*.


#### Android Kotlin


```kotlin
FeatureFlagManager.shared().trackInteraction(featureFlag)
```



#### Android Java


```java
FeatureFlagManager.shared().trackInteraction(featureFlag);
```



#### iOS Swift


```swift
Airship.featureFlagManager.trackInteraction(flag: featureFlag)
```



#### iOS Objective-C


```objectivec
// Not supported
```


#### React Native


```ts
await Airship.featureFlagManager.trackInteraction(flag);
```



#### Flutter


```dart
Airship.featureFlagManager.trackInteraction(flag);
```



#### Cordova


```js
Airship.featureFlagManager.trackInteraction(flag);
```



#### Capacitor


```js
await Airship.featureFlagManager.trackInteraction(flag)
```



#### .NET MAUI


```csharp
// Not supported
```



#### Xamarin


```csharp
// Not supported
```



#### Titanium


```js
// Not supported
```



#### Unity


```csharp
// Not supported
```




### Handle errors

If a Feature Flag allows evaluation with stale data, the SDK evaluates the flag whenever a definition for it exists locally. Otherwise, evaluation requires up-to-date local state. If the SDK cannot evaluate a flag because its data could not be fetched, an error is returned or raised. The app can either treat the error as the flag being ineligible or retry later.


#### Android Kotlin


```kotlin
FeatureFlagManager.shared().flag("YOUR_FLAG_NAME").fold(
    onSuccess = { flag ->
        // Do something with the flag
    },
    onFailure = { error ->
        // Do something with the error
    }
)
```



#### Android Java


```java
FeatureFlag featureFlag = FeatureFlagManager.shared().flagAsPendingResult("YOUR_FLAG_NAME").getResult();
if (featureFlag == null) {
    // error
} else if (featureFlag.isEligible()) {
    // Do something with the flag
}
```



#### iOS Swift


```swift
do {
    let flag = try await Airship.featureFlagManager.flag(name: "YOUR_FLAG_NAME")
    if flag.isEligible {
        // Do something with the flag
    }
} catch {
    // Do something with the error
}
```



#### iOS Objective-C


```objectivec
// Not supported
```


#### React Native


```ts
try {
    const flag = await Airship.featureFlagManager.flag("YOUR_FLAG_NAME");
    // Do something with the flag
} catch(error) {
    // Do something with the error
}
```



#### Flutter


```dart
Airship.featureFlagManager.flag("another_rad_flag").then((flag) {
    if (flag.isEligible) {
        // Do something with the flag
    }
}).catchError((error) {
    debugPrint("flag error: $error");
});
```



#### Cordova


```js
Airship.featureFlagManager.flag(
  "another_rad_flag",
  (flag) => { 
    // do something with the flag
  },
  (error) => {
    console.log("error: " + error)
  }
);
```



#### Capacitor


```js
try {
    const flag = await Airship.featureFlagManager.flag("another_rad_flag")
} catch (error) {
    console.log("error: " + error)
}
```



#### .NET MAUI


```csharp
// Not supported
```



#### Xamarin


```csharp
// Not supported
```



#### Titanium


```js
// Not supported
```



#### Unity


```csharp
// Not supported
```




## Using Feature Flags with messaging

You can use a Configuration's audience as the audience for an [In-App Automation](https://www.airship.com/docs/reference/glossary/#iaa) or [Scene](https://www.airship.com/docs/reference/glossary/#scene). See the Audience step in each *Create* guide:

* [Create an In-App Automation](https://www.airship.com/docs/guides/messaging/in-app-experiences/in-app-automation/create/#audience)
* [Create a Scene](https://www.airship.com/docs/guides/messaging/in-app-experiences/scenes/create/#audience)

You can also trigger an In-App Automation, Scene, or [Sequence](https://www.airship.com/docs/reference/glossary/#sequence) when a Feature Flag [interaction event](#interaction-events) occurs. See the Feature Flag Interaction Event trigger in each *Triggers* guide:

* [In-App Experience Triggers](https://www.airship.com/docs/guides/messaging/in-app-experiences/configuration/triggers/#feature-flag-interaction-event)
* [Sequence Triggers](https://www.airship.com/docs/guides/messaging/messages/sequences/triggers/#feature-flag-interaction-event)

### Example campaign strategy

For feature rollout in an app, your developer would implement tracking when users view the screen containing the new feature. Your campaign strategy could look like this:

1. **Inform users of the new feature** — Create an In-App Automation or Scene with these settings:

   * **Audience:** Select **Feature Flag Audience** and select your flag's rollout Configuration.
   * **Content:** Tell your users about the feature, explain its benefits, and encourage use.
   * **Behavior:** Select the **App Update** trigger, specify the version of your app that contains the feature and flag code, and enter the number of times users must open your app before they will see your message.

   The feature will be available to the Feature Flag audience, according to the flag's schedule, after they install the version of your app that contains the feature and flag code. The message will display after the number of app opens you specified when setting up the trigger.

1. **Trigger a survey** — Create a Scene that requests feedback from Feature Flag Audience members who have seen or interacted with the flagged feature:

   * **Audience:** Select **Feature Flag Audience** and select your flag's rollout Configuration.
   * **Content:** Add questions or an NPS survey about their experience with the feature.
   * **Trigger:** Select the **Feature Flag Interaction Event** trigger (the flag you selected in the Audience step will be preselected for the trigger), select the user group **Users with feature access**, then enter the number of times the event must occur before the Scene is triggered.

   The Scene will display for members in any of the Configuration audiences for that flag after the number of event occurrences you specified when setting up the trigger.

Maximize adoption by designing a [Journey](https://www.airship.com/docs/reference/glossary/#journey) that combines the above with a [Sequence](https://www.airship.com/docs/reference/glossary/#sequence) that follows a user's interaction with the flagged feature and sends a customized message for each key step along the way.

## Manage Feature Flags

To view a list of your flags, go to **Experiments**, then **Feature Flags**. Your current flags are shown by default. Use the **Current/Archived** filter to update the list. The default sort order is by last modified, and each row displays:

* Display and flag names
* Description
* Date modified
* Status — Active (has at least one Active or Scheduled Configuration) or Inactive (has Draft or Ended Configurations only)
* Number of Configurations

Manage flags by selecting an icon or button in a flag row:

<div class="table-scroll-wrapper">
<table width="100%" class="reference-table">
  <col style="width:20%">
   <col style="width:40%">
   <col style="width:40%">
<thead>
  <tr>
   <th>Option</th>
   <th>Description</th>
   <th>Steps</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>View image</td>
    <td>Displays the flag's <a href="#create-feature-flags">reference image</a> in a modal window.</td>
    <td>Select the photo icon.</td>
  </tr>
  <tr>
    <td>Edit flag</td>
    <td>Opens the flag for editing. You can change a flag's display name, description, properties, and reference image. You can also change the flag name if the flag is not yet Active. You cannot edit archived flags. See the IMPORTANT box following this table. See also <a href="#editing-flag-properties">Editing flag properties</a>.</td>
    <td>Select the pencil icon, make your changes, then select <b>Save and continue</b>.</td>
  </tr>
  <tr>
    <td>Manage Configurations</td>
    <td>Opens the list of Configurations for a flag.</td>
    <td>Select <b>View</b> for a flag's Configurations. See <a href="#manage-configurations">Manage Configurations</a>.</td>
  </tr>
  <tr>
     <td>Duplicate flag and Configurations</td>
     <td>Creates a copy of the flag and all its Configurations. The display and flag names are appended with "copy". Configurations have the same names as the originals and are in Draft state.</td>
     <td>Select the duplicate icon. You can then select the pencil icon to edit the flag details, manage Configurations, or create a new Configuration.</td>
  </tr>
  <tr>
    <td>Archive flag</td>
    <td>Moves a flag from the Current list to the Archived list. You cannot archive an Active flag. You cannot archive a flag if an active message is targeting a Configuration audience.</td>
    <td>Select the archive icon.</td>
  </tr>
  <tr>
    <td>Restore/Unarchive flag</td>
    <td>Restores an archived flag to your list of Current flags.</td>
    <td>Select the <b>Archived</b> filter, then select the archive icon for a flag.</td>
  </tr>
  <tr>
    <td>View and cancel related messages</td>
    <td>Opens a list of <a href="#using-feature-flags-with-messaging">In-App Automations and Scenes targeting any of the flag's Configuration audiences</a>. Messages are listed by name, type, and status. Selecting a name opens the message to its Review step, where you can check for conflicts between the Configuration and message schedules.<p>You can cancel a single Active message or all Active messages. Canceling a message is effectively the same as <a href='https://www.airship.com/docs/guides/messaging/in-app-experiences/configuration/optional-features/#specify-start-and-end-dates'>setting an end date</a> for the current date and time. See also <a href='https://www.airship.com/docs/guides/messaging/manage/change-status/#restart'>Restart an In-App Automation or Scene</a> in <i>Change message status</i>.</td>
    <td>Select the link icon to view the list. To cancel, select <b>Stop</b> for a single message or <b>Stop all</b>. To check for scheduling conflicts, select a message name, then compare the start and end settings in the <b>Schedule</b> section.</td>
  </tr>
</tbody>
</table>
</div>

### Editing flag properties

If a Feature Flag does not have an active or scheduled Configuration, you can edit the flag's property names, types, and values at any time.

When editing a flag that has active or scheduled Configurations, note the following:

* If a flag has an active or scheduled rollout or A/B test Configuration, you cannot edit the flag's property names or types.
* If a flag has an active or scheduled rollout Configuration, you can edit the flag's property values at any time. The Configurations will inherit the new property value.
* If a flag has an active or scheduled A/B test Configuration, you cannot edit the flag's property values unless all variants have an override value set for that property.

Whenever you change property names or types at the flag level, you must update the code snippet in your app or website for changes to take effect. You do not need to update the code snippet when changing a flag's default property values only.
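Because your code reads properties by name and type, a defensive read pattern keeps value-only changes flowing through without a code update, while a renamed or retyped property falls back safely until the snippet is updated. The sketch below is illustrative only: property-access APIs vary by SDK, and `read_property` is an invented helper, not an Airship function.

```python
def read_property(properties, name, default):
    """Return a flag property if present and of the same type as the
    default; otherwise fall back to the default. Changing a property's
    value on the dashboard needs no code change, but renaming it or
    changing its type silently falls back until the snippet is updated."""
    value = properties.get(name)
    if isinstance(value, type(default)):
        return value
    return default

# Value changes at the flag level flow through with no code change:
color = read_property({"button_color": "#D4AF37"}, "button_color", "#0000FF")

# A renamed or retyped property falls back to the default instead of failing:
limit = read_property({"max_items": "10"}, "max_items", 5)
```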

## Manage Configurations

To manage Configurations, go to **Experiments**, then **Feature Flags**, then select **View** to access a flag's Configurations. If a [reference image](#create-feature-flags) is present, you can hover over it for a preview or select it to view a larger version in a modal window.

Active and Scheduled Configurations are listed in priority order, with the following information:

* Priority number
* Configuration type — Rollout or A/B test
* Configuration name
* Status — Active or Scheduled
* Description
* Goal name (for A/B test Configurations only)
* Audience — "Test group" or percentage
* Start and end dates and times in UTC

For Ended and Draft Configurations, use the **Current/Archived** filter to update the list. The default sort order is by last modified, and each row displays:

* Configuration name
* Configuration type — Rollout or A/B test
* Description
* Date modified
* Schedule
* Status — Draft or Ended

Manage Configurations by selecting an icon or link in a row. Select the three dots icon for more options:

<div class="table-scroll-wrapper">
<table width="100%" class="reference-table">
  <col style="width:20%">
   <col style="width:40%">
   <col style="width:40%">
<thead>
  <tr>
   <th>Option</th>
   <th>Description</th>
   <th>Steps</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>Set priority order</td>
    <td>For flags with multiple Configurations, if a user falls into more than one Configuration's audience, only the one with the highest priority will be active for that user. By default, each new Configuration is set to the lowest priority.</td>
    <td>Select the drag handle icon, then drag and drop the Configuration to a new position.</td>
  </tr>
  <tr>
    <td>View reports</td>
    <td>Opens reports for Active and Ended Configurations.</td>
    <td>Select the report icon. See <a href="#view-reports">View reports</a> for more information.</td>
  </tr>
  <tr>
    <td>Edit Configuration</td>
    <td>Opens Active and Draft Configurations for editing.</td>
    <td>Select the pencil icon, make your changes, then select <b>Update</b> or <b>Launch</b> in the Review step.</td>
  </tr>
  <tr>
    <td>End A/B test</td>
    <td>Opens options for rolling out a variant or ending the test without a rollout.</td>
    <td>Select the stop icon. See <a href="#end-an-ab-test">End an A/B test</a>.</td>
  </tr>
  <tr>
    <td>Edit audience allocation</td>
    <td>Opens the audience allocation setting for an Active Configuration. You also have the option to end the Configuration. See the description for <b>End/Cancel Configuration</b> in this table.</td>
    <td>Select the filter icon, set a new percentage, then select <b>Save</b>. To end the Configuration, select the settings icon, then select <b>End Configuration</b>.</td>
  </tr>
  <tr>
     <td>Duplicate Configuration</td>
     <td>Creates a copy of the Configuration and opens it for editing. The Configuration name is appended with " copy".</td>
     <td>Select the duplicate icon, and then complete the steps for <a href="#create-configurations">creating a new Configuration</a>.</td>
  </tr>
  <tr>
    <td>End/Cancel Configuration</td>
    <td>Immediately ends an Active Configuration or cancels a Scheduled Configuration. To make it Active or Scheduled again later, you can edit the Configuration and set a new end date.</td>
    <td>Select the pencil icon, and then select <b>Stop</b>.</td>
  </tr>
  <tr>
    <td>Archive Configuration</td>
    <td>Moves a Configuration from the Current list to the Archived list. You cannot archive an Active or Scheduled Configuration.</td>
    <td>Select the archive icon.</td>
  </tr>
  <tr>
    <td>Restore/Unarchive Configuration</td>
    <td>Moves an Archived Configuration to the list of Current Ended and Draft Configurations.</td>
    <td>Select the <b>Archived</b> filter, then select the archive icon for a Configuration.</td>
  </tr>
  <tr>
    <td>View and cancel related messages</td>
    <td>Opens a list of <a href="#using-feature-flags-with-messaging">In-App Automations and Scenes targeting the Configuration's audience</a>. Messages are listed by name, type, and status. Selecting a name opens the message to its Review step, where you can check for conflicts between the Configuration and message schedules.<p>You can cancel a single Active message or all Active messages. Canceling a message is effectively the same as <a href='https://www.airship.com/docs/guides/messaging/in-app-experiences/configuration/optional-features/#specify-start-and-end-dates'>setting an end date</a> for the current date and time. See also <a href='https://www.airship.com/docs/guides/messaging/manage/change-status/#restart'>Restart an In-App Automation or Scene</a> in <i>Change message status</i>.</td>
    <td>Select the link icon to view the list. To cancel, select <b>Stop</b> for a single message or <b>Stop all</b>. To check for scheduling conflicts, select a message name, then compare the start and end settings in the <b>Schedule</b> section.</td>
  </tr>
</tbody>
</table>
</div>
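The priority rule described in the table (a user who falls into multiple Configuration audiences gets only the highest-priority one) can be sketched as follows. This is an illustration of the documented behavior, not Airship's implementation; the function and field names are invented for the sketch.

```python
def active_configuration(configurations, user_audience_names):
    """Return the Configuration that applies to a user.

    configurations: list of dicts with "name", "status", and "priority"
    (lower number = higher priority, matching the dashboard's ordered list).
    user_audience_names: names of Configurations whose audience includes
    this user.
    """
    eligible = [
        c for c in configurations
        if c["name"] in user_audience_names and c["status"] == "Active"
    ]
    # Only the highest-priority eligible Configuration is active for the user.
    return min(eligible, key=lambda c: c["priority"], default=None)

configs = [
    {"name": "Gold rollout", "status": "Active", "priority": 1},
    {"name": "Silver rollout", "status": "Active", "priority": 2},
]
```

A user in both the Gold and Silver audiences would see only the Gold rollout; a user in neither audience gets no Configuration.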

## View reports

To access reports showing performance and interaction data:

1. Go to **Experiments**, then **Feature Flags**.
1. Select **View** to access a flag's Configurations.
1. Select the report icon for a Configuration. See [Rollout reports](#rollout-reports) and [A/B test reports and technical overview](#ab-test-reports-and-technical-overview) for details.

You can also view reports and export data in [Performance Analytics](https://www.airship.com/docs/reference/glossary/#pa). For usage data, see [View Feature Flag and Scene Rollout usage](https://www.airship.com/docs/guides/getting-started/admin/usage-payment/#view-feature-flag-and-scene-rollout-usage).

### Rollout reports

The following reports are available when [viewing reports](#view-reports) for rollouts:

| Report | Description |
| --- | --- |
| **Feature Flag interactions** | Counts of users in the Configuration audience with at least one [interaction event](#interaction-events) and interaction events per date. The default view is the last 30 days. Use the date selector to define a different time period. |
| **Users in Configuration audience with interaction events** | A count of users in the Configuration audience with at least one [interaction event](#interaction-events). Users are counted as [Channel IDs](https://www.airship.com/docs/reference/glossary/#channel_id). |

To download the data, select the down arrow icon, select CSV or TEXT format, and then select **Download**. For **Feature Flag interactions**, the download lists user and event counts per date. For **Users in Configuration audience with interaction events**, the download lists the platform and [Named User](https://www.airship.com/docs/reference/glossary/#named_user) for each Channel ID.

### A/B test reports and technical overview

When [viewing reports](#view-reports) for A/B tests, limited data appears if a [Goal](https://www.airship.com/docs/reference/glossary/#goals) was not set for the test. A summary displays the status of the experiment. Reports load with data for the test's primary Goal. If multiple Goals were set, select a different one to reload the reports with that Goal's data. Select the info icon for more information in each section.

Data represented in A/B test reports:

<div class="table-scroll-wrapper">
<table width="100%" class="reference-table">
<col style="width:30%">
<col style="width:70%">
<thead>
<tr>
   <th>Data</th>
   <th>Description</th>
</tr>
</thead>
<tbody>
<tr>
   <td>ID</td>
   <td>This is a variant's UUID. It appears in <a href="#interaction-events">interaction events</a>.</td>
</tr>
<tr>
   <td>Probability to Be Best</td>
   <td>This metric represents the likelihood that a particular variant is the top performer based on your test results. The closer the probability is to 100%, the more confidence that this variant is the best choice. A value of 95% or above suggests the variant is very likely to outperform the others. Hover over a variant for additional information.</td>
</tr>
<tr>
   <td>Loss</td>
   <td>Expected loss quantifies the risk of making a suboptimal decision. It accounts for both the uncertainty in the A/B test results and the potential missed opportunities if another variant performs better. A higher loss value suggests a greater risk of missing out on potential conversions, while a lower loss value indicates that even if the variant isn't the absolute best, the downside of choosing it is minimal.<p>For example, if the variant you select to roll out turns out to not be the best one, you might lose 3% of the conversions by having selected it. So if you have a P2BB of 70% but a small loss, it might be worth it to use that variant even though P2BB might not be 95%+.</td>
</tr>
<tr>
   <td>Conversion count</td>
   <td>This is the total number of users who completed the Goal event within this variant group during the A/B test.</td>
</tr>
<tr>
   <td>Conversion rate (vs Top)</td>
   <td>This shows the percentage of users who completed the Goal event, calculated as (conversion count / sample size) x 100. The comparison to the top-performing variant indicates how much lower the conversion rate is for this variant relative to the best option, where the top variant shows a difference of 0%.</td>
</tr>
<tr>
   <td>Sample size</td>
   <td>This represents the total number of users who triggered the interaction event in the A/B test for each variant. A larger sample size increases confidence in the results.</td>
</tr>
<tr>
   <td>Posterior Probability</td>
   <td>This graph visualizes the probability distribution of conversion rates for each variant based on the test data, highlighting the range of likely performance outcomes.<p>
   <ul><li><b>X-Axis (Conversion Rate)</b>: Represents the posterior distribution of possible conversion rates for each variant based on the test data. It shows the range of values a variant's true conversion rate is likely to fall within, rather than just observed conversion rates.</li><li><b>Y-Axis (Probability Density)</b>: Represents the likelihood of different conversion rates occurring, given the test data. Higher peaks indicate conversion rates that are more probable, while broader distributions suggest greater uncertainty in the estimate.</li><li><b>Overlap of Distributions</b>: If two posterior distributions overlap significantly, this indicates uncertainty about which variant is better. Minimal overlap suggests a clearer winner.</li></ul></td>
</tr>
<tr>
   <td>Relative Uplift</td>
   <td>This graph shows how each variant's performance compares to the others, highlighting the percentage increase or decrease in conversions relative to the top performing variant. It provides insight into whether a variant is making a meaningful improvement or if the difference is small.<p>
   <ul><li><b>0% uplift line</b>: Represents that there is no difference between variants.</li><li><b>Distribution Spread</b>: A wide distribution suggests uncertainty in the uplift estimate. A narrow distribution indicates more confidence.</li><li><b>Position of Bulk Mass</b>: If most of the distribution lies above zero for a variant, then it is likely to outperform others.</li></ul></td>
</tr>
</tbody>
</table>
</div>

As you review the report data, you may want to disable an underperforming variant. In the table, select **Stop** for the variant, and it will no longer be available to its configured audience.

To download table data as a CSV file, select the down arrow icon.

#### Statistical methods

Airship analyzes Feature Flag A/B test results using [Bayesian statistics](https://en.wikipedia.org/wiki/Bayesian_statistics), measuring confidence in each variant's success while accounting for uncertainty in the data. Rather than relying on a fixed confidence threshold, Bayesian methods allow for continuously updating the understanding of variant performance as data comes in.

Airship estimates probability distributions for each variant's performance. These distributions help calculate how likely each variant is to be the best. A [Beta(1,1) prior](https://en.wikipedia.org/wiki/Beta_distribution) is used to create the distributions, starting with a neutral assumption and letting the data drive the results.

Instead of only comparing variants to a single control, Airship evaluates each variant against all other variants. This gives a more complete picture of which variant performs best in the test.
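As a sketch of this approach (illustrative only, not Airship's implementation), Probability to Be Best and expected loss can be estimated by drawing Monte Carlo samples from each variant's Beta(1 + conversions, 1 + non-conversions) posterior and comparing every variant against all others on each draw:

```python
import random

def estimate_p2bb_and_loss(variants, draws=20000, seed=1):
    """variants: name -> (conversions, sample_size).

    With a neutral Beta(1,1) prior, each variant's conversion rate has a
    Beta(1 + conversions, 1 + failures) posterior. Each draw samples one
    rate per variant; the variant with the top rate "wins" that draw, and
    each variant's loss on the draw is how far it fell below the top rate.
    """
    rng = random.Random(seed)
    wins = {name: 0 for name in variants}
    loss = {name: 0.0 for name in variants}
    for _ in range(draws):
        sample = {
            name: rng.betavariate(1 + conv, 1 + (size - conv))
            for name, (conv, size) in variants.items()
        }
        top = max(sample.values())
        for name, rate in sample.items():
            if rate == top:
                wins[name] += 1
            loss[name] += top - rate
    p2bb = {name: wins[name] / draws for name in variants}
    expected_loss = {name: loss[name] / draws for name in variants}
    return p2bb, expected_loss

p2bb, expected_loss = estimate_p2bb_and_loss(
    {"control": (120, 1000), "variant_a": (160, 1000)}
)
```

Here `variant_a` converts clearly better, so its Probability to Be Best approaches 1 and its expected loss approaches 0, matching the report's interpretation of those metrics.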

Benefits of using Bayesian methods:

* **Transparent decision-making** — You can see whether a variant is performing better than others and the confidence in that result.
* **More than just statistical significance** — Instead of a pass/fail outcome, Bayesian methods give you probability-based confidence in the results.
* **Flexibility** — You can decide how much certainty you need before rolling out a winning variant.

#### Calculating the winning variant

After a minimum runtime of one week and a minimum sample size of 1,000 users per variant, Airship declares the winning variant in the dashboard when Probability to Be Best exceeds 95% and Loss remains below 5%.

* A one-week minimum ensures that results are not overly influenced by short-term anomalies such as holidays, weekend effects, or day-of-week traffic fluctuations, providing a more stable and representative sample of user behavior.

* A sample size of at least 1,000 users per variant is required to ensure enough data is collected to provide statistically meaningful insights. This threshold helps avoid results that are skewed by randomness or small sample bias, leading to more reliable conclusions.

* A Probability to Be Best of at least 95% provides strong statistical evidence that the winning variant outperforms all other variants.

* An expected loss of less than 5% is required to ensure the winning variant is unlikely to perform significantly worse than others, minimizing risk and providing confidence in its effectiveness.
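The four criteria above combine into a simple decision rule. The sketch below illustrates how they gate the winner declaration; the function name and data shapes are invented for the example, and only the thresholds come from the text.

```python
def declared_winner(p2bb, expected_loss, runtime_days, sample_sizes):
    """Return the winning variant name, or None if any criterion fails.

    p2bb / expected_loss: name -> value in [0, 1]; sample_sizes: name ->
    user count. Thresholds per the text: at least one week of runtime,
    at least 1,000 users per variant, P2BB above 95%, loss below 5%.
    """
    if runtime_days < 7:
        return None
    if min(sample_sizes.values()) < 1000:
        return None
    best = max(p2bb, key=p2bb.get)
    if p2bb[best] > 0.95 and expected_loss[best] < 0.05:
        return best
    return None
```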

## End an A/B test

You can end an active A/B test at any time.

From the [A/B test report](#view-reports):

1. Select **End A/B test**.
1. Select an option to determine what will happen with the variants after ending the test:

   | Option | Description |
   | --- | --- |
   | **&lt;Any variant&gt;** | Create a rollout Configuration for the variant that will be allocated to 100% of the A/B test audience. All other variants will no longer be available to their configured audiences. |
   | **Stop all variants** | No variants will be available to their configured audiences. |

1. Confirm your selection.

You can also end the experiment by selecting **Stop** in the list of Configurations or by selecting **Roll out** for a variant listed in the table:

![Stop or roll out variants in a Feature Flag A/B test](https://www.airship.com/docs/images/feature-flag-a-b-test-report-table_hu_ca60292b1c4a4cb6.webp)

*Stop or roll out variants in a Feature Flag A/B test*

Once a winner has been determined, you will see an option to create a rollout for it in the report summary and table. Select **Roll out winner** and confirm your choice. The rollout will be allocated to 100% of the A/B test audience, and all other variants will no longer be available to their configured audiences.

To download the displayed test results in a CSV file, select **Download data**. Change your Goal selection to download results for that Goal. The following data is listed per [Channel ID](https://www.airship.com/docs/reference/glossary/#channel_id):

* Variant ID
* Variant name
* First interaction event time
* First Goal event time
* Goal event count
* [Named User](https://www.airship.com/docs/reference/glossary/#named_user)
* Platform


<!--
<div class="table-scroll-wrapper">
<table width="100%" class="reference-table">
   <col style="width:12%">
   <col style="width:18%">
   <col style="width:40%">
   <col style="width:20%">
   <thead>
   <tr>
      <th>Metric</th>
      <th>Definition</th>
      <th>Calculation</th>
      <th>Interpretation</th>
      </tr>
   </thead>
   <tbody>
   <tr>
      <td>Probability to Be Best (P2BB)</td>
      <td>The probability that a variant is the best-performing among all variants, based directly on the Posterior Distributions.</td>
      <td><code>P(X \text{ is best}) = \int \prod_{i \neq X} P(X > B_i) P(X) \, dX</code><p>
      Where:
      <ol><li><code>$P(X > B_i)$</code> - The probability that the metric of variant $X$ is greater than that of another variant $B_i$.</li>
      <li><code>$P(X)$</code> - The probability density of $X$ as defined by its posterior distribution.</li>
      <li><code>The Integral</code> - Sums over all possible values of $X$, weighted by the likelihood that $X$ is better than all other variants.</li></ol></td>
      <td>A high P2BB indicates that the variant is likely the best-performing option.</td>
   </tr>
   <tr>
      <td>Loss</td>
      <td>Measures the probability that a specific variant is worse than all others. For instance, "Loss for Variant X" is the probability that Variant X performs worse than other variants.</td>
      <td><code>P(X < \text{best of the rest}) = \frac{\text{Sample Count where } X < B_{best}}{\text{Total Sample Count}}</code><p>
      Where:
      <ol><li><code>$B_{best}$</code>: Conversion rate for the best-performing variant among the remaining variants, defined as: <code>B_{best} = \max(B_1, B_2, \dots, B_N)</code></li>
      <li><code>$\text{Total Sample Count}$</code> - The total number of posterior samples drawn for <code>$X$</code></li></ol></td>
      <td>This calculation provides the likelihood that Variant X is outperformed by at least one other variant, which helps identify its relative performance in a competitive set.</td>
   </tr>
   <tr>
      <td>Posterior Distributions</td>
      <td>Represents the updated belief about a parameter (for example: conversion rate) after observing the data. Posterior distributions combine prior knowledge and observed data using Bayes' theorem.</td>
      <td><code>P(\theta | D) = \frac{P(D | \theta) P(\theta)}{P(D)}</code><p>
      Where:
      <ol><li><code>$P(\theta | D)$</code>: Posterior probability (updated belief)</li>
      <li><code>$P(D | \theta)$</code>: Likelihood (probability of data given <code>$\theta$</code>)</li>
      <li><code>$P(\theta)$</code>: Prior probability (initial belief about <code>$\theta$</code>). Airship uses an uninformed prior of <code>$Beta(1,1)$</code></li>
      <li><code>$P(D)$</code>: Marginal likelihood (normalizing constant).</li></ol></td>
      <td>The posterior distribution provides the most likely values of the parameter given the observed data. The shape of the distribution reflects uncertainty:
      <ul><li>Narrow distributions indicate high confidence.</li>
      <li>Wide distributions indicate greater uncertainty.</li></ul>
</td>
   </tr>
   <tr>
      <td>Relative Uplift Distributions</td>
      <td>Shows the percentage change in performance of a variant relative to the best-performing variant from all other variants.</td>
      <td><code>P(X) = \frac{X - B_{best}}{B_{best}}</code><p>
      Where:
      <ol><li><code>$X$</code>: Conversion rate for the variant being evaluated.</li>
      <code>$B_{best}$</code>: Conversion rate for the best-performing variant among the remaining variants, defined as: <code>B_{best} = \max(B_1, B_2, \dots, B_N)</code></li></ol></td>
      <td><ul><li><b>Positive Values</b>: Indicate that the variant outperforms the best of the rest.</li>
      <li><b>Negative Values</b>: Indicate that the variant underperforms compared to the best of the rest.</li>
      <li><b>Shape</b>: The distribution shows the range and uncertainty of uplift values, providing insights into the variant's performance relative to the strongest alternative.</li></ul></td>
   </tr>
   </tbody>
</table>
</div>
-->