31 changes: 21 additions & 10 deletions docs/tools/experiments-v1/configuring-experiments-v1.md
@@ -50,15 +50,24 @@ Duplicating is particularly useful when you want to:

Regardless of which method you choose, you'll need to configure the experiment settings as described below.

## Fields

To create your experiment, configure the following fields:

- **Experiment name**: A descriptive name for your test
- **Experiment type** (optional): Choose from preset types (Introductory offer, Free trial offer, Paywall design, Price point, Subscription duration, Subscription ordering, or Other) to get relevant default metric suggestions
- **Notes** (optional): Add markdown-formatted notes to document your hypothesis and track insights
- **Variant A (Control)**: The Offering(s) for your control group (baseline)
- **Variant B (Treatment)**: The Offering(s) for your first treatment group

### Adding more variants for multivariate testing

You can add up to 2 additional treatment variants:
- **Variant C (Treatment)**: Optional second treatment variant
- **Variant D (Treatment)**: Optional third treatment variant

Multivariate experiments allow you to test multiple variations against your control simultaneously, helping you identify the best-performing option more efficiently than running sequential A/B tests.


## Using Placements in Experiments

@@ -97,15 +106,15 @@ Select from any of the available dimensions to filter which new customers are en

**New customers to enroll**

You can modify the % of new customers to enroll (minimum 10%) based on how much of your audience you want to expose to the test. Keep in mind that the enrolled new customers will be split evenly between all variants. For example, an A/B test (2 variants) that enrolls 10% of new customers would yield 5% in the Control group and 5% in the Treatment group. A 4-variant multivariate test enrolling 20% of new customers would yield 5% in each variant.

Once done, select **CREATE EXPERIMENT** to complete the process.

## Starting an experiment

When viewing a new experiment, you can start, edit, or delete the experiment.

- **Start**: Starts the experiment. Customer enrollment and data collection begin immediately, but results will take up to 24 hours to begin populating. After that, results refresh periodically; check the **Last updated** timestamp on the Results page to see when data was last refreshed.
- **Edit**: Change the name, enrollment criteria, or Offerings in an experiment before it's been started. After it's been started, only the percent of new customers to enroll can be edited.
- **Delete**: Deletes the experiment.

@@ -203,7 +212,7 @@ When an experiment is running, only the percent of new customers to enroll can b
| Can I edit the Offerings in a started experiment? | Editing an Offering for an active experiment would make the results unusable. Be sure to check before starting your experiment that your chosen Offerings render correctly in your app(s). If you need to make a change to your Offerings, stop the experiment and create a new one with the updated Offerings. |
| Can I run multiple experiments simultaneously? | Yes, as long as they meet the criteria described above. |
| Can I run an experiment targeting different app versions for each app in my project? | No, at this time we don't support setting up an experiment in this way. However, you can certainly create unique experiments for each app and target them by app version to achieve the same result in independent tests. |
| Can I add multiple Treatment groups to a single test? | Yes, experiments support up to 4 variants total: 1 Control (Variant A) and up to 3 Treatment variants (B, C, D). This allows you to test multiple variations simultaneously in a single multivariate experiment. |
| Can I edit the enrollment criteria of a started experiment? | Before an experiment has been started, all aspects of enrollment criteria can be edited. However, once an experiment has been started, only the percent of new customers to enroll can be edited, since editing the audience that an experiment is exposed to would alter the nature of the test. |
| What's the difference between pausing and stopping an experiment? | Pausing temporarily stops new customer enrollment while existing participants continue to see their assigned variant. The experiment can be resumed later. Stopping permanently ends the experiment: new customers won't be enrolled and existing participants will see the Default Offering on their next paywall view. A stopped experiment cannot be restarted. Both paused and stopped experiments continue collecting data for up to 400 days. |
| Can I pause an experiment multiple times? | Yes, you can pause and resume an experiment as many times as needed. This allows you to control enrollment based on your testing needs and timeline. |
@@ -213,4 +222,6 @@ When an experiment is running, only the percent of new customers to enroll can b
| Can I restart an experiment after it's been stopped? | After you choose to stop an experiment, new customers will no longer be enrolled in it, and it cannot be restarted. However, if you need to temporarily halt new enrollments with the option to resume later, consider using the pause feature instead. Paused experiments can be resumed at any time. If you've already stopped an experiment and want to continue testing, create a new experiment and choose the same Offerings as the stopped experiment. You can use the duplicate feature to quickly recreate the same experiment configuration. *(NOTE: Results for stopped experiments will continue to refresh for 400 days after the experiment has ended)* |
| Can I duplicate an experiment? | Yes, you can duplicate any existing experiment from the experiments list using the context menu. This creates a new experiment with the same configuration as the original, which you can then modify as needed before starting. This is useful for running similar tests or follow-up experiments. |
| What happens to customers that were enrolled in an experiment after it's been stopped? | New customers will no longer be enrolled in an experiment after it's been stopped, and customers who were already enrolled in the experiment will begin receiving the Default Offering if they reach a paywall again. Since we continually refresh results for 400 days after an experiment has been ended, you may see renewals from these customers in your results, since they were enrolled as part of the test while it was running; but new subscriptions started by these customers after the experiment ended and one-time purchases made after the experiment ended will not be included in the results. |
| How many variants should I use in my experiment? | Start with 2 variants (A/B test) for most cases. Use 3-4 variants (multivariate) when you have multiple distinct hypotheses to test simultaneously. Keep in mind that more variants require more customers to reach statistical significance, so tests take longer. |
| What experiment type should I choose? | Choose the preset that best matches what you're testing. Presets provide relevant default metrics: "Price point" suggests revenue metrics, "Free trial offer" suggests trial conversion metrics, etc. You can always customize metrics after selecting a type. |

12 changes: 12 additions & 0 deletions docs/tools/experiments-v1/creating-offerings-to-test.md
@@ -18,6 +18,18 @@ Through Experiments, you can test any variable related to the products you're se

In addition, by programming your app to be responsive to Offering Metadata, you can test any other paywall variable outside of your product selection as well. [Learn more here](/tools/offering-metadata).
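As a rough illustration, your paywall code can read those metadata values from the `current` Offering and adjust copy or layout per variant. This is a minimal sketch using the RevenueCat iOS SDK (assuming a version with Offering Metadata support); the `headline` and `show_trial_badge` keys are hypothetical examples of keys you'd define yourself.

```swift
import RevenueCat

// Sketch: read Offering Metadata from the current Offering to drive paywall copy.
// The "headline" and "show_trial_badge" keys are hypothetical; define whatever
// keys your paywall needs when configuring each Offering's metadata.
func configurePaywall() {
    Purchases.shared.getOfferings { offerings, error in
        guard let current = offerings?.current else { return }

        // `metadata` is a free-form dictionary attached to the Offering.
        let headline = current.metadata["headline"] as? String ?? "Go Premium"
        let showTrialBadge = current.metadata["show_trial_badge"] as? Bool ?? false

        renderPaywall(headline: headline, showTrialBadge: showTrialBadge)
    }
}

// Placeholder for your own paywall UI code.
func renderPaywall(headline: String, showTrialBadge: Bool) {
    print("Headline: \(headline), trial badge: \(showTrialBadge)")
}
```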

### Testing multiple variations at once

With support for up to 4 variants, you can test multiple hypotheses simultaneously. For example:
- **Variant A (Control)**: Current $9.99/month pricing
- **Variant B**: Test $7.99/month (lower price)
- **Variant C**: Test $12.99/month (higher price)
- **Variant D**: Test $9.99/month with 7-day trial (same price, add trial)

This multivariate approach can be faster than running sequential A/B tests, but requires more traffic to reach statistical significance.

When choosing experiment types in the dashboard, select the preset that matches your primary variable (e.g., "Price point" for pricing tests, "Free trial offer" for trial tests). This will suggest relevant metrics to track for your experiment.

## Setting up a new offering to test your hypothesis

Experiments uses [Offerings](/getting-started/entitlements#offerings) to represent the hypothesis that's being tested (aka: the group of products that will be offered to your customers). An Offering is a collection of Packages that contain Products from each store you're looking to serve that Offering on.
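For context, here is roughly how an app consumes that structure at runtime: it fetches Offerings, takes the `current` one (which Experiments switches per variant), and renders its Packages. This is a hedged sketch against the RevenueCat iOS SDK; confirm the exact API names for the SDK version you ship.

```swift
import RevenueCat

// Sketch: list the Packages (and their store Products) in the current Offering.
// Whichever Offering the experiment assigns to this customer comes back as
// `current`, so this display code is the same for every variant.
func showCurrentOffering() {
    Purchases.shared.getOfferings { offerings, _ in
        guard let offering = offerings?.current else { return }

        for package in offering.availablePackages {
            let product = package.storeProduct
            print("\(package.identifier): \(product.localizedTitle) at \(product.localizedPriceString)")
        }
    }
}
```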
4 changes: 3 additions & 1 deletion docs/tools/experiments-v1/experiment-results-summaries.md
@@ -25,6 +25,8 @@ You must first verify your email address with us in order to receive Experiment

We'll send you an email for each experiment you've had running in the last week in the Projects that you've subscribed to receive these summaries for. It will include the latest results for the experiment, focused on the following key metrics.

For multivariate experiments (3-4 variants), the summary includes performance for all variants compared to the control.

| Metric | Definition |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Initial conversion rate | The percent of customers who purchased any product. |
@@ -33,7 +35,7 @@ We'll send you an email for each experiment you've had running in the last week
| Realized LTV (revenue) | The total revenue that's been generated so far (realized). |
| Realized LTV per customer | The total revenue that's been generated so far (realized), divided by the number of customers. This is often your primary success metric for determining which variant performed best. |

All metrics are reported separately for the Control variant, each Treatment variant, and the relative difference between each treatment and control.
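As a hedged illustration of that last point (the dashboard's exact formula isn't restated here), the relative difference is typically (treatment - control) / control: if Control's realized LTV per customer is $0.40 and Variant B's is $0.50, the summary would report roughly a +25% difference for Variant B.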

:::tip Full results on the Dashboard
To analyze how these metrics have changed over time, review other metrics, and breakdown performance by product or platform; you can click on the link in the email to go directly to the full results of your experiment.
26 changes: 23 additions & 3 deletions docs/tools/experiments-v1/experiments-overview-v1.md
@@ -5,7 +5,7 @@ slug: experiments-overview-v1
hidden: false
---

Experiments allow you to answer questions about your users' behaviors and app's business by A/B testing multiple paywall configurations (2-4 variants) in your app and analyzing the full subscription lifecycle to understand which variant is producing more value for your business.

While price testing is one of the most common forms of A/B testing in mobile apps, Experiments are based on RevenueCat Offerings, which means you can A/B test more than just prices, including: trial length, subscription length, different groupings of products, etc.

@@ -25,6 +25,22 @@ If you need help making your paywall more dynamic, see [Displaying Products](/ge
To learn more about creating a new Offering to test, and some tips to keep in mind when creating new Products on the stores, [check out our guide here](/tools/experiments-v1/creating-offerings-to-test).
:::

## Experiment Types

When creating an experiment, you can choose from preset experiment types that help guide your setup with relevant default metrics:

- **Introductory offer** - Test different introductory pricing strategies
- **Free trial offer** - Compare trial lengths or presence/absence of trials
- **Paywall design** - Test different paywall layouts and presentations
- **Price point** - Compare different price points for your products
- **Subscription duration** - Test different subscription lengths (monthly vs yearly)
- **Subscription ordering** - Test different product ordering or prominence

Choosing the right preset automatically suggests relevant metrics for your experiment type, making it easier to track what matters most for your test.

You can also click **+ New experiment** to create a custom experiment with your own metrics without selecting a preset.

![Experiment type selection](/docs_images/experiments/v1/experiments-type-selection.png)

![Experiments](/docs_images/experiments/v1/experiments-learn.webp)

As soon as a customer is enrolled in an experiment, they'll be included in the "Customers" count on the Experiment Results page, and you'll see any trial starts, paid conversions, status changes, etc. represented in the corresponding metrics. (Learn more [here](/tools/experiments-v1/experiments-results-v1))
@@ -55,8 +71,8 @@ Programmatically displaying the `current` Offering in your app when you fetch Of
:::

1. Create the Offerings that you want to test (make sure your app displays the `current` Offering; a minimal code sketch follows this list.) You can skip this step if you already have the Offerings you want to test.
2. Create an Experiment and choose between 2-4 variants to test. You can select from experiment type presets (Price point, Free trial offer, etc.) to get relevant default metrics, or create a custom experiment. You can create a new experiment from scratch or duplicate an existing experiment to save time when testing similar configurations. By default you can choose one Offering per variant, but by creating Placements your Experiment can instead have a unique Offering displayed for each paywall location in your app. [Learn more here](https://www.revenuecat.com/docs/tools/experiments-v1/configuring-experiments-v1#using-placements-in-experiments).
3. Run your experiment and monitor the results. There is no time limit on experiments, so you can pause enrollment if needed and stop it when you feel confident choosing an outcome. (Learn more about interpreting your results [here](/tools/experiments-v1/experiments-results-v1))
4. Once you’re satisfied with the results you can set the winning Offering(s), if any, as default manually.
5. Then, you're ready to run a new experiment.
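To make step 1 concrete, here is a minimal, hedged sketch of a paywall flow that fetches Offerings, displays the `current` one, and purchases a selected Package. Because Experiments controls which Offering is returned as `current`, no app update is needed to switch variants. API names follow the RevenueCat iOS SDK's async/await methods; the `premium` entitlement identifier is a hypothetical example.

```swift
import RevenueCat

// Sketch: display the current Offering and purchase a package from it.
// Experiments decides which Offering is `current` for each enrolled customer,
// so this code is identical for every variant.
func purchaseFromCurrentOffering() async {
    do {
        let offerings = try await Purchases.shared.offerings()
        guard let offering = offerings.current,
              let package = offering.availablePackages.first else { return }

        let result = try await Purchases.shared.purchase(package: package)
        if result.customerInfo.entitlements["premium"]?.isActive == true {
            // "premium" is a hypothetical entitlement identifier.
            print("Purchase complete, premium unlocked")
        }
    } catch {
        print("Fetch or purchase failed: \(error)")
    }
}
```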

@@ -82,6 +98,10 @@ You can't restart a test once it's been stopped.

It's tempting to try to test multiple variables at once, such as free trial length and price; resist that temptation! The results are often clearer when only one variable is tested. You can run more tests for other variables as you further optimize your LTV.

:::tip Multivariate testing
With support for up to 4 variants, you can test multiple variations of the same variable simultaneously (e.g., testing $5, $7, and $9 price points in a single experiment). This is different from testing multiple variables at once: each variant should differ in only that one variable to keep the results interpretable.
:::

**Run multiple tests simultaneously to isolate variables & audiences**

If you're looking to test the price of a product and its optimal trial length, you can run 2 tests simultaneously that each target a subset of your total audience. For example, Test #1 can test price with 20% of your audience, and Test #2 can test trial length with a different 20% of your audience.