The Experiments feature gives users the power to create variants of a Page and compare their performance against defined goals. Once an Experiment is finished, the user can then promote the winning variant to become the new main version of the Page.
Experiments let you refine your site's presentation based on real user interaction: you can test updates for relative engagement and use that data to decide which changes best serve both you and your audience.
Note: Enabling Experiments requires configuration via the Analytics App. Please contact your Customer Success Representative to get started.
Usage
To begin an Experiment, access any Page in one of its viewing modes — such as through the Pages Tool.
On the right-hand menu bar, the button labeled A/B with a beaker icon takes you to the Page's Experiments pane. There, two controls are present at all times.
First, a dropdown allows filtering by Experiment state; check as few or as many states as you wish to view the corresponding Experiments.
Second, across from the dropdown menu is the Create a New Experiment button; click this and enter a name to begin configuring an Experiment.
Each Experiment consists of several component parts: variants, goals, a traffic allocation scheme, and a schedule. Even before an Experiment finishes, once it has recorded a minimum of 10 sessions, a results pane provides data visualizations summarizing the results so far.
Variants
Every experiment must have either one or two variants defined alongside the original page.
Each variant is a discrete dotCMS page that can be edited with the full suite of page editor tools. This means, for example, that you can create different versions of the variant for different personas, different languages, and so on. As long as the banner at the top of the page editor indicates that you're editing a variant, all changes you make — including language- or persona-specific versions — will be served to users who are routed to that variant.
To add a new variant, click the Add New Variant button, and then assign it a name.
Once the named variant appears in the list, you may assign it a weight, which determines the percentage of the Experiment's traffic that will be randomly served that variant. By default, all variants are given equal weight, but you may assign custom weight percentages instead.
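As an illustration of how weighted selection behaves, here is a minimal conceptual sketch (not the dotCMS implementation; the function name and weight map are hypothetical):

```typescript
// Hypothetical sketch: pick a variant according to percentage weights.
// `weights` maps variant names to percentages that sum to 100.
function pickVariant(weights: Record<string, number>): string {
  const roll = Math.random() * 100; // uniform draw in [0, 100)
  let cumulative = 0;
  for (const [variant, weight] of Object.entries(weights)) {
    cumulative += weight;
    if (roll < cumulative) return variant;
  }
  return "original"; // fallback for floating-point rounding gaps
}

// Equal weighting across the original and two variants:
pickVariant({ original: 33.34, "variant-a": 33.33, "variant-b": 33.33 });
```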
Each variant may also be edited independently; clicking Edit will take you to Edit Mode, treating the variant as an entirely separate Page. You may add new content, modify or remove existing content, or change the Page's layout without affecting the original Page.
Goals
An experiment must also have a goal — the objective of the Experiment, according to which the results are evaluated.
A goal can be established from a number of available metrics:
Metric | Parameters | Description |
---|---|---|
Bounce Rate | N/A | A Bounce occurs when the Page inside the Experiment is the first and the last Page of the session; the user landed on the Page and didn’t click a link to produce further navigation, or perform any other activity. |
Exit Rate | N/A | Similar to Bounce Rate, an Exit takes place when the Page inside the Experiment is simply the last Page of the session, rather than the last and only one. |
Reach Page | A URL or partial-URL pattern | Gauges success according to whether a user visits a particular Page in the course of their session, after visiting the Experiment. |
URL Parameter | A URL parameter or partial pattern | Tracks whether a user accessed the Page with the appropriate URL query parameter present after visiting the Experiment. This option affords the most flexibility in defining behaviors that result in conversion — e.g., clicking a specific link on a specific page, clicking a control, or running custom JavaScript code (see the example below). |
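As a hedged illustration of the URL Parameter goal, the snippet below navigates with a conversion parameter when a visitor clicks a tracked control. The parameter name converted, the element id, and the target path are hypothetical choices, not values dotCMS requires:

```typescript
// Hypothetical example: mark a conversion by navigating with a query parameter.
// An Experiment goal configured to match "converted=true" would then count it.
const cta = document.getElementById("signup-button");
cta?.addEventListener("click", () => {
  const url = new URL("/thank-you", window.location.origin);
  url.searchParams.set("converted", "true"); // the parameter the goal watches for
  window.location.assign(url.toString());
});
```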
Traffic Allocation
Traffic allocation is an optional element of an Experiment. It is the percentage of user traffic that will take part in the Experiment.
This works in tandem with the weights described under Variants. When receiving traffic, the traffic allocation determination is performed first; if the traffic is determined to take part in the Experiment, only then are variant weights factored.
In other words, if 50% of traffic is allocated to the Experiment, and the Experiment consists of a 50% weighted split between an original and a variant, then any given visitor has a 25% chance to see the variant instead of the original.
Although many Experiments will use 100% traffic allocation to achieve more representative results, a lower allocation can be helpful to ensure only a small number of users see the variant — especially where changes might reduce conversions rather than increase them.
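The two-stage routing can be summarized in a short conceptual sketch. This illustrates the arithmetic above; it is not dotCMS's actual routing code:

```typescript
// Conceptual model of two-stage routing: allocation first, then variant weight.
function routeVisitor(allocationPct: number, variantWeightPct: number): string {
  // Stage 1: does this visitor take part in the Experiment at all?
  if (Math.random() * 100 >= allocationPct) return "original (outside Experiment)";
  // Stage 2: within the Experiment, apply the variant weight.
  return Math.random() * 100 < variantWeightPct ? "variant" : "original";
}

// With 50% allocation and a 50/50 split, the variant is seen by
// roughly 0.5 * 0.5 = 25% of all visitors.
routeVisitor(50, 50);
```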
Scheduling
Another optional element, scheduling allows the user to define start and end dates and times for the Experiment. You may set a start date and time, an end date and time, both, or neither. Each combination yields slightly different behavior, as summarized in the table and sketch below.
Start Defined | End Defined | Result |
---|---|---|
✅ | ✅ | A user-defined start and end will cause the Experiment to run for the specified amount of time, up to the configured maximum (initial value: 90 days). |
✅ | ⬜ | If a start is specified without an end, an end will be set automatically to give the Experiment the configured “default” length (initial value: 14 days). |
⬜ | ✅ | If an end is specified without a start, the experiment will automatically start to ensure the Experiment runs for the configured “default” length (initial value: 14 days). |
⬜ | ⬜ | With neither start nor end specified, the Experiment must be started manually, and will automatically end after the configured “default” length (initial value: 14 days). |
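The table amounts to a small decision rule. Here is a hedged sketch of that rule using the initial default of 14 days; the function and parameter names are illustrative only:

```typescript
// Illustrative encoding of the scheduling rules in the table above.
const DEFAULT_DURATION_DAYS = 14; // the configured "default" length
const DAY_MS = 24 * 60 * 60 * 1000;

// `start` and `end` are undefined when not user-defined; `manualStart`
// stands in for the moment the user clicks Start Experiment.
function resolveSchedule(start?: Date, end?: Date, manualStart = new Date()) {
  const effectiveStart =
    start ??
    (end ? new Date(end.getTime() - DEFAULT_DURATION_DAYS * DAY_MS) : manualStart);
  const effectiveEnd =
    end ?? new Date(effectiveStart.getTime() + DEFAULT_DURATION_DAYS * DAY_MS);
  return { effectiveStart, effectiveEnd };
}
```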
There is no way to run an Experiment indefinitely.
An Experiment may always be ended manually, but doing so is typically discouraged; browsing habits tend toward weekly patterns, and uneven sampling can adversely impact the accuracy of the collected data. It is advisable to end an Experiment manually only under certain circumstances — such as when you wish to change an Experiment by starting over, or when you are otherwise confident that you have collected enough data with a sufficient degree of clarity.
States
Every Experiment is always in one of the following states:
- Draft
- Scheduled
- Running
- Ended
- Archived
When you view the Experiments list for any page, by default it will show all Experiments for the Page in any of these states except Archived. You can change the filter on the Experiments list to show or hide any of these states (including Archived).
Actions
Users can perform a number of actions on Experiments once the mandatory configuration is completed. Actions may move the Experiment into a different state. The actions available for an Experiment can be viewed through the “hamburger” buttons (⋮) at the right side of the Experiments list.
Action | Required State | New State | Description |
---|---|---|---|
Edit/View Configuration | Any | No Change | Allows you to view — or edit, if it has not begun — the configuration of the Experiment. |
View Results | Running / Ended | No Change | Brings you to the result charts once an Experiment has accrued data. |
Start Experiment | Draft / Scheduled | Running | Begins the Experiment immediately, for the default length of 14 days unless otherwise specified. |
Schedule Experiment | Draft | Scheduled | Allows scheduling of the Experiment per the section above. |
End Experiment | Running | Ended | Ends Experiment. Allows immediate viewing of results, promotion of winner, etc. |
Abort Experiment | Running | Draft | Halts the Experiment and discards all data gathered during its run. |
Archive | Ended | Archived | Archives the Experiment; this removes it from the default list, and renders its results no longer viewable. |
Cancel Scheduling | Scheduled | Draft | Removes scheduling from the Experiment. |
Push Publish | Any | No Change | See section below. |
Add to Bundle | Any | No Change | See section below. |
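Read as a state machine, the table above can be encoded as a transition map. This is an illustrative restatement with hypothetical type names:

```typescript
// Illustrative encoding of the state-changing actions in the table above.
type ExperimentState = "Draft" | "Scheduled" | "Running" | "Ended" | "Archived";

// Each action lists the states it may be invoked from and the state it produces.
const transitions: Record<string, { from: ExperimentState[]; to: ExperimentState }> = {
  "Start Experiment":    { from: ["Draft", "Scheduled"], to: "Running" },
  "Schedule Experiment": { from: ["Draft"],              to: "Scheduled" },
  "End Experiment":      { from: ["Running"],            to: "Ended" },
  "Abort Experiment":    { from: ["Running"],            to: "Draft" },
  "Archive":             { from: ["Ended"],              to: "Archived" },
  "Cancel Scheduling":   { from: ["Scheduled"],          to: "Draft" },
};
```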
Push Publishing
Experiments will not be push published when the Page is pushed. To push an Experiment, you must explicitly push the Experiment itself, using the blue hamburger button (⋮) at the top of the Experiment.
When you push an Experiment, the state of all Experiments on the page is synchronized from the sender to the receiver. This means, for example, that if you stop an Experiment on the sender, start a new Experiment, and push the new Experiment, the new one will be pushed and started on the receiver, and the old one will likewise be stopped there.
Push Publishing Dependencies
The Page is a push publishing dependency of the Experiment. If an Experiment is pushed, but the Page was not already pushed, the Page will be pushed with the Experiment. However, the Experiment is not a dependency of the Page, so the Experiment is never pushed unless it's explicitly added to the bundle.
Pushing Experiment Data
The data for an Experiment is shared among all servers the Experiment exists on.
Because the state of the Experiment is always synchronized between the sender and receiver, the Experiment must be running on the sender whenever it is running on the receiver.
This means that if a user visits a page with an Experiment on the front end of the sender, they’ll add data to the experiment that’s indistinguishable from data collected from the same page on the receiver — i.e., the datasets merge during synchronization.
Because of this arrangement, users on the sending server are able to view the data being collated on the receiver in real time, without the need to configure permissions on the receiver accordingly.
Results Pane
The Results Pane provides data visualization in the form of interactive charts and a summary data table, each offering a different view of the same data.
Along the top of the Results Pane, you can see the current “winner” of the Experiment, its goal and run dates, and the number of sessions it has recorded so far. These can be refreshed via the button immediately to the right.
The default chart, Daily Results, shows the conversion rate for each variant on each day. Hovering your cursor over the chart displays the value at the indicated datapoint.
Bayesian Results
The second chart tab displays Bayesian Results for the original and each variant.
Bayesian inference is an advanced method of analysis in which prior beliefs are repeatedly updated by observed data, via a likelihood function, to generate an increasingly accurate posterior probability. Bayesian inference can provide a high degree of confidence about which variant will perform better, and can do so in a relatively short time.
Bayesian inference operates from a different philosophical starting point than frequentist inference, which emphasizes long-run frequencies of repeated events with fixed parameters.
The Bayesian chart shows curves reflecting probability distributions. In calculating these, Experiments automatically choose a value for the prior, which assumes no knowledge of what the actual conversion rate might be.
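For intuition only (dotCMS does not document its exact prior here): conversion rates of this kind are commonly modeled with a Beta distribution, where a flat Beta(1, 1) prior encodes no knowledge of the true rate and each batch of observed sessions updates it:

```latex
% Beta-Binomial update: with c conversions observed in n sessions,
% a Beta(a, b) prior yields a Beta(a + c, b + n - c) posterior.
p \sim \mathrm{Beta}(a, b)
\quad\Longrightarrow\quad
p \mid \text{data} \sim \mathrm{Beta}(a + c,\; b + n - c)
```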
In the data table, the two rightmost numeric columns are both calculated along Bayesian lines:
- Probability to be Best gives the probability that each variant is the one that will result in the highest conversion rate;
- Conversion Rate Range specifies a range indicating the lowest and highest values the actual conversion rate is likely to take, with 95% probability (a Bayesian credible interval). A sketch of how both figures can be computed follows below.
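Below is a hedged sketch of how both figures can be computed by sampling each variant's Beta posterior. This mirrors the standard Monte Carlo technique, not necessarily dotCMS's exact implementation; the Conversion Rate Range would analogously be taken from the 2.5th and 97.5th percentiles of each variant's samples:

```typescript
// Gamma(shape, 1) draw via a sum of exponentials; valid for integer shapes,
// which holds here since conversions and sessions are whole numbers.
function sampleGammaInt(shape: number): number {
  let sum = 0;
  for (let i = 0; i < shape; i++) sum += -Math.log(Math.random());
  return sum;
}

// Beta(a, b) draw as X / (X + Y) with X ~ Gamma(a), Y ~ Gamma(b).
function sampleBeta(a: number, b: number): number {
  const x = sampleGammaInt(a);
  return x / (x + sampleGammaInt(b));
}

// Probability to be Best: how often each variant's sampled rate wins.
function probabilityToBeBest(
  posteriors: { a: number; b: number }[],
  draws = 10_000,
): number[] {
  const wins = posteriors.map(() => 0);
  for (let d = 0; d < draws; d++) {
    const samples = posteriors.map((p) => sampleBeta(p.a, p.b));
    wins[samples.indexOf(Math.max(...samples))]++;
  }
  return wins.map((w) => w / draws);
}

// e.g., original: 40 conversions in 400 sessions; variant: 55 in 410,
// each starting from a flat Beta(1, 1) prior:
probabilityToBeBest([{ a: 41, b: 361 }, { a: 56, b: 356 }]);
```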
Experiment Data Is Immutable
Due to the nature of the statistical method used, once data is collected for an Experiment, it cannot be removed from that Experiment without fully invalidating or resetting the Experiment (as with the Abort Experiment action). For example, you cannot elect to change the start date after the fact to exclude some test data you generated at the beginning of the Experiment.
If you collect data that you do not wish to include in the Experiment, the only way to remove it is to abort and restart the Experiment.
Configuration Properties
Certain Experiment settings can be adjusted globally by editing configuration properties.
Duration
Experiments are limited to a specific minimum and maximum duration, and there is a default duration that an Experiment will use if a specific end date has not been scheduled. Each of these values may be changed.
Their default values, integers representing a number of days, are as follows:
EXPERIMENTS_MIN_DURATION=8
EXPERIMENTS_MAX_DURATION=90
EXPERIMENTS_DEFAULT_DURATION=14
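For example, to raise the maximum duration to 120 days via an environment variable, following the DOT_ prefix convention shown in the Auto-Inject section below, the setting would presumably take this form (verify the exact name for your version):
DOT_EXPERIMENTS_MAX_DURATION=120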
Auto-Inject
In order for Experiments to track conversions on a Page, a block of JavaScript must be included in the Page. There are two ways to include the code: manually — by editing the Theme, Template, or Page — or automatically, by configuring the ENABLE_EXPERIMENTS_AUTO_JS_INJECTION property as follows:
DOT_ENABLE_EXPERIMENTS_AUTO_JS_INJECTION=true
If the ENABLE_EXPERIMENTS_AUTO_JS_INJECTION property is set to true, then the appropriate code will be added to ALL Pages on the dotCMS server — even pages which do not have an Experiment on them, and which don't have any bearing on the conversion for any Experiments.
If this property is set to false (the default value), then the user must manually add the code to both the Page the Experiment is running on and any other Pages which may be involved in the conversion — for instance, the Page a user is intended to reach with the Reach Page goal.
To perform manual inclusion, simply add the following Velocity command to the Page in question:
$dotExperiments.code()
This injection code is not idempotent, so it is important to ensure this line is only included one time per Page, and that it is not included while the auto-inject option is enabled; repeated inclusions may impact site performance and data collection quality.
Note: The auto-inject option is disabled by default; depending on the sorts of code present or expected on a certain Page, automatic injection may in some cases result in errors. Always test before enabling in a production environment.
Lookback Window
The lookback window is the length of time (in days, as an integer) a site visitor will be remembered within the Experiment. A remembered user will be served the same variant they were originally served on their first Experiment visit.
After the lookback window expires, a user visiting the Page may be routed to a different variant than they were the first time. The default configuration property value is expressed below.
EXPERIMENTS_LOOKBACK_WINDOW=14
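As a final illustration, the lookback behavior can be modeled conceptually as follows; the storage helper and names are hypothetical, not part of dotCMS:

```typescript
// Conceptual model of the lookback window.
const LOOKBACK_WINDOW_DAYS = 14;
const DAY_MS = 24 * 60 * 60 * 1000;

interface Assignment { variant: string; firstSeen: number; }

// `stored` is whatever assignment (if any) was remembered for this visitor;
// `assignFresh` performs the normal allocation-and-weight routing.
function variantFor(
  stored: Assignment | undefined,
  assignFresh: () => string,
): Assignment {
  const now = Date.now();
  if (stored && now - stored.firstSeen < LOOKBACK_WINDOW_DAYS * DAY_MS) {
    return stored; // still within the window: serve the same variant
  }
  // First visit, or window expired: the visitor may be routed anew.
  return { variant: assignFresh(), firstSeen: now };
}
```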