With Yieldify, you can create three types of tests – incremental, lead capture and promotional – to reach your business goals. Yieldify assigns a primary performance metric to each test type to determine its success.
This article explains how the performance metric varies based on the goal of a test.
Yieldify test types
Incremental
The objective of an incremental test is to determine whether adding new content to your website via Yieldify drives an increase in overall website performance. Yieldify measures this increase via Revenue Uplift when consumers can spend money on the website, and via Conversion Uplift otherwise (for example, an insurance website where creating an account is considered the primary 'conversion').
In order to make a test incremental on Yieldify, you only need to add a control group to your test.
The control group is a portion of traffic within an incremental test that fulfils all of the criteria to trigger a Yieldify Experience but is not exposed to it. It allows Yieldify to compare the revenue and sales generated by consumers who saw the experience against those who did not, and so validate the incremental impact.
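To illustrate the comparison a control group enables, the sketch below uses a common relative-uplift formula. This is an assumption for illustration only – Yieldify's exact calculation may differ.

```python
def uplift(exposed_value_per_session: float, control_value_per_session: float) -> float:
    """Relative uplift of the exposed group versus the control group.

    Illustrative only: a common definition of uplift, not necessarily
    Yieldify's exact methodology.
    """
    return (exposed_value_per_session - control_value_per_session) / control_value_per_session

# e.g. £2.30 revenue per session for exposed traffic vs £2.00 for the
# control group gives a 15% Revenue Uplift.
print(f"{uplift(2.30, 2.00):.0%}")  # prints 15%
```

The same formula applies to Conversion Uplift by comparing conversions per session instead of revenue per session.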
Lead Capture
The objective of a lead capture test is to identify the variant of an experience that generates the most leads. Its performance metric is Lead Capture Rate.
You can create a lead capture test by adding a submission form to every variant of the experience for every targeted device.
If you add a control group to a lead capture test, it will be treated as an incremental test and measure performance via Revenue Uplift or Conversion Uplift. This is because the control group cannot submit a lead, so its lead capture performance cannot be compared against that of the other variants.
Promotional
The objective of a promotional test is to identify which variation of a Yieldify experience maximizes the revenue or conversions of a short-term promotion. These experiences contain a Call-to-Action (CTA) button, and Yieldify measures the impact of the promotion by how consumers engage with this CTA. The performance metric of promotional tests is Click Revenue Uplift if the website reports revenue, or Click Conversion Uplift if it only reports conversions.
You can create a promotional test by adding a CTA to every variant of the experience for every targeted device.
If you add a control group to a promotional test, it will be treated as an incremental test and measure performance via Revenue Uplift or Conversion Uplift. This is because the control group cannot register a click, so its click performance cannot be compared against that of the other variants.
Use cases
Use case experiences are a campaign type that is not meant to be tested.
The objective of a use case experience is to display a temporary and urgent message to all website consumers. As a result, these experiences are not given a control group.
These experiences are informative and do not have a performance objective. As a result, use case experiences do not have a performance metric.
Summary of test types and their performance metric
Follow the logic of the table below to identify the performance metric of a test.
| Website reports revenue? | Lead Capture | CTA | Control Group | Test Type | Primary Metric |
| --- | --- | --- | --- | --- | --- |
| Yes | Yes | Yes | Yes | Incremental | Revenue Uplift |
| Yes | No | Yes | Yes | Incremental | Revenue Uplift |
| Yes | No | No | Yes | Incremental | Revenue Uplift |
| Yes | Yes | Yes | No | Lead | Lead Capture Rate |
| Yes | No | Yes | No | Promotional | Click Revenue Uplift |
| Yes | No | No | No | Use Case | No performance metric |
| No | Yes | Yes | Yes | Incremental | Conversion Uplift |
| No | No | Yes | Yes | Incremental | Conversion Uplift |
| No | No | No | Yes | Incremental | Conversion Uplift |
| No | Yes | Yes | No | Lead | Lead Capture Rate |
| No | No | Yes | No | Promotional | Click Conversion Uplift |
| No | No | No | No | Use Case | No performance metric |
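The decision logic of the table can be sketched as a small function. This is a hypothetical helper for illustration, not part of any Yieldify API.

```python
def primary_metric(reports_revenue: bool, has_lead_form: bool,
                   has_cta: bool, has_control_group: bool) -> tuple[str, str]:
    """Return (test type, primary metric) following the summary table above."""
    if has_control_group:
        # Any test with a control group is treated as incremental.
        metric = "Revenue Uplift" if reports_revenue else "Conversion Uplift"
        return ("Incremental", metric)
    if has_lead_form:
        # Without a control group, a lead capture form makes it a lead test.
        return ("Lead", "Lead Capture Rate")
    if has_cta:
        metric = "Click Revenue Uplift" if reports_revenue else "Click Conversion Uplift"
        return ("Promotional", metric)
    # No form, CTA or control group: an untested use case experience.
    return ("Use Case", "No performance metric")

print(primary_metric(True, False, True, False))
# ('Promotional', 'Click Revenue Uplift')
```

Note how the control group takes precedence: adding one to any experience makes the test incremental regardless of its other elements.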
Revenue vs. Conversions
The performance metric assigned to every test type depends on the reporting style of the website:
Websites that report revenue have revenue-based performance metrics. On a revenue-reporting website, a conversion is an order of goods or services with a financial value attached.
Websites that do not report revenue have conversion-based metrics. On a non-revenue-reporting website, a conversion is an order of goods or services without a financial value attached (e.g. a form submission requesting a quote).
Where do I see the performance metric of a test?
The performance metric of a test is shown in the Campaign Testing section of the Settings page in the Campaign Builder. Adding a control group to your test determines the recommended performance metric.
Note: Multi-variant experiences that do not contain a lead capture form or a CTA are required to run as incremental tests. The control group is necessary to compare the impact of each variant given that they don’t contain measurable user interactions. As a result, it is not possible to select “Test this campaign without a control group” in the Settings page of the Campaign Builder.
How does the performance metric impact reporting?
The winning variant of a test is the one that generates the highest positive performance metric.
Once your campaign is launched, you’ll see the performance metric highlighted in the reporting tables of a test’s Performance tab.
Read more about probability of success here.
Considerations
Yieldify will still provide reporting on all tests’ supporting metrics (i.e. sessions, conversions, etc.). See the breakdown of how we report on test performance here.
The presence of lead capture forms and CTAs, the number of variants, and the presence of a control group all contribute to the performance metric of a test.
Users cannot override the performance metric for a test. The logic is automated to reflect the objective of the test.
It is not possible to change the performance metric of any test after it has gone live.
It is not possible to add or remove the control group of an incremental test after it has gone live. Once Yieldify has identified the winning variant of an incremental test – the variant with the highest positive uplift compared to the control group – traffic allocated to the control group can be reduced to between 0% and 10%, but the group itself cannot be removed entirely from the test.
It is not possible to add or remove fundamental design elements (i.e. forms and CTAs) of a multi-variant test running without a control group after it has gone live. Doing so implies a change in the objective of the test and requires a different performance metric. In such cases, please start a new test by creating a new campaign.
It is not recommended to add or remove fundamental design elements (i.e. forms and CTAs) of an incremental test after it has gone live. Doing so changes the nature of a variant and may cause significant changes to its performance mid-test. In such cases, please start a new test by creating a new campaign.