As of the first week of March 2020, the total number of confirmed cases in mainland China, the epicentre of the COVID-19 outbreak, is slightly over 80,000. This works out to be no more than 6 cases in 100,000 people. The probability is much lower in most other places, such as 3.38 cases in 100,000 people in Italy, 1.89 in 100,000 in Singapore, and 0.03 in 100,000 in the US.

Despite the low probability, many people appear more fearful than the numbers warrant, with an exaggerated perception of risk.

Panic buying happened within hours when the DORSCON level was raised to Orange in Singapore early last month. Canned food, rice, instant noodles, and even toilet paper were swept off the shelves that evening, with queues longer than supermarkets had ever seen. The same phenomenon hit the US, Germany, Italy and Indonesia this week, after more local cases were confirmed. Masks, sanitizers, and disinfectants are sold out, social events and activities are cancelled, and many instances of racism against people of Chinese ethnicity have been observed around the world.

Is this fear rational? It seems the fear is spreading faster, and affecting people’s lives to a larger extent, than the virus itself. Why is that?

The following five cognitive biases can explain most of these irrational behaviours during the COVID-19 outbreak.

1.     Negativity bias – we have the tendency to pay more attention to bad things

Humans have a natural tendency to place more emphasis on negative things: remembering negative incidents more clearly, being more affected by criticism than by compliments, or feeling more pain from losing $10 than pleasure from finding $10.

“Good things last eight seconds…Bad things last three weeks.” – Linus van Pelt, Peanuts

During the COVID-19 outbreak, we pay far more attention to bad news – the number of new cases, deaths and patients in critical condition – than to the number of recoveries (helped along by news channels' own focus on negative stories, following the same principle). Some people actively seek out information that scares them further, such as 'evidence' that masks are not effective in protecting you from the virus, accounts of past global pandemics, or fake news that exaggerates the severity of the situation. All of this feeds the psychological fear of 'Could it happen to me?'.

2.     Confirmation bias – we pay more attention to information that supports our belief

People are prone to believe what they want to believe, actively looking for evidence that supports their beliefs while dismissing evidence that contradicts them. This confirmation bias is more prevalent in anxious individuals, making them perceive the world as more dangerous than it is. For example, an anxious person is more likely to be sensitive about what others think of them, constantly looking out for signs that people dislike them and giving more weight to negative words or actions.

We naturally seek information to protect ourselves, because the 'unknown' is more frightening than the 'known'. If we think the situation is severe, we focus on news about its severity, which becomes a self-fulfilling prophecy. With more information spreading more quickly than ever over social media, the effects of this bias are far more pronounced. A cursory scroll through a Reddit thread on COVID-19 can quickly convince someone that it will bring about the end of the world!

3.     Probability neglect – we have the tendency to disregard probability when making decisions

A potential outcome that is incredibly pleasant or terrifying can override our rational minds. We are swayed by our emotions about the potential outcome and pay too little attention to its actual probability.

Stay ahead

Get regular insights

Keep up to date with the latest insights from our research as well as all our company news in our free monthly newsletter.

Looking factually at the numbers of COVID-19, the probability of catching the virus is very low – much lower than many risks we are accustomed to, such as the common flu or cold. Yet people are terrified, with extreme panic and preventive behaviours. That the virus is new, and can be fatal, adds to the fear and clouds judgement. Many are avoiding malls, dining out less and cancelling travel plans, with knock-on economic implications. The 'unknown' plays on our feelings, and we react to those feelings, not to the probability of the risk.

4.     Stereotyping – we tend to make unjustified generalisations

On 11 February, the World Health Organization (WHO) announced the official new name of the coronavirus to be COVID-19. According to WHO, they had to find a name that did not refer to a geographical location, an animal, an individual or a group of people.

This is not just a WHO naming guideline but an important step to reduce negative stereotypes. During the early stages of the outbreak, there was hatred directed at Wuhan, and at China, and this prejudice extended to Chinese people outside China. In many countries, people irrationally avoid visiting Chinatown or dining in Chinese restaurants, as if eating at a neighbourhood Chinese restaurant will give you the virus even when the neighbourhood is safe. Aside from stereotyping being a negative social act in and of itself, such perceptions can also create a false assurance that one is 'immune' to the virus, which in turn can lead to behaviours that run counter to public health advisories.

5.     Illusory truth effect – it’s true if it’s repeated

“Repeat a lie often enough and it becomes the truth” – people tend to believe what they constantly see or hear in the news, regardless of whether there is any evidence of its veracity. A recent study has shown this effect persists even when people are familiar with the subject, as the repeated lies sow doubt in their minds.

This is one of the key reasons why “fake news” has been able to take hold during this outbreak – from quack sesame oil remedies to protect against the virus to misconceptions that packages from China are dangerous to handle. In Singapore, the same few photos of panic buying, circulated repeatedly on social media, made it seem like a nationwide phenomenon. WHO and governments around the world have been actively trying to take back the narrative from these “fake news” sources, but the prevalence of social media and the ease of sharing such information with friends and family make this an uphill battle.

What it means for brands

Firstly, it is important to remember that cognitive biases exist in all of us, and consumer behaviour isn’t always rational. During a crisis, such behaviours are magnified and their repercussions amplified. You should consider what consumers are thinking and how they are reacting. Understanding where the bias comes from, and how it manifests in thinking and actions, can help you decide on strategies that can lead to behavioural change.

Secondly, relying on past information may not help you accurately predict the future, because people’s reactions to the same stimulus may have changed. For example, the last time DORSCON was raised to Orange in Singapore, during the H1N1 crisis in 2009, there was no panic buying or severe shortage of masks and sanitizers. Planning for the future, think about whether your brand will be perceived differently once the outbreak is over – how will people’s mindsets change because of it? What will people look out for post-crisis? Consider how you can address the post-crisis world and find your competitive advantage.

Our kids media experts Bianca Abulafia and Sarah Serbun shared their top tips at Qual 360 on how to conduct qual research with kids, and the cultural considerations to bear in mind in each market.


As you put the Halloween decorations away for another year, are you one of the many people thinking twice about that age-old tradition of carving a pumpkin?

#pumpkinrescue is trending on social media as organisations and consumers alike raise awareness of unnecessary food waste that the Halloween tradition creates. According to Hubbub, in the U.K., 18,000 tonnes of pumpkin go to landfill every year (that is the equivalent of 360 million portions of pumpkin pie) and many people have had enough, using the hashtag to encourage consumers to eat the remains of their pumpkin instead. 

Concerns around food waste are no fad. Our latest research, The Concerned Consumer, found that food waste is a key issue globally, with 63% of consumers telling us they do their bit to address food waste. This is particularly important for consumers in the UK and the US, where the figure rises to 71%. 

Keen to explore this topic in more detail, we’ve been digging into the conversations around food waste on Twitter, using a comparative analytics tool called Relative Insight. 


So aside from discussions around #pumpkinrescue, how is food waste being discussed online?

Freezing food is a key topic of conversation, seen as a sustainable way to keep food fresh for longer and minimise waste overall. And while thinking about pumpkins (which are a fruit, by the way – yes, we googled it), we found that consumers are generally confused about which vegetables and fruit they can and can’t freeze.

Another popular topic around food waste is finding a purpose for food scraps. Consumers are calling for more recipe suggestions incorporating vegetable scraps, or ways of composting them. Take a pumpkin as an example: the flesh can be used in pies and bread, the guts in broth and mulled wine, the skin is edible in smaller varieties, and the seeds can be roasted.

Want to discover more about the environmental, ethical and health concerns driving purchase behaviour in food and drink? Download our Concerned Consumer research.

Cannabis talk in the US media is unavoidable these days as changing legislation and recreational dispensaries continue to open up across selected states in the country.  How can companies outside the cannabis space take advantage of this growing trend? Our research with over 2,000 US consumers sought to understand this new opportunity for brands.

One-in-five (20%) adults nationwide report they have used cannabis in the last 12 months. Of those, two-thirds (66%) consume regularly (at least once a week). While two thirds tell us that consuming cannabis has not changed their social life in any way, 17% are staying home more and 8% say they are going out more. 

Ultimately, this opens up a variety of opportunities for marketers to offer products and services that are tailored to the needs of this group. Meal kit delivery companies could make “dinner party boxes” suited to a night in with friends. Game makers could create games that facilitate creativity and fun. Netflix or Amazon could offer content particularly suited for cannabis-influenced viewers. And clearly, snack makers could have a field day.

In the survey, adults were asked whether they would prefer to consume cannabis or alcohol while doing different popular activities. While clubbing and hosting a dinner party are more likely paired with alcohol, for many other pastimes, cannabis wins.  At home, watching TV/ movies, doing chores, playing board games and socializing with family and friends are all activities where cannabis is preferred.  Going to the movies or to watch live music are also events where adults would prefer cannabis.  A host of other activities are decidedly not alcohol activities, but may be considered “cannactivities” – yoga, gardening, outdoor activities, going to the spa, cultural events and reading.  See the table below for details.

How can your business take advantage of this fast-growing industry? Download the full research report to learn more.

“For each of the following, would you rather do this activity while consuming cannabis, drinking beverages containing alcohol, or neither?”

Cannabis research: would rather consume cannabis vs would rather consume alcohol


We’ve all been there. That moment of frustration when you visit a store or restaurant or hotel and are entirely and completely underwhelmed by the experience. Perhaps it was the inattentive or poorly trained staff. Or the unclear and confusing information. Or the restrictive opening hours. What makes the whole thing worse is that this is not what you were promised – the ads, marketing and branding all suggest a very different experience. As an extreme example, the hot water United got into for forcibly removing a passenger is a complete mismatch with its brand promise: “Connecting people. Uniting the world.”

On the flip side, there are golden moments when the unexpectedly wonderful happens. The barista remembers your name and favourite order; you’re given a hotel room upgrade; the restaurant goes out of their way to accommodate your food allergy.

Both reactions stem from the unexpected: the experience differs from the one the brand promise primed you for, and we have an emotional reaction as we deal with that gap.

Experiences have become perhaps the most important aspect of shaping the brand. Not only can experiences be documented and shared more easily than ever with camera phones and social media; but an experience is more visceral and powerful than any marketing and will live on much longer in the memory.

However, a recent survey by the Chartered Institute of Marketing suggests that only 53% of marketers claim successful alignment between brand promise and experience; just 37% believe their employees understand how to deliver the brand promise; and a measly 17% feel they enable their employees to suggest ways to improve the brand experience.


Part of the reason is that brand experience is hard to measure. Brand health studies measure the brand promise, not the experience; satisfaction studies test the brand’s SOPs rather than the consumer’s experience; and mystery shopping relies on a small sample of outsiders’ opinions. Relying on these studies alone is not enough for the CXO to draw conclusions about how customers are experiencing the brand – and it may not even be the relevant question.

After all, while ‘satisfaction scores’ and ‘likelihood to promote’ a brand can be taken to imply that the customer ‘likes’ the brand, that inference does not show the CXO the nature of the experience, or what specifically created an ‘emotional hook’ strong enough for the customer to promote the brand or report satisfaction. In short, it leaves more questions than answers, rather than illuminating actionable next steps for improving the experience.

Rather, you need a measurement tool that tells you what customers of your brand (as well as your competitor, and even category) value when it comes to experience. Something that complements current studies you already have; but offers deeper insights that can help you create a strategic plan of action. A piece of research that sheds light on not just the ‘what’, but the ‘why’ of your customers’ emotional connection (or disconnection) with your brand based on their experience.

In short, Kadence’s Emotional Connection Matrix (ECM) is what you need. We have completed a study amongst Singapore consumers across categories on how individual brands scored in terms of emotionally-connecting with them, how these brands compare to others, which product category has the highest tendency to provoke positive emotional connections based solely on brand experiences, and what kinds of actions actually lead to said positive emotional connections. Drop by the CX Conference 2019 at Four Seasons Hotel on 26th July to satisfy your curiosity, as we talk more about the Emotional Connection Matrix.

How do you create customer delight? Our latest research sought to explore what matters to customers in 11 markets which match our international footprint: the UK, US, Singapore, Vietnam, Thailand, the Philippines, Japan, Indonesia, India, China and Hong Kong.

As part of the research, we uncovered 5 must-have principles for any global customer experience strategy. In this blog post, we share these principles, together with examples of brands getting it right, to inspire your strategy development.

1. Understand customers’ needs and feelings

We discovered that what matters most to customers globally is that service personnel take the time to listen and really understand their needs. This far outweighed every other factor. So how do you go further than in-store interactions and deliver this at a strategic level?

Research, of course, is crucial – and doing this in-store can further strengthen the customer experience. A good example comes from British supermarket Morrisons. The brand implemented a “customer listening programme” in 80 stores across the country to speak to customers about their in-store experiences and overall perceptions of the brand. Not only did the strategy enable Morrisons to build relationships with customers, it helped the supermarket understand which elements of its positioning to leverage in future communications and campaigns.

2. React positively to customer requests

Another element that matters to customers is that the service personnel react positively to their requests. But beyond staff training to ensure this is happening in store, what else can brands do?

Starbucks has one solution. They launched My Starbucks Idea, a crowdsourcing platform where customers can request everything from new drink flavours to customer service improvements. Since the site was established, hundreds of ideas have been launched by Starbucks. Providing free WiFi in store was a My Starbucks Idea, as was introducing new payment solutions, and numerous product lines and flavourings started out life on the site. As a way of reacting positively to customer requests and feeding its innovation pipeline at the same time, it’s a real win-win for Starbucks.


3. Show customers they matter

It’s also important that service personnel express how important customers are to the brand. There are numerous ways of achieving this, ranging from small tactical actions to more comprehensive loyalty schemes.

There’s lots that established brands can learn from smaller businesses here. From handwritten notes to customer appreciation events, small gestures can really make a customer feel valued, building that bond with your business.

4. Empower staff to go above and beyond

Customers also value service personnel going beyond their usual responsibilities. But how do you get your staff to make this a reality? One tip is to move away from rigid customer service processes and embrace a more flexible approach. This will empower your staff to react to customers in the most appropriate way, creating a personalised and therefore superior customer experience.

A great example of this comes from UK coffee chain Pret. Each week, staff in the store are allowed to give away a certain number of free drinks to customers. Giving employees the freedom to offer a free coffee to a flustered customer is a small gesture that delivers big returns, quickly making someone’s day and creating a positive brand experience.

5. Give gifts that reflect customers’ needs

Another component to consider adding to your customer experience strategy is gifting. But to really resonate, gifts need to take customer wants and needs into account. If you’re in search of inspiration, look no further than Sephora. The French beauty brand delivers personalised emails – based on an individual’s search history – that give customers the chance to get their hands on a relevant free gift.

As part of our research, we investigated how these factors vary by market. Get in touch with your local office to find out the 5 must-have principles for a best-in-class customer experience strategy in your market.

What does it take to delight today’s customers? Our latest research sought to explore the factors that create truly exceptional customer experiences across 11 markets which match Kadence International’s global footprint: the UK, US, Singapore, Vietnam, Thailand, the Philippines, Japan, Indonesia, India, China and Hong Kong.

Take a look at the infographic below to get a taste of the research or get in touch to learn about the factors that matter most in your country.

"Delight Customer" Infographic


Imagine you’re a digital marketer for an online retailer specialising in fitness gear. You’ve just launched a new line of eco-friendly yoga mats, and you’re tasked with maximising sales through your website. You test two different product page versions to see which drives more purchases. 

Version A features a prominent “Limited Time Offer” banner at the top, while Version B includes a series of customer testimonials right beneath the product title. The results of this A/B test could significantly affect your sales figures and offer deeper insights into what motivates your customers to buy.

Such is the power of A/B testing, a method companies of all sizes use to make data-driven decisions that refine user experiences and improve conversion rates. 

A/B testing provides a data-driven solution to optimise website effectiveness without the guesswork. By comparing two versions of a page or element directly against each other, brands can see which changes produce positive outcomes and which ones do not, leading to better business results and a deeper understanding of customer behaviour.

Whether you’re looking to increase conversion rates, enhance user engagement, or drive more sales, effective A/B testing is the key to achieving your goals precisely and confidently.

A/B testing, or split testing, is a method in which two versions of a webpage or app are compared to determine which performs better. Imagine you’re at the helm of a ship; A/B testing gives you the navigational tools to steer more accurately toward your desired destination—increased sales, more sign-ups, or any other business goal. It involves showing the original version (A) and a modified version (B), where a single element may differ, such as the colour of a call-to-action button or the layout of a landing page, to similar visitors simultaneously. The version that outperforms the other in achieving a predetermined goal is then used moving forward.

The Importance of A/B testing and ROI

The compelling advantage of A/B testing is its direct contribution to enhancing business metrics and boosting return on investment (ROI). 

Online retailers frequently use A/B testing to optimise website leads and increase conversion rates. This includes split testing product pages and online advertisements, such as Google Shopping Ads. By A/B testing different product page layouts, retailers can identify a version that increases their sales, impacting annual revenue. Similarly, SaaS providers test and optimise their landing pages through A/B testing to find the version that increases user sign-ups, directly improving their bottom line.

A/B testing is less about guessing and more about evidence-based decision-making, ensuring every change to your interface is a strategic enhancement, not just a cosmetic tweak.

Preparing for A/B Testing

1. Setting Objectives

Before launching an A/B test, defining clear, measurable objectives is critical. These objectives should be specific, quantifiable, and aligned with broader business goals. Common goals include increasing conversion rates, reducing bounce rates, or boosting the average order value. The clarity of these objectives determines the test’s focus and, ultimately, its success.

2. Identifying Key Elements to Test

Choosing the right elements on your website for A/B testing can significantly affect the outcome. High-impact elements often include:

  • CTAs: Testing variations in the text, colour, or size of buttons to see which drives more clicks.
  • Layouts: Comparing different arrangements of elements on a page to determine which layout keeps visitors engaged longer.
  • Content: Tweaking headlines, product descriptions, or the length of informational content to optimise readability and conversion.
  • Images and Videos: Assessing different images or video styles to see which leads to higher engagement or sales.

3. Understanding Your Audience

Effective A/B testing requires a deep understanding of your target audience. Knowing who your users are, what they value, and how they interact with your website can guide what you test and how you interpret the data from those tests.

Data Analytics Snapshots:

Utilising tools like Google Analytics, heatmaps, or session recordings can provide insights into user behaviour. Heatmaps, for example, can show where users are most likely to click, how far they scroll, and which parts of your site draw the most attention. These tools can highlight areas of the site that are performing well or underperforming, guiding where to focus your testing efforts.

Importance of Audience Insights:

Understanding user behaviour through these tools helps tailor the A/B testing efforts to meet your audience’s needs and preferences, leading to more successful outcomes. For instance, if heatmaps show that users frequently abandon a long signup form, testing shorter versions or different layouts of the form could reduce bounce rates and increase conversions.

These preparatory steps—setting objectives, identifying key elements, and understanding the audience—create a strong foundation for successful A/B testing. By meticulously planning and aligning tests with strategic business goals, companies can ensure that their efforts lead to valuable, actionable insights that drive growth and improvement.

Designing A/B Tests

Developing Hypotheses

A well-crafted hypothesis is the cornerstone of any successful A/B test. It sets the stage for what you’re testing and predicts the outcome. A strong hypothesis is based on data-driven insights and clearly states what change is being tested, why, and its expected impact.

Guidance on Formulating Hypotheses:

  • Start with Data: Analyze your current data to identify trends and areas for improvement. For instance, if data shows a high exit rate from a checkout page, you might hypothesise that simplifying the page could retain more visitors.
  • Be Specific: A hypothesis should clearly state the expected change. For example, “Changing the CTA button from green to red will increase click-through rates by 5%,” rather than “Changing the CTA button colour will make it more noticeable.”
  • Link to Business Goals: Ensure the hypothesis aligns with broader business objectives, enhancing its relevance and priority.

Examples:

  • Good Hypothesis: “Adding customer testimonials to the product page will increase conversions by 10% because trust signals boost buyer confidence.”
  • Poor Hypothesis: “Changing things on the product page will improve it.”

Creating Variations

Once you have a solid hypothesis, the next step is to create the variations that will be tested. This involves tweaking one or more elements on your webpage based on your hypothesis.

Instructions for Creating Variations:

  • Single Variable at a Time: To understand which changes affect outcomes, modify only one variable per test. If testing a CTA button, change the colour or the text, but not both simultaneously.
  • Use Design Tools: Utilise web design tools to create these variations. Ensure that the changes remain true to your brand’s style and are visually appealing.
  • Preview and Test Internally: Before going live, preview variations internally to catch potential issues.

Choosing the Right Tools

Selecting the appropriate tools is crucial for effectively running A/B tests. The right tool can simplify testing, provide accurate data, and help interpret results effectively.

By following these steps—developing a strong hypothesis, creating thoughtful variations, and choosing the right tools—you can design effective A/B tests that lead to meaningful insights and significant improvements in website performance. This strategic approach ensures that each test is set up for success, contributing to better user experiences and increased business outcomes.

Implementing A/B Tests

Effective implementation of A/B tests is critical to achieving reliable results that can inform strategic decisions. 

Test Setup and Configuration

Setting up an A/B test properly ensures that the data you collect is accurate and that the test runs smoothly without affecting the user experience negatively.

Step-by-step Guide on Setting Up Tests:

  • Define Your Control and Variation: Start by identifying your control version (the current version) and the variation that includes the changes based on your hypothesis.
  • Choose the Type of Test: Decide whether you need a simple A/B test or a more complex split URL test. Split URL testing is useful when major changes are tested, as it redirects visitors to a different URL.
  • Set Up the Test in Your Chosen Tool: Using a platform such as Google Optimize, create your experiment by setting up the control and variations. Input the URLs for each and define the percentage of traffic directed to each version.
  • Implement Tracking: Ensure that your analytics tracking is correctly set up to measure results from each test version. This may involve configuring goals in Google Analytics or custom-tracking events.

Interactive Checklists or Setup Diagrams:

A checklist can help ensure all steps are followed, such as:

  • Define control and variation
  • Choose testing type
  • Configure the test in the tool
  • Set traffic allocation
  • Implement tracking codes
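The traffic-allocation step in the checklist above can be sketched in a few lines. This is an illustrative sketch only, not the mechanism of any particular testing platform: it hashes a user ID so that a returning visitor always lands in the same bucket, rather than being re-randomised on every visit. The function name and split value are our own for illustration.

```python
import hashlib

def assign_variant(user_id: str, traffic_split: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'variation'.

    Hashing the user ID (rather than randomising per visit) ensures a
    returning visitor always sees the same version of the page.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    # Map the 128-bit hash to a number in [0, 1)
    bucket = int(digest, 16) / 16**32
    return "control" if bucket < traffic_split else "variation"

# The same user always lands in the same bucket
assert assign_variant("user-42") == assign_variant("user-42")
```

In practice your testing tool handles this for you, but the principle – consistent, deterministic assignment at a fixed traffic split – is the same.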

Best Practices for Running Tests

Once your test is live, managing it effectively is key to obtaining useful data.

Tips for Managing and Monitoring A/B Tests:

  • Monitor Performance Regularly: Check the performance of your test at regular intervals to ensure there are no unexpected issues.
  • Allow Sufficient Run Time: Let the test run long enough to reach statistical significance – usually until the results stabilise and you have enough data to make a confident decision.
  • Be Prepared to Iterate: Depending on the results, be prepared to make further adjustments and rerun the test. Optimisation is an ongoing process.

Visual Dos and Don’ts Infographics

To help visualise best practices, create an infographic that highlights the dos and don’ts:

  • Do: Test one change at a time, ensure tests are statistically significant, and use clear success metrics.
  • Don’t: Change multiple elements at once, end tests prematurely, or ignore variations in user behaviour.

Statistical Significance and Sample Size

Understanding these concepts is crucial for interpreting A/B test results accurately.

Explanation of Key Statistical Concepts:

  • Statistical Significance: This measures whether the outcome of your test is likely due to the changes made rather than random chance. Typically, a result is considered statistically significant if the probability of the result occurring by chance is less than 5%.
  • Sample Size: The number of users you need in your test to reliably detect a difference between versions. A sample size that is too small may not accurately reflect the broader audience.

Graphs and Calculators:

  • Provide a graph showing how increasing sample size reduces the margin of error, enhancing confidence in the results.
  • Link to or embed a sample size calculator, allowing users to input their data (like baseline conversion rate and expected improvement) to determine how long to run their tests.
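For readers without a calculator to hand, the standard two-proportion sample size formula can be sketched directly. This assumes a two-sided test at 95% confidence and 80% power, which are common defaults:

```python
import math

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.80):
    """Approximate users needed per variant for a two-proportion test.

    baseline_rate: control conversion rate, e.g. 0.05 for 5%
    min_detectable_effect: absolute lift to detect, e.g. 0.01 for +1 point
    """
    z_alpha = 1.96  # two-sided z for alpha = 0.05
    z_beta = 0.84   # z for 80% power
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / min_detectable_effect ** 2)

# Detecting a 1-point lift on a 5% baseline needs roughly 8,000 users per variant
n = sample_size_per_variant(0.05, 0.01)
```

Note how halving the effect you want to detect roughly quadruples the required sample, which is why small expected improvements demand long-running tests.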

By following these guidelines and utilising the right tools and methodologies, you can implement A/B tests that provide valuable insights into user behaviour and preferences, enabling data-driven decision-making that boosts user engagement and business performance.

Analyzing Test Results

Once your A/B test has concluded, the next crucial step is analyzing the results. This phase is about interpreting the data collected, understanding the statistical relevance of the findings, and making informed decisions based on the test outcomes.

Interpreting Data

Interpreting the results of an A/B test involves more than just identifying which variation performed better. It requires a detailed analysis to understand why certain outcomes occurred and how they can inform future business decisions.

How to Read Test Results:

  • Conversion Rates: Compare the conversion rates of each variation against the control. Look not only at which had the highest rate but also consider the context of the changes made.
  • Segmented Results: Break down the data by different demographics, device types, or user behaviours to see if there are significant differences in how certain groups reacted to the variations.
  • Consistency Over Time: Evaluate how the results varied over the course of the test to identify any patterns that could influence your interpretation, such as weekend vs. weekday performance.
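As an illustration of the segmented read described above, here is a sketch that breaks conversion rates down by segment and variant from raw event records. The record layout and segment names are assumptions for the example:

```python
from collections import defaultdict

# Hypothetical raw results: (segment, variant, converted)
events = [
    ("mobile", "control", True), ("mobile", "control", False),
    ("mobile", "variation", True), ("mobile", "variation", True),
    ("desktop", "control", True), ("desktop", "variation", False),
]

def segmented_rates(events):
    """Return {(segment, variant): conversion_rate} from raw events."""
    counts = defaultdict(lambda: [0, 0])  # [conversions, visitors]
    for segment, variant, converted in events:
        counts[(segment, variant)][1] += 1
        if converted:
            counts[(segment, variant)][0] += 1
    return {key: conv / total for key, (conv, total) in counts.items()}
```

A variation that wins overall can still lose badly in one segment; this kind of breakdown is how you catch that before rolling the change out to everyone.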

Statistical Analysis

A deeper dive into the statistical analysis will confirm whether the observed differences in your A/B test results are statistically significant and not just due to random chance.

Understanding Statistical Significance and Other Metrics:

  • P-value: This metric helps determine the significance of your results. A p-value less than 0.05 typically indicates that the differences are statistically significant.
  • Confidence Interval: This range estimates where the true conversion rate lies with a certain level of confidence, usually 95%.
  • Lift: This is the percentage increase or decrease in the performance metric you are testing for, calculated from the baseline of the control group.
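All three metrics can be computed from the raw counts with a standard two-proportion z-test. A sketch using only the standard library (a real analysis would typically use a statistics package, but the arithmetic is the same):

```python
import math

def analyse(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: p-value, 95% CI on the difference, and lift.

    conv_a/n_a are conversions and visitors for the control,
    conv_b/n_b for the variation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled standard error under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pool
    # Two-sided p-value via the normal CDF (erf form)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # Unpooled standard error for the 95% confidence interval
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (p_b - p_a - 1.96 * se, p_b - p_a + 1.96 * se)
    lift = (p_b - p_a) / p_a * 100  # percentage lift over the control
    return p_value, ci, lift
```

For example, 100 conversions in 1,000 control visitors against 150 in 1,000 variation visitors gives a 50% lift with a p-value well under 0.05, so the difference would be called statistically significant.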

Making Informed Decisions

With the data interpreted and the statistical analysis complete, the final step is to decide how to act on the insights gained from your A/B test.

Guidelines on How to Act on Test Results:

  • Implement Winning Variations: If one variation significantly outperforms the control, consider implementing it across the site.
  • Further Testing: If results are inconclusive or the lift is minimal, running additional tests with adjusted variables or targeting a different user segment may be beneficial.
  • Scale or Pivot: Depending on the impact of the changes tested, decide whether to scale these changes up to affect more of your business or to pivot and try a different approach entirely.

Decision Trees or Flowcharts:

Create a decision tree or flowchart that outlines the decision-making process following an A/B test. This could include nodes that consider whether the test was statistically significant, whether the results align with business goals, and what follow-up actions (like further testing, full implementation, or abandonment of the change) should be taken based on different scenarios.

By thoroughly analyzing A/B test results through data interpretation, statistical analysis, and strategic decision-making, organisations can ensure that they are making informed decisions that will enhance their website’s user experience and improve overall business performance. This data-driven approach minimises risks associated with website changes and ensures that resources are invested in modifications that provide real value.

Beyond Basic A/B Testing

Once you have mastered basic A/B testing, you can explore more sophisticated techniques that offer deeper insights and potentially greater improvements in user experience and conversion rates. This section delves into advanced testing strategies and the importance of ongoing optimisation through iterative testing.

Advanced Testing Techniques

Advanced testing methods allow you to explore more complex hypotheses about user behaviour and website performance, often involving multiple variables or entire user journeys.

Multivariate Testing (MVT):

  • Overview: Unlike A/B testing, which tests one variable at a time, multivariate testing allows you to test multiple variables simultaneously to see which combination produces the best outcome.
  • Application: For example, you might test different versions of an image, headline, and button on a landing page all at once to determine the best combination of elements.
  • Benefits: This approach can significantly speed up the testing process and is particularly useful for optimising pages with multiple elements of interest.
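For the landing-page example above, the set of combinations a full-factorial multivariate test must cover can be enumerated directly. The element variants here are hypothetical:

```python
from itertools import product

# Hypothetical variants for three page elements
headlines = ["Save time today", "Work smarter"]
images = ["hero-a.png", "hero-b.png"]
buttons = ["Start free trial", "Get started"]

# A full-factorial MVT tests every combination of the three elements
combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 2 x 2 x 2 = 8 combinations
```

This also shows MVT's main cost: combinations multiply quickly, so each one receives a smaller share of traffic, and reaching significance takes correspondingly longer.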

Multipage Testing:

  • Overview: Also known as “funnel testing,” this technique involves testing variations across multiple pages that make up a user journey or funnel.
  • Application: You might test variations of both the product and checkout pages to see which combination leads to higher conversion rates.
  • Benefits: Multipage testing helps ensure consistency in messaging and user experience across multiple stages of the user journey, which can improve overall conversion rates.

Continuous Improvement and Iteration

The goal of A/B testing is not just to find a winning variation but to continually refine and enhance your website based on user feedback and behaviour.

Importance of Ongoing Optimisation:

  • Iterative Process: Optimisation is an ongoing process that involves continually testing and refining website elements based on user data and business objectives.
  • Learning from Each Test: Each test provides valuable insights, whether or not the variation wins. These insights can inform future tests, leading to better user experiences and higher conversion rates.

Iterative Testing Strategies:

  • Start with Broad Tests: Begin with broader tests to identify which elements have the most significant impact on user behaviour.
  • Refine and Repeat: Use the insights gained to refine your hypotheses and test more specific variations.
  • Expand Testing: Once you’ve optimised major elements, expand your testing to less prominent components that could still affect user experience and conversions.

Timelines and Case Studies:

  • Timeline Example: Show a timeline that outlines an annual testing strategy, with phases for broad testing, refinement, and expansion.
  • Case Study: Present a case study of a company that implemented continuous testing. Highlight how iterative testing helped them achieve a significant, sustained increase in conversion rates over time. For instance, a tech company could use iterative testing to fine-tune its sign-up process, resulting in a 50% increase in user registrations over a year.

By advancing beyond basic A/B testing and embracing more complex and continuous testing strategies, companies can optimise their websites more effectively and foster a culture of data-driven decision-making. This approach leads to improvements that align with user preferences and business goals, ensuring sustained growth and a competitive edge in the market.

Common Pitfalls and How to Avoid Them

A/B testing is a powerful tool for website optimisation, but common pitfalls can undermine its effectiveness. This section explores typical errors that occur during the testing process and provides strategies to ensure the validity and reliability of your tests.

List of Common Mistakes

Identifying Errors and Solutions:

  • Testing Too Many Changes at Once: This makes it difficult to determine which change affected the outcome.
    • Solution: Focus on testing one change at a time or use multivariate testing for simultaneous changes and analyze the impact of each element separately.
  • Not Allowing Enough Time for the Test to Run: Ending a test too soon can lead to conclusions that aren’t statistically significant.
    • Solution: Ensure each test runs long enough to collect adequate data, reaching statistical significance before making decisions.
  • Testing Without a Clear Hypothesis: Starting tests without a clear, data-backed hypothesis leads to unclear outcomes.
    • Solution: Develop a precise hypothesis for each test based on thorough data analysis and clear business objectives.
  • Ignoring User Segmentation: Different segments may react differently to the same change.
    • Solution: Segment your audience and analyze how different groups respond to each variation.

Visuals of Pitfalls vs. Best Practices:

  • Create side-by-side infographics showing examples of these mistakes versus best practices. For example, visually compare the outcome of a test that changed multiple elements simultaneously against one that tested a single change.

Ensuring Validity and Reliability

Maintaining the integrity of your A/B tests is crucial for obtaining reliable, actionable insights.

Tips on Maintaining Test Integrity:

  • Use Proper Randomisation: Ensure that the distribution of users between the control and test groups is random to avoid selection bias.
    • Tool Tip: Utilise tools that automatically handle randomisation to avoid manual errors.
  • Control External Factors: Holidays, marketing campaigns, or significant news events can skew test results.
    • Solution: Monitor external factors, adjust the testing period, or filter the data to account for anomalies.
  • Ensure Consistent Test Conditions: Changes in the testing environment or platform during the test can invalidate results.
    • Solution: Keep the testing conditions consistent throughout the test period and verify configuration settings regularly.
  • Validate Test Setup Before Going Live: A misconfigured test can lead to incorrect data interpretation.
    • Solution: Run a smaller pilot test or use a checklist to ensure every test element is correctly set up before full deployment.
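On the randomisation point above, one common way tools implement stable random assignment is deterministic hashing: the user ID and experiment name are hashed so each visitor lands in a uniformly distributed but repeatable bucket. A minimal sketch of the idea, with hypothetical identifiers:

```python
import hashlib

def bucket(user_id: str, experiment: str) -> str:
    """Stable, uniform assignment of a user to 'control' or 'variation'.

    Hashing the experiment name together with the user ID means the same
    user always sees the same version, while different experiments are
    randomised independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # First 8 hex digits -> uniform integer; parity gives a 50/50 split
    return "control" if int(digest[:8], 16) % 2 == 0 else "variation"

# The same user gets the same version on every visit
assert bucket("user-42", "cta-test") == bucket("user-42", "cta-test")
```

Because assignment depends only on the inputs, there is no state to store and no risk of a returning visitor flipping between versions mid-test.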

Troubleshooting Guide with Graphic Aids:

  • Develop a troubleshooting guide that includes common scenarios where A/B test integrity might be compromised. Include flowcharts or decision trees that help identify and resolve issues such as data discrepancies, unexpected user behaviour, or sudden changes in conversion rates.
  • Example Graphic Aid: A flowchart that helps determine actions when test results seem inconsistent with historical data or benchmarks. Steps might include checking configuration settings, reviewing segmentation criteria, or extending the test duration.

By understanding and avoiding these common pitfalls and maintaining rigorous standards for validity and reliability, organisations can ensure that their A/B testing efforts lead to meaningful improvements and robust data-driven decisions. This approach not only enhances the effectiveness of current tests but also builds a foundation for future testing strategies that are even more successful.

A/B Testing Case Studies

A/B testing has proven to be a critical tool for businesses aiming to optimise their online presence based on data-driven decisions. Here, we delve into some specific real-life case studies from different industries, highlighting the successes and lessons from A/B testing.

Success Stories

E-commerce: Humana

  • Overview: Humana, a well-known health insurance company, conducted an A/B test to increase click-through rates on one of their primary campaign landing pages. They tested the simplicity and message of their banner and CTA.
  • Changes Tested: The original banner had a lot of information and a standard “Shop Medicare Plans” button. The test variation simplified the message and changed the button text to “Get Started Now.”
  • Results: The variation led to a 433% increase in click-through rates to the insurance plans page.

B2B: SAP

  • Overview: SAP, a leader in enterprise application software, tested the copy of their CTA on a product page. The hypothesis was that a more action-oriented CTA would increase engagement.
  • Changes Tested: The original CTA read “Learn more,” which was changed to “See it in action” in the variation.
  • Results: This simple change in wording resulted in a 32% increase in clicks.


Digital Media: The Guardian

  • Overview: The Guardian tested different wordings for their support and donation CTAs to determine which would more effectively encourage readers to contribute financially.
  • Results: The test revealed that a direct ask for contributions using emotive language resulted in a higher click-through rate than a more generic request for support.
  • Lesson: This A/B test highlighted the importance of emotional resonance in messaging, especially for non-profit or cause-based initiatives.

Travel Industry: Expedia

  • Overview: Expedia conducted A/B testing to optimise hotel booking conversions on their site by altering the display of discount offers.
  • Changes Tested: They tested the visibility and presentation of savings messages (e.g., showing a percentage off versus a specific dollar amount saved).
  • Results: Showing the amount of money saved led to a slight decrease in conversion rates, contrary to expectations.
  • Lesson: The test underscored the potential for “over-optimising” to backfire and the need to balance how offers are presented to avoid overwhelming customers.

Final Checklist of A/B Testing Steps

To help ensure your A/B testing journey is structured and effective, here is a visual checklist encapsulating the process:

  1. Define Objectives: Clearly state what you aim to achieve.
  2. Formulate Hypotheses: Base your assumptions on data and prior insights.
  3. Select the Testing Tool: Choose a platform that suits your scale and complexity needs.
  4. Design the Test: Create variations based precisely on your hypotheses.
  5. Run the Test: Ensure the test is long enough to gather meaningful data.
  6. Analyze Results: Use statistical analysis to interpret the outcomes.
  7. Implement Changes: Apply successful variations or further refine and test.
  8. Repeat: Use insights gained to continuously improve further testing.

Regardless of the outcome, every test is a step forward in understanding your users better and refining your digital offerings to meet their needs more effectively. The journey of optimisation is continuous, and each effort builds upon the last, opening new doors to innovation and growth.

Harness the power of A/B testing to start making informed decisions that propel your business forward. Your next breakthrough could be just one test away.