At a time when there is concern that news outlets are feeding coronavirus panic and confusion, it may have been easy to miss some of the more positive news stories emerging in the last few weeks.

Chief among them is the impact that digital technology has had across Asia, as parts of China in particular have gone into lockdown, and the implications of this.

Across China, as The Economist reported earlier this week, subscriptions to digital health services have increased exponentially – a shift in consumer behaviour that previously had been expected to take five whole years. Similarly, we have seen reports that mobile, social media and streaming services are experiencing a strong uptick in usage whilst people are stuck indoors. Schooling has also moved online, with students taking classes through grade-specific TV channels, and the internet.

Above all, we’ve seen people using digital resources to overcome the loneliness of isolation. Gyms are offering sessions via WeChat, clubs are hosting their nights online, and gamers are congregating to play together in increasing numbers, with Tencent’s Honor of Kings game reaching a peak in average daily users.

So will there be any digital silver linings for the market research industry?

Non face-to-face methodologies are hardly new in our industry, but a shift towards online – particularly when it comes to qualitative research – now feels unavoidable. Where once a traditional focus group or face-to-face interviews may have sufficed, we’ll undoubtedly see digital techniques coming into play more and more.

But herein lies a word of caution: not all digital techniques are created equal, and not every solution suits every project. The most appropriate methodology will always depend on a study’s objectives.

There are plenty of digital options available to researchers: online focus groups, Skype depth interviews, mobile diaries, and online communities, to name but a few. But how do you work out which methodology is best suited to your study?

First of all, it’s important to start your thinking with your objectives, not your methodology. Just because you have used focus groups or face-to-face depth interviews in the past doesn’t necessarily mean an online focus group or Skype interview is the best way to meet your objectives using digital tools. Start by asking:

  • Are you looking for breadth or depth of insight?
  • Who are you looking to influence with your findings? What kinds of assets are most likely to have impact and support real change across your organisation? How quickly do your stakeholders need access to your insights?
  • How important is it to observe discussion and interaction between respondents – are you looking to compare different points of view?

How you answer these questions will heavily impact the methodology that’s right for you.

For instance, say you are conducting a concept or product test. Typically, you’d use a focus group setting so your product and design team could observe respondent reactions, and make on-the-spot changes to your product.

If you’re looking for breadth, speedy insights, and discussion between respondents to understand how views differ, you might automatically think that an online focus group session, with respondents and stakeholders logging in from separate locations, is your answer. However, while online focus group technology mimics the experience of a focus group setting, in practice it is much harder for respondents to communicate with anyone other than the moderator – you’re unlikely to meet your ‘discussion between respondents’ objective.

Instead, an online community would allow you to meet all three of your objectives, and then some. The key difference versus an online focus group is your ability to nurture and observe conversations between respondents in the community in a much more natural environment.

You can even use the platform to segment different audiences, or keep the community broad to observe discussions across the whole group. Stakeholders are able to log on at any time they choose to observe conversations and suggest additional questions to the moderators. And say you have one or two topics you’d like to explore in more depth? You can always set up private questions to conduct one-to-one research as part of the community. And when it comes to final assets, online communities are unrivalled for video and photo content that can be used to help land insights with your stakeholders.

If, however, observing interaction between respondents really isn’t a key necessity, and you’re looking for depth of insight, you may want to consider Skype depth interviews instead of your traditional focus group. Digital depth interviews work beautifully for concept and product testing as part of a staged programme of research, especially when you combine multiple touch-points. You could, for example, follow an initial Skype interview with a selfie-style filmed product review in-home, to really dig into consumer views.

Ultimately, while all of these methodologies have been around for some time, it’s likely that a reduction in face-to-face research will see us being far more creative with the digital options available to us. It will be fascinating to see whether or not these changes result in a long-term shift towards digital methodologies. Back in 2014, during London’s tube strikes, commuters were forced to find alternative routes to travel around the city. Following the strikes, Transport for London reported that one in 20 commuters actually stuck with the new route they’d discovered. Will the research industry see a similar permanent shift? Time will tell.

Kadence has a wealth of experience in using digital research methodologies to help answer critical questions for brands and businesses. If you’re looking for support to help you find the best approach to meet your business objectives, please get in touch.  

Our kids’ media experts Bianca Abulafia and Sarah Serbun shared their top tips at Qual 360 on how to conduct qual research with kids and the cultural considerations to bear in mind in each market.

As Greenbook endeavours to expand its presence within Asia, Kadence International identified with its desire to spread innovative market research practices across the world, for the betterment of the industry. With that in mind, Kadence International stepped up and became Title Sponsor for its third IIeX Conference in December 2019.

The two-day conference brought together clients and agencies from Thailand and across the region, and the Kadence booth was at the center of the ‘buzzworthy’ interactions and conversations: discussions of the interesting methodologies being carried out, and of the possibilities and potential the future can hold when clients and agencies achieve perfect synergy.

Kadence’s presentation at the conference was proof of that point: the agency worked with Bloomberg on a project, the first of its kind in Asia, marrying neuro-centric measurements of respondents with traditional quantitative surveys to understand consumer reactions to the same ads placed on different platforms. Results of the study will be released in the public domain in Q1 2020, but the study proved how traditional research methodologies can still play a complementary role alongside evolving technologies, enhancing outcomes and strengthening the insights gained.

The presentation was part of a larger series of sessions covering other interesting subject matter: from Google’s sharing of what makes a fad a trend (or, when does a ‘thing’ actually become a THING) and the commercial potential brands can tap into when thinking about the urban phenomenon of loneliness, to why the over-60s are brands’ best bet for market growth in Asia and how visual communication partially explains chat platform Line’s success in Thailand. There was food for thought aplenty, and many topics sparked discussions during lunch and networking breaks.

Kadence also noticed three phenomena during the conference, which it believes are evidence that a larger trend is taking shape:

  1. Greater willingness to appreciate research from a multitude of angles – Google’s own study on the formation of trends highlights how data analytics, however advanced and wide in reach, can only explain part of the story
  2. Greater access to research respondents outside of traditional channels – on top of reaching out to online panels for respondents to complete online surveys, there’s increasing experimentation to access data from a brand’s own users (e.g. True Mobile in Thailand and its millions of subscribers), or new vendors that are using different platforms to offer agencies that reach (e.g. crowdsourcing, social media, etc.)
  3. Distinction vs. differentiation – precisely because of the plethora of new partners for both agencies and brands to work with (e.g. in the space of accessing respondents for studies), the ones that work well understand the classic marketing notion of ‘distinction’: what they offer may not be different to their competitors, but they are at least clear about what it is that allows them to stand out from the crowd

In summary, that larger trend is the notion of ‘connections, not alternatives’. This is at the heart of Kadence’s strategy for 2020: to raise the impact of research within the region through meaningful insights and business-relevant recommendations, Kadence will play a ‘matchmaking’ role, working with partners whose business is making sense of cutting-edge research technologies, and combining that work with traditional research in service of answering clients’ strategic questions. Kadence is certain the industry will benefit from the notion of ‘this-AND-that’, rather than ‘this-OR-that’.

Amy Lo explores her personal experiences growing up across two vastly different continents and how these have shaped her insight career. 

When I was 12, my Dad announced I was to leave my home in Taiwan to attend boarding school in England. The resulting 10 years were to shape me in a way neither he nor I could ever have imagined.

Growing up across two continents that are so vastly different in terms of culture, climate and consumption meant living in a state of perpetual adaptation, seeking ways to adjust to the environment around me, both at school and then back at home during school holidays. 

I think this constant need to adapt to my surroundings is the reason I first started to closely observe the people around me, their behavior, their motivations, the things that made them similar and the things that made them different.

Skip to the present and, as it turns out, my fascination with people, their stories, backgrounds and culture has influenced my choice of career. As a qualitative market researcher, it is my job to investigate the beliefs, perceptions and essential truths behind people’s behavior – and to establish patterns amongst them.

I love this career for the opportunity it gives me to gain insight into our respondents’ lives and, of course, to deliver Insight Worth Sharing to my clients.

There is also a lot of variety; during my first role as a Graduate Insight Executive in Taiwan, I spent time with a wide variety of respondents: from tech-savvy consumers helping to optimise a mobile-friendly home page for Yahoo, to new mums sharing all about their nappy usage. One weekend we would be speaking to HNWIs about luxury holidays and the following weekend, accompanying Chinese teenagers on their hunt for the perfect pair of jeans!

During a recent project in my current role at Kadence International in London, I found myself face to face with my two ‘home’ nations. The study, for a luxury technology brand, involved investigating some of London’s and Shanghai’s wealthiest individuals, and it uncovered vast and fascinating differences in the priorities, preferences and behaviours of the Chinese elite compared with their UK counterparts.

This study motivated me to understand more about today’s Asian consumer. How can brands adapt their approach to suit this vast and lucrative market? And, how can we as researchers select the best methodologies in order to gather the richest, most valuable insights?

In true millennial style, I started my investigations through my own social network. My friends from Asia were always posting in my feeds, reviewing the latest products they had tried. I observed a willingness to share allegiances to particular brands – not brand loyalty per se, simply that they are not afraid to share their opinions. Many of my female friends have their own blogs discussing their views on the latest trends in clothes and make-up, and my feed is regularly inundated with ‘outfit of the day’ posts with links that take you to web shops where you can make a quick purchase from the endorsed brand or seller.

Surprisingly for China, a country where censorship is widespread, opinions and voices on the Internet are loud and plentiful. Unlike Western countries, there is little trust in traditional media sources such as TV, press or radio. Instead, word of mouth is an increasingly powerful tool, as people use social media platforms to personally share information and opinions with friends and family. 

This trend has been identified by brands in China, who have made it their priority to create intelligent, comprehensive digital campaigns to facilitate the spread of their products or services. This is also why brands are carefully monitoring their e-reputation. Product reviews on the web have a growing influence on people’s decision making. Brands understand the need to nurture advocates within each and every social circle to build credibility and customer proximity. 

Back to my professional experience: working closely with a wide range of Chinese audiences both in Asia and in the UK, I have learnt that I most enjoy using methodologies that give me longer and closer contact with my audience, as these allow me to really get to know each person’s story, background and culture.

Market research online communities offer a highly effective way for UK researchers to gather insight from Chinese audiences. They are logistically simple (no working around time differences), methodologically effective (tapping into natural online behaviours to provide truthful engagement with our target audience) and financially efficient (no expensive flights and hotels!).

Chinese audiences can often be more comfortable providing their opinions via the Internet particularly with certain more sensitive or divisive topics where they can retain a sense of anonymity. With online research methods, there are fewer concerns about their voices or faces being identified – and therefore a greater willingness to share.

With over 700 million Internet users and a little shy of 600 million smartphone users in China (as of 2016), the future of online qual is extremely exciting for me. Mobile devices are the main mode of Internet access and instant messaging is the top online activity in China. Apps such as WeChat are used on a daily basis, just as you and I use WhatsApp to keep in touch with friends and family. WeChat has evolved from a pure instant messaging app to (quoting the FT in April 2016) an app that is a phone, messenger, video conference, ecommerce platform and gaming console, not to mention noodle delivery service, for a nation of people in love with their smartphones. 

Some companies are already using WeChat as a data collection tool for short quantitative surveys, tapping into its mass user base and taking full advantage of its ability to provide instant responses.

And given that the app is already in most people’s pockets, we can conduct many conventional qualitative methods through WeChat as well. We’re already gaining insights through both interaction and observation, from in-depth interviews to accompanied shopping to digital diary logging. It’s amazing: we’re able to follow the steps of Chinese respondents through the lenses of their smartphones, from the comfort of our chairs in London.

The casual nature, accessibility and users’ familiarity with WeChat helps encourage user interaction, engagement and participation, thereby improving our capability to obtain accurate and honest insights. 

The opportunity to use social media platforms for qualitative research is not completely unique to the Chinese market. We know some have been doing focus groups on WhatsApp, and some are using Facebook as a research tool. There is no reason why something similar cannot become a more prevalent research method in the West, provided we have a similar multifunctioning social media platform and the same abundance of users already familiar with the platform.

Personally, I find the possibility of conducting focus groups and in-depth interviews from my iPhone a very exciting prospect. With social media platforms such as WeChat, at the click of a button I’m in touch with a group of people 5,000 miles away, tapping into every aspect and every minute of their lives and uncovering trends through my very own device. I can do this whilst on the go and, when something I see on the street suddenly inspires me, I no longer have to wait until Monday. I can simply pop a question to my group and wait five seconds to see what they have to say.

Looking back, whilst my 12-year-old self may have resented my Dad’s decision to send me away from Taiwan to the UK, in hindsight, it was the best decision he ever made.

Imagine you’re a digital marketer for an online retailer specialising in fitness gear. You’ve just launched a new line of eco-friendly yoga mats, and you’re tasked with maximising sales through your website. You test two different product page versions to see which drives more purchases. 

Version A features a prominent “Limited Time Offer” banner at the top, while Version B includes a series of customer testimonials right beneath the product title. The results of this A/B test could significantly affect your sales figures and offer deeper insights into what motivates your customers to buy.

Such is the power of A/B testing, a method companies of all sizes use to make data-driven decisions that refine user experiences and improve conversion rates. 

A/B testing provides a data-driven solution to optimise website effectiveness without the guesswork. By comparing two versions of a page or element directly against each other, brands can see which changes produce positive outcomes and which ones do not, leading to better business results and a deeper understanding of customer behaviour.

Whether you’re looking to increase conversion rates, enhance user engagement, or drive more sales, effective A/B testing is the key to achieving your goals precisely and confidently.

A/B testing, or split testing, is a method in which two versions of a webpage or app are compared to determine which performs better. Imagine you’re at the helm of a ship; A/B testing gives you the navigational tools to steer more accurately toward your desired destination—increased sales, more sign-ups, or any other business goal. It involves showing the original version (A) and a modified version (B), where a single element may differ, such as the colour of a call-to-action button or the layout of a landing page, to similar visitors simultaneously. The version that outperforms the other in achieving a predetermined goal is then used moving forward.
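
To make the mechanics concrete, here is a minimal Python sketch of how visitors might be split between the two versions; the function and the per-visitor seeding are illustrative assumptions rather than the workings of any particular testing platform.

```python
import random

def assign_version(visitor_id, split=0.5):
    """Assign a visitor to version A or B, repeatably per visitor."""
    rng = random.Random(visitor_id)  # seeding on the ID keeps the assignment stable
    return "A" if rng.random() < split else "B"

# The same visitor always sees the same version on every page load:
print(assign_version("visitor-1001"))
print(assign_version("visitor-1002"))
```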

The Importance of A/B Testing and ROI

The compelling advantage of A/B testing is its direct contribution to enhancing business metrics and boosting return on investment (ROI). 

Online retailers frequently use A/B testing to optimise website leads and increase conversion rates. This includes split testing product pages and online advertisements, such as Google Shopping Ads. By A/B testing different product page layouts, retailers can identify a version that increases their sales, impacting annual revenue. Similarly, SaaS providers test and optimise their landing pages through A/B testing to find the version that increases user sign-ups, directly improving their bottom line.

A/B testing is less about guessing and more about evidence-based decision-making, ensuring every change to your interface is a strategic enhancement, not just a cosmetic tweak.

Preparing for A/B Testing

1. Setting Objectives

Before launching an A/B test, defining clear, measurable objectives is critical. These objectives should be specific, quantifiable, and aligned with broader business goals. Common goals include increasing conversion rates, reducing bounce rates, or boosting the average order value. The clarity of these objectives determines the test’s focus and, ultimately, its success.

2. Identifying Key Elements to Test

Choosing the right elements on your website for A/B testing can significantly affect the outcome. High-impact elements often include:

  • CTAs: Testing variations in the text, colour, or size of buttons to see which drives more clicks.
  • Layouts: Comparing different arrangements of elements on a page to determine which layout keeps visitors engaged longer.
  • Content: Tweaking headlines, product descriptions, or the length of informational content to optimise readability and conversion.
  • Images and Videos: Assessing different images or video styles to see which leads to higher engagement or sales.

3. Understanding Your Audience

Effective A/B testing requires a deep understanding of your target audience. Knowing who your users are, what they value, and how they interact with your website can guide what you test and how you interpret the data from those tests.

Data Analytics Snapshots:

Utilising tools like Google Analytics, heatmaps, or session recordings can provide insights into user behaviour. Heatmaps, for example, can show where users are most likely to click, how far they scroll, and which parts of your site draw the most attention. These tools can highlight areas of the site that are performing well or underperforming, guiding where to focus your testing efforts.

Importance of Audience Insights:

Understanding user behaviour through these tools helps tailor the A/B testing efforts to meet your audience’s needs and preferences, leading to more successful outcomes. For instance, if heatmaps show that users frequently abandon a long signup form, testing shorter versions or different layouts of the form could reduce bounce rates and increase conversions.

These preparatory steps—setting objectives, identifying key elements, and understanding the audience—create a strong foundation for successful A/B testing. By meticulously planning and aligning tests with strategic business goals, companies can ensure that their efforts lead to valuable, actionable insights that drive growth and improvement.

Designing A/B Tests

Developing Hypotheses

A well-crafted hypothesis is the cornerstone of any successful A/B test. It sets the stage for what you’re testing and predicts the outcome. A strong hypothesis is based on data-driven insights and clearly states what change is being tested, why, and its expected impact.

Guidance on Formulating Hypotheses:

  • Start with Data: Analyze your current data to identify trends and areas for improvement. For instance, if data shows a high exit rate from a checkout page, you might hypothesise that simplifying the page could retain more visitors.
  • Be Specific: A hypothesis should clearly state the expected change. For example, “Changing the CTA button from green to red will increase click-through rates by 5%,” rather than “Changing the CTA button colour will make it more noticeable.”
  • Link to Business Goals: Ensure the hypothesis aligns with broader business objectives, enhancing its relevance and priority.

Examples:

  • Good Hypothesis: “Adding customer testimonials to the product page will increase conversions by 10% because trust signals boost buyer confidence.”
  • Poor Hypothesis: “Changing things on the product page will improve it.”

Creating Variations

Once you have a solid hypothesis, the next step is to create the variations that will be tested. This involves tweaking one or more elements on your webpage based on your hypothesis.

Instructions for Creating Variations:

  • Single Variable at a Time: To understand what changes affect outcomes, modify only one variable per test. If testing a CTA button, change the colour or the text, but not both simultaneously.
  • Use Design Tools: Utilise web design tools to create these variations. Ensure that the changes remain true to your brand’s style and are visually appealing.
  • Preview and Test Internally: Before going live, preview variations internally to catch potential issues.

Choosing the Right Tools

Selecting the appropriate tools is crucial for effectively running A/B tests. The right tool can simplify testing, provide accurate data, and help interpret results effectively.

By following these steps—developing a strong hypothesis, creating thoughtful variations, and choosing the right tools—you can design effective A/B tests that lead to meaningful insights and significant improvements in website performance. This strategic approach ensures that each test is set up for success, contributing to better user experiences and increased business outcomes.

Implementing A/B Tests

Effective implementation of A/B tests is critical to achieving reliable results that can inform strategic decisions. 

Test Setup and Configuration

Setting up an A/B test properly ensures that the data you collect is accurate and that the test runs smoothly without affecting the user experience negatively.

Step-by-step Guide on Setting Up Tests:

  • Define Your Control and Variation: Start by identifying your control version (the current version) and the variation that includes the changes based on your hypothesis.
  • Choose the Type of Test: Decide whether you need a simple A/B test or a more complex split URL test. Split URL testing is useful when major changes are tested, as it redirects visitors to a different URL.
  • Set Up the Test in Your Chosen Tool: Using a platform like Google Optimize, create your experiment by setting up the control and variations. Input the URLs for each and define the percentage of traffic directed to each version.
  • Implement Tracking: Ensure that your analytics tracking is correctly set up to measure results from each test version. This may involve configuring goals in Google Analytics or custom-tracking events.
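
To make the traffic-allocation step concrete, here is a short Python sketch; the configuration shape, experiment name, and URLs are invented for illustration, since real platforms handle this through their own setup screens.

```python
import random

# Hypothetical experiment definition: a control, one variation,
# and the share of traffic each should receive (illustrative values).
EXPERIMENT = {
    "name": "product-page-test",
    "variants": {
        "control":   {"url": "/product",    "traffic": 0.5},
        "variation": {"url": "/product-v2", "traffic": 0.5},
    },
}

def route_visitor(experiment):
    """Pick a variant according to the configured traffic allocation."""
    names = list(experiment["variants"])
    weights = [experiment["variants"][name]["traffic"] for name in names]
    chosen = random.choices(names, weights=weights, k=1)[0]
    return chosen, experiment["variants"][chosen]["url"]

variant, url = route_visitor(EXPERIMENT)
print(f"Serving {variant} at {url}")  # the exposure would also be logged for analysis
```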

Interactive Checklists or Setup Diagrams:

A checklist can help ensure all steps are followed, such as:

  • Define control and variation
  • Choose testing type
  • Configure the test in the tool
  • Set traffic allocation
  • Implement tracking codes

Best Practices for Running Tests

Once your test is live, managing it effectively is key to obtaining useful data.

Tips for Managing and Monitoring A/B Tests:

  • Monitor Performance Regularly: Check the performance of your test at regular intervals to ensure there are no unexpected issues.
  • Allow Sufficient Run Time: Let the test run long enough to reach statistical significance – usually until the results stabilise and you have enough data to make a confident decision.
  • Be Prepared to Iterate: Depending on the results, be prepared to make further adjustments and rerun the test. Optimisation is an ongoing process.

Visual Dos and Don’ts Infographics

To help visualise best practices, create an infographic that highlights the dos and don’ts:

  • Do: Test one change at a time, ensure tests are statistically significant, and use clear success metrics.
  • Don’t: Change multiple elements at once, end tests prematurely, or ignore variations in user behaviour.

Statistical Significance and Sample Size

Understanding these concepts is crucial for interpreting A/B test results accurately.

Explanation of Key Statistical Concepts:

  • Statistical Significance: This measures whether the outcome of your test is likely due to the changes made rather than random chance. Typically, a result is considered statistically significant if the probability of the result occurring by chance is less than 5%.
  • Sample Size: The number of users you need in your test to reliably detect a difference between versions. A sample size that is too small may not accurately reflect the broader audience.

Graphs and Calculators:

  • Provide a graph showing how increasing sample size reduces the margin of error, enhancing confidence in the results.
  • Link to or embed a sample size calculator, allowing users to input their data (like baseline conversion rate and expected improvement) to determine how long to run their tests.
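
If you’d rather compute this than use an embedded calculator, here is a rough Python sketch of the standard two-proportion sample-size approximation. The inputs mirror those mentioned above (baseline conversion rate and expected improvement); the 5% significance and 80% power defaults are conventional assumptions, not prescriptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, expected_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion test.

    baseline_rate: control conversion rate, e.g. 0.05 for 5%
    expected_lift: relative improvement to detect, e.g. 0.10 for +10%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 5% baseline rate and a hoped-for 10% relative lift:
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```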

By following these guidelines and utilising the right tools and methodologies, you can implement A/B tests that provide valuable insights into user behavior and preferences, enabling data-driven decision-making that boosts user engagement and business performance.

Analyzing Test Results

Once your A/B test has concluded, the next crucial step is analyzing the results. This phase is about interpreting the data collected, understanding the statistical relevance of the findings, and making informed decisions based on the test outcomes.

Interpreting Data

Interpreting the results of an A/B test involves more than just identifying which variation performed better. It requires a detailed analysis to understand why certain outcomes occurred and how they can inform future business decisions.

How to Read Test Results:

  • Conversion Rates: Compare the conversion rates of each variation against the control. Look not only at which had the highest rate but also consider the context of the changes made.
  • Segmented Results: Break down the data by different demographics, device types, or user behaviours to see if there are significant differences in how certain groups reacted to the variations (see the sketch after this list).
  • Consistency Over Time: Evaluate how the results varied over the course of the test to identify any patterns that could influence your interpretation, such as a weekend vs. weekday performance.
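
As an illustration of segmented analysis, here is a small pandas sketch on invented data; the column names and values are hypothetical.

```python
import pandas as pd

# Invented exposure-level results: one row per visitor
df = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "mobile", "desktop", "desktop", "mobile", "desktop"],
    "converted": [0, 1, 1, 0, 0, 1],
})

# Overall conversion rate by variant
print(df.groupby("variant")["converted"].mean())

# The same comparison broken down by device segment
print(df.groupby(["device", "variant"])["converted"].agg(["count", "mean"]))
```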

Statistical Analysis

A deeper dive into the statistical analysis will confirm whether the observed differences in your A/B test results are statistically significant and not just due to random chance.

Understanding Statistical Significance and Other Metrics:

  • P-value: This metric helps determine the significance of your results. A p-value less than 0.05 typically indicates that the differences are statistically significant.
  • Confidence Interval: This range estimates where the true conversion rate lies with a certain level of confidence, usually 95%.
  • Lift: This is the percentage increase or decrease in the performance metric you are testing for, calculated from the baseline of the control group.
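
As a rough illustration of these three metrics, here is a Python sketch of a two-proportion z-test on invented conversion counts; in practice your testing platform or a statistics library would report these figures for you.

```python
from math import sqrt
from statistics import NormalDist

def analyse_ab(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Two-proportion z-test: p-value, CI for the rate difference, and lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled standard error under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    p_value = 2 * (1 - NormalDist().cdf(abs((p_b - p_a) / se_pool)))
    # Unpooled standard error for the confidence interval
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z_crit = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    ci = (p_b - p_a - z_crit * se, p_b - p_a + z_crit * se)
    lift = (p_b - p_a) / p_a
    return p_value, ci, lift

# Invented counts: 500/10,000 control conversions vs 580/10,000 in the variation
p, ci, lift = analyse_ab(500, 10_000, 580, 10_000)
print(f"p-value={p:.3f}, 95% CI=({ci[0]:.4f}, {ci[1]:.4f}), lift={lift:.1%}")
```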

Making Informed Decisions

With the data interpreted and the statistical analysis complete, the final step is to decide how to act on the insights gained from your A/B test.

Guidelines on How to Act on Test Results:

  • Implement Winning Variations: If one variation significantly outperforms the control, consider implementing it across the site.
  • Further Testing: If results are inconclusive or the lift is minimal, running additional tests with adjusted variables or targeting a different user segment may be beneficial.
  • Scale or Pivot: Depending on the impact of the changes tested, decide whether to scale these changes up to affect more of your business or to pivot and try a different approach entirely.

Decision Trees or Flowcharts:

Create a decision tree or flowchart that outlines the decision-making process following an A/B test. This could include nodes that consider whether the test was statistically significant, whether the results align with business goals, and what follow-up actions (like further testing, full implementation, or abandonment of the change) should be taken based on different scenarios.
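
As a sketch of what such a flowchart might encode, here is a toy Python decision rule; the conditions mirror the nodes described above, and the 2% lift threshold is purely illustrative.

```python
def next_step(significant, aligns_with_goals, lift):
    """Toy decision rule mirroring the flowchart described above."""
    if not significant:
        return "extend the test or re-run with a larger sample"
    if not aligns_with_goals:
        return "abandon the change despite the statistical win"
    if lift < 0.02:  # illustrative threshold: a small lift may not justify rollout
        return "run a follow-up test on a refined variation"
    return "implement the winning variation site-wide"

print(next_step(significant=True, aligns_with_goals=True, lift=0.16))
```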

By thoroughly analyzing A/B test results through data interpretation, statistical analysis, and strategic decision-making, organisations can ensure that they are making informed decisions that will enhance their website’s user experience and improve overall business performance. This data-driven approach minimises risks associated with website changes and ensures that resources are invested in modifications that provide real value.

Beyond Basic A/B Testing

Once you have mastered basic A/B testing, you can explore more sophisticated techniques that offer deeper insights and potentially greater improvements in user experience and conversion rates. This section delves into advanced testing strategies and the importance of ongoing optimisation through iterative testing.

Advanced Testing Techniques

Advanced testing methods allow you to explore more complex hypotheses about user behaviour and website performance, often involving multiple variables or entire user journeys.

Multivariate Testing (MVT):

  • Overview: Unlike A/B testing, which tests one variable at a time, multivariate testing allows you to test multiple variables simultaneously to see which combination produces the best outcome.
  • Application: For example, you might test different versions of an image, headline, and button on a landing page all at once to determine the best combination of elements.
  • Benefits: This approach can significantly speed up the testing process and is particularly useful for optimising pages with multiple elements of interest.
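
To see what a full-factorial multivariate test actually involves, it helps to enumerate the combinations; the page elements below are invented. Note that the variant count grows multiplicatively, which is why MVT typically needs more traffic than a simple A/B test.

```python
from itertools import product

# Invented candidate versions for three page elements
images    = ["hero-photo", "lifestyle-photo"]
headlines = ["Save 20% today", "Eco-friendly yoga mats"]
buttons   = ["Buy now", "Get started"]

# A full-factorial multivariate test serves every combination
combinations = list(product(images, headlines, buttons))
print(len(combinations))  # 2 x 2 x 2 = 8 variants
for combo in combinations:
    print(combo)
```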

Multipage Testing:

  • Overview: Also known as “funnel testing,” this technique involves testing variations across multiple pages that make up a user journey or funnel.
  • Application: You might test variations of both the product and checkout pages to see which combination leads to higher conversion rates.
  • Benefits: Multipage testing helps ensure consistency in messaging and user experience across multiple stages of the user journey, which can improve overall conversion rates.

Continuous Improvement and Iteration

The goal of A/B testing is not just to find a winning variation but to continually refine and enhance your website based on user feedback and behaviour.

Importance of Ongoing Optimisation:

  • Iterative Process: Optimisation is an ongoing process that involves continually testing and refining website elements based on user data and business objectives.
  • Learning from Each Test: Each test provides valuable insights, whether or not a variation wins. These insights can inform future tests, leading to better user experiences and higher conversion rates.

Iterative Testing Strategies:

  • Start with Broad Tests: Begin with broader tests to identify which elements have the most significant impact on user behaviour.
  • Refine and Repeat: Use the insights gained to refine your hypotheses and test more specific variations.
  • Expand Testing: Once you’ve optimised major elements, expand your testing to less prominent components that could still affect user experience and conversions.

Timelines and Case Studies:

  • Timeline Example: Show a timeline that outlines an annual testing strategy, with phases for broad testing, refinement, and expansion.
  • Case Study: Present a case study of a company that implemented continuous testing. Highlight how iterative testing helped them achieve a significant, sustained increase in conversion rates over time. For instance, a tech company could use iterative testing to fine-tune its sign-up process, resulting in a 50% increase in user registrations over a year.

By advancing beyond basic A/B testing and embracing more complex and continuous testing strategies, companies can optimise their websites more effectively and foster a culture of data-driven decision-making. This approach leads to improvements that align with user preferences and business goals, ensuring sustained growth and a competitive edge in the market.

Common Pitfalls and How to Avoid Them

A/B testing is a powerful tool for website optimisation, but common pitfalls can undermine its effectiveness. This section explores typical errors that occur during the testing process and provides strategies to ensure the validity and reliability of your tests.

List of Common Mistakes

Identifying Errors and Solutions:

  • Testing Too Many Changes at Once: This makes it difficult to determine which change affected the outcome.
    • Solution: Focus on testing one change at a time or use multivariate testing for simultaneous changes and analyze the impact of each element separately.
  • Not Allowing Enough Time for the Test to Run: Ending a test too soon can lead to conclusions that aren’t statistically significant.
    • Solution: Ensure each test runs long enough to collect adequate data, reaching statistical significance before making decisions.
  • Testing Without a Clear Hypothesis: Starting tests without a clear, data-backed hypothesis leads to unclear outcomes.
    • Solution: Develop a precise hypothesis for each test based on thorough data analysis and clear business objectives.
  • Ignoring User Segmentation: Different segments may react differently to the same change.
    • Solution: Segment your audience and analyze how different groups respond to each variation.

Visuals of Pitfalls vs. Best Practices:

  • Create side-by-side infographics showing examples of these mistakes versus best practices. For example, visually compare the outcome of a test that changed multiple elements simultaneously against one that tested a single change.

Ensuring Validity and Reliability

Maintaining the integrity of your A/B tests is crucial for obtaining reliable, actionable insights.

Tips on Maintaining Test Integrity:

  • Use Proper Randomisation: Ensure that the distribution of users between the control and test groups is random to avoid selection bias.
    • Tool Tip: Utilise tools that automatically handle randomisation to avoid manual errors (a minimal hash-based sketch follows this list).
  • Control External Factors: Holidays, marketing campaigns, or significant news events can skew test results.
    • Solution: Monitor external factors, adjust the testing period, or filter the data to account for anomalies.
  • Ensure Consistent Test Conditions: Changes in the testing environment or platform during the test can invalidate results.
    • Solution: Keep the testing conditions consistent throughout the test period and verify configuration settings regularly.
  • Validate Test Setup Before Going Live: A misconfigured test can lead to incorrect data interpretation.
    • Solution: Run a smaller pilot test or use a checklist to ensure every test element is correctly set up before full deployment.

Troubleshooting Guide with Graphic Aids:

  • Develop a troubleshooting guide that includes common scenarios where A/B test integrity might be compromised. Include flowcharts or decision trees that help identify and resolve issues such as data discrepancies, unexpected user behaviour, or sudden changes in conversion rates.
  • Example Graphic Aid: A flowchart that helps determine actions when test results seem inconsistent with historical data or benchmarks. Steps might include checking configuration settings, reviewing segmentation criteria, or extending the test duration.

By understanding and avoiding these common pitfalls and maintaining rigorous standards for validity and reliability, organisations can ensure that their A/B testing efforts lead to meaningful improvements and robust data-driven decisions. This approach not only enhances the effectiveness of current tests but also builds a foundation for future testing strategies that are even more successful.

A/B Testing Case Studies

A/B testing has proven to be a critical tool for businesses aiming to optimise their online presence based on data-driven decisions. Here, we delve into some specific real-life case studies from different industries, highlighting the successes and lessons from A/B testing.

Success Stories

E-commerce: Humana

  • Overview: Humana, a well-known health insurance company, conducted an A/B test to increase click-through rates on one of their primary campaign landing pages. They tested the simplicity and message of their banner and CTA.
  • Changes Tested: The original banner had a lot of information and a standard “Shop Medicare Plans” button. The test variation simplified the message and changed the button text to “Get Started Now.”
  • Results: The variation led to a 433% increase in click-through rates to the insurance plans page.

B2B: SAP

  • Overview: SAP, a leader in enterprise application software, tested the copy of their CTA on a product page. The hypothesis was that a more action-oriented CTA would increase engagement.
  • Changes Tested: The original CTA read “Learn more,” which was changed to “See it in action” in the variation.
  • Results: This simple change in wording resulted in a 32% increase in clicks.

Digital Media: The Guardian

  • Overview: The Guardian tested different wordings for their support and donation CTAs to determine which would more effectively encourage readers to contribute financially.
  • Results: The test revealed that a direct ask for contributions using emotive language resulted in a higher click-through rate than a more generic request for support.
  • Lesson: This A/B test highlighted the importance of emotional resonance in messaging, especially for non-profit or cause-based initiatives.

Travel Industry: Expedia

  • Overview: Expedia conducted A/B testing to optimise hotel booking conversions on their site by altering the display of discount offers.
  • Changes Tested: They tested the visibility and presentation of savings messages (e.g., showing a percentage off versus a specific dollar amount saved).
  • Results: Showing the amount of money saved led to a slight decrease in conversion rates, contrary to expectations.
  • Lesson: The test underscored the potential for “over-optimising” to backfire and the need to balance how offers are presented to avoid overwhelming customers.

Final Checklist of A/B Testing Steps

To help ensure your A/B testing journey is structured and effective, here is a visual checklist encapsulating the process:

  1. Define Objectives: Clearly state what you aim to achieve.
  2. Formulate Hypotheses: Base your assumptions on data and prior insights.
  3. Select the Testing Tool: Choose a platform that suits your scale and complexity needs.
  4. Design the Test: Create variations based precisely on your hypotheses.
  5. Run the Test: Ensure the test is long enough to gather meaningful data.
  6. Analyze Results: Use statistical analysis to interpret the outcomes.
  7. Implement Changes: Apply successful variations or further refine and test.
  8. Repeat: Use the insights gained to continuously improve through further testing.

Regardless of the outcome, every test is a step forward in understanding your users better and refining your digital offerings to meet their needs more effectively. The journey of optimisation is continuous, and each effort builds upon the last, opening new doors to innovation and growth.

Harness the power of A/B testing to start making informed decisions that propel your business forward. Your next breakthrough could be just one test away.