Having the opportunity to work on a regular basis with many of the world’s leading mobile brands, I’m always impressed by the urgency they bring to operating at the speed of mobile. They identify with the expectations of consumers that interactions with brands should be dynamic and personalized — in real time.
As mobile practitioners, we strive to meet (and exceed) this consumer expectation but also work within the constraints of our environment. We want to react in real time, but just as importantly, we want to make data-driven and informed decisions.
Marketing in the digital age is by nature iterative — the process of incremental improvements that continually refine and evolve the customer experience leading to better business outcomes. To be efficient, these improvements should be based on quantitative findings.
I always marvel at the innovation and creativity of brands in the design of promotional initiatives. I’m equally surprised by how infrequently brands test whether those initiatives actually create impactful customer experiences — and whether slight changes could improve results.
The mobile experience is constantly evolving along with consumer expectations, and the key to keeping pace with those expectations while driving incremental performance is to test and learn. While most brands apply some level of testing, it is often inconsistent. Brands that adopt a culture of experimentation design testing into all of their marketing and product activities. They instill the importance of quantifiable insights in their teams and provide the processes and tools required to properly execute their testing plans.
CULTURE OF EXPERIMENTATION
Shifting from intermittent testing to a culture of experimentation can drive improved customer insights, accelerate learnings and increase conversions and high-value actions that lead to more revenue.
We see four common practices among brands that adopt a culture of experimentation:
- Commitment to data-driven decisions, including management sponsorship of experimentation for continuous learning and improvement.
- Defined and documented testing plan that is continually updated based on findings and results.
- Robust suite of testing tools to support the testing plan across the mobile environment.
- An environment that allows for failure. Not all tests will be successful and the brands with a culture of experimentation foster creativity by allowing employees to explore new ideas without the fear of failure.
TESTING ACROSS THE MOBILE LANDSCAPE
Experimentation should take place across the mobile ecosystem with brands testing the three major components of their mobile experience — app store, mobile app/mobile website, and messages/experiences.
App Store
The app stores are where the majority of consumers discover apps to download, so it’s crucial that your app store listing is highly visible across all relevant keywords and clearly conveys your value proposition. Of course, you’re also cross-promoting your app through your paid and owned channels to drive traffic to the stores. These cross-promotions present great opportunities to experiment with different value proposition descriptions, message copy and methods for linking directly to your app store listing (e.g. QR codes, buttons or short codes).
Inside the app stores, it’s crucial to continually test and refine your keyword strategy to be sure potential customers can easily find your app when researching options in your category. Marketers may feel that a well-known brand name is enough to drive success in the app stores, but that isn’t the case. Even the largest, most well-known brands must optimize their apps for non-branded keywords. Insight from Airship Group’s leading ASO provider Gummicube indicates that successfully optimized keyword strategies can generate up to 50% of an app’s organic traffic from keywords related to its features and functionality.
Even if your brand or app name does return your app in the search results, it will also return other apps and/or competitors who rank highly for your brand, app name or relevant keywords. This highlights the importance of a well-designed, informative and impactful store listing to be sure that you are able to convert potential downloads at the highest rate.
Testing inside the app stores is difficult, and it’s important to know which components of your listing are impacting conversion rates before you update your product page. Running tests on the key components of your listing — visuals, copy, app rating, icon, reviews — will allow you to determine the best-performing variations before you submit your store listing updates to the stores.
Understanding the nuances between the app stores and how they operate is also important. Testing your app store listing is best performed outside of the stores, allowing you to submit only the most relevant changes to your listings. App store testing and optimization requires a specific set of tools, and Gummicube offers the leading keyword optimization platform (Datacube) and the premier store listing A/B testing tool (Splitcube).
Mobile App / Mobile Website
The depth of engagement and monetization from digital customer interactions is directly impacted by how easily customers can accomplish their desired objective using your digital products (mobile app or mobile web). While the principles of good app and website design are important, the key to incrementally improving results is knowing what, and how, to refine over time. And the stakes are high. For example, 57% of app users decide whether to delete an app after only two uses1. When you consider that customers with a brand’s app are 3 – 4 times more valuable than those without it, and that the average cost to generate an app download is between $3.50 (EMEA) and $5.28 (North America), the possibility of losing more than half of those users after two uses highlights the urgency of improving the ROI on your acquisition efforts.
The first 60 seconds a new user spends in your app or on your website will influence how long they stay and whether they come back. The user is there because they are looking for something and they hope you can provide it. In the first 60 seconds, they’ll decide if you can, and either continue exploring your app/website or search elsewhere.
Understanding which actions and behaviors customers demonstrate when they arrive, and which of those result in continued interaction or lead to an exit, is a starting point for determining what to test. To that end, brands typically need to test the native experience they have created in their design, along with the curated experience they are creating with their messaging content and flow.
For example, despite broad agreement among marketing and mobile product owners that feature tutorials and opt-in flows significantly impact app user behaviors, nearly 50% of enterprises2 improve these critical first experiences only quarterly or less often. Testing is a tool to achieve optimal onboarding flows, but it should also be applied across the app lifecycle.
Testing the native experience doesn’t have to focus exclusively on fixed components such as layout, imagery, navigation and text. It’s equally important to test new features or additions to your product before rolling them out widely, to be sure you’ve designed the user experience properly. This can include new shortcuts, premium features or pricing. All of these can easily be tested using feature flags to create variations shown to different audiences, measuring the behavior of users exposed to one variant or another. Continuously testing the native experience is critical to determining the best combination of variants for driving ongoing engagement that leads to high-value actions. Multivariate and programmatic testing lets brands test multiple variations at the same time to accelerate their learnings and iterate more quickly. It also demands well-thought-out test design and control group management to ensure you have statistically significant findings and avoid indeterminate results.
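To make the feature-flag idea concrete, here is a minimal sketch of deterministic variant assignment. The flag name, user ID and variant labels are hypothetical, and this is not any specific SDK’s API — real feature-flag platforms add targeting rules, weighting and remote configuration on top of this basic bucketing.

```python
import hashlib

def assign_variant(user_id: str, flag_name: str, variants: list) -> str:
    """Hash the user and flag together so each user always sees the same
    variant, and different flags split users independently of one another."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Hypothetical flag: which pricing experience a user sees
variant = assign_variant("user-123", "premium_pricing", ["control", "new_pricing"])
```

Because assignment is a pure function of user and flag, users keep the same experience across sessions without any stored state, which keeps behavioral measurements per variant clean.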
Messages and Experiences
The other crucial testing area for optimizing your business results is within the experiences you create to drive activation, feature adoption, engagement and ultimately monetization. Designed to drive high-value actions, these experiences are composed of messages curated and presented once or dynamically sequenced over time.
There are dozens of individual components that go into experiences, so to maximize their impact you must understand which set of components produces the best results. Of course, imagery, copy, layout and offers influence the performance of an experience — but don’t overlook timing, frequency, use of personalization, subject lines and context.
When testing experiences, there are usually two high-level questions to answer:
- How can the impact of messaging and experiences be measured?
- How can experiences be improved to deliver the optimal outcomes?
How can the impact of messaging and experiences be measured?
The only way to know whether the experiences you are creating for customers are having an impact is through a holdout group. By defining a holdout group that will not be exposed to any of your messaging or experiences, you can compare its behavior to that of customers who were exposed. If the exposed group performs the desired high-value actions at a higher rate than your holdout group, the incremental performance can be attributed to your marketing efforts and calculated as the lift associated with the experiences you are creating. An important consideration when creating a holdout group is to ensure it is a random sample of your audience, eliminating any bias in the test. You will also want to decide whether this group should continue receiving the transactional messages required for the delivery of your service or goods. If you choose to keep sending your holdout group transactional messages, confirm that those messages contain no embedded offers or recommendations, as these could influence your test results.
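The holdout mechanics above can be sketched in a few lines. The holdout rate, seed and conversion counts below are illustrative, not recommendations:

```python
import random

def split_holdout(user_ids, holdout_rate=0.1, seed=42):
    """Randomly carve out a holdout group that receives no marketing messages."""
    rng = random.Random(seed)
    holdout = {u for u in user_ids if rng.random() < holdout_rate}
    exposed = set(user_ids) - holdout
    return exposed, holdout

def lift(exposed_conversions, exposed_size, holdout_conversions, holdout_size):
    """Relative lift: how much better the exposed group converts vs. holdout."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return (exposed_rate - holdout_rate) / holdout_rate

# e.g. 6% conversion among exposed users vs. 5% in the holdout = 20% lift
campaign_lift = lift(600, 10_000, 50, 1_000)
```

The random split is what makes the comparison valid: any difference in conversion rate between the two groups can then be attributed to exposure rather than to how the groups were chosen.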
How can experiences be improved to deliver the optimal outcomes?
Once you’ve concluded there’s a positive lift when customers are exposed to a message or experience, the next step is to understand how to drive the most impact through refinement of the experience.
A simple example might be a desire to understand which color of call-to-action button drives the best performance. To determine this, you would design a simple A/B test that measures the response rate of customers exposed to one variation of the button compared to those exposed to the alternative. To ensure accurate results, it’s important to randomize group assignment to eliminate any biases.
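To decide whether the button test actually produced a winner, a standard check is a two-proportion z-test. This is a minimal sketch; the conversion counts are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates.
    |z| > 1.96 is roughly significant at the 95% confidence level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: blue button converts 4.8%, green 5.6%, 10,000 users each
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
```

With these illustrative numbers the z-score clears the 1.96 threshold, so the difference would be unlikely to be noise; with smaller samples the same 0.8-point gap might not be.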
While this example is straightforward, there are dozens of components to the messages and experiences customers are exposed to on a daily basis. Testing each of these sequentially can be time-consuming, but running multivariate tests can accelerate learning and, in turn, improvements in performance. Multivariate tests require a well-designed testing framework and a randomized control group approach to ensure the findings are accurate.
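One reason multivariate tests need careful design is that variant counts multiply quickly. A quick sketch, with hypothetical component names, of enumerating a full-factorial test matrix:

```python
from itertools import product

# Hypothetical message components under test
components = {
    "subject_line": ["with_emoji", "plain"],
    "send_time": ["morning", "evening"],
    "cta_color": ["blue", "green"],
}

# Every combination of component values becomes one test cell
cells = [dict(zip(components, combo)) for combo in product(*components.values())]
# 2 x 2 x 2 = 8 cells, each needing enough traffic to reach significance
```

Three two-way components already produce eight cells; adding a fourth doubles that again, which is why traffic planning and control group management matter as much as the creative variations themselves.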
Testing an experience is similar to testing a message but also allows for the evaluation of sequencing in your testing matrix. Understanding the influence that message sequence, frequency, channel orchestration and time of day have on your messages enhances your ability to design journeys that have greater business impact.
Airship has invested heavily in building a mobile testing suite to enhance mobile experimentation. Our Feature Flags offering is available now as a Special Access tool. If you’d like to learn more about the experimentation tools outlined above, please reach out to your account manager or click the link below.
Navigator is Airship’s customer newsletter covering the latest mobile industry trends, product updates, use cases and best practices, and other learning resources. It’s yet another resource to help you deliver better mobile experiences and create greater value more quickly. If you’d like to receive our monthly Navigator newsletter please sign up here.
1 The Mobile Consumer 2023 survey, Airship
2 Mobile App Experience Gap survey, Airship