Analysing mobile app marketing campaign data is a complex process: one wrong number can lead to misinterpretation of the entire campaign. Several questions arise in this context: how do you obtain quality data to analyse? How do you make the analytics process as transparent as possible? Which metrics properly capture campaign or app issues? We, the mobile performance marketers at AdQuantum, are ready to answer these questions.
This article is based on a real case. Our agency had been promoting a fitness app for some time. The case we are going to discuss cost both time and money, because the client did not trust our expertise and analysed their own app data incorrectly.
AQ had been running performance campaigns for this app in English-speaking countries. After a while the product was localised for Portugal, France and Spain. At the client's request, we launched campaigns not only for these countries but also worldwide.
For a while the metrics were consistently good, but then the product team became concerned about a decline in the Conversion Rate (CR) from trial to subscription and decided to pause our campaigns until the problem was clarified.
It was obvious to us that the CR of the entire app had dropped because of the traffic from new countries. For example, we started acquiring users from Brazil — as a rule, they use mostly free apps or only their free features.
After two months, when the product team had analysed the available data and confirmed our theory, we restarted the campaign. We excluded all countries besides the English-speaking ones, and the conversion rate, as expected, stabilised.
Pausing a campaign for that long is a waste of time and money. The client should not have made a rash decision: they should have collected statistics by GEO and analysed which countries were affecting the conversion rate.
This case also had an upside: the client now puts more trust in our expertise, and we continue to work on this app today.
It is easy to be fooled by your own marketing campaign data. Marketing metrics are just numbers; without interpretation they do not mean much. To avoid common mistakes when working with data, you need the right approach to data analytics. Which one? We explain below.
In a global sense, analytics is the feedback received as a result of your actions. It is thanks to feedback that a person can make rational conclusions about their behaviour.
In business, everything works the same way: feedback is needed to analyse previous actions and then to form or adjust strategy. To get that feedback, you first need to collect data that marketing analysis can work with.
Marketing analytics therefore shows how well the chosen strategy and the acquisition audience are performing.
There is a variety of solutions for data analytics in mobile marketing, and the choice normally depends on the product being promoted. AQ, for example, most often uses the tracking platforms already connected to the product: typically AppsFlyer, Adjust or Google Analytics (the latter usually for web projects).
How to avoid pitfalls and analyse data correctly? Here are some guidelines:
Remember that before starting the analysis, it is best to establish what data you will be dealing with: which traffic sources and campaign optimisations are being used. It is not always meaningful to compare different traffic sources, since users have different content consumption patterns within each one. Comparing inherently disparate metrics, or choosing the wrong key metrics, can be misleading or simply useless.
A metric is a qualitative or quantitative indicator reflecting a particular characteristic of a product and its level of success. Quantitative metrics indicate, for example, the number of installs and the audience's activity. When interpreting data, it is important to understand what a particular metric signals and how it is connected to the other metrics.
It makes no sense to analyse all marketing metrics at once, each one must be considered for specific purposes. Let's divide them into several groups:
Active Users (DAU, WAU, MAU), Revenue, Retention Rate (RR) and Churn Rate. These give a general picture of the situation: how many users the app has, how much they like it, how much they pay, and how many of them leave. However, these metrics alone do not allow us to make specific marketing decisions or to measure the impact of product changes. When working on an app, what matters is the quality of its audience, not its sheer mass.
Lifetime Value (LTV), Average Revenue Per User (ARPU), Customer Acquisition Cost (CAC) and Return on Investment (ROI) are the metrics defining the product’s financial success and its value to the user.
Cost Per Mille (CPM), Cost Per Install (CPI), Click Through Rate (CTR), Install Rate (IR), Engagement Rate (ER), Cost Per Action (CPA), Conversion Rate (CR) and Return on Advertising Spend (ROAS) are the main metrics for marketers. CTR, IR and CR are especially important as they allow us to draw conclusions about the efficiency of our work on the product and show the quality of the acquired traffic.
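To make the relationships between these funnel metrics concrete, here is a minimal Python sketch that derives them from raw campaign counts. All figures are hypothetical, chosen only for illustration; the formulas are the standard definitions of each metric.

```python
# Hypothetical campaign figures for illustration only.
impressions = 200_000
clicks      = 4_000
installs    = 800
purchases   = 40
spend       = 2_000.0   # ad spend, USD
revenue     = 2_600.0   # revenue attributed to the campaign, USD

ctr  = clicks / impressions          # Click-Through Rate
ir   = installs / clicks             # Install Rate
cr   = purchases / installs          # Conversion Rate, install -> purchase
cpm  = spend / impressions * 1_000   # Cost Per Mille (per 1,000 impressions)
cpi  = spend / installs              # Cost Per Install
cpa  = spend / purchases             # Cost Per Action (here: purchase)
roas = revenue / spend               # Return on Advertising Spend

print(f"CTR {ctr:.1%}, IR {ir:.1%}, CR {cr:.1%}")
print(f"CPM ${cpm:.2f}, CPI ${cpi:.2f}, CPA ${cpa:.2f}, ROAS {roas:.0%}")
```

Note how each metric feeds the next: a weak CTR starves the funnel of clicks, a weak IR starves it of installs, and a weak CR makes even cheap installs unprofitable, which ultimately shows up in ROAS.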
When we find that a metric's deviation was caused not by our campaign but by problems in the product, we work with the client's team to discover exactly what changes to the app led to it. It could be anything. For instance, an error in the localisation system: few users enjoy in-app captions in their native language that contain mistakes. This can lead both to user churn and to an unwillingness to subscribe.
If we see a drop in metrics at the very beginning of the marketing funnel, even before the install, the problem may lie in the app's Google Play or App Store page. Perhaps the screenshots in the store were changed, so the product page no longer looks attractive enough for a user to convert (install, purchase, etc.).
There are also situations when the initial funnel metrics are satisfactory but the conversion rate from install to purchase drops. We would then conclude that, most likely, either there was a mistake in our campaign (for instance, we started buying non-target users) or the monetisation system was changed (the Paywall or Special Offer was replaced).
There are plenty of possibilities, so it is essential to be able to "read" the data correctly. It is also extremely important that the marketing agency (or marketing department) and the product team constantly exchange data.
When starting a performance campaign analysis, it is of paramount importance to define your goal: what exactly do you want to solve with this particular analysis? Once the goal is defined, it is time to strategise.
Here are some common marketing campaign analysis strategies:
When implementing the chosen strategy, remember that each metric is connected to the others. Suppose we see a sharp drop in CR from trial to subscription. We analyse what could have influenced the drop and start splitting the data into smaller components: dividing the metrics by GEO, day of the week and time of day, and trying to figure out whether the drop happened on a certain day or was gradual. Then, using the metrics that show deviations, we determine what the problem is: perhaps we chose the wrong audience to acquire, or the wrong UA strategy, or the problem lies in the app itself.
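The splitting step described above can be sketched in a few lines of Python. The records below are hypothetical: each tuple marks whether a user started a trial and whether they subscribed, and grouping by country reveals which GEO is dragging the blended conversion rate down.

```python
from collections import defaultdict

# Hypothetical per-user records: (country, started_trial, subscribed).
records = [
    ("US", True, True),  ("US", True, True),  ("US", True, False), ("US", True, True),
    ("BR", True, False), ("BR", True, False), ("BR", True, True),  ("BR", True, False),
]

trials = defaultdict(int)   # trial starts per country
subs   = defaultdict(int)   # subscriptions per country
for country, trial, sub in records:
    if trial:
        trials[country] += 1
        subs[country]   += int(sub)

overall = sum(subs.values()) / sum(trials.values())
print(f"Overall trial->subscription CR: {overall:.0%}")
for country in trials:
    print(f"{country}: {subs[country] / trials[country]:.0%}")
```

In this toy dataset the blended CR looks mediocre, but the split shows one country converting well and the other poorly, which is exactly the pattern from the fitness-app case at the start of the article.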
From our practice, we have collected several rules allowing us to organise transparent and high-quality data analytics:
Before launching our own performance campaign, AdQuantum always asks the client for benchmarks. These are the metrics that the product already had up until they started working with us. This allows us to find out how well our campaign performs compared to past metrics.
For the analysis to be objective, you need all the details of the big picture. Which traffic sources are the metrics taken from? Which countries? Which types of optimisation? What traffic volumes? It makes a huge difference whether we are looking at metrics from TikTok or from Google Ads, whether the traffic comes from the USA or from India, whether the campaign was optimised for installs or for conversions, and whether we are gathering data from 100 users or from 1 million.
There is also a common situation where we are given a target CPI range of $3 to $10. That is a wide spread for cost per install, so it is worth clarifying what it depends on. It often turns out that for one type of campaign optimisation the CPI is $3, while for another it is $10. Blending them into one figure is misleading, since each type of optimisation should have its own target price.
Suppose a product team provides us with a set of metrics: IR, CPI, CR and CPA. We now have four numbers, but they mean little until we know which traffic sources they come from. Facebook, TikTok, Snapchat and Google Ads each have their own characteristics, which is why, if we run a Facebook campaign, we have to use benchmarks from that particular traffic source. The same goes for countries: it is incorrect to take worldwide data as a benchmark for a US campaign.
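The per-source benchmark check can be sketched as follows. The benchmark values and the 15% tolerance threshold are hypothetical placeholders, not real AdQuantum figures; the point is that each campaign is compared only against its own source's baseline.

```python
# Hypothetical benchmarks supplied by the product team, keyed by traffic source.
benchmarks = {
    "facebook": {"IR": 0.20, "CPI": 3.0, "CR": 0.05},
    "tiktok":   {"IR": 0.12, "CPI": 1.5, "CR": 0.03},
}

def compare(source: str, observed: dict, tolerance: float = 0.15) -> dict:
    """Return metrics whose relative deviation from the source's own
    benchmark exceeds `tolerance`, mapped to that deviation."""
    flags = {}
    for metric, bench in benchmarks[source].items():
        deviation = (observed[metric] - bench) / bench
        if abs(deviation) > tolerance:
            flags[metric] = round(deviation, 2)
    return flags

# A Facebook campaign is judged against Facebook benchmarks, never TikTok's.
print(compare("facebook", {"IR": 0.19, "CPI": 4.2, "CR": 0.051}))
```

Here only the CPI is flagged (40% over its Facebook benchmark), while IR and CR sit within tolerance, so the conversation with the product team can focus on install cost rather than the whole funnel.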
Marketing analytics starts with quality, relevant data. Much depends on the completeness and reliability of the collected information: the KPIs in reports, the decisions on where to direct the campaign next, and how much money the project will ultimately earn. Poor data handling is the number-one way to lose time and money, for the client and the marketing agency alike.
Not sure which marketing strategy is right for you? Want your product to start growing? Let's talk.
Written by Julia Morozova