How to measure the effectiveness of textual ASO
To evaluate the effectiveness of textual ASO correctly, an ASO specialist needs to know which parameters to pay attention to. In this article, Artem Tkachuk, ASO manager at Onde.app and ASO expert at ASOdesk, describes which metrics help evaluate performance and shows the mistakes aspiring ASO specialists make.
This is part of a series of articles based on ASOdesk Academy lectures, where we covered all aspects of ASO. We have already explained where to start with ASO, how to communicate with clients, and how to work with Asian languages in the App Store and Google Play. You can watch the original lecture on measuring the effectiveness of textual ASO here:
App Store Optimization consists of collecting the semantic core, preparing metadata, measuring effectiveness, working in iterations, building growth hypotheses, and analyzing competitors. The effectiveness of textual ASO can be evaluated using internal and external metrics.
External metrics reflect the app's position on the App Store or Google Play. They come in two types: quantitative and qualitative. Quantitative external metrics are search queries and the distribution of app positions across them. Qualitative external metrics show the distribution of semantics as pairs of query frequency and app position.
Let’s now consider each type separately: what it is for and how to measure it.
Quantitative metrics are search queries and the distribution of app positions by search queries; in other words, how many queries your app ranks at the top for. Analyzing the position distribution makes it clear which group of queries to focus on. Let me give you an example.
Typically, in many services, the distribution of positions is shown as a chart that tracks, day by day, the number of queries at each group of positions.
ASOdesk breaks down queries into 6 categories: category top 1, top 2-5, top 6-10, top 11-20, top 21-50, top 51-100.
Top 1 is the first position and guaranteed installs.
Top 2-5 is a category of queries that reliably bring installs. These are good candidates for future promotion to top 1 and must not be lost.
Top 6-10 and top 11-20 are queries you have to work on in every iteration to push them higher.
Top 21-50 are queries with good potential to grow into higher categories; they need to be developed. Top 51-100 are queries that simply need to be tracked.
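As a rough sketch, bucketing tracked queries into these six categories can be done with a simple threshold lookup. The data structure below is a made-up example, not ASOdesk's internal format:

```python
from collections import Counter

# Bucket boundaries mirroring the six ASOdesk categories: (label, max position)
BUCKETS = [("top 1", 1), ("top 2-5", 5), ("top 6-10", 10),
           ("top 11-20", 20), ("top 21-50", 50), ("top 51-100", 100)]

def bucket(position):
    """Return the category label for a search position, or None if below top 100."""
    for label, upper in BUCKETS:
        if position <= upper:
            return label
    return None

# Hypothetical snapshot: query -> current position in search results
positions = {"fitness": 3, "home workout": 14, "yoga": 47, "running app": 88}

# Count how many tracked queries fall into each category
distribution = Counter(bucket(p) for p in positions.values())
print(distribution)
```

Plotting such a distribution per day gives exactly the kind of chart described above.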
The graph shows that the app was updated on January 24. After that, the number of queries in the top 51-100 grew 4.5 times, and the number of queries in the top 21-50 also rose slightly. It seems like everything is fine, but this is a misconception: top 6-10 and top 2-5 have hardly changed, so these queries most likely bring few installs.
Can you be absolutely sure about your ASO if the number of queries grows? No, because the app can grow on irrelevant or low-frequency queries while staying in low positions, invisible to users. A complete assessment requires analyzing not only quantitative but also qualitative metrics.
Qualitative metrics show the distribution of semantics as pairs of query frequency and app position.
Most aspiring ASO specialists look only at quantitative metrics, which don't always show the full picture. One of the main tasks of an ASO specialist is to increase installs from search, so you need to understand how much your app's visibility is actually growing.
Visibility in search depends directly on the popularity of the queries for which the app ranks at the top. Accordingly, you need to investigate not only how search queries are distributed across positions in the search results but also the volume of search traffic each query brings.
In ASOdesk, we use Traffic Score to evaluate the traffic a query brings.
In this chart, you can immediately see how many queries, and how much Traffic Score, fall into the zone visible to users. Here it is useful to track two clusters: the top 5 (queries with maximum visibility) and top 5-20 (promising queries for further optimization, since there is a high chance of bringing them into the top 5).
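One way to approximate this kind of cluster view is to sum the Traffic Score of queries within each position cluster. This is an illustrative sketch with invented numbers, not ASOdesk's actual formula:

```python
# Hypothetical data: query -> (current position, Traffic Score)
queries = {
    "fitness": (3, 55),
    "home workout": (8, 40),
    "yoga": (18, 30),
    "running app": (60, 25),
}

def visible_traffic(queries, max_position):
    """Sum the Traffic Score of queries ranking at or above max_position."""
    return sum(score for pos, score in queries.values() if pos <= max_position)

top5 = visible_traffic(queries, 5)                      # maximum-visibility cluster
top5_20 = visible_traffic(queries, 20) - top5           # optimization candidates
print(top5, top5_20)
```

Comparing these two sums across iterations shows whether optimization is actually moving traffic into the visible zone, not just moving query counts between buckets.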
Indirect metrics help complete the picture of textual ASO effectiveness. If installs from search queries grow and the overall level of installs grows with them, the app will climb in its category top or in Overall. The optimization effect can also lift your Browse traffic as positions improve.
Indirect metrics are the app's positions in its category or in Overall, plus impressions, page views, and downloads from the Browse source.
Internal metrics are impressions, page views, installs, purchases, retention (user retention rate), and conversions from search to install and from page view to install. These metrics are available only to you, in App Store Connect and the Google Play Developer Console. You can rely on them to understand how effective your ASO is.
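The two conversions mentioned can be computed directly from these counts. A minimal sketch with made-up funnel numbers:

```python
def conversion(numerator, denominator):
    """Conversion rate as a percentage, guarding against zero traffic."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

# Hypothetical monthly funnel pulled from App Store Connect
impressions, page_views, installs = 50_000, 8_000, 1_200

search_to_install = conversion(installs, impressions)   # search -> install
view_to_install = conversion(installs, page_views)      # page view -> install
print(search_to_install, view_to_install)
```

Watching both rates over time separates search-results problems (icon, title, screenshots in the results list) from product-page problems (description, reviews, price).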
Mistakes in measuring the effectiveness of text ASO
Analysis of Position Distribution is not deep enough
Let’s consider an example. In the chart, the iterations show growth, yet the dynamics are clearly very slow. To understand why, you need to look at how queries actually move between categories.
For instance, take the top 21-50. The screenshot shows 190 queries now versus 134 before. That looks like progress, but in fact 56 queries have entered and left the category along the way.
“Out” means a query has dropped out of the category. It can move to a higher category, such as the top 5, or fall down, for example into the top 51-100.
You can see that 13 important queries that could have been promoted fell out of the previous categories. You need to check where the app lost ground.
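To see where queries actually moved, you can diff two snapshots of a category's query set. A minimal sketch with made-up query lists:

```python
def category_movement(before, after):
    """Compare two snapshots of the queries in one position category."""
    before, after = set(before), set(after)
    return {
        "in": after - before,      # queries that entered the category
        "out": before - after,     # queries that left, upwards or downwards
        "stayed": before & after,  # queries that held their category
    }

# Hypothetical top 21-50 snapshots around an iteration
before = ["yoga", "pilates", "step counter", "calorie tracker"]
after = ["yoga", "pilates", "gym log", "stretching", "meal plan"]

moves = category_movement(before, after)
print(len(moves["in"]), len(moves["out"]))
```

Cross-referencing the "out" set with current positions tells you whether those queries climbed into a higher bucket or fell out of the top 100 entirely.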
A superficial approach to analyzing the position distribution will not yield results: it remains unclear what has changed and how to fix it.
No detailed analytics with semantic cohorts
In this graph, you see queries containing the word “fitness”. Below are the metadata changes from March 22 and April 22: the app title was changed and now includes “fitness”.
You can see a substantial increase for queries containing “fitness”: +97 positions, +155, +145, and so on. If this app wants to be positioned as a fitness app, it is going in the right direction.
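Tracking a cohort like this amounts to filtering tracked queries by a keyword and comparing positions before and after a metadata update. An illustrative sketch (the queries and numbers are invented):

```python
def cohort_deltas(keyword, before, after):
    """Position change for every tracked query containing the keyword.
    A positive delta means the query climbed (its position number decreased)."""
    return {
        q: before[q] - after[q]
        for q in before
        if keyword in q and q in after
    }

# Hypothetical positions before and after adding "fitness" to the title
before = {"fitness app": 120, "fitness tracker": 180, "yoga": 40}
after = {"fitness app": 23, "fitness tracker": 35, "yoga": 38}

print(cohort_deltas("fitness", before, after))
```

If the cohort's deltas are large while unrelated queries barely move, the metadata change, not some store-wide shift, is the likely cause.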
How to understand when you need to change a semantic cohort
I’ll explain with an example. When Russia introduced self-isolation, user behavior changed: people started actively looking for apps that help them exercise at home. Many specialists started optimizing situationally for queries such as “home workout”. But now the situation has improved, and people have gone back to running in parks and attending group workouts. So you need to look for a new semantic cohort in order not to lose traffic.
Incomplete analytics of external and internal metrics
If we evaluate the app only by external changes in the store, the distribution of positions by search queries will show a positive trend.
The chart shows impressions, App Units from the App Store Search source, and conversion. The growth in queries until mid-April did not translate into more installs.
Usually, if conversion does not change, neither does the quality of the traffic. But on these charts, queries and positions did not change while the total Traffic Score across queries started to shift. To find the reason, you need to analyze all external metrics, especially search queries and Traffic Score.
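A quick way to spot the pattern described above is to compare impressions and conversion period over period: if impressions climb while conversion drops, the new traffic is likely low quality. An illustrative sketch with invented figures:

```python
def traffic_quality_check(prev, curr):
    """Flag periods where impressions grew but installs did not keep pace.
    prev and curr are (impressions, installs) tuples for two periods."""
    prev_cr = prev[1] / prev[0]   # previous conversion rate
    curr_cr = curr[1] / curr[0]   # current conversion rate
    impressions_up = curr[0] > prev[0]
    conversion_down = curr_cr < prev_cr
    return impressions_up and conversion_down

# Hypothetical periods: impressions grow 40%, installs stay flat
march = (30_000, 900)   # 3.0% conversion
april = (42_000, 900)   # about 2.1% conversion
print(traffic_quality_check(march, april))
```

When this check fires, the next step is exactly what the article suggests: break the growth down by query and Traffic Score to find which queries brought the non-converting impressions.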
How to understand that textual ASO is done well
To measure the effectiveness of text ASO, you need to analyze in detail the following:
1. External metrics — search queries, the distribution of app positions by search queries, and semantics in the context of query popularity and app positions.
2. Internal metrics — impressions, page views, installs, purchases, retention (user retention rate), conversions.
3. Run analytics through semantic cohorts.
4. If you lack data, supplement your analytics with indirect metrics — app positions in a category or in Overall, plus impressions, page views, and installs from the Browse source.