Let's talk about metrics for a few minutes, and I don’t mean the superior system of measurement [take that, confusing Imperial units! I have a new home now!].
Yes, that's right, we're talking about those often unglamorous tracked performance measurements that data nerds love to dig into, and for good reason. They should be kind of unsexy at first glance, but more on that later.
If design practitioners want that coveted “seat at the table”, then being able to prove the impact of their work is how to get there. What results did the research insights lead to? How have the UX improvements reduced costs? Why are we not talking about this more?
For the design team to flex their problem-solving magic, they first need to understand what is and isn’t working well today for the business’s users, customers, and staff. That is, supposing the business cares about saving time and money by identifying and prioritising opportunities for improvement instead of blindly changing things and hoping sales go up.
This is where quantitative metrics come in: the numerical data that can point to emerging trends—positive and negative alike—so the team can swoop in and investigate why they’re happening. Not all quantitative measures are helpful, though, and being helpful is the key.
Say the team has spotted a curious trend, such as a decrease in total sales over the past quarter and a rise in critical reviews of the product. As much as the business won’t want to see a result like this, it’s bound to happen to any company eventually and it’s okay because it’s actionable. If we know something is dropping, we can talk to those customers to find out why. We can look into user feedback after the most recent releases to see if a misstep was made, like removing something they liked.
Key touchpoints along a user flow can be a great place to set up quantitative metric triggers. Think of it like a float on a water reservoir: there’d be action plans in place for what to do if it signalled a risk of overflow or too little water, like telling the townspeople to start taking shorter showers. The same goes for surprising results, like an influx of new customers seemingly out of nowhere; it’s important to know what went right so it can hopefully be repeated.
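If it helps to see it in code, that float is really just a threshold alert on a number the team cares about. Here’s a minimal Python sketch, where the metric, the expected range, and the sample value are all made up for illustration:

```python
def check_metric(value: float, low: float, high: float) -> str:
    """Flag a metric that drifts outside its expected band, like the reservoir float."""
    if value < low:
        return "ALERT: below expected range - investigate what broke"
    if value > high:
        return "ALERT: above expected range - find out what went right"
    return "OK"

# Hypothetical touchpoint: daily checkout completions normally sit between 400 and 700.
print(check_metric(310, low=400, high=700))
# ALERT: below expected range - investigate what broke
```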
It’s the measures that don’t inspire any specific actions or help point the team towards the contributing factors that are the issue. Enter the notoriously appealing vanity metrics.
Look, Charlie! We had 10,000 customers visit our page last week!
Yes, but is that good? Is it higher or lower than what we normally get? What percentage completed check-out? Do we have any idea why they’re visiting, or whether they’re finding what they’re after?
No? Okay. Thanks? I guess…
It feels like vanity metrics exist for the sole purpose of making it look like positive progress is being made at all times; something to give the executives and shareholders a warm-fuzzy feeling that everything is A-OK. In other words, tracking what is easy rather than doing the mahi (work) to track what is informative.
I understand the temptation, especially when a team feels like they have to defend their performance at all costs to make the higher-ups happy. But, if what’s being measured can be ‘gamed’ by things like giving customers free fries for a 5-star review or not reporting low scores in an effort to tell tall tales, you’ve got yourself a vanity metric. Rely on them too long and it’ll only be a matter of time until the bosses are spitting fire, confounded as to how the numbers look great yet the business is still losing money.
Sharing what percentage of customers have had to call and ask for help with a feature isn’t as easy to falsify. Call intents are a treasure trove of insight waiting to be unlocked.
So without further ado, here are 5 of the most common vanity metrics and their shortcomings:
Sign-ups
That’s great; people are signing up [and hopefully it’s not all friends and family]. But, did they actually use the service yet? Are they coming back after the first visit?
Apty.io recommended measuring Adoption Rate instead, calculated as (New Active Users / Sign-ups) x 100, where New Active Users are those who’ve completed a set number of onboarding tasks in a given time period. If the Adoption Rate isn’t looking good, it might be worth researching whether customers can tell what to do next, or how.
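For the formula-inclined, here’s that calculation as a minimal Python sketch; the sample numbers and the onboarding definition are hypothetical, not Apty.io’s:

```python
def adoption_rate(new_active_users: int, sign_ups: int) -> float:
    """Share of sign-ups who actually became active users."""
    return (new_active_users / sign_ups) * 100 if sign_ups else 0.0

# Hypothetical month: 1,200 sign-ups, but only 180 completed the onboarding
# tasks (say, created a project and invited a teammate) within 30 days.
print(f"Adoption rate: {adoption_rate(180, 1200):.1f}%")  # Adoption rate: 15.0%
```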
Time on Page
Let me clarify up front that context is key here [as it always is]. Bloggers, news media companies, or good ol’ YouTube will probably benefit from a Time On Page metric because it can reflect how much time their audience is investing in the content.
In most other cases though, a high Time On Page might mean users aren’t finding what they’re after or are just plain stuck. Perhaps they opened the page and then left to do something else. With context, Time On Page can be a useful indicator of these issues, but boasting about high numbers without that knowledge may keep the team from seeing that something is seriously wrong.
A super short Time On Page can be just as misleading. One might think the recent design changes are clearly helping customers get to the next task, when really it might be that they’re jumping ship as soon as they arrive. Tread carefully here.
Page Views
Tracked in isolation [there’s a theme here…], Page Views aren’t as clear as they may appear to be on the surface.
High Page View counts could mean everyone is, again, opening the page and then immediately leaving. Consider replacing these with Bounce Rate percentages to better account for external factors, like a recent marketing campaign bringing in a ton more customers.
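For reference, Bounce Rate is typically just single-page sessions divided by total sessions. A toy Python sketch with made-up session data:

```python
# Hypothetical session log: pages viewed per visit.
pages_per_session = [1, 5, 1, 2, 1, 1, 3, 1, 7, 1]

# A "bounce" is a session where the visitor saw exactly one page.
bounces = sum(1 for pages in pages_per_session if pages == 1)
bounce_rate = bounces / len(pages_per_session) * 100

print(f"Bounce rate: {bounce_rate:.0f}%")  # Bounce rate: 60%
```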
Page Views might also be artificially inflated by other sites or scripts logging a visit, such as bots scraping the page for content updates. Best not to interpret this as thousands of new potential customers all checking out the store when it was actually a ghost town.
It’s not that Page Views shouldn’t be measured, but pair them with a partner metric for a clearer picture. For example, Medium.com tells its writers what percentage of readers have opened their articles versus what percentage have actually read the entire piece, for a handy conversion calculation.
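That opened-versus-read split is another simple ratio. A quick sketch with hypothetical numbers [Medium’s exact definition of a ‘read’ is theirs, not mine]:

```python
views = 4_300  # readers who opened the article (hypothetical)
reads = 1_032  # readers who made it to the end (hypothetical)

read_ratio = reads / views * 100
print(f"Read ratio: {read_ratio:.0f}%")  # Read ratio: 24%
```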
Impressions
At risk of getting redundant: yes, a lot of people might have seen the post, and that feels great, but how many actually engaged with it by leaving a comment, becoming a follower, or sharing it with a friend? Better yet, how many clicked on the call to action and made a purchase or signed up for more?
As a personal and openly disappointing example, when I released my most recent Everyday Experiences Podcast episode with Doug Collins, my tweets announcing this new episode got over 11k impressions within the first few days thanks to his network. I felt like I had made it! Blue skies here I come!
That is, until I checked how many of those people actually engaged, i.e. clicked the episode link and listened to the show within the first week. Seventeen. A very lonely 17 listens out of 11,000 impressions; not even a 1% listener conversion. I’m doing great!
What’s important to consider with marketing activities like social media content creation and promotion is the cost of creating said content. How many people did it take to make? What was the total production time and opportunity cost? Brand & Marketing might consider any impression valuable, as it takes more than one interaction for a brand to stick in a customer’s mind, but make sure it’s leading to something of tangible impact.
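One way to keep yourself honest here is to fold production cost into the conversion maths. A rough sketch using my episode’s real impression and listen counts; the production time and hourly rate are assumed purely for illustration:

```python
impressions = 11_000   # from the episode tweets
listens = 17           # actual listens in the first week
production_hours = 8   # assumed: recording, editing, promoting
hourly_rate = 50.0     # assumed opportunity cost per hour

conversion = listens / impressions * 100
cost_per_listen = (production_hours * hourly_rate) / listens

print(f"Listener conversion: {conversion:.2f}%")   # Listener conversion: 0.15%
print(f"Cost per listen: ${cost_per_listen:.2f}")  # Cost per listen: $23.53
```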
Net Promoter Score (NPS)
Ahh, saving the most hotly debated one for last.
Monthly reports celebrating NPS scores tied to executive or department bonuses are a sure sign of a vanity metric at play. The gold is in the verbatims, but who’s checking them? Are we counting how many times a topic or sentiment comes up, to see whether it’s worth investigating why?
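For anyone counting along at home, the score itself is just the percentage of promoters (9-10) minus the percentage of detractors (0-6), and tallying topics in the verbatims is equally simple. A minimal sketch with invented responses:

```python
from collections import Counter

# Invented NPS responses: (0-10 score, topic tagged from the verbatim comment).
responses = [
    (9, "checkout"), (10, "support"), (3, "navigation"), (6, "navigation"),
    (8, "checkout"), (2, "navigation"), (10, "checkout"), (7, "pricing"),
]

scores = [score for score, _ in responses]
promoters = sum(1 for s in scores if s >= 9)   # 9-10
detractors = sum(1 for s in scores if s <= 6)  # 0-6

nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:.0f}")  # NPS: 0

# Which topics keep coming up? These are the ones worth investigating.
print(Counter(topic for _, topic in responses).most_common(2))
# [('checkout', 3), ('navigation', 3)]
```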
If NPS must be used, Delta CX podcast Ep. 24 suggested capturing the score and commentary via plugins like Hotjar, which pairs the feedback with screen recordings of the customers doing the things that led up to it. Was the 3 out of 10 because they struggled to navigate around the page or fill in the form? Watch to find out!
With all of these examples in mind, what vanity metrics have you seen and what better alternatives might there be? Here’s to making better-informed decisions — CHEERS!
This article was originally published on 2 August 2021 on my LinkedIn page.