Vanity Metrics 101
If data is meant to guide better product decisions, why do data-rich teams still struggle to build the right things?
A lot of teams today have access to more product data than ever before: more dashboards, tracking systems, and analytics tools than any generation of product managers before them.
Yet the struggle to understand what’s working, what users genuinely value, and where the product should go next persists. If anything, it’s gotten harder to see clearly.
The reason isn’t a lack of data itself. It’s that data can be accurate, real, and completely misleading all at the same time.
Some of the metrics you reference today are meaningful. They’re tied to decisions, clear about what they represent, and only move when the product actually improves.
Others look just as healthy but don’t offer the same clarity.
These are vanity metrics.
In this edition of The Product Notebook, we’re breaking down vanity metrics and what they really mean for your product.
Here’s what we’ll cover:
What vanity metrics actually are
The common types of vanity metrics
Why vanity metrics aren’t the enemy, and what the real problem is
A three-question framework for evaluating your metrics
Let’s get into it
So, what are Vanity Metrics?
Simply put, they are metrics that give you a sense that things are working without actually confirming whether they are. They measure activity rather than progress: things happening in and around your product, rather than outcomes that genuinely matter to your users or your business.
The “vanity” term comes from the idea that these metrics feed your ego, not your business. They make your product look like it’s working without proving that it actually delivers on the value it promises to customers.
Usually,
They’re easy to grow (sometimes without much effort)
They don’t clearly guide action
They’re often disconnected from real user value or business outcomes
Here’s a simple analogy:
Let’s say you’re trying to lose some weight, so the only thing you decide to track is how many times you go to the gym.
You log each visit and the number grows.
Now, this isn’t a bad metric in itself. The problem is that the number can’t tell you:
how much weight you’ve lost
whether your workouts are effective
or if you need to adjust your intensity
It also doesn’t reflect what actually happens during those visits. You could be showing up regularly but spending time on your phone, skipping cardio, or doing the wrong kind of workouts.
And that’s the tricky part.
Gym visits measure activity, the act of showing up, but they do not measure the outcome: whether showing up is actually getting you any closer to your fitness goal.
In product management, a common example of such a metric is “total number of downloads.”
Mind you, it’s not a bad metric and is pretty useful because it tells you people are interested enough to try your product and that your distribution is doing something right.
But it starts to fall short when it becomes the only or primary measure of how much value you’re actually delivering.
This is because downloads only tell you that someone installed the app, not what happened after.
Plus, that same metric can increase for completely different reasons.
It could mean your app store optimisation is working and you’re reaching exactly the right people.
It could also mean your latest ad campaign cast too wide a net and pulled in an audience that had no real need for what you built.
It could reflect genuinely strong word of mouth from users who loved the product enough to tell someone about it.
Or it could simply be the result of a price drop, a featured placement, or a trending keyword that had nothing to do with the product itself.
Every single one of those scenarios adds to the count of that metric in exactly the same way.
The metric cannot tell them apart, and because it can’t, neither can you.
No matter what’s actually happening, the count keeps increasing.
And that’s the limitation.
It doesn’t tell you what’s working, what’s broken, or where to focus.
So instead of guiding decisions, it leaves you questioning:
Where is the real issue?
What should we fix first?
It can easily look like progress while you’re still not solving what actually matters or improving the product in any meaningful way.
Common Types of Vanity Metrics
Here are the most common ones.
1. Volume metrics (total downloads, page views, total sign-ups, etc.)
Volume metrics count arrivals. They tell you people showed up; that something drew them in and your distribution is working.
What they can’t tell you is what happened next. Whether those people stayed, found value, or came back the following day is completely outside their view. A download count looks identical whether a user activated immediately or deleted the app three days later without opening it again.
2. Engagement metrics (session length, time on page, click counts, etc.)
Engagement metrics tell you users were active in the product. What they can’t tell you is whether that activity meant anything.
Long sessions and high click counts can reflect genuine delight or genuine confusion. A user hunting for something they can’t find looks exactly the same as a user deeply absorbed in something valuable. The metric sees the behaviour but not the reason behind it.
3. Social proof metrics (follower counts, likes, shares, etc.)
Social metrics feel like external validation, and sometimes they are.
But they only capture the users who felt strongly enough to act. Your most enthusiastic advocates are in this number. The users who found the product underwhelming are not.
What you're left with is a number that reflects the best-case slice of your user base, not the full picture.
4. Feature usage counts (e.g. “Feature X was used 10,000 times this month.”)
Feature usage tells you something happened inside the product, which is why it feels more like insight than the other types. But without context it’s almost meaningless. The same number can represent deep, widespread engagement or a small group of power users inflating the count.
It can reflect intentional use or a confusing flow pushing users through a step they didn’t mean to take. The number tells you something happened. It has no opinion on what that something means.
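One quick way to add that context is to break the raw count down by user and check how concentrated it is. Here’s a minimal sketch; the user IDs and event counts are made up for illustration, and in practice the events would come from whatever analytics tool you use:

```python
from collections import Counter

# Made-up event log: each entry is the user ID behind one "Feature X" event.
events = ["u1", "u1", "u1", "u1", "u1", "u1", "u2", "u2", "u3", "u4"]

usage_by_user = Counter(events)
total_events = sum(usage_by_user.values())

# The headline number alone: "Feature X was used N times this month."
print(f"{total_events} events from {len(usage_by_user)} distinct users")

# The context: how much of that count comes from the single heaviest user?
top_user, top_count = usage_by_user.most_common(1)[0]
print(f"Top user '{top_user}' accounts for {top_count / total_events:.0%} of all usage")
```

If one user turns out to account for most of the count, “used 10,000 times” is a very different story from the one the headline number suggests.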
Why Vanity Metrics Aren’t the Enemy
At this point, it’s easy to conclude that “Vanity metrics are bad. We should stop tracking them.”
But that’s not quite right.
Most vanity metrics aren’t useless. In fact, many of them are important.
They each capture something true about the state of things:
Downloads tell you people are discovering your product.
Page views tell you people are finding their way to you.
Sign-ups tell you people are interested enough to take a first step.
That’s all valuable information.
The issue begins when these metrics are expected to carry more weight than they can.
On their own, they are incomplete. They tell you that something happened, but not whether it mattered.
When a single number becomes the headline, treated as proof that the product is working, it starts to shape decisions in subtle ways.
Attention shifts toward what is easy to measure and away from what actually reflects value.
A more useful way to think about vanity metrics is as early signals.
They sit at the beginning of the user journey, showing that people are arriving, exploring, or trying something out. But they don’t tell you what happens next. They don’t show whether users found value, stayed, or came back.
That part of the story lives in other metrics: activation, retention, conversion, repeat usage. Without those, the picture is incomplete.
So strong product teams don’t remove these metrics.
They contextualise them and pair them with metrics that matter.
For example:
Don’t look at downloads in isolation; pair them with activation metrics.
10k downloads in a month sounds like strong growth. But if only 800 of those users completed onboarding and reached the product's core feature, you don't have a growth story. You have an onboarding problem that a download count alone was never going to show you.
Don’t look at engagement alone; connect it to outcomes.
Average session length jumping from four minutes to nine minutes sounds like users are more engaged. But if your support tickets increased in the same period and your task completion rate dropped, users aren’t spending more time because they’re getting more value; they’re spending more time because something got harder to use.
Don’t celebrate sign-ups without understanding what happens after.
3k sign-ups from a campaign feels like a win, but if week-two retention for that cohort sits at 8% (meaning 92% of those users never came back after their first session), it may signal that the campaign brought in the wrong audience. And nobody would know that from the sign-up number alone.
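To see how the pairing changes the story, here’s a minimal sketch that runs the numbers from the examples above. The figures are the hypothetical ones from the text, not real data:

```python
# Hypothetical figures from the examples above.
downloads = 10_000
activated = 800            # completed onboarding and reached the core feature

signups = 3_000
retained_week_two = 240    # the 8% of the cohort still active in week two

activation_rate = activated / downloads
retention_rate = retained_week_two / signups

print(f"Downloads: {downloads:,} -> activation rate: {activation_rate:.0%}")
print(f"Sign-ups:  {signups:,} -> week-two retention: {retention_rate:.0%}")
# 10,000 downloads reads as growth; an 8% activation rate reads as an
# onboarding problem. Same data, very different story once paired.
```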
So it’s in that combination that meaning and insight start to emerge.
Once you start thinking this way, you stop asking: “Is this a good metric or a bad one?”
And start asking: “What is this metric telling me and what is it missing?”
That’s a much more useful question because the goal isn’t to have fewer metrics.
It’s to have metrics that, together, tell a complete and honest story about your product’s trajectory.
The Three-question framework
So if the goal isn’t to remove vanity metrics but to use them more intentionally, the next step is knowing how to evaluate them.
How do you look at a number on your dashboard and decide:
how much weight to give it
what it’s really telling you
and what it might be missing
A simple way to do that is with three questions.
1. What decision does this metric inform?
Start here.
If this number goes up or down, what actually changes?
Does it:
shift your priorities?
change what you build next?
trigger an investigation? Or does it just get reported?
A metric that doesn’t influence a decision isn’t necessarily useless, but it’s also not doing much work.
This question helps you separate metrics that inform action from metrics that simply describe activity.
And once you see that clearly, you can decide how much importance to give it.
2. What would need to change in the product for this number to improve?
This question grounds the metric in the product itself.
For a metric to be meaningful, there should be a clear link between what you build and how the number moves.
If this number goes up, what did you likely improve?
Did onboarding get better?
Did users find value faster?
Did the experience become smoother or more intuitive?
If you can’t clearly connect the metric to a product change, then it may not be reflecting product performance as closely as you think.
The best metrics are the ones that move because the product improves, not just because something happened around it.
3. Could this number go up while the product gets worse?
Ask yourself: Is it possible for this number to increase even if the user experience is getting worse?
If the answer is yes, then you need to be careful how much weight you give it.
For example:
Downloads can increase even if users drop off immediately
Page views can rise when users are confused
Session time can grow because tasks are harder to complete
The number moves, but not for the right reasons.
And that’s the risk.
Because without context, it’s easy to mistake that movement for progress.
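One lightweight guard against this, sketched below with assumed figures: track each vanity metric next to a paired health metric, and flag any period where the headline rises while its pair falls. The metric names and numbers here are illustrative, not a prescription:

```python
# Week-over-week values as (previous, current). All figures are illustrative.
paired_metrics = {
    "downloads vs. day-1 activation rate":  ((9_000, 10_500), (0.14, 0.09)),
    "session length vs. task completion":   ((4.0, 9.0), (0.71, 0.52)),
    "page views vs. search success rate":   ((52_000, 61_000), (0.66, 0.48)),
}

for pair_name, ((vanity_prev, vanity_now), (health_prev, health_now)) in paired_metrics.items():
    if vanity_now > vanity_prev and health_now < health_prev:
        # The headline number moved up while its paired outcome moved down:
        # exactly the "up while the product gets worse" pattern to investigate.
        print(f"WARNING {pair_name}: headline up, paired outcome down")
```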
If there’s one thing to take away from all of this, it’s that most metrics aren’t the problem.
The real issue is how we interpret them.
Vanity metrics don’t lie. They just don’t tell you enough on their own.
And when you rely on them without context, it’s easy to feel like you’re making progress even when the product isn’t actually improving.
The goal isn’t to track less. It’s to ask more.
More of your data, more of your dashboards, more of the numbers you’ve been trusting without question, and more about how much weight each one should carry in your decisions.
Because the moment you stop taking metrics at face value, your conversations change.
You move from reporting numbers to understanding what’s driving them, and what needs to improve.
Until next week,