Total users. Total signups. Page views. Cumulative downloads. The numbers that only ever go up.

These are the metrics that show up in board decks, marketing posts, and the slide before the all-hands. They feel like progress. They reward the team for showing up. And they quietly hide whether the product is actually working.

The problem isn't that vanity metrics are wrong. It's that they're true in a way that doesn't help you make decisions.

What makes a metric vanity

Three properties that vanity metrics share:

They only move in one direction. Cumulative numbers always go up. They can't tell you when something has gone wrong, because going down isn't a possibility. A metric that can't communicate bad news isn't communicating much.

They're decoupled from value. A user who signed up two years ago and never came back is in your total user count. A page view from a bot is a page view. A download that produced an immediate uninstall is a download. Each of these adds to the headline number without representing anything you should celebrate.

They confuse the team. The vanity metric is going up. The product isn't really working. The team feels good and works on the wrong things, because the dashboard says everything is fine. The damage isn't the metric — it's the bad decisions the metric enables.

What actually matters instead

The useful metrics share opposite properties. They can move down. They're tied directly to user value. They tell you what happened in a specific time window. A short list:

Activation. Of the users who signed up this week, what percentage hit the moment where the product becomes useful? This is the single most important early-stage metric. If activation is broken, nothing downstream of it works.

Retention. Of the users who activated this month, how many were still using the product four weeks later? Six months later? Retention curves don't lie. They're the closest thing product has to a verdict.

Engagement frequency, not totals. Daily active users tells you something. Total users tells you almost nothing. The shift from cumulative to time-bounded is what makes a metric useful.

Cohort growth. Are this month's new users behaving better, worse, or the same as last month's? The trend tells you whether the product is improving or decaying. Headline numbers can't.
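The metrics above are all computable from a plain event log. A minimal sketch, assuming a flat list of (user, event, date) records — the event names, user IDs, and dates here are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical event log. "signup", "activate", and "active" are assumed
# event names; your product's definition of activation will differ.
events = [
    ("u1", "signup",   date(2024, 1, 1)),
    ("u1", "activate", date(2024, 1, 2)),
    ("u1", "active",   date(2024, 1, 30)),
    ("u2", "signup",   date(2024, 1, 3)),
    ("u2", "activate", date(2024, 1, 3)),
    ("u3", "signup",   date(2024, 1, 5)),   # signed up, never activated
]

def activation_rate(events, start, end):
    """Share of users who signed up in [start, end) and activated.
    Time-bounded: it describes this window, not an all-time total."""
    signups = {u for u, e, d in events if e == "signup" and start <= d < end}
    activated = {u for u, e, d in events if e == "activate" and u in signups}
    return len(activated) / len(signups) if signups else 0.0

def retained(events, user, weeks=4):
    """Did the user do anything `weeks` weeks after first activating?"""
    acts = sorted(d for u, e, d in events if u == user and e == "activate")
    if not acts:
        return False
    cutoff = acts[0] + timedelta(weeks=weeks)
    return any(u == user and d >= cutoff for u, e, d, in events)

week = (date(2024, 1, 1), date(2024, 1, 8))
print(activation_rate(events, *week))  # 2 of 3 signups activated
print(retained(events, "u1"))          # still active 4 weeks on -> True
```

Note that both functions can return bad news: the activation rate can fall week over week, and a cohort's retention can come back false. That falsifiability is the whole point.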

How to actually use metrics

Three habits that separate the teams who use metrics from the teams who report them:

Pair every quantitative metric with a qualitative why. The metric tells you what changed. It doesn't tell you why. The why comes from talking to users, reading support tickets, watching session replays. Teams that only have the number end up running experiments against the wrong hypothesis. Teams that have the number and the why act faster and more accurately.

Set the failure threshold before you start. If you're going to claim success when activation hits 40%, decide that before launch. The threshold prevents the team from rationalising whatever number they get into a positive story.

Look at the distribution, not the average. Average activation rate is one number. The distribution shows you that 30% of users activate fully, 50% partially, and 20% abandon. The average hides that the product is working brilliantly for some users and not at all for others — which is much more actionable than "we're at 40%".
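The average-versus-distribution point can be shown in a few lines. A sketch with invented per-user "activation depth" scores (0.0 = abandoned, 1.0 = fully activated), chosen to mirror the 30/50/20 split above:

```python
from collections import Counter
from statistics import mean

# Hypothetical per-user activation depth. The values are invented so the
# mean lands near 0.40 while hiding three very different user groups.
depths = [1.0] * 30 + [0.2] * 50 + [0.0] * 20

print(round(mean(depths), 3))  # the single headline number: 0.4

def bucket(d):
    """Assumed thresholds; pick ones meaningful for your product."""
    if d >= 0.8:
        return "full"
    if d > 0.0:
        return "partial"
    return "abandoned"

counts = Counter(bucket(d) for d in depths)
print(counts["full"], counts["partial"], counts["abandoned"])  # 30 50 20
```

Both views come from the same data. Only the second one tells you which group to go talk to.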

Where vanity metrics actually do harm

It's not the metric. It's the decisions:

You spend on growth instead of product. The headline numbers can be moved with marketing spend. So when product fundamentals are weak, the cheapest way to keep the dashboard green is to acquire more users — knowing most will churn. You're paying for a metric that isn't telling you what you need to know.

You over-celebrate launches. The first week of any launch is full of activity that predicts nothing. Vanity metrics make that week look like a result. Teams declare victory and move on. The real verdict — does this work? — comes from week six, week twelve, the cohort that joined a month later. Teams that stop watching before then miss the actual lesson.

You misalign the team. When the metric the team is rewarded on is vanity, the work the team does optimises for the metric. You ship features that drive signups but not activation. You ship launches that drive press but not retention. The dashboard goes up. The product gets quietly worse.

The shift

Don't ban vanity metrics. They have a place — in marketing, in PR, in stakeholder updates where the audience just wants the headline.

Just don't run product on them. Build the dashboard around the metrics that can fail. Activation. Retention. Cohort behaviour. The numbers that tell you when something has gone wrong, not just when something has happened.

The metrics you choose are the strategy you actually operate. Choose ones that can teach you something.

If you're tracking what traction really means in early product, the move away from vanity metrics is most of the work. And letting the data pick the feature only works when the data you're looking at isn't lying to you.