Net Promoter Score (NPS)

Net Promoter Score is one of the most recognized — and misused — metrics in customer experience. It promises a simple, universal question to gauge customer loyalty: “How likely are you to recommend us to a friend or colleague?”

But NPS is only as useful as the context and consistency behind how it’s measured, interpreted, and acted upon.


What Is NPS?

At its core, NPS asks customers to rate their likelihood to recommend a company on a scale from 0 to 10. Based on their score, customers fall into one of three categories:

  • Promoters (9–10): Loyal enthusiasts who fuel growth.
  • Passives (7–8): Satisfied but unenthusiastic customers.
  • Detractors (0–6): Unhappy customers who may damage your brand.

The Net Promoter Score is then calculated by subtracting the percentage of detractors from the percentage of promoters:

NPS = % Promoters - % Detractors

This gives you a score between -100 and +100.
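For concreteness, here is a minimal sketch of the arithmetic in Python. It assumes nothing beyond the standard 0–10 scale and the promoter/passive/detractor cut-offs described above.

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 responses.

    Promoters (9-10) and detractors (0-6) are counted as percentages of all
    responses; passives (7-8) only widen the denominator. The result always
    falls between -100 and +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 6, 3]))  # 50% - 20% = 30.0
```

A score of +30, for instance, simply means promoters outnumber detractors by 30 percentage points.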


Why NPS Is Popular

Simplicity. NPS compresses sentiment into a single number that executives love to rally around. It creates a perceived north star for customer loyalty and retention, without the complexity of parsing dozens of satisfaction scores.

But this simplicity comes at a cost.


The Problem with NPS (As It’s Commonly Used)

1. It’s detached from conversation.
You often get the score, but not the why. Most NPS programs are post-interaction email blasts. You know they gave you a 6, but you don’t know what happened in the call, the chat, or the product experience that led to that score.

2. It assumes all 6s are created equal.
A 6 from a long-time loyal customer isn’t the same as a 6 from a first-time buyer. Without context like tenure, issue type, or rep behavior, the number floats in isolation.

3. It misleads as a leading indicator.
NPS is a lagging metric. By the time the survey goes out and results trickle in, the window for proactive recovery may have passed.


Vitalogy’s Take: Make NPS Conversational

If you’re using NPS inside a contact center, it should be calibrated to the conversation, not just the post-call survey. Here’s how (a rough code sketch follows the list):

1. Correlate score with transcript signals.
Track what conversational elements correlate with promoters vs. detractors. Was the agent interrupting? Was resolution clear? Was sentiment declining across the call? These cues can act as early predictors of NPS outcomes.

2. Segment by customer intent and context.
Don’t group a billing question with a technical outage. NPS should be sliced by reason for contact, customer segment, and channel to make the number actionable.

3. Don’t just display it — do something with it.
If someone gives a 3, how quickly can you trigger a follow-up or alert? Is there a playbook for recovery? Are trends sent to training or product teams? Insight without action is just reporting.
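As a rough illustration of all three points, the sketch below joins NPS responses to per-call signals, slices the score by reason for contact, and flags low scores for follow-up. The field names (contact_reason, interruptions, sentiment_trend), the sample records, and notify_recovery_team are made up for illustration; assume the real inputs come from your survey platform and conversation-analytics pipeline.

```python
from collections import defaultdict

# Hypothetical post-call records joining the NPS response to transcript signals.
# Field names and values are illustrative, not a real schema.
responses = [
    {"customer_id": "c1", "contact_reason": "billing", "interruptions": 0, "sentiment_trend": +0.4, "score": 10},
    {"customer_id": "c2", "contact_reason": "billing", "interruptions": 4, "sentiment_trend": -0.3, "score": 3},
    {"customer_id": "c3", "contact_reason": "outage",  "interruptions": 2, "sentiment_trend": -0.1, "score": 6},
    {"customer_id": "c4", "contact_reason": "outage",  "interruptions": 0, "sentiment_trend": +0.2, "score": 9},
    {"customer_id": "c5", "contact_reason": "how-to",  "interruptions": 1, "sentiment_trend": +0.1, "score": 8},
]

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 1. Compare transcript signals for promoters vs. detractors.
def signal_gap(records, signal):
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    promoters = mean([r[signal] for r in records if r["score"] >= 9])
    detractors = mean([r[signal] for r in records if r["score"] <= 6])
    return promoters, detractors

for signal in ("interruptions", "sentiment_trend"):
    p, d = signal_gap(responses, signal)
    print(f"{signal}: promoters avg {p:.2f} vs detractors avg {d:.2f}")

# 2. Slice NPS by reason for contact (the same idea works for segment or channel).
by_reason = defaultdict(list)
for r in responses:
    by_reason[r["contact_reason"]].append(r["score"])
for reason, scores in by_reason.items():
    print(f"NPS for {reason}: {nps(scores):+.0f} (n={len(scores)})")

# 3. Act on individual low scores instead of only reporting them.
ALERT_THRESHOLD = 4

def notify_recovery_team(record):
    # Placeholder: in practice this might open a ticket or page a recovery queue.
    print(f"ALERT: follow up with {record['customer_id']} (score {record['score']})")

for r in responses:
    if r["score"] <= ALERT_THRESHOLD:
        notify_recovery_team(r)
```

The exact numbers matter less than the pattern: once score, context, and conversational signals live in the same record, segmentation and recovery workflows are one loop away.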


When to Use NPS — And When Not To

✅ Good Use Cases
  • Comparing loyalty across segments
  • Tracking trendlines across time
  • Benchmarking customer trust

🚫 Misleading Use Cases
  • Measuring agent performance
  • Reacting to individual NPS drops in a silo
  • Using it without qualitative follow-up

Alternatives & Complements

  • CSAT (Customer Satisfaction Score): Better for measuring resolution-level feedback.
  • Customer Effort Score (CES): Useful for measuring friction.
  • Resolution Confidence or Clarity: Better for real-time insight.
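If it helps to see how these differ mechanically from NPS, here is a hedged sketch using common scoring conventions (top-two-box CSAT on a 1–5 scale, mean CES on a 1–7 scale); your survey tooling may define them differently.

```python
def csat(scores_1_to_5):
    """Top-two-box CSAT: share of respondents answering 4 or 5 on a 1-5 scale.
    (One common convention; some teams report the mean instead.)"""
    return 100.0 * sum(1 for s in scores_1_to_5 if s >= 4) / len(scores_1_to_5)

def ces(scores_1_to_7):
    """Customer Effort Score as the mean of 1-7 responses; whether higher is
    better depends on how the effort question is worded."""
    return sum(scores_1_to_7) / len(scores_1_to_7)

print(round(csat([5, 4, 4, 3, 5, 2]), 1))  # 66.7 -> two thirds satisfied
print(ces([2, 3, 2, 5, 1]))                # 2.6
```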

NPS isn’t obsolete. But it’s not a standalone compass, either. It’s a headline metric — not the story. Vital teams treat it as the tip of an insight iceberg, not the whole picture.

