Where Net Promoter Score Goes Wrong

To help clients who were struggling to understand their Net Promoter Scores, a customer agency conducted a new study that compared the NPS ratings of 2,000 consumers with their answers to questions about whether they had, in fact, actively promoted brands or urged others to avoid them. The data showed that people’s behavior often didn’t line up with their NPS categorizations, suggesting that asking about actual advocacy, rather than inclination to recommend, may produce more useful data.

Since its introduction in Harvard Business Review 16 years ago, the Net Promoter Score, or NPS, has become a foundational business metric. Based on a single question (“On a scale of 0 to 10, how likely are you to recommend our company?”), it’s a simple way to get a quick read on consumer sentiment, and it has been widely embraced in the marketplace; many leading companies have embedded it into their operations.

Yet at my agency, C Space, which helps businesses build a customer focus into the way they work, we’ve come to believe that NPS offers mostly broad strokes, akin to a compass pointing companies in the right direction. This is not to say we think it’s irrelevant. A compass is still a helpful tool. But sometimes you need a more-detailed topographical map to navigate a rough or uncertain landscape. And sometimes the direction the compass indicates isn’t actually the best one to follow once you take the on-the-ground terrain into account.

Over the years, doubts about NPS have been raised in the business press and in academic studies. At the same time, we at C Space began to question whether it was really the gold standard. Increasingly, companies across industries approached us because they were struggling to understand their NPS, laboring to move it, or unable to align it with data on customer behavior and wondering why. Many had been experimenting with other metrics. Some had turned to strictly behavioral data; others had found that likelihood to purchase additional products was more closely tied to growth than NPS was. But for all of them, the promise of NPS remained elusive.

So we decided to launch our own study of consumer advocacy, taking a fresh look at NPS, free of any suppositions about customer behavior. In May 2018 we surveyed over 2,000 consumers across the United States and the United Kingdom. We asked them to rate up to three of these 10 large global brands: Twitter, Burger King, American Express, Gap, Uber, Netflix, Microsoft, Airbnb, Amazon, and Starbucks. Participants were allowed to rate only brands they used or bought. The result was more than 5,000 ratings, approximately 500 per brand.

Our survey first asked the standard NPS question about how likely people were to recommend each brand. But it also posed two more questions: “Have you recommended this brand?” and “Have you discouraged anyone from choosing this brand?” The goal was to capture not just what consumers expected to do in the future but what they’d actually done. Research suggests that intention is not a reliable metric, because intentions often stay just that. Consider that a 2007 study of 16,000 consumers showed that only about half the people who expressed an intention to recommend specific firms actually did so. To learn more about what was really driving advocacy, we also followed up each of those two questions by asking why survey takers had advised others to choose or avoid the brand and inviting them to share stories about those conversations.

Some academics have suggested that the NPS classification system, which groups consumers into three buckets, promoters (those giving scores of 9 or 10), passives (7 or 8), and detractors (0 to 6), is arbitrary and eliminates potentially useful information. And in both our client work and our study, we observed that the base presumptions of the NPS classifications didn’t align with customer data.

For instance, NPS presumes that someone can’t be both a promoter and a detractor. But humans are complex and often contradictory. Our minds are always operating on multiple planes—placing facts, feelings, and experiences within contextual and subjective frameworks. An oversimplified model can ignore this natural human state.

Indeed, our survey revealed that 52% of all people who actively discouraged others from using a brand had also actively recommended it. And across the NPS scale, we found consumers who had both actively promoted and actively criticized the same brand.

Intrigued? We were, and we wanted to know more. So we ran a second survey, in which we asked an additional 500 people to identify a single brand that they’d both actively recommended and actively urged people to avoid, and then asked them what their NPS rating for that brand was, why they had pushed others toward or away from it, and whether they would share stories about the context.

We discovered that when, where, why, and to whom consumers praised or criticized a brand was fluid and appeared independent of their NPS ratings. When giving advice on a brand, consumers, like all good matchmakers, consider whether a pairing is right. For example, one person in our study (an NPS passive) recommended Spotify to his friends for its ease of use and customizability but discouraged his parents from trying it because he felt it was too complicated and too expensive for them.

While enthusiasts are happy to extol the virtues of a brand they love, that doesn’t mean they like every product in the lineup. These people are very vocal if their favorite brands sell something they consider inferior. One NPS promoter in our study was, in her own words, “obsessed” with Mrs. Meyer’s products, but she despised one item in the line and made sure others knew it. “I first bought Mrs. Meyer’s Clean Day products about a year ago,” she told us. “I’ve talked about them so much that my sister considered getting me several for my birthday. Guess what, I had already ordered a six-month supply! I only like the soaps and all-purpose cleaner, though. I hate, absolutely hate, the Mrs. Meyer’s Clean Day window cleaner. Leaves streaks on my windows, which means double the work. I will NEVER purchase that product again and have let others know about the poor quality as well.”

Another consumer hated Walmart, giving the company a zero on the NPS scale. “The stores are filthy and overcrowded,” he noted. But, he continued, “I recommended Walmart to a friend because they had an inexpensive desk that would be perfect for the friend’s room.”

Understanding the nuances behind why consumers encourage or discourage purchases is vital for any company looking for growth. And none of this duality was captured by NPS.

We also found that the NPS scale offers only a rough guide to active consumer advocacy. According to their NPS ratings, 50% of customers in our first survey were promoters, but 69% of customers had actually recommended a brand. So the NPS categorization missed a big chunk of the actual promoters.

The biggest disjuncture, however, was with detractors—a group many companies obsess over. According to their NPS ratings, 16% of consumers in the survey were detractors, yet only 4% of all the respondents had actually told others to avoid a brand. In fact, we found that detractors were seven times more likely to have either recommended a brand or said nothing at all than to have disparaged it. Time and again, actual behavioral patterns didn’t align with expectations created by the NPS ratings.

Let’s look at Twitter. In our study it garnered an overall NPS (calculated by subtracting the percentage of detractors from the percentage of promoters) of 4. This closely matched other studies at the time that put Twitter’s overall NPS at 3.
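
For readers who want to see the mechanics, here is a minimal sketch of that calculation in Python. The bucket boundaries are the standard NPS ones described earlier; the sample ratings are hypothetical, constructed simply to yield a score of 4.

```python
# A minimal sketch of the standard NPS calculation, using the bucket
# definitions cited earlier (9-10 promoter, 7-8 passive, 0-6 detractor).
# The sample ratings are hypothetical, not our actual survey data.

def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10] * 37 + [8] * 30 + [4] * 33  # 37% promoters, 33% detractors
print(net_promoter_score(sample))  # prints 4
```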

But did Twitter’s NPS really reflect the advice consumers were giving others about the platform? The answer was no. Only 57% of people who’d actively recommended Twitter were promoters according to their NPS ratings. So the other 43% were mischaracterized by NPS as passives or detractors.

In contrast, the NPS detractor numbers for Twitter were bloated. In our study NPS ratings indicated that 33% of its consumers were detractors. But only 3% had actively discouraged others from using the platform. This means Twitter—as well as other companies—may be wasting lots of time, energy, and resources trying to understand people who aren’t truly criticizing them to others.

There’s an elegance to a single question: it’s easy to ask, and the responses are easy to benchmark and track. But that doesn’t mean it’s easy to understand NPS results, much less improve them.

We think that companies would do better with a framework that focuses on customers’ actual behavior. By taking the percentage of active advocates and subtracting the percentage of active discouragers, they can calculate something we call an earned advocacy score, an approach that we believe provides clearer, more-detailed, and more-actionable data.
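
As an illustration, here is a minimal sketch of that calculation, assuming each respondent answered the two behavioral questions from our survey; the data structure and sample responses are invented for the example.

```python
# A minimal sketch of the earned advocacy score. Each respondent is a pair
# of booleans answering "Have you recommended this brand?" and "Have you
# discouraged anyone from choosing this brand?" The sample data is hypothetical.

def earned_advocacy_score(responses):
    """responses: list of (recommended, discouraged) boolean pairs."""
    advocates = sum(1 for recommended, _ in responses if recommended)
    discouragers = sum(1 for _, discouraged in responses if discouraged)
    return round(100 * (advocates - discouragers) / len(responses))

# A respondent can fall into both groups -- the duality NPS misses.
sample = ([(True, False)] * 60 + [(True, True)] * 5
          + [(False, True)] * 2 + [(False, False)] * 33)
print(earned_advocacy_score(sample))  # 65% advocates - 7% discouragers = 58
```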

Let’s look at two brands whose traditional NPS results were roughly on par: American Express and Burger King. In our survey data 51% of American Express customers were promoters and 19% were detractors, giving the firm an overall NPS of 32. Forty-five percent of Burger King customers were promoters and 15% were detractors, giving the chain an NPS of 30.

However, when we dug down into consumers’ actual behavior, we found that the two brands didn’t look nearly as similar. Fifty-seven percent of American Express customers were active advocates and 5% were active discouragers. Seventy-nine percent of Burger King customers had actively recommended the chain, and only 2% had actively urged others to avoid it. Burger King’s earned advocacy score was 77 and American Express’s was 52.
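
The contrast is easy to verify from the figures above; a short snippet makes the side-by-side comparison explicit (the percentages are the ones quoted from our survey).

```python
# Side-by-side comparison using the survey percentages quoted above:
# (promoters, detractors, active advocates, active discouragers), in %.
brands = {
    "American Express": (51, 19, 57, 5),
    "Burger King": (45, 15, 79, 2),
}

for name, (prom, detr, adv, disc) in brands.items():
    print(f"{name}: NPS = {prom - detr}, earned advocacy score = {adv - disc}")

# American Express: NPS = 32, earned advocacy score = 52
# Burger King: NPS = 30, earned advocacy score = 77
```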

The companies’ performance differed significantly in two other areas. First, Burger King was much better at turning an inclination to recommend into an actual recommendation. It had been able to convert 94% of its NPS promoters into active advocates, while American Express had a 79% conversion rate. Overall, Burger King was also more talked about; 77% of its customers had recommended the brand, discouraged others from interacting with it, or done both, in contrast to 58% of Amex customers.

We believe earned advocacy scores are more representative of actual consumer behavior and, when paired with open-ended questions about why people recommended or disparaged a brand, provide crucial topographical insights to companies. Since last May we’ve used this approach in our client work, correlating data on actual recommendation and discouragement, and the underlying reasons for it, with company data on transactions. We found that the more granular view this information created gave our clients a contoured map, one that clearly outlined immediate actions that could boost their growth.

Christina Stahlkopf is the associate director of research and analytics at C Space, a customer agency.
