Why Customer Experience Measurement is Biased and How to Fix It

May 15, 2018

by Christophe Cais
Founder and CEO at CXG and Forbes Council member

This article was originally published on Customer Think.


My car recently came due for servicing, which ultimately delivered a rather interesting experience. I took the vehicle to the service center on the designated day, but several things went wrong, and I didn't get it back as planned. Needless to say, I was anything but happy. Why am I telling you this? Well, things got intriguing after I finally collected my vehicle. At that point, the service advisor informed me I would receive a survey and asked whether I could be so kind as to give a score of 9 or 10, since anything else would be detrimental to the team. What was I supposed to do? Report accurately on my experience knowing it would impact someone, provide the score requested, or simply ignore the survey? I went with the last option.

Whether my choice was right or wrong is beside the point. The important thing is that this episode made me realize something was terribly wrong with the way customer experience (CX) is measured.

When retail brands started measuring CX through email or SMS surveys, they expected several benefits. For one, they would constantly monitor the pulse of their CX and react quickly to solve customer problems. In addition, CX conversations would start to happen across the organization, and brands would gain access to a benchmark. Customers would also benefit, as they would be offered a new way to highlight issues or pass on compliments. And, to a certain extent, some of those benefits did materialize.

This was also the time when some software vendors were claiming that CX would improve if companies simply launched a CX measurement program (be it NPS or something else, as long as it used their software) that encompassed those metrics across the organization.

However, the reality was far more complicated, and brands that followed this advice inevitably suffered disappointment. Measuring something doesn't mean you fix it, and the measurement itself is often biased. Here are seven reasons why.

1 - In retail, the bulk of the feedback comes from clients who have bought a product or a service. It stands to reason that if you visit a store and end up buying something, you are pretty happy with your experience. Let’s also keep in mind that 90% of the customers entering a store leave empty-handed, so I would argue it is incredibly dangerous to take the feedback of the “happy” 10% and treat it as representative of the overall CX you deliver.

2 - Another issue is over-saturation with feedback requests: you cannot do anything nowadays without being asked for an evaluation or comments. As a result, the only people who end up providing feedback are either brand aficionados or clients who are really unhappy with their experience. The results are therefore extremely polarized and fail to pinpoint anything but the most critical issues.

3 - The story I shared at the beginning illustrates that front-line teams are not shy about asking customers for high scores. This situation is exacerbated when companies link bonuses to NPS scores.

4 - Feedback can also be problematic when clients know it will be shared. A number of studies have demonstrated that this knowledge immediately pushes customers to give much higher scores, resulting in an artificially inflated view of your CX.

5 - Immediacy has become the norm. Teams are often bombarded with feedback and expected to react on the spot instead of being allowed to step back, reflect, and devise a plan to address the root cause of the problem. As a result, teams grow increasingly disengaged and critical of the tools in use.

6 - It is also disheartening that the score has become the goal. Provided the NPS is high, no one seems to care about the actual CX, and it hardly matters how you get there as long as you do. My car service experience demonstrates the type of behavior this promotes.

7 - Last but not least, the human dimension is often underestimated, sometimes even completely ignored, when an IT solution is implemented.

What’s to be done?

Past and present transgressions aside, the fact remains that improving CX is more important than ever, and measuring it is a must. In order to succeed, brands need to realize that buying some software with all sorts of bells and whistles is not THE solution but only part of it.

To begin with, brands need to put in place not one but several methodologies to capture CX: a Voice of the Customer (VOC) survey underpinned by solid software is important, but far from enough. Regularly interviewing both buyers and non-buyers as they exit the store is incredibly powerful, as are several other methodologies.

Brands also need clarity on the mechanism that will help their teams leverage the data and transform it into action. This goes beyond calling back an unhappy customer: the aim must be to foster behaviors that exert a positive and memorable impact on the experience. This is precisely where many initiatives fall short: the data is available but not used to drive change in the organization. In this area, technology alone will not do the job.

Finally, when a Retail Excellence Program is launched, front-line teams need to be drawn into the conversation early on. At the end of the day, you need their engagement if they are to embrace the program you intend to roll out. Anything less risks seeing them make a travesty of your plans.