Our Fan Value Score metric was inspired by the Plus/Minus hockey statistic which measures the performance of a team broken down by individual player contribution.
Here's the story...
Years ago, we spent a lot of time consulting for consumer packaged goods companies on their product lines. A couple of the research techniques we used were TURF and Shapley value analysis. The goal of these techniques is to figure out which combination of products in a line (like flavors for an ice cream company, for instance) would attract the most customers.
So one evening after work back in the day, over a few adult beverages, and while a hockey game was playing on a nearby TV, a few of us got into a discussion about how these research techniques might be applied in another area of our business: guest and customer satisfaction measurement. We arrived at the idea that the customers of a business, or the guests of a hotel or restaurant, could be looked at the way we looked at the flavors of an ice cream company, OR the way each hockey player is looked at as contributing to their team through a statistic called their plus/minus number. For example, a player with a plus-2 rating means that while he (or she) is on the ice, his or her team has scored 2 more goals than the opposing team. The plus/minus calculation is looked at both on an individual game basis and in aggregate over the course of a season.
And so, on that night, our Fan Value Score was born. The idea being that based on how a guest or customer viewed their experience at, say, a hotel, we could translate that into a score that would help determine how that experience (and consequently the guest) could impact the hotel's success going forward.
From the hockey plus/minus stat, the simple polarity of a positive or negative score appealed to our sensibility and so we did a lot of math with all of the customer and guest satisfaction data we had collected over the years and came up with the algorithm. It's pretty simple: each guest's viewpoint of their experience earns a score from +2 (best) to -2 (worst).
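The actual Fan Value Score algorithm isn't spelled out here, but the published range of +2 (best) to -2 (worst) suggests a simple mapping from a guest's expressed experience to a signed score. Below is a minimal, purely illustrative sketch, assuming a 5-point satisfaction response mapped linearly onto that range; the real scoring rules are surely more nuanced.

```python
def fan_value_score(satisfaction: int) -> int:
    """Map a 1-5 satisfaction rating onto the -2..+2 Fan Value Score range.

    This linear mapping is an assumption for illustration only; the
    production algorithm is not disclosed in the post.
    """
    if not 1 <= satisfaction <= 5:
        raise ValueError("satisfaction must be between 1 and 5")
    return satisfaction - 3  # 1 -> -2 (worst), 3 -> 0 (neutral), 5 -> +2 (best)

print(fan_value_score(5))  # prints 2
print(fan_value_score(1))  # prints -2
```

The appeal of the hockey stat carries through: the sign alone tells you whether a guest is a likely asset or a likely liability going forward.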
So what specific future impacts on the hotel or business are we looking at?
Well, on the positive side, the guest who expressed that they had a positive experience might return over and over, as well as tell their friends and relatives, write great reviews online, and post nice things on their social media accounts.
And on the negative side, a guest who had a negative experience wouldn't likely come back, and might also tell their friends and relatives, and possibly post not-so-nice things in reviews and on social media.
2 useful ways for businesses to look at their Fan Value Scores
On a macro level, businesses can look at Fan Value Scores over time periods. If, say, the average Fan Value Score is trending down, or down in a specific time period, or repeatedly on a certain day of the week, it could be highlighting personnel or other operational issues.
On a micro level, the Fan Value Score assigned to each guest is, in essence, a predictive loyalty segmentation. As such, our clients can make marketing offers to guests based on their Score. For example, aggressive offers can be made to neutral FVS guests to try to entice a return visit, and attractive deals may be offered to high FVS guests to cultivate brand allegiance.
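The two views above can be sketched in a few lines of code. This is an illustrative example only: the record layout, the day-of-week grouping, and the segment labels and thresholds are all assumptions, not the platform's actual implementation.

```python
from collections import defaultdict
from statistics import mean

# Each record: (day_of_week, fan_value_score), scores in -2..+2
responses = [
    ("Mon", 2), ("Mon", 1), ("Tue", -1),
    ("Tue", -2), ("Wed", 0), ("Wed", 2),
]

# Macro view: average Fan Value Score by day of week,
# useful for spotting a recurring operational issue on a given day.
by_day = defaultdict(list)
for day, score in responses:
    by_day[day].append(score)
daily_avg = {day: mean(scores) for day, scores in by_day.items()}

# Micro view: segment each guest by score to drive targeted offers.
# Labels and cutoffs here are hypothetical.
def segment(score: int) -> str:
    if score >= 1:
        return "fan"       # attractive deals to cultivate allegiance
    if score == 0:
        return "neutral"   # aggressive offers to entice a return visit
    return "at-risk"       # candidate for service recovery

print(daily_avg)
print({score: segment(score) for score in range(-2, 3)})
```

In this toy data, Tuesday's average of -1.5 would stand out against Monday's 1.5, which is exactly the kind of day-of-week pattern the macro view is meant to surface.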
In this episode of the podcast, we take a look at the number of marketing emails being sent around Black Friday and Cyber Monday, as well as discuss some email communication best practices.
#feedback #emailmarketing #blackfriday #cybermonday #cyberweek #givingtuesday #smallbusinesssaturday #bestpractices #followthrough #transactionalemail
If you could only ask your customers/guests 1 question, what would it be?
While we'd all love to sit down with all of our customers or guests and have a long chat about what they think of the experience we provide, it just can't happen. Because we all have so much to do and are often pulled in many directions, time is our most valuable resource. So...in order to get feedback from as many people as possible, perhaps we should limit what we ask to just one question. But what would it be? While NPS has been asking one question for a very long time, is it the right one?
Watch or listen to this episode of our podcast to see how we weigh in on this.
#feedback #cx #guestexperience #customerexperience
A discussion about guest/customer expectations and the 1 word you don't want to see in your online reviews.
A quick look at the lighter side of online reviews; bad coffee, Schrute Farms from The Office, and the best restaurant in London...that didn't exist.
A discussion about the rating scales used in the big online review platforms.
Continuing the discussion from the last episode (Feedback Matters Podcast 21-08-01), we dive into a deeper discussion about online reviews, including the positivity problem; because so many reviews skew to the positive, it's difficult to understand what's really going on with a business, product, or service.
A recent story and Tweet by New Jersey News Channel 12 about a business owner who fires back at people leaving him negative online reviews got us talking about how it's still the wild west out there in the world of online reviews, a topic we first addressed 15 years ago (https://www.databasesciences.com/blog...).
The election reminded me of something I learned long ago.
Is there anything we can take away from the 2016 election process that can be instructive to our businesses? I believe so. Don’t worry, this post is completely non-political so feel free to read on without having to be concerned about whether you’ll want to hug me or hit me.
Early in my career, I moderated a focus group for a consumer packaged goods company that didn't turn out as the client had planned. The subject of the research was a new product that the company was introducing, and the goal of the session was to validate the likely success of the product with the target audience. The participants in the group had been recruited based on very specific demographic and product usage criteria; specifications that had been set by the client. After about 30 minutes, it was clear that these people really did not like the new product. Out of the 12 people in the room, only 1 had had anything positive to say up to that point. Just then, there was a knock on the door and the receptionist came in and handed me a note asking me to excuse myself and step outside for a moment, which I did.
Waiting for me outside was the lead client; the man who had commissioned the research and who was watching with a sizable group of colleagues from behind the two-way mirror. He dressed me down, asking where we found this group of "morons," and told me that I needed to make sure that the one dissenter who liked the product got more talk time for the balance of the session. I returned to the group room and did my best to try to make the client happy, yet also salvage some semblance of the principles of good research practice. It was not a shining moment in my career, but one that stuck with me as important and instructive.
Why tell that story here? The client from that focus group wasn’t interested in research; he only wanted validation of decisions that had already been made. He only wanted to hear what he wanted to hear. In the election, as we all know, there was a big divide in how the country felt about the 2 major party candidates. It was an ugly, super-negative campaign season, with a lot of bad feelings, news stories, and disinformation to go around. However, because of the way our society now consumes news, many voters were not exposed to fair and balanced news reporting; they heard what they wanted to hear.
The concept of a news source or outlet in 2016 is vastly different than it was just a few years ago. Not only have many mainstream/traditional media sources changed to reflect more editorial bias, but there now exists a plethora of digital media outlets, some serving up fake and/or misleading news, and most from polarized viewpoints masked by thinly veiled news wrappers.
The proliferation of these sites has occurred because of the incredible reach of social media and the mechanisms social media outlets employ to display content to users. According to a study earlier this year by the Pew Research Center, 62% of US adults consume news on social media. The largest social media outlet is Facebook, which is used by 67% of American adults daily, of whom 44% use it to get news. Think about this: more adults in the US use Facebook every day than voted in the election. Facebook's reach is staggering; it is more pervasive than any other media outlet in history, by far. But here's where it gets tricky with regard to news: each user's newsfeed is filled algorithmically with content based on their Facebook and other web-usage habits; posts they've liked and/or shared, articles they've read, web sites they've visited, things they've shopped for, ads they've clicked, etc. Certainly one of the goals of this practice is to sell users more products and services that they may be interested in, but users are also seeing a preponderance of information, news, and fake news that is similar (both in content and in source) to what they've already seen and that the algorithm has determined they may be interested in. The result is that users see and hear more of what they've demonstrated they want to see and hear. Clearly, this had an impact on the election, as a significant portion of the electorate was informed through a biased lens, knowingly or not. Measuring that impact is extremely difficult and beyond the scope of this post; Google "social media impact on election" if you'd like to go down that rabbit hole.
Ok, so where's the business lesson in this? Well, go back to the focus group story. The end of that story is that the product was introduced…and failed spectacularly. Had the client been fully invested in doing true research, and listened to everyone in that room, the outcome could have been different.
My point is that it’s useful and important to hear what you may not want to hear in order to be best informed. For any business, managers should understand the perspective of all of its customers, whether that view is favorable or not. I continue to be surprised by business operators who don’t continually seek the customer viewpoint, instead either relying on sporadic anecdotal feedback, testimonials, or taking a look at the deeply flawed sampling of online review sites and applying their own calculus to filter who’s worth listening to. In essence, these managers are purposefully hearing only what they want to hear.
And so the business takeaway, for me, from the election is that we should proactively gather information across a spectrum of sources in order to make better-informed decisions. All businesses should be asking every customer to provide their feedback, respectfully and concisely. Being offered the opportunity to share feedback is an expected part of the customer experience in 2016. There are now tools available to make it simple and affordable for every sized business to seamlessly and effortlessly make collecting, tracking, and deriving actionable data from customer feedback part of the daily routine.
As always, please feel free to contact me to share your opinion on this or to discuss further. Happy Thanksgiving!
As data nerds for the last 30+ years, we get all worked up about all kinds of survey and polling stories that get released in the media. Whenever we come across interesting or suspect methodology, we generally refer to it as a #surveyscience issue and sometimes note it in a tweet through our @listenkeenly account or on LinkedIn.
Clearly, the focus of many of these stories right now is the presidential election. Every day we get to look at new polling data, generated from a variety of sources and with a surprisingly wide array of results. What does it all mean? Which polls do the best job of describing current voter sentiment? How might they predict, or even impact the result of the election?
One of our favorite sites that weighs in on all of the data is FiveThirtyEight.com; their election coverage and poll tracking may be found here. And, if you're interested in learning a bit about what goes into designing a poll and the consequences of design choices, take a look at this article from Wednesday's New York Times.
In a sense, the problem they describe is similar to what often occurs on public review sites, sort of. On a certain level, the idea that an outlier respondent is projected to skew the representation of a subsegment of a population (or customer base) is an everyday occurrence for many businesses in online reviews, especially those that have diverse customer bases and a limited number of reviews. Of course, the polling problem described in the Times was designed into the study, whereas "the review problem" is inherent to the online review process, and should be mitigated with a proper feedback strategy.
The blog of Database Sciences and its CX platform, GuestInsight