DATABASE SCIENCES - SURVEY SCIENCE

Feedback Matters

New podcast episode - disappointment

10/15/2021

A discussion about guest/customer expectations and the 1 word you don't want to see in your online reviews.

New podcast episode - promoting terrible coffee & fictional venues

9/30/2021

A quick look at the lighter side of online reviews: bad coffee, Schrute Farms from The Office, and the best restaurant in London...that didn't exist.

New podcast episode - the fault in the stars

9/10/2021

A discussion about the rating scales used in the big online review platforms.

Continuing down the rabbit hole of online reviews - How can everything be so fantastic?

8/31/2021

Continuing from the last episode (Feedback Matters Podcast 21-08-01), we dive deeper into online reviews, including the positivity problem: because so many reviews skew positive, it's difficult to understand what's really going on with a business, product, or service.
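To make the positivity problem concrete, here's a small Python sketch (the numbers and review behavior are invented for illustration, not drawn from any real platform): when most ratings cluster at 4 and 5 stars and unhappy customers often don't review at all, two businesses with very different rates of bad experiences can end up with similar-looking average scores.

```python
import random

random.seed(1)

def simulate_ratings(n_visits, bad_rate):
    """Simulate star ratings where a fraction `bad_rate` of visits go badly.

    Happy customers rate 4 or 5; only some unhappy customers bother to review,
    and when they do, the score is 1-3. All assumptions are illustrative.
    """
    ratings = []
    for _ in range(n_visits):
        if random.random() < bad_rate:
            if random.random() < 0.4:  # most unhappy guests stay silent
                ratings.append(random.choice([1, 2, 3]))
        else:
            ratings.append(random.choice([4, 5, 5]))
    return ratings

for label, bad_rate in [("Business A (5% bad visits)", 0.05),
                        ("Business B (25% bad visits)", 0.25)]:
    r = simulate_ratings(1000, bad_rate)
    print(f"{label}: average rating {sum(r) / len(r):.2f} across {len(r)} reviews")
```

Both averages land comfortably above 4 stars, even though one business disappoints five times as many guests as the other.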

It's still the wild west out there in review land

8/19/2021

A recent story and Tweet by New Jersey News Channel 12 about a business owner who fires back at people leaving him negative online reviews got us talking about how it's still the wild west out there in the world of online reviews - a topic we first addressed 15 years ago (https://www.databasesciences.com/blog...).

Tell Me What I Want To Hear 

11/17/2016


The election reminded me of something I learned long ago.

Is there anything we can take away from the 2016 election process that can be instructive to our businesses? I believe so. Don’t worry, this post is completely non-political so feel free to read on without having to be concerned about whether you’ll want to hug me or hit me.

Early in my career, I moderated a focus group for a consumer packaged goods company that didn’t turn out as the client had planned. The subject of the research was a new product that the company was introducing and the goal of the session was to validate the likely success of the product with the target audience. The participants in the group had been recruited based on very specific demographic and product usage criteria; specifications that had been set by the client. After about 30 minutes, it was clear that these people really did not like the new product. Out of the 12 people in the room, only 1 had had anything positive to say up to that point. Just then, there was a knock on the door and the receptionist came in and handed me a note- it asked me to excuse myself and step outside for a moment, which I did.

Waiting for me outside was the lead client; the man who had commissioned the research and who was watching with a sizable group of colleagues from behind the two-way mirror. He dressed me down, asking where we found this group of “morons”, and told me that I needed to make sure that the one dissenter who liked the product got more talk time for the balance of the session. I returned to the group room and did my best to try to make the client happy, yet also salvage some semblance of the principles of good research practice. It was not a shining moment in my career, but one that stuck with me as important and instructive.

Why tell that story here? The client from that focus group wasn’t interested in research; he only wanted validation of decisions that had already been made. He only wanted to hear what he wanted to hear. In the election, as we all know, there was a big divide in how the country felt about the 2 major party candidates. It was an ugly, super-negative campaign season, with a lot of bad feelings, news stories, and disinformation to go around. However, because of the way our society now consumes news, many voters were not exposed to fair and balanced news reporting; they heard what they wanted to hear.

The concept of a news source or outlet in 2016 is vastly different than it was just a few years ago. Not only have many mainstream/traditional media sources changed to reflect more editorial bias, but there now exists a plethora of digital media outlets, some serving up fake and/or misleading news, and most from polarized viewpoints masked by thinly veiled news wrappers. 

The proliferation of these sites has occurred because of the incredible reach of social media and the mechanisms social media outlets employ to display content to users. According to a study earlier this year by the Pew Research Center, 62% of US adults consume news on social media. The largest social media outlet is Facebook, which is used by 67% of American adults daily, of whom 44% use it to get news. Think about this- more adults in the US use Facebook every day than voted in the election. Facebook’s reach is staggering; it is more pervasive than any other media outlet in history, by far.

But here’s where it gets tricky with regard to news: each user’s newsfeed is filled algorithmically with content based on their Facebook and other web-usage habits- posts they’ve liked and/or shared, articles they’ve read, web sites they’ve visited, things they’ve shopped for, ads they’ve clicked, etc. Certainly one of the goals of this practice is to sell users more products and services that they may be interested in, but users are also seeing a preponderance of information, news, and fake news that is similar (both in content and from sources) to that which they’ve already seen and the algorithm has determined they may be interested in. The result is that users see and hear more of what they’ve already demonstrated they want to see and hear. Clearly, this had an impact on the election, as a significant portion of the electorate was informed through a biased lens, knowingly or not. Measuring that impact is extremely difficult and beyond the scope of this post- Google “social media impact on election” if you’d like to go down that rabbit hole.
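As a rough, purely illustrative sketch of that reinforcement loop (a toy model, not a description of how Facebook's actual ranking works), a feed that scores candidate stories by how often a user has previously engaged with their topic converges on a narrow slice of content after just a few rounds:

```python
from collections import Counter

# Toy candidate pool: each story is reduced to a single topic tag.
candidates = ["politics-left", "politics-right", "sports", "recipes", "local-news"] * 20

def rank_feed(stories, engagement_history):
    """Rank stories by how often the user has engaged with that topic before."""
    counts = Counter(engagement_history)
    return sorted(stories, key=lambda topic: counts[topic], reverse=True)

history = ["politics-left"]          # a single initial click
for round_number in range(1, 4):
    shown = rank_feed(candidates, history)[:10]   # top ten stories in the feed
    history.extend(shown)                         # the user "engages" with what is shown
    print(f"round {round_number}: {dict(Counter(shown))}")
# After the first round, the feed is dominated by the topic of that first click.
```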

Ok, so where’s the business lesson in this? Well, go back to the focus group story. The end of that story is that the product was introduced…and failed spectacularly. Had the client been fully invested in doing true research, and had he listened to everyone in that room, the outcome could have been different.

My point is that it’s useful and important to hear what you may not want to hear in order to be best informed. For any business, managers should understand the perspectives of all of their customers, whether those views are favorable or not. I continue to be surprised by business operators who don’t continually seek the customer viewpoint, relying instead on sporadic anecdotal feedback, on testimonials, or on the deeply flawed sampling of online review sites, applying their own calculus to filter who’s worth listening to. In essence, these managers are purposefully hearing only what they want to hear.

And so the business takeaway, for me, from the election is that we should proactively gather information across a spectrum of sources in order to make better-informed decisions. All businesses should be asking every customer to provide their feedback, respectfully and concisely. Being offered the opportunity to share feedback is an expected part of the customer experience in 2016. There are now tools available that make it simple and affordable for businesses of every size to seamlessly make collecting, tracking, and deriving actionable data from customer feedback part of the daily routine.

As always, please feel free to contact me to share your opinion on this or to discuss further. Happy Thanksgiving!

JR

Survey Science Matters

10/14/2016

As data nerds for the last 30+ years, we get worked up about all kinds of survey and polling stories that get released in the media. Whenever we come across interesting or suspect methodology, we generally refer to it as a #surveyscience issue and sometimes note it in a tweet through our @listenkeenly account or on LinkedIn.

Clearly, the focus of many of these stories right now is the presidential election. Every day we get to look at new polling data, generated from a variety of sources and with a surprisingly wide array of results. What does it all mean? Which polls do the best job of describing current voter sentiment? How might they predict, or even impact the result of the election?
One of our favorite sites that weighs in on all of the data is FiveThirtyEight.com; their election coverage and poll tracking may be found here. And, if you're interested in learning a bit about what goes into designing a poll and the consequences of design choices, take a look at this article from Wednesday's New York Times.

In a sense, the problem they describe is similar to what often occurs on public review sites. An outlier respondent skewing how a subsegment of a population (or customer base) is represented is an everyday occurrence for many businesses in online reviews, especially those with diverse customer bases and a limited number of reviews. Of course, the polling problem described in the Times was designed into the study, whereas "the review problem" is inherent to the online review process - and should be mitigated with a proper feedback strategy.
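A quick back-of-the-envelope illustration in Python (the numbers are hypothetical) of why a single outlier matters so much more when the review count is small:

```python
def average_with_outlier(n_typical, typical_score, outlier_score):
    """Average rating for n_typical identical 'typical' reviews plus one outlier."""
    scores = [typical_score] * n_typical + [outlier_score]
    return sum(scores) / len(scores)

# One 1-star outlier among a handful of 5-star reviews drags the average hard;
# the same outlier among hundreds of reviews barely registers.
for n in (5, 20, 200):
    avg = average_with_outlier(n, 5.0, 1.0)
    print(f"{n} five-star reviews + one 1-star review -> average {avg:.2f}")
```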

A Closer Look At Guest Expectations

6/28/2016

Recently, one of our hotel clients' guests posted the following comment as part of their guest satisfaction survey:

"We come yearly and I have never been disappointed."
In response to the survey question asking whether their experience exceeded, met, or did not meet expectations, the guest ticked "met". Finally, our scoring algorithm computed a 94 (out of a maximum of 100) for this guest's overall evaluation of their stay.

We bring this up because it is a real-world illustration of a position we put out a few years ago on the idea of expectations. We recently republished the complete text on our blog, but the main idea is that managing expectations can be complicated. At first blush, one might think that if you are not consistently exceeding expectations you are doing something wrong. Think about this, though: if your customers expect you to always exceed their expectations, how can you? At the end of the day, our position is that meeting expectations is ok (and our scoring algorithm accounts for this), but understanding what drives those expectations is more important.
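We don't publish the details of our scoring algorithm, but as a rough, hypothetical sketch of the general idea (the point values, weights, and blending below are invented purely for illustration and are not the actual GuestInsight model), an overall score can treat "met expectations" as a solid result rather than a shortfall:

```python
# Hypothetical illustration only: these point values and weights are made up.
EXPECTATION_POINTS = {"exceeded": 100, "met": 90, "did not meet": 40}

def overall_score(expectation, item_ratings, expectation_weight=0.4):
    """Blend the expectation answer with averaged 1-5 item ratings, both on a 0-100 scale."""
    items_score = (sum(item_ratings) / len(item_ratings) - 1) / 4 * 100
    exp_score = EXPECTATION_POINTS[expectation]
    return round(expectation_weight * exp_score + (1 - expectation_weight) * items_score)

# A guest who ticked "met" but rated individual items highly still lands in the low 90s.
print(overall_score("met", [5, 5, 5, 4, 5]))  # -> 93 with these made-up weights
```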

Guest Expectation - Tricky Business

4/5/2016

In our guest satisfaction surveying model, the first question we ask guests is how their stay met with their expectations. There are 3 response choices: exceeded, met, or did not meet.

So let's flip things around for a second: what's reasonable for you, the hotelier, to expect in terms of responses to this question?  

The knee-jerk response to that question is that you'd like to exceed the expectations of every guest. On a theoretical level I suppose that's true, but practically speaking I'll make the case below as to why you really don't want that result. To begin, let's lump your guests into 2 categories: those who have stayed in your hotel before and those who have not.

For those guests who have previously stayed with you, expectations are pretty well set.  It's a tall order to significantly exceed expectations for this group.  Anytime you can exceed the expectations of a return guest you really have things clicking.

For your first time guests, the expectation equation gets pretty complicated.  How are those guests' expectations set?  There are many factors at play, including how they came to choose your hotel, recommendations/opinions of friends/relatives, marketing materials and messages, guidebooks, online reviews, etc.  In addition, there is a value component-- that is, expectation fluctuates with the price paid for a hotel room.

So, let's say you are able to exceed the expectations of every first time guest. Is this a good thing? I would argue that it is not. Why? If you're exceeding everyone's expectations, then it's clear that those expectations are unrealistically low. You would then be faced with the tough task of determining why, and trying to change all of the variables in the equation mentioned above. In addition, the implications of too-low guest expectations extend to missed opportunity - that is, travelers who choose another hotel because they are looking for something "more" during the decision-making process.

Getting back to the expectation question in your guest survey, what results should you be looking for? Clearly, the first objective is to have the percentage of guests who answer "not met" as low as possible. This is a key metric to monitor over time.  Among the remaining guests, the split of "exceeded" and "met" is dependent on the percentage of your business that is made up of repeat guests.  Again, the more repeat guests you have, the more difficult it becomes to exceed expectations.  
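Here is one way you might track that split in Python, separating the expectation question by repeat versus first-time guests (the survey records below are invented for illustration):

```python
from collections import Counter

# Invented sample of survey responses: (expectation answer, is_repeat_guest)
responses = [
    ("met", True), ("exceeded", False), ("met", True), ("did not meet", False),
    ("met", False), ("exceeded", False), ("met", True), ("met", True),
    ("exceeded", True), ("met", False), ("did not meet", False), ("met", False),
]

def expectation_mix(records):
    """Percentage of each expectation answer within a set of responses."""
    counts = Counter(answer for answer, _ in records)
    total = sum(counts.values())
    return {answer: 100 * counts[answer] / total
            for answer in ("exceeded", "met", "did not meet")}

for label, segment in [("repeat guests", [r for r in responses if r[1]]),
                       ("first-time guests", [r for r in responses if not r[1]])]:
    mix = expectation_mix(segment)
    print(label, {answer: f"{pct:.0f}%" for answer, pct in mix.items()})
```

With real data, the "did not meet" percentage is the number to watch over time, and the exceeded/met split should always be read against the share of repeat guests in the mix.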

A Takeaway From The Iowa Caucuses & How It Relates To Your Business

2/4/2016

We tweeted this on Tuesday morning: "Lesson from the #IowaCaucus: Political polling is hard. Despite advances in technology, accurate sampling remains the challenge." We were referring to the Republican race, where the ideas of actual results vs. expectations have become media spin topics in the wake of pollsters missing the boat.

Early in my career, I was involved in some political polling. It's hard. It's harder than most other types of opinion surveying. The biggest problem lies with the keystone tenet of inferential statistics (i.e., trying to infer from a sample what a population thinks): in an election, we don't know what the population will be (actual voter turnout and the mix of views among those who vote), so sampling that population accurately is virtually impossible. The best that can be done is to look at a range of polls that model likely voters in different ways. Advances in technology allow us to more easily analyze a variety of models and scenarios. One of my favorite sites that does this is FiveThirtyEight, which looks at a wide range of topics, including politics, through a data lens.
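To see why the turnout assumption dominates, here's a toy Python sketch (every figure is invented): the same underlying electorate yields different projected support depending on which likely-voter model you assume.

```python
# Toy electorate: (share of adults, support for Candidate A, turnout under model 1, under model 2).
# All figures are invented for illustration.
electorate = {
    "young":  (0.30, 0.60, 0.35, 0.65),
    "middle": (0.40, 0.50, 0.60, 0.60),
    "older":  (0.30, 0.40, 0.75, 0.60),
}

def projected_support(turnout_index):
    """Candidate A's share among projected voters under one turnout model."""
    votes_a = votes_total = 0.0
    for share, support_a, *turnouts in electorate.values():
        turnout = turnouts[turnout_index]
        votes_total += share * turnout
        votes_a += share * turnout * support_a
    return 100 * votes_a / votes_total

for index, name in enumerate(["low youth turnout", "high youth turnout"]):
    print(f"Model assuming {name}: Candidate A at {projected_support(index):.1f}%")
```

A couple of points of swing from the turnout model alone is the difference between leading and trailing, which is why looking at a range of polls and scenarios is the sensible approach.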

So the title of this post teases about how what happened in Iowa relates to your business - what's the story with that? Simply this: look at the online reviews of your property(ies). Are all the reviewers a representative sample of the complete population of your guest base? Will a potential new guest properly infer what their experience might be like based on those reviews? That's the point of our services: ask all guests to give feedback, thus providing a more accurate sampling of that population, both for your internal use and to push more reviews to the sites where potential new customers are researching new places to go.

JR


    The blog of Database Sciences and its GuestInsight & ListenKeenly services.


    News and commentary relating to opinion & market research, feedback, analytics and data by Jeff Robbins, Founder and Managing Director

    Jeff's Bio


1999-2022  All Rights Reserved - Database Sciences Inc.