As data nerds for the last 30+ years, we get all worked up about all kinds of survey and polling stories that get released in the media. Whenever we come across interesting or suspect methodology, we generally refer to it as a #surveyscience issue and sometimes note it in a tweet through our @listenkeenly account or on LinkedIn.
Clearly, the focus of many of these stories right now is the presidential election. Every day we get to look at new polling data, generated from a variety of sources and with a surprisingly wide array of results. What does it all mean? Which polls do the best job of describing current voter sentiment? How might they predict, or even impact, the result of the election?
One of our favorite sites that weighs in on all of the data is FiveThirtyEight.com; their election coverage and poll tracking may be found here. And, if you're interested in learning a bit about what goes into designing a poll and the consequences of design choices, take a look at this article from Wednesday's New York Times.
In a sense, the problem they describe is similar to what often occurs on public review sites. The idea that a single outlier respondent can skew the representation of a subsegment of a population (or customer base) is an everyday occurrence for many businesses in online reviews, especially those with diverse customer bases and a limited number of reviews. Of course, the polling problem described in the Times was designed into the study, whereas "the review problem" is inherent to the online review process, and should be mitigated with a proper feedback strategy.
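To see how a single outlier can distort a small subgroup, here is a minimal sketch with entirely made-up numbers: when a demographic cell contains only one respondent, post-stratification weighting can give that one voice an outsized share of the subgroup estimate.

```python
# Hypothetical illustration: one heavily weighted respondent skewing a
# subgroup estimate. All numbers are invented for this sketch.

def weighted_mean(values, weights):
    """Weighted average of responses (e.g., 1 = supports, 0 = does not)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Ten respondents in a small subgroup: nine answer 0, one outlier answers 1.
responses = [0] * 9 + [1]

# Unweighted, the outlier is just 1 voice in 10.
equal_weights = [1] * 10
print(weighted_mean(responses, equal_weights))   # 0.1 (10% support)

# If weighting to demographic targets gives the outlier 30x the weight,
# the subgroup estimate jumps dramatically.
skewed_weights = [1] * 9 + [30]
print(weighted_mean(responses, skewed_weights))  # ~0.77 (77% support)
```

The same arithmetic applies to online reviews: one extreme rating in a thinly reviewed segment can dominate the apparent sentiment of that whole segment.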
The blog of Database Sciences and its CX platform, GuestInsight