We often ask people to answer questions in the form of a number.
“Please answer from one to ten, ten being the best.” Anything from how much it hurts, to whether your customer service rep was polite, to your patient’s triage ranking, to an employee’s performance, to the quality of a proposal. And given a number, the presumption is that we’re well on the path to objectivity and analytics outcomes. After all, numbers are quantitative. They can be compared, added, sorted, and averaged. Besides, users are more likely to answer with a number than to craft a scintillating essay on the topic of their pain, or someone’s politeness or performance. Who wants to read that stuff anyway? If the rating was above average, that’s good enough.
We want to treat these numbers exactly, but we also know that the numbers aren’t really all that exact. I’ve had the opportunity to study numerical survey results analytically, and the outcomes closely mirror our empirical and anecdotal experiences. Given a lot of questions, people often don’t really read them, and rank everything about the same – good or bad. Some people don’t give ones or tens – on principle. People can respond to something other than the immediate question – such as how they feel about the survey itself. People often don’t give the same rating to the same question minutes later. Certain questions have average ranks well above “five,” when they shouldn’t. Intelligent respondents can “game” their responses in anticipation of a likely interpretation. The responses people give depend both on whether the question is early or late in a survey, and on which other questions are close by. And there is the well-known adage that the response depends on how the question was asked.
While that might confirm that crafting a good survey is not easy, my immediate point is that having a number and the ability to manipulate it does not guarantee that the numbers are meaningful, or that analytics based on those numbers is valid. The context that produces our data can be dominant – and surveys are essentially little social experiments, with a context separate from the numbers we are evaluating.
Statistical assessment of the responses could be suggested as a remedy. This can deliver part of the answer, but a statistical analysis of one question can ignore the possibility that the responses are biased, as well as the relationship of the responses to other nearby questions (a problem I observed in my work). A valid analysis might depend on a randomized set of surveys that aren’t available.
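One way to see why a per-question statistical summary can mislead: if some respondents clip the extremes “on principle,” the observed mean is biased even though each individual response looks perfectly plausible. Here is a toy simulation of that single behavior – all distribution parameters and function names are invented for illustration, not drawn from any real survey:

```python
import random

random.seed(42)

def true_opinions(n, mean=8.5, spread=1.5):
    """Simulate honest 1-10 opinions clustered near the top of the scale."""
    return [max(1, min(10, round(random.gauss(mean, spread)))) for _ in range(n)]

def reported(opinion):
    """A respondent who never gives ones or tens clips the extremes to 2..9."""
    return max(2, min(9, opinion))

honest = true_opinions(1000)
observed = [reported(o) for o in honest]

def avg(xs):
    return sum(xs) / len(xs)

# The observed average is pulled down, yet nothing in the observed
# distribution alone flags that the tens were silently converted to nines.
print(f"honest mean:   {avg(honest):.2f}")
print(f"observed mean: {avg(observed):.2f}")
```

Summarizing only the observed column – mean, variance, even a confidence interval – would be internally consistent and still systematically wrong about the underlying opinion, which is the kind of bias a single-question analysis cannot detect on its own.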
A numerical measure doesn’t guarantee we’ve captured the context of the measure, and that means even if our numbers respond to analysis we might obtain results that are suspect. We commonly hear that analytics can resolve questions involving complex entities – like people – and I don’t doubt that there is a role for analytics work. That said, a key component of the analytics process is getting to a valid number in the first place – a challenge for surveys, social and political questions, medicine, human resources, and veterinary practice, among others. Understandably, our subject expert friends aren’t always aware of these concerns. To me, it makes a great deal of sense for us to help people understand the context of their numerical measures, and when that context might limit outcomes, rather than focus on processing those measures with statistical and analytics algorithms.