Questions are certainly the means to an answer, but they can also be revealing in their own right.
Large data systems often provide ways to investigate the queries being run against their data. That information can tell us which data a user community finds most important, as well as whether we might be too focused on a particular kind of information. We might be asking which sales groups are selling the most and who the best salespeople are, without exploring related questions: which customers are now buying our products, and what types of products they are buying. An underlying “why” can easily be missed when we don’t use all the available information. Reviewing the questions we ask of our information systems is one tool for assessing whether we’re making optimal use of the available information.
Likewise, examining survey questions can validate whether our survey approach is adequate or whether limits, context, and bias are skewing the answers we receive, or our interpretation of those answers.
I was thinking again about the “asking side” of questions when I saw a recent Pew Research Center poll, which shows that about half of Democrats are “afraid” of the Republican party, and vice versa. The Pew Research Center poses many interesting social policy questions, and polls people for the answers. The “afraid” poll is a true conversation starter, but as I read the report I also began to wonder “What conversation should we now be having?”
For if we step back for a moment, it’s remarkable that people understood and responded to a question about negative emotions generated by an opposing political party. (The options, of which respondents could choose more than one, were “angry,” “frustrated,” and “afraid”). Why? Because political parties are an aggregation of people with different jobs, religions, policies, and beliefs. Nonetheless, at least half of Pew respondents replied without hesitation to a question about a political party in aggregate, ignoring the underlying diversity of people and policies.
If responding to a question about a broad aggregate of people seems normal and routine (and I’ll agree it’s almost become routine), let me ask a formally similar question: “Please select one or more of the following. Do the people living at a latitude north of your house make you a) angry, b) frustrated, or c) afraid?” Reading this, you might respond with d) The question doesn’t make any sense.
And my question doesn’t make sense – I arbitrarily picked a group of people about which we’re unlikely to have any preconceived notions. A sensible response will probably be along the lines of “Who could possibly answer a question like that, about a group of people we don’t even know?”
But respondents answered the Pew question, also about people they don’t know, based on existing notions of how a group of people will act en masse. In asking the “afraid” question Pew not only received some remarkable answers, they confirmed that people respond to political parties and their members in aggregate. However, it is not axiomatic that an aggregate question is readily perceived and answered – I posed my variant of the Pew question to suggest that aggregate questions and responses are not automatic. How we’ve arrived at a point where Pew’s question about political party aggregates is considered unremarkable might be as interesting as the remarkable answers the question generated.
Separately from the aggregate nature of the survey question, we might also inquire whether the responses to the Pew question comprise “good data.” For survey questions frequently yield information different than what we expect.
In this case the survey responses are undoubtedly interesting, but the survey question has ambiguities that might make interpretation of the data difficult.
For one thing, we can’t be sure what people mean by subjective terms like “afraid,” which group of people answered the survey, or much about their background. In short, however intriguing this survey is, there isn’t much to compare it to, other than the responses to the same question in a prior survey.
We might also ask why respondents reported three different emotions in roughly equal measure – each at about 50%, within probable error. That number might be real, but it could also be an artifact of this particular group of respondents, or a sign that people simply didn’t distinguish much among the three emotions. Without baseline information it’s hard to know for sure.
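To give a rough sense of why three nearly equal percentages can be hard to tell apart, here is a minimal back-of-envelope sketch. It assumes a simple random sample and a hypothetical sample size of 1,000 (the actual Pew sample size and methodology are not given here), using the standard formula for the margin of error of a reported proportion:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: each of three emotions reported by ~50% of ~1,000 respondents.
p, n = 0.50, 1000
moe = margin_of_error(p, n)
print(f"+/- {moe:.1%}")  # prints "+/- 3.1%"
```

Under these assumed numbers, any two emotions reported within a few percentage points of each other are statistically indistinguishable – which is one reason the near-equal figures alone can’t tell us whether respondents truly felt three distinct emotions in equal measure.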
Finally, there is the question of the questions themselves. Survey answers depend strongly on which questions are asked, and even on the order in which they are asked. To illustrate, here are two different approaches to asking the “afraid” question. In the first approach there is a series of 11 questions: five about the people in our own party, five about the people in the opposing party, and then question 11: are you afraid of them? In the second approach, we are asked only question 11: are you afraid of them?
We’re likely to get quite different responses to these two surveys, as the first survey encouraged us to set aside aggregate thinking before asking a question about aggregates. In the second, we fire off an answer without thinking, simultaneously worried about losing health insurance or the overreach of the deep state (take your pick).
These results might be a precursor to good data – they suggest a line of inquiry, but I don’t think they are the final answer. Are people experiencing negative emotions about opposed political philosophies? I don’t doubt it. As for who and how many are experiencing negative emotions, what emotions those are, and whether the emotions are considered or entirely visceral, I don’t think we know yet. That said, I see the Pew Center as trying primarily to start and continue discussion – and in this they’ve succeeded admirably.
Is this Pew Center survey interesting, useful, a great conversation starter, and hopefully a catalyst to additional inquiry? Doubtless on all counts. At the same time, surveys like this deliver answers that cannot be separated from their originating questions. The significance of the results can have as much to do with the questions as the answers.