3.3.2 Measurement

Measurement is about inferring what your respondents think and do from what they say.

In addition to problems of representation, the total survey error framework shows that the second major source of errors is measurement: how we make inferences from the answers that respondents give to our questions. It turns out that the answers we receive, and therefore the inferences we make, can depend critically, and sometimes in surprising ways, on exactly how we ask. Perhaps nothing illustrates this important point better than a joke in the wonderful book Asking Questions by Norman Bradburn, Seymour Sudman, and Brian Wansink (2004):

Two priests, a Dominican and a Jesuit, are discussing whether it is a sin to smoke and pray at the same time. After failing to reach a conclusion, each goes off to consult his respective superior. When they meet again, the Dominican asks, “What did your superior say?”

The Jesuit responds, “He said it was alright.”

“That’s funny,” the Dominican replies. “My superior said it was a sin.”

The Jesuit asks, “What did you ask him?” The Dominican replies, “I asked him if it was alright to smoke while praying.” “Oh,” says the Jesuit, “I asked if it was OK to pray while smoking.”

Beyond this specific joke, survey researchers have documented many systematic ways that what you learn depends on how you ask. In fact, the very issue at the root of this joke has a name in the survey research community: question form effects (Kalton and Schuman 1982). To see how question form effects might impact real surveys, consider these two very similar-looking survey questions:

  • “How much do you agree with the following statement: Individuals are more to blame than social conditions for crime and lawlessness in this country.”
  • “How much do you agree with the following statement: Social conditions are more to blame than individuals for crime and lawlessness in this country.”

Although both questions appear to measure the same thing, they produced different results in a real survey experiment (Schuman and Presser 1996). When asked one way, about 60% of respondents reported that individuals were more to blame for crime, but when asked the other way, about 60% reported that social conditions were more to blame (figure 3.3). In other words, the small difference between these two questions could lead researchers to very different conclusions.

Figure 3.3: Results from a survey experiment showing that researchers can get different answers depending on exactly how they ask the question. A majority of respondents agreed that individuals are more to blame than social conditions for crime and lawlessness. And a majority of respondents agreed with the opposite: that social conditions are more to blame than individuals. Adapted from Schuman and Presser (1996), table 8.1.
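
To make this kind of comparison concrete, here is a minimal sketch, in Python, of how one might test whether two question forms elicit different rates of agreement. The counts are hypothetical, chosen only to mimic the rough 60%/40% pattern described above; they are not Schuman and Presser’s actual data.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts (NOT Schuman and Presser's actual data):
# form A asked whether individuals are more to blame; form B asked
# whether social conditions are more to blame.
agree_a, n_a = 60, 100  # agreed with form A ("individuals more to blame")
agree_b, n_b = 60, 100  # agreed with form B ("social conditions more to blame")

# Agreeing with form B implies disagreeing that individuals are more to
# blame, so put both forms on the same scale before comparing.
p_a = agree_a / n_a          # share blaming individuals, form A
p_b = (n_b - agree_b) / n_b  # share blaming individuals, form B

# Two-proportion z-test under the null hypothesis that both forms
# elicit the same share of respondents blaming individuals.
pooled = (agree_a + (n_b - agree_b)) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"form A: {p_a:.0%} blame individuals; form B: {p_b:.0%} blame individuals")
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

A statistically reliable difference here is evidence of a question form effect; it is not evidence about which form is the “correct” one.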

In addition to the structure of the question, respondents can also give different answers depending on the specific words used. For example, to measure opinions about governmental priorities, respondents were read the following prompt:

“We are faced with many problems in this country, none of which can be solved easily or inexpensively. I’m going to name some of these problems, and for each one I’d like you to tell me whether you think we’re spending too much money on it, too little money, or about the right amount.”

Next, half of the respondents were asked about “welfare” and half were asked about “aid to the poor.” While these might seem like two different phrases for the same thing, they elicited very different results (figure 3.4): Americans report being much more supportive of “aid to the poor” than “welfare” (Smith 1987; Rasinski 1989; Huber and Paris 2013).

Figure 3.4: Results from a survey experiment showing that respondents are much more supportive of “aid to the poor” than “welfare.” This is an example of a question wording effect, whereby the answers that researchers receive depend on exactly which words they use in their questions. Adapted from Huber and Paris (2013), table A1.
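
Because the governmental-priorities item has three response options (“too little,” “about right,” “too much”), a natural check is a chi-square test of independence on the full 2 x 3 table. Here is a minimal sketch; the counts are invented for illustration, loosely echoing the pattern in figure 3.4, and are not the data from Huber and Paris (2013).

```python
from scipy.stats import chi2_contingency

# Rows: question wording; columns: "too little", "about right", "too much".
# Counts are hypothetical, invented only to illustrate the analysis.
observed = [
    [25, 35, 40],  # respondents asked about "welfare"
    [65, 25, 10],  # respondents asked about "aid to the poor"
]

# Chi-square test of independence: do the two wordings elicit the same
# distribution of spending preferences?
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
```

The same logic extends to any wording experiment whose outcome is a set of categorical response options.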

As these examples of question form effects and wording effects show, the answers that researchers receive can be influenced by how they ask their questions. These examples sometimes lead researchers to wonder about the “correct” way to ask their survey questions. While I think there are some clearly wrong ways to ask a question, I don’t think there is always one single correct way. That is, it is not obviously better to ask about “welfare” or “aid to the poor”; these are two different questions that measure two different things about respondents’ attitudes. These examples also sometimes lead researchers to conclude that surveys should not be used at all. Unfortunately, sometimes there is no alternative. Instead, I think the right lesson to draw from these examples is that we should construct our questions carefully and we should not accept responses uncritically.

Most concretely, this means that if you are analyzing survey data collected by someone else, make sure that you have read the actual questionnaire. And if you are creating your own questionnaire, I have four suggestions. First, I suggest that you read more about questionnaire design (e.g., Bradburn, Sudman, and Wansink 2004); there is more to this than I’ve been able to describe here. Second, I suggest that you copy, word for word, questions from high-quality surveys. For example, if you want to ask respondents about their race/ethnicity, you could copy the questions that are used in large-scale government surveys, such as the census. Although this might sound like plagiarism, copying questions is encouraged in survey research (as long as you cite the original survey). If you copy questions from high-quality surveys, you can be sure that they have been tested, and you can compare the responses to your survey with responses from other surveys. Third, if you think your questionnaire might contain important question wording effects or question form effects, you could run a survey experiment where half of the respondents receive one version of the question and half receive the other version (Krosnick 2011); a sketch of this kind of random assignment appears below. Finally, I suggest that you pilot-test your questions with some people from your frame population; survey researchers call this process pre-testing (Presser et al. 2004). My experience is that survey pre-testing is extremely helpful.
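
To illustrate the third suggestion, here is a minimal sketch of randomly assigning respondents to the two versions of a question in a wording experiment. The respondent IDs and question texts are hypothetical placeholders; the point is simply that the split between versions should be random and, ideally, reproducible.

```python
import random

# Hypothetical respondent IDs; in practice, your sample from the frame population.
respondents = [f"r{i:04d}" for i in range(1000)]

# Two versions of the question (hypothetical wording experiment).
versions = {
    "A": "Are we spending too much, too little, or about the right amount on welfare?",
    "B": "Are we spending too much, too little, or about the right amount on aid to the poor?",
}

# Shuffle with a fixed seed so the assignment is reproducible, then
# give the first half version A and the second half version B.
rng = random.Random(20240301)
shuffled = rng.sample(respondents, k=len(respondents))
half = len(shuffled) // 2
assignment = {r: "A" for r in shuffled[:half]}
assignment.update({r: "B" for r in shuffled[half:]})

# Each respondent sees only their assigned version of the question.
for r in respondents[:3]:
    print(r, "->", assignment[r], "|", versions[assignment[r]])
```

After fielding, the two groups can be compared with the kinds of tests sketched earlier in this section.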