3.3.2 Measurement

Measurement is about making inferences from what your respondents say to what your respondents think and do.

The second category of the total survey error framework is measurement; it deals with how we can make inferences from the answers that respondents give to our questions. It turns out that the answers we receive, and therefore the inferences we make, can depend critically, and sometimes in surprising ways, on exactly how we ask. Perhaps nothing illustrates this important point better than a joke in the wonderful book Asking Questions by Norman Bradburn, Seymour Sudman, and Brian Wansink (2004):

Two priests, a Dominican and a Jesuit, are discussing whether it is a sin to smoke and pray at the same time. After failing to reach a conclusion, each goes off to consult his respective superior. When they meet again, the Dominican asks, “What did your superior say?”

The Jesuit responds, “He said it was alright.”

“That’s funny,” the Dominican replies. “My superior said it was a sin.”

The Jesuit asks, “What did you ask him?” The Dominican replies, “I asked him if it was alright to smoke while praying.” “Oh,” says the Jesuit, “I asked if it was OK to pray while smoking.”

There are many examples of anomalies like the one experienced by the two priests. In fact, the very issue at the root of this joke has a name in the survey research community: question form effects (Kalton and Schuman 1982). To see how question form effects might impact real surveys, consider these two very similar looking survey questions:

  • “How much do you agree with the following statement: Individuals are more to blame than social conditions for crime and lawlessness in this country.”
  • “How much do you agree with the following statement: Social conditions are more to blame than individuals for crime and lawlessness in this country.”

Although both questions appear to measure the same thing, they produced different results in a real survey experiment (Schuman and Presser 1996). When asked one way, about 60% of respondents reported that individuals were more to blame for crime, but when asked the other way, about 60% reported that social conditions were more to blame (Figure 3.2). In other words, the small difference between the two questions could lead researchers to very different conclusions.

Figure 3.2: Results from a survey experiment (Schuman and Presser 1996, Table 8.1). Researchers can get different answers depending on exactly how they asked the question. This is an example of a question form effect (Kalton and Schuman 1982).
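
If you run a split-ballot experiment like this yourself, comparing the two question forms comes down to comparing two proportions. Below is a minimal sketch in Python of that comparison; the counts are hypothetical, chosen only to echo the roughly 60%/40% pattern described above, not Schuman and Presser’s actual data.

```python
# Minimal sketch: did two question forms elicit different answers in a
# split-ballot experiment? All counts below are hypothetical.
from math import sqrt, erf

# Form A: "Individuals are more to blame..."; Form B: reversed wording.
n_a, blame_individuals_a = 500, 300   # hypothetical: 60% under Form A
n_b, blame_individuals_b = 500, 200   # hypothetical: 40% under Form B

p_a = blame_individuals_a / n_a
p_b = blame_individuals_b / n_b
p_pool = (blame_individuals_a + blame_individuals_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se                                   # two-proportion z-test
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

print(f"Form A: {p_a:.0%} blame individuals; Form B: {p_b:.0%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

In practice you would probably use a statistics library, but the arithmetic makes the point: the same population can support two opposite headlines, depending on the form of the question.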

In addition to the structure of the question, respondents can also give different answers based on the specific words used. For example, in a study measuring opinions about governmental priorities, respondents were read the following prompt:

“We are faced with many problems in this country, none of which can be solved easily or inexpensively. I’m going to name some of these problems, and for each one I’d like you to tell me whether you think we’re spending too much money on it, too little money, or about the right amount.”

Next, half of the respondents were asked about “welfare” and half were asked about “aid to the poor.” While these might seem like two different phrases for the same thing, they elicited very different results (Figure 3.3); Americans report being much more supportive of “aid to the poor” than “welfare” (Smith 1987; Rasinski 1989; Huber and Paris 2013). While survey researchers consider these wording effects to be anomalies, they could also be considered research findings. That is, we have learned something about public opinion from this result.

Figure 3.3: Results from Huber and Paris (2013). Respondents are much more supportive of “aid to the poor” than “welfare.” This is an example of a question wording effect whereby the answers that researchers receive depend on exactly which words they use in their questions.
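
On the design side, the mechanics of such a wording experiment are straightforward: hold everything constant except the word in question, and randomly assign each respondent to one of the two wordings. Here is a minimal sketch in Python; the respondent IDs and exact wordings are illustrative, not taken from the studies above.

```python
# Minimal sketch of a split-ballot wording experiment: each respondent
# is randomly assigned one of two wordings of the same spending item.
import random

WORDINGS = {"A": "welfare", "B": "aid to the poor"}

def assign_wording(respondent_ids, seed=42):
    """Randomly split respondents into two equal-sized wording conditions."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {rid: ("A" if i < half else "B") for i, rid in enumerate(ids)}

assignments = assign_wording(range(10))
for rid in sorted(assignments):
    term = WORDINGS[assignments[rid]]
    print(f"Respondent {rid}: are we spending too much, too little, "
          f"or about the right amount on {term}?")
```

Because the assignment is random, any systematic difference between the two groups’ answers can be attributed to the wording itself.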

As these examples about question form effects and wording effects show, the answers that researchers receive can be influenced in subtle ways based on how they ask their questions. This does not mean that surveys should not be used; often there is no choice. Rather, the examples illustrate that we should construct our questions carefully and we should not accept responses uncritically.

Most concretely, this means that if you are analyzing survey data collected by someone else, make sure that you have read the actual questionnaire. And, if you are creating your own questionnaire, I have three suggestions. First, I suggest you read more about questionnaire design (e.g., Bradburn, Sudman, and Wansink 2004); there is more to it than I’ve been able to describe here. Second, I suggest that you copy, word for word, questions from high-quality surveys. Although this sounds like plagiarism, copying questions is encouraged in survey research (as long as you cite the original survey). If you copy questions from high-quality surveys, you can be sure that they have been tested, and you can compare the responses to your survey with responses from some other survey. Finally, I suggest you pre-test your questions with some people from your frame population (Presser et al. 2004); my experience is that pre-testing always reveals surprising issues.