3.5.2 Wiki surveys

Wiki surveys enable new hybrids of closed and open questions.

In addition to asking questions at more natural times and in more natural contexts, new technology also allows us to change the form of the questions. Most survey questions are closed, with respondents selecting from a fixed set of choices written by researchers. This is a process that one prominent survey researcher calls “putting words in people’s mouths.” For example, here’s a closed survey question:

“This next question is on the subject of work. Would you please look at this card and tell me which thing on this list you would most prefer in a job?

  1. High income
  2. No danger of being fired
  3. Working hours are short, lots of free time
  4. Chances for advancement
  5. The work is important, and gives a feeling of accomplishment.”

But are these the only possible answers? Might researchers be missing something important by limiting the responses to these five? The alternative to closed questions is an open-ended survey question. Here’s the same question asked in an open form:

“This next question is on the subject of work. People look for different things in a job. What would you most prefer in a job?”

Although these two questions appear quite similar, a survey experiment by Howard Schuman and Stanley Presser (1979) revealed that they can produce very different results: nearly 60% of the responses to the open question were not among the five researcher-written answer choices (figure 3.9).

Figure 3.9: Results from a survey experiment showing that responses can depend on whether the question is asked in closed or open form. Adapted from Schuman and Presser (1979), table 1.

Although open and closed questions can yield quite different information and both were popular in the early days of survey research, closed questions have come to dominate the field. This dominance is not because closed questions have been proven to provide better measurement, but rather because they are much easier to use; analyzing open-ended questions is error-prone and expensive. The move away from open questions is unfortunate because it is precisely the information that researchers did not know ahead of time that can be the most valuable.

The transition from human-administered to computer-administered surveys, however, suggests a new way out of this old problem. What if we could now have survey questions that combine the best features of both open and closed questions? That is, what if we could have a survey that both is open to new information and produces easy-to-analyze responses? That’s exactly what Karen Levy and I (2015) have tried to create.

In particular, Karen and I thought that websites that collect and curate user-generated content might be able to inform the design of new types of surveys. We were particularly inspired by Wikipedia—a wonderful example of an open, dynamic system driven by user-generated content—so we called our new survey a wiki survey. Just as Wikipedia evolves over time based on the ideas of its participants, we imagined a survey that evolves over time based on the ideas of its participants. Karen and I developed three properties that wiki surveys should satisfy: they should be greedy, collaborative, and adaptive. Then, with a team of web developers, we created a website that could run wiki surveys: www.allourideas.org.

The data collection process in a wiki survey is illustrated by a project we did with the New York City Mayor’s Office in order to integrate residents’ ideas into PlaNYC 2030, New York’s citywide sustainability plan. To begin the process, the Mayor’s Office generated a list of 25 ideas based on their previous outreach (e.g., “Require all big buildings to make certain energy efficiency upgrades” and “Teach kids about green issues as part of school curriculum”). Using these 25 ideas as seeds, the Mayor’s Office asked the question “Which do you think is a better idea for creating a greener, greater New York City?” Respondents were presented with a pair of ideas (e.g., “Open schoolyards across the city as public playgrounds” and “Increase targeted tree plantings in neighborhoods with high asthma rates”), and were asked to choose between them (figure 3.10). After choosing, respondents were immediately presented with another randomly selected pair of ideas. They were able to continue contributing information about their preferences for as long as they wished either by voting or by choosing “I can’t decide.” Crucially, at any point, respondents were able to contribute their own ideas, which—pending approval by the Mayor’s Office—became part of the pool of ideas to be presented to others. Thus, the questions that participants received were both open and closed simultaneously.
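To make that flow concrete, here is a minimal sketch in Python of the pairwise voting loop just described: serve a randomly selected pair of approved ideas, record the respondent’s choice (an “I can’t decide” response simply records nothing), and hold respondent-contributed ideas until they are approved. The class and function names (Idea, WikiSurvey, next_pair, and so on) are illustrative assumptions, not the actual allourideas.org implementation, which is a full web application.

```python
import random
from dataclasses import dataclass, field


@dataclass
class Idea:
    text: str
    approved: bool = True  # seed ideas start out approved


@dataclass
class WikiSurvey:
    prompt: str
    ideas: list = field(default_factory=list)
    votes: list = field(default_factory=list)  # (winner_text, loser_text) pairs

    def next_pair(self):
        """Return a randomly selected pair of approved ideas."""
        pool = [idea for idea in self.ideas if idea.approved]
        return random.sample(pool, 2)

    def record_vote(self, winner, loser):
        """Store one pairwise choice; a skipped pair records nothing."""
        self.votes.append((winner.text, loser.text))

    def submit_idea(self, text):
        """Hold a respondent-contributed idea until it is approved."""
        self.ideas.append(Idea(text, approved=False))


# Hypothetical usage with two of the seed ideas quoted above
survey = WikiSurvey("Which do you think is a better idea for creating "
                    "a greener, greater New York City?")
survey.ideas += [
    Idea("Open schoolyards across the city as public playgrounds"),
    Idea("Increase targeted tree plantings in neighborhoods with high asthma rates"),
]
a, b = survey.next_pair()
survey.record_vote(a, b)  # the respondent chose idea a over idea b
survey.submit_idea("A new idea typed in by a respondent")  # awaits approval
```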

Figure 3.10: Interface for a wiki survey. Panel (a) shows the response screen and panel (b) shows the result screen. Reproduced by permission from Salganik and Levy (2015), figure 2.

The Mayor’s Office launched its wiki survey in October 2010 in conjunction with a series of community meetings to obtain resident feedback. Over about four months, 1,436 respondents contributed 31,893 responses and 464 new ideas. Critically, 8 of the top 10 scoring ideas were uploaded by participants rather than being part of the set of seed ideas from the Mayor’s Office. And, as we describe in our paper, this same pattern, with uploaded ideas scoring better than seed ideas, happens in many wiki surveys. In other words, by being open to new information, researchers are able to learn things that would have been missed using more closed approaches.
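The scores referred to here estimate, for each idea, roughly the chance that it would beat a randomly chosen other idea. As a simplified illustration only, the sketch below computes a naive version of such a score as a raw win rate over the recorded pairwise votes; the scores reported for the PlaNYC wiki survey were instead estimated with a Bayesian hierarchical model (Salganik and Levy 2015).

```python
from collections import defaultdict


def naive_scores(votes):
    """Naive win-rate score for each idea from (winner, loser) vote pairs.

    An illustration only: the actual wiki survey scores come from a
    Bayesian hierarchical model (Salganik and Levy 2015), not raw win rates.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {idea: wins[idea] / appearances[idea] for idea in appearances}


# e.g., sorted(naive_scores(survey.votes).items(), key=lambda kv: -kv[1])
# would rank the ideas collected by the sketch above.
```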

Beyond the results of these specific surveys, our wiki survey project also illustrates how the cost structure of digital research means that researchers can now engage with the world in somewhat different ways. Academic researchers are now able to build real systems that can be used by many people: we have hosted more than 10,000 wiki surveys and have collected more than 15 million responses. This ability to create something that can be used at scale comes from the fact that once the website has been built, it costs basically nothing to make it freely available to everyone in the world (of course, this would not be true if we had human-administered interviews). Further, this scale enables different kinds of research. For example, these 15 million responses, as well as our stream of participants, provide a valuable test-bed for future methodological research. I’ll describe more about other research opportunities that are created by digital-age cost structures—particularly zero variable cost data—when I discuss experiments in chapter 4.