
3.4 Who to ask

Probability samples and non-probability samples are not that different in practice; in both cases, it’s all about the weights.

Sampling is fundamental to survey research. Researchers almost never ask their questions to everyone in their target population. In this regard, surveys are not unique. Most research, in one way or another, involves sampling. Sometimes this sampling is done explicitly by the researcher; other times it happens implicitly. For example, a researcher who runs a laboratory experiment on undergraduate students at her university has also taken a sample. Thus, sampling is a problem that comes up throughout this book. In fact, one of the most common concerns that I hear about digital age sources of data is "they are not representative." As we will see in this section, this concern is both less serious and more subtle than many skeptics realize. In fact, I will argue that the whole concept of "representativeness" is not helpful for thinking about probability and non-probability samples. Instead, the key is to think about how the data were collected and how any biases in that data collection can be undone when making estimates.

Currently, the dominant theoretical approach to representation is probability sampling. When data are collected with a probability sampling method that has been perfectly executed, researchers are able to weight their data based on the way that they were collected to make unbiased estimates about the target population. However, perfect probability sampling basically never happens in the real world. There are typically two main problems: 1) differences between the target population and the frame population, and 2) non-response (these are exactly the problems that wrecked the Literary Digest poll). Thus, rather than thinking of probability sampling as a realistic model of what actually happens in the world, it is better to think of probability sampling as a helpful, abstract model, much like the way physicists think about a frictionless ball rolling down an infinitely long ramp.
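To make the idea of weighting by the design concrete, here is a minimal simulation sketch. All numbers (the population, the trait, the inclusion probabilities) are hypothetical illustrations, not from the text: one group is deliberately oversampled, so the unweighted sample mean is biased, but weighting each respondent by the inverse of her known probability of inclusion (a Horvitz-Thompson-style estimator) undoes that design bias.

```python
import random

random.seed(0)

# Hypothetical population of 1,000 people with a binary trait
# (say, supporting a policy): group A (800 people, 25% support)
# and group B (200 people, 75% support).
population = [("A", 1 if i < 200 else 0) for i in range(800)] + \
             [("B", 1 if i < 150 else 0) for i in range(200)]
true_mean = sum(y for _, y in population) / len(population)  # 0.35

# Unequal but *known* probabilities of inclusion: B is heavily oversampled.
pi = {"A": 0.05, "B": 0.50}

sample = [(g, y) for g, y in population if random.random() < pi[g]]

# The naive (unweighted) mean is pulled toward the oversampled group B.
naive = sum(y for _, y in sample) / len(sample)

# Weighting each respondent by 1/pi corrects for the unequal design.
weighted = (sum(y / pi[g] for g, y in sample) /
            sum(1 / pi[g] for g, _ in sample))

print(f"true: {true_mean:.2f}  naive: {naive:.2f}  weighted: {weighted:.2f}")
```

The key point of the sketch is that the correction requires knowing the inclusion probabilities; when non-response makes them unknown, this clean recipe no longer applies directly.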

The alternative to probability sampling is non-probability sampling. The main difference between probability and non-probability sampling is that with probability sampling everyone in the population has a known probability of inclusion. There are, in fact, many varieties of non-probability sampling, and these methods of data collection are becoming increasingly common in the digital age. But non-probability sampling has a terrible reputation among social scientists and statisticians. In fact, non-probability sampling is associated with some of the most dramatic failures of survey research, such as the Literary Digest fiasco (discussed earlier) and the incorrect prediction about the US presidential election of 1948 ("Dewey Defeats Truman") (Mosteller 1949; Bean 1950; Freedman, Pisani, and Purves 2007).

However, the time is right to reconsider non-probability sampling for two reasons. First, as probability samples have become increasingly difficult to conduct in practice, the line between probability samples and non-probability samples is blurring. When there are high rates of non-response (as there are in real surveys now), the actual probabilities of inclusion for respondents are not known, and thus, probability samples and non-probability samples are not as different as many researchers believe. In fact, as we will see below, both approaches basically rely on the same estimation method: post-stratification. Second, there have been many developments in the collection and analysis of non-probability samples. These methods are different enough from the methods that caused problems in the past that I think it makes sense to think of them as "non-probability sampling 2.0." We should not have an irrational aversion to non-probability methods because of errors that happened a long time ago.
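Post-stratification itself can be shown in a few lines. In this deterministic sketch, with hypothetical numbers of my own invention, a haphazard sample is badly skewed toward one group; computing the mean within each group and then recombining those group means using known population shares (rather than the distorted sample shares) moves the estimate back toward the truth:

```python
# Known population shares for two strata (e.g., from a census).
population_share = {"young": 0.30, "old": 0.70}

# A haphazard, non-probability sample, heavily skewed toward the young:
# (group, response) pairs, where response is a binary trait.
sample = [("young", 1)] * 60 + [("young", 0)] * 20 + \
         [("old", 1)] * 5 + [("old", 0)] * 15

n = len(sample)

# Mean of the response within each stratum of the sample.
group_mean = {g: sum(y for s, y in sample if s == g) /
                 sum(1 for s, _ in sample if s == g)
              for g in population_share}

# Unweighted mean reflects the skewed sample: 65/100 = 0.65.
naive = sum(y for _, y in sample) / n

# Post-stratified estimate recombines group means with *population* shares:
# 0.30 * 0.75 + 0.70 * 0.25 = 0.40.
post_stratified = sum(population_share[g] * group_mean[g]
                      for g in population_share)

print(f"naive: {naive:.2f}  post-stratified: {post_stratified:.2f}")
```

Note that this correction implicitly assumes that, within each group, the people who ended up in the sample are like the people who did not; when that assumption fails, post-stratification alone cannot fix the bias.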

Next, in order to make this argument more concrete, I'll review standard probability sampling and weighting (Section 3.4.1). The key idea is that how you collected your data should affect how you make estimates. In particular, if everyone does not have the same probability of inclusion, then everyone should not have the same weight. In other words, if your sampling is not democratic, then your estimates should not be democratic either. After reviewing weighting, I'll describe two approaches to non-probability sampling: one that focuses on weighting to deal with the problem of haphazardly collected data (Section 3.4.2), and one that tries to exert more control over how the data are collected (Section 3.4.3). The arguments in the main text will be explained with words and pictures; readers who would like a more mathematical treatment should also see the technical appendix.