3.5.1 Ecological momentary assessments

Researchers can chop up big surveys and sprinkle them into people's lives.

Ecological momentary assessments (EMA) involve taking traditional surveys, chopping them up into pieces, and sprinkling them into the lives of participants. Thus, survey questions can be asked at an appropriate time and place, rather than in a long interview weeks after the events have occurred.

EMA is characterized by four features: (1) collection of data in real-world environments; (2) assessments that focus on individuals’ current or very recent states or behaviors; (3) assessments that may be event-based, time-based, or randomly prompted (depending on the research question); and (4) completion of multiple assessments over time (Stone and Shiffman 1994). EMA is an approach to asking that is greatly facilitated by smart phones that people interact with frequently throughout the day. Further, because smart phones are packed with sensors—such as GPS and accelerometers—it is increasingly possible to trigger measurements based on activity. For example, a smart phone could be programmed to trigger a survey question if a respondent goes into a particular neighborhood.
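To make this kind of trigger concrete, here is a minimal sketch of how a location-based prompt could work. Everything in it is my own illustration rather than part of any actual EMA platform: the neighborhood is approximated as a circle around a made-up center point and radius, and maybe_trigger_survey is a hypothetical function that a survey app might call with each new GPS fix.

```python
# Minimal sketch (not from any actual study): trigger an EMA survey when a
# GPS fix falls inside a target neighborhood, approximated here as a circle
# around a center point. The place, coordinates, and radius are made up.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical target neighborhood: center point and radius in kilometers.
TARGET = {"name": "downtown", "lat": 40.7357, "lon": -74.1724, "radius_km": 1.0}

def maybe_trigger_survey(lat, lon, send_prompt):
    """If the current GPS fix is inside the target area, send a survey prompt."""
    if haversine_km(lat, lon, TARGET["lat"], TARGET["lon"]) <= TARGET["radius_km"]:
        send_prompt(f"You appear to be near {TARGET['name']}. What are you doing right now?")

# Example: a survey app would call this with each new GPS fix.
maybe_trigger_survey(40.7400, -74.1700, send_prompt=print)
```

A real system would use the phone's geofencing facilities and rate-limit prompts so that participants are not overwhelmed, but the basic logic is the same.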

The promise of EMA is nicely illustrated by the dissertation research of Naomi Sugie. Since the 1970s the United States has dramatically increased the number of people that it imprisons. As of 2005, about 500 in every 100,000 Americans were in prison, a rate of incarceration higher than anywhere else in the world (Wakefield and Uggen 2010). The surge in the number of people entering prison has also produced a surge in the number of people leaving prison; about 700,000 people leave prison each year (Wakefield and Uggen 2010). These ex-offenders face severe challenges upon leaving prison, and unfortunately many end up back in prison. In order to understand and reduce recidivism, social scientists and policy makers need to understand the experience of ex-offenders as they re-enter society. However, these data are hard to collect with standard survey methods because ex-offenders tend to be difficult to study and their lives are extremely unstable. Measurement approaches that deploy surveys every few months miss enormous amounts of the dynamics in their lives (Sugie 2016).

In order to study the re-entry process of ex-offenders with much greater precision, Sugie took a standard probability sample of 131 people from the complete list of individuals leaving prison in Newark, New Jersey. She provided each participant with a smart phone that became a rich data collection platform. Sugie used the phones to administer two kinds of surveys. First, she sent an “experience sampling survey” at a randomly selected time between 9am and 6pm asking participants about their current activities and feelings. Second, at 7pm, she sent a “daily survey” asking about all the activities of that day. Together, these two surveys provided detailed, longitudinal data about the lives of these ex-offenders.
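To illustrate the shape of this schedule, here is a minimal sketch that draws a random experience-sampling time each day between 9am and 6pm and fixes the daily survey at 7pm. The function names and details are my own assumptions, not a description of Sugie's actual software.

```python
# Minimal sketch (assumed details) of the two survey schedules described
# above: an experience sampling prompt at a random time between 9am and 6pm,
# and a daily survey at a fixed 7pm.

import random
from datetime import date, datetime, time, timedelta

def experience_sampling_time(day: date) -> datetime:
    """Pick a uniformly random moment between 9:00 and 18:00 on the given day."""
    start = datetime.combine(day, time(9, 0))
    window_seconds = 9 * 60 * 60  # the 9am-to-6pm window
    return start + timedelta(seconds=random.randrange(window_seconds))

def daily_survey_time(day: date) -> datetime:
    """The end-of-day survey goes out at a fixed 7pm."""
    return datetime.combine(day, time(19, 0))

# Example: build one week of prompt times for a single participant.
today = date.today()
for offset in range(7):
    day = today + timedelta(days=offset)
    print(day, "experience sampling:", experience_sampling_time(day).time(),
          "| daily survey:", daily_survey_time(day).time())
```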

In addition to these surveys, the phones recorded the participants’ geographic location at regular intervals and kept encrypted records of call and text meta-data. All of this data collection, particularly the passive data collection, raises some ethical questions, but Sugie’s design handled them well. Sugie received meaningful informed consent from each participant for this data collection, used appropriate security protections, and enabled participants to turn off the geographic tracking. Further, in order to minimize the risk of forced disclosure of data (e.g., a subpoena from the police), Sugie obtained a Certificate of Confidentiality from the federal government before any data were collected (Beskow, Dame, and Costello 2008; Wolf et al. 2012). Sugie’s procedures were reviewed by a third party (her university’s Institutional Review Board), and they went far beyond what is required by existing regulations. As such, I think her work provides a valuable model for other researchers facing these same challenges; see Sugie (2014) and Sugie (2016) for a more detailed discussion.
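For readers curious what a security protection for call and text meta-data can look like in practice, here is one generic possibility: replacing raw phone numbers with keyed hashes so that records can be linked across events without storing the numbers themselves. This is my own illustration of a common technique, not a description of the protections Sugie actually used.

```python
# Illustrative sketch only: one generic way to pseudonymize phone numbers in
# call/text meta-data before storage, using a keyed hash (HMAC). This is an
# example of a common protection, not a description of the study's system.

import hmac
import hashlib

# In practice the key would be generated securely and stored separately
# from the data; this constant is a placeholder.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(phone_number: str) -> str:
    """Return a keyed hash of a phone number so records can be linked
    across events without storing the raw number."""
    return hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()

# Example meta-data record (made-up values): keep timing and direction,
# drop the raw number.
record = {
    "timestamp": "2013-06-01T14:32:00",
    "direction": "outgoing",
    "contact": pseudonymize("+15551234567"),
}
print(record)
```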

The ability to secure and hold a stable job is important for a successful re-entry process. However, Sugie found that her participants’ work experiences were informal, temporary, and sporadic. Within her participant pool, there were four distinct patterns: “early exit” (those who start searching for work but then drop out of the labor market), “persistent search” (those who spend much of the period searching for work), “recurring work” (those who spend much of the period working), and “low response” (those who do not respond to the surveys regularly). Sugie wanted to understand more about the people who stop searching for jobs. One possibility is that these searchers become discouraged and depressed and eventually drop out of the labor market. Aware of this possibility, Sugie used her surveys to collect data about the emotional state of participants, and she found that the “early exit” group did not report higher levels of stress or unhappiness. Rather, the opposite was the case: those who continued to search for work reported more feelings of emotional distress. All of this fine-grained, longitudinal detail about the behavior and emotional state of the ex-offenders is important for understanding the barriers they face and easing their transition back into society, and it would have been missed in a standard survey.

There are three general lessons from Sugie’s work. First, new approaches to asking are completely compatible with traditional methods of sampling; recall that Sugie took a standard probability sample from a well-defined frame population. Second, high-frequency, longitudinal measurements can be particularly valuable for studying social experiences that are irregular and dynamic. Third, when survey data collection is combined with digital traces, additional ethical issues can arise. I’ll treat research ethics in more detail in Chapter 6, but Sugie’s work shows that these issues are addressable by conscientious and thoughtful researchers.