Journal ArticleDOI

Data collection in a flat world: the strengths and weaknesses of mechanical turk samples

TLDR
The authors compared Mechanical Turk participants with community and student samples on a set of personality dimensions and classic decision-making biases and found that MTurk participants are less extraverted and have lower self-esteem than other participants, presenting challenges for some research domains.
Abstract
Mechanical Turk (MTurk), an online labor system run by Amazon.com, provides quick, easy, and inexpensive access to online research participants. As use of MTurk has grown, so have questions from behavioral researchers about its participants, reliability, and low compensation. In this article, we review recent research about MTurk and compare MTurk participants with community and student samples on a set of personality dimensions and classic decision-making biases. Across two studies, we find many similarities between MTurk participants and traditional samples, but we also find important differences. For instance, MTurk participants are less likely to pay attention to experimental materials, reducing statistical power. They are more likely to use the Internet to find answers, even with no incentive for correct responses. MTurk participants have attitudes about money that are different from a community sample’s attitudes but similar to students’ attitudes. Finally, MTurk participants are less extraverted and have lower self-esteem than other participants, presenting challenges for some research domains. Despite these differences, MTurk participants produce reliable results consistent with standard decision-making biases: they are present-biased, risk-averse for gains, risk-seeking for losses, show delay/expedite asymmetries, and show the certainty effect, with almost no significant differences in effect sizes from other samples. We conclude that MTurk offers a highly valuable opportunity for data collection and recommend that researchers using MTurk (1) include screening questions that gauge attention and language comprehension; (2) avoid questions with factual answers; and (3) consider how individual differences in financial and social domains may influence results. Copyright © 2012 John Wiley & Sons, Ltd.
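To make the certainty effect concrete, the classic Allais-style choice pair from Kahneman and Tversky's prospect theory paper is a standard illustration (not drawn from this article's own materials):

Problem 1: A = (2,400 with probability .66; 2,500 with probability .33; 0 with probability .01) versus B = 2,400 with certainty. Most respondents choose B.
Problem 2: C = (2,500 with probability .33; 0 otherwise) versus D = (2,400 with probability .34; 0 otherwise). Most respondents choose C.

Under expected utility, preferring B implies u(2400) > .33 u(2500) + .66 u(2400), that is, .34 u(2400) > .33 u(2500), while preferring C implies the reverse inequality. The modal pattern of choices therefore violates expected utility; this disproportionate preference for certain over merely probable outcomes is what the certainty effect describes.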


Citations
Journal ArticleDOI

Inside the Turk Understanding Mechanical Turk as a Participant Pool

TL;DR: This paper examines the characteristics of Mechanical Turk as a participant pool for psychology and the other social sciences, highlighting the traits of MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares with that from other pools and depends on both controllable and uncontrollable factors.
Journal ArticleDOI

Beyond the Turk: Alternative platforms for crowdsourcing behavioral research

TL;DR: This article found that participants on both CrowdFlower (CF) and Prolific Academic (ProA) were more naive and less dishonest than MTurk participants, and that ProA produced data quality higher than CF's and comparable to MTurk's.
Journal ArticleDOI

Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants

TL;DR: In three online studies, participants from MTurk and collegiate populations completed a task that included a measure of attentiveness to instructions (an instructional manipulation check, or IMC); MTurkers were more attentive to the instructions than college students, even on novel IMCs.
Journal ArticleDOI

Reputation as a sufficient condition for data quality on Amazon Mechanical Turk.

TL;DR: It is concluded that sampling high-reputation workers can ensure high-quality data without resorting to attention check questions (ACQs), which may lead to selection bias if participants who fail them are excluded post hoc.
References
Book ChapterDOI

Prospect theory: an analysis of decision under risk

TL;DR: In this paper, the authors present a critique of expected utility theory as a descriptive model of decision making under risk, and develop an alternative model, called prospect theory, in which value is assigned to gains and losses rather than to final assets and in which probabilities are replaced by decision weights.
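For context, the core valuation rule of the original theory can be sketched as follows (standard notation for a regular two-outcome prospect; a sketch, not a full statement of the model):

V(x, p; y, q) = \pi(p) v(x) + \pi(q) v(y)

Here v is a value function defined over gains and losses relative to a reference point (concave for gains, convex and steeper for losses), and \pi is a decision weight function applied to stated probabilities, which overweights small probabilities and underweights moderate and large ones.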
Book

Judgment Under Uncertainty: Heuristics and Biases

TL;DR: The authors describe three heuristics employed in making judgements under uncertainty: representativeness, availability of instances or scenarios, and adjustment from an anchor, the last of which is typically used in numerical prediction when a relevant value is available.
Journal ArticleDOI

The Framing of Decisions and the Psychology of Choice

TL;DR: The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways.
Journal ArticleDOI

Advances in prospect theory: cumulative representation of uncertainty

TL;DR: Cumulative prospect theory applies to uncertain as well as to risky prospects with any number of outcomes and allows different weighting functions for gains and for losses; two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions.
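A commonly used parametric sketch of the cumulative model (the functional forms proposed in the original paper; the parameter names follow standard notation):

v(x) = x^\alpha for x >= 0, and v(x) = -\lambda (-x)^\beta for x < 0
w(p) = p^\gamma / (p^\gamma + (1 - p)^\gamma)^{1/\gamma}

Diminishing sensitivity corresponds to \alpha, \beta < 1, loss aversion to \lambda > 1, and the inverse-S-shaped weighting function w is applied to cumulative (rank-dependent) probabilities, with separate curvature parameters permitted for gains and for losses.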