The category size bias: A mere misunderstanding
Judgment and Decision Making, 2018, Vol. 13, No. 2
Abstract
Redundant or excessive information can sometimes lead people to lean on it unnecessarily. Certain experimental designs can sometimes bias results in the researcher’s favor. And sometimes, interesting effects are too small to study practically, or are simply zero. We believe a confluence of these factors led to the findings of a recent paper (Isaac & Brough, 2014, JCR). That paper proposed a new means by which probability judgments can be led astray: the category size bias, whereby an individual event from a large category is judged more likely to occur than an event from a small one. Our work shows that this effect may be due to instructional and mechanical confounds rather than interesting psychology. We present eleven studies, with over ten times the sample size of the original, in support of our conclusion: we replicate three of the five original studies and reduce or eliminate the effect by resolving these methodological issues, even significantly reversing the bias in one case (Study 6). Studies 7–8c suggest the remaining two studies are false positives. We conclude with a discussion of the subtleties of instruction wording, the difficulties of correcting the record, and the importance of replication and open science.
Authors and Affiliations
Hannah Perfecto, Leif D. Nelson and Don A. Moore