Research
Publications
Rational Inattention in Games: Experimental Evidence (with Daniel Martin), Experimental Economics, 27 (2024), No. 4, 715–742.
To investigate whether attention responds rationally to strategic incentives, we experimentally implement a buyer-seller game in which a fully informed seller makes a take-it-or-leave-it offer to a buyer who faces cognitive costs to process information about the offer's value. We isolate the impact of seller strategies on buyer attention by exogenously varying the seller's outside option, which leads sellers to price high more often. We find that buyers respond by making fewer mistakes conditional on value, which suggests that buyers exert higher attentional effort in response to the increased strategic incentives for paying attention. We show that a standard model of rational inattention based on Shannon mutual information cannot fully explain this change in buyer behavior. However, we identify another class of rational inattention models consistent with this behavioral pattern. [Data and Analysis Files] [Appendix]
Working papers
Human Responses to AI Oversight: Evidence from Centre Court (with Romain Gauriot, Lionel Page, and Daniel Martin)
Selected Coverage: The Economist - Kellogg Insight - CBC Radio - Communications of the ACM - Novigi - Social Warming
Extended abstract at EC'24; 15-minute presentation at Wharton [Video]
Powered by the increasing predictive capabilities of machine learning algorithms, artificial intelligence (AI) systems have begun to be used to overrule human mistakes in many settings. We provide the first field evidence that this AI oversight carries psychological costs that can impact human decision-making. We investigate one of the highest-visibility settings in which AI oversight has occurred: the Hawk-Eye review of umpires in top tennis tournaments. We find that umpires lowered their overall mistake rate after the introduction of Hawk-Eye review, in line with rational inattention given psychological costs of being overruled by AI. We also find that umpires increased the rate at which they called balls in, which produced a shift from making Type II errors (calling a ball out when in) to Type I errors (calling a ball in when out). We structurally estimate the psychological costs of being overruled by AI using a model of rationally inattentive umpires, and our results suggest that because of these costs, umpires cared twice as much about Type II errors under AI oversight.
Work in progress
Heuristics, Similarity and Experimentation: Evidence from Chess (with Yuval Salant and Jörg Spenkuch)
AI vs. Human Oversight: Experimental Evidence on Task Performance (with Lucas Lippman and Daniel Martin)