Abstract
People increasingly rely on AI advice when making decisions. At times, such advice can promote selfish behavior. When individuals abide by selfishness-promoting AI advice, how are they perceived and punished? To study this question, we build on theories from social psychology and combine machine-behavior and behavioral-economic approaches. In a pre-registered, financially incentivized experiment, evaluators could punish real decision-makers who (i) received AI, human, or no advice. The advice (ii) encouraged selfish or prosocial behavior, and decision-makers (iii) behaved selfishly or, in a control condition, behaved prosocially. Evaluators further assigned responsibility to decision-makers and their advisors. Results revealed that (i) prosocial behavior was punished very little, whereas selfish behavior was punished much more. Focusing on selfish behavior, (ii) compared to receiving no advice, selfish behavior was penalized more harshly after prosocial advice and more leniently after selfish advice. Lastly, (iii) whereas selfish decision-makers were seen as more responsible when they followed AI rather than human advice, punishment did not vary between the two advice sources. Overall, behavior and advice content shape punishment, whereas the advice source does not.
| Original language | English |
|---|---|
| Article number | 108709 |
| Journal | Computers in Human Behavior |
| Volume | 171 |
| DOIs | |
| State | Published - 1 Oct 2025 |
| Externally published | Yes |
Keywords
- Advice
- Artificial intelligence
- Costly punishment
- Machine behavior
- Selfish behavior
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- General Psychology
- Human-Computer Interaction