TY - JOUR
T1 - Judging One’s Own or Another Person’s Responsibility in Interactions With Automation
AU - Douer, Nir
AU - Meyer, Joachim
N1 - Funding Information:
The research is part of the first author’s PhD dissertation at the Department of Industrial Engineering at Tel Aviv University. The research was partly funded by Israel Science Foundation grant 2019/19 to the second author. The experimental platform was developed by Samuel C. Cherna.
Publisher Copyright:
© Copyright 2020, The Author(s).
PY - 2022/3/1
Y1 - 2022/3/1
N2 - Objective: We explore users’ and observers’ subjective assessments of human and automation capabilities and human causal responsibility for outcomes. Background: In intelligent systems and advanced automation, human responsibility for outcomes becomes equivocal, as do subjective perceptions of responsibility. In particular, actors who actively work with a system may perceive responsibility differently from observers. Method: In a laboratory experiment with pairs of participants, one participant (the “actor”) performed a decision task, aided by an automated system, and the other (the “observer”) passively observed the actor. We compared the perceptions of responsibility between the two roles when interacting with two systems with different capabilities. Results: Actors’ behavior matched the theoretical predictions, and actors and observers assessed the system and human capabilities and the comparative human responsibility similarly. However, actors tended to attribute adverse outcomes more to system characteristics than to their own limitations, whereas the observers insufficiently considered system capabilities when evaluating the actors’ comparative responsibility. Conclusion: When intelligent systems greatly exceed human capabilities, users may correctly feel they contribute little to system performance. They may interfere more than necessary, impairing the overall performance. Outside observers, such as managers, may overestimate users’ contribution to outcomes, holding users responsible for adverse outcomes when they rightly trusted the system. Application: Presenting users of intelligent systems and others with performance measures and the comparative human responsibility may help them calibrate subjective assessments of performance, reducing users’ and outside observers’ biases and attribution errors.
KW - decision making
KW - human-automation interaction
KW - warning compliance
KW - warning systems
UR - http://www.scopus.com/inward/record.url?scp=85089008452&partnerID=8YFLogxK
U2 - 10.1177/0018720820940516
DO - 10.1177/0018720820940516
M3 - Article
C2 - 32749166
AN - SCOPUS:85089008452
VL - 64
SP - 359
EP - 371
JO - Human Factors
JF - Human Factors
SN - 0018-7208
IS - 2
ER -