TY - JOUR
T1 - Decision control and explanations in human-AI collaboration
T2 - Improving user perceptions and compliance
AU - Westphal, Monika
AU - Vössing, Michael
AU - Satzger, Gerhard
AU - Yom-Tov, Galit B.
AU - Rafaeli, Anat
N1 - Funding Information:
This research was supported by the Federal Ministry for Economic Affairs and Climate Action of Germany (BMWK) through the Smart Design and Construction project (Project ID: 01MK20016F), the Karlsruhe Institute of Technology, and the Technion–Israel Institute of Technology. We thank David Heigl for his support with the initial data curation and formal analysis. We also thank Wes Cowley for English language editing and proofreading.
Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/7/1
Y1 - 2023/7/1
AB - Human-AI collaboration has become common, integrating highly complex AI systems into the workplace. Still, it is often ineffective; impaired perceptions – such as low trust or limited understanding – reduce compliance with recommendations provided by the AI system. Drawing from cognitive load theory, we examine two techniques of human-AI collaboration as potential remedies. In three experimental studies, we grant users decision control by empowering them to adjust the system's recommendations, and we offer explanations for the system's reasoning. We find decision control positively affects user perceptions of trust and understanding, and improves user compliance with system recommendations. Next, we isolate different effects of providing explanations that may help explain inconsistent findings in recent literature: while explanations help reenact the system's reasoning, they also increase task complexity. Further, the effectiveness of providing an explanation depends on the specific user's cognitive ability to handle complex tasks. In summary, our study shows that users benefit from enhanced decision control, while explanations – unless appropriately designed for the specific user – may even harm user perceptions and compliance. This work bears both theoretical and practical implications for the management of human-AI collaboration.
KW - Decision control
KW - Explanations
KW - Human-AI collaboration
KW - Task complexity
KW - User compliance
KW - User trust
UR - http://www.scopus.com/inward/record.url?scp=85150255936&partnerID=8YFLogxK
U2 - 10.1016/j.chb.2023.107714
DO - 10.1016/j.chb.2023.107714
M3 - Article
AN - SCOPUS:85150255936
SN - 0747-5632
VL - 144
JO - Computers in Human Behavior
JF - Computers in Human Behavior
M1 - 107714
ER -