Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-assisted Decision Making
Abstract
AI is increasingly used in high-stakes decision-making scenarios, but full automation is often undesirable.
Human experts can complement AI models with their domain knowledge.
AI-assisted Decision Making: combines the strengths of AI models and human experts to reach better outcomes than either could achieve alone.
Trust Calibration: key to success; the human must recognize when to trust and when to distrust the AI for the joint decision to improve.
The study examines whether showing the model's confidence score and local, per-instance explanations helps people calibrate their trust and improves decision performance.
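To make the setup concrete, below is a minimal sketch (ours, not code from the paper) of the accept-or-override flow such a study examines. `ai_predict`, `assisted_decision`, and `TRUST_THRESHOLD` are hypothetical stand-ins for a trained classifier, the joint decision step, and the confidence level at which a calibrated human stops deferring to the AI.

```python
from typing import Tuple

TRUST_THRESHOLD = 0.70  # assumed cutoff for trusting the AI's suggestion


def ai_predict(features: dict) -> Tuple[int, float]:
    """Stand-in for a trained classifier returning (label, confidence).
    A real system would call something like model.predict_proba here."""
    return 1, 0.62


def assisted_decision(features: dict, human_label: int) -> int:
    """Accept the AI's label when its confidence clears the threshold;
    otherwise fall back on the human's own judgment."""
    ai_label, confidence = ai_predict(features)
    print(f"AI suggests {ai_label} (confidence {confidence:.0%})")
    return ai_label if confidence >= TRUST_THRESHOLD else human_label


decision = assisted_decision({"income": 52_000}, human_label=0)
print(f"final decision: {decision}")  # confidence 62% < 70%, so the human's label wins
```

The displayed confidence score is what lets the human apply a threshold at all; the study asks whether people actually use it (and local explanations) this way.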
Key Findings
Confidence scores can help calibrate trust in AI models.
Trust calibration alone may not improve decision-making; gains depend on whether the human can complement the AI by catching its errors (see the simulation sketch after this list).
The study identifies problems with using local, per-instance explanations to support AI-assisted decisions.
The authors call for new approaches to AI explainability.
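The complementarity finding can be illustrated with a toy simulation (ours, not the paper's experiment): with perfectly calibrated trust, the human takes over exactly the low-confidence cases, so the team only beats the AI alone if the human is more accurate than the AI on those cases. All rates below (`AI_CONFIDENT_RATE` and the accuracy figures) are assumed for illustration.

```python
import random

random.seed(1)
N = 100_000

AI_CONFIDENT_RATE = 0.80  # assumed share of cases where the AI is confident
AI_ACC_CONFIDENT = 0.95   # assumed AI accuracy on its confident cases
AI_ACC_UNSURE = 0.55      # assumed AI accuracy on low-confidence cases


def team_accuracy(human_acc_on_hard_cases: float) -> float:
    """Simulate a human with perfectly calibrated trust: they accept the AI
    when it is confident and decide themselves when it is not."""
    correct = 0
    for _ in range(N):
        if random.random() < AI_CONFIDENT_RATE:
            correct += random.random() < AI_ACC_CONFIDENT
        else:
            correct += random.random() < human_acc_on_hard_cases
    return correct / N


print(f"AI alone:                {team_accuracy(AI_ACC_UNSURE):.1%}")
print(f"non-complementary human: {team_accuracy(0.55):.1%}")
print(f"complementary human:     {team_accuracy(0.80):.1%}")
```

In expectation, the AI alone and the non-complementary human both score 0.8 × 0.95 + 0.2 × 0.55 = 87%, while the complementary human lifts the team to 0.8 × 0.95 + 0.2 × 0.80 = 92%: calibration pays off only when the human can actually catch the AI's mistakes.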
Publication Information
Published in: FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Date: January 2020
Publisher: Association for Computing Machinery
Location: New York, NY, USA
Contributors
Yunfeng Zhang, Q. Vera Liao, and Rachel K. E. Bellamy (IBM Research).
Author Tags
Confidence
Decision Support
Explainable AI
Trust
Conclusion
The research highlights the importance of trust calibration in AI-assisted decision making.
It challenges the effectiveness of local explanations and encourages the development of better explainability methods to strengthen human-AI collaboration.