Trust Calibration in AI-assisted Decision Making

May 5, 2025

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-assisted Decision Making

Abstract

  • AI is increasingly used in high-stakes decision-making scenarios, but full automation is often undesirable.
  • Human experts can complement AI models with their domain knowledge.
  • AI-assisted decision making: combines the strengths of AI and human experts to optimize decision outcomes.
  • Trust calibration: key to success; the human must know when to trust and when to distrust the AI.
  • The study examines whether confidence scores and local explanations can calibrate trust and improve joint performance.

Key Findings

  • Showing the AI's confidence score helps people calibrate their trust in the model.
  • Trust calibration alone does not guarantee better decisions; improvement depends on whether the human can complement and correct the AI's errors.
  • The study identifies problems with using local explanations in AI-assisted decisions.
  • These results invite exploration of new approaches to AI explainability.
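The findings above hinge on whether a model's confidence scores actually track its accuracy. A common way to check this is the expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. The sketch below is illustrative only, with synthetic data; it is not the study's code.

```python
# Minimal sketch: measuring whether confidence scores are calibrated,
# i.e. whether stated confidence matches observed accuracy.
# All data is synthetic; function name and binning scheme are assumptions.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Average |accuracy - mean confidence| over equal-width confidence
    bins, weighted by the fraction of predictions in each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece

# Well calibrated: 90% confidence, 9 of 10 correct -> ECE near 0.
print(expected_calibration_error([0.9] * 10, [1] * 9 + [0]))
# Overconfident: 90% confidence, only 5 of 10 correct -> ECE near 0.4.
print(expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5))
```

A low ECE means a user who adopts the model's confidence as their own trust level is, on average, trusting it the right amount, which is the kind of calibration the study probes.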

Publication Information

  • Published in: FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
  • Date: January 2020
  • Publisher: Association for Computing Machinery
  • Location: New York, NY, USA

Bibliometrics & Citations

  • Total Citations: 448
  • Total Downloads: 9,482

Conference

  • FAT* '20

Contributors

  • Researchers in AI, human-computer interaction, and related fields.

Author Tags

  1. Confidence
  2. Decision Support
  3. Explainable AI
  4. Trust

Conclusion

  • The research highlights the importance of trust calibration in AI-assisted decision making.
  • It challenges the effectiveness of local explanations and encourages the development of better explainability methods to enhance human-AI collaboration.