User Interface Interpretability and Human-AI Interaction
Barnabas Joel
Ladoke Akintola University of Technology
Abstract
For successful Human-AI Interaction (HAI), AI systems must be understandable and easy to operate, as these systems increasingly inform everyday decisions. This article examines how user interface design and interpretability are intertwined, and how combining them makes AI applications more usable, trustworthy, and comprehensible. We demonstrate how AI functionality should be integrated into the parts of the user interface where it is most meaningful, supported by clear context, interactive feedback, and visual explanations that help users gain insight into system behavior. A case study and user testing show that users are far more likely to trust and adopt AI-generated outcomes when the user interface makes them comprehensible. The findings contribute to the design of AI systems that are easy to use and effective in practical applications, while also extending research in explainable artificial intelligence.
Keywords
Human-AI Interaction, Interpretability, Explainable AI (XAI), User Interface Design, Trust in AI, Human-Centered AI, Visual Explanations, Interactive Systems, Usability, AI Transparency.