PhD Supervision: FRAMEWORK FOR USER-CENTRIC EXPLANATION DESIGN FOR AI RISK PREDICTIVE MODEL
Timeline
January 2019 – January 2024
Contributors
- Ali Hassan – PhD Candidate
- Mansoor Abdulhak – Co-Supervisor
Abstract
Interpreting predictions from high-performing machine learning models poses considerable challenges in healthcare contexts. Despite a growing body of research on model interpretability, there is a notable gap in aligning explanation approaches with end-user perspectives, particularly in healthcare settings. This lack of user engagement hinders the development of explanations that effectively support healthcare providers' understanding of model predictions. To address this issue, this study incorporates healthcare professionals' perspectives into the design of explanations for an AI-based risk prediction model. A framework for designing user-centric explanations for AI-based models is proposed and applied to an AI risk prediction model, aiming to enhance transparency and interpretability for end-users. The framework encourages explanations that are not only understandable and trustworthy but also relevant for healthcare professionals. It is implemented and validated through the development of an AI-based risk prediction model in a real-world healthcare application. Through literature studies and user feedback, key factors for constructing contextualized explanations in AI-based models are identified. By integrating the user-centric explanation framework into the model development process, its effectiveness and impact on healthcare professionals' understanding are evaluated. Results indicate an improvement in healthcare professionals' perceptions of the predictive model when paired with user-centric explanations. However, no significant effect is observed on provider privacy concerns or decision-making efficiency; limitations in the study design, such as a small sample size, may have restricted the detection of significant effects on decision-making.
Nevertheless, the favorable response from healthcare providers to the predictive model enhanced with user-centric explanations indicates a promising path for explaining machine learning model predictions in healthcare settings. This research introduces a novel framework for user-centric explanation design for AI-based models, with potential applicability beyond the healthcare domain. It also offers important perspectives on model interpretability and explanation in healthcare, promoting discussion of how to effectively convey information from machine learning models to end-users.
Publications
- Hassan, A., Abdulhak, M. A. A., Bin Sulaiman, R., & Kahtan, H. (2021). User centric explanations: A breakthrough for explainable models. In 2021 International Conference on Information Technology (ICIT) (pp. 702–707). IEEE. https://doi.org/10.1109/ICIT52682.2021.9491641
- Hassan, A. H., bin Sulaiman, R., Abdulhak, M., & Al-Ani, H. K. (2023). Balancing technological advances with user needs: User-centered principles for AI-driven smart city healthcare monitoring.
- Hassan, A. H., bin Sulaiman, R., Abdulhak, M., & Kahtan, H. (2025). Bridging Data and Clinical Insight: Explainable AI for ICU Mortality Risk Prediction. International Journal of Advanced Computer Science & Applications, 16(2).
- Hassan, A. H., Ahmed, E. M., Hussien, J. M., Bin Sulaiman, R., Abdulhak, M., & Kahtan, H. (2025). A cyber physical sustainable smart city framework toward Society 5.0: Explainable AI for enhanced SDGs monitoring. Research in Globalization, 10, 100275. https://doi.org/10.1016/j.resglo.2025.100275