Open Access | Bridging the Cognitive Gap: A Comparative Analysis of Contrastive and Feature-Based Explainability in High-Stakes Artificial Intelligence Systems
Abstract
Background: As Artificial Intelligence (AI) systems, particularly Deep Neural Networks (DNNs), achieve superhuman performance in medical diagnostics and financial risk assessment, their inherent opacity—the "Black Box" problem—remains a critical barrier to adoption. Stakeholders in high-stakes domains require not just accurate predictions, but intelligible justifications that align with human cognitive reasoning.
Methods: This study provides a comparative evaluation of prominent Explainable AI (XAI) frameworks, specifically focusing on the dichotomy between feature-attribution methods (LIME, SHAP) and contrastive explanation approaches. We analyze these methodologies against a framework of "explanation effectiveness," assessing criteria such as local fidelity, consistency, and cognitive alignment with human decision-makers.
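For intuition about the feature-attribution side of this comparison, the minimal sketch below computes exact Shapley-value attributions for a toy two-feature risk score by enumerating feature coalitions. The scoring function, feature names, and baseline are illustrative assumptions rather than models from this study; libraries such as SHAP approximate the same quantity efficiently for realistic models.

```python
# Minimal sketch (illustrative only): exact Shapley-value feature attribution
# for a toy scoring function, computed by enumerating all feature coalitions.
from itertools import combinations
from math import factorial


def f(x):
    # Toy "risk score": weighted sum plus an interaction term (assumed for illustration).
    return 0.5 * x["age"] + 2.0 * x["marker"] + 1.5 * x["age"] * x["marker"]


def shapley_values(f, instance, baseline):
    """Exact Shapley attribution: each feature's weighted average marginal
    contribution over all coalitions of the remaining features."""
    features = list(instance)
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        contrib = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Features in the coalition take the instance's values; the rest stay at baseline.
                x_s = {j: (instance[j] if j in coalition else baseline[j]) for j in features}
                x_si = dict(x_s, **{i: instance[i]})
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib += weight * (f(x_si) - f(x_s))
        phi[i] = contrib
    return phi


instance = {"age": 1.0, "marker": 1.0}
baseline = {"age": 0.0, "marker": 0.0}
print(shapley_values(f, instance, baseline))  # attributions sum to f(instance) - f(baseline)
```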
Results: Our analysis suggests that while feature-additive attribution methods such as SHAP provide mathematically consistent contribution scores for input variables, they often fail to supply the causal intuition required in clinical settings. Conversely, contrastive explanations, which highlight "pertinent negatives" (features absent from the input that would need to be present to produce a different outcome), prove more effective at fostering user trust and actionable insight, despite higher computational costs.
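To make the contrastive side concrete, the sketch below illustrates the pertinent-negative idea of Dhurandhar et al. (2018) with a brute-force search: find the smallest set of findings absent from a case whose presence would flip the classifier's decision. The rule-based classifier and finding names are invented for illustration, and the exhaustive search stands in for the optimization-based procedure used on real models.

```python
# Minimal sketch (illustrative only, not the optimization used in practice):
# search for the smallest set of absent findings whose presence flips the prediction.
from itertools import combinations


def classify(findings):
    """Toy diagnostic rule: positive only if at least three findings are present."""
    return "positive" if len(findings) >= 3 else "negative"


def pertinent_negatives(classify, present, candidate_findings, target="positive"):
    """Return the smallest set of absent findings whose addition yields the target class."""
    absent = [f for f in candidate_findings if f not in present]
    for k in range(1, len(absent) + 1):
        for extra in combinations(absent, k):
            if classify(present | set(extra)) == target:
                return set(extra)  # minimal contrastive set found
    return None  # no combination of absent findings changes the decision


present = {"fever", "cough"}
all_findings = ["fever", "cough", "infiltrate_on_xray", "elevated_crp"]
print(classify(present))                                     # 'negative'
print(pertinent_negatives(classify, present, all_findings))  # e.g. {'infiltrate_on_xray'}
```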
Conclusion: The transition from Black Box to "Glass Box" models is not merely a technical challenge but a socio-technical one. We conclude that for XAI to succeed in high-stakes environments, future architectures must prioritize contrastive reasoning that mirrors the differential diagnosis process used by human experts, moving beyond simple feature highlighting toward semantic intelligibility.
Keywords
Explainable AI (XAI), Medical Artificial Intelligence, Black Box Models, SHAP
References
Sheu, R.-K.; Pardeshi, M.S. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors 2022, 22, 8068.
Tjoa, E.; Guan, C. A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4793–4813.
Jung, J.; Lee, H.; Jung, H.; Kim, H. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon 2023, 9, e16110.
Rai, A. Explainable AI: From black box to glass box. J. Acad. Mark. Sci. 2020, 48, 137–141.
Loyola-Gonzalez, O. Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access 2019, 7, 154096–154113.
Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
Gerlings, J.; Jensen, M.S.; Shollo, A. Explainable AI, but explainable to whom? An exploratory case study of xAI in healthcare. In Handbook of Artificial Intelligence in Healthcare: Practicalities and Prospects; Lim, C.-P., Chen, Y.-W., Vaidya, A., Mahorkar, C., Jain, L.C., Eds.; Springer International Publishing: Cham, Switzerland, 2022; Volume 2, pp. 169–198.
Dhurandhar, A.; Chen, P.-Y.; Luss, R.; Tu, C.-C.; Ting, P.; Shanmugam, K.; Das, P. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in Neural Information Processing Systems; 2018.
Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO); IEEE, 2018; pp. 0210–0215.
Ferreira, A.; Madeira, S.C.; Gromicho, M.; Carvalho, M.d.; Vinga, S.; Carvalho, A.M. Predictive medicine using interpretable recurrent neural networks. In Proceedings of the International Conference on Pattern Recognition; 2021.
Shankheshwaria, Y.V.; Patel, D.B. Explainable AI in Machine Learning: Building Transparent Models for Business Applications. Front. Emerg. Artif. Intell. Mach. Learn. 2025, 2, 8–15.
Gramegna, A.; Giudici, P. SHAP and LIME: An evaluation of discriminative power in credit risk. Front. Artif. Intell. 2021.
Copyright License
Copyright (c) 2025 Dr. Melorina V. Strakhovskaya, Dr. Kostelia P. Vorontsenko

This work is licensed under a Creative Commons Attribution 4.0 International License.