Unveiling explainability in artificial intelligence: a step towards transparent AI
2025-01-31 https://doi.org/10.14419/f2agrs86
Keywords: Explainable AI; Transparency; Post-Hoc Explanations; Causality-Based Explanations; Neuro-Symbolic AI; Ethics in AI; AI Accountability; Trustworthy AI; AI Interpretability; Autonomous Systems.
Abstract
Explainability in artificial intelligence (AI) is essential for building transparent, trustworthy, and ethical systems, particularly in high-stakes domains such as healthcare, finance, justice, and autonomous systems. This study examines the foundations of AI explainability, its critical role in fostering trust, and the current methodologies used to interpret AI models, including post-hoc techniques, intrinsically interpretable models, and hybrid approaches. Despite these advances, challenges persist, including trade-offs between accuracy and interpretability, scalability, ethical risks, and transparency gaps. The paper explores emerging trends such as causality-based explanations, neuro-symbolic AI, and personalized explanation frameworks, while emphasizing the integration of ethics and the need for automation in explainability. Future directions stress collaboration among researchers, practitioners, and policymakers to establish industry standards and regulations, ensuring that AI systems align with societal values and expectations.
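The abstract refers to post-hoc techniques as one family of methods for interpreting AI models. As a minimal, hypothetical sketch (not taken from the paper), the Python snippet below computes exact Shapley-value attributions for a toy three-feature model; the model, feature names, instance, and baseline values are all assumptions chosen for illustration, and practical explainers approximate the same quantity for real black-box models.

# Illustrative sketch only: exact Shapley-value attributions for a toy model
# with three features, as an example of a post-hoc explanation. All names
# (toy_model, instance, baseline) are hypothetical.
from itertools import combinations
from math import factorial

FEATURES = ["age", "income", "credit_history"]

def toy_model(x):
    # Stand-in "black box": a simple weighted sum of the three features.
    return 0.3 * x["age"] + 0.5 * x["income"] + 0.2 * x["credit_history"]

def value(subset, instance, baseline):
    # Model output when only features in `subset` take the instance's values;
    # the remaining features are held at their baseline (reference) values.
    x = {f: (instance[f] if f in subset else baseline[f]) for f in FEATURES}
    return toy_model(x)

def shapley(feature, instance, baseline):
    # Average marginal contribution of `feature` over all feature coalitions.
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            gain = (value(set(subset) | {feature}, instance, baseline)
                    - value(set(subset), instance, baseline))
            total += weight * gain
    return total

if __name__ == "__main__":
    instance = {"age": 1.0, "income": 2.0, "credit_history": 0.5}
    baseline = {"age": 0.0, "income": 0.0, "credit_history": 0.0}
    for f in FEATURES:
        print(f, round(shapley(f, instance, baseline), 4))

For this linear toy model the attributions reduce to weight * (instance value - baseline value), i.e. 0.3, 1.0, and 0.1; post-hoc explainers generalize this additive decomposition to arbitrary black-box models.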
How to Cite
Boya Marqas, R., Almufti, S. M., & Azad Yusif, R. (2025). Unveiling explainability in artificial intelligence: a step towards transparent AI. International Journal of Scientific World, 11(1), 13-20. https://doi.org/10.14419/f2agrs86