Publication
Explainable AI: Enhancing Machine Learning Model Interpretability with Generative AI
| datacite.subject.fos | Natural Sciences::Computer and Information Sciences | pt_PT |
| dc.contributor.advisor | Agostinho, Nuno Filipe Rosa | |
| dc.contributor.advisor | Baptista, Márcia Lourenço | |
| dc.contributor.author | Castelhano, Inês Dinis | |
| dc.date.accessioned | 2025-11-12T12:11:08Z | |
| dc.date.available | 2025-11-12T12:11:08Z | |
| dc.date.issued | 2025-10-29 | |
| dc.description | Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics | pt_PT |
| dc.description.abstract | In today’s data-driven world, machine learning (ML) powers critical decisions across sectors such as healthcare and finance, but the opacity of complex “black-box” models undermines trust, accountability, and adoption. This thesis tackles the urgent challenge of explainability by exploring how Large Language Models (LLMs) can transform post-hoc explanations into clear, actionable insights for diverse stakeholders. Moving beyond traditional techniques like SHAP and counterfactuals, which often overwhelm users with complexity, this research introduces a dynamic framework that integrates LLMs as narrative explainers. The methodology combines robust ML pipelines with post-hoc interpreters, enhanced through prompt engineering, to generate explanations in both technical and business-friendly formats. Experiments on real-world datasets, including emergency healthcare and bank fraud detection, benchmarked leading LLMs such as GPT-4o, Claude 3, LLaMA 3, and DeepSeek. Results show that GPT-4o consistently delivers the most accurate, fluent, and stakeholder-aligned explanations, while local open-weight models offer competitive, privacy-preserving alternatives. The evaluation, comprising linguistic heuristics, semantic similarity metrics, and human judgment, demonstrated significant gains in clarity, completeness, and trustworthiness over conventional explainers. Crucially, counterfactual-based narratives proved highly intuitive for decision-making, while SHAP-based explanations achieved greater technical depth. By reframing LLMs as interpretable mediators rather than mere translators, this study provides empirical evidence that generative AI can close the gap between ML performance and human understanding. The contributions extend beyond academic insight, offering practical guidelines for deploying Explainable AI in high-stakes domains where transparency is not optional but essential for fairness, accountability, and trust. | pt_PT |
| dc.identifier.tid | 204072573 | |
| dc.identifier.uri | http://hdl.handle.net/10362/190584 | |
| dc.language.iso | eng | pt_PT |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | pt_PT |
| dc.subject | Explainable AI (XAI) | pt_PT |
| dc.subject | Machine Learning | pt_PT |
| dc.subject | Large Language Models | pt_PT |
| dc.subject | Post Hoc Explainers | pt_PT |
| dc.subject | Interpretability | pt_PT |
| dc.subject | SDG 9 - Industry, innovation and infrastructure | pt_PT |
| dc.subject | SDG 16 - Peace, justice and strong institutions | pt_PT |
| dc.title | Explainable AI: Enhancing Machine Learning Model Interpretability with Generative AI | pt_PT |
| dc.type | master thesis | |
| dspace.entity.type | Publication | |
| rcaap.rights | openAccess | pt_PT |
| rcaap.type | masterThesis | pt_PT |
| thesis.degree.name | Master's in Data Science and Advanced Analytics, specialization in Business Analytics | pt_PT |
