
Explainable AI: Enhancing Machine Learning Model Interpretability with Generative AI

datacite.subject.fos: Natural Sciences::Computer and Information Sciences
dc.contributor.advisor: Agostinho, Nuno Filipe Rosa
dc.contributor.advisor: Baptista, Márcia Lourenço
dc.contributor.author: Castelhano, Inês Dinis
dc.date.accessioned: 2025-11-12T12:11:08Z
dc.date.available: 2025-11-12T12:11:08Z
dc.date.issued: 2025-10-29
dc.description: Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics
dc.description.abstract: In today’s data-driven world, machine learning (ML) powers critical decisions across sectors such as healthcare and finance, but the opacity of complex “black-box” models undermines trust, accountability, and adoption. This thesis tackles the urgent challenge of explainability by exploring how Large Language Models (LLMs) can transform post-hoc explanations into clear, actionable insights for diverse stakeholders. Moving beyond traditional techniques like SHAP and counterfactuals, which often overwhelm with complexity, this research introduces a dynamic framework that integrates LLMs as narrative explainers. The methodology combines robust ML pipelines with post-hoc interpreters, enhanced through prompt engineering, to generate explanations in both technical and business-friendly formats. Experiments on real-world datasets, including emergency healthcare and bank fraud detection, benchmarked leading LLMs such as GPT-4o, Claude 3, LLaMA 3, and DeepSeek. Results show that GPT-4o consistently delivers the most accurate, fluent, and stakeholder-aligned explanations, while local open-weight models offer competitive, privacy-preserving alternatives. The evaluation, comprising linguistic heuristics, semantic similarity metrics, and human judgment, demonstrated significant gains in clarity, completeness, and trustworthiness over conventional explainers. Crucially, counterfactual-based narratives proved highly intuitive for decision-making, while SHAP-based explanations achieved greater technical depth. By reframing LLMs as interpretable mediators rather than mere translators, this study provides empirical evidence that generative AI can close the gap between ML performance and human understanding. The contributions extend beyond academic insight, offering practical guidelines for deploying Explainable AI in high-stakes domains where transparency is not optional but essential for fairness, accountability, and trust.
dc.identifier.tid: 204072573
dc.identifier.uri: http://hdl.handle.net/10362/190584
dc.language.iso: eng
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Explainable AI (XAI)
dc.subject: Machine Learning
dc.subject: Large Language Models
dc.subject: Post Hoc Explainers
dc.subject: Interpretability
dc.subject: SDG 9 - Industry, innovation and infrastructure
dc.subject: SDG 16 - Peace, justice and strong institutions
dc.title: Explainable AI: Enhancing Machine Learning Model Interpretability with Generative AI
dc.type: master thesis
dspace.entity.type: Publication
rcaap.rights: openAccess
rcaap.type: masterThesis
thesis.degree.name: Mestrado em Ciência de Dados e Métodos Analíticos Avançados, especialização em Business Analytics

Files

Main
Name: TCDMAA4341.pdf
Size: 5.39 MB
Format: Adobe Portable Document Format

License
Name: license.txt
Size: 348 B
Format: Item-specific license agreed upon to submission