Explainable AI: Enhancing Machine Learning Model Interpretability with Generative AI

File: TCDMAA4341.pdf (5.39 MB, Adobe PDF)

Abstract

In today’s data-driven world, machine learning (ML) powers critical decisions across sectors such as healthcare and finance, but the opacity of complex “black-box” models undermines trust, accountability, and adoption. This thesis tackles the urgent challenge of explainability by exploring how Large Language Models (LLMs) can transform post-hoc explanations into clear, actionable insights for diverse stakeholders. Moving beyond traditional techniques such as SHAP and counterfactuals, which often overwhelm users with complexity, this research introduces a dynamic framework that integrates LLMs as narrative explainers. The methodology combines robust ML pipelines with post-hoc interpreters, enhanced through prompt engineering, to generate explanations in both technical and business-friendly formats. Experiments on real-world datasets, including emergency healthcare and bank fraud detection, benchmarked leading LLMs such as GPT-4o, Claude 3, LLaMA 3, and DeepSeek. Results show that GPT-4o consistently delivers the most accurate, fluent, and stakeholder-aligned explanations, while local open-weight models offer competitive, privacy-preserving alternatives. The evaluation, comprising linguistic heuristics, semantic similarity metrics, and human judgment, demonstrated significant gains in clarity, completeness, and trustworthiness over conventional explainers. Crucially, counterfactual-based narratives proved highly intuitive for decision-making, while SHAP-based explanations achieved greater technical depth. By reframing LLMs as interpretable mediators rather than mere translators, this study provides empirical evidence that generative AI can close the gap between ML performance and human understanding. The contributions extend beyond academic insight, offering practical guidelines for deploying Explainable AI in high-stakes domains where transparency is not optional but essential for fairness, accountability, and trust.
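The core idea of the framework described above, turning post-hoc feature attributions into prompts that an LLM rewrites as a stakeholder-tailored narrative, can be illustrated with a minimal sketch. The function below is hypothetical (not code from the thesis): it assumes SHAP values have already been computed (here, hard-coded illustrative attributions for a fraud-detection prediction) and shows only the prompt-engineering step that would precede an LLM call.

```python
def build_explanation_prompt(prediction, shap_values, audience="business"):
    """Format SHAP attributions into an LLM prompt for a narrative explanation.

    shap_values: dict mapping feature name -> contribution to the prediction.
    audience: "business" for plain-language output, "technical" for detail.
    """
    # Rank features by absolute contribution so the narrative leads with
    # the most influential factors.
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.3f}" for name, value in ranked]
    tone = ("plain business language, avoiding jargon"
            if audience == "business" else "precise technical language")
    return (
        f"The model predicted: {prediction}.\n"
        "Feature contributions (SHAP values):\n"
        + "\n".join(lines)
        + f"\n\nExplain this prediction in {tone}, in at most three sentences."
    )

# Hypothetical attributions for a bank-fraud prediction (illustrative only).
prompt = build_explanation_prompt(
    "fraud (p=0.91)",
    {"transaction_amount": 0.42, "country_mismatch": 0.31, "account_age": -0.12},
)
print(prompt)
```

In a full pipeline, the returned prompt would be sent to a model such as GPT-4o or a local open-weight LLM, and the `audience` switch is what produces the technical versus business-friendly variants the abstract describes.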

Description

Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics

Keywords

Explainable AI (XAI); Machine Learning; Large Language Models; Post Hoc Explainers; Interpretability; SDG 9 - Industry, innovation and infrastructure; SDG 16 - Peace, justice and strong institutions
