How Unstable Is LIME? Evaluating Sensitivity and Trust in Explainable AI

File: TCDMAA4979.pdf (2.27 MB, Adobe PDF)

Abstract

As machine learning systems permeate high-stakes fields such as healthcare, finance, and public policy, the demand for transparent and interpretable algorithms has intensified, giving rise to Explainable Artificial Intelligence (XAI) techniques that shed light on opaque "black-box" models. Among these, LIME (Local Interpretable Model-Agnostic Explanations) stands out for generating simplified surrogate models around individual predictions, yet its dependence on randomized perturbations compromises its reliability: slight variations in random seeds or sampling strategies can produce divergent explanations for the same input. To quantify this instability, we designed a unified experimental framework evaluating LIME's numerical consistency on four tabular datasets (Titanic, Iris, Wine Quality, and California Housing), using Random Forests for classification and regression tasks. We applied LIME repeatedly to identical instances under controlled conditions, assessing stability through both visual diagnostics (feature-weight trajectories, boxplots, and frequency histograms) and quantitative measures (feature-weight standard deviations, Top-1 feature recurrence rate, and local approximation errors). Our results indicate that while LIME delivers stable, coherent feature attributions in simple classification scenarios, its explanations fluctuate markedly in complex regression contexts, with high variability in feature weights and approximation fidelity despite recurring top-ranked predictors. By systematically documenting LIME's context-dependent variability and proposing a reproducible evaluation protocol, this thesis highlights a critical limitation of a popular XAI tool and underscores the necessity of rigorous stability checks before deploying local explanation methods in sensitive or regulated environments.
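The repeated-application protocol described in the abstract can be sketched in a few lines. This is a hedged, self-contained illustration, not the thesis code: `explain` is a hypothetical stand-in for a real LIME call (e.g. `LimeTabularExplainer.explain_instance`), returning seed-dependent noisy feature weights so the two quantitative measures named above, per-feature weight standard deviation and Top-1 feature recurrence rate, can be computed without the `lime` package. Feature names and weight values are invented for illustration.

```python
import random
from statistics import stdev
from collections import Counter

# Illustrative features and "true" local weights (hypothetical values).
FEATURES = ["age", "fare", "pclass"]
TRUE_WEIGHTS = {"age": 0.40, "fare": 0.25, "pclass": -0.10}

def explain(seed):
    """Stand-in for one LIME run: weights perturbed by seed-dependent noise."""
    rng = random.Random(seed)
    return {f: w + rng.gauss(0, 0.05) for f, w in TRUE_WEIGHTS.items()}

def stability_metrics(runs):
    """Per-feature weight std and Top-1 recurrence over repeated explanations."""
    stds = {f: stdev(r[f] for r in runs) for f in FEATURES}
    # Top-1 feature of each run = largest weight by absolute value.
    top1 = [max(r, key=lambda f: abs(r[f])) for r in runs]
    feature, count = Counter(top1).most_common(1)[0]
    return stds, feature, count / len(runs)

# Re-explain the same instance under 30 different random seeds.
runs = [explain(seed) for seed in range(30)]
stds, top_feature, recurrence = stability_metrics(runs)
print(stds, top_feature, recurrence)
```

A real evaluation would replace `explain` with an actual LIME explainer call on a fixed instance; high weight variance or a low recurrence rate then signals exactly the instability the thesis measures.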

Description

Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science

Keywords

Explainable Artificial Intelligence (XAI); LIME; Model interpretability; Stability of explanations; Tabular data; Local surrogate models; Feature attribution; Machine learning evaluation
