Use this identifier to cite or link to this record: http://hdl.handle.net/10362/190021
Title: The Expectancy-Capability Gap in Generative AI: When Privacy Protection Fails to Follow Knowledge
Author: Torres, Patrícia Braga
Advisor: Helaly, Yasser Mohamed Megahed Youssef Al
Keywords: Generative AI
Privacy Protection Behaviour
Expectancy-Capability Gap
Dual-Knowledge Paradigm
Self-Efficacy
Information-Motivation-Behavioural Model
Trust
Motivational Expectations
Privacy-Confidence Paradox
SDG 16 - Peace, justice and strong institutions
Defence Date: 29-Oct-2025
Abstract: Generative Artificial Intelligence (Gen-AI) systems are reshaping digital interaction, but they raise urgent concerns about data privacy due to their opaque design and dynamic learning processes. Despite growing interest in algorithmic accountability, existing privacy behaviour models fail to account for how users manage privacy in environments where system logic is non-transparent and control options are limited. To address this gap, this study develops an integrated model examining how knowledge, motivation, and behavioural skills influence privacy protection in Gen-AI, drawing on the Information-Motivation-Behavioural (IMB) framework, Self-Efficacy Theory, and Expectancy-Value Theory. A survey was conducted with 298 participants recruited from universities and professional networks, and Structural Equation Modelling using Partial Least Squares (PLS-SEM) was employed to analyse both direct and conditional pathways, including moderated mediation effects. The findings reveal two novel patterns: the expectancy-capability gap, in which users hold high expectations for AI privacy protections but feel limited in their ability to act, and the privacy-confidence paradox, in which greater AI experience reduces users' perceived self-efficacy because it heightens awareness of system limitations. The study also introduces a dual-knowledge framework separating technical use knowledge from privacy-specific knowledge, each of which independently predicts privacy self-efficacy. Challenging the traditional privacy calculus, this research shows that perceived benefits do not drive privacy protection in Gen-AI; instead, users' privacy-setting preferences and confidence in tool use are pivotal. The study offers a theory-driven model of Gen-AI privacy dynamics, highlighting motivational, cognitive, and contextual factors, with practical implications for designing user-centred, transparent, skill-fostering, and trust-calibrated AI systems.
Description: Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science
URI: http://hdl.handle.net/10362/190021
Degree: Master's in Data Science and Advanced Analytics, specialization in Data Science
Appears in Collections: NIMS - Master's Dissertations in Data Science and Advanced Analytics

Files in this record:
File: TCDMAA4381.pdf | Size: 1.3 MB | Format: Adobe PDF | View/Open: Restricted access. Request a copy from the author.



All records in this repository are protected by copyright law, with all rights reserved.