
NIMS - Master's Dissertations in Information Management

Permanent URI for this collection:

Browse

Recent entries

Showing 1 - 10 of 1263
  • Comparative Evaluation of Deep Learning Architectures for Electricity Price Forecasting: An Empirical Analysis of Model Behavior Under Input and Configuration Variations
    Publication . Ye, Lisa; Scott, Ian James
    This thesis conducts a comparative evaluation of five forecasting models: ARIMA, LSTM, S-Mamba, Temporal Fusion Transformer (TFT), and ETSFormer, using four years of hourly data from the Portuguese electricity market. While these models have been applied in various time series contexts, their direct comparison in electricity price forecasting remains limited, particularly for newer architectures such as S-Mamba, TFT, and ETSFormer. The Portuguese market is marked by strong seasonality, irregular volatility, and dependence on multiple exogenous variables, which provides a challenging setting for assessing forecasting performance. The results show that ARIMA captures long-term trends but struggles with nonlinear and seasonal dynamics. LSTM improves medium-range temporal learning yet smooths intraday fluctuations and underestimates extreme price spikes. S-Mamba demonstrates stronger short-term responsiveness and better alignment with observed volatility, though it still moderates extreme events. TFT offers robust forecasts through its attention-based variable selection but at a higher computational cost, while ETSFormer performs noticeably worse than the other deep learning models, showing limited adaptability and higher overall error levels. Across all models, sharp intraday peaks, rare price collapses, and event-driven deviations remain difficult to predict, reflecting the mix of structural regularities and unpredictable shocks that characterize electricity prices. Feature selection and hyperparameter tuning further show that forecasting accuracy depends heavily on model configuration and input design. This study provides a direct benchmarking of these emerging architectures on a common electricity market dataset and highlights opportunities for future research.
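The kind of benchmarking this abstract describes rests on a rolling-origin (walk-forward) evaluation. A minimal sketch, with toy baselines (persistence and a 24-hour seasonal naive) standing in for the thesis's actual ARIMA/LSTM/Transformer models, and an invented hourly series in place of the Portuguese market data:

```python
import math

def mae(errors):
    """Mean absolute error over a list of forecast errors."""
    return sum(abs(e) for e in errors) / len(errors)

def rolling_eval(series, forecast_fn, start):
    """Walk forward through the series, forecasting one step at a time
    using only the history available at each step."""
    errors = []
    for t in range(start, len(series)):
        pred = forecast_fn(series[:t])
        errors.append(series[t] - pred)
    return mae(errors)

persistence = lambda history: history[-1]      # last observed price
seasonal_naive = lambda history: history[-24]  # same hour yesterday

# Toy hourly "price" series: a daily cycle plus a slow drift.
prices = [50 + 10 * math.sin(2 * math.pi * h / 24) + 0.01 * h
          for h in range(24 * 30)]

mae_persistence = rolling_eval(prices, persistence, start=48)
mae_seasonal = rolling_eval(prices, seasonal_naive, start=48)
```

On a series dominated by a daily cycle, the seasonal naive beats persistence, which is why hourly-price benchmarks typically report errors against such seasonal baselines rather than against a flat last-value forecast.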
  • Modeling the Effects of Generative AI Use in Higher Education: The Role of Competence and Self-Regulation in Shaping Autonomy and Critical Thinking
    Publication . Mateus, Carolina Xavier; Neves, Maria de Fátima dos Santos Trindade
    The debate over the impact of Generative Artificial Intelligence (GenAI) on students’ 21st-century skills has intensified as these technologies have rapidly spread across higher education. A central concern is whether such tools may substitute essential competencies rather than foster their development. Existing research has predominantly focused on technology adoption and technical capabilities, paying comparatively less attention to the motivational and cognitive mechanisms through which AI may influence meaningful learning. Drawing on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), Self-Determination Theory (SDT), and Self-Regulated Learning (SRL) theory, this study proposes and tests an integrated model examining both the determinants of GenAI adoption and its implications for students’ autonomy and critical thinking. A quantitative research design was employed, yielding 209 valid responses from higher education students via an online survey. The conceptual model was assessed using Partial Least Squares Structural Equation Modeling (PLS-SEM). Results indicate that performance expectancy is the strongest predictor of behavioural intention, whereas effort expectancy is not significant. Behavioural intention, in turn, strongly predicts actual use. GenAI use positively influences perceived competence and self-regulation, with competence significantly enhancing autonomy. The effect on critical thinking emerges indirectly through self-regulation. Overall, the findings suggest that GenAI does not inherently promote higher-order skills; rather, such development depends on its intentional, reflective, and pedagogically guided integration into students’ learning processes. By integrating technology adoption, motivational, and self-regulatory perspectives into a single explanatory framework, this study advances understanding of how GenAI use translates into higher-order learning outcomes in higher education.
  • Enhancing Fraud Detection: The Role of Causal Inference in Identifying Key Drivers and Improving Model Transparency
    Publication . Ayad, Kiroles Fady; Damásio, Bruno Miguel Pinto
    Financial fraud represents a persistent and evolving threat to global financial systems. As transactions migrate to digital channels, fraudulent tactics evolve more rapidly than traditional rule-based detection systems can accommodate. Machine learning (ML) has emerged as the standard defense mechanism, demonstrating superior capability in identifying subtle behavioral patterns across high-dimensional transaction datasets. However, these models frequently operate as opaque decision systems, identifying correlations without elucidating the underlying causal mechanisms. In high-stakes financial environments, this opacity creates regulatory compliance challenges and undermines the defensibility of automated decisions. This thesis proposes integrating causal inference directly into ML architectures to move beyond pattern recognition toward identifying the actual behavioral drivers of fraud. We developed an Enhanced Causal Neural Network (ECNN) and trained it on a causally structured synthetic dataset of 50,000 transactions. The methodology employs the DoWhy framework to estimate causal effects for behavioral and security features, which are then embedded into the network through a custom cost-sensitive loss function. This approach forces the model to learn relationships that are stable and causally significant, rather than merely statistically convenient. Experimental results demonstrate that causal priors provide practical operational benefits beyond theoretical improvements. Within the controlled experimental environment, the ECNN matches the discriminative performance of Random Forest classifiers while outperforming them in critical operational metrics, including probability calibration and balanced error handling. The model provides clear, structural explanations for risk scores rather than vague feature importances. These findings suggest that causal inference offers a viable pathway toward fraud detection systems that are simultaneously accurate, transparent, and defensible under regulatory scrutiny.
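The cost-sensitive loss idea mentioned in the abstract can be sketched in miniature. The weights below are arbitrary placeholders; the thesis derives its weighting from DoWhy causal-effect estimates, which this stdlib-only toy does not reproduce:

```python
import math

def cost_sensitive_bce(y_true, p_pred, w_fraud=5.0, w_legit=1.0):
    """Binary cross-entropy where misclassifying fraud (y=1) is penalised
    more heavily than misclassifying legitimate traffic (y=0).
    The 5:1 weighting is an illustrative assumption."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, 1e-12), 1 - 1e-12)  # clamp for numerical safety
        if y == 1:
            total += -w_fraud * math.log(p)
        else:
            total += -w_legit * math.log(1 - p)
    return total / len(y_true)

# A confident miss on a fraud case costs more than the mirror-image
# miss on a legitimate case, pushing the model toward recall on fraud.
loss_missed_fraud = cost_sensitive_bce([1], [0.1])  # fraud scored as 10% risk
loss_missed_legit = cost_sensitive_bce([0], [0.9])  # legit scored as 90% risk
```

With symmetric predictions, the fraud miss is weighted exactly `w_fraud / w_legit` times the legitimate miss, which is the asymmetry a cost-sensitive loss is meant to encode.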
  • Operationalising Intelligent Augmentation: A multi-layer analysis of LLM use in organizations
    Publication . Andrade, Miguel Filipe Jacinto; Malta, Pedro Manuel Carqueijeiro Espiga da Maia
    Large Language Models have rapidly become part of everyday organisational work by enabling employees to perform a broad range of knowledge-oriented tasks with greater speed, structure and cognitive support. Their accessibility has made them useful for drafting, synthesizing information, shaping early-stage reasoning and exploring alternative perspectives. Although these capabilities create clear productivity gains, they also introduce challenges related to reliability, completeness of reasoning, data exposure and the potential misuse of outputs that appear coherent but are not fully accurate. Organisations must therefore understand not only how LLMs can accelerate work but also how they can be integrated into decision making in a dependable and accountable way. This dissertation examines how employees and organisations operationalize Intelligent Augmentation in LLM-supported work and identifies the capabilities needed to ensure that this use remains reliable and responsible in practice. Intelligent Augmentation is understood as a collaborative model in which human judgment and contextual understanding guide the use of AI-generated content. However, the study shows that the quality of augmentation is strongly shaped by how individuals frame tasks, guide model behaviour, interpret responses and decide when verification is necessary. It is also shaped by the presence of organisational structures that provide guidance, training and clarity on acceptable use. In addition, methodological routines such as cross-checking information, consulting experts and documenting assumptions play a critical role in ensuring the defensibility of outputs used in decision support contexts. The study follows a qualitative research design based on eighteen semi-structured interviews with frequent LLM users working in large organisations. The findings reveal substantial variation in prompting techniques, interpretation habits, validation routines and awareness of governance rules. Participants consistently reported benefits in efficiency and reasoning support but also expressed concerns related to factual accuracy, data protection, uneven access to training and uncertainty about roles and responsibilities when using AI-generated content. Based on these insights, the dissertation proposes the Enterprise LLM Augmentation Framework, which explains augmentation quality as the result of alignment across three interdependent layers: individual interaction competence, organisational enablement and governance, and methodological discipline in the validation and documentation of outputs. The framework clarifies why similar technological tools lead to different outcomes across organisations and provides a structured foundation for consistent, safe, and scalable LLM-supported work. Overall, the study offers both a conceptual lens and a practical blueprint for dependable Intelligent Augmentation at scale.
  • Optimizing Demand Management in the Insurance Industry through Power BI Dashboards: A Case Study of AGEAS
    Publication . Fernandes, Bruna Cerqueira Bento Soares; Neves, Maria de Fátima dos Santos Trindade; Côrte-Real, Nadine Evangelista de Pinho
    Operational dashboards determine whether daily insurance processes run smoothly or spiral into costly failures. This thesis develops and evaluates an interactive Power BI dashboard for a Portuguese insurance company that reframes demand management from reactive to proactive control. Without a user-centered operational dashboard, insurers can face noisy or irrelevant metrics, duplicated tracking, and misprioritization, problems that can then translate into delayed regulatory filings, slower claims handling, and amplified compliance and financial risks. This research addresses a concrete gap: while existing frameworks emphasize strategic analytics, few provide practical guidance or artifacts aimed at continuous, operational-level monitoring of incoming demands, their prioritization, allocation, and lifecycle progression. Grounded in stakeholder-defined requirements and iteratively refined, the dashboard consolidates intake monitoring, lifecycle status, workload distribution, resource allocation, and compliance with agreed timelines into a single, actionable view. By making demand flows and bottlenecks visible in real time, it supports early risk detection, evidence-based reprioritization, and fairer workload balancing. Evaluation indicates that the resulting artifact improved situational awareness, accelerated the identification of stagnation points, and enhanced transparency and coordination, thereby reinforcing disciplined, data-driven decision-making in day-to-day operations.
  • Dashboards for Urban Incident Reporting: Improving Decision Support with Power BI Using Na Minha Rua Lx Data
    Publication . Ramos, João Pedro Brito; Neves, Maria de Fátima dos Santos Trindade
    This dissertation addresses the translation gap between raw citizen-reported urban incident data and structured decision support in municipal governance. Focusing on Lisbon’s Na Minha Rua LX platform, it investigates how Business Intelligence dashboards, grounded in disciplined dimensional modelling, can enhance decision support for managing urban incident reporting. Guided by the Design Science Research paradigm and informed by the Kimball methodology, the study designs and implements a dimensional data warehouse and a governed Power BI semantic layer. A declared analytical grain and a clear bus matrix with conformed dimensions (Date, Location, Incident Type, and Channel) ensure semantic consistency across dashboards. Data pipelines implemented in Microsoft Fabric follow a Medallion (Bronze–Silver–Gold) architecture to guarantee lineage, data quality, and traceability. At the core, a star schema supports interpretable measures for daily incident volumes, seasonal baselines, anomaly detection, territorial pressure, and weather sensitivity. Empirical analysis reveals structured regularities in Lisbon’s incident reporting dynamics, including stable seasonal cycles, persistent territorial concentration, category-level asymmetries, and directional associations with rainfall in selected domains. These patterns are interpreted descriptively rather than causally and illustrate how dimensional discipline enables consistent detection and contextualisation of analytical signals. Evaluation combines scenario-based walkthroughs and artefact assessment criteria to examine business coverage, interpretability, traceability, semantic consistency, and usability. Findings indicate that improvements in decision support arise primarily from structural clarity rather than analytical complexity. The study demonstrates that a governed semantic layer and a conformant dimensional architecture can transform open citizen-reporting data into reproducible, interpretable, and action-oriented dashboards without relying on opaque predictive models. The dissertation contributes a reusable dimensional representation of urban incident reporting, documented implementation patterns that connect governance principles to analytical design, and empirical evidence that semantic discipline is a foundational enabler of reliable decision support in smart-city contexts.
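The star-schema idea at the core of this design can be sketched in a few lines: a fact table at a declared grain, keyed to conformed dimensions, with measures computed by rolling the facts up through dimension attributes. All table contents below are invented for illustration; the dissertation's Date/Location/Incident Type/Channel dimensions are far richer:

```python
# Dimension tables: surrogate key -> descriptive attributes.
dim_location = {1: {"parish": "Alvalade"}, 2: {"parish": "Benfica"}}
dim_type = {10: {"category": "Lighting"}, 11: {"category": "Roads"}}

# Fact table, grain: one row per reported incident.
fact_incidents = [
    {"location_key": 1, "type_key": 10},
    {"location_key": 1, "type_key": 11},
    {"location_key": 2, "type_key": 10},
]

def incidents_by(attr, dim, key_field):
    """Roll incident counts up to a dimension attribute, i.e. a star-schema
    group-by resolved through the dimension lookup."""
    counts = {}
    for row in fact_incidents:
        label = dim[row[key_field]][attr]
        counts[label] = counts.get(label, 0) + 1
    return counts

by_parish = incidents_by("parish", dim_location, "location_key")
by_category = incidents_by("category", dim_type, "type_key")
```

Because both rollups read the same fact rows through conformed dimensions, parish-level and category-level dashboards stay numerically consistent, which is the semantic-consistency property the bus matrix is meant to guarantee.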
  • Unlocking History: Evaluating State-of-the-Art Language Models on Historical Texts
    Publication . Maksimov, Evgenii; Pinheiro, Flávio Luís Portas; Sturm, Niclas Frederic
    As machine learning systems—particularly language models—become increasingly central to working with data and modern text processing, their applicability to historical and pre-internet texts remains underexplored. Most models are trained on contemporary internet-era corpora, raising questions about their performance on older linguistic data and their ability to capture semantic drift and diachronic change. This study investigates the diachronic robustness of BERT-based architectures across English and Russian, evaluating how well modern models handle language variation over time. Through comparative analysis of monolingual and multilingual BERT variants, we examine accuracy decay across centuries, assess the impact of orthographic reform and lexical shift, attempt to capture semantic shift and explore strategies for improving historical text understanding. Findings highlight significant drops in masked-token prediction accuracy on archaic texts, varying by language, and demonstrate that multilingual pre-training and larger models offer viable paths toward diachronic resilience.
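The accuracy-decay measurement described here amounts to bucketing per-token masked-prediction outcomes by the period of the source text and comparing bucket accuracies. A stdlib sketch with fabricated records (the study's actual evaluation runs real BERT variants over dated corpora):

```python
# Each record: century of the source text and whether the model's
# prediction for a randomly masked token was correct (1) or not (0).
# These outcomes are invented purely to show the aggregation.
records = (
    [{"century": 18, "correct": c} for c in [1, 0, 0, 1, 0]] +
    [{"century": 21, "correct": c} for c in [1, 1, 1, 0, 1]]
)

def accuracy_by_century(records):
    """Group masked-token prediction hits by century and compute
    per-century accuracy."""
    hits, totals = {}, {}
    for r in records:
        c = r["century"]
        hits[c] = hits.get(c, 0) + r["correct"]
        totals[c] = totals.get(c, 0) + 1
    return {c: hits[c] / totals[c] for c in totals}

acc = accuracy_by_century(records)
```

The gap between the contemporary and archaic buckets is the "accuracy decay" the abstract reports; the same aggregation extends naturally to per-language or per-model breakdowns.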
  • How generative AI tools shape decision outcomes
    Publication . Silva, João Ricardo Trindade Pereira da; Oliveira, Tiago André Gonçalves Félix de
    This study looks at how generative artificial intelligence tools influence decision outcomes in knowledge-based work, with a specific focus on decision quality and decision efficiency. While much of the literature has focused on adoption and usage intentions, much less attention has been given to how generative artificial intelligence affects the outcomes of decisions themselves. To address this gap, this paper develops and tests a research model that integrates key constructs from the technology acceptance model (TAM), trust, and the heuristic-systematic model. Data were collected through an online questionnaire; the 303 valid responses were then analysed using partial least squares structural equation modelling. The results indicate that the model explains a substantial proportion of variance in both decision quality and decision efficiency. Decision quality is significantly influenced by perceived usefulness, trust in AI, systematic processing and heuristic processing, whereas perceived ease of use does not have a significant direct effect on decision quality. Decision efficiency is significantly influenced by perceived ease of use, perceived usefulness and trust in AI, while heuristic processing shows a weaker positive effect and systematic processing is not significant. The model also shows that trust in AI moderates the relationship between TAM beliefs and decision outcomes, strengthening the effects of perceived ease of use and weakening the effects of perceived usefulness on both outcomes. Overall, this paper extends the AI adoption literature by focusing on decision outcomes and highlights the combined importance of usability, usefulness, trust and cognitive processing in AI-assisted decision making.
  • The Dual Role of Technology in Climate Change across the European Union: How EU countries respond to and use technological evolution for (or against) the environment
    Publication . Faria, Bruna Faustino de; Neves, Catarina Paisana Pires Costa das
    As the climate crisis deepens and digital infrastructure expands, understanding how European Union (EU) countries balance technological development with climate objectives is essential. It is imperative to examine whether technological innovations are being leveraged to support environmental sustainability and mitigate climate change, or whether they instead encourage ecological degradation. To capture this relationship, a two-fold descriptive approach combining factor and cluster analyses was applied to manufacturing and information and communications technology (ICT), two sectors where technological development affects environmental performance, both positively and negatively. This analysis resulted in the identification of a climate change driver construct and two mitigating-effort dimensions, one direct and one indirect, as well as the classification of EU member states across four different profiles, reflecting inequalities among nations. The study also examines how these imbalances have evolved from 2021 to 2023 and shows that, although strategies vary, there was a general trend toward a committed sustainability approach, with emission reductions as the prevailing trend. Laggard countries show the strongest improvement in greenhouse gas (GHG) emission reductions, led by Slovenia, the Czech Republic, and Slovakia, while Denmark and Austria maintain leadership through consistent reductions and sustained direct measures to address climate change. No country improved or worsened across all three dimensions, reflecting trade-offs between initiatives. This reinforces the idea that progress depends on how countries approach investment, policymaking, and aligned initiatives, and that the EU’s climate goals rely not only on adopting cleaner technologies but also on preventing them from increasing emissions.
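The classification step behind such country profiles can be gestured at with a nearest-centroid assignment. This one-dimensional toy uses invented country labels, an invented indicator, and fixed centroids; the study clusters factor scores across several dimensions, which this sketch only hints at:

```python
# Two illustrative profile centroids on a single (invented) normalised
# emissions-intensity indicator: lower means cleaner.
centroids = {"leader": 0.2, "laggard": 0.8}
countries = {"A": 0.25, "B": 0.70, "C": 0.90, "D": 0.15}  # hypothetical scores

def assign(value, centroids):
    """Assign a score to the profile with the nearest centroid."""
    return min(centroids, key=lambda name: abs(centroids[name] - value))

profiles = {country: assign(score, centroids)
            for country, score in countries.items()}
```

Re-running the assignment on scores from different years is one simple way to track the kind of profile migration the study reports between 2021 and 2023.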
  • Understanding Malware-as-a-Service: The Role of Cloud Computing and Technology Opportunism in Cyber Threat Evolution
    Publication . Serrado, André Filipe Virtuoso; Helaly, Yasser Mohamed Megahed Youssef Al
    Cloud computing has transformed the delivery of malicious tools through the rise of Malware-as-a-Service (MaaS), a subscription-based cybercrime model that automates attacks and reduces the technical expertise required to conduct them. While prior research has examined cloud security and the technical evolution of MaaS, little is known about why individuals perceive cloud infrastructures as actionable opportunities for misuse and how these perceptions translate into malicious intentions. This study addresses this gap by integrating Cloud Computing Technological Opportunism with Cybersecurity Routine Activity Theory to explain how offenders evaluate cloud-based environments, form motivation, and develop intentions to misuse MaaS tools. Using survey data from 377 participants and analysing the structural model with SmartPLS, the results show that technological opportunism shapes offender cognition across all three C-RAT mechanisms: it increases offender motivation and perceived target suitability while reducing perceived guardianship capability. These three mechanisms fully mediate the effect of cloud opportunism on malicious intention, indicating that cloud affordances influence behavior only when filtered through offenders’ cognitive assessments of opportunity and control. The analysis further demonstrates that these pathways are conditioned by individual predispositions, including self-control, moral beliefs, hacking efficacy, usability perceptions, and perceived deterrence, identifying boundary conditions that explain when cloud-enabled opportunities become criminogenic. Theoretically, the study reframes cyber offending in cloud environments as a socio-technical process in which distributed technological affordances and heterogeneous psychological filters jointly shape malicious intent. Practically, the findings highlight the need for cloud governance and security interventions that alter perceptual opportunity structures, strengthen visible guardianship, reduce usability of attack interfaces, and increase friction in MaaS-enabled workflows.