| Name: | Description: | Size: | Format: |
|---|---|---|---|
| | | 1.24 MB | Adobe PDF |
Advisor(s)
Abstract(s)
This research investigates the ethical challenges associated with Artificial Intelligence (AI), focusing on the role of regulation as a potential enabler of trust. It addresses the central research question: can regulation be considered a value of trust in AI? Based on a comparative analysis of existing AI ethics guidelines and empirical findings from experts, the study examines which legal frameworks support ethical principles and guidelines. The research adopts a mixed-method approach, combining bibliometric analysis, literature review, and normative evaluation. Findings suggest that regulation not only reinforces ethical compliance but also serves as a stabilizing mechanism that fosters legitimacy and confidence in AI systems. By identifying the key ethical values most aligned with current regulatory mechanisms, this study clarifies the connection between ethics, regulation, and trust in AI systems.
Description
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management
Keywords
Artificial Intelligence; Ethics; Regulation; Transparency; Accountability; Trust; Guidelines; Governance; Algorithm; Responsible; SDG 4 - Quality education; SDG 8 - Decent work and economic growth; SDG 9 - Industry, innovation and infrastructure; SDG 10 - Reduced inequalities; SDG 16 - Peace, justice and strong institutions; SDG 17 - Partnerships for the goals
