| Name: | Description: | Size: | Format: |
|---|---|---|---|
| | | 2.7 MB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
As artificial intelligence (AI) is given more power in many decisions, potential resulting biases with respect to gender, race, and other minorities have to be analyzed and reduced to a minimum. Machine learning (ML) models are implemented in various areas and can decide who gets invited to an interview, is granted a loan, receives the right cancer treatment, or goes to prison. Consequently, biases can have a crucial negative impact on people's lives. This thesis highlights previous research in this field, shows its limitations, and breaks down the content into its core components in a systematic manner. To this end, types of existing biases and the areas where AI bias is most prevalent are defined. Further, root causes of discriminating algorithms are analyzed along the AI model creation chain: data, coder, model, usage. An abundance of fairness measurements is classified and elaborated in tabular format. Thereafter, bias mitigation techniques, namely pre-processing, in-processing, and post-processing for ML algorithms, are summarized and critically analyzed, and limitations of research on fairness measures for unsupervised learning are indicated.
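To make the notion of a group fairness measurement concrete, the sketch below computes the demographic parity difference, one of the classic measures of the kind the thesis classifies: the absolute gap in positive-prediction rates between two demographic groups. This is an illustrative example only; the function and variable names are assumptions, not taken from the thesis itself.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    encoded in `sensitive` (0 or 1). A value of 0 means exact parity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_group_0 = y_pred[sensitive == 0].mean()  # positive rate, group 0
    rate_group_1 = y_pred[sensitive == 1].mean()  # positive rate, group 1
    return abs(rate_group_0 - rate_group_1)

# Hypothetical loan example: group 0 is approved 3 out of 4 times,
# group 1 only 1 out of 4 times, giving a parity gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # 0.5
```

Measures of this form are what pre-, in-, and post-processing mitigation techniques then try to drive toward zero, each at a different stage of the model creation chain.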
Description
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management
Keywords
Artificial intelligence; Biased artificial intelligence; Machine learning; Algorithmic fairness; Algorithmic perception; Discrimination
