| Name | Description | Size | Format |
|---|---|---|---|
| | | 645.21 KB | Adobe PDF |
Abstract
Machine learning systems are being used to optimise, enhance and innovate business segments across different economic sectors worldwide, including the insurance sector. Like a virus, algorithms can spread information on a massive scale and at a pace that is difficult to control. Given a data sample, an algorithm trains itself, learning more and more from the information it receives and processes over time. Algorithms develop and spread as quickly as a simple double-click on the Internet.
Several issues still hinder the implementation of AI systems in the calculation of insurance premiums. They concern the data itself and its low representativeness of the underlying reality; the choice of the model to be used and of the characteristics/variables to be considered; and, finally, the human supervision and interpretation of algorithmic decisions. If these issues are not addressed, every case that deviates from the algorithmic norm will be harder to assess and will consequently lead to a discriminatory decision or, in some cases, to exclusion from the desired insurance cover. This is a sector that already has measures in place to mitigate discrimination, yet, for the time being, it leaves room for non-compliance with its own standards as it converts from the traditional methods in use to the use of AI.
Given the direct impact of the insurance industry on our individual and collective lives, this is an activity that should be classified as high risk when AI is implemented in its processes. Measures are therefore needed to promote legal certainty in compliance with the applicable legislation, as well as the revision and/or creation of a legislative framework to control the risk of proliferation of discriminatory practices prohibited in the sector.
To avoid discriminatory outcomes when machine learning systems calculate insurance premiums, it is necessary to identify the biases included in the collected data, to audit the models and software in use, and to ensure full human understanding of automated decisions; together, these steps lead to an AI that is commercially trustworthy from the user's point of view, legally secure and ethically acceptable.
Where these systems cannot be used ethically and fundamental rights are at stake (as they currently are), their use should be questioned. Otherwise, their implementation will contribute to a toxic cycle that sustains old biases and develops new ones.
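The abstract's prescription, identifying bias in the collected data and auditing the models in use, can be made concrete with a minimal sketch. The following example (not part of the thesis, and with purely hypothetical figures and group labels) shows the simplest form of such an audit: comparing average predicted premiums across a protected attribute, a demographic parity check. A large gap does not prove unlawful discrimination, but it flags a model output that a human supervisor should examine.

```python
# Illustrative sketch only: a minimal group-fairness audit for a
# premium-setting model. All premiums and group labels are hypothetical.

def demographic_parity_gap(premiums, groups):
    """Return the mean predicted premium per group and the largest
    pairwise gap between group means."""
    by_group = {}
    for premium, group in zip(premiums, groups):
        by_group.setdefault(group, []).append(premium)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Hypothetical model outputs (annual premiums) and a protected attribute.
premiums = [420.0, 455.0, 610.0, 595.0, 430.0, 605.0]
groups = ["A", "A", "B", "B", "A", "B"]

means, gap = demographic_parity_gap(premiums, groups)
print(means)  # mean premium per group
print(gap)    # largest gap between group means
```

In practice an auditor would condition on legitimate rating factors before comparing groups, since a raw gap may reflect permissible risk differences rather than prohibited discrimination; the sketch only shows where such a check plugs in.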
Description
Report submitted to obtain the degree of Master in Law and Financial Markets
Keywords
Artificial intelligence; Insurance industry; Machine learning systems; Discrimination; Equal treatment; Risk assessment; Insurance premium
