
Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use

Bibliographic Details
Main Authors: Baumann, Joachim; Loi, Michele
Format: Online Article (Text)
Language: English
Published: Springer Netherlands, 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10279561/
https://www.ncbi.nlm.nih.gov/pubmed/37346393
http://dx.doi.org/10.1007/s13347-023-00624-9
Collection: PubMed
Description:
Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to people unless special efforts are made to avoid them. Insurance is no exception. In this paper, we provide a thorough analysis of algorithmic fairness in the case of insurance premiums. We ask what “fairness” might mean in this context and how the fairness of a premium system can be measured. For this, we apply the established fairness frameworks of the fair machine learning literature to the case of insurance premiums and show which of the existing fairness criteria can be applied to assess the fairness of insurance premiums. We argue that two of the often-discussed group fairness criteria, independence (also called statistical parity or demographic parity) and separation (also known as equalized odds), are not normatively appropriate for insurance premiums. Instead, we propose the sufficiency criterion (also known as well-calibration) as a morally defensible alternative that allows us to test for systematic biases in premiums towards certain groups based on the risk they bring to the pool. In addition, we clarify the connection between group fairness and different degrees of personalization. Our findings enable insurers to assess the fairness properties of their risk models, helping them avoid reputation damage resulting from potentially unfair and discriminatory premium systems.
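The sufficiency ("well-calibration") criterion proposed in the abstract can be illustrated with a minimal sketch. This is not code from the paper; the function name, binning scheme, and data below are hypothetical. The idea: group policyholders by predicted risk level and compare the observed claim rates across protected groups within each level — sufficiency holds when, at every risk level, those rates match.

```python
# Hedged sketch of a sufficiency (group calibration) check for insurance
# risk scores. Sufficiency requires that the observed outcome rate,
# conditional on the predicted risk level, is the same for every group:
# P(Y = 1 | R = r, A = a) does not depend on the group a.
from collections import defaultdict

def calibration_by_group(records, n_bins=5):
    """records: iterable of (risk_score in [0, 1], group, outcome in {0, 1}).
    Returns {(bin_index, group): observed claim rate} for cross-group comparison."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for score, group, outcome in records:
        b = min(int(score * n_bins), n_bins - 1)  # discretize the risk score
        sums[(b, group)] += outcome
        counts[(b, group)] += 1
    return {key: sums[key] / counts[key] for key in counts}

# Synthetic toy data: two groups, identical claim rates per risk bin,
# so sufficiency holds by construction.
records = [
    (0.1, "A", 0), (0.1, "A", 0), (0.1, "B", 0), (0.1, "B", 0),
    (0.9, "A", 1), (0.9, "A", 1), (0.9, "B", 1), (0.9, "B", 1),
]
rates = calibration_by_group(records, n_bins=2)
```

In practice one would also weight the per-bin comparison by group sizes and use a statistical test rather than exact equality, but the per-bin rate table above is the quantity the criterion constrains.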
Record ID: pubmed-10279561
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Philos Technol (Research Article)
Published Online: 2023-06-19
License: © The Author(s) 2023. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.