When proxy-driven learning is no better than random: The consequences of representational incompleteness
Machine learning is widely used for personalisation, that is, to tune systems with the aim of adapting their behaviour to the responses of humans. This tuning relies on quantified features that capture the human actions, and also on objective functions (that is, proxies) that are intended to represent desirable outcomes. However, a learning system's representation of the world can be incomplete or insufficiently rich, for example if users' decisions are based on properties of which the system is unaware. Moreover, the incompleteness of proxies can be argued to be an intrinsic property of computational systems, as they are based on literal representations of human actions rather than on the human actions themselves; this problem is distinct from the usual aspects of bias that are examined in machine learning literature. We use mathematical analysis and simulations of a reinforcement-learning case study to demonstrate that incompleteness of representation can, first, lead to learning that is no better than random; and second, means that the learning system can be inherently unaware that it is failing. This result has implications for the limits and applications of machine learning systems in human domains.
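The abstract's central claim can be illustrated with a small simulation. This is not the authors' actual case study; it is a hypothetical sketch in which an epsilon-greedy bandit optimises a proxy signal (e.g. clicks) that is drawn independently of the users' true utility (e.g. satisfaction), modelling a representationally incomplete proxy. All parameter values and names here are illustrative assumptions.

```python
import random

K = 10        # items available to recommend
STEPS = 2000  # interactions per simulated world
EPS = 0.1     # epsilon-greedy exploration rate
TRIALS = 300  # independent worlds averaged over

def simulate(rng):
    # Each item has a proxy payoff (what the system measures) and a true
    # utility (what users actually value). Drawing them independently
    # models a proxy that carries no information about the real outcome.
    proxy = [rng.random() for _ in range(K)]
    true_u = [rng.random() for _ in range(K)]
    counts = [0] * K
    values = [0.0] * K  # running mean of observed proxy reward per item
    got_proxy = got_true = 0.0
    for _ in range(STEPS):
        if rng.random() < EPS:
            arm = rng.randrange(K)          # explore
        else:
            arm = max(range(K), key=values.__getitem__)  # exploit proxy
        r = 1.0 if rng.random() < proxy[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        got_proxy += r
        got_true += true_u[arm]
    # Achieved proxy reward, achieved true utility, and the expected true
    # utility of recommending uniformly at random.
    return got_proxy / STEPS, got_true / STEPS, sum(true_u) / K

rng = random.Random(0)
results = [simulate(rng) for _ in range(TRIALS)]
mean_proxy = sum(r[0] for r in results) / TRIALS
mean_true = sum(r[1] for r in results) / TRIALS
mean_rand = sum(r[2] for r in results) / TRIALS

print(f"proxy reward achieved by learner: {mean_proxy:.3f}")
print(f"true utility achieved by learner: {mean_true:.3f}")
print(f"true utility of random choice:    {mean_rand:.3f}")
```

Under these assumptions the learner reliably drives its *proxy* reward well above the chance level of 0.5, so by its own measurements it appears to be succeeding, while the *true* utility it delivers averages out to roughly that of random recommendation, matching the paper's two claims: learning no better than random, and no internal signal that anything is wrong.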
Main Authors: | Zobel, Justin; Vázquez-Abad, Felisa J.; Lin, Pauline |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9278777/ https://www.ncbi.nlm.nih.gov/pubmed/35830451 http://dx.doi.org/10.1371/journal.pone.0271268 |
_version_ | 1784746259017367552 |
---|---|
author | Zobel, Justin; Vázquez-Abad, Felisa J.; Lin, Pauline |
author_sort | Zobel, Justin |
collection | PubMed |
description | Machine learning is widely used for personalisation, that is, to tune systems with the aim of adapting their behaviour to the responses of humans. This tuning relies on quantified features that capture the human actions, and also on objective functions—that is, proxies – that are intended to represent desirable outcomes. However, a learning system’s representation of the world can be incomplete or insufficiently rich, for example if users’ decisions are based on properties of which the system is unaware. Moreover, the incompleteness of proxies can be argued to be an intrinsic property of computational systems, as they are based on literal representations of human actions rather than on the human actions themselves; this problem is distinct from the usual aspects of bias that are examined in machine learning literature. We use mathematical analysis and simulations of a reinforcement-learning case study to demonstrate that incompleteness of representation can, first, lead to learning that is no better than random; and second, means that the learning system can be inherently unaware that it is failing. This result has implications for the limits and applications of machine learning systems in human domains. |
format | Online Article Text |
id | pubmed-9278777 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9278777, 2022-07-14. When proxy-driven learning is no better than random: The consequences of representational incompleteness. Zobel, Justin; Vázquez-Abad, Felisa J.; Lin, Pauline. PLoS One, Research Article. Public Library of Science, 2022-07-13. /pmc/articles/PMC9278777/ /pubmed/35830451 http://dx.doi.org/10.1371/journal.pone.0271268 © 2022 Zobel et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
title | When proxy-driven learning is no better than random: The consequences of representational incompleteness |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9278777/ https://www.ncbi.nlm.nih.gov/pubmed/35830451 http://dx.doi.org/10.1371/journal.pone.0271268 |