The cyclical ethical effects of using artificial intelligence in education
Main Authors: | Dieterle, Edward; Dede, Chris; Walker, Michael |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer London, 2022 |
Subjects: | Original Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9513289/ https://www.ncbi.nlm.nih.gov/pubmed/36185064 http://dx.doi.org/10.1007/s00146-022-01497-w |
_version_ | 1784798026886283264 |
---|---|
author | Dieterle, Edward; Dede, Chris; Walker, Michael |
author_facet | Dieterle, Edward; Dede, Chris; Walker, Michael |
author_sort | Dieterle, Edward |
collection | PubMed |
description | Our synthetic review of the relevant and related literatures on the ethics and effects of using AI in education reveals five qualitatively distinct and interrelated divides associated with access, representation, algorithms, interpretations, and citizenship. We open our analysis by probing the ethical effects of algorithms and how teams of humans can plan for and mitigate bias when using AI tools and techniques to model and inform instructional decisions and predict learning outcomes. We then analyze the upstream divides that feed into and fuel the algorithmic divide, first investigating access (who does and does not have access to the hardware, software, and connectivity necessary to engage with AI-enhanced digital learning tools and platforms) and then representation (the factors making data either representative of the total population or over-representative of a subpopulation’s preferences, thereby preventing objectivity and biasing understandings and outcomes). After that, we analyze the divides that are downstream of the algorithmic divide associated with interpretation (how learners, educators, and others understand the outputs of algorithms and use them to make decisions) and citizenship (how the other divides accumulate to impact interpretations of data by learners, educators, and others, in turn influencing behaviors and, over time, skills, culture, economic, health, and civic outcomes). At present, lacking ongoing reflection and action by learners, educators, educational leaders, designers, scholars, and policymakers, the five divides collectively create a vicious cycle and perpetuate structural biases in teaching and learning. However, increasing human responsibility and control over these divides can create a virtuous cycle that improves diversity, equity, and inclusion in education. We conclude the article by looking forward and discussing ways to increase educational opportunity and effectiveness for all by mitigating bias through a cycle of progressive improvement. |
format | Online Article Text |
id | pubmed-9513289 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer London |
record_format | MEDLINE/PubMed |
spelling | pubmed-9513289 2022-09-27 The cyclical ethical effects of using artificial intelligence in education. Dieterle, Edward; Dede, Chris; Walker, Michael. AI Soc, Original Paper. Springer London 2022-09-27 /pmc/articles/PMC9513289/ /pubmed/36185064 http://dx.doi.org/10.1007/s00146-022-01497-w Text en © Educational Testing Service, under exclusive license to Springer-Verlag London Ltd., part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Original Paper Dieterle, Edward Dede, Chris Walker, Michael The cyclical ethical effects of using artificial intelligence in education |
title | The cyclical ethical effects of using artificial intelligence in education |
title_full | The cyclical ethical effects of using artificial intelligence in education |
title_fullStr | The cyclical ethical effects of using artificial intelligence in education |
title_full_unstemmed | The cyclical ethical effects of using artificial intelligence in education |
title_short | The cyclical ethical effects of using artificial intelligence in education |
title_sort | cyclical ethical effects of using artificial intelligence in education |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9513289/ https://www.ncbi.nlm.nih.gov/pubmed/36185064 http://dx.doi.org/10.1007/s00146-022-01497-w |
work_keys_str_mv | AT dieterleedward thecyclicalethicaleffectsofusingartificialintelligenceineducation AT dedechris thecyclicalethicaleffectsofusingartificialintelligenceineducation AT walkermichael thecyclicalethicaleffectsofusingartificialintelligenceineducation AT dieterleedward cyclicalethicaleffectsofusingartificialintelligenceineducation AT dedechris cyclicalethicaleffectsofusingartificialintelligenceineducation AT walkermichael cyclicalethicaleffectsofusingartificialintelligenceineducation |