Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It
Main Author: | Bishop, J. Mark |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7874145/ https://www.ncbi.nlm.nih.gov/pubmed/33584394 http://dx.doi.org/10.3389/fpsyg.2020.513474 |
_version_ | 1783649530361675776 |
---|---|
author | Bishop, J. Mark |
author_facet | Bishop, J. Mark |
author_sort | Bishop, J. Mark |
collection | PubMed |
description | Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “Starcraft”. Such technological developments from artificial intelligence (AI) labs have ushered concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning” —the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what in 1949 Gilbert Ryle termed “a category mistake”, I will offer an alternative explanation for AI errors; it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all. |
format | Online Article Text |
id | pubmed-7874145 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-78741452021-02-11 Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It Bishop, J. Mark Front Psychol Psychology Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “Starcraft”. Such technological developments from artificial intelligence (AI) labs have ushered concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning” —the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what in 1949 Gilbert Ryle termed “a category mistake”, I will offer an alternative explanation for AI errors; it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all. Frontiers Media S.A. 2021-01-05 /pmc/articles/PMC7874145/ /pubmed/33584394 http://dx.doi.org/10.3389/fpsyg.2020.513474 Text en Copyright © 2021 Bishop. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Bishop, J. Mark Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title | Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title_full | Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title_fullStr | Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title_full_unstemmed | Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title_short | Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It |
title_sort | artificial intelligence is stupid and causal reasoning will not fix it |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7874145/ https://www.ncbi.nlm.nih.gov/pubmed/33584394 http://dx.doi.org/10.3389/fpsyg.2020.513474 |
work_keys_str_mv | AT bishopjmark artificialintelligenceisstupidandcausalreasoningwillnotfixit |