Deepfakes, Fake Barns, and Knowledge from Videos

Bibliographic Details
Main Author: Matthews, Taylor
Format: Online Article Text
Language: English
Published: Springer Netherlands, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9869812/
https://www.ncbi.nlm.nih.gov/pubmed/36714268
http://dx.doi.org/10.1007/s11229-022-04033-x
Description
Summary: Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate a degree of epistemic risk analogous to that found in traditional cases. Given that barn cases have posed a long-standing challenge for virtue-theoretic accounts of knowledge, I consider whether a similar challenge extends to deepfakes. In doing so, I consider how Duncan Pritchard’s recent anti-risk virtue epistemology meets the challenge. While Pritchard’s account avoids problems in traditional barn cases, I claim that it leads to local scepticism about knowledge from online videos in the case of deepfakes. I end by considering how two alternative virtue-theoretic approaches might vindicate our epistemic dependence on videos in an increasingly digital world.