Probabilistic coherence, logical consistency, and Bayesian learning: Neural language models as epistemic agents
It is argued that suitably trained neural language models exhibit key properties of epistemic agency: they hold probabilistically coherent and logically consistent degrees of belief, which they can rationally revise in the face of novel evidence. To this purpose, we conduct computational experiments...
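The abstract appeals to Bayesian revision of probabilistically coherent degrees of belief. As background, the standard revision rule (Bayesian conditionalization) can be sketched as follows; this is an illustrative example only, not code from the paper, and the two hypotheses and their likelihoods are hypothetical:

```python
def conditionalize(prior: dict, likelihood: dict) -> dict:
    """Update prior degrees of belief P(h) with evidence likelihoods P(e|h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # P(e), the normalizing constant
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical two-hypothesis example with a coherent prior (sums to 1).
prior = {"h1": 0.5, "h2": 0.5}
likelihood = {"h1": 0.8, "h2": 0.2}  # assumed P(evidence | h)
posterior = conditionalize(prior, likelihood)

# Conditionalization preserves coherence: the posterior still sums to 1.
assert abs(sum(posterior.values()) - 1.0) < 1e-9
```

Here the evidence favors h1 (likelihood 0.8 vs. 0.2), so its credence rises from 0.5 to 0.8 while the belief state remains a probability distribution.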
Main Authors: Betz, Gregor; Richardson, Kyle
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9910757/
https://www.ncbi.nlm.nih.gov/pubmed/36758000
http://dx.doi.org/10.1371/journal.pone.0281372
Similar Items
- Judgment aggregation, discursive dilemma and reflective equilibrium: Neural language models as self-improving doxastic agents
  by: Betz, Gregor, et al.
  Published: (2022)
- The logic of epistemic justification
  by: Smith, Martin
  Published: (2017)
- Probabilistic logic in a coherent setting
  by: Coletti, Giulianella, et al.
  Published: (2002)
- Expertise and information: an epistemic logic perspective
  by: Singleton, Joseph, et al.
  Published: (2023)
- Epistemics of the soul: Epistemic logics in German 18th-century empirical psychology
  by: Rydberg, Andreas
  Published: (2022)