
ThoughtSource: A central hub for large language model reasoning data

Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to ‘hallucinate’ facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.

Bibliographic Details
Main Authors: Ott, Simon; Hebenstreit, Konstantin; Liévin, Valentin; Hother, Christoffer Egeberg; Moradi, Milad; Mayrhauser, Maximilian; Praas, Robert; Winther, Ole; Samwald, Matthias
Format: Online Article (Text)
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10409727/
https://www.ncbi.nlm.nih.gov/pubmed/37553439
http://dx.doi.org/10.1038/s41597-023-02433-3
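The chain-of-thought prompting technique described in the abstract can be sketched as follows. This is a generic, minimal illustration with a stubbed-out model call; the function names (`build_cot_prompt`, `extract_answer`, `fake_model`) and the trigger phrasing are hypothetical and are not part of ThoughtSource's actual API.

```python
# Minimal sketch of zero-shot chain-of-thought (CoT) prompting:
# append a reasoning trigger to the question, let the model verbalize
# intermediate steps, then parse the final answer from the chain.

def build_cot_prompt(question: str) -> str:
    """Append a zero-shot CoT trigger phrase to a question."""
    return f"Q: {question}\nA: Let's think step by step."

def extract_answer(completion: str) -> str:
    """Pull the final answer out of a verbalized reasoning chain,
    assuming the model ends with a line like 'Answer: <value>'."""
    for line in reversed(completion.strip().splitlines()):
        if line.lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    # Fall back to the last line if no explicit answer marker is found.
    return completion.strip().splitlines()[-1]

# Stand-in for an LLM call, so the sketch runs without network access.
def fake_model(prompt: str) -> str:
    return ("There are 3 apples and we buy 2 more.\n"
            "3 + 2 = 5.\n"
            "Answer: 5")

prompt = build_cot_prompt("If you have 3 apples and buy 2 more, how many do you have?")
answer = extract_answer(fake_model(prompt))
```

Datasets such as those collected in ThoughtSource pair questions with reasoning chains of this shape, which is what enables both qualitative inspection of CoTs and their use as training data.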
Published in: Sci Data (Data Descriptor), Nature Publishing Group UK, 2023-08-08.
© The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.