Artificial Hallucinations by Google Bard: Think Before You Leap
Main Authors:
Format: Online Article Text
Language: English
Published: Cureus, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10492900/ ; https://www.ncbi.nlm.nih.gov/pubmed/37700993 ; http://dx.doi.org/10.7759/cureus.43313
Summary: One of the critical challenges posed by artificial intelligence (AI) tools like Google Bard (Google LLC, Mountain View, California, United States) is the potential for "artificial hallucinations." These are instances in which an AI chatbot generates fictional, erroneous, or unsubstantiated information in response to queries. In research, such inaccuracies can propagate misinformation and undermine the credibility of the scientific literature. The experience presented here highlights the importance of cross-checking information provided by AI tools against reliable sources and maintaining a cautious approach when using these tools in research writing.