Streamlining Systematic Reviews: Harnessing Large Language Models for Quality Assessment and Risk-of-Bias Evaluation


Bibliographic Details
Main Authors: Nashwan, Abdulqadir J; Jaradat, Jaber H
Format: Online Article Text
Language: English
Published: Cureus 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10478591/
https://www.ncbi.nlm.nih.gov/pubmed/37674957
http://dx.doi.org/10.7759/cureus.43023
Description
Summary: This editorial explores the innovative application of large language models (LLMs) in conducting systematic reviews, specifically focusing on quality assessment and risk-of-bias evaluation. As integral components of systematic reviews, these tasks traditionally require extensive human effort, subjectivity, and time. Integrating LLMs can revolutionize this process, providing an objective, consistent, and rapid methodology for quality assessment and risk-of-bias evaluation. With their ability to comprehend context, predict semantic relationships, and extract relevant information, LLMs can effectively appraise study quality and risk of bias. However, careful consideration must be given to potential risks and limitations associated with over-reliance on machine learning models and inherent biases in training data. An optimal balance between human expertise and automated LLM evaluation might offer the most effective approach to advance and streamline the field of evidence synthesis.