Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer

Bibliographic Details
Main Authors: Choi, Hyeon Seok, Song, Jun Yeong, Shin, Kyung Hwan, Chang, Ji Hyun, Jang, Bum-Sup
Format: Online Article Text
Language: English
Published: The Korean Society for Radiation Oncology 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556835/
https://www.ncbi.nlm.nih.gov/pubmed/37793630
http://dx.doi.org/10.3857/roj.2023.00633
_version_ 1785116954837647360
author Choi, Hyeon Seok
Song, Jun Yeong
Shin, Kyung Hwan
Chang, Ji Hyun
Jang, Bum-Sup
author_facet Choi, Hyeon Seok
Song, Jun Yeong
Shin, Kyung Hwan
Chang, Ji Hyun
Jang, Bum-Sup
author_sort Choi, Hyeon Seok
collection PubMed
description PURPOSE: We aimed to evaluate the time and cost of developing prompts for a large language model (LLM) tailored to extract clinical factors from the records of breast cancer patients, as well as the accuracy of the extraction. MATERIALS AND METHODS: We collected surgical pathology and ultrasound reports from breast cancer patients who underwent radiotherapy from 2020 to 2022. We extracted the information using the Generative Pre-trained Transformer (GPT) for Sheets and Docs extension plugin and termed this the “LLM” method. The time and cost of developing the prompts with the LLM method were assessed and compared with those spent collecting the information with the “full manual” and “LLM-assisted manual” methods. To assess accuracy, 340 patients were randomly selected, and the information extracted by the LLM method was compared with that collected by the “full manual” method. RESULTS: Data from 2,931 patients were collected. We developed 12 prompts for the Extract function and 12 for the Format function to extract and standardize the information. The overall accuracy was 87.7%; for lymphovascular invasion, it was 98.2%. Developing and processing the prompts took 3.5 hours and 15 minutes, respectively. Using the ChatGPT application programming interface cost US $65.8, and when the estimated wage was factored in, the total cost was US $95.4. In an estimated comparison, the “LLM-assisted manual” and “LLM” methods were time- and cost-efficient compared with the “full manual” method. CONCLUSION: Developing and applying prompts for an LLM to derive clinical factors was an efficient way to extract crucial information from large volumes of medical records. This study demonstrates the potential of natural language processing with an LLM in breast cancer patients. The prompts from the current study can be reused in other research to collect clinical information.
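The record contains no code; as a rough illustration of the Extract/Format prompt pipeline described in the abstract, below is a minimal Python sketch assuming direct calls to the OpenAI chat completions API. The study itself used the GPT for Sheets and Docs plugin, so this is not the authors' implementation, and the report text, field, model name, and output labels are hypothetical.

```python
# Minimal sketch (not the authors' code): an "Extract" prompt pulls one clinical
# factor out of a free-text report, and a "Format" prompt standardizes the answer.
# Assumes the openai>=1.0 Python client and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

EXTRACT_PROMPT = (
    "From the pathology report below, state whether lymphovascular invasion "
    "is present, quoting the relevant phrase if one exists.\n\nReport:\n{report}"
)
FORMAT_PROMPT = (
    "Standardize the following answer about lymphovascular invasion to exactly "
    "one of: 'present', 'absent', 'not reported'.\n\nAnswer:\n{answer}"
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # deterministic output for data extraction
    )
    return resp.choices[0].message.content.strip()

# Hypothetical report snippet for illustration only.
report = "Invasive ductal carcinoma, 1.2 cm. Lymphovascular invasion: not identified."
raw = ask(EXTRACT_PROMPT.format(report=report))   # Extract step
label = ask(FORMAT_PROMPT.format(answer=raw))     # Format step
print(label)                                      # expected: "absent"
```

In the workflow the abstract describes, each clinical factor would have its own Extract and Format prompt pair (12 of each in the study), applied per report; with the GPT for Sheets and Docs plugin the standardized result is presumably returned into a spreadsheet cell, though that detail is not spelled out in this record.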
format Online
Article
Text
id pubmed-10556835
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher The Korean Society for Radiation Oncology
record_format MEDLINE/PubMed
spelling pubmed-10556835 2023-10-07 Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer Choi, Hyeon Seok Song, Jun Yeong Shin, Kyung Hwan Chang, Ji Hyun Jang, Bum-Sup Radiat Oncol J Original Article PURPOSE: We aimed to evaluate the time and cost of developing prompts for a large language model (LLM) tailored to extract clinical factors from the records of breast cancer patients, as well as the accuracy of the extraction. MATERIALS AND METHODS: We collected surgical pathology and ultrasound reports from breast cancer patients who underwent radiotherapy from 2020 to 2022. We extracted the information using the Generative Pre-trained Transformer (GPT) for Sheets and Docs extension plugin and termed this the “LLM” method. The time and cost of developing the prompts with the LLM method were assessed and compared with those spent collecting the information with the “full manual” and “LLM-assisted manual” methods. To assess accuracy, 340 patients were randomly selected, and the information extracted by the LLM method was compared with that collected by the “full manual” method. RESULTS: Data from 2,931 patients were collected. We developed 12 prompts for the Extract function and 12 for the Format function to extract and standardize the information. The overall accuracy was 87.7%; for lymphovascular invasion, it was 98.2%. Developing and processing the prompts took 3.5 hours and 15 minutes, respectively. Using the ChatGPT application programming interface cost US $65.8, and when the estimated wage was factored in, the total cost was US $95.4. In an estimated comparison, the “LLM-assisted manual” and “LLM” methods were time- and cost-efficient compared with the “full manual” method. CONCLUSION: Developing and applying prompts for an LLM to derive clinical factors was an efficient way to extract crucial information from large volumes of medical records. This study demonstrates the potential of natural language processing with an LLM in breast cancer patients. The prompts from the current study can be reused in other research to collect clinical information. The Korean Society for Radiation Oncology 2023-09 2023-09-21 /pmc/articles/PMC10556835/ /pubmed/37793630 http://dx.doi.org/10.3857/roj.2023.00633 Text en Copyright © 2023 The Korean Society for Radiation Oncology. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Article
Choi, Hyeon Seok
Song, Jun Yeong
Shin, Kyung Hwan
Chang, Ji Hyun
Jang, Bum-Sup
Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title_full Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title_fullStr Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title_full_unstemmed Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title_short Developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
title_sort developing prompts from large language model for extracting clinical information from pathology and ultrasound reports in breast cancer
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556835/
https://www.ncbi.nlm.nih.gov/pubmed/37793630
http://dx.doi.org/10.3857/roj.2023.00633
work_keys_str_mv AT choihyeonseok developingpromptsfromlargelanguagemodelforextractingclinicalinformationfrompathologyandultrasoundreportsinbreastcancer
AT songjunyeong developingpromptsfromlargelanguagemodelforextractingclinicalinformationfrompathologyandultrasoundreportsinbreastcancer
AT shinkyunghwan developingpromptsfromlargelanguagemodelforextractingclinicalinformationfrompathologyandultrasoundreportsinbreastcancer
AT changjihyun developingpromptsfromlargelanguagemodelforextractingclinicalinformationfrompathologyandultrasoundreportsinbreastcancer
AT jangbumsup developingpromptsfromlargelanguagemodelforextractingclinicalinformationfrompathologyandultrasoundreportsinbreastcancer