Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists

Bibliographic Details
Main Authors: Zhou, Li, Blackley, Suzanne V., Kowalski, Leigh, Doan, Raymond, Acker, Warren W., Landman, Adam B., Kontrient, Evgeni, Mack, David, Meteer, Marie, Bates, David W., Goss, Foster R.
Format: Online Article Text
Language: English
Published: American Medical Association 2018
Subjects: Original Investigation
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6203313/
https://www.ncbi.nlm.nih.gov/pubmed/30370424
http://dx.doi.org/10.1001/jamanetworkopen.2018.0530
author Zhou, Li
Blackley, Suzanne V.
Kowalski, Leigh
Doan, Raymond
Acker, Warren W.
Landman, Adam B.
Kontrient, Evgeni
Mack, David
Meteer, Marie
Bates, David W.
Goss, Foster R.
collection PubMed
description IMPORTANCE: Accurate clinical documentation is critical to health care quality and safety. Dictation services supported by speech recognition (SR) technology and professional medical transcriptionists are widely used by US clinicians. However, the quality of SR-assisted documentation has not been thoroughly studied.
OBJECTIVE: To identify and analyze errors at each stage of the SR-assisted dictation process.
DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study collected a stratified random sample of 217 notes (83 office notes, 75 discharge summaries, and 59 operative notes) dictated by 144 physicians between January 1 and December 31, 2016, at 2 health care organizations using Dragon Medical 360 | eScription (Nuance). Errors were annotated in the SR engine–generated document (SR), the medical transcriptionist–edited document (MT), and the physician’s signed note (SN). Each document was compared with a criterion standard created from the original audio recordings and medical record review.
MAIN OUTCOMES AND MEASURES: Error rate; mean errors per document; error frequency by general type (eg, deletion), semantic type (eg, medication), and clinical significance; and variations by physician characteristics, note type, and institution.
RESULTS: Among the 217 notes, there were 144 unique dictating physicians: 44 female (30.6%) and 10 of unknown sex (6.9%). Mean (SD) physician age was 52 (12.5) years (median [range] age, 54 [28-80] years). Among 121 physicians for whom specialty information was available (84.0%), 35 specialties were represented, including 45 surgeons (37.2%), 30 internists (24.8%), and 46 others (38.0%). The error rate in SR notes was 7.4% (ie, 7.4 errors per 100 words). It decreased to 0.4% after transcriptionist review and to 0.3% in SNs. Overall, 96.3% of SR notes, 58.1% of MT notes, and 42.4% of SNs contained errors. Deletions were most common (34.7%), followed by insertions (27.0%). Among errors at the SR, MT, and SN stages, 15.8%, 26.9%, and 25.9%, respectively, involved clinical information, and 5.7%, 8.9%, and 6.4%, respectively, were clinically significant. Discharge summaries had higher mean SR error rates than other note types (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P < .001). Surgeons’ SR notes had lower mean error rates than other physicians’ (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P = .002). One institution had a higher mean SR error rate (7.6% vs 6.6%; difference, 1.0%; 95% CI, −0.2% to 2.8%; P = .10) but lower mean MT and SN error rates (0.3% vs 0.7%; difference, −0.3%; 95% CI, −0.63% to −0.04%; P = .03; and 0.2% vs 0.6%; difference, −0.4%; 95% CI, −0.7% to −0.2%; P = .003).
CONCLUSIONS AND RELEVANCE: Seven in 100 words in SR-generated documents contain errors, and many errors involve clinical information. That most errors are corrected before notes are signed demonstrates the importance of manual review, quality assurance, and auditing.
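For context on the headline metric: the study's error rate is simply annotated errors per 100 words of the criterion-standard text, with each error classed by general type (eg, deletion, insertion). The Python sketch below is a rough, hypothetical illustration of that arithmetic using word-level alignment with difflib; it is not the authors' annotation protocol, which relied on human review against a criterion standard built from the original audio.

    # Illustrative only: approximate word-level error counts and an
    # errors-per-100-words rate by aligning a hypothesis transcript
    # against a reference. The study itself used human annotation.
    from difflib import SequenceMatcher

    def error_counts(reference: str, hypothesis: str) -> dict:
        """Count word-level deletions, insertions, and substitutions."""
        ref, hyp = reference.split(), hypothesis.split()
        counts = {"deletions": 0, "insertions": 0, "substitutions": 0}
        for tag, i1, i2, j1, j2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
            if tag == "delete":       # reference words the transcript dropped
                counts["deletions"] += i2 - i1
            elif tag == "insert":     # transcript words not in the reference
                counts["insertions"] += j2 - j1
            elif tag == "replace":    # substituted spans
                counts["substitutions"] += max(i2 - i1, j2 - j1)
        return counts

    def errors_per_100_words(reference: str, hypothesis: str) -> float:
        """Total errors normalized per 100 reference words."""
        total = sum(error_counts(reference, hypothesis).values())
        return 100.0 * total / len(reference.split())

    # Example: a misrecognized dose ("40" -> "14") is one substitution.
    ref = "start lisinopril 40 mg daily"
    hyp = "start lisinopril 14 mg daily"
    print(error_counts(ref, hyp))          # 1 substitution
    print(errors_per_100_words(ref, hyp))  # 20.0, ie, 1 error in 5 words

At the reported 7.4% SR-stage rate, such a function would return roughly 7.4 for a typical 100-word passage; judging whether an error is clinically significant, of course, required human review in the study.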
format Online
Article
Text
id pubmed-6203313
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher American Medical Association
record_format MEDLINE/PubMed
spelling pubmed-6203313 2018-10-26 Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists. JAMA Netw Open, Original Investigation. American Medical Association 2018-07-06 /pmc/articles/PMC6203313/ /pubmed/30370424 http://dx.doi.org/10.1001/jamanetworkopen.2018.0530 Text en Copyright 2018 Zhou L et al. JAMA Network Open.
http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the CC-BY License.
title Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists
topic Original Investigation
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6203313/
https://www.ncbi.nlm.nih.gov/pubmed/30370424
http://dx.doi.org/10.1001/jamanetworkopen.2018.0530