
Assessing the Readability of Medical Documents: A Ranking Approach


Bibliographic Details
Main Authors: Zheng, Jiaping, Yu, Hong
Format: Online Article Text
Language: English
Published: JMIR Publications 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889493/
https://www.ncbi.nlm.nih.gov/pubmed/29572199
http://dx.doi.org/10.2196/medinform.8611
author Zheng, Jiaping
Yu, Hong
collection PubMed
description BACKGROUND: The use of electronic health record (EHR) systems with patient engagement capabilities, including viewing, downloading, and transmitting health information, has recently grown tremendously. However, using these resources to engage patients in managing their own health remains challenging due to the complex and technical nature of the EHR narratives. OBJECTIVE: Our objective was to develop a machine learning–based system to assess readability levels of complex documents such as EHR notes. METHODS: We collected difficulty ratings of EHR notes and Wikipedia articles using crowdsourcing from 90 readers. We built a supervised model to assess readability based on relative orders of text difficulty using both surface text features and word embeddings. We evaluated system performance using the Kendall coefficient of concordance against human ratings. RESULTS: Our system achieved significantly higher concordance (.734) with human annotators than did a baseline using the Flesch-Kincaid Grade Level, a widely adopted readability formula (.531). The improvement was also consistent across different disease topics. This method’s concordance with an individual human user’s ratings was also higher than the concordance between different human annotators (.658). CONCLUSIONS: We explored methods to automatically assess the readability levels of clinical narratives. Our ranking-based system using simple textual features and easy-to-learn word embeddings outperformed a widely used readability formula. Our ranking-based method can predict relative difficulties of medical documents. It is not constrained to a predefined set of readability levels, a common design in many machine learning–based systems. Furthermore, the feature set does not rely on complex processing of the documents. One potential application of our readability ranking is personalization, allowing patients to better accommodate their own background knowledge.
format Online
Article
Text
id pubmed-5889493
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-5889493 2018-04-09 Assessing the Readability of Medical Documents: A Ranking Approach Zheng, Jiaping Yu, Hong JMIR Med Inform Original Paper JMIR Publications 2018-03-23 /pmc/articles/PMC5889493/ /pubmed/29572199 http://dx.doi.org/10.2196/medinform.8611 Text en ©Jiaping Zheng, Hong Yu. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 23.03.2018. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included.
title Assessing the Readability of Medical Documents: A Ranking Approach
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889493/
https://www.ncbi.nlm.nih.gov/pubmed/29572199
http://dx.doi.org/10.2196/medinform.8611
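For reference, the two evaluation measures named in the abstract are standard; their usual definitions are sketched below (the paper may apply variants, for example a tie-corrected concordance). The Flesch-Kincaid Grade Level baseline is

\[ \mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59, \]

and the Kendall coefficient of concordance for \(m\) raters ranking \(n\) documents is

\[ W = \frac{12\sum_{i=1}^{n}\left(R_i - \frac{m(n+1)}{2}\right)^{2}}{m^{2}\left(n^{3}-n\right)}, \]

where \(R_i\) is the sum of the ranks assigned to document \(i\) by all raters; \(W\) ranges from 0 (no agreement) to 1 (complete agreement).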