Comparison of the Evaluations of a Case-Based Reasoning Decision Support Tool by Specialist Expert Reviewers with Those of End Users
BACKGROUND: Decision-support tools (DST) are typically developed by computer engineers for use by clinicians. Prototype testing of DSTs can be performed relatively easily by one or two clinical experts. The costly alternative is to test each prototype on a larger number of diverse clinicians, on the untested assumption that their evaluations would more accurately reflect those of actual end users.

HYPOTHESIS: We hypothesized substantial or better agreement (defined as a κ statistic greater than 0.6) between the evaluations of a case-based reasoning (CBR) DST predicting ED admission for bronchiolitis made by clinically diverse end users and those of two clinical experts who evaluated the same DST output.

METHODS: Three outputs from a previously described DST were evaluated by the emergency physicians (EPs) who originally saw the patients and by two pediatric EPs with an interest in bronchiolitis. The DST outputs were: the predicted disposition, an example of another previously seen patient offered to explain the prediction, and explanatory dialog. Each was rated on the scale Definitely Not, No, Maybe, Yes, and Absolutely, which was converted to a Likert scale for analysis. Agreement was measured using the κ statistic.

RESULTS: Agreement between end users and the expert reviewers was moderate for the DST's predicted disposition, but only fair or poor for the value of the explanatory case and dialog.

CONCLUSION: Agreement between expert evaluators and end users on the value of a CBR DST's predicted dispositions was moderate. For the more subjective explanatory components, agreement was fair, poor, or worse.
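The study's agreement threshold (κ > 0.6 for substantial agreement) can be illustrated with a minimal Cohen's kappa calculation for two raters. The ratings below are invented for illustration only and are not the study's data; the 1–5 mapping of the verbal scale is an assumption.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings on the paper's 5-point scale mapped to Likert values
# (Definitely Not=1, No=2, Maybe=3, Yes=4, Absolutely=5).
end_user = [4, 5, 3, 4, 2, 4, 5, 3, 4, 4]
expert   = [4, 4, 3, 4, 2, 5, 5, 2, 4, 3]
print(f"kappa = {cohens_kappa(end_user, expert):.2f}")  # → kappa = 0.43
```

On these made-up ratings κ falls in the 0.41–0.60 band conventionally labeled "moderate", below the study's prespecified 0.6 threshold for "substantial" agreement.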
Main Authors: Walsh, Paul; Doyle, Donal; McQuillen, Kemedy K.; Bigler, Joshua; Thompson, Caleb; Lin, Ed; Cunningham, Padraig
Format: Text
Language: English
Published: Department of Emergency Medicine, University of California, Irvine School of Medicine, 2008
Subjects: Original Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2672244/ https://www.ncbi.nlm.nih.gov/pubmed/19561711
author | Walsh, Paul Doyle, Donal McQuillen, Kemedy K. Bigler, Joshua Thompson, Caleb Lin, Ed Cunningham, Padraig |
collection | PubMed |
description | BACKGROUND: Decision-support tools (DST) are typically developed by computer engineers for use by clinicians. Prototype testing of DSTs can be performed relatively easily by one or two clinical experts. The costly alternative is to test each prototype on a larger number of diverse clinicians, on the untested assumption that their evaluations would more accurately reflect those of actual end users. HYPOTHESIS: We hypothesized substantial or better agreement (defined as a κ statistic greater than 0.6) between the evaluations of a case-based reasoning (CBR) DST predicting ED admission for bronchiolitis made by clinically diverse end users and those of two clinical experts who evaluated the same DST output. METHODS: Three outputs from a previously described DST were evaluated by the emergency physicians (EPs) who originally saw the patients and by two pediatric EPs with an interest in bronchiolitis. The DST outputs were: the predicted disposition, an example of another previously seen patient offered to explain the prediction, and explanatory dialog. Each was rated on the scale Definitely Not, No, Maybe, Yes, and Absolutely, which was converted to a Likert scale for analysis. Agreement was measured using the κ statistic. RESULTS: Agreement between end users and the expert reviewers was moderate for the DST's predicted disposition, but only fair or poor for the value of the explanatory case and dialog. CONCLUSION: Agreement between expert evaluators and end users on the value of a CBR DST's predicted dispositions was moderate. For the more subjective explanatory components, agreement was fair, poor, or worse. |
format | Text |
id | pubmed-2672244 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2008 |
publisher | Department of Emergency Medicine, University of California, Irvine School of Medicine |
record_format | MEDLINE/PubMed |
spelling | pubmed-2672244 2009-06-24 Comparison of the Evaluations of a Case-Based Reasoning Decision Support Tool by Specialist Expert Reviewers with Those of End Users Walsh, Paul; Doyle, Donal; McQuillen, Kemedy K.; Bigler, Joshua; Thompson, Caleb; Lin, Ed; Cunningham, Padraig West J Emerg Med, Original Research. Department of Emergency Medicine, University of California, Irvine School of Medicine 2008-05 /pmc/articles/PMC2672244/ /pubmed/19561711 Text en Copyright © 2008 the authors. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution-NonCommercial (CC BY-NC 4.0) License. See: http://creativecommons.org/licenses/by-nc/4.0/. |
title | Comparison of the Evaluations of a Case-Based Reasoning Decision Support Tool by Specialist Expert Reviewers with Those of End Users |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2672244/ https://www.ncbi.nlm.nih.gov/pubmed/19561711 |