Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development
BACKGROUND: The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach that promotes credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, combining machine learning with knowledge graphs that encode common-sense knowledge, semantic reasoning, and causality models is a potential solution to this problem.

OBJECTIVE: In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance.

METHODS: We used concepts from an ontology that we developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology.

RESULTS: To showcase the framework's functionalities, we present a prototype design and demonstrate its main features through four use case scenarios motivated by an initiative currently implemented at a children's hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and usefulness of the implementation.

CONCLUSIONS: This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners' ability to provide explanations for the decisions they make.
Main Authors: | Ammar, Nariman; Shaban-Nejad, Arash |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications, 2020 |
Subjects: | Original Paper |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7673979/ https://www.ncbi.nlm.nih.gov/pubmed/33146623 http://dx.doi.org/10.2196/18752 |
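The METHODS portion of the abstract above notes that the prototype's question-answering agent was built and trained on ontology concepts using the Google DialogFlow engine. This record does not include the authors' code, so the following Python sketch is only a hypothetical illustration of how a client could send a caregiver's question to such an agent through the google-cloud-dialogflow library; the project ID, session ID, and question text are placeholders, not values from the study.

```python
# Hypothetical sketch: sending one text query to a DialogFlow
# question-answering agent. Project/session IDs and the question are
# placeholders, not values from the study.
from google.cloud import dialogflow  # pip install google-cloud-dialogflow


def ask_agent(project_id: str, session_id: str, question: str) -> None:
    """Send a single text query to a DialogFlow agent and print its answer."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the raw question text in the request types DialogFlow expects.
    text_input = dialogflow.TextInput(text=question, language_code="en")
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    print(f"Matched intent: {result.intent.display_name} "
          f"(confidence {result.intent_detection_confidence:.2f})")
    print(f"Agent answer:   {result.fulfillment_text}")


if __name__ == "__main__":
    ask_agent("my-gcp-project", "demo-session",
              "What resources help a child cope with parental incarceration?")
```

DialogFlow returns the matched intent and its confidence alongside the fulfillment text, which is one place an explainability-oriented front end could surface why a particular answer was chosen.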
_version_ | 1783611427609640960 |
---|---|
author | Ammar, Nariman; Shaban-Nejad, Arash
author_facet | Ammar, Nariman; Shaban-Nejad, Arash
author_sort | Ammar, Nariman |
collection | PubMed |
description | BACKGROUND: The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach that promotes credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, combining machine learning with knowledge graphs that encode common-sense knowledge, semantic reasoning, and causality models is a potential solution to this problem. OBJECTIVE: In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance. METHODS: We used concepts from an ontology that we developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology. RESULTS: To showcase the framework's functionalities, we present a prototype design and demonstrate its main features through four use case scenarios motivated by an initiative currently implemented at a children's hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and usefulness of the implementation. CONCLUSIONS: This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners' ability to provide explanations for the decisions they make. |
format | Online Article Text |
id | pubmed-7673979 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-7673979 (2020-11-20). Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development. Ammar, Nariman; Shaban-Nejad, Arash. JMIR Med Inform, Original Paper. [Abstract as in the description field above.] JMIR Publications, 2020-11-04. /pmc/articles/PMC7673979/ /pubmed/33146623 http://dx.doi.org/10.2196/18752 Text en. © Nariman Ammar, Arash Shaban-Nejad. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 04.11.2020. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on http://medinform.jmir.org/, as well as this copyright and license information must be included. |
spellingShingle | Original Paper; Ammar, Nariman; Shaban-Nejad, Arash; Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development
title / title_full / title_fullStr / title_full_unstemmed / title_short | Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development
title_sort | explainable artificial intelligence recommendation system by leveraging the semantics of adverse childhood experiences: proof-of-concept prototype development |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7673979/ https://www.ncbi.nlm.nih.gov/pubmed/33146623 http://dx.doi.org/10.2196/18752 |
work_keys_str_mv | AT ammarnariman explainableartificialintelligencerecommendationsystembyleveragingthesemanticsofadversechildhoodexperiencesproofofconceptprototypedevelopment AT shabannejadarash explainableartificialintelligencerecommendationsystembyleveragingthesemanticsofadversechildhoodexperiencesproofofconceptprototypedevelopment |
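The abstract also describes knowledge graph generation and recommendation components that leverage third-party graph technology, with the graph's semantics supplying the explanation for each recommendation. The authors' ontology and graph stack are not part of this record, so the sketch below is purely illustrative: it uses the rdflib library with an invented ace: namespace to build a toy graph linking one adverse childhood experience to an evidence-backed intervention, and a SPARQL query whose matched triples double as a human-readable justification.

```python
# Illustrative sketch only: a toy ACE knowledge graph and an explainable
# recommendation query. The ace: namespace, class names, triples, and the
# evidence identifier are invented for this example; they are not the
# authors' ontology or data.
from rdflib import Graph, Literal, Namespace  # pip install rdflib
from rdflib.namespace import RDF, RDFS

ACE = Namespace("http://example.org/ace#")
g = Graph()
g.bind("ace", ACE)

# A child experiences an adversity; an intervention addresses it and is
# backed by a (hypothetical) published study.
g.add((ACE.ParentalIncarceration, RDF.type, ACE.AdverseChildhoodExperience))
g.add((ACE.MentoringProgram, RDF.type, ACE.Intervention))
g.add((ACE.MentoringProgram, ACE.addresses, ACE.ParentalIncarceration))
g.add((ACE.MentoringProgram, RDFS.label, Literal("Community mentoring program")))
g.add((ACE.MentoringProgram, ACE.supportedBy, Literal("hypothetical-study-id-123")))

# Recommend interventions for a given ACE. The bound triples that satisfy
# this pattern are the explanation: which adversity was matched, which
# intervention addresses it, and which evidence supports it.
QUERY = """
PREFIX ace:  <http://example.org/ace#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label ?evidence WHERE {
    ?intervention a ace:Intervention ;
                  ace:addresses ace:ParentalIncarceration ;
                  rdfs:label ?label ;
                  ace:supportedBy ?evidence .
}
"""
for row in g.query(QUERY):
    print(f"Recommend: {row.label} (evidence: {row.evidence})")
```

Because the recommendation is just the result of a declarative graph pattern, the same pattern can be rendered back to a practitioner as the "why" behind the suggestion, which is the explainability property the abstract emphasizes.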