Advancing Computational Toxicology by Interpretable Machine Learning

Chemical toxicity evaluations for drugs, consumer products, and environmental chemicals have a critical impact on human health. Traditional animal models to evaluate chemical toxicity are expensive, time-consuming, and often fail to detect toxicants in humans. Computational toxicology is a promising alternative approach that utilizes machine learning (ML) and deep learning (DL) techniques to predict the toxicity potentials of chemicals. Although the applications of ML- and DL-based computational models in chemical toxicity predictions are attractive, many toxicity models are "black boxes" in nature and difficult for toxicologists to interpret, which hampers chemical risk assessments using these models. Recent progress in interpretable ML (IML) in the computer science field meets this urgent need to unveil the underlying toxicity mechanisms and elucidate the domain knowledge of toxicity models. In this review, we focus on the applications of IML in computational toxicology, including toxicity feature data, model interpretation methods, the use of knowledge base frameworks in IML development, and recent applications. The challenges and future directions of IML modeling in toxicology are also discussed. We hope this review will encourage efforts to develop interpretable models with new IML algorithms that can assist new chemical assessments by illustrating toxicity mechanisms in humans.
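The abstract refers to model interpretation methods that open up otherwise "black box" toxicity predictors. As a purely illustrative sketch, not taken from the reviewed article, the following Python snippet shows one such model-agnostic approach, permutation feature importance, applied to a toy toxicity classifier; the fingerprint data, toxicity labels, and parameter choices are all hypothetical placeholders for demonstration.

```python
# Illustrative sketch only (not from the paper): post hoc interpretation of a
# toxicity classifier via permutation feature importance, one model-agnostic
# IML technique. All data and feature indices below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical binary substructure fingerprints for 500 chemicals (64 bits each).
X = rng.integers(0, 2, size=(500, 64))
# Hypothetical toxicity labels driven mostly by bits 3 and 17 plus noise,
# so the interpretation step should recover those bits as influential.
logits = 2.5 * X[:, 3] + 1.8 * X[:, 17] - 2.0 + rng.normal(0, 0.5, size=500)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# "Black box" predictor: a random forest trained on the fingerprints.
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Post hoc interpretation: shuffle each feature and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"fingerprint bit {i:2d}: mean importance {result.importances_mean[i]:.3f}")
```

In practice, fingerprint bits flagged as influential by such an analysis can be mapped back to chemical substructures and checked against known toxicity mechanisms, which is the kind of mechanistic interpretation the review discusses.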

Bibliographic Details
Main Authors: Jia, Xuelian; Wang, Tong; Zhu, Hao
Format: Online Article Text
Language: English
Journal: Environ Sci Technol
Published: American Chemical Society, 2023-05-24
Collection: PubMed (National Center for Biotechnology Information); record ID pubmed-10666545; record format MEDLINE/PubMed
License: © 2023 The Authors. Published by American Chemical Society under the Creative Commons Attribution 4.0 license (https://creativecommons.org/licenses/by/4.0/), which permits the broadest form of re-use, including for commercial purposes, provided that author attribution and integrity are maintained.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10666545/
https://www.ncbi.nlm.nih.gov/pubmed/37224004
http://dx.doi.org/10.1021/acs.est.3c00653