Interpretation of Machine Learning Models for Data Sets with Many Features Using Feature Importance
Feature importance (FI) is used to interpret the machine learning model y = f(x) constructed between the explanatory variables or features, x, and the objective variables, y. For a large number of features, interpreting the model in the order of increasing FI is inefficient when th...
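The abstract refers to feature importance (FI) as the basis for interpreting a trained model y = f(x). As a minimal sketch of the general FI idea only (not the method proposed in this article), the snippet below computes permutation importance with scikit-learn on a synthetic data set; the data, model, and parameter choices are placeholders.

```python
# Minimal sketch: feature importance via permutation importance.
# Illustrates the general FI concept only; the data set and model
# below are arbitrary placeholders, not the article's method.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic regression data: features x and objective variable y
X, y = make_regression(n_samples=300, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model y = f(x)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: drop in test score when each feature is shuffled
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# List features in descending order of mean importance
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: FI = {result.importances_mean[i]:.3f}")
```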
Main Author: Kaneko, Hiromasa
Format: Online Article, Text
Language: English
Published: American Chemical Society, 2023
Online Access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10308517/
- https://www.ncbi.nlm.nih.gov/pubmed/37396269
- http://dx.doi.org/10.1021/acsomega.3c03722
Similar Items

- Predicting melanoma survival and metastasis with interpretable histopathological features and machine learning models
  by: Couetil, Justin, et al.
  Published: (2023)
- Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity
  by: Webb, Samuel J, et al.
  Published: (2014)
- Evaluating annotations of an Agilent expression chip suggests that many features cannot be interpreted
  by: Gertz, E Michael, et al.
  Published: (2009)
- Predicting diabetic retinopathy and identifying interpretable biomedical features using machine learning algorithms
  by: Tsao, Hsin-Yi, et al.
  Published: (2018)
- Using machine learning analysis to interpret the relationship between music emotion and lyric features
  by: Xu, Liang, et al.
  Published: (2021)