Interpretability With Accurate Small Models
Models often need to be constrained to a certain size to be considered interpretable. For example, a decision tree of depth 5 is much easier to understand than one of depth 50. Limiting model size, however, often reduces accuracy. We suggest a practical technique that minimizes this trade-off…
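The trade-off the abstract describes can be seen directly by training trees of different depths. The following is an illustrative sketch only, not the paper's technique: it fits a depth-limited "interpretable" tree and an unconstrained one on a synthetic dataset (all names and parameters here are assumptions, using scikit-learn) and compares their test accuracy.

```python
# Illustrative only: size/accuracy trade-off for decision trees.
# Not the method from the paper; synthetic data via scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A synthetic binary classification task.
X, y = make_classification(
    n_samples=2000, n_features=20, n_informative=10, random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shallow (interpretable) tree vs an unconstrained one.
for depth in (5, None):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    print(f"max_depth={depth}: "
          f"depth={tree.get_depth()}, "
          f"test accuracy={tree.score(X_te, y_te):.3f}")
```

Typically the deeper tree reaches higher (or at least equal) training fit while being far harder to read; the depth-5 tree can be printed and inspected node by node.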
Main authors: Ghose, Abhishek; Ravindran, Balaraman
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Online access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861231/
- https://www.ncbi.nlm.nih.gov/pubmed/33733123
- http://dx.doi.org/10.3389/frai.2020.00003
Similar items
- Single Shot Corrective CNN for Anatomically Correct 3D Hand Pose Estimation
  by: Isaac, Joseph H. R., et al. Published: (2022)
- An Interpretable Predictive Model of Vaccine Utilization for Tanzania
  by: Hariharan, Ramkumar, et al. Published: (2020)
- No silver bullet: interpretable ML models must be explained
  by: Marques-Silva, Joao, et al. Published: (2023)
- Interpreting vision and language generative models with semantic visual priors
  by: Cafagna, Michele, et al. Published: (2023)
- Toward the appropriate interpretation of Alphafold2
  by: Xu, Tian, et al. Published: (2023)