Information Fusion-2-Text: Explainable Aggregation via Linguistic Protoforms
Main Authors:
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7274687/ ; http://dx.doi.org/10.1007/978-3-030-50153-2_9
Summary: Recent advancements and applications in artificial intelligence (AI) and machine learning (ML) have highlighted the need for explainable, interpretable, and actionable AI-ML. Most work focuses on explaining deep artificial neural networks, e.g., in visual and image captioning. In recent work, we established a set of indices and processes for explainable AI (XAI) relative to information fusion. While informative, the result is information overload, and domain expertise is required to understand the results. Herein, we explore the extraction of a reduced set of higher-level linguistic summaries to inform and improve communication with non-fusion experts. Our contribution is a proposed structure for a fusion summary and a method to extract this information from a given set of indices. To demonstrate the usefulness of the proposed methodology, we provide a case study using the fuzzy integral to combine a heterogeneous set of deep learners in remote sensing for object detection and land cover classification. This case study shows the potential of our approach to inform users about important trends and anomalies in the models, data, and fusion results. This information is critical with respect to transparency, trustworthiness, and identifying limitations of fusion techniques, which may motivate future research and innovation.
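The summary names two concrete techniques: fusing the outputs of heterogeneous deep learners with a fuzzy integral, and expressing the result as quantified linguistic summaries (protoforms). The sketch below illustrates both in a minimal form, assuming a discrete Choquet integral over a Sugeno λ-measure and a Yager-style protoform ("most models are confident"); the density values, quantifier shapes, and function names are illustrative assumptions, not the paper's actual measure or summary structure.

```python
import numpy as np

def sugeno_lambda(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero Sugeno lambda.

    densities: per-model 'worth' values g_i in (0, 1) (assumed toy inputs).
    """
    g = np.asarray(densities, dtype=float)
    s = g.sum()
    if abs(s - 1.0) < tol:
        return 0.0  # densities already sum to 1: the measure is additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if s > 1.0:                      # nonzero root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    else:                            # nonzero root lies in (0, inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0.0:
            hi *= 2.0                # expand until the root is bracketed
    while hi - lo > tol:             # plain bisection on the bracket
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet(h, densities, lam):
    """Discrete Choquet integral of supports h w.r.t. a Sugeno lambda-measure."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(densities, dtype=float)
    order = np.argsort(h)[::-1]      # visit models in descending support
    g_prev, fused = 0.0, 0.0
    for i in order:
        g_curr = g_prev + g[i] + lam * g_prev * g[i]  # measure of the growing coalition
        fused += h[i] * (g_curr - g_prev)
        g_prev = g_curr
    return fused

def most(r):
    """Fuzzy quantifier 'most' (a common piecewise-linear textbook choice)."""
    return float(np.clip((r - 0.3) / 0.5, 0.0, 1.0))

def protoform_truth(h, confident=lambda v: float(np.clip((v - 0.5) / 0.4, 0.0, 1.0))):
    """Truth of the protoform 'most models are confident' (Yager-style)."""
    satisfaction = np.mean([confident(v) for v in h])
    return most(satisfaction)

if __name__ == "__main__":
    h = [0.9, 0.8, 0.4]              # per-model confidences for one class (toy values)
    g = [0.4, 0.3, 0.2]              # per-model densities (toy values)
    lam = sugeno_lambda(g)
    print("fused support:", round(choquet(h, g, lam), 3))
    print("'most models are confident':", round(protoform_truth(h), 3))
```

In the paper's case study, the supports h would come from the heterogeneous deep learners and the measure would be derived from the fusion indices rather than hard-coded; here both are toy values chosen only to make the sketch runnable.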