
Explainable AI: A Neurally-Inspired Decision Stack Framework


Bibliographic Details
Main Authors: Khan, Muhammad Salar; Nayebpour, Mehdi; Li, Meng-Hao; El-Amine, Hadi; Koizumi, Naoru; Olds, James L.
Format: Online Article (Text)
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9496620/
https://www.ncbi.nlm.nih.gov/pubmed/36134931
http://dx.doi.org/10.3390/biomimetics7030127
Description
Summary: European law now requires AI to be explainable in the context of adverse decisions affecting European Union (EU) citizens. At the same time, we expect increasing instances of AI failure as it operates on imperfect data. This paper puts forward a neurally inspired theoretical framework called “decision stacks” that can provide a way forward in research to develop Explainable Artificial Intelligence (X-AI). By leveraging findings from the finest memory systems in biological brains, the decision stack framework operationalizes the definition of explainability. It then proposes a test that can potentially reveal how a given AI decision was made.
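
The abstract describes a "decision stack" that operationalizes explainability and a test that could reveal how a given AI decision was made. The following is a minimal, purely illustrative Python sketch of that general idea: each layer of a toy decision pipeline records its input, output, and rationale on a stack, and replaying the stack yields a trace of how the final decision was reached. All class names, layers, and thresholds here are assumptions for illustration, not the authors' formalism.

# Illustrative sketch only; structure and names are assumptions, not the paper's framework.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Frame:
    """One layer's contribution to the overall decision."""
    layer: str
    inputs: Any
    output: Any
    rationale: str


@dataclass
class DecisionStack:
    frames: List[Frame] = field(default_factory=list)

    def push(self, layer: str, inputs: Any, output: Any, rationale: str) -> Any:
        # Record the layer's contribution, then pass its output onward.
        self.frames.append(Frame(layer, inputs, output, rationale))
        return output

    def explain(self) -> str:
        """Replay the stack as a human-readable trace of the decision."""
        return "\n".join(
            f"{i}. [{f.layer}] {f.inputs!r} -> {f.output!r} because {f.rationale}"
            for i, f in enumerate(self.frames, 1)
        )


def decide(applicant: dict, stack: DecisionStack) -> bool:
    # Layer 1: feature extraction (hypothetical credit example).
    ratio = stack.push(
        "features", applicant, applicant["debt"] / applicant["income"],
        "debt-to-income ratio summarizes repayment capacity",
    )
    # Layer 2: a simple threshold rule standing in for a learned model.
    approve = stack.push(
        "model", ratio, ratio < 0.4,
        "ratios below 0.4 are treated as acceptable risk",
    )
    # Layer 3: final policy check.
    return stack.push(
        "policy", approve, approve and applicant["income"] > 0,
        "approval requires positive declared income",
    )


if __name__ == "__main__":
    stack = DecisionStack()
    decision = decide({"income": 50_000, "debt": 30_000}, stack)
    print("approved:", decision)
    print(stack.explain())

In this sketch, the "test" of explainability amounts to checking that every layer that influenced the outcome left an inspectable frame on the stack; how the paper itself defines and applies such a test is detailed in the full text linked above.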