Explainable Agents for Less Bias in Human-Agent Decision Making
As autonomous agents become more self-governing, ubiquitous and sophisticated, it is vital that humans should have effective interactions with them. Agents often use Machine Learning (ML) for acquiring expertise, but traditional ML methods produce opaque results which are difficult to interpret. Hen...
Main Authors: Malhi, Avleen; Knapic, Samanta; Främling, Kary
Format: Online Article Text
Language: English
Published: 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338195/ http://dx.doi.org/10.1007/978-3-030-51924-7_8
Similar Items
- Automated IoT Device Identification Based on Full Packet Information Using Real-Time Network Traffic
  by: Yousefnezhad, Narges, et al.
  Published: (2021)
- Decision Theory Meets Explainable AI
  by: Främling, Kary
  Published: (2020)
- A Comprehensive Security Architecture for Information Management throughout the Lifecycle of IoT Products
  by: Yousefnezhad, Narges, et al.
  Published: (2023)
- A Novel LSTM for Multivariate Time Series with Massive Missingness
  by: Fouladgar, Nazanin, et al.
  Published: (2020)
- Explainable, transparent autonomous agents and multi-agent systems: first international workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13-14, 2019, revised selected papers
  by: Calvaresi, Davide, et al.
  Published: (2019)