
Tell Me, What Do You See?—Interpretable Classification of Wiring Harness Branches with Deep Neural Networks


Bibliographic Details
Main Authors: Kicki, Piotr, Bednarek, Michał, Lembicz, Paweł, Mierzwiak, Grzegorz, Szymko, Amadeusz, Kraft, Marek, Walas, Krzysztof
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8271466/
https://www.ncbi.nlm.nih.gov/pubmed/34202713
http://dx.doi.org/10.3390/s21134327
Description
Summary: In the context of the robotisation of industrial operations involving the manipulation of deformable linear objects, there is a need for sophisticated machine vision systems that can classify wiring harness branches and indicate where to place them in the assembly process. However, industrial applications require the predictions of a machine learning system to be interpretable, as the user wants to know the underlying reason for the system's decision. To address this issue, we propose several different neural network architectures and test them on our novel dataset. We conducted experiments to assess the influence of input modality and data fusion type, as well as the impact of data augmentation and pretraining. The network output is evaluated in terms of classification performance and is accompanied by saliency maps, which give the user in-depth insight into the classifier's operation, providing a way to explain the responses of the deep neural network and to make its predictions interpretable by humans.
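The abstract mentions saliency maps as the interpretability mechanism. As a rough illustration of the general idea (not the authors' specific method), a gradient-based saliency map scores each input element by how strongly the target-class score depends on it. The sketch below, assuming NumPy and a toy linear classifier with hypothetical names `W` and `x`, shows this for logits `z = W @ x`, where the gradient of the class-`c` logit with respect to the input is simply row `W[c]`; for deep networks the same gradient would be obtained by backpropagation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def saliency_map(W, target_class):
    """Absolute gradient of the target-class logit w.r.t. each input element.

    For a linear model z = W @ x, d z_c / d x_i = W[c, i], so the saliency
    is |W[target_class]|. In a deep network this gradient would instead be
    computed by backpropagating from the class score to the input image.
    """
    return np.abs(W[target_class])

# Toy setup: 3 classes, 8 flattened "pixels" (illustrative sizes only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)

probs = softmax(W @ x)
pred = int(np.argmax(probs))          # predicted branch class
sal = saliency_map(W, pred)           # one saliency value per input element
```

Reshaped back to the input image's dimensions, such per-pixel scores can be overlaid on the image to show which regions drove the classification, which is the kind of human-interpretable explanation the abstract describes.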