
Optimisation of Deep Learning Small-Object Detectors with Novel Explainable Verification

In this paper, we present a novel methodology based on machine learning for identifying the most appropriate detector from a set of available state-of-the-art object detectors for a given application. Our particular interest is to develop a road map for identifying verifiably optimal selections, especially for challenging applications such as detecting small objects in a mixed-size object dataset. State-of-the-art object detection systems often find the localisation of small-size objects challenging, since most are trained on large-size objects. These contain abundant information, as they occupy a large number of pixels relative to the total image size, a fact the model normally exploits during training and inference. To dissect and understand this process, our approach systematically examines detectors’ performances using two very distinct deep convolutional networks: the single-stage YOLO V3 and the two-stage Faster R-CNN. Specifically, our proposed method explores and visually illustrates the impact of feature extraction layers, number of anchor boxes, data augmentation, etc., utilising ideas from the field of explainable Artificial Intelligence (XAI). Our results show, for example, that multi-head YOLO V3 detectors trained on augmented data perform better even with fewer anchor boxes. Moreover, robustness regarding the detector’s ability to explain how a specific decision was reached is investigated using different explanation techniques. Finally, two new visualisation techniques, WS-Grad and Concat-Grad, are proposed for identifying the explanation cues of different detectors. These are applied to specific object detection tasks to illustrate their reliability and transparency with respect to the decision process. It is shown that the proposed techniques can produce high-resolution, comprehensive heatmaps of the image areas that significantly affect detector decisions, compared to the state-of-the-art techniques tested.
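The WS-Grad and Concat-Grad visualisations described in the abstract belong to the family of gradient-weighted heatmap methods (e.g., Grad-CAM). The sketch below is not the authors' implementation; it is a minimal, generic gradient-weighted heatmap, assuming PyTorch/torchvision and using a ResNet-50 backbone with random weights purely as a stand-in feature extractor, to illustrate the kind of explanation map the abstract refers to: gradients of a scalar score with respect to a chosen feature layer are pooled into channel weights and combined into a normalised heatmap over the input.

```python
import torch
import torch.nn.functional as F
import torchvision

# Backbone used only as a stand-in feature extractor (weights=None keeps the sketch
# self-contained; a real study would load the detector's own weights instead).
model = torchvision.models.resnet50(weights=None).eval()

features = {}

def save_activations(_module, _inputs, output):
    features["maps"] = output                                        # feature maps of the hooked layer
    output.register_hook(lambda grad: features.update(grads=grad))   # gradients w.r.t. those maps

model.layer4.register_forward_hook(save_activations)                 # last conv block as the explanation layer

x = torch.rand(1, 3, 224, 224, requires_grad=True)                   # stand-in input image
score = model(x)[0].max()                                            # top logit plays the role of a detection score
score.backward()                                                     # populates features["grads"]

weights = features["grads"].mean(dim=(2, 3), keepdim=True)           # channel weights: global-average-pooled gradients
cam = F.relu((weights * features["maps"]).sum(dim=1, keepdim=True))  # weighted sum of feature maps
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)             # heatmap normalised to [0, 1]
print(cam.shape)                                                     # torch.Size([1, 1, 224, 224])
```

For a detector such as YOLO V3 or Faster R-CNN, the scalar `score` would instead be a detection's class or objectness score, and the hooked layer a feature map feeding the relevant detection head.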


Bibliographic Details
Main Authors: Mohamed, Elhassan; Sirlantzis, Konstantinos; Howells, Gareth; Hoque, Sanaul
Format: Online, Article, Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330345/
https://www.ncbi.nlm.nih.gov/pubmed/35898097
http://dx.doi.org/10.3390/s22155596
collection PubMed
id pubmed-9330345
institution National Center for Biotechnology Information
publishDate 2022-07-26
publisher MDPI
record_format MEDLINE/PubMed
journal Sensors (Basel)
license © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).