Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study
We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) using various configurations including model size (small vs. large), training epochs (1 vs. 100), and quantizat...
Main Authors: | Kim, Hee E.; Maros, Mate E.; Miethke, Thomas; Kittel, Maximilian; Siegel, Fabian; Ganslandt, Thomas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10215960/ https://www.ncbi.nlm.nih.gov/pubmed/37239004 http://dx.doi.org/10.3390/biomedicines11051333 |
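
The abstract describes benchmarking small and large vision transformer variants under float32 and int8 post-training quantization and reporting throughput in frames per second (FPS). As a minimal sketch only, and not the authors' published pipeline, the example below shows how per-tensor dynamic int8 quantization and a rough single-image FPS measurement could be set up in PyTorch; the torchvision `vit_b_16` backbone and the `fps` helper are stand-ins chosen for illustration.

```python
# Illustrative sketch (not the paper's code): dynamic per-tensor int8
# quantization of a vision transformer and a rough CPU FPS measurement.
import time

import torch
import torchvision

# Stand-in backbone; the study compared BEiT, DeiT, MobileViT, PoolFormer,
# Swin and ViT, which would be loaded analogously (e.g. via timm).
model = torchvision.models.vit_b_16(weights=None).eval()

# Post-training dynamic quantization: Linear weights are stored as int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def fps(m: torch.nn.Module, runs: int = 20) -> float:
    """Frames per second for single-image CPU inference."""
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        m(x)  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return runs / (time.perf_counter() - start)

print(f"float32: {fps(model):.1f} FPS | int8: {fps(quantized):.1f} FPS")
```

Channel-wise weight quantization, which the paper also compares against tensor-wise quantization, would typically go through PyTorch's static, per-channel qconfig path rather than the per-tensor dynamic API sketched here.
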
_version_ | 1785048187277410304 |
---|---|
author | Kim, Hee E.; Maros, Mate E.; Miethke, Thomas; Kittel, Maximilian; Siegel, Fabian; Ganslandt, Thomas |
author_facet | Kim, Hee E.; Maros, Mate E.; Miethke, Thomas; Kittel, Maximilian; Siegel, Fabian; Ganslandt, Thomas |
author_sort | Kim, Hee E. |
collection | PubMed |
description | We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) using various configurations including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8 on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNNs), ResNet and ConvNeXT. An overall overview of performance, including accuracy, inference time, and model size, was also visualized. Frames per second (FPS) of small models consistently surpassed their large counterparts by a factor of 1-2×. DeiT small was the fastest VT in int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings even on smaller datasets. |
format | Online Article Text |
id | pubmed-10215960 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10215960 2023-05-27 Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study Kim, Hee E.; Maros, Mate E.; Miethke, Thomas; Kittel, Maximilian; Siegel, Fabian; Ganslandt, Thomas Biomedicines Article We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) using various configurations including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8 on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNNs), ResNet and ConvNeXT. An overall overview of performance, including accuracy, inference time, and model size, was also visualized. Frames per second (FPS) of small models consistently surpassed their large counterparts by a factor of 1-2×. DeiT small was the fastest VT in int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings even on smaller datasets. MDPI 2023-04-30 /pmc/articles/PMC10215960/ /pubmed/37239004 http://dx.doi.org/10.3390/biomedicines11051333 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Kim, Hee E.; Maros, Mate E.; Miethke, Thomas; Kittel, Maximilian; Siegel, Fabian; Ganslandt, Thomas Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title | Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title_full | Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title_fullStr | Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title_full_unstemmed | Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title_short | Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study |
title_sort | lightweight visual transformers outperform convolutional neural networks for gram-stained image classification: an empirical study |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10215960/ https://www.ncbi.nlm.nih.gov/pubmed/37239004 http://dx.doi.org/10.3390/biomedicines11051333 |
work_keys_str_mv | AT kimheee lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy AT marosmatee lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy AT miethkethomas lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy AT kittelmaximilian lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy AT siegelfabian lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy AT ganslandtthomas lightweightvisualtransformersoutperformconvolutionalneuralnetworksforgramstainedimageclassificationanempiricalstudy |