
Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images

Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches...
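The abstract describes segmenting each face image into dynamic-sized horizontal patches ("shutter blinds") before deep feature extraction. The splitting step can be sketched as below; this is an illustrative reconstruction, not the authors' code: the even-split height scheme is an assumption (the paper's exact dynamic sizing rule is not reproduced), and `feature_fn` is a hypothetical stand-in for the pretrained DarkNet19 extractor.

```python
import numpy as np

def shutter_blind_patches(image, num_patches=4):
    """Split an image into `num_patches` horizontal strips ("shutter blinds").

    Strip heights are dynamic in the sense that they are derived from the
    image height: np.array_split yields strips whose heights differ by at
    most one row, so the strips tile the image exactly even when the height
    is not evenly divisible by `num_patches`.
    """
    return np.array_split(image, num_patches, axis=0)

def extract_features(patches, feature_fn):
    """Apply a feature extractor (stand-in for a pretrained CNN) to each
    patch and concatenate the per-patch feature vectors."""
    return np.concatenate([feature_fn(p) for p in patches])

# Toy usage: a 100x64 grayscale "face" and a trivial stand-in extractor
# (mean and standard deviation per patch instead of CNN activations).
img = np.arange(100 * 64, dtype=float).reshape(100, 64)
patches = shutter_blind_patches(img, num_patches=4)
features = extract_features(patches, lambda p: np.array([p.mean(), p.std()]))
```

In the full pipeline described by the abstract, the concatenated patch features would then pass through iterative neighborhood component analysis for feature selection and a fine k-nearest neighbor classifier with tenfold cross-validation.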

Full description

Bibliographic Details
Main Authors: Barua, Prabal Datta, Baygin, Nursena, Dogan, Sengul, Baygin, Mehmet, Arunkumar, N., Fujita, Hamido, Tuncer, Turker, Tan, Ru-San, Palmer, Elizabeth, Azizan, Muhammad Mokhzaini Bin, Kadri, Nahrizul Adib, Acharya, U. Rajendra
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9568538/
https://www.ncbi.nlm.nih.gov/pubmed/36241674
http://dx.doi.org/10.1038/s41598-022-21380-4
author Barua, Prabal Datta
Baygin, Nursena
Dogan, Sengul
Baygin, Mehmet
Arunkumar, N.
Fujita, Hamido
Tuncer, Turker
Tan, Ru-San
Palmer, Elizabeth
Azizan, Muhammad Mokhzaini Bin
Kadri, Nahrizul Adib
Acharya, U. Rajendra
collection PubMed
description Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches or “shutter blinds”. A lightweight deep network DarkNet19 pre-trained on ImageNet1K was used to generate deep features from the shutter blinds and the undivided resized segmented input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis, which were then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases—University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and Denver Intensity of Spontaneous Facial Action Database—which both comprised four pain intensity classes that had been labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy rates on both datasets. The excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
format Online
Article
Text
id pubmed-9568538
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9568538 2022-10-16 Sci Rep Article
Nature Publishing Group UK 2022-10-14 /pmc/articles/PMC9568538/ /pubmed/36241674 http://dx.doi.org/10.1038/s41598-022-21380-4 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. Third-party material is included in the article's licence unless indicated otherwise in a credit line; uses beyond the licence or statutory regulation require permission from the copyright holder.
title Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images
topic Article