
Validity and Reproducibility of the Peer Assessment Rating Index Scored on Digital Models Using a Software Compared with Traditional Manual Scoring

The aim of this study was to assess the validity and reproducibility of digital scoring of the Peer Assessment Rating (PAR) index and its components using software, compared with conventional manual scoring on printed model equivalents. The PAR index was scored on 15 cases at pre- and post-treatment stages by two operators using two methods: first, digitally, on direct digital models using Ortho Analyzer software; and second, manually, on printed model equivalents using a digital caliper. All measurements were repeated at a one-week interval. Paired-sample t-tests were used to compare PAR scores and their components between both methods and raters. Intra-class correlation coefficients (ICC) were used to compute intra- and inter-rater reproducibility. The error of the method was calculated. The agreement between both methods was analyzed using Bland-Altman plots. There were no significant differences in the mean PAR scores between both methods and both raters. ICCs for intra- and inter-rater reproducibility were excellent (≥0.95). All error-of-the-method values were smaller than the associated minimum standard deviation. Bland-Altman plots confirmed the validity of the measurements. PAR scoring on digital models showed excellent validity and reproducibility compared with manual scoring on printed model equivalents by means of a digital caliper.
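The abstract names three agreement statistics: paired-sample t-tests, intra-class correlation coefficients (ICC), and Bland-Altman limits of agreement. As a rough illustration only (this is not the authors' analysis code, and the placeholder scores below are invented, not study data), these statistics could be computed in Python roughly as follows:

    # Illustrative sketch: paired t-test, ICC(2,1), and Bland-Altman limits of
    # agreement for two scoring methods applied to the same cases.
    # The score arrays are made-up placeholders, not data from the study.
    import numpy as np
    from scipy import stats

    # Hypothetical PAR totals for 15 cases scored with each method.
    digital = np.array([28, 31, 24, 35, 19, 27, 33, 22, 30, 26, 21, 29, 34, 25, 32], dtype=float)
    manual  = np.array([29, 30, 25, 34, 20, 27, 32, 23, 31, 25, 22, 28, 35, 24, 33], dtype=float)

    # Paired-sample t-test: is the mean PAR score different between methods?
    t_stat, p_value = stats.ttest_rel(digital, manual)

    # Two-way ANOVA decomposition for ICC(2,1) (absolute agreement, single measure).
    Y = np.column_stack([digital, manual])   # n cases x k methods
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # between-case
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # between-method
    sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    icc_2_1 = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Bland-Altman: bias and 95% limits of agreement between the two methods.
    diff = digital - manual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)

    print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
    print(f"ICC(2,1) = {icc_2_1:.3f}")
    print(f"Bland-Altman bias = {bias:.2f}, "
          f"limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")

In the study itself the ICC was interpreted against the ≥0.95 threshold reported in the abstract, and the Bland-Altman plots were used to inspect agreement between digital and manual scoring.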


Bibliographic Details
Main Authors: Gera, Arwa; Gera, Shadi; Dalstra, Michel; Cattaneo, Paolo M.; Cornelis, Marie A.
Format: Online Article (Text)
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8070578/
https://www.ncbi.nlm.nih.gov/pubmed/33924334
http://dx.doi.org/10.3390/jcm10081646
Published in: J Clin Med, 13 April 2021
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).