Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories
| Main Authors: | Chapetta, Wladmir Araujo; das Neves, Jailton Santos; Machado, Raphael Carlos Santos |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2021 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8197301/ https://www.ncbi.nlm.nih.gov/pubmed/34074060 http://dx.doi.org/10.3390/s21113660 |
_version_ | 1783706887783448576
author | Chapetta, Wladmir Araujo; das Neves, Jailton Santos; Machado, Raphael Carlos Santos
author_facet | Chapetta, Wladmir Araujo; das Neves, Jailton Santos; Machado, Raphael Carlos Santos
author_sort | Chapetta, Wladmir Araujo |
collection | PubMed |
description | Modern sensors deployed in most Industry 4.0 applications are intelligent, meaning that they present sophisticated behavior, usually due to embedded software, and network connectivity capabilities. For that reason, the task of calibrating an intelligent sensor currently involves more than measuring physical quantities. As the behavior of modern sensors depends on embedded software, comprehensive assessments of such sensors necessarily demand the analysis of their embedded software. Interlaboratory comparisons, in turn, are comparative analyses across a body of labs involved in such assessments. While interlaboratory comparison is a well-established practice in the physical, chemical and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of accredited software analysis and testing labs is no trivial task: software is intangible, and its requirements accommodate some degree of ambiguity, inconsistency or information loss. Moreover, software testing and analysis are highly human-dependent activities. In the present work, we investigate whether it is feasible to perform interlaboratory comparisons for software assessment using quantitative performance measurements. The proposal was to evaluate each lab's competence in software code analysis activities using two quantitative metrics: code coverage and mutation score. Our results demonstrate the feasibility of establishing quantitative comparisons among accredited software analysis and testing laboratories. One of the comparison rounds was registered as a formal proficiency testing scheme in the database, the first registered proficiency testing focused on code analysis.
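For readers unfamiliar with the two metrics named in the abstract, the sketch below illustrates how code coverage and mutation score are conventionally computed. It is an illustrative example only, not taken from the paper; the function names and all figures are hypothetical.

```python
# Illustrative sketch of the two metrics named in the abstract.
# Not from the paper; all names and figures here are hypothetical.

def code_coverage(covered_lines: int, total_lines: int) -> float:
    """Fraction of executable lines exercised by a lab's test suite."""
    return covered_lines / total_lines

def mutation_score(killed: int, total_mutants: int, equivalent: int = 0) -> float:
    """Fraction of non-equivalent mutants a lab's tests detect (kill)."""
    return killed / (total_mutants - equivalent)

# Hypothetical results reported by one participating laboratory:
print(f"code coverage:  {code_coverage(412, 500):.2%}")       # 82.40%
print(f"mutation score: {mutation_score(151, 200, 12):.2%}")  # 80.32%
```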
format | Online Article Text |
id | pubmed-8197301 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8197301 2021-06-13 Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories Chapetta, Wladmir Araujo; das Neves, Jailton Santos; Machado, Raphael Carlos Santos Sensors (Basel) Article Modern sensors deployed in most Industry 4.0 applications are intelligent, meaning that they present sophisticated behavior, usually due to embedded software, and network connectivity capabilities. For that reason, the task of calibrating an intelligent sensor currently involves more than measuring physical quantities. As the behavior of modern sensors depends on embedded software, comprehensive assessments of such sensors necessarily demand the analysis of their embedded software. Interlaboratory comparisons, in turn, are comparative analyses across a body of labs involved in such assessments. While interlaboratory comparison is a well-established practice in the physical, chemical and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of accredited software analysis and testing labs is no trivial task: software is intangible, and its requirements accommodate some degree of ambiguity, inconsistency or information loss. Moreover, software testing and analysis are highly human-dependent activities. In the present work, we investigate whether it is feasible to perform interlaboratory comparisons for software assessment using quantitative performance measurements. The proposal was to evaluate each lab's competence in software code analysis activities using two quantitative metrics: code coverage and mutation score. Our results demonstrate the feasibility of establishing quantitative comparisons among accredited software analysis and testing laboratories. One of the comparison rounds was registered as a formal proficiency testing scheme in the database, the first registered proficiency testing focused on code analysis. MDPI 2021-05-24 /pmc/articles/PMC8197301/ /pubmed/34074060 http://dx.doi.org/10.3390/s21113660 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Chapetta, Wladmir Araujo; das Neves, Jailton Santos; Machado, Raphael Carlos Santos Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories
title | Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories |
title_full | Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories |
title_fullStr | Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories |
title_full_unstemmed | Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories |
title_short | Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories |
title_sort | quantitative metrics for performance monitoring of software code analysis accredited testing laboratories |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8197301/ https://www.ncbi.nlm.nih.gov/pubmed/34074060 http://dx.doi.org/10.3390/s21113660 |
work_keys_str_mv | AT chapettawladmiraraujo quantitativemetricsforperformancemonitoringofsoftwarecodeanalysisaccreditedtestinglaboratories AT dasnevesjailtonsantos quantitativemetricsforperformancemonitoringofsoftwarecodeanalysisaccreditedtestinglaboratories AT machadoraphaelcarlossantos quantitativemetricsforperformancemonitoringofsoftwarecodeanalysisaccreditedtestinglaboratories |