
On the accuracy of code complexity metrics: A neuroscience-based guideline for improvement

Complexity is the key element of software quality. This article investigates the problem of measuring code complexity and discusses the results of a controlled experiment to compare different views and methods to measure code complexity. Participants (27 programmers) were asked to read and (try to) understand a set of programs, while the complexity of such programs is assessed through different methods and perspectives: (a) classic code complexity metrics such as McCabe and Halstead metrics, (b) cognitive complexity metrics based on scored code constructs, (c) cognitive complexity metrics from state-of-the-art tools such as SonarQube, (d) human-centered metrics relying on the direct assessment of programmers’ behavioral features (e.g., reading time, and revisits) using eye tracking, and (e) cognitive load/mental effort assessed using electroencephalography (EEG). The human-centered perspective was complemented by the subjective evaluation of participants on the mental effort required to understand the programs using the NASA Task Load Index (TLX). Additionally, the evaluation of the code complexity is measured at both the program level and, whenever possible, at the very low level of code constructs/code regions, to identify the actual code elements and the code context that may trigger a complexity surge in the programmers’ perception of code comprehension difficulty. The programmers’ cognitive load measured using EEG was used as a reference to evaluate how the different metrics can express the (human) difficulty in comprehending the code. Extensive experimental results show that popular metrics such as V(g) and the complexity metric from SonarSource tools deviate considerably from the programmers’ perception of code complexity and often do not show the expected monotonic behavior. The article summarizes the findings in a set of guidelines to improve existing code complexity metrics, particularly state-of-the-art metrics such as cognitive complexity from SonarSource tools.
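
To make the contrast in the abstract concrete: McCabe's V(g) counts decision points regardless of where they sit, while cognitive-complexity metrics in the SonarSource style add a penalty for nesting. The sketch below is a minimal Python illustration, not code from the article; the V(g) approximation and the simplified nesting rule are assumptions made here for demonstration only.

    import ast
    import textwrap

    def mccabe_v_g(source: str) -> int:
        # Rough V(g): one plus the number of decision points
        # (if/for/while/except handlers, boolean operators, ternaries).
        branch = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)
        tree = ast.parse(textwrap.dedent(source))
        return 1 + sum(isinstance(n, branch) for n in ast.walk(tree))

    def cognitive_complexity(source: str) -> int:
        # Simplified SonarSource-style score: +1 per control structure,
        # plus +1 for each level of nesting the structure sits in.
        tree = ast.parse(textwrap.dedent(source))
        total = 0

        def visit(node, nesting):
            nonlocal total
            for child in ast.iter_child_nodes(node):
                if isinstance(child, (ast.If, ast.For, ast.While)):
                    total += 1 + nesting      # structure + nesting penalty
                    visit(child, nesting + 1)
                else:
                    visit(child, nesting)

        visit(tree, 0)
        return total

    flat = """
    if a: x = 1
    if b: x = 2
    if c: x = 3
    """

    nested = """
    if a:
        if b:
            if c:
                x = 3
    """

    print(mccabe_v_g(flat), cognitive_complexity(flat))      # 4 3
    print(mccabe_v_g(nested), cognitive_complexity(nested))  # 4 6

Both fragments score V(g) = 4, but the nesting-aware score separates them (3 vs. 6); it is exactly this kind of divergence between count-based metrics and perceived comprehension difficulty that the study probes with EEG and eye tracking.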

Bibliographic Details
Main Authors: Hao, Gao, Hijazi, Haytham, Durães, João, Medeiros, Júlio, Couceiro, Ricardo, Lam, Chan Tong, Teixeira, César, Castelhano, João, Castelo Branco, Miguel, Carvalho, Paulo, Madeira, Henrique
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9942489/
https://www.ncbi.nlm.nih.gov/pubmed/36825214
http://dx.doi.org/10.3389/fnins.2022.1065366
Collection: PubMed
Record ID: pubmed-9942489
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Neurosci (Neuroscience)
Published Online: 2023-02-07
Copyright © 2023 Hao, Hijazi, Durães, Medeiros, Couceiro, Lam, Teixeira, Castelhano, Castelo Branco, Carvalho and Madeira. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.