Applying sorting algorithms to sensory ranking tests – A proof of concept study

Bibliographic Details

Main Authors: Ekman, Markus, Olsson, Asa Amanda, Andersson, Kent, Jonsson, Amanda, Stelick, Alina, Dando, Robin

Format: Online Article Text

Language: English

Published: Elsevier 2019

Subjects:

Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7473370/
https://www.ncbi.nlm.nih.gov/pubmed/32914110
http://dx.doi.org/10.1016/j.crfs.2019.12.002
Description
Summary: In a sensory or consumer setting, panelists are commonly asked to rank a set of stimuli, either by the panelist's liking of the samples or by the samples' perceived intensity of a particular sensory note. Ranking is seen as a “simple” task for panelists, and thus is usually performed with minimal (or no) specific instructions given to panelists. Despite its common usage, seemingly little is known about the specific cognitive task that panelists are performing when ranking samples. Exhaustively presenting paired comparisons between samples quickly becomes unwieldy, with 45 individual paired comparisons needed to rank 10 samples. Comparing a number of elements with regard to a scaled value is common in computer science, with a number of differing sorting algorithms used to sort arrays of numerical elements. We compared the efficacy of the most basic sorting algorithm, Bubble Sort (based on comparing each element to its neighbor, moving the higher to the right, and repeating), versus a more advanced algorithm, Merge Sort (based on dividing the array into sub-arrays, sorting these sub-arrays, and then combining them), in a sensory ranking task of 6 ascending concentrations of sucrose (n = 73 panelists). Results confirm that, as seen in computer science, a Merge Sort procedure performs better than Bubble Sort in sensory ranking tasks, although the perceived difficulty of the approach suggests panelists would benefit from a longer period of training. Lastly, through a series of video-recorded one-on-one interviews and an additional sensory ranking test (n = 78), it appears that most panelists natively follow a procedure similar to Bubble Sorting when asked to rank without instructions, with correspondingly inferior results to those that might be obtained if a Merge Sorting procedure were applied. Results suggest that ranking may be improved if panelists were given a simple set of instructions on the Merge Sorting procedure.
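
To make the comparison-count argument concrete, the following is a minimal illustrative sketch (assumed Python, not taken from the article) that counts the pairwise judgments a panelist would make under a Bubble Sort procedure versus a Merge Sort procedure; the six-element list mirrors the study's six sucrose concentrations, and the exhaustive-pairs figure of n(n-1)/2 (45 for 10 samples) is included for reference.

# Illustrative sketch only: count pairwise judgments under each ranking approach.

def exhaustive_pairs(n):
    # Every sample against every other sample: n(n-1)/2 comparisons, e.g. 45 for n = 10.
    return n * (n - 1) // 2

def bubble_sort(items):
    # Compare each element to its neighbor, move the higher to the right,
    # and repeat passes until a full pass produces no swaps.
    a, comparisons, swapped = list(items), 0, True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a, comparisons

def merge_sort(items):
    # Divide the list into halves, sort each half, then merge the sorted halves.
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, c_left = merge_sort(items[:mid])
    right, c_right = merge_sort(items[mid:])
    merged, comparisons, i, j = [], c_left + c_right, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

if __name__ == "__main__":
    import random
    samples = list(range(6))   # stand-in for six ascending sucrose concentrations
    random.shuffle(samples)    # presented to the panelist in random order
    print("exhaustive pairs for 10 samples:", exhaustive_pairs(10))   # 45
    print("Bubble Sort comparisons:", bubble_sort(samples)[1])
    print("Merge Sort comparisons:", merge_sort(samples)[1])

On shuffled or reverse-ordered inputs, Bubble Sort revisits many neighbor pairs and its comparison count grows roughly quadratically, whereas Merge Sort's count grows roughly as n·log n; this disparity is the computer-science motivation the abstract carries over to sensory ranking.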