
When Similarity Beats Expertise—Differential Effects of Patient and Expert Ratings on Physician Choice: Field and Experimental Study


Bibliographic Details
Main Authors: Kranzbühler, Anne-Madeleine, Kleijnen, Mirella H P, Verlegh, Peeter W J, Teerling, Marije
Format: Online Article Text
Language: English
Published: JMIR Publications 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6617917/
https://www.ncbi.nlm.nih.gov/pubmed/31244481
http://dx.doi.org/10.2196/12454
Description
Summary:

BACKGROUND: Increasing numbers of patients consult Web-based rating platforms before making health care decisions. These platforms often provide ratings from other patients, reflecting their subjective experience. However, patients often lack the knowledge needed to judge the objective quality of health services. To account for this potential bias, many rating platforms complement patient ratings with more objective expert ratings, which can lead to conflicting signals, as these two types of evaluation are not always aligned.

OBJECTIVE: This study aimed to fill the gap in our understanding of how consumers combine information from 2 different sources—patients or experts—to form opinions and make purchase decisions in a health care context. More specifically, we assessed prospective patients’ decision making when considering both types of ratings simultaneously on a Web-based rating platform. In addition, we examined how the influence of patient and expert ratings is conditional upon rating volume (ie, the number of patient opinions).

METHODS: In a field study, we analyzed a dataset from a Web-based physician rating platform containing clickstream data for more than 5000 US doctors. We complemented this with an experimental lab study using a sample of 112 students from a Dutch university. The average age was 23.1 years, and 60.7% (68/112) of the respondents were female.

RESULTS: The field data illustrated the moderating effect of rating volume. If the patient advice was based on a small number of opinions, prospective patients tended to base their selection of a physician on expert rather than patient advice (profile clicks beta=.14, P<.001; call clicks beta=.28, P=.03). However, when the group of patients grew substantially in size, prospective patients started to rely on other patients rather than the expert (profile clicks beta=.23, SE=0.07, P=.004; call clicks beta=.43, SE=0.32, P=.10). The experimental study replicated and validated these findings for conflicting patient versus expert advice in a controlled setting. When patient ratings were aggregated from a high number of opinions, prospective patients’ evaluations were affected more strongly by patient than expert advice (mean(patient positive/expert negative)=3.06, SD=0.94; mean(expert positive/patient negative)=2.55, SD=0.89; F(1,108)=4.93, P=.03). Conversely, when patient ratings were aggregated from a low volume, participants were affected more strongly by expert than patient advice (mean(patient positive/expert negative)=2.36, SD=0.76; mean(expert positive/patient negative)=3.01, SD=0.81; F(1,108)=8.42, P=.004). This effect occurred despite participants considering the patients to be less knowledgeable than the experts.

CONCLUSIONS: When confronted with information from both sources simultaneously, prospective patients are influenced more strongly by other patients. This effect reverses when the patient rating has been aggregated from a (very) small number of individual opinions. These findings have important implications for how health care provider ratings should be presented to prospective patients to aid their decision-making process.
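The "moderating effect of rating volume" reported in the results corresponds, statistically, to an interaction term in a regression model: the weight of patient advice depends on how many opinions it aggregates. The sketch below is purely illustrative and uses synthetic data with made-up coefficients (`0.3`, `0.4`, and the noise scale are assumptions, not the study's estimates); it shows only the general technique of estimating a moderation effect via an interaction term, not the authors' actual analysis.

```python
import numpy as np

# Hypothetical sketch of a moderated regression: rating volume changes
# the relative weight of patient vs. expert advice. All coefficients
# and data here are synthetic assumptions for illustration only.
rng = np.random.default_rng(0)
n = 500
patient = rng.normal(size=n)    # standardized patient rating
expert = rng.normal(size=n)     # standardized expert rating
volume = rng.uniform(0, 1, n)   # 0 = few patient opinions, 1 = many

# Synthetic outcome: expert advice matters at low volume,
# patient advice takes over as volume grows.
clicks = (0.3 * expert * (1 - volume)
          + 0.4 * patient * volume
          + rng.normal(scale=0.1, size=n))

# Ordinary least squares with interaction (moderation) terms.
X = np.column_stack([np.ones(n), patient, expert, volume,
                     patient * volume, expert * volume])
beta, *_ = np.linalg.lstsq(X, clicks, rcond=None)

# A positive patient*volume coefficient captures the moderation:
# patient ratings gain influence as the number of opinions increases,
# while expert ratings lose influence (negative expert*volume term).
names = ["const", "patient", "expert", "volume",
         "patient_x_volume", "expert_x_volume"]
print(dict(zip(names, beta.round(2))))
```

The sign pattern of the two interaction coefficients (positive for patient × volume, negative for expert × volume) is the regression analogue of the crossover the abstract describes: expert advice dominates at low volume, patient advice at high volume.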