
Content validity of the newly developed risk assessment tool for religious mass gathering events in an Indian setting (Mass Gathering Risk Assessment Tool-MGRAT)


Bibliographic Details
Main Authors: Sharma, Upasana; Desikachari, BR; Sarma, Sankara
Format: Online Article Text
Language: English
Published: Wolters Kluwer - Medknow, 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6691416/
https://www.ncbi.nlm.nih.gov/pubmed/31463231
http://dx.doi.org/10.4103/jfmpc.jfmpc_380_19
Description
Summary:

BACKGROUND: Risk assessment (RA) for mass gathering events is crucial to identify potential health hazards. It aids in planning and response activities specific to the event but is often overlooked by event organizers. This paper reports the content validation process of a newly developed tool called the Mass Gathering Risk Assessment Tool (MGRAT), which is intended to assess the risks associated with religious mass gathering events in Indian settings.

METHODS: A qualitative approach was followed to identify the risks associated with mass gathering events and the domains and items to be included in the RA tool. The draft tool was shared with six experts selected by convenience sampling; the experts were asked to assess the tool and comment on the domains, items, response options, and overall presentation using a content validity questionnaire. The content validity index and Fleiss' kappa statistic were calculated to assess agreement among the multiple raters.

RESULTS: The agreement proportion expressed as the scale-level content validity index (S-CVI) calculated by the averaging method was 0.92; the S-CVI calculated by universal agreement was 0.78. Fleiss' kappa, which measures agreement among multiple experts after adjusting for chance agreement, was 0.522 (95% CI: 0.417-0.628, P = 0.001).

CONCLUSION: MGRAT is a valid tool with an appropriate level of content validity. As the number of raters increases, consensus across all items becomes harder to achieve, which is why the S-CVI computed by universal agreement (S-CVI/UA) is lower than the S-CVI computed by averaging (S-CVI/Ave). Fleiss' kappa also indicated moderate agreement among the raters beyond chance, further supporting the content validity of MGRAT.
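To make the reported indices concrete, the sketch below shows how item-level CVI (I-CVI), S-CVI/Ave, S-CVI/UA, and Fleiss' kappa are conventionally computed from an experts-by-items rating matrix. This is not the authors' code: the ratings matrix is hypothetical, and the relevance threshold (scores of 3 or 4 on a 4-point scale) and the dichotomization of ratings before computing kappa are common conventions assumed here, not details confirmed by the abstract.

```python
import numpy as np

# Hypothetical data: ratings[i, r] = relevance score (1-4) given by rater r to item i.
# These toy values will not reproduce the figures reported in the paper.
ratings = np.array([
    [4, 4, 3, 4, 4, 3],
    [2, 3, 2, 1, 2, 2],
    [4, 2, 4, 3, 4, 4],
    [4, 4, 4, 4, 4, 4],
    [3, 3, 4, 2, 3, 3],
])

relevant = (ratings >= 3).astype(int)     # scores of 3 or 4 count as "relevant"
i_cvi = relevant.mean(axis=1)             # I-CVI: proportion of raters rating each item relevant
s_cvi_ave = i_cvi.mean()                  # S-CVI/Ave: average of the item-level CVIs
s_cvi_ua = (i_cvi == 1.0).mean()          # S-CVI/UA: share of items rated relevant by all raters

def fleiss_kappa(counts):
    """Fleiss' kappa from an items x categories table of rating counts."""
    n = counts.sum(axis=1)[0]                               # raters per item (assumed constant)
    N = counts.shape[0]                                     # number of items
    p_j = counts.sum(axis=0) / (N * n)                      # category proportions over all ratings
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))   # per-item observed agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)                 # mean observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Dichotomize to not-relevant / relevant counts per item before computing kappa.
counts = np.stack([(relevant == 0).sum(axis=1), (relevant == 1).sum(axis=1)], axis=1)
print(f"S-CVI/Ave={s_cvi_ave:.2f}  S-CVI/UA={s_cvi_ua:.2f}  kappa={fleiss_kappa(counts):.3f}")
```

The sketch also illustrates the pattern described in the conclusion: S-CVI/UA requires unanimous agreement on an item, so with more raters (or any single dissenting rating) it drops below S-CVI/Ave, while kappa discounts the agreement expected by chance alone.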