Poor agreement between the automated risk assessment of a smartphone application for skin cancer detection and the rating by dermatologists
Main Authors:
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2019
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7027514/
https://www.ncbi.nlm.nih.gov/pubmed/31423673
http://dx.doi.org/10.1111/jdv.15873
Summary:
BACKGROUND: Several smartphone applications (apps) with an automated risk assessment claim to be able to detect skin cancer at an early stage. Various studies that have evaluated these apps showed mainly poor performance. However, all studies were conducted in patients, and lesions were mainly selected by a specialist.
OBJECTIVES: To investigate the performance of the automated risk assessment of an app by comparing its assessment to that of a dermatologist for lesions selected by the participants themselves.
METHODS: Participants of a National Skin Cancer Day were enrolled in a multicentre study. Skin lesions indicated by the participants were analysed by the automated risk assessment of the app prior to blinded rating by the dermatologist. The ratings of the automated risk assessment were compared with the assessment and diagnosis of the dermatologist. Owing to the setting of the Skin Cancer Day, lesions were not verified by histopathology.
RESULTS: We included 125 participants (199 lesions). The app was unable to analyse 90 cases (45%), including nine basal cell carcinomas, four atypical naevi and one lentigo maligna. Thirty lesions (67%) rated high risk and 21 (70%) rated medium risk by the app were diagnosed as benign naevi or seborrhoeic keratoses. The interobserver agreement between the ratings of the automated risk assessment and the dermatologist was poor (weighted kappa = 0.02; 95% CI −0.08 to 0.12; P = 0.74).
CONCLUSIONS: The performance of the automated risk assessment was poor. Further investigation of the diagnostic accuracy in real-life situations is needed to provide consumers with reliable information about this healthcare application.