Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset
To compare the seizure detection performance of three expert humans and two computer algorithms in a large set of epilepsy monitoring unit EEG recordings. METHODS: One hundred twenty prolonged EEGs, 100 containing clinically reported EEG-evident seizures, were evaluated. Seizures were marked by the...
Main authors: | Scheuer, Mark L.; Wilson, Scott B.; Antony, Arun; Ghearing, Gena; Urban, Alexandra; Bagić, Anto I. |
Format: | Online Article Text |
Language: | English |
Published: | Journal of Clinical Neurophysiology, 2021 |
Subjects: | Original Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404956/ https://www.ncbi.nlm.nih.gov/pubmed/32472781 http://dx.doi.org/10.1097/WNP.0000000000000709 |
_version_ | 1783746241252818944 |
author | Scheuer, Mark L.; Wilson, Scott B.; Antony, Arun; Ghearing, Gena; Urban, Alexandra; Bagić, Anto I. |
author_facet | Scheuer, Mark L.; Wilson, Scott B.; Antony, Arun; Ghearing, Gena; Urban, Alexandra; Bagić, Anto I. |
author_sort | Scheuer, Mark L. |
collection | PubMed |
description | PURPOSE: To compare the seizure detection performance of three expert humans and two computer algorithms in a large set of epilepsy monitoring unit EEG recordings. METHODS: One hundred twenty prolonged EEGs, 100 containing clinically reported EEG-evident seizures, were evaluated. Seizures were marked by the experts and algorithms. Pairwise sensitivity and false-positive rates were calculated for each human–human and algorithm–human pair. Differences in human pairwise performance were calculated and compared with the range of algorithm versus human performance differences as a type of statistical modified Turing test. RESULTS: A total of 411 individual seizure events were marked by the experts in 2,805 hours of EEG. Mean pairwise human sensitivities were 84.9%, 73.7%, and 72.5%, with false-positive rates of 1.0, 0.4, and 1.0/day, respectively. Only the Persyst 14 algorithm was comparable with humans (78.2% and 1.0/day). Evaluation of pairwise differences in sensitivity and false-positive rate demonstrated that Persyst 14 met statistical noninferiority criteria compared with the expert humans. CONCLUSIONS: Evaluating typical prolonged EEG recordings, human experts had a modest level of agreement in seizure marking and low false-positive rates. The Persyst 14 algorithm was statistically noninferior to the humans. For the first time, a seizure detection algorithm and human experts performed similarly. |
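The abstract above describes pairwise scoring: for each pair of readers (human–human or algorithm–human), one reader's markings are taken as reference and the other's are scored against them for sensitivity and false positives per day. As a rough illustration of that kind of comparison only, the sketch below computes pairwise sensitivity and false-positive rate for two sets of seizure markings. The interval representation, the any-overlap matching rule, and all names are assumptions made here for illustration, not the study's actual scoring procedure.

```python
# Minimal sketch of a pairwise seizure-marking comparison (illustrative only).
# Assumes each reader's markings are (start_sec, end_sec) intervals and that
# any temporal overlap counts as agreement; the study's matching rules may differ.
from typing import List, Tuple

Interval = Tuple[float, float]

def overlaps(a: Interval, b: Interval) -> bool:
    """True if the two intervals share any time."""
    return a[0] < b[1] and b[0] < a[1]

def pairwise_stats(reference: List[Interval],
                   test: List[Interval],
                   record_hours: float) -> Tuple[float, float]:
    """Sensitivity of `test` against `reference`, and its false positives per day."""
    # A reference seizure counts as detected if any test marking overlaps it.
    detected = sum(any(overlaps(ref, t) for t in test) for ref in reference)
    sensitivity = detected / len(reference) if reference else float("nan")
    # A test marking overlapping no reference seizure counts as a false positive.
    false_pos = sum(not any(overlaps(t, ref) for ref in reference) for t in test)
    fp_per_day = false_pos / (record_hours / 24.0)
    return sensitivity, fp_per_day

# Example: reader B scored against reader A over a 24-hour record.
reader_a = [(100.0, 160.0), (5000.0, 5080.0)]
reader_b = [(110.0, 150.0), (9000.0, 9040.0)]
print(pairwise_stats(reader_a, reader_b, record_hours=24.0))  # -> (0.5, 1.0)
```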
format | Online Article Text |
id | pubmed-8404956 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Journal of Clinical Neurophysiology |
record_format | MEDLINE/PubMed |
spelling | pubmed-8404956 2021-09-03 Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset Scheuer, Mark L. Wilson, Scott B. Antony, Arun Ghearing, Gena Urban, Alexandra Bagić, Anto I. J Clin Neurophysiol Original Research PURPOSE: To compare the seizure detection performance of three expert humans and two computer algorithms in a large set of epilepsy monitoring unit EEG recordings. METHODS: One hundred twenty prolonged EEGs, 100 containing clinically reported EEG-evident seizures, were evaluated. Seizures were marked by the experts and algorithms. Pairwise sensitivity and false-positive rates were calculated for each human–human and algorithm–human pair. Differences in human pairwise performance were calculated and compared with the range of algorithm versus human performance differences as a type of statistical modified Turing test. RESULTS: A total of 411 individual seizure events were marked by the experts in 2,805 hours of EEG. Mean pairwise human sensitivities were 84.9%, 73.7%, and 72.5%, with false-positive rates of 1.0, 0.4, and 1.0/day, respectively. Only the Persyst 14 algorithm was comparable with humans (78.2% and 1.0/day). Evaluation of pairwise differences in sensitivity and false-positive rate demonstrated that Persyst 14 met statistical noninferiority criteria compared with the expert humans. CONCLUSIONS: Evaluating typical prolonged EEG recordings, human experts had a modest level of agreement in seizure marking and low false-positive rates. The Persyst 14 algorithm was statistically noninferior to the humans. For the first time, a seizure detection algorithm and human experts performed similarly. Journal of Clinical Neurophysiology 2021-09 2020-05-25 /pmc/articles/PMC8404956/ /pubmed/32472781 http://dx.doi.org/10.1097/WNP.0000000000000709 Text en Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Clinical Neurophysiology Society. https://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND) (https://creativecommons.org/licenses/by-nc-nd/4.0/), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. |
spellingShingle | Original Research Scheuer, Mark L. Wilson, Scott B. Antony, Arun Ghearing, Gena Urban, Alexandra Bagić, Anto I. Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title | Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title_full | Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title_fullStr | Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title_full_unstemmed | Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title_short | Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset |
title_sort | seizure detection: interreader agreement and detection algorithm assessments using a large dataset |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404956/ https://www.ncbi.nlm.nih.gov/pubmed/32472781 http://dx.doi.org/10.1097/WNP.0000000000000709 |
work_keys_str_mv | AT scheuermarkl seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset AT wilsonscottb seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset AT antonyarun seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset AT ghearinggena seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset AT urbanalexandra seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset AT bagicantoi seizuredetectioninterreaderagreementanddetectionalgorithmassessmentsusingalargedataset |