Transformer-Based Fire Detection in Videos

Bibliographic Details

Main Authors: Mardani, Konstantina; Vretos, Nicholas; Daras, Petros
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10051572/
https://www.ncbi.nlm.nih.gov/pubmed/36991746
http://dx.doi.org/10.3390/s23063035

Description
Summary: Fire detection in videos is a valuable feature in surveillance systems, as it can prevent hazardous situations. A model that is both accurate and fast is necessary to address this task effectively. In this work, a transformer-based network for the detection of fire in videos is proposed. It is an encoder–decoder architecture that consumes the current frame under examination in order to compute attention scores. These scores denote which parts of the input frame are most relevant for the expected fire detection output. As the experimental results show, the model can recognize fire in video frames and specify its exact location in the image plane in real time, in the form of a segmentation mask. The proposed methodology has been trained and evaluated on two computer vision tasks: full-frame classification (fire/no fire in a frame) and fire localization. Compared with state-of-the-art models, the proposed method achieves outstanding results in both tasks, with [Formula: see text] accuracy, [Formula: see text] fps processing time, and a [Formula: see text] false positive rate for fire localization, and [Formula: see text] for the f-score and recall metrics in the full-frame classification task.
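The abstract describes attention scores that indicate which regions of the input frame matter for fire detection, and a segmentation mask derived from them. The following is a minimal illustrative sketch of that general idea, not the authors' actual model: it splits a frame into patches, computes single-head scaled dot-product attention over the patch tokens with random (untrained) projections standing in for learned weights, and upsamples the per-patch relevance into a coarse per-pixel map. All function and variable names here are hypothetical.

```python
import numpy as np

def fire_attention_sketch(frame, patch=8, d=16, seed=0):
    """Illustrative single-head attention over frame patches.

    Random projections stand in for the learned Q/K weights of the
    paper's encoder-decoder; only the attention mechanism itself is real.
    """
    rng = np.random.default_rng(seed)
    H, W, C = frame.shape
    # Cut the frame into non-overlapping patches and flatten each
    # patch into one token embedding of length patch*patch*C.
    tokens = frame[:H - H % patch, :W - W % patch].reshape(
        H // patch, patch, W // patch, patch, C
    ).transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    # Hypothetical (untrained) query/key projections.
    Wq = rng.normal(size=(tokens.shape[1], d))
    Wk = rng.normal(size=(tokens.shape[1], d))
    q, k = tokens @ Wq, tokens @ Wk
    # Scaled dot-product attention scores, softmax over keys.
    scores = (q @ k.T) / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    # Per-patch relevance: average attention each patch receives.
    relevance = attn.mean(axis=0).reshape(H // patch, W // patch)
    # Upsample to a coarse per-pixel relevance map (nearest neighbor).
    return np.kron(relevance, np.ones((patch, patch)))

mask = fire_attention_sketch(np.random.rand(64, 64, 3))
print(mask.shape)  # (64, 64)
```

In the actual paper the projections are learned end to end and the decoder produces a proper segmentation mask; this sketch only shows how attention scores over patches translate into a spatial relevance map.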