
Video Anomaly Detection Based on Convolutional Recurrent AutoEncoder

Bibliographic Details
Main Authors: Wang, Bokun; Yang, Caiqian
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9230876/
https://www.ncbi.nlm.nih.gov/pubmed/35746427
http://dx.doi.org/10.3390/s22124647
Description
Summary: As an essential task in computer vision, video anomaly detection is used in video surveillance, scene understanding, road traffic analysis, and other fields. However, the definition of anomaly, scene changes, and complex backgrounds present great challenges for video anomaly detection. The insight that motivates this study is that normal samples, being closer to the training data, can be reconstructed with low error, while anomalies cannot be reconstructed well. In this paper, we propose a Convolutional Recurrent AutoEncoder (CR-AE), which combines an attention-based Convolutional Long Short-Term Memory (ConvLSTM) network with a Convolutional AutoEncoder. The ConvLSTM network and the Convolutional AutoEncoder capture temporal and spatial irregularities, respectively. An attention mechanism is used to obtain the current output features from the hidden state of each ConvLSTM layer, and a convolutional decoder reconstructs the input video clip; testing video clips with higher reconstruction error are judged to be anomalies. The proposed method was evaluated on two popular benchmarks (the UCSD Ped2 and Avenue datasets), where CR-AE achieved frame-level AUCs of 95.6% and 73.1%, respectively.
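The sketch below illustrates the reconstruction-error idea described in the abstract: a convolutional encoder, a ConvLSTM bottleneck over time, and a convolutional decoder, with frames scored by mean squared reconstruction error. It is a minimal PyTorch sketch, not the authors' implementation; the channel sizes, the simplified ConvLSTMCell, and the names ConvRecurrentAE and frame_anomaly_scores are illustrative assumptions, and the paper's attention mechanism over the ConvLSTM hidden states is omitted for brevity.

```python
# Minimal sketch (not the authors' code): convolutional autoencoder with a
# ConvLSTM bottleneck; frames with higher reconstruction error are flagged.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gates computed with convolutions."""
    def __init__(self, in_ch, hidden_ch, kernel=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

class ConvRecurrentAE(nn.Module):
    """Encoder -> ConvLSTM over time -> decoder reconstructing each frame."""
    def __init__(self, in_ch=1, feat_ch=32, hidden_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.rnn = ConvLSTMCell(feat_ch, hidden_ch)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, in_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, clip):               # clip: (B, T, C, H, W)
        B, T, C, H, W = clip.shape
        h = c = None
        recons = []
        for t in range(T):
            z = self.encoder(clip[:, t])   # spatial features of frame t
            if h is None:                  # lazily initialize the recurrent state
                h = torch.zeros(B, self.rnn.hidden_ch, *z.shape[-2:], device=z.device)
                c = torch.zeros_like(h)
            h, c = self.rnn(z, (h, c))     # update temporal state
            recons.append(self.decoder(h)) # reconstruct frame t
        return torch.stack(recons, dim=1)  # (B, T, C, H, W)

def frame_anomaly_scores(model, clip):
    """Per-frame mean squared reconstruction error; higher means more anomalous."""
    with torch.no_grad():
        recon = model(clip)
    return ((recon - clip) ** 2).mean(dim=(2, 3, 4))  # (B, T)

# Usage sketch: train on normal clips with an MSE loss, then threshold the
# per-frame scores on test clips.
clip = torch.rand(1, 8, 1, 64, 64)         # one clip of 8 grayscale 64x64 frames
model = ConvRecurrentAE()
scores = frame_anomaly_scores(model, clip)  # shape (1, 8)
```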