Suspicious activity recognition for monitoring cheating in exams
Main Author: | |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Indian National Science Academy, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8866922/ http://dx.doi.org/10.1007/s43538-022-00069-2 |
Summary: | Video processing is receiving special attention from research and industry. The availability of smart surveillance cameras with high processing power has opened the door, making it conceivable to design intelligent visual surveillance systems. Nowadays it is quite feasible to assure invigilators' safety during the examination period. This work aims to distinguish suspicious activities of students during exams for the surveillance of examination halls. For this, a 63-layer deep CNN model is proposed and named "L4-BranchedActionNet". The proposed CNN structure is based on a modification of VGG-16 with four added branches. The developed framework is first turned into a pre-trained model by training it with a SoftMax function on the CUI-EXAM dataset. The dataset for detecting suspicious activity is then passed to this pre-trained model for feature extraction. Feature subset optimization is applied to the obtained deep features: the extracted features are first entropy coded, and an ant colony system (ACS) is then used to optimize the entropy-based coded features. The selected features are then fed into a variety of SVM- and KNN-based classification models. With an accuracy of 0.9299, the cubic SVM achieves the best performance. The proposed model was further tested on the CIFAR-100 dataset, where it achieved an accuracy of 0.89796. The results indicate the soundness of the proposed framework. |
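
The abstract describes a three-stage pipeline: deep feature extraction with a branched VGG-16 variant, entropy-based feature selection refined by an ant colony system, and classification with SVM/KNN models. The sketch below illustrates that flow under stated assumptions only: a plain VGG-16 backbone stands in for the paper's "L4-BranchedActionNet", a simple per-feature Shannon-entropy ranking stands in for the entropy coding and ACS optimization, and a degree-3 polynomial SVM plays the role of the cubic SVM. All function names are hypothetical and not taken from the paper.

```python
# Minimal sketch of the described pipeline (stand-ins, not the authors' code):
# deep features -> entropy-based feature selection -> cubic SVM classification.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_features(images):
    """Deep features from a VGG-16 backbone (stand-in for L4-BranchedActionNet)."""
    backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)


def entropy_scores(features, bins=32):
    """Per-feature Shannon entropy, used here as a simple relevance score."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        scores.append(-(p * np.log2(p)).sum())
    return np.array(scores)


def run_pipeline(images, labels, keep=256):
    """images: (N, H, W, 3) frames; labels: (N,) activity classes."""
    feats = extract_features(images)                       # deep feature extraction
    top = np.argsort(entropy_scores(feats))[::-1][:keep]   # entropy-based selection
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats[:, top], labels, test_size=0.2, random_state=0, stratify=labels)
    clf = SVC(kernel="poly", degree=3)                     # "cubic" SVM classifier
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

In the paper, the selection step is driven by an ant colony system rather than a fixed top-k entropy ranking, and the backbone is the 63-layer branched network trained on CUI-EXAM; the sketch only mirrors the overall feature-extraction, selection, and classification structure.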