Deep learning based object detection and surrounding environment description for visually impaired people

Bibliographic Details
Main Authors: Islam, Raihan Bin; Akhter, Samiha; Iqbal, Faria; Saif Ur Rahman, Md.; Khan, Riasat
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10360957/
https://www.ncbi.nlm.nih.gov/pubmed/37484219
http://dx.doi.org/10.1016/j.heliyon.2023.e16924
Description
Summary: Object detection, one of the most significant contributions of computer vision and machine learning, plays an immense role in identifying and locating objects in an image or a video. Through object detection, distinct objects can be recognized and their attributes, such as size, shape, and location, extracted precisely. This paper presents a low-cost assistive system for obstacle detection and surrounding environment description to help blind people, built with deep learning techniques. The TensorFlow Object Detection API and SSDLite MobileNetV2 have been used to create the proposed object detection model. The pre-trained SSDLite MobileNetV2 model is trained on the COCO dataset, which contains almost 328,000 images of 90 different object classes. A gradient particle swarm optimization (PSO) technique is used in this work to optimize the final layers of the MobileNetV2 model and their corresponding hyperparameters. Next, the Google text-to-speech module, PyAudio, playsound, and speech recognition are used to generate audio feedback for the detected objects. A Raspberry Pi camera captures real-time video, and object detection is performed frame by frame on a Raspberry Pi 4B single-board computer. The proposed device is integrated into a head cap, which helps visually impaired people detect obstacles in their path more efficiently than a traditional white cane. Apart from this detection model, a secondary computer vision model, named "ambiance mode," was trained. In this mode, the last three convolutional layers of SSDLite MobileNetV2 are trained through transfer learning on a weather dataset comprising around 500 images from four classes: cloudy, rainy, foggy, and sunrise. The proposed system then narrates the surrounding scene elaborately, much as a person might describe a landscape or a beautiful sunset to a visually impaired listener. The object detection and ambiance description modes are tested and evaluated on a desktop computer and a Raspberry Pi embedded system. Detection accuracy, mean average precision (mAP), frame rate, confusion matrices, and ROC curves are used to measure the model's performance on both setups. The proposed low-cost system is expected to help visually impaired people in their day-to-day lives.
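
As an illustration of the detection pipeline the summary describes, the following Python sketch runs a COCO-pretrained SSD MobileNet V2 detector frame by frame on a camera feed. The TensorFlow Hub model handle, the OpenCV capture source, the bounded frame loop, and the 0.5 confidence threshold are assumptions for illustration, not the authors' exact SSDLite setup:

import cv2
import tensorflow as tf
import tensorflow_hub as hub

# Load a COCO-pretrained SSD MobileNet V2 detector from TensorFlow Hub
# (a stand-in for the paper's SSDLite MobileNetV2 model).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as /dev/video0
for _ in range(100):  # bounded loop for this sketch
    ok, frame = cap.read()
    if not ok:
        break
    # The detector expects a uint8 batch of shape [1, H, W, 3] in RGB order.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector(tf.expand_dims(rgb, axis=0))
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy().astype(int)
    # Keep detections above the confidence threshold; mapping class ids
    # to names requires the COCO label map, omitted here.
    print("Detected COCO class ids:", classes[scores > 0.5])
cap.release()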
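
The gradient PSO step can be pictured with a standard particle swarm loop searching over two hyperparameters, here learning rate and dropout. The swarm size, inertia and acceleration coefficients, and the placeholder objective are all assumptions; the paper's gradient-augmented PSO variant is not reproduced:

import numpy as np

def objective(params):
    # Placeholder objective: in practice, retrain the final layers with
    # these hyperparameters and return the validation loss.
    lr, dropout = params
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.3) ** 2

rng = np.random.default_rng(0)
n_particles, dims = 10, 2
low = np.array([1e-5, 0.0])   # lower bounds: learning rate, dropout
high = np.array([1e-1, 0.9])  # upper bounds
pos = rng.uniform(low, high, size=(n_particles, dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(30):
    r1 = rng.random((n_particles, dims))
    r2 = rng.random((n_particles, dims))
    # Classic velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("Best (learning rate, dropout):", gbest)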
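
The audio-feedback step turns detected labels into speech. Below is a minimal sketch using the gTTS and playsound libraries named in the summary; the announce() helper and its example labels are hypothetical:

from gtts import gTTS
from playsound import playsound

def announce(labels):
    # Convert detected object names into a spoken sentence and play it.
    if not labels:
        return
    sentence = "I can see " + ", ".join(labels) + " ahead."
    gTTS(text=sentence, lang="en").save("feedback.mp3")
    playsound("feedback.mp3")

announce(["person", "chair"])  # e.g., labels decoded from detector output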
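
Finally, the ambiance mode's transfer-learning setup can be sketched as a Keras MobileNetV2 backbone frozen except for its last layers, with a four-way softmax head for the cloudy, rainy, foggy, and sunrise classes. The input size, optimizer, and exact layer indexing are assumptions for illustration:

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
for layer in base.layers[:-3]:  # freeze everything but the last three layers
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # four weather classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(weather_images, weather_labels, epochs=10)  # ~500-image dataset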