A Deep Neural Network Sensor for Visual Servoing in 3D Spaces
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7085743/
https://www.ncbi.nlm.nih.gov/pubmed/32155733
http://dx.doi.org/10.3390/s20051437
Summary: This paper describes a novel stereo vision sensor, based on deep neural networks, that can be used to produce a feedback signal for visual servoing in unmanned aerial vehicles such as drones. Two deep convolutional neural networks attached to the stereo camera in the drone are trained to detect wind turbines in images, and stereo triangulation is used to calculate the distance from a wind turbine to the drone. Our experimental results show that the sensor produces data accurate enough to be used for servoing, even in the presence of noise generated when the drone is not completely stable. Our results also show that appropriate filtering of the signals is needed and that, to produce correct results, it is very important to keep the wind turbine within the field of vision of both cameras, so that both deep neural networks can detect it.
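The abstract does not spell out the triangulation step, but for a calibrated, rectified stereo pair the distance to the target follows from the horizontal disparity between the two detections. The sketch below is a minimal illustration of that idea, assuming each network returns the horizontal pixel coordinate of the detected turbine's bounding-box centre; the function and parameter names (stereo_distance, focal_length_px, baseline_m, smooth_distance) are hypothetical and not taken from the paper, and the moving average merely stands in for the unspecified filtering the authors mention.

```python
import numpy as np


def stereo_distance(u_left, u_right, focal_length_px, baseline_m):
    """Estimate target distance from horizontal disparity (rectified stereo).

    u_left, u_right: horizontal pixel coordinates of the turbine detection
    in the left and right images. focal_length_px is the focal length in
    pixels (identical cameras assumed); baseline_m is the distance between
    the two camera centres in metres.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        # The turbine must be detected in both views for triangulation;
        # a non-positive disparity means no valid match.
        return None
    return focal_length_px * baseline_m / disparity


def smooth_distance(samples, window=5):
    """Simple moving average to damp noise in successive distance estimates."""
    samples = np.asarray(samples, dtype=float)
    if samples.size < window:
        return float(samples.mean())
    return float(samples[-window:].mean())


# Illustrative usage with made-up numbers: detections 42 px apart,
# 800 px focal length, 0.12 m baseline -> roughly 2.3 m to the target.
d = stereo_distance(512.0, 470.0, focal_length_px=800.0, baseline_m=0.12)
print(d)
```

This also makes the abstract's final point concrete: if either detector loses the turbine, one of the pixel coordinates is unavailable, no disparity can be formed, and the sensor cannot output a valid distance for that frame.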