
Long short term memory deep net performance on fused Planet-Scope and Sentinel-2 imagery for detection of agricultural crop


Bibliographic Details
Main Authors: Rehman, Touseef Ur, Alam, Maaz, Minallah, Nasru, Khan, Waleed, Frnda, Jaroslav, Mushtaq, Shawal, Ajmal, Muhammad
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9897520/
https://www.ncbi.nlm.nih.gov/pubmed/36735648
http://dx.doi.org/10.1371/journal.pone.0271897
Description
Summary: In view of the challenges faced by organizations and departments concerned with agricultural capacity observations, we collected in-situ data covering diverse crops (more than 11 consumable vegetation types) in our pilot region of Harichand Charsadda, Khyber Pakhtunkhwa (KP), Pakistan. Our proposed Long Short-Term Memory based deep neural network model was trained to generate land cover/land use statistics from the acquired ground-truth data, exploiting the synergy between Planet-Scope Dove and the European Space Agency's Sentinel-2. A total of 4 bands from both Sentinel-2 and Planet-Scope, namely Red, Green, Near-Infrared (NIR), and the Normalised Difference Vegetation Index (NDVI), were used for classification. Using a short temporal frame of Sentinel-2 comprising 5 date images, we propose a realistic and implementable procedure for generating accurate crop statistics from remote sensing. Our self-collected dataset consists of 107,899 pixels, which was split 70%/30% for training and testing of the model, respectively. The data were collected as field parcels, which were further partitioned into training, validation, and test sets to avoid spatial auto-correlation. To ensure quality and accuracy, 15% of the training data was held out for validation and 15% for testing. Prediction was also performed with the trained model, and visual analysis of the classified image showed significant results. Furthermore, the Sentinel-2 time series was evaluated separately from the fused Planet-Scope and Sentinel-2 time-series dataset. The results achieved show a weighted average of 93% for the Sentinel-2 time series and 97% for the fused Planet-Scope and Sentinel-2 time series.
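
For readers who want a concrete picture of the setup the summary describes, the sketch below shows how a per-pixel LSTM classifier over a 5-step, 4-feature (Red, Green, NIR, NDVI) time series could be assembled in Keras. The layer sizes, optimizer, epoch count, and the exact number of output classes are illustrative assumptions rather than values taken from the paper, and the random arrays merely stand in for the authors' 107,899 labelled pixels.

```python
# Minimal sketch of a per-pixel LSTM crop classifier, assuming a 5-date,
# 4-band-per-date input as described in the abstract. Hyperparameters are
# illustrative, not the authors' published configuration.
import numpy as np
import tensorflow as tf

N_TIMESTEPS = 5   # e.g. the 5-date Sentinel-2 stack mentioned in the abstract
N_FEATURES = 4    # Red, Green, NIR, NDVI at each time step
N_CLASSES = 11    # "more than 11 consumable vegetation types"; 11 assumed here

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

# Dummy data standing in for the labelled pixels (random values here).
n_pixels = 1024
x = np.random.rand(n_pixels, N_TIMESTEPS, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=n_pixels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_TIMESTEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),                                # assumed hidden size
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),  # per-pixel class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 70/30 train/test split as in the abstract; a further 15% of the training
# portion is held out for validation during fitting.
split = int(0.7 * n_pixels)
model.fit(x[:split], y[:split], validation_split=0.15, epochs=5, batch_size=64)
model.evaluate(x[split:], y[split:])
```

In practice the train/validation/test partition would be done at the field-parcel level, as the abstract notes, so that pixels from the same parcel do not leak across splits and inflate accuracy through spatial auto-correlation.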