Crowdsourcing experiment and fully convolutional neural networks for coastal remote sensing of seagrass and macro-algae

Hobley, Brandon, Mackiewicz, Michal, Bremner, Julie, Dolphin, Tony and Arosio, Riccardo (2023) Crowdsourcing experiment and fully convolutional neural networks for coastal remote sensing of seagrass and macro-algae. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16. pp. 8734-8746. ISSN 2151-1535

PDF (Crowdsourcing_experiment_and_fully_convolutional_neural_networks_for_coastal_remote_sensing_of_seagrass_and_macro-algae) - Accepted Version
Available under License Creative Commons Attribution.



Recently, convolutional neural networks and fully convolutional neural networks (FCNs) have been successfully used for monitoring coastal marine ecosystems, in particular vegetation. However, even with recent advances in computational modeling and data acquisition, deep learning models require substantial amounts of good-quality reference data to effectively self-learn internal representations of input imagery. The classical approach for coastal mapping requires experts to transcribe in situ records and delineate polygons from high-resolution imagery so that FCNs can self-learn. However, labeling by a single individual limits the training data, whereas crowdsourcing labels can increase the volume of training data but may compromise label quality and consistency. In this article, we assessed the reliability of crowdsourced labels on a complex multiclass problem domain for estuarine vegetation and unvegetated sediment. An interobserver variability experiment was conducted to assess the statistical differences in crowdsourced annotations for plant species and sediment. The participants were grouped based on their discipline and level of expertise, and the statistical differences were evaluated using Cochran's Q-test and the annotation accuracy of each group to determine observation biases. Given the crowdsourced labels, FCNs were trained with majority-vote annotations from each group to check whether observation biases were propagated to FCN performance. Two scenarios were examined: first, a direct comparison was made between FCNs trained with transcribed in situ labels and those trained with crowdsourced labels from each group. Then, transcribed in situ labels were supplemented with crowdsourced labels to investigate the feasibility of training FCNs with crowdsourced labels in coastal mapping applications.
We show that annotations sourced from discipline experts (ecologists and geomorphologists) familiar with the study site were more accurate than those from experts with no prior knowledge of the site and from nonexperts, with our results confirming that biases in participant annotation were propagated to FCN performance. Furthermore, FCNs trained with a combined dataset of in situ and crowdsourced labels performed better than FCNs trained on the same imagery with in situ labels alone.
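The two statistical ingredients named in the abstract — per-pixel majority voting over crowdsourced label maps, and Cochran's Q-test for differences between annotators on binary (correct/incorrect) outcomes — can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function names and toy data are assumptions, and only NumPy and SciPy are used.

```python
import numpy as np
from scipy.stats import chi2


def majority_vote(label_maps):
    """Per-pixel majority vote over annotator label maps.

    label_maps: array-like of shape (n_annotators, H, W) with integer
    class indices. Ties are broken in favour of the lowest class index
    (a side effect of argmax), which is one simple convention.
    """
    stack = np.asarray(label_maps)
    n_classes = stack.max() + 1
    # Count, for each class, how many annotators chose it at each pixel.
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)


def cochrans_q(x):
    """Cochran's Q-test for k related binary samples.

    x: (n_items, k_annotators) binary matrix, e.g. 1 = annotation
    matches the reference label. Returns (Q, p); under H0 (no
    difference between annotators) Q ~ chi-squared with k-1 df.
    Degenerate inputs (all rows identical) make the denominator zero.
    """
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)          # per-annotator success totals
    row = x.sum(axis=1)          # per-item success totals
    total = x.sum()
    q = k * (k - 1) * ((col - total / k) ** 2).sum() / (k * total - (row ** 2).sum())
    p = chi2.sf(q, k - 1)
    return q, p


# Toy example: three annotators labelling a 2x2 patch with classes
# 0 (sediment), 1 (seagrass), 2 (macro-algae).
votes = [[[0, 1], [1, 2]],
         [[0, 1], [2, 2]],
         [[0, 0], [1, 2]]]
consensus = majority_vote(votes)
```

In the study this kind of consensus map, computed per participant group, is what the FCNs were trained on; the Q-test plays the complementary role of flagging whether annotator groups differ systematically before their labels are pooled.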

Item Type: Article
Additional Information: Funding Information: This work was supported in part by Cefas, in part by the EA, and in part by the Natural Environmental Research Council through an Industrial CASE studentship under Grant NE/R007888/1.
Uncontrolled Keywords: annotations, biological system modeling, convolutional neural network (CNN), crowdsourcing, deep learning (DL), image resolution, multispectral, remote sensing, sea measurements, sediments, training, computers in earth sciences, atmospheric science
Faculty \ School: Faculty of Science > School of Computing Sciences
UEA Research Groups: Faculty of Science > Research Groups > Colour and Imaging Lab
Faculty of Science > Research Groups > Collaborative Centre for Sustainable Use of the Seas
Related URLs:
Depositing User: LivePure Connector
Date Deposited: 19 Sep 2023 12:30
Last Modified: 01 Nov 2023 03:29
DOI: 10.1109/JSTARS.2023.3312820
