Implicit motion-compensated network for unsupervised video object segmentation

Xi, Lin ORCID: https://orcid.org/0000-0001-6075-5614, Chen, Weihai, Wu, Xingming, Liu, Zhong and Li, Zhengguo (2022) Implicit motion-compensated network for unsupervised video object segmentation. IEEE Transactions on Circuits and Systems for Video Technology, 32 (9). pp. 6279-6292. ISSN 1051-8215

PDF (Implicit_Motion-Compensated_Network_for_Unsupervised_Video_Object_Segmentation) - Accepted Version - Download (8MB)

Abstract

Unsupervised video object segmentation (UVOS) aims at automatically separating the primary foreground object(s) from the background in a video sequence. Existing UVOS methods either lack robustness when there are visually similar surroundings (appearance-based) or suffer from degraded predictions due to dynamic backgrounds and inaccurate flow (flow-based). To overcome these limitations, we propose an implicit motion-compensated network (IMCNet) that combines complementary cues (i.e., appearance and motion) by aligning motion information from adjacent frames to the current frame at the feature level, without estimating optical flow. The proposed IMCNet consists of an affinity computing module (ACM), an attention propagation module (APM), and a motion compensation module (MCM). The light-weight ACM extracts commonality between neighboring input frames based on appearance features. The APM then transmits the global correlation in a top-down manner; through coarse-to-fine iterative refinement, it recovers object regions at multiple resolutions so as to avoid losing details. Finally, the MCM aligns motion information from temporally adjacent frames to the current frame, achieving implicit motion compensation at the feature level. We perform extensive experiments on DAVIS-16 and YouTube-Objects. Our network achieves favorable performance while running at a faster speed than state-of-the-art methods. Our code is available at https://github.com/xilin1991/IMCNet.
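
The following is a minimal, hypothetical PyTorch sketch of the data flow described in the abstract (ACM affinity, APM coarse-to-fine propagation, MCM feature-level alignment). Module names and internals are simplified assumptions inferred from the abstract, not the authors' implementation; see the linked repository for the actual code.

# Hypothetical, simplified sketch of the IMCNet data flow described in the
# abstract (ACM -> APM -> MCM). Names and internals are illustrative only;
# the authors' implementation is at https://github.com/xilin1991/IMCNet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffinityComputing(nn.Module):
    """Light-weight affinity between current- and neighbor-frame features."""
    def forward(self, feat_cur, feat_nbr):
        b, c, h, w = feat_cur.shape
        q = feat_cur.flatten(2)                      # (B, C, HW)
        k = feat_nbr.flatten(2)                      # (B, C, HW)
        affinity = torch.bmm(q.transpose(1, 2), k)   # (B, HW, HW)
        return F.softmax(affinity / c ** 0.5, dim=-1)


class AttentionPropagation(nn.Module):
    """Top-down, coarse-to-fine propagation of a coarse attention map."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

    def forward(self, attn_coarse, feat_fine):
        attn_up = F.interpolate(attn_coarse, size=feat_fine.shape[-2:],
                                mode='bilinear', align_corners=False)
        return self.fuse(torch.cat([feat_fine, attn_up], dim=1))


class MotionCompensation(nn.Module):
    """Align neighbor-frame features to the current frame without optical flow."""
    def forward(self, affinity, feat_nbr):
        b, c, h, w = feat_nbr.shape
        v = feat_nbr.flatten(2).transpose(1, 2)      # (B, HW, C)
        aligned = torch.bmm(affinity, v)             # (B, HW, C)
        return aligned.transpose(1, 2).reshape(b, c, h, w)


if __name__ == '__main__':
    feat_cur = torch.randn(1, 64, 32, 32)            # current-frame features
    feat_nbr = torch.randn(1, 64, 32, 32)            # adjacent-frame features
    acm, mcm = AffinityComputing(), MotionCompensation()
    apm = AttentionPropagation(channels=64)

    affinity = acm(feat_cur, feat_nbr)               # global correlation (ACM)
    aligned = mcm(affinity, feat_nbr)                # implicit motion compensation (MCM)
    coarse_attn = torch.rand(1, 1, 16, 16)           # e.g. a coarse objectness map
    refined = apm(coarse_attn, feat_cur)             # coarse-to-fine propagation (APM)
    print(aligned.shape, refined.shape)              # both (1, 64, 32, 32)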

Item Type: Article
Additional Information: Publisher Copyright: © 1991-2012 IEEE.
Uncontrolled Keywords: attention mechanism, motion compensation, video object segmentation, video processing, media technology, electrical and electronic engineering
Faculty \ School: Faculty of Science > School of Computing Sciences
Related URLs:
Depositing User: LivePure Connector
Date Deposited: 05 Nov 2024 12:30
Last Modified: 12 Nov 2024 13:30
URI: https://ueaeprints.uea.ac.uk/id/eprint/97504
DOI: 10.1109/TCSVT.2022.3165932

