Multimodal fusion-based spatiotemporal incremental learning for ocean environment perception under sparse observation

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Detail(s)

Original language: English
Article number: 102360
Journal / Publication: Information Fusion
Volume: 108
Online published: 20 Mar 2024
Publication status: Published - Aug 2024

Abstract

Accurate ocean environment perception is crucial for weather and climate prediction. Environmental limitations and deployment costs constrain real-time satellite and buoy observation, leading to sparse data availability. This paper proposes a novel approach, multimodal fusion-based spatiotemporal incremental learning, to enhance ocean environment perception under sparse observations. The method uses sparse real-time observations to comprehend, reconstruct, and predict the full environment. First, spatiotemporal disentanglement decouples intrinsic features by integrating physical principles with data-driven learning. Subsequently, incremental extension captures the dynamic environment through stable representation updating and dynamic behavior learning. Then, multimodal information fusion synergizes multisource intrinsic features, enabling full perception of the ocean environment. Finally, the methodology is supported by convergence analysis and error-bound evaluation. Validation on a global sea surface temperature dataset and a high-dimensional western Pacific Ocean temperature dataset demonstrates the method's potential for advancing ocean research and applications with sparse real-time observation. © 2024 Elsevier B.V.
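The core idea the abstract describes — recovering a full environmental field from a handful of real-time observations by combining a stable learned representation with per-snapshot updating — can be illustrated with a deliberately simplified sketch. The code below is not the authors' method: it substitutes a plain truncated-SVD (proper orthogonal decomposition) basis for the paper's physics-informed spatiotemporal disentanglement, uses a synthetic toy field in place of real sea surface temperature data, and fits modal coefficients to sparse samples by least squares. All names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy spatiotemporal field (a stand-in for e.g. sea surface
# temperature snapshots; NOT the paper's data). Built from two spatial
# modes so the field is exactly low-rank.
nx, nt = 200, 120
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 4 * np.pi, nt)
field = np.outer(np.sin(x), np.cos(t)) + 0.5 * np.outer(np.cos(2 * x), np.sin(0.5 * t))

# Step 1 (stand-in for spatiotemporal disentanglement): extract stable
# spatial modes from historical snapshots via truncated SVD.
U, s, Vt = np.linalg.svd(field[:, :100], full_matrices=False)
r = 2
basis = U[:, :r]  # stable spatial representation

# Step 2 (stand-in for perception under sparse observation): sample a few
# sensor locations from a new snapshot and solve least squares for the
# modal coefficients, then reconstruct the full field.
obs_idx = rng.choice(nx, size=15, replace=False)  # sparse sensor locations
snapshot = field[:, 110]
coeffs, *_ = np.linalg.lstsq(basis[obs_idx], snapshot[obs_idx], rcond=None)
reconstruction = basis @ coeffs

err = np.linalg.norm(reconstruction - snapshot) / np.linalg.norm(snapshot)
print(f"relative reconstruction error: {err:.3e}")
```

Because the toy field lies exactly in the span of the learned basis, 15 random observations suffice for a near-exact reconstruction; on real ocean data the paper's incremental extension step would additionally update the representation as new snapshots arrive, which this sketch omits.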

Research Area(s)

  • Incremental learning, Information fusion, Ocean environment, Sparse observation, Spatiotemporal modeling