Continuous U-Net: Faster, Greater and Noiseless

Chun-Wun Cheng (Co-first Author), Christina Runkel (Co-first Author), Lihao Liu (Co-first Author), Raymond H. Chan, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Image segmentation is a fundamental task in image analysis and clinical practice. The current state-of-the-art techniques are based on U-shaped encoder-decoder networks with skip connections, known as U-Nets. Despite the strong performance reported for existing U-Net type networks, they suffer from several major limitations. These include a hard-coded receptive field size, which compromises both performance and computational cost, and a failure to account for inherent noise in the data. They also have problems associated with discrete layers and do not offer any theoretical underpinning. In this work we introduce continuous U-Net, a novel family of networks for image segmentation. Firstly, continuous U-Net is a continuous deep neural network that introduces new dynamic blocks modelled by second-order ordinary differential equations. Secondly, we provide theoretical guarantees for our network, demonstrating faster convergence, higher robustness and lower sensitivity to noise. Thirdly, we derive qualitative measures to tailor networks to specific segmentation tasks. We demonstrate, through extensive numerical and visual results, that our model outperforms existing U-Net blocks on several medical image segmentation benchmark datasets. © 2024, Transactions on Machine Learning Research. All rights reserved.
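The core idea behind the dynamic blocks described above — modelling a network block as a second-order ODE — can be illustrated with a minimal sketch. This is not the authors' implementation: the dynamics function `f`, the weight matrix `W`, the damping term, and the explicit Euler integrator are all illustrative assumptions. The standard trick is to rewrite the second-order ODE x''(t) = f(x, x') as a first-order system in the augmented state z = (x, v), with x' = v and v' = f(x, v), which any ODE solver can then integrate.

```python
import numpy as np

def f(x, v, W, damping=0.1):
    """Toy stand-in for a learned dynamics network: a nonlinear
    map of the position minus a damping term on the velocity."""
    return np.tanh(W @ x) - damping * v

def second_order_block(x0, W, t1=1.0, steps=20):
    """Integrate x'' = f(x, x') from t=0 to t=t1 with explicit Euler,
    using the first-order reformulation z = (x, v)."""
    x, v = x0.copy(), np.zeros_like(x0)  # start at rest: x'(0) = 0
    h = t1 / steps
    for _ in range(steps):
        # Euler step on the augmented system: x' = v, v' = f(x, v)
        x, v = x + h * v, v + h * f(x, v, W)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)          # input feature vector
W = 0.5 * rng.standard_normal((4, 4))  # hypothetical "learned" weights
out = second_order_block(x0, W)
print(out.shape)  # (4,)
```

In a trainable setting, `f` would be a neural network and the Euler loop would be replaced by an adaptive, differentiable ODE solver; the velocity channel gives the block the extra degree of freedom that distinguishes second-order from first-order neural ODEs.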
Original language: English
Number of pages: 18
Journal: Transactions on Machine Learning Research
Volume: 2024
Online published: 29 Apr 2024
Publication status: Published - Apr 2024

Funding

CWC acknowledges support from the Department of Mathematics, College of Science, CityU, the HKASR reaching out award, and funding from CCMI, University of Cambridge. RHC acknowledges support from HKRGC GRF grants CityU1101120 and CityU11309922 and CRF grant C1013-21GF. AAR gratefully acknowledges funding from the Cambridge Centre for Data-Driven Discovery and Accelerate Programme for Scientific Discovery, made possible by a donation from Schmidt Futures, the EPSRC Digital Core Capability Award, and CMIH and CCMI, University of Cambridge. CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC advanced career fellowship EP/V029428/1, EPSRC grants EP/S026045/1, EP/T003553/1, EP/N014588/1 and EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information, and the Alan Turing Institute.

