Energy-Efficient Hybrid Impulsive Model for Joint Classification and Segmentation on CT Images

Bin Hu, Zhi-Hong Guan*, Guanrong Chen, Jürgen Kurths

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Highly flexible foundation models such as artificial neural networks are becoming imperative in medical practice, enabling diverse tasks with little or no task-specific labelled data. A crucial open problem is how to link latent features and a priori knowledge within multi-task decision outputs, particularly in joint classification and segmentation tasks on images. This article develops a hybrid encoder-decoder model that combines computations on continuous convolution variables with discrete nerve impulses, where impulsive neurons are adopted to boost nonlinear activations. Through a flexible network architecture with regularized multi-loss training, the hybrid model learns features shared between classification and segmentation. The joint decoder not only provides classification results but also predicts intelligible task-specific outputs from the input images. On the COVID-19 lung CT and the Synapse multi-organ CT datasets, experimental results and ablation studies demonstrate the effectiveness and flexibility of the hybrid model, which outperforms convolutional models and human experts. Comparative studies further highlight the high energy efficiency and the decision-output visibility of the hybrid impulsive model, indicating its potential for edge healthcare and biomedical applications. © 2020 IEEE.
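For readers who want a concrete picture, the sketch below is a minimal, hypothetical PyTorch rendering of the general scheme the abstract describes; it is not the authors' implementation, and every name, layer size, and loss weight is an assumption. It shows a shared convolutional encoder whose nonlinearities are discrete spiking (impulsive) activations trained via a surrogate gradient, a joint decoder producing both a classification vector and a segmentation map, and one plausible form of regularized multi-loss training.

    # Hypothetical sketch only -- not the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpikeAct(torch.autograd.Function):
        """Heaviside spike (discrete nerve impulse) with a rectangular
        surrogate gradient, a common trick for training spiking units."""
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return (x > 0).float()

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            # Pass gradient only near the firing threshold.
            return grad_out * (x.abs() < 0.5).float()

    class HybridJointNet(nn.Module):
        """Shared conv encoder with impulsive activations + joint decoder."""
        def __init__(self, n_classes=3, n_seg_labels=2):
            super().__init__()
            self.enc1 = nn.Conv2d(1, 16, 3, padding=1)      # continuous conv features
            self.enc2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)
            self.cls_head = nn.Linear(32, n_classes)        # classification branch
            self.seg_head = nn.Sequential(                  # segmentation branch
                nn.ConvTranspose2d(32, 16, 2, stride=2),
                nn.Conv2d(16, n_seg_labels, 1),
            )

        def forward(self, x):                               # x: (B, 1, H, W)
            h = SpikeAct.apply(self.enc1(x))                # impulsive activation
            h = SpikeAct.apply(self.enc2(h))                # shared latent features
            logits = self.cls_head(h.mean(dim=(2, 3)))      # pooled -> class logits
            seg = self.seg_head(h)                          # per-pixel seg logits
            return logits, seg

    def joint_loss(logits, seg, y_cls, y_seg, lam=0.5, reg=1e-4, params=None):
        """One plausible 'regularized multi-loss': a weighted sum of the two
        task losses plus an L2 penalty on the parameters."""
        loss = F.cross_entropy(logits, y_cls) + lam * F.cross_entropy(seg, y_seg)
        if params is not None:
            loss = loss + reg * sum(p.pow(2).sum() for p in params)
        return loss

With class labels y_cls of shape (B,) and pixel labels y_seg of shape (B, H, W), the combined loss can be minimized with any standard optimizer, so both tasks update the shared encoder at once.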
Original language: English
Journal: IEEE Transactions on Artificial Intelligence
Publication status: Online published - 18 Dec 2024

Research Keywords

  • Classification and segmentation
  • Convolution-impulsive neuron
  • Energy efficiency
  • Hybrid model
  • Joint decoding
