Exploiting Multimodal Features and Deep Learning for Predicting Crowdfunding Successes

Zijian Zhang, Raymond Y.K. Lau

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

1 Citation (Scopus)

Abstract

Although structured data and unstructured textual data have been examined for predicting crowdfunding successes, multimodal features have seldom been exploited. In this research, we examine the predictive power of multimodal features and various deep learning models for predicting crowdfunding successes. In particular, we examine implicit features such as textual project descriptions, project-related audio clips, and project-related video clips. First, we utilize an explanatory statistical method to identify the significance of explicit features in explaining the variance of crowdfunding amounts. Our empirical results reveal that a project description supplemented with a video clip, the number of backers, and the number of projects previously supported are the top three most significant features. Second, we apply deep learning models such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Bidirectional Encoder Representations from Transformers (BERT), fed with multimodal features such as project description text, audio clips, and video clips, to predict crowdfunding successes based on a dataset that we crawled from Kickstarter. Our experiments show that the textual feature fed to the TextCNN model achieves relatively high prediction accuracy when a single modality of feature is used. By combining all features from the three modalities via a late fusion method, the TextCNN model achieves the best accuracy of 82.2%. Our research work opens the door to the exploitation of multimodal features and deep learning to improve the prediction of crowdfunding successes. © 2024 IEEE.
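The abstract's late fusion step can be illustrated with a minimal sketch. This is not the authors' implementation: the paper does not specify its fusion rule, so weighted score averaging is assumed here as one common late-fusion scheme, and the modality names and threshold are illustrative only.

```python
# Hypothetical late-fusion sketch: each modality-specific model
# (e.g. a TextCNN on project descriptions, CNNs on audio/video
# features) emits a success probability; the fused score combines
# them. Weighted averaging is an assumed fusion rule for illustration.
from typing import Dict, Optional


def late_fusion(scores: Dict[str, float],
                weights: Optional[Dict[str, float]] = None) -> float:
    """Combine per-modality success probabilities by weighted averaging."""
    if weights is None:
        weights = {m: 1.0 for m in scores}  # uniform weights by default
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total


def predict_success(scores: Dict[str, float],
                    threshold: float = 0.5) -> bool:
    """Binary crowdfunding-success decision from the fused score."""
    return late_fusion(scores) >= threshold


# Illustrative scores only; the text branch is given the highest score,
# consistent with the paper's finding that the textual modality alone
# already performs well.
example = {"text": 0.9, "audio": 0.6, "video": 0.7}
fused = late_fusion(example)
```

Weighting the modalities (rather than averaging uniformly) is one way to reflect the paper's observation that textual features carry most of the predictive power.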
Original language: English
Title of host publication: 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)
Publisher: IEEE
Number of pages: 6
ISBN (Electronic): 9798350349597
ISBN (Print): 979-8-3503-4960-3
DOIs
Publication status: Published - 2024
Event: 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS 2024) - King's College London (Hybrid), London, United Kingdom
Duration: 29 Jul 2024 - 31 Jul 2024
https://coinsconf.com/2024/

Publication series

Name: IEEE International Conference on Omni-Layer Intelligent Systems, COINS
ISSN (Print): 2996-5322
ISSN (Electronic): 2996-5330

Conference

Conference: 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS 2024)
Country/Territory: United Kingdom
City: London
Period: 29/07/24 - 31/07/24

Research Keywords

  • Business Intelligence
  • Deep Learning
  • Machine Learning
  • Multimodal Analytics
