Exploiting Multimodal Features and Deep Learning for Predicting Crowdfunding Successes

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-review

1 Scopus Citation

Author(s)

Zhang, Zijian; Lau, Raymond Y.K.

Detail(s)

Original language: English
Title of host publication: 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Number of pages: 6
ISBN (electronic): 979-8-3503-4959-7
ISBN (print): 979-8-3503-4960-3
Publication status: Published - 2024

Publication series

Name: IEEE International Conference on Omni-Layer Intelligent Systems, COINS
ISSN (print): 2996-5322
ISSN (electronic): 2996-5330

Conference

Title: 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS 2024)
Location: King's College London (Hybrid)
Place: United Kingdom
City: London
Period: 29 - 31 July 2024

Abstract

Although structured data and unstructured textual data have been examined for predicting crowdfunding successes, multimodal features have seldom been exploited. In this research, we examined the predictive power of multimodal features and various deep learning models for predicting crowdfunding successes. In particular, we examined implicit features such as textual project descriptions, project-related audio clips, and project-related video clips. First, we utilized an explanatory statistical method to identify the significance of explicit features in explaining the variance of crowdfunding amounts. Our empirical results reveal that a project description supplemented with a video clip, the number of backers, and the number of projects previously supported are the three most significant features. Second, we applied deep learning models such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Bidirectional Encoder Representations from Transformers (BERT), fed with multimodal features such as project description text, audio clips, and video clips, to predict crowdfunding successes based on a dataset that we crawled from Kickstarter. Our experiments show that when a single modality is used, the textual features fed to the TextCNN model achieve relatively high prediction accuracy. By combining the features from all three modalities via a late fusion method, the TextCNN model achieves the best accuracy of 82.2%. Our work opens the door to the exploitation of multimodal features and deep learning to improve the prediction of crowdfunding successes. © 2024 IEEE.
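
As a rough illustration of the late fusion method described in the abstract, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): a TextCNN over the project-description text plus simple classifiers over precomputed audio and video feature vectors, whose per-class probabilities are averaged into a fused prediction. All layer sizes, feature dimensions, vocabulary size, and the averaging fusion rule are illustrative assumptions.

```python
# Minimal late-fusion sketch, assuming pre-tokenized text and pooled
# audio/video embeddings. Every dimension below is an illustrative guess.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Kim-style CNN over word embeddings for project-description text."""
    def __init__(self, vocab_size=10000, embed_dim=128, num_filters=64,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, num_classes)

class FeatureMLP(nn.Module):
    """Simple classifier over a precomputed audio or video feature vector."""
    def __init__(self, in_dim, hidden=64, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, feats):
        return self.net(feats)

class LateFusionModel(nn.Module):
    """Late fusion: average the per-modality class probabilities."""
    def __init__(self, audio_dim=128, video_dim=512):
        super().__init__()
        self.text_model = TextCNN()
        self.audio_model = FeatureMLP(audio_dim)
        self.video_model = FeatureMLP(video_dim)

    def forward(self, token_ids, audio_feats, video_feats):
        probs = torch.stack([
            F.softmax(self.text_model(token_ids), dim=1),
            F.softmax(self.audio_model(audio_feats), dim=1),
            F.softmax(self.video_model(video_feats), dim=1),
        ])
        return probs.mean(dim=0)  # fused success/failure probabilities

# Toy usage with random inputs, just to show the expected shapes.
model = LateFusionModel()
tokens = torch.randint(0, 10000, (4, 200))  # 4 descriptions, 200 tokens each
audio = torch.randn(4, 128)                 # e.g. pooled audio embeddings
video = torch.randn(4, 512)                 # e.g. pooled video embeddings
print(model(tokens, audio, video).shape)    # torch.Size([4, 2])
```

Averaging probabilities is only one of several late-fusion rules; weighted averaging or a meta-classifier over the stacked modality outputs would slot into the same structure.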

Research Area(s)

  • Business Intelligence, Deep Learning, Machine Learning, Multimodal Analytics

Citation Format(s)

Exploiting Multimodal Features and Deep Learning for Predicting Crowdfunding Successes. / Zhang, Zijian; Lau, Raymond Y.K.
2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS). Institute of Electrical and Electronics Engineers, Inc., 2024. (IEEE International Conference on Omni-Layer Intelligent Systems, COINS).
