Exploiting Multimodal Features and Deep Learning for Predicting Crowdfunding Successes
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zhang, Zijian; Lau, Raymond Y.K.
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS) |
Publisher | Institute of Electrical and Electronics Engineers, Inc. |
Number of pages | 6 |
ISBN (electronic) | 979-8-3503-4959-7 |
ISBN (print) | 979-8-3503-4960-3 |
Publication status | Published - 2024 |
Publication series
Name | IEEE International Conference on Omni-Layer Intelligent Systems, COINS |
---|---|
ISSN (print) | 2996-5322 |
ISSN (electronic) | 2996-5330 |
Conference
Title | 2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS 2024) |
---|---|
Location | King's College London (Hybrid) |
Place | United Kingdom |
City | London |
Period | 29 - 31 July 2024 |
Link(s)
Abstract
Although structured data and unstructured textual data have been examined for predicting crowdfunding successes, multimodal features have seldom been exploited. In this research, we examined the predictive power of multimodal features and various deep learning models for predicting crowdfunding successes. In particular, we examined implicit features such as textual project descriptions, project-related audio clips, and project-related video clips. First, we utilized an explanatory statistical method to identify the significance of explicit features in explaining the variance of crowdfunding amounts. Our empirical results reveal that a project description supplemented with a video clip, the number of backers, and the number of projects previously supported are the three most significant features. Second, we applied deep learning models such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Bidirectional Encoder Representations from Transformers (BERT), fed with multimodal features such as project description text, audio clips, and video clips, to predict crowdfunding successes based on a dataset that we crawled from Kickstarter. Our experiments show that when a single modality is used, the TextCNN model fed with textual features achieves relatively high accuracy. By combining all features from the three modalities via a late fusion method, the TextCNN model achieves the best accuracy of 82.2%. Our research opens the door to the exploitation of multimodal features and deep learning to improve the prediction of crowdfunding successes. © 2024 IEEE.
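To make the abstract's pipeline concrete, the sketch below illustrates the two ideas it names: a TextCNN classifier over project-description text, and late fusion that averages class probabilities from the text, audio, and video branches. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the vocabulary size, kernel sizes, filter counts, and equal fusion weights are illustrative assumptions, and the audio/video branches are stand-ins.

```python
# Hypothetical sketch of TextCNN + late fusion; not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Minimal Kim-style TextCNN over a sequence of token embeddings."""
    def __init__(self, vocab_size=20000, embed_dim=128,
                 kernel_sizes=(3, 4, 5), num_filters=100, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Convolve with each kernel size, then max-pool over the sequence.
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, num_classes)

def late_fusion(logits_per_modality, weights=None):
    """Late fusion: average (optionally weighted) per-modality probabilities."""
    probs = [F.softmax(l, dim=1) for l in logits_per_modality]
    if weights is None:  # assumption: equal weights per modality
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=1)                         # 1 = funded, 0 = not

# Illustrative usage: random token ids for the text branch; placeholder
# logits stand in for the (hypothetical) audio and video classifiers.
text_model = TextCNN()
tokens = torch.randint(0, 20000, (4, 50))              # batch of 4 descriptions
text_logits = text_model(tokens)
audio_logits = torch.randn(4, 2)
video_logits = torch.randn(4, 2)
preds = late_fusion([text_logits, audio_logits, video_logits])
```

Because fusion happens at the probability level, each modality's network can be trained independently, which is one common motivation for late fusion over feature-level (early) fusion.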
Research Area(s)
- Business Intelligence, Deep Learning, Machine Learning, Multimodal Analytics
Citation Format(s)
Exploiting Multimodal Features and Deep Learning for Predicting Crowdfunding Successes. / Zhang, Zijian; Lau, Raymond Y.K.
2024 IEEE International Conference on Omni-Layer Intelligent Systems (COINS). Institute of Electrical and Electronics Engineers, Inc., 2024. (IEEE International Conference on Omni-Layer Intelligent Systems, COINS).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review