An evaluation of deep neural network models for music classification using spectrograms

Jingxian Li, Lixin Han*, Xiaoshuang Li, Jun Zhu, Baohua Yuan, Zhinan Gou

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

34 Citations (Scopus)

Abstract

Deep Neural Network (DNN) models have lately received considerable attention because their network structures can extract deep features, improving classification accuracy and achieving excellent results in the field of image recognition. However, because music and images differ in content form, transferring deep learning to music classification remains a problem. To address this issue, in this paper we transfer state-of-the-art DNN models to music classification and evaluate their performance using spectrograms. First, we convert music audio files into spectrograms by modal transformation, and then classify the music through deep learning. To alleviate overfitting during training, we propose a balanced trusted loss function and build the balanced trusted model ResNet50_trust. Finally, we compare the performance of different DNN models on music classification. Furthermore, this work adds music sentiment analysis based on a newly constructed music emotion dataset. Extensive experimental evaluations on three music datasets show that our proposed model ResNet50_trust consistently outperforms the other DNN models.
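The modal transformation described in the abstract (audio file to spectrogram image) can be sketched in a minimal form. The following is a hypothetical illustration, not the authors' implementation: the frame size, hop length, and log scaling are assumptions, and a real pipeline would typically use a library such as librosa and feed the resulting image to a DNN like ResNet50.

```python
import numpy as np

def spectrogram(signal, frame_size=512, hop=256):
    """Magnitude spectrogram via a windowed short-time FFT.

    A minimal sketch of the audio-to-spectrogram step; frame_size and
    hop are illustrative choices, not the paper's parameters.
    """
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    # Slice the signal into overlapping windowed frames
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    # Magnitude of the real FFT of each frame
    mag = np.abs(np.fft.rfft(frames, axis=1))
    # Log (dB) scaling, as spectrogram images are usually rendered this way
    return 20 * np.log10(mag + 1e-10)

# 1 second of a 440 Hz tone at a 22050 Hz sampling rate
sr = 22050
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(audio)
print(spec.shape)  # (85, 257): time frames x frequency bins
```

The 2-D array returned here would be rendered as an image; the tone's energy concentrates in the frequency bin nearest 440 Hz, which is what lets an image-classification DNN pick up timbral and rhythmic structure from such inputs.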
Original language: English
Pages (from-to): 4621–4647
Journal: Multimedia Tools and Applications
Volume: 81
Issue number: 4
Online published: 9 Feb 2021
DOIs
Publication status: Published - Feb 2022

Research Keywords

  • Deep learning
  • DNN models
  • Music classification
  • Spectrograms
  • Transfer learning
