Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45) › 32_Refereed conference paper (with ISBN/ISSN) › peer-review


Detail(s)

Original language: English
Title of host publication: 2022 International Joint Conference on Neural Networks (IJCNN 2022)
Publisher: IEEE
Number of pages: 9
ISBN (Electronic): 978-1-7281-8671-9
ISBN (Print): 978-1-6654-9526-4
Publication status: Published - 2022

Publication series

ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407

Conference

Title: 2022 International Joint Conference on Neural Networks (IJCNN 2022)
Place: Italy
City: Padova
Period: 18 - 23 July 2022

Abstract

Embedding-based neural topic models can explicitly represent words and topics by embedding them into a homogeneous feature space, which improves interpretability. However, the training of these embeddings is not explicitly constrained, leaving an unnecessarily large optimization space. A clear description of how the embeddings change during training, and of the impact of those changes on model performance, is also still lacking. In this paper, we propose an embedding-regularized neural topic model, which applies specially designed training constraints on word embeddings and topic embeddings to reduce the optimization space of the parameters. To reveal the changes and roles of the embeddings, we introduce uniformity into the embedding-based neural topic model as an evaluation metric of the embedding space. On this basis, we describe how the embeddings tend to change during training via the changes in their uniformity. Furthermore, we demonstrate the impact of these changes in embedding-based neural topic models through ablation studies. Experimental results on two mainstream datasets indicate that our model significantly outperforms baseline models in terms of the balance between topic quality and document modeling. To the best of our knowledge, this work is the first attempt to exploit uniformity to explore changes in the embeddings of embedding-based neural topic models and their impact on model performance.
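The abstract uses uniformity as a metric of how embeddings spread over the feature space but does not spell out its definition. A minimal sketch, assuming the standard uniformity measure from the contrastive-learning literature (the log of the mean pairwise Gaussian potential over L2-normalized embeddings, which the paper presumably adapts), could look like this:

```python
import numpy as np

def uniformity(embeddings: np.ndarray, t: float = 2.0) -> float:
    """Uniformity of L2-normalized embeddings: the log of the mean
    pairwise Gaussian potential exp(-t * ||x - y||^2).
    Lower (more negative) values indicate embeddings spread more
    uniformly over the unit hypersphere."""
    # project each embedding onto the unit hypersphere
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # squared Euclidean distances between all pairs of normalized vectors
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    # average the Gaussian potential over distinct pairs only
    iu = np.triu_indices(len(x), k=1)
    return float(np.log(np.mean(np.exp(-t * sq_dists[iu]))))

# Tightly clustered vectors score higher (less uniform) than spread-out ones.
clustered = np.array([[1.0, 0.01], [1.0, -0.01], [1.0, 0.02]])
spread = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
assert uniformity(spread) < uniformity(clustered)
```

The function name and the choice of `t = 2.0` are illustrative assumptions, not details taken from this record; tracking this quantity for word and topic embeddings over training epochs is one way to reproduce the kind of analysis the abstract describes.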

Research Area(s)

  • neural topic model, word embedding, topic embedding, interpretability, neural network

Citation Format(s)

Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models. / Shao, Wei; Huang, Lei; Liu, Shuqi; Ma, Shihua; Song, Linqi.

2022 International Joint Conference on Neural Networks (IJCNN 2022). IEEE, 2022.
