Deep auto-encoder in model reduction of large-scale spatiotemporal dynamics

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

9 Scopus Citations

Author(s)

WANG, Mingliang; LI, Han-Xiong; SHEN, Wenjing

Detail(s)

Original language: English
Title of host publication: 2016 International Joint Conference on Neural Networks (IJCNN)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3180-3186
ISBN (Electronic): 978-1-5090-0620-5
ISBN (Print): 978-1-5090-0619-9
Publication status: Published - Jul 2016

Publication series

ISSN (Electronic): 2161-4407

Conference

Title: 2016 International Joint Conference on Neural Networks (IJCNN 2016)
Location: Vancouver Convention Centre
Place: Canada
City: Vancouver
Period: 24 - 29 July 2016

Abstract

This paper presents a deep auto-encoder based model reduction method for large-scale spatiotemporal processes. The method proceeds in three phases to find near-optimal parameters of the reduced-order model; the sequence of phases is arranged according to the idea of greedy training, which approximately minimizes the modeling error. Because the spatial dimensionality does not enter the model directly, the method can handle large-scale model reduction. Two case studies demonstrate the effectiveness of the method.
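The core idea of auto-encoder model reduction — compress each high-dimensional spatial snapshot into a low-dimensional latent trajectory and reconstruct it — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the paper stacks layers with greedy (RBM-style) pretraining in three phases, while this sketch uses a single hidden layer trained end-to-end by plain gradient descent on synthetic data; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spatiotemporal snapshot matrix: each row is one spatial
# snapshot at one time instant (200 time steps, 50 spatial points).
t = np.linspace(0.0, 1.0, 200)[:, None]
x = np.linspace(0.0, 1.0, 50)[None, :]
X = np.sin(2 * np.pi * (x - t)) + 0.5 * np.sin(4 * np.pi * (x + t))

n_in, n_hid = X.shape[1], 4          # reduce 50 spatial dims to 4 latent dims
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # encoder weights
W2 = rng.normal(0.0, 0.1, (n_hid, n_in))   # decoder weights
lr = 1.0

def forward(X):
    H = np.tanh(X @ W1)              # low-dimensional latent trajectory
    return H, H @ W2                 # linear reconstruction of snapshots

_, X0 = forward(X)
err0 = np.mean((X - X0) ** 2)        # reconstruction error before training

for _ in range(2000):                # gradient descent on mean-squared error
    H, Xr = forward(X)
    G = 2.0 * (Xr - X) / X.size      # dLoss/dXr
    gW2 = H.T @ G                    # decoder gradient
    gW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))   # encoder gradient (tanh')
    W1 -= lr * gW1
    W2 -= lr * gW2

H, Xr = forward(X)
err = np.mean((X - Xr) ** 2)
print(f"reconstruction MSE: {err0:.4f} -> {err:.4f}")
```

The latent matrix `H` plays the role of the reduced-order state: its dimension (4 here) is independent of the spatial grid size, which mirrors the abstract's point that spatial dimensionality does not enter the reduced model.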

Research Area(s)

  • Deep auto-encoder, Model reduction, Restricted Boltzmann Machine

Citation Format(s)

Deep auto-encoder in model reduction of large-scale spatiotemporal dynamics. / WANG, Mingliang; LI, Han-Xiong; SHEN, Wenjing.
2016 International Joint Conference on Neural Networks (IJCNN). Institute of Electrical and Electronics Engineers Inc., 2016. pp. 3180-3186, Article 7727605.
