Deep auto-encoder in model reduction of large-scale spatiotemporal dynamics

Mingliang WANG, Han-Xiong LI, Wenjing SHEN

    Research output: Refereed conference paper (with host publication), peer-reviewed

    10 Citations (Scopus)

    Abstract

    This paper presents a deep auto-encoder based model reduction method for large-scale spatiotemporal processes. The method comprises three phases for finding near-optimal parameters of the reduced-order model. The phases are ordered according to the idea of greedy training, which approximately minimizes the modeling error. The method also avoids including the spatial dimensionality in the model, which enables it to handle large-scale model reduction. Two case studies demonstrate the effectiveness of the method.
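    The abstract describes greedy, phase-by-phase training of a deep auto-encoder for dimensionality reduction. Below is a minimal illustrative sketch of that general idea: each layer is trained as a tied-weight auto-encoder on the codes produced by the previous layer, and the encoders are stacked. Note this is an assumption-laden simplification, not the paper's method: the paper pretrains with Restricted Boltzmann Machines, whereas this sketch uses plain gradient descent; the function names (`train_ae_layer`, `greedy_pretrain`) are hypothetical.

    ```python
    import numpy as np

    def train_ae_layer(X, hidden_dim, lr=0.01, epochs=200, seed=0):
        """Train one tied-weight auto-encoder layer by gradient descent.
        (Illustrative stand-in for the paper's RBM-based pretraining.)

        X: (n_samples, n_features) snapshot matrix.
        Returns encoder parameters (W, b) and the encoded data H.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(scale=0.1, size=(d, hidden_dim))
        b = np.zeros(hidden_dim)   # encoder bias
        c = np.zeros(d)            # decoder bias
        for _ in range(epochs):
            H = np.tanh(X @ W + b)      # encode
            Xr = H @ W.T + c            # decode (tied weights, linear output)
            E = Xr - X                  # reconstruction error
            dH = (E @ W) * (1 - H**2)   # backprop through tanh
            gW = X.T @ dH + E.T @ H     # tied-weight gradient (both paths)
            W -= lr * gW / n
            b -= lr * dH.sum(axis=0) / n
            c -= lr * E.sum(axis=0) / n
        H = np.tanh(X @ W + b)
        return (W, b), H

    def greedy_pretrain(X, dims):
        """Stack layers greedily: each layer is trained on the previous code,
        so each phase approximately minimizes its own reconstruction error."""
        params, H = [], X
        for h in dims:
            p, H = train_ae_layer(H, h)
            params.append(p)
        return params, H
    ```

    Because each layer only ever sees sample-by-feature code matrices, the spatial dimensionality never enters the reduced-order model's state, which is the property the abstract highlights for large-scale problems.
    
    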
    Original language: English
    Title of host publication: 2016 International Joint Conference on Neural Networks (IJCNN)
    Publisher: IEEE
    Pages: 3180-3186
    ISBN (Electronic): 978-1-5090-0620-5
    ISBN (Print): 978-1-5090-0619-9
    Publication status: Published - Jul 2016
    Event: 2016 International Joint Conference on Neural Networks (IJCNN 2016) - Vancouver Convention Centre, Vancouver, Canada
    Duration: 24 Jul 2016 - 29 Jul 2016
    http://www.wcci2016.org/

    Publication series

    ISSN (Electronic): 2161-4407

    Conference

    Conference: 2016 International Joint Conference on Neural Networks (IJCNN 2016)
    Abbreviated title: IJCNN 2016
    Place: Canada
    City: Vancouver
    Period: 24/07/16 - 29/07/16

    Research Keywords

    • Deep auto-encoder
    • Model reduction
    • Restricted Boltzmann Machine

