Weighted Gate Layer Autoencoders

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

1 Scopus Citation

Author(s)

  • Heba El-Fiqi
  • Min Wang
  • Kathryn Kasmarik
  • Anastasios Bezerianos
  • Hussein A. Abbass

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Cybernetics
Online published: 27 Jan 2021
Publication status: Online published - 27 Jan 2021

Abstract

A single dataset could hide a significant number of relationships among its feature set. Learning these relationships simultaneously avoids the time complexity associated with running the learning algorithm for every possible relationship, and affords the learner the ability to recover missing data and substitute erroneous values by using available data. In our previous research, we introduced the gate-layer autoencoders (GLAEs), which offer an architecture that enables a single model to approximate multiple relationships simultaneously. GLAE controls what an autoencoder learns in a time series by switching certain input gates on and off, thus allowing or disallowing the data to flow through the network to increase the network's robustness. However, GLAE is limited to binary gates. In this article, we generalize the architecture to weighted gate layer autoencoders (WGLAE) through the addition of a weight layer that updates the error according to which variables are more critical, encouraging the network to learn these variables. This new weight layer can also be used as an output gate and uses additional control parameters to afford the network the ability to represent different models that learn through gating the inputs. We compare the architecture against similar architectures in the literature and demonstrate that the proposed architecture produces more robust autoencoders that can reconstruct both incomplete synthetic and real data with high accuracy.
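The gating and weighting mechanism described in the abstract can be illustrated with a small sketch. The following is not the authors' implementation: it is a minimal NumPy example, under the assumption that an input gate vector masks selected features before encoding while a per-feature weight vector re-scales the reconstruction error, so that a gated-out (hidden) feature can still be emphasized during learning. All names (`wglae_step`, the toy data, the gate and weight values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def wglae_step(params, x, gate, weight, lr=0.05):
    """One gradient step of a sketched weighted-gate autoencoder.

    `gate` (0/1 per feature) masks inputs before the encoder;
    `weight` re-scales each feature's reconstruction error, so a
    feature hidden at the input still drives learning at the output.
    """
    W1, b1, W2, b2 = params
    xg = x * gate                       # input gate layer
    h = np.tanh(xg @ W1 + b1)           # encoder
    x_hat = h @ W2 + b2                 # decoder reconstructs ALL features
    diff = x_hat - x
    loss = 0.5 * np.mean((weight * diff ** 2).sum(axis=1))
    # backpropagation of the weighted squared error
    d_out = weight * diff / x.shape[0]
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = xg.T @ d_h, d_h.sum(axis=0)
    new_params = (W1 - lr * dW1, b1 - lr * db1,
                  W2 - lr * dW2, b2 - lr * db2)
    return new_params, loss

# toy data: the third feature is the sum of the first two
x = rng.normal(size=(256, 2))
x = np.hstack([x, x.sum(axis=1, keepdims=True)])
gate = np.array([1.0, 1.0, 0.0])        # hide feature 2 at the input
weight = np.array([1.0, 1.0, 4.0])      # but emphasize its reconstruction

params = (rng.normal(0, 0.1, (3, 4)), np.zeros(4),
          rng.normal(0, 0.1, (4, 3)), np.zeros(3))
losses = []
for _ in range(2000):
    params, loss = wglae_step(params, x, gate, weight)
    losses.append(loss)
```

Because feature 2 never reaches the encoder, the network is forced to reconstruct it from features 0 and 1 alone, which is exactly the "recover missing data from available data" behavior the abstract describes; the larger weight on feature 2 makes its reconstruction error dominate the loss.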

Research Area(s)

  • Autoencoder (AE), data reconstruction, neural networks, unsupervised learning

Citation Format(s)

Weighted Gate Layer Autoencoders. / El-Fiqi, Heba; Wang, Min; Kasmarik, Kathryn; Bezerianos, Anastasios; Tan, Kay Chen; Abbass, Hussein A.

In: IEEE Transactions on Cybernetics, 27.01.2021.
