TY - GEN
T1 - Latent label consistent K-SVD for joint machine faults representation and classification
AU - Zhang, Zhao
AU - Jiang, Weiming
AU - Jia, Lei
AU - Zhao, Mingbo
AU - Li, Fanzhang
PY - 2017/1/13
Y1 - 2017/1/13
N2 - We propose a new discriminative dictionary learning framework termed Latent Label Consistent K-SVD (LLC-KSVD) for representing and classifying machine faults. Our LLC-KSVD handles the task by simultaneously minimizing the reconstruction, discriminative sparse-code and classification errors. To enhance the representation and classification powers, LLC-KSVD aims to decompose the given data into a sparse reconstruction part, a salient feature part and an error part. The salient features are learned by embedding the data via a projection, and a classifier is then trained over the extracted salient features so that the features are ensured to be optimal for classification. Thus, the classification approach of our LLC-KSVD is very efficient, since there is no need for the time-consuming sparse reconstruction process with a well-trained dictionary for each test signal, as existing models require. Besides, to make the classifier robust to noise and outliers, we impose l2,1-norm regularization on the classifier so that the predictions are more accurate. Simulations on several machine fault datasets demonstrate the state-of-the-art performance of our LLC-KSVD.
KW - feature extraction
KW - label consistent dictionary learning
KW - machine faults representation and classification
KW - sparse representation
KW - recognition
KW - diagnosis
UR - http://www.scopus.com/inward/record.url?scp=85012906113&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85012906113&origin=recordpage
U2 - 10.1109/INDIN.2016.7819268
DO - 10.1109/INDIN.2016.7819268
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9781509028702
SP - 794
EP - 799
BT - IEEE International Conference on Industrial Informatics (INDIN)
PB - IEEE
T2 - 14th IEEE International Conference on Industrial Informatics (INDIN 2016)
Y2 - 19 July 2016 through 21 July 2016
ER -