TY - GEN
T1 - Parallel image matrix compression for face recognition
AU - Xu, Dong
AU - Yan, Shuicheng
AU - Zhang, Lei
AU - Li, Mingjing
AU - Ma, Weiying
AU - Liu, Zhengkai
AU - Zhang, Hongjiang
PY - 2005
Y1 - 2005
AB - The canonical face recognition algorithms Eigenface and Fisherface are both based on one-dimensional vector representations. However, with high feature dimensionality and limited training data, face recognition often suffers from the curse of dimensionality and the small sample size problem. Recent research [4] shows that face recognition based on a direct 2D matrix representation, i.e., 2DPCA, achieves better performance than that based on the traditional vector representation. However, three questions are left unresolved in the 2DPCA algorithm: 1) what is the meaning of the eigenvalues and eigenvectors of the covariance matrix in 2DPCA; 2) why can 2DPCA outperform Eigenface; and 3) how can the dimension be further reduced after 2DPCA. In this paper, we analyze 2DPCA from a different viewpoint and prove that 2DPCA is actually a "localized" PCA with each row vector of an image treated as an object. With this explanation, we show that the intrinsic reason 2DPCA outperforms Eigenface is that 2DPCA uses fewer feature dimensions and more samples than Eigenface. To further reduce the dimension after 2DPCA, a two-stage strategy, namely parallel image matrix compression (PIMC), is proposed to compress the redundancy of the image matrix, which exists among both its row vectors and its column vectors. Extensive experimental results demonstrate that PIMC is superior to 2DPCA and Eigenface, and that PIMC+LDA outperforms 2DPCA+LDA and Fisherface. © 2005 IEEE.
UR - http://www.scopus.com/inward/record.url?scp=56149118158&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-56149118158&origin=recordpage
U2 - 10.1109/MMMC.2005.57
DO - 10.1109/MMMC.2005.57
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 0769521649
SN - 9780769521640
T3 - Proceedings of the 11th International Multimedia Modelling Conference, MMM 2005
SP - 232
EP - 238
BT - Proceedings of the 11th International Multimedia Modelling Conference, MMM 2005
T2 - 11th International Multimedia Modelling Conference, MMM 2005
Y2 - 12 January 2005 through 14 January 2005
ER -