Equivalence Between 2DPCA-L1 and PCA-L1
Wang Haixian*
Research Center for Learning Science, Southeast University
*Corresponding author
Funding: This work was supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China under grant No. 20070286030.
Opened online: 12 January 2011
Accepted by: none
Citation: Wang Haixian. Equivalence Between 2DPCA-L1 and PCA-L1 [OL]. [12 January 2011]. http://en.paper.edu.cn/en_releasepaper/content/4404185
 
 
Abstract: Principal component analysis (PCA), one of the most popular unsupervised dimensionality reduction methods, is of central importance in multivariate data analysis. It seeks a set of orthogonal bases such that the variance of the projected data points is maximized. The conventional PCA, however, is sensitive to outliers because it relies on the L2-norm. As a robust alternative, PCA-L1 has been proposed in the literature. In the image domain, two-dimensional PCA (2DPCA) operates directly on image matrices, obviating the image-to-vector transformation required by PCA. Like PCA, 2DPCA is based on the L2-norm, and 2DPCA-L1, also proposed in the literature, is its robust counterpart. PCA-L1 and 2DPCA-L1 are two important subspace learning approaches developed recently. In this paper, we show that 2DPCA-L1 is in fact a special case of PCA-L1 applied to the row vectors of the image matrices, which makes the relationship between the two methods clear.
Keywords: Principal component analysis (PCA); two-dimensional PCA (2DPCA); PCA-L1; 2DPCA-L1
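
To make the stated equivalence concrete, the following Python sketch (not taken from the paper; the function names, the sign-flipping update, the convergence tolerance, and the toy data are all assumptions) implements a single-component PCA-L1 iteration in the greedy L1-maximization style and then obtains a 2DPCA-L1-style projection vector simply by feeding the same routine the stacked row vectors of the image matrices, which is the reduction the paper describes.

import numpy as np

def pca_l1(X, n_iter=100, tol=1e-8):
    """One-component PCA-L1 via a sign-flipping fixed-point iteration (sketch).
    X : (n_samples, n_features) array, assumed already centered.
    Returns a unit vector w that locally maximizes sum_i |w^T x_i|."""
    X = np.asarray(X, dtype=float)
    # initialize with the sample of largest L2 norm (a common heuristic)
    w = X[np.argmax(np.linalg.norm(X, axis=1))].copy()
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                 # avoid stalling on zero inner products
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w

def two_dpca_l1_via_rows(images, **kwargs):
    """2DPCA-L1-style projection vector obtained by stacking the rows of
    every image matrix and running PCA-L1 on the resulting row vectors."""
    rows = np.vstack(images)            # shape: (total number of rows, image width)
    return pca_l1(rows, **kwargs)

# toy usage with hypothetical 8x5 "images"
rng = np.random.default_rng(0)
imgs = [rng.normal(size=(8, 5)) for _ in range(10)]
w = two_dpca_l1_via_rows(imgs)
print(w.shape)                          # (5,)

Because the 2DPCA-L1 objective sum_i ||A_i w||_1 decomposes into a sum of |a^T w| terms over all rows a of all image matrices A_i, running the same iteration on the stacked rows realizes the special case described in the abstract. Only one projection vector is extracted here, and the fixed-point iteration converges only to a local maximizer that depends on the initialization.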
 
 
 
