|
Principal component analysis (PCA) is one of the most popular unsupervised dimensionality reduction methods and plays an important role in multivariate data analysis. It seeks a set of orthogonal basis vectors such that the variance of the projected data is maximized. Conventional PCA, however, is sensitive to outliers because its objective is based on the L2-norm. As a robust alternative, PCA-L1 has been proposed in the literature. In the image domain, two-dimensional PCA (2DPCA) operates directly on image matrices, obviating the image-to-vector transformation required by PCA. Like PCA, 2DPCA is based on the L2-norm, and 2DPCA-L1 has been proposed in the literature as its robust counterpart. PCA-L1 and 2DPCA-L1 are two important, recently developed subspace learning approaches. In this paper, we show that 2DPCA-L1 is in fact a special case of PCA-L1 applied to the row vectors of the image matrices, thereby clarifying the relationship between the two methods.
|
Keywords: Principal component analysis (PCA); two-dimensional PCA (2DPCA); PCA-L1; 2DPCA-L1
|
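To make the claimed relationship concrete, the following is a minimal Python sketch (not taken from the paper): it implements the standard fixed-point iteration commonly used by PCA-L1 to obtain a single L1-maximizing projection vector, and then applies it to the stacked row vectors of a set of image matrices, since maximizing the 2DPCA-L1 objective sum_i ||A_i w||_1 is the same as maximizing the PCA-L1 objective over all rows of all images. The function name, the random data, the array shapes, and the omission of centering and of deflation for further components are illustrative assumptions.

```python
import numpy as np

def pca_l1_first_component(X, n_iter=100, seed=0):
    """Fixed-point iteration that maximizes sum_k |x_k^T w| over unit vectors w
    (the usual PCA-L1 update). X: (n_samples, n_features); centering is omitted
    here for simplicity."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)        # polarity of each sample's projection
        s[s == 0] = 1             # avoid zero signs stalling the update
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Hypothetical data: 50 "images" of size 8x10.
rng = np.random.default_rng(1)
images = rng.standard_normal((50, 8, 10))

# 2DPCA-L1 maximizes sum_i ||A_i w||_1; stacking every row of every image
# into one data matrix turns this into the ordinary PCA-L1 objective,
# so the same iteration applies.
rows = images.reshape(-1, images.shape[2])   # (50*8, 10) row vectors
w = pca_l1_first_component(rows)
print("First projection vector (2DPCA-L1 via row-wise PCA-L1):", np.round(w, 3))
```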