Recently, bidirectional PCA (BDPCA) has proven to be an efficient tool for pattern recognition and image analysis, and encouraging experimental results have been reported and discussed in the literature. However, BDPCA must be performed in batch mode: all training data has to be available before the projection matrices can be computed. If additional samples need to be incorporated into an existing system, the system must be retrained on the whole updated training set. Moreover, the scatter matrices of BDPCA are formulated as the sum of K (the sample size) image covariance matrices, which makes incremental learning directly on the scatter matrices impossible and thus poses a new challenge for online training.
In fact, there are two major reasons for building incremental algorithms. The first is that, when the number of training images is very large, a batch algorithm may be unable to process the entire training set because of its computational or storage requirements. The second is that the learning algorithm may be required to operate in a dynamic setting in which not all training data is given in advance: new training samples may arrive at any time and must be processed in an online manner. Through matricizations of a third-order tensor, we transform the eigenvalue decomposition of the scatter matrices into the SVD of the corresponding unfolded matrices, and we analyze the complexity and memory requirements of the novel algorithm. A theoretical clue for selecting suitable dimensionality parameters without losing classification information is also presented in this paper. Experimental results on the FERET and CMU PIE databases show that the IBDPCA algorithm gives a close approximation to the BDPCA method while using less time.
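The equivalence exploited above can be sketched numerically: for a stack of K training images, the BDPCA row scatter matrix equals the product of the mode-1 unfolding with its transpose, so its eigenvectors are the left singular vectors of that unfolding. The snippet below is a minimal illustration of this identity, assuming random data and omitting mean-centering for brevity; the variable names are our own, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, n = 5, 6, 4                      # K training images of size m x n
T = rng.standard_normal((K, m, n))     # third-order tensor of images

# Mode-1 unfolding: place the K image matrices side by side -> (m, K*n)
unfold1 = np.concatenate([T[k] for k in range(K)], axis=1)

# Row scatter as in batch BDPCA: sum_k X_k X_k^T (mean omitted for brevity)
S_row = sum(T[k] @ T[k].T for k in range(K))

# Eigendecomposition of the scatter matrix ...
eigvals, eigvecs = np.linalg.eigh(S_row)

# ... versus SVD of the unfolding: S_row = unfold1 @ unfold1.T, so the
# eigenvectors of S_row are the left singular vectors of unfold1
U, s, Vt = np.linalg.svd(unfold1, full_matrices=False)

# Eigenvalues of S_row equal the squared singular values of the unfolding
assert np.allclose(np.sort(eigvals)[::-1], s**2)
```

Because the unfolding grows column-wise as new images arrive, incremental SVD updates on `unfold1` can replace recomputing the scatter's eigendecomposition from scratch.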
Keywords: Incremental learning; Bidirectional principal component analysis; Singular value decomposition; Tensor; k-mode unfolding; Face recognition