Complete canonical correlation analysis with application to multi-view gait recognition
Pattern Recognition, Volume 50, February 2016, Pages 107-117
Abstract: Canonical correlation analysis (CCA) is a well-known multivariate analysis method for quantifying the correlations between two sets of multidimensional variables. However, for multi-view gait recognition, it is difficult to apply CCA directly to two sets of high-dimensional vectors because of the computational complexity. Moreover, in this situation the eigenmatrix of CCA is usually singular, which makes direct implementation of the CCA algorithm almost impossible. In practice, PCA or singular value decomposition is employed as a preprocessing step to solve these problems. Nevertheless, this strategy may discard dimensions that contain important discriminative and correlation information. To overcome the shortcomings of CCA when dealing with two sets of high-dimensional vectors, we develop a novel method named complete canonical correlation analysis (C3A). In our method, we first reformulate the traditional CCA so that computing the inverse of a high-dimensional matrix is avoided. With the help of this reformulation, C3A further transforms the singular generalized eigensystem computation of CCA into two stable eigenvalue decomposition problems. Moreover, a feasible and effective method is proposed to alleviate the computational burden of high-dimensional matrices for typical gait image data. Experimental results on two benchmark gait databases, the CASIA gait database and the challenging USF gait database, demonstrate the effectiveness of the proposed method.
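As a concrete illustration of the problem C3A addresses, the sketch below implements classical CCA with a small ridge term added to the covariance matrices so they stay invertible for high-dimensional data. The regularization constant `reg` and the Cholesky-whitening route are our assumptions for the sketch; this is the baseline CCA the abstract criticizes, not the paper's C3A reformulation.

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Classical CCA via a regularized, whitened SVD.

    X: (n, p) and Y: (n, q) hold n paired samples.  Without the ridge
    term `reg`, Cxx and Cyy are singular whenever p or q exceeds n,
    which is exactly the failure mode the paper points out.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)
    # Whiten the cross-covariance; its singular values are the
    # canonical correlations.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U)      # projection directions for X
    Wy = np.linalg.solve(Ly.T, Vt.T)   # projection directions for Y
    return Wx, Wy, np.clip(s, 0.0, 1.0)
```

For gait-sized feature vectors (tens of thousands of dimensions), forming and factorizing Cxx is already the bottleneck, which motivates reformulations like C3A that avoid the explicit inverse.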
Frontal gait recognition from occluded scenes
Pattern Recognition Letters, Volume 63, 1 October 2015, Pages 9-15
Pratik Chattopadhyay, Shamik Sural, Jayanta Mukherjee
Abstract: In this paper, we propose a method using Kinect depth data to address the problem of occlusion in frontal gait recognition. We consider situations where such depth cameras are mounted on top of entry and exit points of a zone under surveillance, respectively capturing the back and front views of each subject passing through the zone. A feature set corresponding to the back view is derived from the depth information along the contour of the silhouette, while periodic variation of the skeleton structure of the lower body region as estimated by Kinect is extracted from the front view. These feature sets preserve gait dynamics at a high resolution and can be extracted efficiently. In congested places like airports, railway stations and shopping malls, multiple persons move into the surveillance zone one after another, thereby causing occlusion of the target. The proposed recognition procedure compares the unoccluded frames of a cluttered test sequence with the matching frames of a training sequence. Dynamic programming based local sequence alignment is used to determine this frame correspondence. The method is computationally efficient and shows encouraging results under different levels of occlusion.
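The frame-correspondence step can be sketched with a Smith-Waterman-style local alignment over per-frame feature vectors. Because the alignment is local, badly matching (occluded) test frames simply terminate a run instead of corrupting the whole score. The similarity function and gap penalty below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_align(test, train, gap=0.5):
    """Dynamic-programming local alignment of two frame sequences.

    `test` and `train` are sequences of per-frame feature vectors.
    Frame similarity is 1 minus the Euclidean distance, so dissimilar
    frames contribute negatively and the recursion's max(0, ...) lets
    the alignment restart after an occluded stretch.
    """
    T, G = len(test), len(train)
    H = np.zeros((T + 1, G + 1))
    for i in range(1, T + 1):
        for j in range(1, G + 1):
            sim = 1.0 - np.linalg.norm(np.asarray(test[i - 1]) -
                                       np.asarray(train[j - 1]))
            H[i, j] = max(0.0,
                          H[i - 1, j - 1] + sim,   # match / mismatch
                          H[i - 1, j] - gap,       # skip a test frame
                          H[i, j - 1] - gap)       # skip a train frame
    return H.max()                                  # best local score
```

In a recognition pipeline, the test sequence would be aligned against every gallery sequence and the highest local-alignment score would decide the identity.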
Uncooperative gait recognition: Re-ranking based on sparse coding and multi-view hypergraph learning
Pattern Recognition, Available online 2 December 2015
Abstract: Gait is an important biometric which can operate from a distance without subject cooperation. However, it is easily affected by changes in covariate conditions (carrying, clothing, view angle, walking speed, random noise, etc.), and it is hard for a training set to cover all conditions. Bipartite ranking models have achieved success in gait recognition without the assumption of subject cooperation. We propose a multi-view hypergraph learning re-ranking (MHLRR) method by integrating multi-view hypergraph learning (MHL) with a hypergraph-based re-ranking framework. Sparse coding re-ranking (SCRR) and MHLRR are then integrated under a graph-based framework into a single model, which we define as the sparse coding multi-view hypergraph learning re-ranking (SCMHLRR) method; it achieves higher recognition accuracy under a genuine uncooperative setting. Extensive experiments demonstrate that our approach substantially outperforms existing ranking-based methods, achieving a marked increase in recognition rate under the most difficult uncooperative settings.
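The sparse-coding side of such a re-ranking scheme can be sketched generically: a probe feature is coded as a sparse combination of gallery features, and the gallery is re-ordered by coefficient magnitude. The ISTA solver, the l1 weight `lam`, and the ranking rule below are our assumptions for illustration, not the paper's SCRR formulation.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5*||y - D a||^2 + lam*||a||_1.

    D: (d, m) matrix whose columns are gallery features,
    y: (d,) probe feature.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)              # gradient of the smooth term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

def rerank(D, y, lam=0.1):
    """Re-order gallery indices by sparse-coefficient magnitude."""
    a = sparse_code(D, y, lam)
    return np.argsort(-np.abs(a))          # best-matching gallery entry first
```

An initial ranked list (e.g. from a bipartite ranking model) would then be fused with this sparse-coding order rather than replaced by it.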
Cross-view gait recognition based on human walking trajectory
Journal of Visual Communication and Image Representation, Volume 25, Issue 8, November 2014, Pages 1842-1855
Abstract: We propose in this paper a novel cross-view gait recognition method based on the projection of the gravity center trajectory (GCT). We project the real 3-D GCT onto different view planes to handle view variation. Firstly, we estimate the real GCT curve in 3-D space under different views from statistics of limb parameters. Then, we derive the view transformation matrix from the projection relation between the curve and a plane, and estimate the view of a silhouette sequence with this matrix to make the gait features view-invariant. We also calculate the body-part trajectory on the silhouette sequence to improve recognition accuracy, using correlation strength as the similarity measure. Lastly, we use a nested matching method to compute the final matching score of the two kinds of features. Experimental results on the widely used CASIA-B gait database demonstrate the effectiveness and practicability of the proposed method.
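The geometric core of projecting a 3-D trajectory onto a view plane can be sketched as follows. The sketch assumes an orthographic camera looking horizontally at angle `view_deg` about the vertical axis; the paper's actual view transformation matrix is estimated from limb statistics, which this simplified stand-in does not model.

```python
import numpy as np

def project_trajectory(points_3d, view_deg):
    """Orthographically project a 3-D gravity-center trajectory onto a
    vertical view plane at angle `view_deg` about the up (z) axis.

    The image plane keeps the height z and the component of (x, y)
    perpendicular to the viewing direction.
    """
    t = np.deg2rad(view_deg)
    # Unit vector spanning the image's horizontal axis.
    u = np.array([-np.sin(t), np.cos(t), 0.0])
    P = np.asarray(points_3d, float)
    return np.stack([P @ u, P[:, 2]], axis=1)   # (n, 2): (horizontal, height)
```

Projecting the same 3-D curve at the gallery's view angle and the probe's view angle gives trajectories that can be compared directly, which is the intuition behind cross-view matching via a view transformation.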
View-invariant gait recognition via deterministic learning
Neurocomputing, Volume 175, Part A, 29 January 2016, Pages 324-335
Abstract: Performance of gait recognition can be affected by many factors, especially by the variation of view angle, which significantly changes the visual features available for matching. In this paper, we present a new method to eliminate the effect of view angle for efficient gait recognition via deterministic learning theory. The width of the binarized silhouette models the periodic deformation of human gait shape and is selected as the gait feature. It captures the spatio-temporal characteristics of each individual, represents the dynamics of gait motion, and sensitively reflects the variance between gait patterns across various views. The gait recognition approach consists of two phases: a training phase and a recognition phase. In the training phase, the gait dynamics underlying different individuals' gaits observed from different view angles are locally accurately approximated by radial basis function (RBF) neural networks. The obtained knowledge of approximated gait dynamics is stored in constant RBF networks. To address the problem of view change, whether the variation is small or significantly large, the training patterns from different views constitute a uniform training dataset containing all kinds of gait dynamics of each individual observed across various views. In the recognition phase, a bank of dynamical estimators is constructed for all the training gait patterns. Prior knowledge of human gait dynamics represented by the constant RBF networks is embedded in the estimators. By comparing the set of estimators with a test gait pattern whose view pattern is contained in the prior training dataset, a set of recognition errors is generated. The average L1 norms of the errors are taken as the similarity measure between the dynamics of the training gait patterns and the dynamics of the test gait pattern. A test gait pattern similar to one of the training gait patterns can be recognized according to the smallest-error principle. Finally, comprehensive experiments are carried out on the widely adopted multi-view gait databases CASIA-B and CMU MoBo to demonstrate the effectiveness of the proposed approach.
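The approximate-then-compare idea can be sketched with a least-squares RBF fit of a periodic signal, with the average L1 prediction error as the similarity measure, as in the abstract. The scalar phase input, Gaussian width, center placement, and ridge term are our simplifying assumptions; the paper's deterministic-learning training law is more involved.

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian RBF activations for scalar phase inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, centers, width=0.5, ridge=1e-6):
    """Least-squares weights so that rbf_features(x) @ W approximates y.

    These fixed weights play the role of the 'constant RBF network'
    storing one training pattern's dynamics.
    """
    Phi = rbf_features(x, centers, width)
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)),
                           Phi.T @ y)

def l1_error(x, y, centers, W, width=0.5):
    """Average L1 norm of the prediction error: the similarity measure
    between a stored pattern and a test pattern."""
    return np.mean(np.abs(rbf_features(x, centers, width) @ W - y))
```

Recognition then amounts to evaluating every stored network on the test pattern and picking the one with the smallest average L1 error.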
Robust view-invariant multiscale gait recognition
Pattern Recognition, Volume 48, Issue 3, March 2015, Pages 798-811
Sruti Das Choudhury, Tardi Tjahjadi
Abstract: The paper proposes a two-phase view-invariant multiscale gait recognition method (VI-MGR) which is robust to variation in clothing and the presence of a carried item. In phase 1, VI-MGR uses the entropy of the limb region of a gait energy image (GEI) to determine the matching gallery view of the probe using 2-dimensional principal component analysis and a Euclidean distance classifier. In phase 2, the probe subject is compared with the matching view of the gallery subjects using multiscale shape analysis. In this phase, VI-MGR applies a Gaussian filter to the GEI to generate a multiscale gait image, gradually highlighting the subject's inner shape characteristics to achieve insensitivity to boundary shape alterations due to carrying conditions and clothing variation. A weighted random subspace learning based classification is used to exploit the high dimensionality of the feature space for improved identification while avoiding overlearning. Experimental analyses on public datasets demonstrate the efficacy of VI-MGR.
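Both building blocks of VI-MGR have simple counterparts that can be sketched directly: the grey-level entropy of an image region, and a stack of progressively Gaussian-smoothed GEIs. The bin count, sigma schedule, and pure-NumPy separable blur below are our assumptions for illustration.

```python
import numpy as np

def region_entropy(img, bins=32):
    """Shannon entropy of the grey-level distribution in a region
    (applied to the limb part of a GEI for view selection)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2-D array, pure NumPy."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)

def multiscale(gei, sigmas=(1.0, 2.0, 4.0)):
    """Stack of increasingly smoothed GEIs; coarser scales emphasise
    the inner shape over the clothing/carrying-sensitive boundary."""
    return np.stack([gaussian_blur(gei, s) for s in sigmas])
```

Feature extraction and the weighted random subspace classifier would then operate on this multiscale stack rather than on the raw GEI.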