Subject-based discriminative sparse representation model for detection of concealed information
Computer Methods and Programs in Biomedicine, Volume 143, May 2017, Pages 25-33
Amir Akhavan, Mohammad Hassan Moradi, Safa Rafiei Vand
Abstract: Background and objectives: The use of machine learning approaches in the concealed information test (CIT) plays a key role in the progress of this neurophysiological field. In this paper, we present a new machine learning method for CIT in which each subject is considered independently of the others. The main goal of this study is to adapt discriminative sparse models so that they are applicable to subject-based concealed information testing. Methods: To provide sufficient discriminability between guilty and innocent subjects, we introduce a novel discriminative sparse representation model and appropriate learning methods for it. To evaluate the method, forty-four subjects participated in a mock crime scenario and their EEG data were recorded. As model input, recurrence plot features were extracted from single-trial data for the different stimuli. The extracted feature vectors were then reduced using the statistical dependency method, and the reduced feature vectors were fed into the proposed subject-based sparse model, in which the discriminative power of the sparse code and the reconstruction error are exploited simultaneously. Results: Experimental results showed that the proposed approach achieved better performance than other competing discriminative sparse models: the classification accuracy, sensitivity and specificity of the presented sparsity-based method were about 93%, 91% and 95%, respectively. Conclusions: Using the EEG data of a single subject in response to different stimulus types, and with the aid of the proposed discriminative sparse representation model, one can distinguish guilty subjects from innocent ones. This property eliminates the need for EEG data from several subjects when learning the model and making a decision for a specific subject.
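The recurrence plot features mentioned in the abstract are a standard construction from nonlinear time-series analysis. As a rough illustration (not the authors' exact pipeline), a binary recurrence matrix and two simple recurrence-quantification features can be computed from a single-trial signal as sketched below; the embedding parameters and the threshold heuristic are assumptions chosen for illustration only:

```python
import numpy as np

def recurrence_plot(signal, eps=None, emb_dim=3, delay=1):
    """Binary recurrence matrix of a 1-D signal via time-delay embedding.

    R[i, j] = 1 when embedded states i and j are closer than eps.
    """
    x = np.asarray(signal, dtype=float)
    n = len(x) - (emb_dim - 1) * delay
    # time-delay embedding: each row is one reconstructed state vector
    states = np.column_stack([x[i * delay:i * delay + n] for i in range(emb_dim)])
    # pairwise Euclidean distances between all embedded states
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.max()  # common heuristic: a fraction of the max distance
    return (dists <= eps).astype(int)

def rqa_features(R):
    """Two simple recurrence-quantification features: the recurrence rate and
    a determinism proxy (share of recurrent points on off-main diagonals)."""
    n = R.shape[0]
    recurrence_rate = R.sum() / (n * n)
    diag_mass = sum(np.diagonal(R, k).sum() for k in range(1, n)) / max(R.sum(), 1)
    return np.array([recurrence_rate, diag_mass])
```

In a pipeline like the one described, such feature vectors would be computed per trial and per stimulus type before feature reduction and classification.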
Multiscale and multiresolution methods for sparse representation of large datasets
Procedia Computer Science, Volume 108, 2017, Pages 1652-1661
Prashant Shekhar, Abani Patra, Beata M. Csatho
Abstract: In this paper, we present a strategy for studying a large observational dataset at different resolutions to obtain a sparse representation in a computationally efficient manner. Such representations are crucial for many applications, from modeling and inference to visualization. Resolution here stems from the variation of the correlation strength among the different observation instances. The motivation behind the approach is to make a large dataset as small as possible by removing all the redundant information, so that the original data can be reconstructed with minimal loss of information. Our past work borrowed ideas from multilevel simulations to extract a sparse representation; here, we introduce the use of multi-resolution kernels. We have tested our approach on a carefully designed suite of analytical functions, along with gravity and altimetry time series datasets from a section of the Greenland Ice Sheet. In addition to providing a good strategy for data compression, the proposed approach also finds application in efficient sampling procedures and error filtering in datasets. The results presented in the article clearly establish the promise of the approach, along with prospects for its application in different fields of data analytics in scientific computing and related domains.
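Although the paper's multi-resolution kernel machinery is more elaborate, the core idea of selecting a small set of observation instances that explain the rest through their kernel correlations can be sketched with an incomplete pivoted Cholesky factorization of an RBF kernel matrix. The kernel choice, length scale, and fixed budget below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def rbf_kernel(X, Y, length_scale):
    """Squared-exponential kernel; length_scale sets the resolution."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def greedy_sparse_points(X, length_scale, m):
    """Pick m pivot points whose kernel columns best explain the dataset
    (incomplete pivoted Cholesky of the kernel matrix)."""
    K = rbf_kernel(X, X, length_scale)
    n = len(X)
    diag = K.diagonal().copy()      # per-point unexplained variance
    L = np.zeros((n, m))
    pivots = []
    for j in range(m):
        i = int(np.argmax(diag))    # point with largest residual variance
        pivots.append(i)
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(diag[i])
        diag -= L[:, j] ** 2
        diag = np.clip(diag, 0.0, None)
    return pivots, diag             # diag: remaining per-point error
```

Sweeping `length_scale` from large to small would mimic moving from coarse to fine resolution: a large scale lets a few pivots explain broad trends, while smaller scales admit more points to capture local detail.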
Nonparametric kernel sparse representation-based classifier
Pattern Recognition Letters, Volume 89, 1 April 2017, Pages 46-52
Alireza Esmaeilzehi, Hamid Abrishami Moghaddam
Abstract: The sparse representation-based classifier (SRC) and the kernel sparse representation-based classifier (KSRC) are founded on a combination of pattern recognition and compressive sensing methods, and provide acceptable results in many machine learning problems. Nevertheless, these classifiers suffer from some shortcomings: for instance, SRC's accuracy drops for samples from classes with the same direction, and KSRC's performance declines when the data are not normally distributed in kernel space. This paper introduces the nonparametric kernel sparse representation-based classifier (NKSRC) as a generalized framework subsuming SRC and KSRC. First, it applies a kernel to the samples to overcome data directionality; then it employs nonparametric discriminant analysis (NDA) to reduce data dimensionality in kernel space, alleviating concerns about the type of data distribution. Experimental results demonstrate NKSRC's superiority over SRC and KSRC-LDA, and its equal or superior performance with respect to KSRC-PCA, on synthetic data, four well-known face recognition datasets and several UCI datasets.
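For readers unfamiliar with the SRC baseline that NKSRC generalizes, the classic decision rule is: sparse-code a test sample over a dictionary of training samples, then assign the class whose atoms yield the smallest reconstruction residual. A minimal sketch follows; the orthogonal matching pursuit coder and the parameter choices are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: sparse code of y over dictionary D
    (columns assumed unit-normalized)."""
    residual = y.astype(float).copy()
    idx, coef = [], np.zeros(0)
    for _ in range(n_nonzero):
        if np.linalg.norm(residual) < 1e-10:   # sample already explained
            break
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_predict(D, labels, y, n_nonzero=5):
    """SRC rule: code y over the full dictionary, then pick the class whose
    atoms alone give the smallest reconstruction residual."""
    x = omp(D, y, n_nonzero)
    classes = np.unique(labels)
    resid = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
             for c in classes]
    return classes[int(np.argmin(resid))]
```

KSRC replaces the inner products above with kernel evaluations; NKSRC, per the abstract, additionally applies NDA in the kernel space before coding.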