Hybrid soft computing approaches to content-based video retrieval: A brief review
Applied Soft Computing, In Press, Uncorrected Proof, Available online 27 April 2016
Abstract: There has been unrestrained growth of videos on the Internet due to the proliferation of multimedia devices. These videos are mostly stored in unstructured repositories, which poses enormous challenges for both image and video retrieval. Users aim to retrieve videos whose content is relevant to their needs. Traditionally, low-level visual features have been used for content-based video retrieval (CBVR), so a gap existed between these low-level features and the high-level semantic content. This semantic gap was partially bridged by the proliferation of research on interest point detectors and descriptors, which represent mid-level features of the content. The computational time and human interaction involved in classical CBVR approaches are quite cumbersome. To increase the accuracy, efficiency and effectiveness of the retrieval process, researchers resorted to soft computing paradigms, and the retrieval task was automated to a great extent using individual soft computing components. Owing to the voluminous growth of multimedia databases, compounded by an exponential rise in the number of users, integrating two or more soft computing techniques became desirable for greater efficiency and accuracy of the retrieval process. Such hybrid approaches enhance the overall performance and robustness of the system with reduced human interference. This article surveys the hybrid soft computing techniques in use for content-based image and video retrieval.
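The mid-level interest-point features mentioned above are easy to make concrete. The sketch below is a minimal illustration, not a method from the review: it uses OpenCV's ORB detector and descriptor to rank stored video frames against a query image. The file names and the match-distance threshold are hypothetical.

```python
# Minimal sketch: mid-level features for content-based retrieval using
# ORB interest points (OpenCV). Frame paths are illustrative placeholders.
import cv2

def orb_descriptors(image_path, n_features=500):
    """Detect ORB keypoints and return their binary descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_features)
    _, descriptors = orb.detectAndCompute(img, None)
    return descriptors

def match_score(desc_query, desc_frame, max_distance=40):
    """Similarity = number of good Hamming matches (heuristic threshold)."""
    if desc_query is None or desc_frame is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_query, desc_frame)
    return sum(1 for m in matches if m.distance < max_distance)

# Rank stored video frames against a query image (paths are hypothetical).
query = orb_descriptors("query.png")
frames = ["frame_001.png", "frame_002.png"]
ranked = sorted(frames, key=lambda f: match_score(query, orb_descriptors(f)),
                reverse=True)
print(ranked)
```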
Feature extraction and soft computing methods for aerospace structure defect classification
Measurement, Volume 85, May 2016, Pages 192-209
Abstract: This study concerns the effectiveness of several signal processing and data interpretation techniques for diagnosing defects in aerospace structures. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, together with several soft computing techniques, including a recent HPC parallel implementation of the U-BRAIN learning algorithm, on non-destructive testing data. The performance of the resulting detection systems is measured in terms of accuracy, sensitivity, specificity, and precision; their effectiveness is evaluated by the Matthews correlation coefficient, the area under the curve (AUC), and the F-measure. Several experiments are performed on a standard dataset of eddy current signal samples for aircraft structures. The experimental results show that the key to a successful defect classifier is the feature extraction method, with the novel CBIR-based one outperforming all competitors, and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application.
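For reference, the evaluation measures named above can be computed as in the following minimal sketch, which uses scikit-learn on illustrative binary defect labels and scores rather than the paper's data.

```python
# Sketch of the evaluation metrics named in the abstract, computed with
# scikit-learn on illustrative binary labels (1 = defect).
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # ground truth
y_score = np.array([.9, .2, .8, .4, .1, .3, .7, .6])   # classifier scores
y_pred = (y_score >= 0.5).astype(int)                  # thresholded decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))    # TP / (TP + FN)
print("Specificity:", tn / (tn + fp))                  # TN / (TN + FP)
print("Precision  :", precision_score(y_true, y_pred))
print("Matthews   :", matthews_corrcoef(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
print("F-measure  :", f1_score(y_true, y_pred))
```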
A feasibility study of cachaça type recognition using computer vision and pattern recognition
Computers and Electronics in Agriculture, Volume 123, April 2016, Pages 410-414
Abstract: Brazilian rum (also known as cachaça) is the third most commonly consumed distilled alcoholic drink in the world, with approximately 2.5 billion liters produced each year. It is a traditional drink with refined features and a delicate aroma that is produced mainly in Brazil but consumed in many countries. It can be aged in various types of wood for 1–3 years, which adds aroma and a distinctive flavor with different characteristics that affect the price. A research challenge is to develop a cheap automatic recognition system that inspects the finished product for the wood type and the aging time used in its production. Some classical methods use chemical analysis, but this approach requires relatively expensive laboratory equipment. By contrast, the system proposed in this paper captures image signals from samples and uses an intelligent classification technique to recognize the wood type and the aging time. The classification system uses an ensemble of classifiers, each obtained with different wavelet transform settings. We compared the proposed approach with classical methods based on chemical features. We analyzed 105 samples that had been aged for 3 years and showed that the proposed solution could automatically recognize wood types and the aging time with accuracies of up to 100.00% and 85.71%, respectively, at a lower cost.
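A minimal sketch of the ensemble idea described above follows: one classifier per wavelet decomposition, combined by majority vote. The wavelet names, decomposition level, sub-band energy features, and SVM base learner are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: per-wavelet classifiers combined by majority vote (assumed setup).
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(image, wavelet, level=2):
    """Mean absolute energy of each 2D wavelet sub-band as a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]           # approximation band
    for detail in coeffs[1:]:                      # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(band)) for band in detail)
    return np.array(feats)

def train_ensemble(images, labels, wavelets=("db2", "sym4", "haar")):
    """Train one SVM per wavelet decomposition setting."""
    return [SVC().fit([wavelet_features(im, w) for im in images], labels)
            for w in wavelets]

def predict_vote(models, image, wavelets=("db2", "sym4", "haar")):
    """Majority vote across the per-wavelet classifiers."""
    votes = [m.predict([wavelet_features(image, w)])[0]
             for m, w in zip(models, wavelets)]
    return max(set(votes), key=votes.count)
```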
Weight prediction of broiler chickens using 3D computer vision
Computers and Electronics in Agriculture, Volume 123, April 2016, Pages 319-326
Abstract: In modern broiler houses, broilers are traditionally weighed using automatic electronic platform weighers that the birds have to visit voluntarily; heavy broilers may avoid the weigher. Camera-based weighing systems have the potential to weigh a wider variety of broilers, including ill birds, that would avoid a platform weigher. In the current study, a fully automatic 3D camera-based weighing system for broilers has been developed and evaluated in a commercial production environment. Specifically, a low-cost 3D camera (Kinect) that directly returns a depth image was employed. The camera was robust to the changing light conditions of the broiler house because it contains its own infrared light source.
A newly developed image processing algorithm is proposed. The algorithm first segments the image with a range-based watershed algorithm, then extracts twelve different weight descriptors and, finally, predicts the individual broiler weights using a Bayesian artificial neural network. Four other models for weight prediction were also evaluated.
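A minimal sketch of such a pipeline on a depth image follows: range-based watershed segmentation, a few per-bird descriptors, and an MLP regressor standing in for the paper's Bayesian artificial neural network. The floor distance, the descriptor choices (three here, not the paper's twelve), and the model settings are illustrative assumptions.

```python
# Sketch: depth image -> range-based watershed -> descriptors -> regressor.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from sklearn.neural_network import MLPRegressor

def segment_broilers(depth_mm, floor_mm=1000):
    """Range-based watershed: birds stand out as height peaks above the floor."""
    depth = depth_mm.astype(np.int32)              # avoid unsigned underflow
    mask = depth < floor_mm                        # foreground pixels
    height = np.where(mask, floor_mm - depth, 0)   # height above the floor
    peaks = peak_local_max(height, min_distance=20, labels=mask.astype(int))
    markers = np.zeros(depth.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-height, markers, mask=mask)

def descriptors(depth_mm, labels, floor_mm=1000):
    """Per-segment area, max height, and mean height as weight descriptors."""
    feats = []
    for lab in range(1, labels.max() + 1):
        region = labels == lab
        h = floor_mm - depth_mm[region].astype(np.int32)
        feats.append([region.sum(), h.max(), h.mean()])
    return np.array(feats)

# A weight model would then be fit on manually annotated examples, e.g.:
# model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
# model.fit(descriptors(depth_mm, labels), weights_in_grams)
```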
The system was tested in a commercial broiler house with 48,000 broilers (Ross 308) during the last 20 days of the rearing period. A traditional platform weigher was used to obtain the reference weights. An average relative mean error of 7.8% between the predicted and reference weights was achieved on a separate test set of 83 broilers in approximately 13,000 manually annotated images. The errors were generally larger toward the end of the rearing period as the broiler density increased; the absolute errors were in the range of 20–100 g in the first half of the period and 50–250 g in the second half. The system could be a stepping stone for a wide variety of additional camera-based measurements in the commercial broiler pen, such as activity analysis and health alerts.
The role of big data and cognitive computing in the learning process
Journal of Visual Languages & Computing, In Press, Corrected Proof, Available online 6 April 2016
Abstract: In this paper, we investigate how the rise of big data and cognitive computing systems is going to redesign the labor market, with an impact on learning processes as well. In this respect, we refer to higher education and depict a model of a smart university that relies on the concepts underlying recent smart-city development trends. We regard education as a process, which lets us identify specific issues to solve in order to overcome existing shortcomings, and we provide some suggestions on how to enhance universities' performance. We highlight inputs, outputs, and dependencies in a block diagram, and we propose a solution built on a new paradigm, the smarter university, in which knowledge grows rapidly, is easy to share, and is regarded as a common heritage of both teachers and students. One paramount consequence is a growing demand for competences and skills that recall the so-called T-shaped model, and we observe that this is pushing the education system to include a blend of disciplines in the curricula of its courses. In this overview, among the wide variety of recent innovations, we focus on cognitive computing systems and on the exploitation of big data, which we expect to further accelerate the renewal of the key components of the knowledge society, universities included.
IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research
Clinical Therapeutics, Volume 38, Issue 4, April 2016, Pages 688-701
Abstract: Life sciences researchers are under pressure to innovate faster than ever. Big data offer the promise of unlocking novel insights and accelerating breakthroughs. Ironically, although more data are available than ever, only a fraction is being integrated, understood, and analyzed. The challenge lies in harnessing volumes of data, integrating the data from hundreds of sources, and understanding their various formats.
New technologies such as cognitive computing offer promise for addressing this challenge because cognitive solutions are specifically designed to integrate and analyze big datasets. Cognitive solutions can understand different types of data such as lab values in a structured database or the text of a scientific publication. Cognitive solutions are trained to understand technical, industry-specific content and use advanced reasoning, predictive modeling, and machine learning techniques to advance research faster.
Watson, a cognitive computing technology, has been configured to support life sciences research. This version of Watson includes medical literature, patents, genomics, and chemical and pharmacological data that researchers would typically use in their work. Watson has also been developed with specific comprehension of scientific terminology so it can make novel connections in millions of pages of text. Watson has been applied to a few pilot studies in the areas of drug target identification and drug repurposing. The pilot results suggest that Watson can accelerate identification of novel drug candidates and novel drug targets by harnessing the potential of big data.