
Recommended Recent English Journal Articles (Video Anomaly Behavior Recognition)

Learning an video frame-based face detection system for security fields

 

A video frame-based face detection system for the security field and its deep learning tools

 

Journal of Visual Communication and Image Representation, Volume 55, August 2018, Pages 457-463

 

Gang Niu, Ququ Chen

 

Abstract: Face detection, as a form of artificial intelligence (AI) technology, has become an indispensable tool in daily life and affects nearly every aspect of it, and different application areas demand ever higher detection and recognition accuracy and speed. We therefore design a new video frame-based face detection system that supports safety precautions by distinguishing normal faces from abnormal faces. An abnormal face is one that is partially occluded by objects such as a mask or sunglasses. Because previous detection systems easily recognize such abnormal faces as normal ones, they are often ignored, which creates potential dangers, especially for residential face-recognition access control, bank account login and other security applications. The proposed system provides a complete pipeline for detecting faces in video and distinguishing between the two classes, and it achieves good real-time performance in both accuracy and speed. We adopt libfacedetection to detect faces in each frame, introduce the dlib library, a deep learning toolkit, to align faces and extract feature values, and use a GMM clustering algorithm to train and test the system on images. The system's ability to distinguish normal from abnormal faces is of great significance to the security field.
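As an illustration of the pipeline described in this abstract, here is a minimal Python sketch, not the authors' implementation: dlib's HOG face detector stands in for libfacedetection, dlib's landmark predictor and ResNet encoder handle alignment and feature extraction, and scikit-learn's GaussianMixture stands in for the paper's GMM clustering. The model file names and video paths are placeholders.

```python
# Sketch of the abstract's pipeline: per-frame face detection, alignment,
# feature extraction, and GMM-based normal/abnormal separation.
# Assumptions (not from the paper): dlib's HOG detector replaces libfacedetection,
# scikit-learn's GaussianMixture replaces the paper's GMM step, and the file
# paths below are placeholders.
import cv2
import dlib
import numpy as np
from sklearn.mixture import GaussianMixture

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")   # placeholder model file
encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def face_descriptors(frame_bgr):
    """Detect faces in one video frame, align them via landmarks, return 128-D descriptors."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    vecs = []
    for rect in detector(rgb, 1):
        shape = predictor(rgb, rect)                       # facial-landmark alignment
        vecs.append(np.asarray(encoder.compute_face_descriptor(rgb, shape)))
    return vecs

def video_descriptors(path, step=10):
    """Sample every `step`-th frame of a video and collect all face descriptors."""
    cap, out, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            out.extend(face_descriptors(frame))
        i += 1
    cap.release()
    return np.vstack(out) if out else np.empty((0, 128))

# Fit a two-component GMM on descriptors from training video: one component is
# intended to capture unoccluded (normal) faces, the other occluded (abnormal) ones.
train_X = video_descriptors("training_footage.mp4")                         # placeholder video
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(train_X)
test_labels = gmm.predict(video_descriptors("live_footage.mp4"))            # cluster id per detected face
```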

 

Automatic human trajectory destination prediction from video  

 

Intelligent video-based prediction of human trajectory destinations

 

Expert Systems with Applications, Volume 110, 15 November 2018, Pages 41-51

 

Palwasha Afsar, Paulo Cortez, Henrique Santos

 

Abstract: This paper presents an intelligent system for detecting human trajectory destinations from video. The system assumes a passive collection of video of a wide scene used by humans in their daily motion activities, such as walking towards a door. The proposed system includes three main modules, namely human blob detection, star skeleton detection and destination area prediction. It works directly with raw video and produces motion features for the destination prediction stage, such as position, velocity and acceleration derived from the detected human skeletons, resulting in several input features that are used to train a machine learning classifier. We adopted a university campus exterior scene for the experimental study, which includes 348 pedestrian trajectories from 171 videos and five destination areas: A, B, C, D and E. A total of six data processing combinations and four machine learning classifiers were compared under a realistic growing-window evaluation. Overall, high-quality results were achieved by the best model, which uses 37 skeleton motion inputs, undersampling of the training data and a random forest. The global discrimination, in terms of the area under the receiver operating characteristic curve, is around 87%. Furthermore, the best model can predict the five destination classes in advance, obtaining very good ahead discrimination for classes A, B, C and D, and reasonable ahead discrimination for class E.
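To make the feature-to-classifier step concrete, the following is a simplified Python sketch under stated assumptions: trajectories are reduced to (t, x, y) centroid sequences instead of the paper's 37 star-skeleton motion inputs, class_weight="balanced" stands in for the undersampling step, and the growing-window evaluation is not reproduced. It only illustrates how position, velocity and acceleration features can feed a random forest destination classifier.

```python
# Simplified sketch of destination prediction from motion features.
# Assumptions: (t, x, y) trajectories replace the paper's skeleton features;
# class_weight="balanced" replaces undersampling; labels "A".."E" are strings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(traj):
    """Position, velocity and acceleration summaries for one (N, 3) trajectory of rows (t, x, y)."""
    t, xy = traj[:, 0], traj[:, 1:]
    vel = np.gradient(xy, t, axis=0)              # first derivative: velocity
    acc = np.gradient(vel, t, axis=0)             # second derivative: acceleration
    return np.concatenate([
        xy[-1],                                   # last observed position
        xy[-1] - xy[0],                           # net displacement
        vel.mean(axis=0), acc.mean(axis=0),       # mean velocity and acceleration
        [np.linalg.norm(vel, axis=1).mean()],     # mean speed
    ])

def train_predictor(trajectories, destinations):
    """Fit a random forest on per-trajectory motion features."""
    X = np.vstack([motion_features(tr) for tr in trajectories])
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
    return clf.fit(X, destinations)

# Tiny synthetic demo with two placeholder destinations.
t = np.linspace(0, 5, 50)
traj_a = np.column_stack([t, 10 * t, 2 * t])      # drifts towards hypothetical area "A"
traj_b = np.column_stack([t, -8 * t, 3 * t])      # drifts towards hypothetical area "B"
model = train_predictor([traj_a, traj_b], ["A", "B"])
print(model.predict([motion_features(traj_a)]))   # re-classifies the first training trajectory
```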

 

Multiple human tracking in wearable camera videos with informationless intervals  

 

Multiple human tracking in wearable-camera videos with "informationless" intervals

 

Pattern Recognition Letters, Volume 112, 1 September 2018, Pages 104-110

 

Hongkai Yu, Haozhou Yu, Hao Guo, Jeff Simmons, Song Wang

 

Abstract: Multiple human tracking plays a key role in video surveillance and human activity detection. Compared to fixed cameras, wearable cameras such as GoPro and Google Glass follow the motion of their wearers and can therefore capture the targets (people of interest) over larger areas and from better view angles. However, wearable-camera videos suffer from sudden view changes, resulting in informationless (temporal) intervals in which the targets are lost, which makes multiple human tracking a much more challenging problem. In particular, given large and unknown camera-pose changes, it is difficult to associate the multiple targets across such an interval based on spatial proximity or appearance matching. In this paper, we propose a new approach in which the spatial pattern of the multiple targets is extracted, predicted and then leveraged to help associate the targets across an informationless interval. We also propose a classification-based algorithm to identify the informationless intervals in wearable-camera videos. Experiments are conducted on a new dataset containing 30 wearable-camera videos, and the performance is compared to that of several other multi-target tracking algorithms.
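The core re-association idea, matching the group's spatial pattern before and after an informationless interval, can be sketched as follows. This is an illustrative reduction, not the authors' algorithm: the "pattern" is simply the targets' positions relative to their centroid, SciPy's Hungarian solver performs the assignment, and the paper's pattern prediction and interval-detection classifier are omitted.

```python
# Sketch of spatial-pattern re-association across an informationless interval.
# Assumptions: the pattern is the centroid-relative layout of the targets, and
# identities are matched with the Hungarian algorithm on layout distances.
import numpy as np
from scipy.optimize import linear_sum_assignment

def centre(points):
    """Express target positions relative to their centroid (the spatial pattern)."""
    pts = np.asarray(points, dtype=float)
    return pts - pts.mean(axis=0)

def reassociate(before, after):
    """Match target indices before the interval to detections after it.

    before, after: (K, 2) arrays of image positions for the same K targets.
    Returns a list of (index_before, index_after) pairs.
    """
    cost = np.linalg.norm(centre(before)[:, None, :] - centre(after)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)      # minimum-cost one-to-one matching
    return list(zip(rows.tolist(), cols.tolist()))

# Example: three targets whose relative layout is preserved despite a camera shift.
before = [(100, 200), (150, 210), (300, 220)]
after = [(340, 120), (190, 110), (140, 100)]      # shifted and reordered detections
print(reassociate(before, after))                 # [(0, 2), (1, 1), (2, 0)]
```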