
Recommended Recent English Literature on Gesture (Posture) Recognition

Gesture Recognition Method Based on Ultrasound Propagation in Body

Hiroki Watanabe, Tsutomu Terada, Masahiko Tsukamoto

November 2016 MOBIQUITOUS 2016: Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services

Abstract: We propose a method for gesture recognition that utilizes active acoustic sensing, which transmits acoustic signals to a target and recognizes the target's state by analyzing the response. In this study, the user wore a contact speaker that transmitted ultrasonic sweep signals into the user's body and a contact microphone that detected the ultrasound propagated through the body. The propagation characteristics of the ultrasound change depending on the user's movements, and we utilize these changes to recognize the user's gestures. The novel point of our method is that the user's gestures can be acquired not only from apparent movement but also from the user's internal state, such as muscle activity, since the ultrasound travels both through the body's interior and along the body surface. We implemented a prototype device and investigated the performance of the proposed method in 21 contexts with 10 subjects. The evaluation results confirmed that the recognition accuracy was 91.6% on average when we trained and tested on the same data set.
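The pipeline the abstract describes (transmit an ultrasonic sweep, record the propagated response, classify the change) can be sketched roughly as below. The sample rate, sweep band, duration, and the nearest-template classifier are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

FS = 96_000              # sample rate in Hz -- an assumed value
F0, F1 = 20_000, 40_000  # hypothetical ultrasonic sweep band in Hz
DUR = 0.1                # sweep duration in seconds -- assumed

def sweep_signal():
    """Linear frequency sweep (chirp) to be transmitted into the body."""
    t = np.arange(int(FS * DUR)) / FS
    # instantaneous frequency rises linearly from F0 to F1
    phase = 2 * np.pi * (F0 * t + (F1 - F0) * t ** 2 / (2 * DUR))
    return np.sin(phase)

def response_feature(received):
    """Normalized magnitude spectrum of the received ultrasound,
    used as the gesture feature vector."""
    spec = np.abs(np.fft.rfft(received))
    return spec / (np.linalg.norm(spec) + 1e-12)

def classify(received, templates):
    """Nearest-template classification of the propagation response."""
    feat = response_feature(received)
    return min(templates, key=lambda g: np.linalg.norm(feat - templates[g]))
```

A gesture that changes how the sweep propagates (here simulated by reshaping the signal) shifts the response spectrum toward a different stored template.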

 

Mudra: User-friendly Fine-grained Gesture Recognition using WiFi Signals

Ouyang Zhang, Kannan Srinivasan

December 2016 CoNEXT '16: Proceedings of the 12th International Conference on emerging Networking EXperiments and Technologies

Abstract: There has been great interest in recognizing gestures using wireless communication signals. We are motivated to detect extremely fine, subtle finger gestures with WiFi signals. We envision this technology finding applications in finger-gesture control, disabled-friendly devices, physical therapy, etc. The requirements of mm-level sensitivity and user-friendly operation using existing WiFi signals pose great challenges. Here, we present Mudra, a fine-grained finger gesture recognition system that leverages WiFi signals to enable near-human-to-machine interaction through finger motion. Mudra uses a two-antenna receiver to detect and recognize finger gestures: it uses the signal received from one antenna to cancel the signal from the other. This "cancellation" is extremely sensitive and enables us to detect the small channel variations caused by finger movements. Since Mudra decodes gestures from existing WiFi transmissions, it enables gesture recognition without sacrificing WiFi transmission opportunities. Moreover, Mudra is user-friendly, with no need for user training. To demonstrate Mudra, we implemented a prototype on an NI-based SDR platform with a COTS WiFi adapter and evaluated it in a typical office environment. The results show that our system achieves 96% accuracy.
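The cancellation idea can be illustrated with a minimal sketch: scale one antenna's signal by a least-squares complex gain so that it nulls the other antenna's static channel, leaving a residual dominated by small channel changes. The least-squares gain and the energy threshold are assumptions for illustration; Mudra's actual signal processing is more involved:

```python
import numpy as np

def cancel(rx1, rx2):
    """Remove the static channel: scale antenna 2's signal by the
    least-squares complex gain and subtract it from antenna 1's.
    The residual is dominated by small channel variations."""
    alpha = np.vdot(rx2, rx1) / np.vdot(rx2, rx2)
    return rx1 - alpha * rx2

def motion_detected(rx1, rx2, thresh=1e-3):
    """Flag motion when residual energy exceeds a threshold
    (the threshold value here is an arbitrary illustration)."""
    residual = cancel(rx1, rx2)
    return float(np.mean(np.abs(residual) ** 2)) > thresh
```

With a purely static channel the residual is essentially zero; a small time-varying multipath component (a moving finger) survives the cancellation and is easy to detect.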

 

Theft-resilient mobile wallets: transparently authenticating NFC users with tapping gesture biometrics

Babins Shrestha, Manar Mohamed, Sandeep Tamrakar, Nitesh Saxena

December 2016 ACSAC '16: Proceedings of the 32nd Annual Conference on Computer Security Applications

Abstract: The deployment of NFC technology on mobile phones is gaining momentum, enabling many important applications such as NFC payments, access control for buildings, and public transit ticketing. However, NFC phones are prone to loss or theft, which allows an attacker with physical access to the phone to fully compromise the functionality provided by its NFC applications. Authenticating the user of an NFC phone with PINs or passwords provides only a weak level of security and undermines the efficiency and convenience that NFC applications are supposed to provide. In this paper, we devise a novel gesture-centric NFC biometric authentication mechanism that is fully transparent to the user. Simply "tapping" the phone against the NFC reader, a natural gesture already performed by the user prior to making an NFC transaction, unlocks the NFC functionality. An unauthorized user cannot unlock it, because tapping serves as a "hard-to-mimic" biometric gesture unique to each user. We show how the NFC tapping biometric can be extracted in a highly robust manner using multiple phone sensors (motion, position, and ambient) together with machine learning classifiers. The use of multiple sensors not only improves the authentication accuracy but also makes active attacks harder, since multiple sensor events need to be mimicked simultaneously. Our work significantly enhances the security of NFC transactions without adding any extra burden on the users.
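A tap-biometric verifier of this kind boils down to enrolling a user's tap events as sensor feature vectors and accepting a new tap only if it resembles them. The statistical features, nearest-centroid verification, and fixed threshold below are deliberate simplifications; the paper uses machine learning classifiers over richer multi-sensor features:

```python
import numpy as np

def tap_features(accel, gyro):
    """Hypothetical feature vector for one tap event: per-axis mean,
    standard deviation, and peak magnitude of each (n x 3) sensor trace."""
    feats = []
    for sig in (accel, gyro):
        feats += [sig.mean(axis=0), sig.std(axis=0), np.abs(sig).max(axis=0)]
    return np.concatenate(feats)

class TapAuthenticator:
    """Template-based verifier: enroll a user's taps, then accept a new
    tap only if it lies close to the enrolled centroid."""

    def __init__(self, threshold):
        self.threshold = threshold  # distance cutoff -- an assumed tuning knob

    def enroll(self, taps):
        feats = np.stack([tap_features(a, g) for a, g in taps])
        self.centroid = feats.mean(axis=0)

    def verify(self, accel, gyro):
        dist = np.linalg.norm(tap_features(accel, gyro) - self.centroid)
        return bool(dist < self.threshold)
```

The same structure extends naturally to the paper's setting: more sensors widen the feature vector, which is what makes simultaneous mimicry by an attacker harder.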

 

A Deep Learning Approach to Mid-air Gesture Interaction for Mobile Devices from Time-of-Flight Data

Thomas Kopinski, Fabian Sachara, Uwe Handmann

November 2016 MOBIQUITOUS 2016: Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services

Abstract: This contribution presents a novel approach to utilizing Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors provide depth data at high frame rates independent of illumination, making applications possible both indoors and outdoors. This comes at the cost of precision in the depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds, which yields fixed-size input and thus makes Deep Learning approaches with Convolutional Neural Networks applicable. To increase precision, we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
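The key step the abstract describes, turning a variable-size point cloud into fixed-size CNN input, can be sketched as a normalized depth-image rasterization. The grid resolution and the centering/scaling scheme are illustrative assumptions, not the paper's exact normalization:

```python
import numpy as np

GRID = 32  # output raster resolution -- an assumed value

def rasterize(points):
    """Project an (n x 3) point cloud onto a fixed GRID x GRID depth image,
    after centering and x/y scale normalization, so that a CNN always
    receives constant-shaped input regardless of cloud size."""
    pts = points - points.mean(axis=0)        # translation invariance
    scale = np.abs(pts[:, :2]).max() or 1.0
    pts[:, :2] /= scale                        # scale normalization in x/y
    cols = np.clip(((pts[:, 0] + 1) / 2 * (GRID - 1)).astype(int), 0, GRID - 1)
    rows = np.clip(((pts[:, 1] + 1) / 2 * (GRID - 1)).astype(int), 0, GRID - 1)
    img = np.zeros((GRID, GRID))
    # keep the nearest depth per cell (a simple z-buffer)
    for r, c, z in zip(rows, cols, pts[:, 2]):
        if img[r, c] == 0 or z < img[r, c]:
            img[r, c] = z
    return img
```

Because the cloud is centered and scaled before binning, the same hand pose produces the same raster wherever it appears in the sensor's field of view.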

 

ubiGaze: ubiquitous augmented reality messaging using gaze gestures

Mihai Bâce, Teemu Leppänen, David Gil de Gomez, Argenis Ramirez Gomez

November 2016 SA '16: SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications

Abstract: We describe ubiGaze, a novel wearable, ubiquitous method to augment any real-world object with invisible messages through gaze gestures that lock the message onto the object. This enables a context- and location-dependent messaging service that users can utilize discreetly and effortlessly. Further, gaze gestures can serve as an authentication method, even when the augmented object is publicly known. We developed a prototype using two wearable devices: a Pupil eye tracker equipped with a scene camera, and a Sony SmartWatch 3. The eye tracker follows the user's gaze, the scene camera captures distinct features of the selected real-world object, and the smartwatch provides both input and output modalities for selecting and displaying messages. We describe the concept, design, and implementation of our real-world system. Finally, we discuss research implications and address future work.
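A gaze gesture used as a lock, as in the abstract, is essentially a sequence of eye strokes matched against a stored pattern. A minimal sketch, assuming a four-direction stroke alphabet and exact string matching (the paper's gesture vocabulary and matcher are not specified here):

```python
def strokes(gaze_xy):
    """Quantize successive gaze fixation points into directional strokes:
    E/W for dominant horizontal motion, N/S for dominant vertical motion."""
    out = []
    for (x0, y0), (x1, y1) in zip(gaze_xy, gaze_xy[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            out.append("E" if dx > 0 else "W")
        else:
            out.append("N" if dy > 0 else "S")
    return "".join(out)

def matches(gaze_xy, stored):
    """Unlock the message only if the performed stroke string matches
    the stroke string stored with the augmented object."""
    return strokes(gaze_xy) == stored
```

Because only the direction sequence matters, the gesture can be performed anywhere in the field of view, which is what makes it discreet yet repeatable.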

 

Mogeste: A Mobile Tool for In-Situ Motion Gesture Design

Aman Parnami, Apurva Gupta, Gabriel Reyes, Ramik Sadana, Yang Li, Gregory D. Abowd

December 2016 IHCI '16: Proceedings of the 8th Indian Conference on Human Computer Interaction

Abstract: Motion gestures can be expressive, fast to access and perform, and are facilitated by ubiquitous inertial sensors. However, implementing a gesture recognizer requires substantial programming and pattern recognition expertise. Although several graphical desktop-based tools lower the threshold of development, they do not support ad hoc development in naturalistic settings. We present Mogeste, a mobile tool for in-situ motion gesture design. Mogeste allows interaction designers to envision, train, and test motion gesture recognizers within minutes, using the inertial sensors in commodity devices. Furthermore, it enables rapid creative exploration by designers at any time and in any context that inspires them. By supporting data collection, iterative design, and evaluation of envisioned gestural interactions within the context of their end use, Mogeste reduces the gap between development and usage environments. In addition to the design and implementation of Mogeste, we also present findings from a user study with 7 novice designers.
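A recognizer trainable "within minutes" from a handful of designer-recorded examples is typically template-based. A common choice for inertial traces, sketched here as an assumption rather than Mogeste's documented internals, is nearest-template matching under dynamic time warping (DTW):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sensor traces,
    tolerant to differences in speed and duration."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(trace, templates):
    """Label a new trace with the gesture whose recorded examples
    are closest under DTW; 'training' is just storing the examples."""
    return min(templates, key=lambda g: min(dtw(trace, t) for t in templates[g]))
```

Since enrollment is just recording a few examples per gesture, this style of recognizer fits the in-situ, no-expertise workflow the abstract describes.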

 

Real-time Physics-based Motion Capture with Sparse Sensors

Sheldon Andrews, Ivan Huerta, Taku Komura, Leonid Sigal, Kenny Mitchell

December 2016 CVMP 2016: Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016)

Abstract: We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered, since dense coverage of the body is not required. An inverse dynamics solver and a physics-based body model ensure physical plausibility by computing joint torques and contact forces. A prior model is also used to give an improved estimate of the motion of internal joints. The behaviour of our tracker is evaluated using several black-box motion priors. We show that our system can track and simulate a wide range of dynamic movements, including bipedal gait, ballistic movements such as jumping, and interaction with the environment. The reconstructed motion has low error and appears natural. Since both the internal forces and contacts are obtained with high credibility, the system is also useful for human movement analysis.
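Inverse dynamics, the core ingredient named in the abstract, recovers the joint torques that must have produced an observed motion. The full-body case involves a multi-link model and contact forces; the idea is easiest to see for a single rigid link (a point mass m at distance l from the joint), which is all this sketch covers:

```python
import numpy as np

def inverse_dynamics(theta, dt, m=1.0, l=1.0, g=9.81):
    """Joint torque recovered from a joint-angle trajectory of a single
    link: tau = m*l^2 * theta_dd + m*g*l * sin(theta), with the angular
    acceleration theta_dd estimated by finite differences.
    m and l default to illustrative unit values."""
    theta = np.asarray(theta, dtype=float)
    theta_dd = np.gradient(np.gradient(theta, dt), dt)
    return m * l ** 2 * theta_dd + m * g * l * np.sin(theta)
```

For example, holding the link horizontal (theta = pi/2, no acceleration) requires a constant torque of m*g*l to resist gravity, which is what makes the recovered torques physically interpretable for movement analysis.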