Optimized Continuous Hand Gesture Segmentation and Recognition based on Spatial-Temporal & Trajectory Information
Human-Computer Interaction (HCI) provides a more intuitive way of interacting with computers and has long been an important and popular research field. HCI is emerging across industries such as multimedia and gaming, where it is largely driven by hand gestures, and ubiquitous computing makes hand gesture input even more attractive. Variations in gesture duration and unknown start and end points are the key issues in recognizing continuous gestures, and they lower the performance of existing recognition algorithms. In this work, we first present a study of the issues and limitations of hand gesture recognition. We then propose a framework for ‘Continuous Hand Gesture Segmentation and Recognition (CHGSR) based on Spatial-Temporal and Trajectory Information (STTI)’. The framework employs deep learning networks and performs well in recognizing continuous hand gestures; we implemented the networks using the TensorFlow and Keras deep learning libraries. The framework is evaluated experimentally on a video dataset of hand gestures performed by three different persons in different sessions, recorded with a Microsoft Kinect sensor. In the presence of continuous gestures and varying gesticulation behaviors, the experimental results show that the proposed framework achieves high recognition accuracy, with an F-score of up to 0.98.
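To make the described setup concrete, the following is a minimal sketch of a spatio-temporal gesture classifier in Keras, of the kind the abstract alludes to. It is not the paper's actual architecture: the layer sizes, sequence length, joint count, and class count are all illustrative assumptions, and the input is assumed to be a trajectory of per-frame hand-joint coordinates such as a Kinect-style tracker might provide.

```python
# Hypothetical sketch of a spatio-temporal gesture classifier in Keras.
# All hyperparameters below are illustrative assumptions, not values
# taken from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 16    # frames per gesture clip (assumed)
NUM_JOINTS = 21    # tracked hand joints per frame (assumed)
NUM_CLASSES = 10   # number of gesture classes (assumed)

def build_model():
    # Each sample is a trajectory: a sequence of flattened 3-D joint coordinates.
    inputs = layers.Input(shape=(NUM_FRAMES, NUM_JOINTS * 3))
    # 1-D convolution over time captures short-range spatial-temporal patterns.
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    # An LSTM aggregates the trajectory information across the whole clip.
    x = layers.LSTM(64)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# A dummy batch of 4 random gesture clips, just to check input/output shapes.
dummy = np.random.rand(4, NUM_FRAMES, NUM_JOINTS * 3).astype("float32")
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (4, NUM_CLASSES)
```

In a continuous-gesture setting, a segmentation stage would first cut the incoming stream into candidate clips (handling the unknown start and end points noted above) before each clip is passed to such a classifier.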