
Human Action Recognition based on Multi-Feature Fusion
This paper proposes a novel action recognition method based on multi-feature fusion, in which spatial-temporal features and depth features are merged within a random forest framework. Human body joint coordinates obtained from depth image sequences are converted into displacement features, a new depth feature that describes both the relative motion between pairs of joints and the three-dimensional structure of the human body. From the RGB image sequences, we densely sample trajectories and apply a foreground detection approach to reduce the influence of complex backgrounds; spatial-temporal features are then constructed with a Bag-of-Words model over the foreground trajectories. Finally, the random forest framework fuses the spatial-temporal and depth features to recognize human actions in RGB-D image sequences. Experimental results on the MSR Daily Activity 3D dataset demonstrate the effectiveness of the proposed method.
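As a rough illustration of the depth feature described above, the sketch below computes pairwise joint displacement vectors from a single frame of skeleton coordinates. The exact feature definition in the paper is not given here, so this formulation (all pairwise differences, flattened into one vector) is an assumption; the 20-joint layout follows the Kinect skeleton used by the MSR Daily Activity 3D dataset.

```python
import numpy as np

def displacement_features(joints):
    """Pairwise joint displacement features for one frame.

    joints: (J, 3) array of 3-D joint coordinates from a depth sensor.
    Returns a flat vector of the J*(J-1)/2 relative displacement
    vectors between every pair of joints. NOTE: this is a hedged
    stand-in for the paper's depth feature, not its exact definition.
    """
    J = joints.shape[0]
    feats = [joints[i] - joints[j]
             for i in range(J) for j in range(i + 1, J)]
    return np.concatenate(feats)

# Example: 20 joints, as in the Kinect skeleton.
frame = np.random.rand(20, 3)
vec = displacement_features(frame)
print(vec.shape)  # (570,) = 20*19/2 pairs x 3 coordinates
```

Stacking such per-frame vectors (or their differences across frames) yields a sequence-level descriptor that could then be fed, alongside the Bag-of-Words trajectory features, into a random forest classifier.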