
K. Rapantzikos, Y. Avrithis, S. Kollias
Dense saliency-based spatiotemporal feature points for action recognition
Conference on Computer Vision and Pattern Recognition (CVPR), 2009
ABSTRACT
Several spatiotemporal feature point detectors have recently been used in video analysis for action recognition. Feature points are detected using a number of measures, such as saliency, cornerness, periodicity, and motion activity. Each of these measures is usually intensity-based and provides a different trade-off between density and informativeness. In this paper, we use saliency for feature point detection in videos and incorporate color and motion in addition to intensity. Our method uses a multi-scale volumetric representation of the video and involves spatiotemporal operations at the voxel level. Saliency is computed by a global minimization process subject to purely volumetric constraints, each related to an informative visual aspect, namely spatial proximity, scale and feature similarity (intensity, color, motion). Feature points are selected at the extrema of the saliency response and strike a good balance between density and informativeness. We provide an intuitive view of the detected points and visual comparisons against state-of-the-art space-time detectors. Our detector outperforms them on the KTH dataset using nearest-neighbor classifiers and ranks among the top using different classification frameworks. Statistics and comparisons are also reported on the more difficult Hollywood Human Actions (HOHA) dataset, where our approach improves on currently published results.
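The final detection step described in the abstract (selecting feature points at the extrema of the saliency response) can be illustrated with a minimal sketch. It assumes the saliency volume has already been produced by the constrained minimization over intensity, color and motion; the function name saliency_extrema, the neighborhood and rel_threshold parameters, and the non-maximum-suppression strategy are illustrative choices, not the authors' implementation.

    # Illustrative sketch (not the paper's code): pick spatiotemporal feature
    # points as local extrema of a precomputed saliency volume.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def saliency_extrema(saliency, neighborhood=5, rel_threshold=0.5):
        """Return (t, y, x) coordinates of local saliency maxima.

        saliency      : 3-D array (frames x height x width) of saliency values,
                        assumed to come from the saliency computation step.
        neighborhood  : cubic neighborhood size for non-maximum suppression
                        (hypothetical parameter).
        rel_threshold : keep only maxima above this fraction of the global
                        maximum (hypothetical parameter).
        """
        # A voxel is a local maximum if it equals the maximum of its neighborhood.
        local_max = maximum_filter(saliency, size=neighborhood) == saliency
        # Discard weak responses relative to the strongest one.
        strong = saliency >= rel_threshold * saliency.max()
        return np.argwhere(local_max & strong)

    if __name__ == "__main__":
        # Random volume standing in for a real saliency response:
        S = np.random.rand(30, 120, 160)   # 30 frames of 120x160 saliency maps
        points = saliency_extrema(S)
        print(points.shape)                # (num_points, 3) -> (t, y, x) per point

In practice the density of the detected points is controlled by the neighborhood size and the threshold; the abstract's claim is that saliency-driven responses yield a denser yet still informative point set than intensity-only detectors.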
23 June 2009
K. Rapantzikos, Y. Avrithis, S. Kollias, "Dense saliency-based spatiotemporal feature points for action recognition", Conference on Computer Vision and Pattern Recognition (CVPR), 2009
