PixelTrack: a fast adaptive algorithm for tracking non-rigid objects
Abstract

In this paper, we present a novel algorithm for fast tracking of generic objects in videos. The algorithm uses two components: a detector that makes use of the generalised Hough transform with pixel-based descriptors, and a probabilistic segmentation method based on global models for foreground and background. These components are used jointly for tracking and adapt each other in a co-training manner. Through effective model adaptation and segmentation, the algorithm is able to track objects that undergo rigid and non-rigid deformations and considerable shape and appearance variations. The proposed tracking method has been thoroughly evaluated on challenging standard videos and outperforms state-of-the-art tracking methods designed for the same task. Finally, the proposed models allow for an extremely efficient implementation, making tracking very fast.

Authors

Stefan Duffner, LIRIS, INSA de Lyon, France
Christophe Garcia, LIRIS, INSA de Lyon, France

Paper

S. Duffner and C. Garcia, "PixelTrack: a fast adaptive algorithm for tracking non-rigid objects", in Proceedings of ICCV, 2013. (djvu, bibtex)

Datasets

We used two evaluation datasets:
Annotation

The bounding box annotations in XML format can be downloaded here.

Code

C++ code using the OpenCV (2.4) library, tested under Linux: pixeltrack_v0.3.tgz. This code is published under the GPLv3 license and is intended for research purposes only. If you use it, please cite the paper above.

Results

Here are some videos showing qualitative results, comparing our method to HoughTrack (Godec et al., ICCV 2011) and TLD (Kalal et al., CVPR 2010). For quantitative results, please see the paper.
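Algorithm sketch

For a quick intuition of how the two components described in the abstract fit together, here is a minimal, simplified C++/OpenCV sketch. It is not the released implementation: the pixel descriptor is reduced to a quantised hue/saturation index, the Hough model simply stores displacement vectors to the object centre per descriptor, the segmentation model is a pair of global colour histograms, and the co-training update between the two components is omitted. All names, bin counts, the video path and the fixed initial bounding box are illustrative assumptions; please refer to the paper and to pixeltrack_v0.3.tgz for the actual method.

// Simplified illustration only -- not the released PixelTrack implementation.
#include <opencv2/opencv.hpp>
#include <vector>

static const int BINS = 16;  // bins per channel; an illustrative choice

// Quantise a pixel's hue/saturation into a single descriptor index.
static int pixelDescriptor(const cv::Vec3b& hsv) {
    int h = hsv[0] * BINS / 180;  // OpenCV hue range is [0, 180)
    int s = hsv[1] * BINS / 256;
    return h * BINS + s;
}

// Detector: generalised Hough transform over pixel descriptors.  For each
// descriptor we store the displacements to the object centre seen during
// training; at detection time every pixel casts these votes and the peak
// of the accumulated vote map gives the new object position.
struct HoughModel {
    std::vector<std::vector<cv::Point> > votes;
    HoughModel() : votes(BINS * BINS) {}

    void train(const cv::Mat& hsv, const cv::Rect& box) {
        cv::Point centre(box.x + box.width / 2, box.y + box.height / 2);
        for (int y = box.y; y < box.y + box.height; ++y)
            for (int x = box.x; x < box.x + box.width; ++x)
                votes[pixelDescriptor(hsv.at<cv::Vec3b>(y, x))]
                    .push_back(centre - cv::Point(x, y));
    }

    cv::Point detect(const cv::Mat& hsv) const {
        cv::Mat acc = cv::Mat::zeros(hsv.size(), CV_32F);
        for (int y = 0; y < hsv.rows; ++y)
            for (int x = 0; x < hsv.cols; ++x) {
                const std::vector<cv::Point>& d =
                    votes[pixelDescriptor(hsv.at<cv::Vec3b>(y, x))];
                for (size_t i = 0; i < d.size(); ++i) {
                    cv::Point c = cv::Point(x, y) + d[i];
                    if (cv::Rect(0, 0, acc.cols, acc.rows).contains(c))
                        acc.at<float>(c) += 1.0f;
                }
            }
        cv::GaussianBlur(acc, acc, cv::Size(9, 9), 0);  // smooth the vote map
        cv::Point peak;
        cv::minMaxLoc(acc, 0, 0, 0, &peak);
        return peak;
    }
};

// Segmentation: global foreground/background colour histograms; the
// per-pixel posterior P(fg | colour) = Hf / (Hf + Hb) is a soft mask.
struct SegmentationModel {
    std::vector<float> fg, bg;
    SegmentationModel() : fg(BINS * BINS, 1.f), bg(BINS * BINS, 1.f) {}

    void train(const cv::Mat& hsv, const cv::Rect& box) {
        for (int y = 0; y < hsv.rows; ++y)
            for (int x = 0; x < hsv.cols; ++x) {
                int d = pixelDescriptor(hsv.at<cv::Vec3b>(y, x));
                if (box.contains(cv::Point(x, y))) fg[d] += 1.f;
                else                               bg[d] += 1.f;
            }
    }

    cv::Mat posterior(const cv::Mat& hsv) const {
        cv::Mat p(hsv.size(), CV_32F);
        for (int y = 0; y < hsv.rows; ++y)
            for (int x = 0; x < hsv.cols; ++x) {
                int d = pixelDescriptor(hsv.at<cv::Vec3b>(y, x));
                p.at<float>(y, x) = fg[d] / (fg[d] + bg[d]);
            }
        return p;
    }
};

int main(int argc, char** argv) {
    // Example usage: initialise from a fixed bounding box in the first
    // frame (the box and the video path are placeholders).
    cv::VideoCapture cap(argc > 1 ? argv[1] : "video.avi");
    cv::Mat frame, hsv;
    if (!cap.read(frame)) return 1;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::Rect init(100, 100, 80, 120);
    HoughModel hough;       hough.train(hsv, init);
    SegmentationModel segm; segm.train(hsv, init);
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::Point centre = hough.detect(hsv);   // detector output
        cv::Mat fgProb = segm.posterior(hsv);   // soft segmentation
        cv::rectangle(frame, centre - cv::Point(init.width / 2, init.height / 2),
                      centre + cv::Point(init.width / 2, init.height / 2),
                      cv::Scalar(0, 255, 0), 2);
        cv::imshow("tracking sketch", frame);
        cv::imshow("P(foreground)", fgProb);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}

In the full method, the segmentation and the detector adapt each other as described in the abstract (co-training); the sketch above keeps both models fixed after the first frame to stay short.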