Multi-scale particle filtering for multiple object tracking in video sequences
The tracking of moving objects in video sequences, also known as visual tracking, involves estimating the positions, and possibly velocities, of these objects. Visual tracking is an important research problem because of its many industrial, biomedical, and security applications, and significant progress has been made on it over the last few decades. However, tracking objects accurately in video sequences with challenging conditions and unexpected events, e.g., background motion, object shadows, objects of differing sizes and contrasts, sudden changes in illumination, partial object camouflage, and low signal-to-noise ratio, remains an open problem. To address such difficulties, we adopted a multi-scale Bayesian approach to develop robust multiple-object trackers. We introduce a novel concept in the field of visual tracking: adaptively fusing tracking results obtained from a fixed or variable number of wavelet subbands of a given video frame, corresponding to different scene directions and object scales. Previous approaches to visual tracking used the full-resolution video frame or a smoothed version of it; these approaches have limitations that our multi-scale approach, described in detail in this thesis, overcomes. This thesis describes the design and implementation of four novel multi-scale visual trackers based on particle filtering and the adaptive fusion of subband frames generated using wavelets. We evaluated the performance of our novel trackers on video sequences from the CAVIAR and VISOR databases. Compared to a standard full-resolution particle filter-based tracker and a tracker based on a single wavelet subband, (LL)², our multi-scale trackers deliver significantly more accurate tracking, in addition to a reduced average frame processing time.
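The combination described above, per-subband particle filtering followed by adaptive fusion of the resulting estimates, can be illustrated with a minimal sketch. This is not the thesis implementation: it tracks a 1-D position rather than an image region, stands in for wavelet subbands with two observation channels of different noise levels, and weights each channel's estimate by its mean observation likelihood (the function names `pf_step` and `fuse` and all parameter values are illustrative assumptions).

```python
import math
import random

def pf_step(particles, obs, motion_std=1.0, obs_std=2.0):
    """One bootstrap particle-filter step: predict, weight, resample.

    Returns (resampled_particles, state_estimate, mean_likelihood).
    The mean likelihood serves as a confidence score for fusion.
    """
    # Predict: propagate each particle with a random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    lik = [math.exp(-0.5 * ((obs - p) / obs_std) ** 2) for p in particles]
    total = sum(lik) or 1e-300
    weights = [l / total for l in lik]
    estimate = sum(p * w for p, w in zip(particles, weights))
    # Multinomial resampling to combat weight degeneracy.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, estimate, total / len(lik)

def fuse(estimates, confidences):
    """Adaptively fuse per-channel estimates, weighted by confidence."""
    total = sum(confidences) or 1e-300
    return sum(e * c for e, c in zip(estimates, confidences)) / total

if __name__ == "__main__":
    random.seed(0)
    true_pos = 0.0
    # One particle set per channel (stand-ins for wavelet subbands).
    bands = [[random.gauss(0.0, 3.0) for _ in range(500)] for _ in range(2)]
    for t in range(30):
        true_pos += 0.5  # object moves at constant velocity
        ests, confs = [], []
        for b, noise in zip(range(2), (1.0, 4.0)):  # channel 1 is noisier
            obs = true_pos + random.gauss(0.0, noise)
            bands[b], est, conf = pf_step(bands[b], obs)
            ests.append(est)
            confs.append(conf)
        fused = fuse(ests, confs)
    print(f"true={true_pos:.2f} fused={fused:.2f}")
```

Because the fusion weight is driven by each channel's observation likelihood, the noisier channel contributes less to the fused estimate, which is the intuition behind adaptively weighting subbands by how well they currently explain the object's appearance.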