Scalable active matching
Handa A., Chli M., Strasdat H., Davison A.J.
In matching tasks in computer vision, and particularly in real-time tracking from video, strong priors on absolute and relative correspondence locations are generally available thanks to motion and scene models. While these priors are often used only post-hoc to resolve matching consensus in algorithms such as RANSAC, it was recently shown that fully integrating them in an 'Active Matching' (AM) approach permits efficient guided image processing, with rigorous decisions driven by Information Theory. AM's weakness was that the overhead of the intermediate Bayesian updates it required meant poor scaling to cases where many correspondences were sought. In this paper we show that relaxing AM's rigid probabilistic model, in which every feature measurement directly affects the prediction of every other, permits dramatically more scalable operation without affecting accuracy. We take a general graph-theoretic view of the structure of prior information in matching in order to sparsify and approximate the interconnections. We demonstrate the performance of two variants, CLAM and SubAM, in the context of sequential camera tracking. These algorithms are highly competitive with other techniques at matching hundreds of features per frame while retaining great intuitive appeal and the full probabilistic capability to digest prior information. ©2010 IEEE.
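To make the ideas in the abstract concrete, the sketch below illustrates the general flavour of information-guided matching over a jointly Gaussian prior, and of the kind of relaxation described above in which each measurement update is only propagated to features linked in a neighbour graph rather than to all features. This is not the authors' implementation: the 1-D measurement model, the greedy most-uncertain-first selection rule, the chain-structured neighbour graph, and all function names are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed toy model, not the paper's algorithm):
# features have a joint Gaussian prior on their image locations; we repeatedly
# measure the most uncertain feature and condition the rest on the result.
# Passing a `neighbours` graph restricts each update to linked features,
# mimicking the sparsified interconnection structure discussed in the paper.
import numpy as np

def condition_on_measurement(mu, cov, idx, z, meas_var):
    """Condition the joint Gaussian (mu, cov) on a noisy scalar measurement z of component idx."""
    H = np.zeros((1, len(mu)))
    H[0, idx] = 1.0
    S = H @ cov @ H.T + meas_var          # innovation covariance
    K = cov @ H.T / S                     # Kalman gain
    mu_new = mu + (K * (z - mu[idx])).ravel()
    cov_new = cov - K @ H @ cov
    return mu_new, cov_new

def active_matching(mu, cov, measure, meas_var=1.0, neighbours=None):
    """Greedy information-guided matching over a Gaussian prior.

    measure(idx, prediction, sigma) returns the matched location of feature idx.
    If `neighbours` (dict: idx -> set of indices) is given, each update is only
    propagated to that feature's neighbours -- a sparsified approximation.
    """
    n = len(mu)
    remaining = set(range(n))
    matches = {}
    while remaining:
        # For a scalar measurement with fixed noise, the largest expected
        # entropy reduction comes from the most uncertain prediction.
        idx = max(remaining, key=lambda i: cov[i, i])
        z = measure(idx, mu[idx], np.sqrt(cov[idx, idx]))
        matches[idx] = z
        if neighbours is None:
            # Full AM-style update: every prediction is refined.
            mu, cov = condition_on_measurement(mu, cov, idx, z, meas_var)
        else:
            # Sparsified update: condition only the sub-block of linked features.
            block = sorted({idx} | (neighbours.get(idx, set()) & remaining))
            sub_mu, sub_cov = condition_on_measurement(
                mu[block], cov[np.ix_(block, block)], block.index(idx), z, meas_var)
            mu[block] = sub_mu
            cov[np.ix_(block, block)] = sub_cov
        remaining.discard(idx)
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    A = rng.normal(size=(n, n))
    cov0 = A @ A.T + np.eye(n)            # correlated joint prior
    mu0 = rng.normal(scale=10.0, size=n)
    truth = mu0 + rng.multivariate_normal(np.zeros(n), cov0)

    def measure(idx, pred, sigma):
        # Simulated matcher: true location corrupted by measurement noise.
        return truth[idx] + rng.normal(scale=1.0)

    # Chain-structured neighbour graph: feature i only informs i-1 and i+1.
    chain = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
    print(active_matching(mu0.copy(), cov0.copy(), measure, neighbours=chain))
```

In this toy setting, the dense update costs O(n^2) per measurement, while the neighbour-restricted update touches only a small sub-block, which is the scaling trade-off the abstract alludes to; the paper's CLAM and SubAM variants realise this idea with their own graph structures.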