GMS: Grid-based Motion Statistics for Fast, Ultra-robust Feature Correspondence

Abstract

Incorporating smoothness constraints into feature matching is known to enable ultra-robust matching. However, such formulations are both complex and slow, making them unsuitable for video applications. This paper proposes GMS (Grid-based Motion Statistics), a simple means of encapsulating motion smoothness as the statistical likelihood of a certain number of matches in a region. GMS enables the translation of high match numbers into high match quality, providing a real-time, ultra-robust correspondence system. Evaluations on videos with low texture, blur, and wide baselines show that GMS consistently outperforms other real-time matchers and can achieve parity with more sophisticated, much slower techniques.

 

Publication

  • GMS: Grid-based Motion Statistics for Fast, Ultra-robust Feature Correspondence. JiaWang Bian, Wen-Yan Lin, Yasuyuki Matsushita, Sai-Kit Yeung, Tan Dat Nguyen, Ming-Ming Cheng. IEEE CVPR, 2017. [Project Page] [pdf] [C++] [Video Demo]

Core idea: Motion Statistics

Observation: True correspondences typically exhibit motion consistency with their neighbors, while false correspondences do not. Hence, we predict that a true match has many more supporting matches in its surrounding area than a false match.
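This observation can be sketched as a simple score: count how many other matches move consistently with a given match, i.e. land nearby in both images. The function name, point-pair format, and radius below are illustrative, not from the paper's implementation.

```python
import math

def support_score(matches, index, radius=50.0):
    """Count the "supporting" matches of matches[index]: other matches
    whose keypoints lie within `radius` pixels of it in BOTH images.

    `matches` is a list of ((x1, y1), (x2, y2)) point pairs; all names
    and the radius are illustrative assumptions.
    """
    (ax1, ay1), (ax2, ay2) = matches[index]
    score = 0
    for j, ((bx1, by1), (bx2, by2)) in enumerate(matches):
        if j == index:
            continue
        near_in_image1 = math.hypot(bx1 - ax1, by1 - ay1) <= radius
        near_in_image2 = math.hypot(bx2 - ax2, by2 - ay2) <= radius
        if near_in_image1 and near_in_image2:
            score += 1
    return score
```

A match belonging to a coherently moving cluster collects a high score, while a spurious match whose second endpoint lands far from its neighbors' collects almost none.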

 

Partitionability: Assigning this motion statistic as a score to each feature correspondence, we find that the score distributions of true and false correspondences are very different, which makes the two populations easy to separate.
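Because the two distributions are well separated, a single threshold on the score suffices to partition the matches. The sketch below uses a threshold of the form tau = alpha * sqrt(n); taking n as the mean score and alpha = 6 is an assumption for illustration, not the paper's exact rule.

```python
import math

def classify_matches(scores, alpha=6.0):
    """Partition matches into true (True) / false (False) by
    thresholding their support scores at tau = alpha * sqrt(n).

    Here n is taken as the mean score; both this choice and the
    default alpha are illustrative assumptions.
    """
    n = sum(scores) / len(scores)
    tau = alpha * math.sqrt(n)
    return [s > tau for s in scores]
```

With a bimodal score distribution (e.g. a cluster of high scores from true matches and a cluster near zero from false ones), the threshold lands in the gap between the modes.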

 

 

Grid Framework

Grid Pattern: For fast computation, we assign all matches in the same cell a shared motion statistic, rather than computing the statistic for each correspondence individually. With the multi-cell generalization, our method also captures global information to some extent.
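The cell-sharing idea can be sketched as follows: bucket each match by the grid cell of its keypoint in each image, then count matches per cell pair, so one count serves every match in that pair. The 20x20 grid matches the default in the released code; the function names and point format are illustrative.

```python
def cell_of(point, image_size, grid=(20, 20)):
    """Map a pixel coordinate to a flat grid-cell index."""
    x, y = point
    w, h = image_size
    gx, gy = grid
    cx = min(int(x * gx / w), gx - 1)
    cy = min(int(y * gy / h), gy - 1)
    return cy * gx + cx

def cell_pair_counts(matches, size1, size2, grid=(20, 20)):
    """Count matches per (cell-in-image-1, cell-in-image-2) pair.

    Every match falling in the same cell pair then shares one motion
    statistic, so the score is computed per cell pair, not per match.
    """
    counts = {}
    for p1, p2 in matches:
        key = (cell_of(p1, size1, grid), cell_of(p2, size2, grid))
        counts[key] = counts.get(key, 0) + 1
    return counts
```

This reduces the per-match neighborhood search to a handful of dictionary increments, which is what makes the method real-time.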

Rotation & Scale: The grid pattern can be rotated in 8 directions, and a large range of scale change can be handled by varying the grid size.
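The "8 directions" can be sketched as cyclic shifts of the ring of 8 cells surrounding a grid cell: rotating the correspondence between a cell's neighborhood in one image and the other amounts to shifting this ring, and one can try all 8 shifts and keep the best. The constant and function names below are illustrative.

```python
# The 8 cells surrounding a cell, listed clockwise as a ring of
# (row offset, column offset) pairs.
RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
        (1, 1), (1, 0), (1, -1), (0, -1)]

def rotated_neighborhoods():
    """Return the 8 rotated orderings of the neighbor ring: a sketch of
    how the grid pattern is "rotated in 8 directions"."""
    return [RING[k:] + RING[:k] for k in range(8)]
```

Each ordering pairs a cell's neighbors with a rotated set of neighbors in the other image, so in-plane rotation between the views is absorbed by whichever of the 8 orderings scores best.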

 

Sample Results

Wide Baseline: Qualitative wide-baseline matching results on Strecha Dense MVS datasets.

Repetitive Structure: Experimental results of our method on the ZuBuD dataset.

Quantitative Evaluation

Matching Performance: Evaluation on the standard VGG and Strecha MVS datasets and on the large-scale real-world TUM SLAM sequences.

Pose Estimation

Pose Estimation: Evaluation on all datasets with known camera poses; both the success ratio and the computation time are reported.

Video Matching Demo

 

 

