Abstract
Matching corresponding local patches between images is a fundamental building block in many computer-vision algorithms, reducing the high-dimensional challenge of recovering geometric relations between images to a series of relatively simple and independent tasks. This approach is geometrically very flexible and has clear computational advantages over more convoluted global solutions. But it also has two major practical shortcomings: 1) Sparsity: the need to rely on high-quality repeatable features for matching drives current local methods to discard low-textured image locations and leave them unanalysed; 2) Reliability: the limited spatial context in which those methods work often does not contain enough information for achieving reliable matches. In this work, we target a major blind spot of local feature matching: ill-textured locations. We observe that while classic methods avoided using poorly localized features (e.g. edges) as matching candidates, due to their low reliability, these features contain highly valuable information for image registration. We show how, given the appropriate geometric context, reliable matches can be produced from these features, contributing to a better coverage of the scene. We present a statistically attractive framework for encoding the uncertainty that stems from using weakly localized matches into a coupled geometric estimation and match extraction process. We examine the practical application of the proposed framework to the problems of guided matching and affine region expansion and show significant improvement over preceding methods.
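To make the idea of folding the uncertainty of weakly localized matches into the geometric estimation more concrete, the sketch below is a minimal illustration, not code from the paper: all function names, parameters, and the choice of an affine model are assumptions made here. It shows one common way such uncertainty can be encoded: each edge-point correspondence carries an anisotropic covariance, tight across the edge and loose along it (the aperture problem), and the transformation is estimated by Mahalanobis-weighted least squares so that each match constrains the solution only in its well-localized direction.

```python
import numpy as np

def estimate_affine_weighted(src, dst, covs):
    """Weighted least-squares affine fit: dst ~ A @ src + t.

    src, dst : (N, 2) arrays of corresponding points.
    covs     : (N, 2, 2) per-match localization covariance of dst
               (e.g. elongated along the edge tangent for edge points).
    Returns (A, t) with A a 2x2 matrix and t a 2-vector.
    """
    AtWA = np.zeros((6, 6))
    AtWb = np.zeros(6)
    for (x, y), d, S in zip(src, dst, covs):
        # Jacobian of the affine map w.r.t. the parameters (a11, a12, a21, a22, tx, ty).
        J = np.array([[x, y, 0, 0, 1, 0],
                      [0, 0, x, y, 0, 1]])
        W = np.linalg.inv(S)  # Mahalanobis weight: trust only the well-localized direction.
        AtWA += J.T @ W @ J
        AtWb += J.T @ W @ d
    p = np.linalg.solve(AtWA, AtWb)
    return np.array([[p[0], p[1]], [p[2], p[3]]]), p[4:]

def edge_covariance(normal, sigma_perp=0.5, sigma_along=20.0):
    """Anisotropic covariance for an edge match: small variance across the
    edge (along its normal), large variance along the edge tangent."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    tang = np.array([-n[1], n[0]])
    return sigma_perp**2 * np.outer(n, n) + sigma_along**2 * np.outer(tang, tang)
```

In the coupled estimation-and-extraction process described above, a weighted fit of this kind would alternate with extracting new matches under the current geometric hypothesis; the snippet covers only the estimation step.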
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-23 |
| Number of pages | 23 |
| Journal | Image Processing On Line |
| Volume | 10 |
| DOIs | |
| State | Published - 1 Jan 2020 |
Keywords
- Affine transformation
- Dense matching
- Local matching
- Perspective transformation
- Registration
ASJC Scopus subject areas
- Software
- Signal Processing