Abstract
Significant progress in image segmentation has been made by viewing the problem in the framework of graph partitioning. In particular, spectral clustering methods such as "normalized cuts" (ncuts) can efficiently calculate good segmentations using eigenvector calculations. However, spectral methods, when applied to images with local connectivity, often oversegment homogeneous regions. More importantly, they lack a straightforward probabilistic interpretation, which makes it difficult to automatically set parameters using training data. In this paper we revisit the typical cut criterion proposed in [1, 5]. We show that computing the typical cut is equivalent to performing inference in an undirected graphical model. This equivalence allows us to use the powerful machinery of graphical models for learning and inferring image segmentations. For inferring segmentations we show that the generalized belief propagation (GBP) algorithm can give excellent results with a runtime that is usually faster than the ncut eigensolver. For learning segmentations we derive a maximum likelihood algorithm to learn affinity matrices from labelled datasets. We illustrate both learning and inference on challenging real and synthetic images.
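The following is a minimal sketch, not the authors' code, of the "ncuts" baseline the abstract refers to: build a locally connected affinity matrix over pixels and bipartition the image using the second generalized eigenvector of (D - W) y = λ D y. The image, `sigma_intensity`, and `radius` parameters are illustrative assumptions.

```python
# Illustrative sketch of spectral segmentation via normalized cuts.
# Assumptions: small grayscale image, Gaussian intensity affinity,
# local connectivity within a fixed pixel radius.
import numpy as np
from scipy.linalg import eigh


def ncut_bipartition(image, sigma_intensity=0.1, radius=2):
    """Two-way normalized-cut segmentation of a small grayscale image."""
    h, w = image.shape
    n = h * w
    coords = np.array([(i, j) for i in range(h) for j in range(w)])
    intensities = image.reshape(-1)

    # Affinity: Gaussian on intensity difference, restricted to nearby pixels
    # (the "local connectivity" mentioned in the abstract).
    W = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            if np.abs(coords[a] - coords[b]).max() <= radius:
                diff = intensities[a] - intensities[b]
                W[a, b] = W[b, a] = np.exp(-(diff ** 2) / sigma_intensity ** 2)

    D = np.diag(W.sum(axis=1))
    # Generalized eigenproblem (D - W) y = lambda * D y; the eigenvector for
    # the second-smallest eigenvalue gives the relaxed ncut bipartition.
    eigvals, eigvecs = eigh(D - W, D)
    y = eigvecs[:, 1]
    return (y > np.median(y)).reshape(h, w)


if __name__ == "__main__":
    # Synthetic image: a bright square on a dark background.
    img = np.zeros((10, 10))
    img[3:7, 3:7] = 1.0
    print(ncut_bipartition(img).astype(int))
```

For images of realistic size the dense eigensolver above would be replaced by a sparse one; the paper's point is that GBP inference in the equivalent undirected graphical model can be faster than this eigensolver step.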
| Original language | English |
| --- | --- |
| Pages (from-to) | 1243-1250 |
| Number of pages | 8 |
| Journal | Proceedings of the IEEE International Conference on Computer Vision |
| Volume | 2 |
| DOIs | |
| State | Published - 1 Jan 2003 |
| Externally published | Yes |
| Event | Ninth IEEE International Conference on Computer Vision - Nice, France. Duration: 13 Oct 2003 → 16 Oct 2003 |
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition