Abstract
We present a novel pairwise clustering method. Given a proximity matrix of pairwise relations (i.e., pairwise similarity or dissimilarity estimates) between data points, our algorithm extracts the two most prominent clusters in the data set. The algorithm, which is completely nonparametric, iteratively applies a two-step transformation to the proximity matrix. The first step represents each point by its relations to all other data points, and the second step re-estimates the pairwise distances using a statistically motivated proximity measure on these representations. Using this transformation, the algorithm iteratively partitions the data points until it converges to two clusters. Although the algorithm is simple and intuitive, it induces complex dynamics on the proximity matrices. Based on this bipartition procedure we devise a hierarchical clustering algorithm, which applies the basic bipartition algorithm in a straightforward divisive manner. The hierarchical algorithm addresses the model validation problem with a general cross-validation approach that may be combined with various hierarchical clustering methods. We further present an experimental study of the algorithm, examining its properties and performance on synthetic and 'standard' data sets. The experiments demonstrate the robustness of the algorithm and indicate that it produces a good clustering partition even when the data is noisy or corrupted.
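As a rough illustration of the two-step transformation described above, here is a minimal Python sketch. It assumes that the "statistically motivated proximity measure" is the Jensen-Shannon divergence between row-normalized relation profiles (as the keywords suggest); the exponential conversion of divergences back to similarities, the convergence test, the threshold-based readout of the two clusters, and the fixed-depth divisive wrapper (in place of the paper's cross-validation-based model validation) are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np


def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two probability vectors."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def iterative_bipartition(S, max_iter=50, tol=1e-6):
    """Iteratively transform a similarity matrix and read off a bipartition.

    Each iteration: (1) represent every point by its row of relations,
    normalized into a probability distribution; (2) re-estimate every
    pairwise relation from the JS divergence between the two profiles.
    """
    S = np.asarray(S, dtype=float)
    n = S.shape[0]
    for _ in range(max_iter):
        P = S / S.sum(axis=1, keepdims=True)          # step 1: row profiles
        D = np.zeros_like(S)
        for i in range(n):                            # step 2: re-estimate
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = js_divergence(P[i], P[j])
        S_new = np.exp(-D / (D.mean() + 1e-12))       # divergence -> similarity
        if np.max(np.abs(S_new - S)) < tol:           # matrix has stabilized
            S = S_new
            break
        S = S_new
    # Read off two clusters: points whose similarity to point 0 is above the
    # midpoint are grouped with it; the rest form the second cluster.
    thresh = 0.5 * (S[0].min() + S[0].max())
    labels = (S[0] < thresh).astype(int)
    return labels, S


def divisive_clustering(S, idx=None, depth=3):
    """Recursive divisive use of the bipartition (fixed depth here, instead
    of the paper's cross-validation-based model validation)."""
    S = np.asarray(S, dtype=float)
    if idx is None:
        idx = np.arange(S.shape[0])
    if depth == 0 or len(idx) < 4:
        return [idx]
    labels, _ = iterative_bipartition(S[np.ix_(idx, idx)])
    left, right = idx[labels == 0], idx[labels == 1]
    if len(left) == 0 or len(right) == 0:             # no meaningful split
        return [idx]
    return (divisive_clustering(S, left, depth - 1)
            + divisive_clustering(S, right, depth - 1))
```

On a toy input such as `S = np.kron(np.eye(2), np.ones((10, 10))) + 0.1 * np.random.rand(20, 20)` (symmetrized with `S = 0.5 * (S + S.T)`), `iterative_bipartition(S)` recovers the planted two-block partition, which mirrors the noise-robustness claim of the abstract only in spirit, not as a reproduction of the paper's experiments.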
| Original language | English |
| --- | --- |
| Pages (from-to) | 35-61 |
| Number of pages | 27 |
| Journal | Machine Learning |
| Volume | 47 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Apr 2002 |
Keywords
- Cross-validation
- Hierarchical clustering
- Jensen-Shannon divergence
- Noise robustness
- Nonparametric methods
- Pairwise distances
ASJC Scopus subject areas
- Software
- Artificial Intelligence