Strengths and Weaknesses of various Clustering Algorithms

Sunday, April 27th, 2014

Clustering is one of the most fascinating areas of data mining for me. For example, it can be shown that no clustering function can satisfy a handful of simple and fairly straightforward axioms simultaneously (Kleinberg's impossibility theorem for clustering; the axioms are sketched below). That means every existing clustering algorithm has to make some trade-offs and relaxations. Every clustering algorithm will have a weakness and fail to cluster some data sets in a sensible manner.
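For reference, here is a rough statement of Kleinberg's three axioms; this is my own paraphrase of the 2002 paper, where f is a clustering function mapping a distance function d on a point set S to a partition of S:

```latex
% Kleinberg (2002): for any |S| >= 2, no clustering function f
% satisfies all three properties at once.
\begin{itemize}
  \item \textbf{Scale-invariance:} $f(\alpha \cdot d) = f(d)$ for every $\alpha > 0$.
  \item \textbf{Richness:} every partition of $S$ equals $f(d)$ for some distance $d$.
  \item \textbf{Consistency:} if $d'$ shrinks distances within the clusters of
        $f(d)$ and stretches distances between them, then $f(d') = f(d)$.
\end{itemize}
```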

The Fundamental Clustering Problem Suite (Ultsch, 2005) contains several synthetic data sets covering a variety of clustering problems that algorithms should be able to solve. Each data set comes with ground-truth labels, as well as labels produced by k-Means, Single Linkage, Ward's Linkage, and a Self-Organizing Map. I decided to play around with it a bit: I converted the SOM activation matrix to labels using k-Means, and added Spectral Clustering, Self-Tuning Spectral Clustering, and an EM-fitted Gaussian Mixture Model. I was particularly curious how well Spectral Clustering would do.

Determining the true number of clusters from the underlying data is an entirely different problem, so in all cases the number of clusters was specified unless otherwise noted. The regular Spectral Clustering used the Ng-Jordan-Weiss algorithm with a kernel sigma of 0.04 after linear scaling of the inputs; the Self-Tuning Spectral Clustering used k=15 nearest neighbors. I also used Random Forest "clustering", an extension of the classification algorithm that derives a distance matrix from the tree ensemble; that distance matrix was then fed into single linkage to obtain cluster labels. Random Forest is a special case here, since its distances come out of classification trees grown with the Gini criterion – a metric built for classification, not clustering.
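To make the setup concrete, below is a minimal Python sketch of the Spectral Clustering and Random Forest distance pipelines, using scikit-learn and SciPy. The helper names and the synthetic-data construction (Breiman's unsupervised recipe) are my own, and scikit-learn's SpectralClustering only approximates the Ng-Jordan-Weiss formulation, so treat this as an illustration rather than the exact code behind the results.

```python
# Sketch of the two non-standard pipelines described above.
# Assumptions: helper names are mine; sklearn's SpectralClustering stands in
# for Ng-Jordan-Weiss; the RF distance follows Breiman's unsupervised recipe.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import SpectralClustering
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

def spectral_labels(X, k, sigma=0.04):
    # Linearly scale inputs to [0, 1], then cluster with an RBF affinity.
    # sklearn's RBF kernel is exp(-gamma * ||x - y||^2), so gamma = 1 / (2 * sigma^2).
    Xs = MinMaxScaler().fit_transform(X)
    sc = SpectralClustering(n_clusters=k, affinity="rbf", gamma=1.0 / (2 * sigma**2))
    return sc.fit_predict(Xs)

def random_forest_distance(X, n_trees=500, seed=0):
    # Breiman-style unsupervised RF: train a classifier to separate the real
    # rows from a synthetic copy whose columns are independently permuted,
    # then measure how often pairs of real samples land in the same leaf.
    rng = np.random.default_rng(seed)
    synth = np.column_stack([rng.permutation(col) for col in X.T])
    X_all = np.vstack([X, synth])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(synth))]
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_all, y_all)
    leaves = rf.apply(X)  # (n_samples, n_trees) array of leaf indices
    proximity = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)
    return np.sqrt(1.0 - proximity)  # Gini-grown trees -> distance matrix

def rf_single_linkage_labels(X, k):
    # Single linkage on the condensed RF distance matrix, cut into k clusters.
    D = random_forest_distance(X)
    Z = linkage(squareform(D, checks=False), method="single")
    return fcluster(Z, t=k, criterion="maxclust")
```

With these helpers, spectral_labels(X, k=3) and rf_single_linkage_labels(X, k=3) would return label vectors that can be compared against the FCPS ground truth.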

Results

(more…)