Spectral clustering is a useful technique for separating clusters of points by connectedness, rather than by distance from cluster centers as k-means does, as in this example:
Spectral clustering uses the spectrum (the eigenvalues) of a matrix to cluster the points. First, you define an N x N adjacency matrix W. One such matrix could set W_ij = exp(-d_ij^2 / c), where d_ij is the distance between point i and point j, and c is a scale parameter. So W_ij decreases as the distance between points i and j increases. To make this matrix sparse and reduce computation, you could set W_ij to 0 if point j is not among the K nearest neighbors of point i.
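As a sketch of this step (in Python rather than the R used for the post; the function name, the default `c`, and the symmetrization choice are my own assumptions), the adjacency matrix could be built like this:

```python
import numpy as np

def affinity_matrix(X, c=1.0, k=10):
    """Gaussian affinity W_ij = exp(-d_ij^2 / c), sparsified to each point's
    k nearest neighbors and then symmetrized. Illustrative sketch only."""
    # squared pairwise distances d_ij^2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / c)
    np.fill_diagonal(W, 0.0)                   # no self-edges
    # keep only each point's k nearest neighbors
    keep = np.zeros_like(W, dtype=bool)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]    # column 0 is the point itself
    rows = np.arange(X.shape[0])[:, None]
    keep[rows, nn] = True
    # keep an edge if either endpoint keeps it, so W stays symmetric
    return W * (keep | keep.T)
```

Symmetrizing with "either endpoint keeps the edge" is one common convention for KNN graphs; using "both endpoints" instead gives a sparser graph.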
Then sum the rows of W to get the degree of each point. Let G be the diagonal matrix with the degrees down the diagonal and zeros elsewhere. Then define a matrix called the graph Laplacian by L = G - W. Clustering the points in the space of the eigenvectors of L associated with the smallest eigenvalues will group points that are closely connected in the adjacency matrix. (The eigenvector associated with the smallest eigenvalue should be thrown out: it is constant, with eigenvalue zero.) Here the colors reflect the true assignments from generating the three shapes.
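A minimal Python sketch of this step (the post itself uses R; the function name and the `ncomp` parameter are my own):

```python
import numpy as np

def laplacian_embedding(W, ncomp=2):
    """Unnormalized graph Laplacian L = G - W, returning the eigenvectors of
    the smallest eigenvalues, with the constant eigenvector dropped."""
    G = np.diag(W.sum(axis=1))      # degree matrix
    L = G - W
    # eigh: W (hence L) is symmetric; eigenvalues come back in ascending order
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:ncomp + 1]     # skip column 0, the constant eigenvector
```

On a graph with two tightly connected pairs joined by one weak edge, the first retained eigenvector (the Fiedler vector) takes one sign on each pair, which is exactly the "grouping by connectedness" described above.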
Clustering the points using k-means in the space of these 2 eigenvectors, then using those labels in the original coordinates, shows we have grouped by connectedness this time:
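For completeness, here is a minimal stand-in for the k-means step (plain Lloyd's algorithm in Python; the post's script would use R's built-in kmeans(), and this sketch's initialization and iteration count are arbitrary choices of mine):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest center,
    then move each center to the mean of its points. Illustrative only."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance from each point to each center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                # guard against empty clusters
                centers[j] = pts.mean(axis=0)
    return labels
```

Run on the eigenvector coordinates from the previous step, the resulting labels can then be plotted against the original x, y coordinates.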
Here is the R script used to produce this example.
In the 2nd edition of The Elements of Statistical Learning (available as a PDF), the authors show why spectral clustering produces these 'connected' clusters (book page 544):
Given a unit eigenvector f with eigenvalue λ of the graph Laplacian L, we have:

λ = f^T L f = (1/2) Σ_{i,j} W_ij (f_i - f_j)^2

So small values of f^T L f (small eigenvalues λ) are achieved when pairs of points i and j with large W_ij have coordinates f_i and f_j close together. Large W_ij corresponds to points that are close together in the original space.
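This identity is easy to check numerically. The following Python sketch (my own illustration, not from the post) builds a random symmetric weight matrix, forms L, and verifies that f^T L f equals both the eigenvalue λ and the weighted sum of squared coordinate differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n))
W = (A + A.T) / 2                  # symmetric weights
np.fill_diagonal(W, 0.0)           # no self-edges
L = np.diag(W.sum(axis=1)) - W     # graph Laplacian L = G - W

vals, vecs = np.linalg.eigh(L)     # orthonormal eigenvectors, ascending eigenvalues
f, lam = vecs[:, 1], vals[1]       # a unit eigenvector and its eigenvalue

quad = f @ L @ f                   # f^T L f
ssum = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                 for i in range(n) for j in range(n))
# both quantities equal the eigenvalue lambda
assert np.allclose(quad, lam) and np.allclose(ssum, lam)
```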