3. Types of Clustering
• A clustering is a set of clusters
• An important distinction is between hierarchical and partitional sets of clusters
• Partitional Clustering: A division of data objects into non-overlapping subsets (clusters) such that
each data object is in exactly one subset, as in the figure below:
[Figure: original points and a partitional clustering of them]
4. Types of Clusters
• Well-separated clusters: A cluster is a set of points such that any point in a cluster is closer (or more
similar) to every other point in the cluster than to any point not in the cluster.
• Center-based clusters: A cluster is a set of objects such that an object in a cluster is closer (more
similar) to the “center” of its cluster than to the center of any other cluster. The center of a cluster is often a
centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a
cluster (see the sketch after this list).
• Contiguous clusters (nearest neighbor or transitive): A cluster is a set of points such that a point
in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the
cluster.
• Density-based clusters: A cluster is a dense region of points separated from other high-density
regions by regions of low density. Used when the clusters are irregular or intertwined, and when noise and
outliers are present.
• Property or Conceptual: Finds clusters that share some common property or represent a particular
concept.
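The centroid/medoid distinction above is easy to see in code. The following is a minimal sketch in plain Python; the helper names centroid and medoid and the three-point cluster are made up for illustration. A centroid is a coordinate-wise mean that need not be an actual data point, while a medoid is always one of the points.

    # Illustrative sketch: centroid (mean) vs. medoid (most central actual point).
    def centroid(points):
        """Coordinate-wise mean; need not coincide with any data point."""
        n = len(points)
        return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

    def medoid(points):
        """The data point whose total Euclidean distance to all points is smallest."""
        def total_dist(p):
            return sum(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in points)
        return min(points, key=total_dist)

    cluster = [(1.0, 1.0), (2.0, 1.0), (1.5, 3.0)]
    print(centroid(cluster))  # (1.5, 1.666...): not one of the input points
    print(medoid(cluster))    # (1.0, 1.0): an actual member of the cluster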
Algorithm: k-means. The k-means algorithm partitions the data into k clusters, where each cluster’s
center is represented by the mean value of the objects in the cluster.
Input:
• k: the number of clusters,
• D: a data set containing n objects.
Output: A set of k clusters.
Method:
• arbitrarily choose k objects from D as the initial cluster centers;
• repeat
• (re)assign each object to the cluster to which the object is the most similar, based on the mean
value of the objects in the cluster;
• update the cluster means, i.e., calculate the mean value of the objects for each cluster;
• until no change;
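The pseudocode above maps directly onto a short implementation. The following is a minimal sketch using NumPy and Euclidean distance; the function name kmeans and its parameters are illustrative rather than taken from any library, and for simplicity the sketch assumes no cluster ever becomes empty.

    import numpy as np

    def kmeans(D, k, seed=0, max_iter=100):
        """D: (n, d) array of points; returns (centers, labels).
        Sketch only: Euclidean distance, and no cluster is allowed to empty out."""
        rng = np.random.default_rng(seed)
        # arbitrarily choose k objects from D as the initial cluster centers
        centers = D[rng.choice(len(D), size=k, replace=False)]
        for _ in range(max_iter):  # max_iter is a safeguard against slow convergence
            # (re)assign each object to the cluster with the nearest center
            labels = np.linalg.norm(D[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
            # update the cluster means
            new_centers = np.array([D[labels == j].mean(axis=0) for j in range(k)])
            if np.allclose(new_centers, centers):  # until no change
                break
            centers = new_centers
        return centers, labels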
K-means Clustering – Details
1. Initial centroids are often chosen randomly.
• Clusters produced vary from one run to another.
2. The centroid is (typically) the mean of the points in the cluster.
3. ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
4. K-means will converge for common similarity measures mentioned above.
5. Most of the convergence happens in the first few iterations.
• Often the stopping condition is changed to ‘Until relatively few points change clusters’
6. Complexity is O(n * K * I * d), where n = number of points, K = number of clusters, I = number of
iterations, and d = number of attributes.
Evaluating K-means Clusters
The most common measure is the Sum of Squared Errors (SSE), computed in the sketch below:
1. For each point, the error is its distance to the nearest cluster centroid.
2. To get SSE, we square these errors and sum them over all points.
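Under the same conventions as the k-means sketch above, SSE takes only a few lines; the function name sse is again illustrative.

    def sse(D, centers, labels):
        """Squared Euclidean distance from each point to its assigned center, summed."""
        return float(((D - centers[labels]) ** 2).sum())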
The main advantages of this algorithm are its simplicity and speed, which allow it to run on large
datasets. Its disadvantage is that it does not yield the same result on each run, since the resulting
clusters depend on the initial random choice of centers. It minimizes intra-cluster variance (equivalently,
maximizes inter-cluster variance), but it converges only to a local minimum of the SSE and does not
guarantee the global minimum.
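This run-to-run variation can be seen by calling the sketches above with different seeds on the same data; the blob positions and seeds here are arbitrary toy choices.

    import numpy as np

    rng = np.random.default_rng(7)
    # three loose 2-D blobs; k-means may split them differently per seed
    D = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 2))
                   for c in [(0, 0), (5, 5), (0, 5)]])

    for seed in (0, 1, 2):
        centers, labels = kmeans(D, k=3, seed=seed)
        print(f"seed={seed}  SSE={sse(D, centers, labels):.2f}")
    # SSE can differ across seeds when an initialization lands poorly:
    # each run reaches only a local minimum of the SSE.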