
Flat Clustering

Adapted from Slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti


Today's Topic: Clustering


Document clustering: motivations, document representations, success criteria

Clustering algorithms: partitional (flat) and hierarchical (tree)

What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects
Documents within a cluster should be similar. Documents from different clusters should be dissimilar.

The commonest form of unsupervised learning


Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of examples is given

A common and important task that finds many applications in IR and other places

A data set with clear cluster structure

How would you design an algorithm for finding the three clusters in this case?

Classification vs. Clustering


Classification: supervised learning. Clustering: unsupervised learning.
Classification: classes are human-defined and part of the input to the learning algorithm.
Clustering: clusters are inferred from the data without human input.

However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .

Why cluster documents?


Whole corpus analysis/navigation: better user interface

For improving recall in search applications: better search results (pseudo relevance feedback)

For better navigation of search results: effective "user recall" will be higher

For speeding up vector space retrieval: faster search

Applications of clustering in IR

Application               What is clustered?        Benefit
Search result clustering  search results            more effective information presentation to the user
Scatter-Gather            (subsets of) collection   alternative user interface: search without typing
Collection clustering     collection                effective information presentation for exploratory browsing
Cluster-based retrieval   collection                higher efficiency: faster search

The Yahoo! hierarchy isn't clustering, but it is the kind of output you want from clustering
[Figure: excerpt of the Yahoo! directory tree under www.yahoo.com/Science (30), with subcategories such as agriculture, biology, physics, CS, space, and children including dairy, crops, agronomy, botany, cell, evolution, forestry, magnetism, relativity, AI, courses, HCI, craft, missions.]


Global navigation: MeSH (upper level)

Global navigation: MeSH (lower level)


Navigational hierarchies: manual vs. automatic creation


Note: Yahoo!/MeSH are not examples of clustering, but they are well-known examples of using a global hierarchy for navigation. Some examples of global navigation/exploration based on clustering: Cartia ThemeScapes, Google News.


Google News: automatic clustering gives an effective news presentation metaphor

Scatter/Gather: Cutting, Karger, and Pedersen

For visualizing a document collection and its themes


Wise et al., "Visualizing the non-visual": PNNL ThemeScapes, Cartia
[Mountain height = cluster size]

For improving search recall


Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs. Therefore, to improve search recall:
Cluster docs in the corpus a priori. When a query matches a doc D, also return other docs in the cluster containing D.

Example: the query "car" will also return docs containing "automobile".

Why might this happen? Because clustering grouped together docs containing "car" with those containing "automobile".


For better navigation of search results


For grouping search results thematically

clusty.com / Vivisimo


Search result clustering for better navigation


Issues for clustering


Representation for clustering
Document representation: vector space? normalization?
Need a notion of similarity/distance

How many clusters?
Fixed a priori? Completely data-driven?
Avoid trivial clusters - too large or too small


What makes docs related?


Ideal: semantic similarity. Practical: statistical similarity
Docs as vectors. For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs. We can use cosine similarity (or, alternatively, Euclidean distance).
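As a small illustration (the toy tf-idf vectors and helper names below are my own, not from the slides), both measures are one-liners over vector representations:

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine of the angle between two document vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def euclidean_distance(x, y):
    """Straight-line distance between two document vectors."""
    return float(np.linalg.norm(x - y))

d1 = np.array([0.5, 0.0, 1.2])   # toy tf-idf vectors over a 3-term vocabulary
d2 = np.array([0.4, 0.1, 1.0])
print(cosine_similarity(d1, d2), euclidean_distance(d1, d2))
```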

Clustering Algorithms
Partitional (flat) algorithms
Usually start with a random (partial) partition and refine it iteratively
K-means clustering (model-based clustering)

Hierarchical (tree) algorithms
Bottom-up, agglomerative (top-down, divisive)

Hard vs. soft clustering


Hard clustering: Each document belongs to exactly one cluster

More common and easier to do


Soft clustering: A document can belong to more than one cluster.

Makes more sense for applications like creating browsable hierarchies


You may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes

Partitioning Algorithms
Partitioning method: construct a partition of n documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
Globally optimal: exhaustively enumerate all partitions.
Effective heuristic methods: K-means and K-medoids algorithms.

K-Means
Assumes documents are real-valued vectors. Clusters based on centroids (aka the center of gravity or mean) of points in a cluster, c.

Reassignment of instances to clusters is based on distance to the current cluster centroids. (Or one can equivalently phrase it in terms of similarities)
Prasad L16FlatCluster 23

$\vec{\mu}(c) = \frac{1}{|c|} \sum_{\vec{x} \in c} \vec{x}$

K-Means Algorithm
Select K random docs {s1, s2, ..., sK} as seeds.
Until clustering converges (or another stopping criterion is met):
For each doc di: assign di to the cluster cj such that dist(di, sj) is minimal.
(Update the seeds to the centroid of each cluster:) for each cluster cj, sj = μ(cj).
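The loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation used in the lecture; the function name and the max_iter and seed parameters are my own additions.

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Minimal K-means sketch: X is an (n, m) array of document vectors."""
    rng = np.random.default_rng(seed)
    # Select K random docs as the initial seeds.
    centroids = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    assignment = None
    for _ in range(max_iter):
        # Assign each doc to the cluster whose centroid is closest (Euclidean).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if assignment is not None and np.array_equal(new_assignment, assignment):
            break  # doc partition unchanged -> converged
        assignment = new_assignment
        # Update each seed to the centroid (mean) of its cluster.
        for k in range(K):
            members = X[assignment == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return assignment, centroids
```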

K-means algorithm


K Means Example
(K=2)
Pick seeds → Reassign clusters → Compute centroids → Reassign clusters → Compute centroids → Reassign clusters → Converged!


Worked Example: Set of points to be clustered


Worked Example: Random selection of initial centroids

Exercise: (i) guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.

Worked Example: Assign points to closest centroid

Worked Example: Recompute cluster centroids

[The figure-only slides repeat this assign/recompute cycle for several iterations.]

Worked Example: Centroids and assignments after convergence

Termination conditions
Several possibilities, e.g.,
A fixed number of iterations. Doc partition unchanged. Centroid positions don't change.

Does this mean that the docs in a cluster are unchanged?



Convergence
Why should the K-means algorithm ever reach a fixed point?
A state in which clusters don't change.

K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
EM is known to converge. Number of iterations could be large.



Convergence of K-Means
Define goodness measure of cluster k as sum of squared distances from cluster centroid:
$G_k = \sum_{i} (\vec{d}_i - \vec{c}_k)^2$   (sum over all $\vec{d}_i$ in cluster $k$)

$G = \sum_k G_k$

Intuition: reassignment monotonically decreases $G$, since each vector is assigned to the closest centroid.


Convergence of K-Means
Recomputation monotonically decreases each $G_k$, since (with $m_k$ the number of members in cluster $k$) $\sum_i (\vec{d}_i - \vec{a})^2$ reaches its minimum when $\sum_i 2(\vec{d}_i - \vec{a}) = 0$, i.e., $\sum_i \vec{d}_i = m_k \vec{a}$, i.e., $\vec{a} = \frac{1}{m_k} \sum_i \vec{d}_i = \vec{c}_k$. K-means typically converges quickly.

Time Complexity
Computing the distance between two docs is O(m), where m is the dimensionality of the vectors.
Reassigning clusters: O(Kn) distance computations, i.e., O(Knm).
Computing centroids: each doc gets added once to some centroid: O(nm).
Assume these two steps are each done once in each of I iterations: O(IKnm).
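As a rough illustration with hypothetical numbers (not from the slides): for I = 10 iterations, K = 10 clusters, n = 100,000 docs, and m = 10,000 dimensions, reassignment alone costs on the order of IKnm = 10^11 vector-component operations, which dominates the O(Inm) ≈ 10^10 cost of recomputing centroids.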

Optimality of K-means

Convergence does not mean that we converge to the optimal clustering! This is the great weakness of K-means. If we start with a bad set of seeds, the resulting clustering can be horrible.


Seed Choice
Results can vary based on random seed selection. Some seeds can result in poor convergence rate, or convergence to suboptimal clusterings.
Select good seeds using a heuristic (e.g., pick a doc least similar to any existing mean). Try out multiple starting points.
Example showing sensitivity to seeds

In the above, if you start with B and E as centroids, you converge to {A,B,C} and {D,E,F}. If you start with D and F, you converge to {A,B,D,E} and {C,F}.
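A hedged sketch of the "try out multiple starting points" heuristic, reusing the kmeans() sketch from the earlier slide: run it with different random seeds and keep the clustering with the smallest residual sum of squares G. The helper names rss and kmeans_restarts are my own.

```python
import numpy as np

def rss(X, assignment, centroids):
    """G = sum over all docs of squared distance to their cluster centroid."""
    return float(((X - centroids[assignment]) ** 2).sum())

def kmeans_restarts(X, K, n_restarts=10):
    best = None
    for seed in range(n_restarts):
        assignment, centroids = kmeans(X, K, seed=seed)  # kmeans() sketch above
        g = rss(X, assignment, centroids)
        if best is None or g < best[0]:
            best = (g, assignment, centroids)   # keep the lowest-G clustering
    return best
```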


K-means issues, variations, etc.


Recomputing the centroid after every assignment (rather than after all points are reassigned) can improve the speed of convergence of K-means.
Assumes clusters are spherical in vector space
Sensitive to coordinate changes, weighting etc.

Disjoint and exhaustive


Doesn't have a notion of outliers, but outlier filtering can be added

How Many Clusters?


Number of clusters K is given
Partition n docs into predetermined number of clusters

Finding the right number of clusters is part of the problem


Given docs, partition them into an "appropriate" number of subsets. E.g., for query results, the ideal value of K is not known up front, though the UI may impose limits.


Hierarchical Clustering

Adapted from Slides by Prabhakar Raghavan, Christopher Manning, Ray Mooney and Soumen Chakrabarti


The Curse of Dimensionality


Why document clustering is difficult
While clustering looks intuitive in 2 dimensions, many of our applications involve 10,000 or more dimensions. High-dimensional spaces look different:
The probability of random points being close drops quickly as the dimensionality grows. Furthermore, a random pair of vectors is almost always nearly perpendicular.
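A small illustration of the last claim (my own, with hypothetical dimensionalities): the cosine between a random pair of vectors concentrates near 0, i.e., near perpendicular, as the dimensionality m grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for m in (2, 100, 10_000):
    a, b = rng.standard_normal((2, m))                    # one random pair per m
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(m, round(float(cos), 3))                        # |cos| shrinks as m grows
```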

Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.
[Example taxonomy: animal → vertebrate (fish, reptile, amphib., mammal), invertebrate (worm, insect, crustacean)]

One approach: recursive application of a partitional clustering algorithm.



Dendrogram: Hierarchical Clustering


Clustering obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.


Hierarchical Clustering algorithms


Agglomerative (bottom-up):

Precondition: Start with each document as a separate cluster. Postcondition: Eventually all documents belong to the same cluster.

Divisive (top-down):
Precondition: Start with all documents belonging to the same cluster. Postcondition: Eventually each document forms a cluster of its own.

Does not require the number of clusters k in advance.


Needs a termination/readout condition


Hierarchical Agglomerative Clustering (HAC) Algorithm


Start with all instances in their own cluster.
Until there is only one cluster:
Among the current clusters, determine the two clusters, ci and cj, that are most similar.
Replace ci and cj with a single cluster ci ∪ cj.
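A naive sketch of this loop, assuming dense document vectors and using cosine similarity between cluster centroids to pick the most similar pair (one of the variants discussed later); the function name hac and the num_clusters stopping parameter are my own additions, not part of the slides.

```python
import numpy as np

def hac(X, num_clusters=1):
    """Naive agglomerative clustering over the rows of X (O(n^3) sketch)."""
    clusters = [[i] for i in range(len(X))]          # each doc starts alone
    while len(clusters) > num_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                ci = X[clusters[i]].mean(axis=0)     # centroid of cluster i
                cj = X[clusters[j]].mean(axis=0)     # centroid of cluster j
                sim = ci @ cj / (np.linalg.norm(ci) * np.linalg.norm(cj))
                if best is None or sim > best[0]:
                    best = (sim, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]      # merge the most similar pair
        del clusters[j]
    return clusters
```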


Dendrogram: Document Example


As clusters agglomerate, docs likely to fall into a hierarchy of topics or concepts.

[Figure: dendrogram over d1, ..., d5: d1 and d2 merge into {d1,d2}; d4 and d5 merge into {d4,d5}; d3 then joins to form {d3,d4,d5}.]


Key notion: cluster representative


We want a notion of a representative point in a cluster, to represent the location of each cluster. The representative should be some sort of typical or central point in the cluster, e.g.,
the point inducing the smallest radius to docs in the cluster, the point with the smallest squared distances, etc., or the point that is the average of all docs in the cluster.


Example: n=6, k=3, closest pair of centroids

[Figure: six points d1, ..., d6; the centroid after the first step and the centroid after the second step are marked.]



Outliers in centroid computation


Can ignore outliers when computing centroid. What is an outlier?
Lots of statistical definitions, e.g., distance (moment) of a point to the centroid > M × some cluster moment, with, say, M = 10.
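A minimal sketch of one such rule; the choice of the mean distance to the centroid as the "cluster moment" and the default M = 10 are illustrative assumptions, not prescribed by the slides.

```python
import numpy as np

def robust_centroid(points, M=10.0):
    """Recompute the centroid after dropping far-away (outlier) points."""
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    keep = dists <= M * dists.mean()        # flag points far beyond the cluster moment
    return points[keep].mean(axis=0)
```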

[Figure: a cluster with its centroid and a distant outlier.]


Closest pair of clusters


Many variants for defining the "closest pair" of clusters.

Single-link
Similarity of the most cosine-similar pair of docs (single-link)

Complete-link
Similarity of the furthest points, the least cosine-similar

Centroid
Clusters whose centroids (centers of gravity) are the most cosine-similar
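A small sketch of the three variants above for clusters of dense vectors; the helper function names are my own.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def single_link(A, B):      # most similar pair of docs across the two clusters
    return max(cos(a, b) for a in A for b in B)

def complete_link(A, B):    # least similar (furthest) pair across the clusters
    return min(cos(a, b) for a in A for b in B)

def centroid_link(A, B):    # similarity of the two cluster centroids
    return cos(np.mean(A, axis=0), np.mean(B, axis=0))
```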
