Tan, Steinbach, Kumar
4/18/2004
Market-basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Itemset
- A collection of one or more items, e.g., {Milk, Bread, Diaper}
- k-itemset: an itemset that contains k items

Support count (σ)
- Frequency of occurrence of an itemset
- E.g., σ({Milk, Bread, Diaper}) = 2

Support (s)
- Fraction of transactions that contain an itemset
- E.g., s({Milk, Bread, Diaper}) = 2/5

Frequent Itemset
- An itemset whose support is greater than or equal to a minsup threshold
Association Rule
- An implication expression of the form X → Y, where X and Y are itemsets
- Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics
- Support (s): fraction of transactions that contain both X and Y
- Confidence (c): measures how often items in Y appear in transactions that contain X

Example, using the transactions above:

{Milk, Diaper} → {Beer}
s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4
c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 = 0.67
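The two metrics above can be computed directly from the transaction list. A minimal sketch (the transaction data matches the slide example; the function names are our own):

```python
# Computing support and confidence for {Milk, Diaper} -> {Beer}
# over the example market-basket transactions.

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(lhs, rhs, transactions):
    """Return (support, confidence) of the rule lhs -> rhs."""
    n = len(transactions)
    both = support_count(lhs | rhs, transactions)
    return both / n, both / support_count(lhs, transactions)

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"}, transactions)
print(round(s, 2), round(c, 2))  # 0.4 0.67, matching the slide
```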
Association Rule Mining Task
Given a set of transactions T, find all rules having support ≥ minsup and confidence ≥ minconf.

Brute-force approach:
- List all possible association rules
- Compute the support and confidence for each rule
- Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
Example rules derived from the itemset {Milk, Diaper, Beer}:

{Milk, Diaper} → {Beer}   (s=0.4, c=0.67)
{Milk, Beer} → {Diaper}   (s=0.4, c=1.0)
{Diaper, Beer} → {Milk}   (s=0.4, c=0.67)
{Beer} → {Milk, Diaper}   (s=0.4, c=0.67)
{Diaper} → {Milk, Beer}   (s=0.4, c=0.5)
{Milk} → {Diaper, Beer}   (s=0.4, c=0.5)

Observations:
- All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
- Rules originating from the same itemset have identical support but can have different confidence
- Thus, we may decouple the support and confidence requirements
Two-step approach:
1. Frequent Itemset Generation: generate all itemsets whose support ≥ minsup
2. Rule Generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive.
[Figure: the itemset lattice over items A–E, from the single items up to ABCDE — given d items, there are 2^d possible candidate itemsets]
Brute-force approach:
- Each itemset in the lattice is a candidate frequent itemset
- Count the support of each candidate by scanning the database
- Match each transaction against every candidate
- Complexity ~ O(NMw), which is expensive since M = 2^d

[Figure: the list of N transactions is matched against the list of M candidates; w denotes the maximum transaction width]
Computational Complexity
Given d unique items, the total number of possible association rules is

    R = Σ_{k=1}^{d−1} [ C(d, k) × Σ_{j=1}^{d−k} C(d−k, j) ] = 3^d − 2^(d+1) + 1

For d = 6, R = 602 rules.
Apriori principle:
If an itemset is frequent, then all of its subsets must also be frequent.

This holds because support is anti-monotone:

    ∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)

i.e., the support of an itemset never exceeds the support of any of its subsets.
[Figure: the itemset lattice in which {A,B} is found to be infrequent — all of its supersets (the pruned region from AB up through ABCDE) can be discarded without counting]
Items (1-itemsets):

Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Minimum Support = 3

Pairs (2-itemsets) — no need to generate candidates involving Coke or Eggs:

Itemset          Count
{Bread,Milk}     3
{Bread,Beer}     2
{Bread,Diaper}   3
{Milk,Beer}      2
{Milk,Diaper}    3
{Beer,Diaper}    3

Triplets (3-itemsets):

Itemset               Count
{Bread,Milk,Diaper}   3

With support-based pruning, 6 + 6 + 1 = 13 candidates are counted, instead of the 6C1 + 6C2 + 6C3 = 41 itemsets a brute-force enumeration would consider.
Apriori Algorithm
Method:
1. Let k = 1
2. Generate frequent itemsets of length 1
3. Repeat until no new frequent itemsets are identified:
   - Generate length-(k+1) candidate itemsets from length-k frequent itemsets
   - Prune candidate itemsets containing subsets of length k that are infrequent
   - Count the support of each candidate by scanning the DB
   - Eliminate candidates that are infrequent, leaving only those that are frequent
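The steps above can be sketched in a few lines of Python. This is a minimal illustration, not an optimized implementation; the data and minsup mirror the slide example, and the function name is our own:

```python
# Minimal Apriori sketch: generate, prune, count, eliminate, repeat.
from itertools import combinations

def apriori(transactions, minsup):
    """Return {frozenset: support_count} for all frequent itemsets."""
    transactions = [set(t) for t in transactions]
    # k = 1: frequent single items
    items = {i for t in transactions for i in t}
    counts = {frozenset([i]): sum(i in t for t in transactions) for i in items}
    freq = {s: c for s, c in counts.items() if c >= minsup}
    result = dict(freq)
    k = 1
    while freq:
        # generate (k+1)-candidates by merging frequent k-itemsets
        candidates = {a | b for a in freq for b in freq if len(a | b) == k + 1}
        # prune candidates that contain an infrequent k-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k))}
        # count support by scanning the DB; keep only the frequent ones
        freq = {}
        for c in candidates:
            n = sum(1 for t in transactions if c <= t)
            if n >= minsup:
                freq[c] = n
        result.update(freq)
        k += 1
    return result

frequent = apriori(
    [{"Bread", "Milk"},
     {"Bread", "Diaper", "Beer", "Eggs"},
     {"Milk", "Diaper", "Beer", "Coke"},
     {"Bread", "Milk", "Diaper", "Beer"},
     {"Bread", "Milk", "Diaper", "Coke"}],
    minsup=3)
print(frequent[frozenset({"Milk", "Diaper"})])  # 3
```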
Reducing Number of Comparisons
Candidate counting:
- Scan the database of transactions to determine the support of each candidate itemset
- To reduce the number of comparisons, store the candidates in a hash structure
- Instead of matching each transaction against every candidate, match it against candidates contained in the hashed buckets

[Figure: transactions are hashed into the buckets of the hash structure that holds the candidate itemsets]
Generate Hash Tree
Suppose we have 15 candidate itemsets of length 3:
{2 3 4}, {5 6 7}, {3 4 5}, {1 3 6}, {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

We need:
- A hash function, e.g., h(item) = item mod 3, with branches for items {1,4,7}, {2,5,8}, and {3,6,9}
- Max leaf size: max number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds max leaf size, split the node)

[Figure: the resulting hash tree over the 15 candidates]
[Figures: inserting candidates into the hash tree — at each level, an itemset is routed down the left branch on items 1, 4, or 7, down the middle branch on 2, 5, or 8, and down the right branch on 3, 6, or 9]
Subset Operation
Given a transaction t = {1, 2, 3, 5, 6}, what are the possible subsets of size 3?

[Figure: prefix-tree enumeration — Level 1 fixes the first item (1, 2, or 3), Level 2 the second, and Level 3 lists the ten 3-item subsets: 123, 125, 126, 135, 136, 156, 235, 236, 256, 356]
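The prefix-tree walk above produces exactly the C(5,3) = 10 size-3 subsets in lexicographic order; `itertools.combinations` performs the same enumeration:

```python
# Enumerating all 3-item subsets of the transaction t = {1, 2, 3, 5, 6}.
from itertools import combinations

t = [1, 2, 3, 5, 6]
subsets = ["".join(map(str, c)) for c in combinations(t, 3)]
print(subsets)
# ['123', '125', '126', '135', '136', '156', '235', '236', '256', '356']
```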
Subset Operation Using Hash Tree
[Figure: transaction {1 2 3 5 6} is matched against the hash tree — at the root it is split into 1+{2 3 5 6}, 2+{3 5 6}, and 3+{5 6}, and each part is hashed down the corresponding branch]
[Figure: at the next level, the 1-branch of transaction {1 2 3 5 6} is split into 12+{3 5 6}, 13+{5 6}, and 15+{6}, each hashed further down the tree]
[Figure: the traversal reaches the leaf nodes; the transaction is compared only against the candidates stored in the visited leaves, rather than against all 15 candidates]
Factors Affecting Complexity
- Choice of minimum support threshold
- Dimensionality (number of items) of the data set
- Size of database
- Average transaction width
Compact Representation of Frequent Itemsets
Some itemsets are redundant because they have identical support as their supersets, and the number of frequent itemsets can be enormous, e.g.

    3 × Σ_{k=1}^{10} C(10, k)

Therefore we need a compact representation.
Maximal Frequent Itemset
An itemset is maximal frequent if none of its immediate supersets is frequent.

[Figure: the itemset lattice with the border separating frequent from infrequent itemsets; the maximal frequent itemsets lie just inside the border]
Closed Itemset
An itemset is closed if none of its immediate supersets has the same support as the itemset.

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset   Support        Itemset     Support
{A}       4              {A,B,C}     2
{B}       5              {A,B,D}     3
{C}       3              {A,C,D}     2
{D}       4              {B,C,D}     3
{A,B}     4              {A,B,C,D}   2
{A,C}     2
{A,D}     3
{B,C}     3
{B,D}     4
{C,D}     3
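Closed itemsets can be identified by brute force for a small example like the one above. A sketch (helper names are our own; we rely on support being monotone, so checking all supersets is equivalent to checking immediate ones):

```python
# Identifying closed itemsets over the 5-transaction example above.
from itertools import combinations

db = [{"A", "B"}, {"B", "C", "D"}, {"A", "B", "C", "D"},
      {"A", "B", "D"}, {"A", "B", "C", "D"}]
items = sorted(set().union(*db))

def sup(s):
    """Support count of itemset s."""
    return sum(1 for t in db if s <= t)

all_itemsets = [frozenset(c) for k in range(1, len(items) + 1)
                for c in combinations(items, k)]
supports = {s: sup(s) for s in all_itemsets if sup(s) > 0}

# closed: no superset has the same support
closed = [s for s in supports
          if not any(s < t and supports.get(t) == supports[s]
                     for t in supports)]

# {A} has support 4, but so does its superset {A,B}, so {A} is not closed
print(frozenset({"A"}) in closed, frozenset({"A", "B"}) in closed)
```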
Maximal vs Closed Itemsets
[Figure: the itemset lattice for the transactions above, annotated with the transaction IDs supporting each itemset; some itemsets are not supported by any transactions]
[Figure: the same lattice with the closed-but-not-maximal and closed-and-maximal itemsets highlighted; # Closed = 9, # Maximal = 4]
Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets

[Figure: Venn diagram — the maximal frequent itemsets are contained in the closed frequent itemsets, which are contained in the frequent itemsets]
[Figure: traversal of the itemset lattice between null and {a1,a2,...,an} relative to the frequent itemset border — (a) general-to-specific, (b) specific-to-general, (c) bidirectional]
[Figure: two ways of partitioning the lattice over {A,B,C,D} into equivalence classes — grouping itemsets by prefix label versus by suffix label]
Representation of Database
Horizontal vs vertical data layout:

Horizontal Data Layout:

TID  Items
1    A,B,E
2    B,C,D
3    C,E
4    A,C,D
5    A,B,C,D
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B

Vertical Data Layout (one TID-list per item):

A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6
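Converting between the two layouts is a single pass over the transactions. A small sketch using the data above (variable names are our own):

```python
# Convert the horizontal layout into the vertical (TID-list) layout.
horizontal = {
    1: {"A", "B", "E"}, 2: {"B", "C", "D"}, 3: {"C", "E"},
    4: {"A", "C", "D"}, 5: {"A", "B", "C", "D"}, 6: {"A", "E"},
    7: {"A", "B"}, 8: {"A", "B", "C"}, 9: {"A", "C", "D"}, 10: {"B"},
}

vertical = {}
for tid, items in horizontal.items():
    for item in items:
        # TIDs are appended in increasing order, so each list stays sorted
        vertical.setdefault(item, []).append(tid)

print(vertical["A"])  # [1, 4, 5, 6, 7, 8, 9]
```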
FP-growth Algorithm
- Uses a compressed representation of the database in the form of an FP-tree
- Once an FP-tree has been constructed, it uses a recursive divide-and-conquer approach to mine the frequent itemsets
FP-tree construction

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

After reading TID=1:
  null → A:1 → B:1

After reading TID=2, the tree has two branches:
  null → A:1 → B:1
  null → B:1 → C:1 → D:1
FP-Tree Construction (continued)
After reading all ten transactions of the transaction database, the FP-tree is:

null
├─ A:7
│  ├─ B:5
│  │  ├─ C:3 ─ D:1
│  │  └─ D:1
│  ├─ C:1 ─ D:1 ─ E:1
│  └─ D:1 ─ E:1
└─ B:3
   └─ C:3
      ├─ D:1
      └─ E:1

Header table: one entry per item (A, B, C, D, E), each pointing to a linked list that threads together all tree nodes for that item.

Pointers are used to assist frequent itemset generation.
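The core of FP-tree construction is prefix sharing: each transaction, with items in a fixed order (here the order A–E used on the slide), walks down the tree and increments counts on shared nodes. A minimal sketch with our own class and function names:

```python
# FP-tree insertion: shared prefixes are merged, counts incremented.
class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def insert(root, transaction):
    node = root
    for item in transaction:
        # reuse an existing child for this item or create a new one
        node = node.children.setdefault(item, Node(item))
        node.count += 1

db = [["A", "B"], ["B", "C", "D"], ["A", "C", "D", "E"], ["A", "D", "E"],
      ["A", "B", "C"], ["A", "B", "C", "D"], ["B", "C"], ["A", "B", "C"],
      ["A", "B", "D"], ["B", "C", "E"]]

root = Node(None)
for t in db:
    insert(root, t)

print(root.children["A"].count)                # A:7
print(root.children["A"].children["B"].count)  # B:5 under A
print(root.children["B"].count)                # B:3
```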
FP-growth
[Figure: the subtree of the FP-tree containing the paths that end in D]
- Conditional pattern base for D: P = {(A:1,B:1,C:1), (A:1,B:1), (A:1,C:1), (A:1), (B:1,C:1)}
- Recursively apply FP-growth on P
- Frequent itemsets found (with sup > 1): AD, BD, CD, ACD, BCD
Tree Projection
[Figure: the lattice rooted at null — the set of possible extensions of node A is E(A) = {B,C,D,E}, and of node ABC is E(ABC) = {D,E}]
Tree Projection
Items are listed in lexicographic order.
Each node P stores the following information:
- The itemset for node P
- The list of possible lexicographic extensions of P: E(P)
- A pointer to the projected database of its ancestor node
- A bitvector recording which transactions in the projected database contain the itemset
Projected Database

Original Database:

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

Projected Database for node A (each transaction containing A is intersected with E(A); transactions without A project to {}):

TID  Items
1    {B}
2    {}
3    {C,D,E}
4    {D,E}
5    {B,C}
6    {B,C,D}
7    {}
8    {B,C}
9    {B,D}
10   {}
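The projection step is a one-line set intersection per transaction. A sketch under the assumptions stated on the slide (E(A) = {B, C, D, E}; variable names are our own):

```python
# Project each transaction onto node A: keep only items from E(A),
# and map transactions that do not contain A to the empty set.
db = [{"A", "B"}, {"B", "C", "D"}, {"A", "C", "D", "E"}, {"A", "D", "E"},
      {"A", "B", "C"}, {"A", "B", "C", "D"}, {"B", "C"}, {"A", "B", "C"},
      {"A", "B", "D"}, {"B", "C", "E"}]

extensions = {"B", "C", "D", "E"}  # E(A): items after A in lexicographic order

projected = [t & extensions if "A" in t else set() for t in db]
print(sorted(projected[2]))  # ['C', 'D', 'E'] for TID 3
```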
ECLAT
For each item, store a list of transaction IDs (TID-list). For the horizontal data shown earlier (TIDs 1–10):

A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6
ECLAT (continued)
Determine the support of any (k+1)-itemset by intersecting the TID-lists of two of its k-subsets:

A:  1, 4, 5, 6, 7, 8, 9
B:  1, 2, 5, 7, 8, 10
AB: 1, 5, 7, 8

3 traversal approaches: top-down, bottom-up and hybrid.
Advantage: very fast support counting. Disadvantage: intermediate TID-lists may become too large for memory.
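The intersection step above can be done with a linear merge, since TID-lists are kept sorted; the support of AB is then just the length of the result. A minimal sketch:

```python
# TID-list intersection as in the ECLAT step: support(AB) = |tidlist(A) ∩ tidlist(B)|.
def intersect(xs, ys):
    """Merge-intersect two sorted TID-lists in O(|xs| + |ys|)."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] == ys[j]:
            out.append(xs[i]); i += 1; j += 1
        elif xs[i] < ys[j]:
            i += 1
        else:
            j += 1
    return out

A = [1, 4, 5, 6, 7, 8, 9]
B = [1, 2, 5, 7, 8, 10]
AB = intersect(A, B)
print(AB, len(AB))  # [1, 5, 7, 8] 4
```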
Rule Generation
Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement.

If {A,B,C,D} is a frequent itemset, the candidate rules are:
ABC→D, ABD→C, ACD→B, BCD→A,
A→BCD, B→ACD, C→ABD, D→ABC,
AB→CD, AC→BD, AD→BC, BC→AD, BD→AC, CD→AB

If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L).
Rule Generation (continued)
How can we efficiently generate rules from frequent itemsets?
- In general, confidence does not have an anti-monotone property: c(ABC→D) can be larger or smaller than c(AB→D)
- But confidence of rules generated from the same itemset does have an anti-monotone property, e.g., for L = {A,B,C,D}:

    c(ABC→D) ≥ c(AB→CD) ≥ c(A→BCD)

- Confidence is anti-monotone with respect to the number of items on the RHS of the rule
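The anti-monotone property above justifies growing rule consequents level by level and discarding a consequent as soon as it fails minconf. A sketch (function name is ours; the support counts passed in match the {Milk, Diaper, Beer} example from earlier slides):

```python
# Confidence-based rule generation from one frequent itemset: once a rule
# with consequent H fails minconf, supersets of H are not extended from it.
def gen_rules(L, support, minconf):
    """L: frozenset; support: dict mapping frozenset -> support count.
    Returns a list of (antecedent, consequent, confidence) triples."""
    rules = []
    consequents = [frozenset([x]) for x in L]  # start with 1-item consequents
    while consequents and len(consequents[0]) < len(L):
        next_level = set()
        for H in consequents:
            conf = support[L] / support[L - H]
            if conf >= minconf:
                rules.append((L - H, H, conf))
                # only surviving consequents are grown to the next level
                next_level.update(H | frozenset([x]) for x in L - H)
        consequents = list(next_level)
    return rules

sup = {frozenset(s): c for s, c in [
    (("Milk",), 4), (("Diaper",), 4), (("Beer",), 3),
    (("Milk", "Diaper"), 3), (("Milk", "Beer"), 2), (("Diaper", "Beer"), 3),
    (("Milk", "Diaper", "Beer"), 2)]}

rules = gen_rules(frozenset({"Milk", "Diaper", "Beer"}), sup, minconf=0.65)
print(len(rules))  # 4 rules survive, e.g. {Milk,Beer} -> {Diaper} with c=1.0
```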
Rule Generation for Apriori Algorithm
[Figure: lattice of the rules derived from {A,B,C,D}, from ABCD→{} at the top down to A→BCD at the bottom; CD→AB is found to be a low-confidence rule, so every rule below it whose consequent is a superset of AB (D→ABC, C→ABD, ...) is pruned]
Candidate rules are generated by merging two rules that share the same prefix in the rule consequent:
- join(CD→AB, BD→AC) would produce the candidate rule D→ABC
- Prune rule D→ABC if its subset AD→BC does not have high confidence
Effect of Support Distribution
[Figure: support distribution of a retail data set — the distribution is highly skewed, with many low-support items]
Multiple Minimum Support
- How to apply multiple minimum supports? MS(i): minimum support for item i
- Challenge: frequency is no longer anti-monotone
- Suppose: Sup(Milk, Coke) = 1.5% and Sup(Milk, Coke, Broccoli) = 0.5% — then {Milk, Coke} can be infrequent while {Milk, Coke, Broccoli} is frequent
Multiple Minimum Support (continued)

Item  MS(I)   Sup(I)
A     0.10%   0.25%
B     0.20%   0.26%
C     0.30%   0.29%
D     0.50%   0.05%
E     3%      4.20%

[Figure: the itemset lattice over A–E, showing which itemsets must still be examined under these item-specific minimum supports]
[Figure: the same lattice and MS(I)/Sup(I) table, with a different set of itemsets highlighted]
Multiple Minimum Support (Liu 1999)
Order the items according to their minimum support (in ascending order):
E.g., MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
Ordering: Broccoli, Salmon, Coke, Milk
Modifications to Apriori:
In traditional Apriori,
- A candidate (k+1)-itemset is generated by merging two frequent itemsets of size k
- The candidate is pruned if it contains any infrequent subsets of size k
Pattern Evaluation
- Association rule algorithms tend to produce too many rules, many of which are uninteresting or redundant
- Interestingness measures can be used to prune or rank the derived patterns
Computing Interestingness Measure
Given a rule X → Y, the information needed to compute rule interestingness can be obtained from a contingency table.

Contingency table for X → Y:

        Y     ¬Y
X       f11   f10   f1+
¬X      f01   f00   f0+
        f+1   f+0   |T|

f11: support of X and Y; f10: support of X and ¬Y; f01: support of ¬X and Y; f00: support of ¬X and ¬Y.

These counts are used to define various measures (support, confidence, lift, Gini, J-measure, etc.).
Drawback of Confidence

        Coffee  ¬Coffee
Tea     15      5        20
¬Tea    75      5        80
        90      10       100

Association rule: Tea → Coffee
Confidence = P(Coffee | Tea) = 15/20 = 0.75
But P(Coffee) = 0.9
⇒ Although confidence is high, the rule is misleading: P(Coffee | ¬Tea) = 75/80 = 0.9375
Statistical Independence
- Population of 1000 students: 600 know how to swim (S), 700 know how to bike (B), 420 know how to swim and bike (S,B)
- P(S∧B) = 420/1000 = 0.42 and P(S) × P(B) = 0.6 × 0.7 = 0.42
- P(S∧B) = P(S) × P(B) ⇒ statistical independence
- P(S∧B) > P(S) × P(B) ⇒ positively correlated
- P(S∧B) < P(S) × P(B) ⇒ negatively correlated
Statistical-based Measures
Measures that take into account statistical dependence:

    Lift = P(Y | X) / P(Y)
    Interest = P(X, Y) / ( P(X) P(Y) )
    PS = P(X, Y) − P(X) P(Y)
    φ-coefficient = ( P(X, Y) − P(X) P(Y) ) / sqrt( P(X)[1 − P(X)] P(Y)[1 − P(Y)] )
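All four measures can be computed directly from a 2×2 contingency table (f11, f10, f01, f00). A sketch following the slide formulas (the function name is ours; the sample call uses the Tea/Coffee table):

```python
# Lift, Interest, PS, and phi-coefficient from a 2x2 contingency table.
from math import sqrt

def measures(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    pxy, px, py = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    lift = (pxy / px) / py          # P(Y|X) / P(Y)
    interest = pxy / (px * py)      # equal to lift for a 2x2 table
    ps = pxy - px * py
    phi = ps / sqrt(px * (1 - px) * py * (1 - py))
    return lift, interest, ps, phi

# Tea -> Coffee example: lift = 0.75 / 0.9
lift, interest, ps, phi = measures(15, 5, 75, 5)
print(round(lift, 4))  # 0.8333
```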
Example: Lift/Interest

        Coffee  ¬Coffee
Tea     15      5        20
¬Tea    75      5        80
        90      10       100

Association rule: Tea → Coffee
Confidence = P(Coffee | Tea) = 0.75
But P(Coffee) = 0.9
⇒ Lift = 0.75 / 0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated)
Drawback of Lift & Interest

        Y    ¬Y
X       10   0     10
¬X      0    90    90
        10   90    100

Lift = 0.1 / (0.1 × 0.1) = 10

        Y    ¬Y
X       90   0     90
¬X      0    10    10
        90   10    100

Lift = 0.9 / (0.9 × 0.9) = 1.11

Statistical independence: if P(X,Y) = P(X)P(Y), then Lift = 1
Piatetsky-Shapiro: 3 properties a good measure M must satisfy:
- M(A,B) = 0 if A and B are statistically independent
- M(A,B) increases monotonically with P(A,B) when P(A) and P(B) remain unchanged
- M(A,B) decreases monotonically with P(A) [or P(B)] when P(A,B) and P(B) [or P(A)] remain unchanged
Example: 10 examples of contingency tables

Example  f11   f10   f01   f00
E1       8123  83    424   1370
E2       8330  2     622   1046
E3       9481  94    127   298
E4       3954  3080  5     2961
E5       2886  1363  1320  4431
E6       1500  2000  500   6000
E7       4000  2000  1000  3000
E8       4000  2000  2000  2000
E9       1720  7121  5     1154
E10      61    2483  4     7452
Property under Variable Permutation

        B    ¬B              A    ¬A
A       p    q          B    p    r
¬A      r    s          ¬B   q    s

Does M(A,B) = M(B,A)?
- Symmetric measures: support, lift, collective strength, cosine, Jaccard, etc.
- Asymmetric measures: confidence, conviction, Laplace, J-measure, etc.
Property under Row/Column Scaling
Grade-Gender example (Mosteller, 1968):

        Male  Female
High    2     3       5
Low     1     4       5
        3     7       10

After scaling the male column by 2x and the female column by 10x:

        Male  Female
High    4     30      34
Low     2     40      42
        6     70      76

Mosteller: the underlying association should be independent of the relative number of male and female students in the samples.
Property under Inversion Operation
[Figure: three pairs of binary transaction vectors (a), (b), (c) over N transactions — the inversion operation flips every 0 to 1 and vice versa; some measures change value under inversion even though the co-occurrence structure is mirrored]
Example: φ-Coefficient
The φ-coefficient is analogous to the correlation coefficient for binary variables.

        Y    ¬Y
X       60   10    70
¬X      10   20    30
        70   30    100

φ = (0.6 − 0.7 × 0.7) / sqrt(0.7 × 0.3 × 0.7 × 0.3) = 0.5238

        Y    ¬Y
X       20   10    30
¬X      10   60    70
        30   70    100

φ = (0.2 − 0.3 × 0.3) / sqrt(0.3 × 0.7 × 0.3 × 0.7) = 0.5238

The φ coefficient is the same for both tables.
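The equality claimed above is easy to verify numerically (the helper name is ours):

```python
# Checking that both contingency tables give the same phi-coefficient.
from math import sqrt

def phi(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    pxy, px, py = f11 / n, (f11 + f10) / n, (f11 + f01) / n
    return (pxy - px * py) / sqrt(px * (1 - px) * py * (1 - py))

print(round(phi(60, 10, 10, 20), 4), round(phi(20, 10, 10, 60), 4))
# 0.5238 0.5238
```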
Property under Null Addition

        B    ¬B              B    ¬B
A       p    q          A    p    q
¬A      r    s          ¬A   r    s + k

Adding k transactions that contain neither A nor B:
- Invariant measures: support, cosine, Jaccard, etc.
- Non-invariant measures: correlation, Gini, mutual information, odds ratio, etc.
Different Measures have Different Properties

[Table: 21 interestingness measures — Correlation, Lambda, Odds ratio, Yule's Q, Yule's Y, Cohen's kappa, Mutual Information, J-Measure, Gini Index, Support, Confidence, Laplace, Conviction, Interest, IS (cosine), Piatetsky-Shapiro's, Certainty factor, Added value, Collective strength, Jaccard, and Klosgen's K — with each measure's range and Yes/No entries for properties P1–P3 and operation-invariance properties O1–O4]
Support-based Pruning
[Figure: "All Itempairs" — histogram of the correlations of all itempairs, with counts from 0 to 1000 on the y-axis against correlation from −1 to 1 on the x-axis]
[Figures: correlation histograms for the itempairs that survive support-based pruning at increasingly strict support thresholds]
Support-based pruning eliminates mostly negatively correlated itemsets.

To study how the different interestingness measures relate to each other, the following steps are taken:
1. Generate 10000 contingency tables
2. Rank each table according to the different measures
3. Compute the pair-wise correlation between the measures
[Figure: without support pruning (all contingency tables), the rankings produced by the 21 measures — Conviction, Odds ratio, Collective Strength, Correlation, Interest, PS, CF, Jaccard, Yule Y, Reliability, Kappa, Klosgen, Yule Q, Confidence, Laplace, IS, Support, Lambda, Gini, J-measure, Mutual Info — plotted against correlation show little agreement]
[Figure: the same comparison after support-based pruning with 0.5% ≤ support ≤ 50% — agreement between the measures increases]
[Figure: after support-based pruning with 0.5% ≤ support ≤ 30% — the rankings of the measures become highly correlated]
Objective measure:
- Rank patterns based on statistics computed from data
- E.g., the 21 measures of association (support, confidence, Laplace, Gini, mutual information, Jaccard, etc.)

Subjective measure:
- Rank patterns according to the user's interpretation
- A pattern is subjectively interesting if it contradicts the user's expectation or if it is actionable
Interestingness via Unexpectedness
Need to model the user's expectation (domain knowledge).
[Figure: + and − symbols mark patterns expected or found to be frequent/infrequent; the combinations define expected patterns and unexpected patterns]
Interestingness via Unexpectedness — Web Data (Cooley et al 2001)
- Domain knowledge in the form of the web site structure
- Given an itemset F = {X1, X2, ..., Xk} of web pages, let L = number of links connecting the pages:
    lfactor = L / (k × (k−1))
    cfactor = 1 (if the graph formed by the pages is connected), 0 (if disconnected)
- Structure evidence = cfactor × lfactor
- Usage evidence = P(X1 ∧ X2 ∧ ... ∧ Xk) / P(X1 ∨ X2 ∨ ... ∨ Xk)
- Combine structure and usage evidence (e.g., via Dempster-Shafer theory) into a measure of interestingness
THANK YOU