Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.
Decision Trees
Andrew W. Moore Professor School of Computer Science Carnegie Mellon University
www.cs.cmu.edu/~awm awm@cs.cmu.edu 412-268-7599
Slide 1
Machine Learning Datasets
What is Classification?
Contingency Tables
OLAP (Online Analytical Processing)
What is Data Mining?
Searching for High Information Gain
Learning an unpruned decision tree recursively
Training Set Error
Test Set Error
Overfitting
Avoiding Overfitting
Information Gain of a real-valued input
Building Decision Trees with real-valued Inputs
Andrew's homebrewed hack: Binary Categorical Splits
Example Decision Trees
Slide 2
Here is a dataset
[Dataset excerpt: census records, one row per person. Attributes: age, employment, education, edunum, marital, job, relation, race, gender, hours, country, and the output attribute wealth (poor/rich). Some values are marked *MissingValue*.]
Classification
A Major Data Mining Operation: given one attribute (e.g. wealth), try to predict the value of new people's wealths by means of some of the other available attributes.
Applies to categorical outputs.
Categorical attribute: an attribute which takes on two or more discrete values. Also known as a symbolic attribute.
Real attribute: a column of real numbers.
Slide 5
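As a quick modern aside (not from the slides): a library such as scikit-learn can learn a decision tree classifier for exactly this kind of task. A minimal sketch, using made-up toy rows loosely echoing the census attributes above, restricted to numeric inputs to keep it short:

from sklearn.tree import DecisionTreeClassifier

# Each row: [age, years_of_education, hours_per_week]; toy values, not real census rows
X = [[39, 13, 40], [51, 13, 13], [39, 9, 40], [52, 9, 45],
     [31, 14, 50], [42, 13, 40], [30, 13, 40], [24, 13, 30]]
y = ["poor", "poor", "poor", "rich", "rich", "rich", "rich", "poor"]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[35, 14, 45]]))  # predicted wealth for a new person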
Today's lecture
Information Gain for measuring association between inputs and outputs Learning a decision tree classifier from data
Slide 6
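Since the first bullet is information gain, here is a minimal sketch of how it can be computed for categorical data (the formal definitions come from the information-theory material accompanying this lecture; the helper names are mine):

from collections import Counter
from math import log2

def entropy(values):
    # H(Y) = -sum_v p_v log2 p_v over the distinct values of Y
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def information_gain(xs, ys):
    # IG(Y|X) = H(Y) - sum_v P(X=v) H(Y | X=v)
    n = len(ys)
    conditional = 0.0
    for v in set(xs):
        subset = [y for x, y in zip(xs, ys) if x == v]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(ys) - conditional

# Toy example: here X determines Y exactly, so IG = H(Y) = 1 bit
xs = ["married", "married", "never", "never", "divorced", "married"]
ys = ["rich", "rich", "poor", "poor", "poor", "rich"]
print(information_gain(xs, ys))  # 1.0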
Slide 7
[Figure: a histogram over Marital Status]
Slide 8
Contingency Tables
A better name for a histogram:
Slide 11
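A contingency table is just a two-dimensional histogram of record counts, so it is trivial to build in code. A small sketch (the helper is mine, not from the slides):

from collections import Counter

def contingency_table(records, attr_a, attr_b):
    # Count how many records take each (value_a, value_b) combination
    return Counter((r[attr_a], r[attr_b]) for r in records)

records = [{"gender": "Male", "wealth": "poor"},
           {"gender": "Male", "wealth": "rich"},
           {"gender": "Female", "wealth": "poor"},
           {"gender": "Male", "wealth": "poor"}]
print(contingency_table(records, "gender", "wealth"))
# Counter({('Male', 'poor'): 2, ('Male', 'rich'): 1, ('Female', 'poor'): 1})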
Slide 12
Slide 13
Slide 14
[Figure: a contingency table with columns Male and Female]
Slide 15
Slide 17
Slide 18
Slide 19
Slide 20
Looking at ten tables: as much fun as watching CNN. Looking at 100 tables: as much fun as watching an infomercial.
Slide 21
Data Mining
Data Mining is all about automating the process of searching for patterns in the data. Which patterns are interesting? Which might be mere illusions? And how can they be exploited?
Slide 23
That's what we'll look at right now. And the answer will turn out to be the engine that drives decision tree learning.
Slide 24
Slide 25
Slide 28
Slide 30
40 Records
A Decision Stump
Slide 33
Recursion Step
Records in which cylinders = 4
Records in which cylinders = 5
Records in which cylinders = 6
Records in which cylinders = 8
Slide 34
Recursion Step
Slide 35
Recursively build a tree from the seven records in which there are four cylinders and the maker was based in Asia
Slide 36
Slide 37
Don't split a node if all matching records have the same output value.
Slide 38
Slide 39
Slide 40
Base Cases
Base Case One: if all records in the current data subset have the same output, then don't recurse.
Base Case Two: if all records have exactly the same set of input attributes, then don't recurse.
Slide 41
y = a XOR b
Slide 43
y = a XOR b
Slide 44
Create and return a non-leaf node with nX children. The ith child should be built by calling BuildTree(DSi, Output), where DSi consists of all those records in DataSet for which X = the ith distinct value of X.
Slide 45
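Putting the base cases and this recursion step together, a minimal sketch in Python (the nested-dict tree encoding and the choose_split parameter are my own illustrative choices; choose_split would return the attribute with the highest information gain, or None if no attribute splits the data):

def build_tree(dataset, output_attr, choose_split):
    outputs = [r[output_attr] for r in dataset]
    # Base case one: all records agree on the output -> leaf node
    if len(set(outputs)) == 1:
        return {"leaf": outputs[0]}
    attr = choose_split(dataset)
    # Base case two: nothing left to split on -> majority-vote leaf
    if attr is None:
        return {"leaf": max(set(outputs), key=outputs.count)}
    children = {}
    for v in set(r[attr] for r in dataset):
        subset = [r for r in dataset if r[attr] == v]  # DSi in the slide
        children[v] = build_tree(subset, output_attr, choose_split)
    return {"split_on": attr, "children": children}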
This quantity is called the training set error. The smaller the better.
Slide 47
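In code, the training set error is just the fraction of training records the tree misclassifies. A sketch, assuming a hypothetical tree_predict(tree, record) helper:

def training_set_error(tree, records, output_attr, tree_predict):
    # Fraction of the training records that the learned tree gets wrong
    wrong = sum(1 for r in records if tree_predict(tree, r) != r[output_attr])
    return wrong / len(records)  # the smaller the better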
Slide 48
Slide 49
Slide 50
Slide 51
Slide 52
Slide 53
Slide 56
The test set error is much worse than the training set error. Why?
Slide 57
An artificial example
We'll create a training dataset.
Five inputs, all bits, are generated in all 32 possible combinations. Output y = a copy of e, except that a random 25% of the records have y set to the opposite of e.
a b c d e y
0 0 0 0 0 0
0 0 0 0 1 0
0 0 0 1 0 0
0 0 0 1 1 1
0 0 1 0 0 1
: : : : : :
1 1 1 1 1 1
Slide 59
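The dataset above is easy to generate. A sketch that flips exactly 25% (8 of 32) of the y values, as described; the seed is arbitrary since the slides don't fix one:

import itertools
import random

random.seed(0)  # arbitrary seed
flip = set(random.sample(range(32), 8))  # exactly 25% of the 32 records
records = []
for i, (a, b, c, d, e) in enumerate(itertools.product([0, 1], repeat=5)):
    y = 1 - e if i in flip else e  # y copies e, except in corrupted records
    records.append({"a": a, "b": b, "c": c, "d": d, "e": e, "y": y})
print(len(records))  # 32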
32 records
Slide 60
Slide 61
Slide 62
Slide 63
Slide 64
32 records
Slide 66
In about 12 of the 16 records in this node the output will be 0, so this node will almost certainly predict 0.
In about 12 of the 16 records in this node the output will be 1, so this node will almost certainly predict 1.
Slide 67
1/4 of the test set will be wrongly predicted because the test record is corrupted; 3/4 of the test predictions will be fine.
Slide 68
Overfitting
Definition: if your machine learning algorithm fits noise (i.e. pays attention to parts of the data that are irrelevant), it is overfitting.
Fact (theoretical and empirical): if your machine learning algorithm is overfitting, then it may perform less well on test set data.
Slide 69
Avoiding overfitting
Usually we do not know in advance which variables are irrelevant, and it may depend on the context.
For example, if y = a AND b then b is an irrelevant variable only in the portion of the tree in which a=0
Slide 72
A chi-squared test
Suppose that mpg was completely uncorrelated with maker. What is the chance we'd have seen data of at least this apparent level of association anyway?
Slide 73
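That chance is exactly the p-value of a chi-squared test on the contingency table of the two attributes. A sketch using scipy (my choice of library; the counts below are hypothetical):

from scipy.stats import chi2_contingency

# Hypothetical counts: rows = mpg (good, bad), columns = maker (america, asia, europe)
table = [[10, 12, 8],
         [15, 5, 4]]
chi2, p, dof, expected = chi2_contingency(table)
print(p)  # small p => this much association is unlikely to arise by chance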
Slide 75
Pruning example
With MaxPchance = 0.1, you will see the following MPG decision tree:
Note the improved test set accuracy compared with the unpruned tree
Slide 76
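The pruning rule itself is simple: working bottom-up, compute pchance for each decision node's split and replace the node with a majority leaf whenever pchance exceeds MaxPchance. A sketch over the nested-dict tree encoding used earlier (pchance_of and majority_leaf are hypothetical helpers):

def prune(tree, max_pchance, pchance_of, majority_leaf):
    # Bottom-up: prune the children first, then reconsider this node
    if "leaf" in tree:
        return tree
    tree["children"] = {v: prune(c, max_pchance, pchance_of, majority_leaf)
                        for v, c in tree["children"].items()}
    if pchance_of(tree) > max_pchance:  # the split could plausibly be chance
        return majority_leaf(tree)      # so replace it with a leaf
    return tree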
MaxPchance
Good news: the decision tree can automatically adjust its pruning decisions according to the amount of apparent noise and data.
Bad news: the user must come up with a good value of MaxPchance. (Note: Andrew usually uses 0.05, which is his favorite value for any magic parameter.)
Good news: but with extra work, the best MaxPchance value can be estimated automatically by a technique called cross-validation.
Slide 77
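A sketch of that cross-validation idea, with hypothetical learn_pruned_tree and error_rate helpers standing in for the tree learner and evaluator:

def choose_maxpchance(records, k, learn_pruned_tree, error_rate,
                      candidates=(0.001, 0.01, 0.05, 0.1, 0.2)):
    # k-fold cross-validation: pick the MaxPchance with lowest held-out error
    folds = [records[i::k] for i in range(k)]
    best, best_err = None, float("inf")
    for p in candidates:
        err = 0.0
        for i in range(k):
            train = [r for j, fold in enumerate(folds) if j != i for r in fold]
            tree = learn_pruned_tree(train, max_pchance=p)
            err += error_rate(tree, folds[i]) / k
        if err < best_err:
            best, best_err = p, err
    return best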
MaxPchance
Technical note (dealt with in other lectures): MaxPchance is a regularization parameter.
[Figure: expected test set error plotted against decreasing MaxPchance; the high-bias regime is marked at one end of the curve.]
Slide 78
Decision trees are biased to prefer classifiers that can be expressed as trees. (This is not the same as saying they prefer the simplest classification scheme.)
Slide 79
Assume all inputs are Boolean and all outputs are Boolean. What is the class of Boolean functions that can be represented by decision trees? Answer: all Boolean functions.
Simple proof:
Take any Boolean function.
Convert it into a truth table.
Construct a decision tree in which each row of the truth table corresponds to one path through the decision tree.
Slide 80
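The proof is constructive, so it can be written directly as code. A sketch (the nested-dict tree encoding is mine): expand one root-to-leaf path per truth-table row.

def tree_from_truth_table(variables, f):
    # f maps a tuple of bits (one per variable) to an output bit
    def build(assignment):
        if len(assignment) == len(variables):
            return {"leaf": f(tuple(assignment))}  # one leaf per truth-table row
        var = variables[len(assignment)]
        return {"split_on": var,
                0: build(assignment + [0]),
                1: build(assignment + [1])}
    return build([])

# Example: y = a XOR b becomes a depth-2 tree with four leaves
xor_tree = tree_from_truth_table(["a", "b"], lambda bits: bits[0] ^ bits[1])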
Real-Valued inputs
What should we do if some of the inputs are real-valued?
[Dataset excerpt: the MPG data. Attributes: mpg (good/bad), cylinders, displacement, horsepower, weight, acceleration, modelyear, maker (america/asia/europe). displacement, horsepower, weight, and acceleration are real-valued.]
Slide 82
Hopeless: with such a high branching factor we will shatter the dataset and overfit. Note that pchance is 0.222 in the above; if MaxPchance were 0.05, the tree would end up pruned away to a single root node.
Slide 83
IG(Y|X:t) is the information gain for predicting Y if all you know is whether X is greater than or less than t.
Then define IG*(Y|X) = max_t IG(Y|X:t).
For each real-valued attribute, use IG*(Y|X) to assess its suitability as a split.
Slide 84
Computational Issues
You can compute IG*(Y|X) in time
R log R + 2 R ny
Where
R is the number of records in the node under consideration.
ny is the arity (number of distinct values) of Y.
How?
Sort the records according to increasing values of X. Then create a 2 × ny contingency table corresponding to the computation of IG(Y|X:xmin). Then iterate through the records, testing each threshold between adjacent values of X and incrementally updating the contingency table as you go. For a minor additional speedup, only test thresholds between adjacent records whose values of Y differ.
Slide 85
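A sketch of that scan in Python (helper names are mine). It sorts once, then slides each record from the "high" side of the contingency table to the "low" side; for brevity it tests every threshold between distinct X values rather than only those where Y changes:

from collections import Counter
from math import log2

def entropy_of_counts(counts, n):
    return -sum((c / n) * log2(c / n) for c in counts.values() if c > 0)

def best_threshold_ig(xs, ys):
    # Returns (IG*(Y|X), best threshold t) over binary splits X <= t vs X > t
    pairs = sorted(zip(xs, ys), key=lambda p: p[0])   # the R log R sort
    n = len(pairs)
    lo, hi = Counter(), Counter(y for _, y in pairs)  # the 2 x ny table
    h_y = entropy_of_counts(hi, n)
    best_ig, best_t = 0.0, None
    for i in range(n - 1):
        x, y = pairs[i]
        lo[y] += 1  # move record i from the high side to the low side
        hi[y] -= 1
        if pairs[i + 1][0] == x:
            continue  # no threshold fits between equal X values
        t = (x + pairs[i + 1][0]) / 2
        n_lo, n_hi = i + 1, n - i - 1
        conditional = (n_lo / n) * entropy_of_counts(lo, n_lo) \
                    + (n_hi / n) * entropy_of_counts(hi, n_hi)
        if h_y - conditional > best_ig:
            best_ig, best_t = h_y - conditional, t
    return best_ig, best_t

print(best_threshold_ig([1.0, 2.0, 3.0, 4.0], ["bad", "bad", "good", "good"]))
# (1.0, 2.5): splitting at 2.5 separates the classes perfectly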
Slide 86
Slide 87
Slide 88
LearnUnprunedTree(X, Y)
Input: X, a matrix of R rows and M columns, where Xij = the value of the jth attribute in the ith input datapoint. Each column consists of either all real values or all categorical values.
Input: Y, a vector of R elements, where Yi = the output class of the ith datapoint. The Yi values are categorical.
Output: an unpruned decision tree.

If all records in X have identical values in all their attributes (this includes the case where R < 2), return a Leaf Node predicting the majority output, breaking ties randomly.
If all values in Y are the same, return a Leaf Node predicting this value as the output.
Else
  For j = 1 .. M
    If the jth attribute is categorical: IGj = IG(Y|Xj)
    Else (the jth attribute is real-valued): IGj = IG*(Y|Xj) from about four slides back
  Let j* = argmaxj IGj (this is the splitting attribute we'll use)
  If the j*th attribute is categorical:
    For each value v of the j*th attribute
      Let Xv = the subset of rows of X in which Xij* = v. Let Yv = the corresponding subset of Y.
      Let Childv = LearnUnprunedTree(Xv, Yv)
    Return a decision tree node splitting on the j*th attribute. The number of children equals the number of values of the j*th attribute, and the vth child is Childv.
  Else (the j*th attribute is real-valued; let t be the best split threshold):
    Let XLO = the subset of rows of X in which Xij* <= t. Let YLO = the corresponding subset of Y.
    Let ChildLO = LearnUnprunedTree(XLO, YLO)
    Let XHI = the subset of rows of X in which Xij* > t. Let YHI = the corresponding subset of Y.
    Let ChildHI = LearnUnprunedTree(XHI, YHI)
    Return a decision tree node splitting on the j*th attribute, with two children corresponding to whether the j*th attribute is below or above the threshold t.
Slide 89
The same pseudocode as on the previous slide, with things to note:
Below the root node, there is no point testing categorical attributes that have already been split upon further up the tree. This is because all the values of that attribute will be the same, and IG must therefore be zero.
But it's worth retesting real-valued attributes, since they may have different values below the binary split, and may benefit from splitting further.
To achieve the above optimization, you should pass down through the recursion a current active set of attributes.
Pedantic detail: a third termination condition should occur if the best split attribute puts all its records in exactly one child (note that this means it and all other attributes have IG = 0).
Slide 90
Warning: unlike what went before, this is an editorial trick of the trade, not part of the official Decision Tree algorithm.
Slide 92
Slide 94
Slide 95
Slide 96
Conclusions
Decision trees are the single most popular data mining tool
Easy to understand Easy to implement Easy to use Computationally cheap
It's possible to get in trouble with overfitting. They do classification: predict a categorical output from categorical and/or real inputs.
Slide 97
Slide 99
Discussion
Instead of using information gain, why not choose the splitting attribute to be the one with the highest prediction accuracy?
Instead of greedily, heuristically, building the tree, why not do a combinatorial search for the optimal tree?
If you build a decision tree to predict wealth, and marital status, age and gender are chosen as attributes near the top of the tree, is it reasonable to conclude that those three inputs are the major causes of wealth?
...would it be reasonable to assume that attributes not mentioned in the tree are not causes of wealth?
...would it be reasonable to assume that attributes not mentioned in the tree are not correlated with wealth?
Slide 101