
Slide 21.1

Discriminant analysis

Discriminant analysis is used to estimate the relationship between a categorical dependent variable and a set of interval-scaled independent variables.

Naresh Malhotra and David Birks, Marketing Research, 3rd Edition, Pearson Education Limited 2007

Slide 21.2

Chapter outline

1. Basic concept
2. Relation to regression and ANOVA
3. Discriminant analysis model
4. Statistics associated with discriminant analysis
5. Conducting discriminant analysis
6. Multiple discriminant analysis
7. Stepwise discriminant analysis

Slide 21.3

Table 21.1 Similarities and differences among ANOVA, regression and discriminant analysis

Y = b0 + b1 X1 + b2 X2 + b3 X3

Slide 21.4

Discriminant analysis
Y = b0 + b1 X1 + b2 X2 + b3 X3
Discriminant analysis is a technique for analysing data when the dependent variable is categorical and the independent variables are interval in nature. The objectives of discriminant analysis are as follows:
- Development of discriminant functions, or linear combinations of the independent variables, that best discriminate between the categories of the dependent variable (groups).
- Examination of whether significant differences exist among the groups in terms of the independent variables.
- Determination of which predictor variables contribute most to the intergroup differences.
- Classification of cases to one of the groups based on the values of the independent variables.
- Evaluation of the accuracy of classification.

Slide 21.5

Discriminant analysis (Continued)

When the criterion variable has two categories, the technique is known as two-group discriminant analysis. When three or more categories are involved, the technique is referred to as multiple discriminant analysis. The main distinction is that, in the two-group case, it is possible to derive only one discriminant function. In multiple discriminant analysis, more than one function may be computed. In general, with G groups and k predictors, it is possible to estimate up to the smaller of G - 1 or k discriminant functions. The first function has the highest ratio of between-groups to within-groups sum of squares. The second function, uncorrelated with the first, has the second highest ratio, and so on. However, not all the functions may be statistically significant.
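The limit on the number of estimable functions follows directly from the group and predictor counts. A minimal sketch (the function name is illustrative, not part of any library):

```python
def max_discriminant_functions(num_groups: int, num_predictors: int) -> int:
    """Upper bound on the number of discriminant functions: min(G - 1, k)."""
    return min(num_groups - 1, num_predictors)

# Two-group case with 5 predictors: only one function can be derived.
print(max_discriminant_functions(2, 5))  # 1
# Three groups and 2 predictors: min(3 - 1, 2) = 2 functions.
print(max_discriminant_functions(3, 2))  # 2
```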

Slide 21.6

Geometric interpretation

[Figure: cases from group 1 and group 2 plotted against two independent variables, X1 and X2. Projecting the cases onto the discriminant axis D separates the two group distributions, G1 and G2, with minimal overlap.]


Slide 21.7

Discriminant analysis model

The discriminant analysis model involves linear combinations of the following form:

D = b0 + b1X1 + b2X2 + b3X3 + . . . + bkXk

where
D = discriminant score
b's = discriminant coefficients or weights
X's = predictor or independent variables

The coefficients or weights (b), are estimated so that the groups differ as much as possible on the values of the discriminant function. This occurs when the ratio of between-group sum of squares to within-group sum of squares for the discriminant scores is at a maximum.
Maximised ratio: between-group sum of squares / within-group sum of squares
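The two quantities above, the discriminant score for a single case and the between-to-within ratio that the coefficients are chosen to maximise, can be sketched as follows (all names and values are illustrative):

```python
def discriminant_score(b0, coefs, x):
    """D = b0 + b1*X1 + b2*X2 + ... + bk*Xk for one case."""
    return b0 + sum(b * xi for b, xi in zip(coefs, x))

def between_within_ratio(groups):
    """Ratio of between-group to within-group sum of squares for
    discriminant scores. `groups` is a list of groups, each a list
    of scores for the cases in that group."""
    all_scores = [d for g in groups for d in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group SS: group sizes times squared deviations of group
    # means from the grand mean.
    between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group SS: squared deviations of scores from their group mean.
    within = sum((d - sum(g) / len(g)) ** 2 for g in groups for d in g)
    return between / within

# Hypothetical coefficients and one case with two predictors:
print(discriminant_score(1.0, [2.0, 3.0], [1.0, 1.0]))  # 6.0
# Well-separated score groups give a large ratio:
print(between_within_ratio([[1, 2, 3], [7, 8, 9]]))  # 13.5
```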

Slide 21.8

Statistics associated with discriminant analysis

Canonical correlation. Canonical correlation measures the extent of association between the discriminant scores and the groups. It is a measure of association between the single discriminant function and the set of dummy variables that define the group membership.
Centroid. The centroid is the mean value of the discriminant scores for a particular group; there are as many centroids as there are groups, one for each. The means for a group on all the functions are the group centroids.

Classification matrix. Sometimes also called confusion or prediction matrix, the classification matrix contains the number of correctly classified and misclassified cases.

Slide 21.9

Statistics associated with discriminant analysis (Continued)

Discriminant function coefficients. The discriminant function coefficients (unstandardised) are the multipliers of the variables when the variables are in their original units of measurement.
Discriminant scores. The unstandardised coefficients are multiplied by the values of the variables. These products are summed and added to the constant term to obtain the discriminant scores.
Eigenvalue. For each discriminant function, the eigenvalue is the ratio of the between-group to the within-group sum of squares. Large eigenvalues imply superior functions.

Slide 21.10

Statistics associated with discriminant analysis (Continued)

F values and their significance. These are calculated from a one-way ANOVA, with the grouping variable serving as the categorical independent variable and each predictor serving as the metric dependent variable.
Group means and group standard deviations. These are computed for each predictor for each group.
Pooled within-group correlation matrix. The pooled within-group correlation matrix is computed by averaging the separate covariance matrices for all the groups.

Slide 21.11

Statistics associated with discriminant analysis (Continued)

Standardised discriminant function coefficients. The standardised discriminant function coefficients are the discriminant function coefficients used as the multipliers when the variables have been standardised to a mean of 0 and a variance of 1.
Structure correlations. Also referred to as discriminant loadings, the structure correlations represent the simple correlations between the predictors and the discriminant function.
Total correlation matrix. If the cases are treated as if they were from a single sample and the correlations computed, a total correlation matrix is obtained.
Wilks' λ. Sometimes also called the U statistic, Wilks' λ for each predictor is the ratio of the within-group sum of squares to the total sum of squares. Its value varies between 0 and 1. Large values of λ (near 1) indicate that the group means do not seem to be different; small values (near 0) indicate that the group means seem to be different.
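Wilks' λ for a single predictor can be sketched directly from its definition as within-group SS over total SS (the function name is illustrative):

```python
def wilks_lambda(groups):
    """Wilks' lambda for one predictor: within-group SS / total SS.
    `groups` is a list of lists of that predictor's values, one per group."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    total_ss = sum((v - grand_mean) ** 2 for v in all_vals)
    within_ss = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return within_ss / total_ss

# Well-separated groups: small lambda (group means seem different).
print(wilks_lambda([[1, 2, 3], [7, 8, 9]]))   # ~0.069
# Identical groups: lambda = 1 (group means do not seem different).
print(wilks_lambda([[1, 2, 3], [1, 2, 3]]))   # 1.0
```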

Slide 21.12

Figure 21.1 Conducting discriminant analysis

Slide 21.13

Conducting discriminant analysis formulate the problem

- Identify the objectives, the dependent variable and the independent variables.
- The dependent variable must consist of two or more mutually exclusive and collectively exhaustive categories.
- The predictor variables should be selected based on a theoretical model, previous research, or the experience of the researcher.
- One part of the sample, called the estimation or analysis sample, is used for estimation of the discriminant function. The other part, called the holdout or validation sample, is reserved for validating the discriminant function.
- Often the distribution of the number of cases in the analysis and validation samples follows the distribution in the total sample.

Slide 21.14

Example: We want to determine the salient characteristics of families that have visited a skiing resort during the last two years. The households that visited the resort during the last two years are coded as 1, and those that did not visit are coded as 2.

Table 21.2

Slide 21.15

Information on resort visits: analysis sample (Continued)

Table 21.2 (Continued)

Slide 21.16

Information on resort visits: holdout sample

Table 21.3

Slide 21.17

Conducting discriminant analysis Estimate the discriminant function coefficients

The direct method involves estimating the discriminant function so that all the predictors are included simultaneously.
In stepwise discriminant analysis, the predictor variables are entered sequentially, based on their ability to discriminate among groups.

Slide 21.18

Results of two-group discriminant analysis

Table 21.4
The two groups are separated more in terms of income than the other variables.

Low correlations among the independent variables. Small values of lambda mean that the groups differ on these variables.

Only income, holiday and household size significantly differentiate between those who visited a resort and those who did not.

Slide 21.19

Results of two-group discriminant analysis (Continued)

Because there are two groups, only one discriminant function is estimated. The eigenvalue associated with this function is 1.782, and it accounts for 100% of the explained variance.

Table 21.4 (Continued)

The canonical correlation associated with this function is 0.8007. The square of this correlation, 0.64, indicates that 64% of the variance in the dependent variable is explained by this model.
The significance value indicates that the predictors significantly discriminate between the groups.

Large standardised coefficients contribute more to the discriminating power of the function, so the predictor with the largest standardised coefficient is the most important independent variable in discriminating between the groups.
The signs of the coefficients of all the independent variables are positive, which suggests that higher family income, household size, importance attached to a family skiing holiday, attitude towards travel and age make it more likely that a family visits the resort.
Structure matrix: indicates the relative importance of the predictors.

Slide 21.20

Table 21.4 (Continued)

Results of two-group discriminant analysis (Continued)

Group centroids give the value of the discriminant function evaluated at the group means.

Slide 21.21

Results of two-group discriminant analysis (Continued)

Table 21.4 (Continued)

Slide 21.22

Conducting discriminant analysis Determine the significance of discriminant function

The null hypothesis that, in the population, the means of all discriminant functions in all groups are equal can be statistically tested.
In SPSS this test is based on Wilks' λ. If several functions are tested simultaneously (as in the case of multiple discriminant analysis), the Wilks' λ statistic is the product of the univariate λ for each function. The significance level is estimated based on a chi-square transformation of the statistic. If the null hypothesis is rejected, indicating significant discrimination, one can proceed to interpret the results.

Slide 21.23

Conducting discriminant analysis interpret the results

The interpretation of the discriminant weights, or coefficients, is similar to that in multiple regression analysis. Given the multicollinearity in the predictor variables, there is no unambiguous measure of the relative importance of the predictors in discriminating between the groups. With this caveat in mind, we can obtain some idea of the relative importance of the variables by examining the absolute magnitude of the standardised discriminant function coefficients. Some idea of the relative importance of the predictors can also be obtained by examining the structure correlations, also called canonical loadings or discriminant loadings. These simple correlations between each predictor and the discriminant function represent the variance that the predictor shares with the function. Another aid to interpreting discriminant analysis results is to develop a characteristic profile for each group by describing each group in terms of the group means for the predictor variables.

Slide 21.24

Conducting discriminant analysis Assess validity of discriminant analysis

Many computer programs, such as SPSS, offer a leave-one-out cross-validation option. The discriminant weights, estimated by using the analysis sample, are multiplied by the values of the predictor variables in the holdout sample to generate discriminant scores for the cases in the holdout sample. The cases are then assigned to groups based on their discriminant scores and an appropriate decision rule. The hit ratio, or the percentage of cases correctly classified, can then be determined by summing the diagonal elements and dividing by the total number of cases. It is helpful to compare the percentage of cases correctly classified by discriminant analysis to the percentage that would be obtained by chance. Classification accuracy achieved by discriminant analysis should be at least 25% greater than that obtained by chance.
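The hit-ratio calculation and the comparison against chance can be sketched as follows (the 25% margin follows the rule of thumb above; function names are illustrative, and proportional chance is computed here as the sum of squared group proportions):

```python
def hit_ratio(matrix):
    """Percentage of correctly classified cases: sum of the diagonal
    of the classification matrix divided by the total number of cases.
    `matrix` is square; rows are actual groups, columns are predicted."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

def beats_chance(hit, group_sizes, margin=0.25):
    """True if the hit ratio exceeds the proportional-chance criterion
    (sum of squared group proportions) by at least `margin` (25%)."""
    total = sum(group_sizes)
    chance = sum((n / total) ** 2 for n in group_sizes)
    return hit >= chance * (1 + margin)

# Hypothetical holdout classification matrix: 12 + 13 = 25 of 30 correct.
hr = hit_ratio([[12, 3], [2, 13]])
print(round(hr, 3))                    # 0.833
# Equal groups of 15: chance = 0.5, threshold = 0.625, so this passes.
print(beats_chance(hr, [15, 15]))      # True
```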

Slide 21.25

Results of three-group discriminant analysis

Table 21.5

Slide 21.26

Results of three-group discriminant analysis (Continued) Table 21.5 (Continued)

Slide 21.27

Results of three-group discriminant analysis (Continued)

Table 21.5 (Continued)

Slide 21.28

Results of three-group discriminant analysis (Continued)

Table 21.5 (Continued)

Slide 21.29

Figure 21.2 All-groups scattergram

Slide 21.30

Figure 21.3 Territorial map

Slide 21.31

Stepwise discriminant analysis

Stepwise discriminant analysis is analogous to stepwise multiple regression in that the predictors are entered sequentially based on their ability to discriminate between the groups. An F ratio is calculated for each predictor by conducting a univariate analysis of variance in which the groups are treated as the categorical variable and the predictor as the criterion variable. The predictor with the highest F ratio is the first to be selected for inclusion in the discriminant function, if it meets certain significance and tolerance criteria. A second predictor is added based on the highest adjusted or partial F ratio, taking into account the predictor already selected.

Slide 21.32

Stepwise discriminant analysis (Continued)

Each predictor selected is tested for retention based on its association with other predictors selected. The process of selection and retention is continued until all predictors meeting the significance criteria for inclusion and retention have been entered in the discriminant function. The selection of the stepwise procedure is based on the optimizing criterion adopted. The Mahalanobis procedure is based on maximising a generalised measure of the distance between the two closest groups. The order in which the variables were selected also indicates their importance in discriminating between the groups.
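The first selection step described above can be sketched by computing a univariate one-way ANOVA F ratio for each predictor and picking the largest (names are illustrative; real stepwise procedures also apply significance and tolerance tests at each step):

```python
def anova_f(groups):
    """One-way ANOVA F ratio: (SSB / (G - 1)) / (SSW / (N - G)),
    where `groups` holds one list of a predictor's values per group."""
    all_vals = [v for g in groups for v in g]
    n, g_count = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ssb / (g_count - 1)) / (ssw / (n - g_count))

def first_stepwise_pick(predictors):
    """First step of stepwise selection: the predictor with the highest
    univariate F ratio. `predictors` maps name -> per-group value lists."""
    return max(predictors, key=lambda name: anova_f(predictors[name]))

# Hypothetical data: income separates the groups far better than age.
data = {"income": [[1, 2, 3], [7, 8, 9]], "age": [[1, 2, 3], [2, 3, 4]]}
print(first_stepwise_pick(data))   # income
```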

Slide 21.33

SPSS Windows
The DISCRIMINANT program performs both two-group and multiple discriminant analysis. To select this procedure using SPSS for Windows, click:

Analyze > Classify > Discriminant

To run logit analysis or logistic regression using SPSS for Windows, click:

Analyze > Regression > Binary Logistic

Slide 21.34

SPSS Windows: two-group discriminant

1. Select ANALYZE from the SPSS menu bar.
2. Click CLASSIFY and then DISCRIMINANT.
3. Move visit into the GROUPING VARIABLE box. Click DEFINE RANGE. Enter 1 for MINIMUM and 2 for MAXIMUM. Click CONTINUE.
4. Move income, travel, vacation, hsize and age into the INDEPENDENTS box.
5. Select ENTER INDEPENDENTS TOGETHER (default option).
6. Click on STATISTICS. In the pop-up window, in the DESCRIPTIVES box check MEANS and UNIVARIATE ANOVAS. In the MATRICES box check WITHIN-GROUP CORRELATIONS. Click CONTINUE.
7. Click CLASSIFY.... In the pop-up window, in the PRIOR PROBABILITIES box check ALL GROUPS EQUAL (default). In the DISPLAY box check SUMMARY TABLE and LEAVE-ONE-OUT CLASSIFICATION. In the USE COVARIANCE MATRIX box check WITHIN-GROUPS. Click CONTINUE. Click OK.

Slide 21.35

HBAT Case Studies

The X4 variable indicates the region in which the firm was located, i.e. North America or outside North America. The HBAT management team is interested in any differences in perceptions between those customers served by the US sales force and those customers outside the US who are served by independent distributors. The management team wants to see whether the other areas of operation (variables X6 to X18) are viewed differently by these two sets of customers. This inquiry follows the obvious need for management to always strive to better understand their customers, in this instance by focusing on any differences that may occur between geographic areas. If any perceptions of HBAT are found to differ significantly between firms in these two regions, the company would then be able to develop strategies to remedy any perceived deficiencies and develop differentiated strategies to accommodate different perceptions.

Slide 21.36

Analysis using SPSS

The dependent variable is X4. The independent variables are X6 to X18, used to discriminate between firms in each area. Estimation model: the objective is to identify the set of independent variables (HBAT perceptions) that maximally differentiates between the two groups of customers.

Slide 21.37

Assessing group differences: In profiling the two groups, we can identify the variables with the largest differences in the group means (X6, X11, X12, X13 and X17). Repeat the same analysis using the stepwise method. Carry out profiling of each group on these variables to understand the differences between them.

Slide 21.38

Analysis and managerial implications

We see a varied profile between these two groups on these five variables.
Group 0: Customers in the US have higher perceptions on three variables:
- X6 Product quality
- X13 Competitive pricing
- X11 Product line
Group 1: Customers outside the US have higher perceptions on these variables:
- X7 E-commerce
- X17 Price flexibility
The US customers have much better perceptions of the HBAT products, whereas the customers outside the US feel better about pricing issues and e-commerce.

Management should use these results to develop strategies that accentuate these strengths and develop additional strengths to complement them.