
ANalysis Of VAriance (ANOVA)

The basic purpose of the analysis of variance is to test the homogeneity of several population means. Why do we need a new procedure, the analysis of variance, to compare the population means when we already have the t-test available? In comparing 3 means, you could test each of the three pairs of hypotheses:

H0: μ1 = μ2,  H0: μ1 = μ3,  H0: μ2 = μ3

To compare 4 sample means we would need 6 tests, and for 5 sample means we would need 10; moreover, carrying out many separate t-tests inflates the overall chance of a Type I error. ANOVA instead provides one overall test to judge the equality of several means. That is, when we have three or more samples to consider at a time, the ANOVA method is used to test the hypothesis that all the samples are drawn from populations with the same mean. For example, a farmer may be interested in comparing the average yields of more than two fertilizers, or a hospital might want to compare three brands of painkiller before adopting one.
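The number of pairwise tests quoted above is just "k choose 2"; a quick check using the Python standard library:

```python
from math import comb

# Number of pairwise t-tests needed to compare k means is C(k, 2).
for k in (3, 4, 5):
    print(f"k = {k}: {comb(k, 2)} pairwise tests")
# k = 3: 3, k = 4: 6, k = 5: 10 -- matching the counts in the text
```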

The term Analysis of Variance was introduced by Prof. R. A. Fisher.

The quantity of information contained in a sample is affected by various causes that the experimenter may or may not be able to control. ANOVA is used to determine how these different causes affect the average response, i.e., the mean.

Variation is inherent in nature. The total variation in any set of numerical data is due to a number of causes, which may be classified as:
(i) Assignable causes
(ii) Chance causes

Variation due to assignable causes can be detected and measured, whereas variation due to chance causes is beyond human control and cannot be traced separately.

Definition: According to R. A. Fisher, Analysis of Variance is the separation of variance due to one group of causes from the variance due to the other group. By this technique, the total variation in the sample data is expressed as the sum of its non-negative components, where each of these components is a measure of the variation due to some specific independent cause. The ANOVA consists of two steps:
(i) The estimation of the amount of variation due to each of the independent causes separately
(ii) Comparing the estimates due to assignable causes with the estimates due to chance causes, the latter being known as error


The ANOVA test is based on the test statistic F (or variance ratio). It should be clearly understood that the ANOVA technique is not designed to test the equality of several population variances; rather, its objective is to test the equality of several means.

OBJECTIVES:
(i) The ANOVA technique enables us to compare several population means simultaneously, and thus results in a lot of savings in terms of time and money compared with running several experiments, each comparing two population means at a time.
(ii) The ANOVA technique is used in agricultural experiments and in all types of design of experiments (i.e., the way that a sample is selected) in diverse fields such as industry, education, psychology, business, etc.

ANOVA has two categories:
(i) One-way classification
(ii) Two-way classification

ONE WAY CLASSIFICATION:
Assumptions in one way classification:
(i) The populations from which the samples were drawn must be normally distributed.
(ii) The samples must be independent.
(iii) The variances of the populations must be equal.
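In practice these assumptions can be checked informally in software. A minimal sketch with SciPy, on sample data invented purely for illustration: the Shapiro-Wilk test addresses normality of each sample, and Levene's test addresses equality of variances.

```python
from scipy import stats

# Three invented samples, one per class (for illustration only).
s1 = [22, 25, 24, 26, 23]
s2 = [28, 27, 29, 30, 26]
s3 = [18, 20, 19, 21, 17]

# (i) Normality: Shapiro-Wilk test on each sample (H0: sample is normal).
for s in (s1, s2, s3):
    _, p = stats.shapiro(s)
    print(f"Shapiro-Wilk p = {p:.3f}")

# (iii) Equal variances: Levene's test (H0: population variances are equal).
_, p = stats.levene(s1, s2, s3)
print(f"Levene p = {p:.3f}")
```

Large p-values give no evidence against the assumptions; small p-values suggest a violation.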

Let us suppose that n sample observations of a random variable X are divided into k classes (samples) on the basis of some criterion, i.e., the k samples are classified by a single criterion; hence the name one-way.
Xij = jth member of the ith sample or class
n = n1 + n2 + ... + nk
x̄i = mean of the ith class = (1/ni) Σj Xij, where i = 1, 2, ..., k
x̄ = grand mean = (1/n) Σi Σj Xij = (1/n) Σi ni x̄i
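A small numeric illustration of this notation, on made-up data with k = 3 classes of unequal sizes:

```python
# Made-up data: k = 3 classes, with n1 = 3, n2 = 4, n3 = 3.
samples = [[4, 6, 5], [8, 7, 9, 8], [3, 2, 4]]

n = sum(len(s) for s in samples)                  # n = n1 + n2 + n3 = 10
class_means = [sum(s) / len(s) for s in samples]  # the class means x-bar_i
grand_mean = sum(sum(s) for s in samples) / n     # the grand mean x-bar

print(n, class_means, grand_mean)  # 10 [5.0, 8.0, 3.0] 5.6
```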

Null and alternative hypotheses:
H0: μ1 = μ2 = ... = μk
H1: at least one of the means is different from the others

Steps involved in One Way:

Step 1: Compute the grand total of all observations:
G = Σi Σj Xij
Step 2: Compute the Correction Factor for the mean:
CF = G²/n

Step 3: Compute the Raw Sum of Squares = the sum of the squares of all the observations:
RSS = Σi Σj Xij²

Step 4: Total S.S. = RSS − CF

Step 5: Compute Ti = Σj Xij = the sum of all observations in the ith class, i = 1, 2, ..., k.
Step 6: Between Classes S.S. = Σi (Ti²/ni) − CF
This measures the variation between the k samples.
Step 7: Within Classes S.S. (or Error S.S.) = Total S.S. − Between Classes S.S.
This measures the variation due to differences within individual samples, i.e., the pooled variation within the k individual samples.
Degrees of freedom:
(i) For Total S.S.: since the Total S.S. involves n squared observations, its degrees of freedom are (n − 1).
(ii) For Between Classes: since this involves k squared class totals, its degrees of freedom are (k − 1).


(iii) For Within Classes: here the degrees of freedom are
df = (n1 − 1) + (n2 − 1) + ... + (nk − 1) = (n − k)
Note that df(Total) = df(Between Classes) + df(Within Classes).
Step 8: Draw the ANOVA table as given below:

Source of variation          | d.f.  | Sum of Squares | Mean S.S. = (S.S.)/(d.f.) | Variance Ratio (F)
Between Samples              | k − 1 | BSS            | MBSS = BSS/(k − 1)        | F = MBSS/MWSS
Within Samples (i.e., Error) | n − k | WSS            | MWSS = WSS/(n − k)        |
Total                        | n − 1 |                |                           |
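Steps 1 through 8 can be sketched end to end in Python; the data below are made up (k = 3 samples of sizes 3, 4 and 3):

```python
# Steps 1-8 of one-way ANOVA on made-up data.
samples = [[4, 6, 5], [8, 7, 9, 8], [3, 2, 4]]
k = len(samples)
n = sum(len(s) for s in samples)

G = sum(sum(s) for s in samples)                  # Step 1: grand total
CF = G**2 / n                                     # Step 2: correction factor
RSS = sum(x**2 for s in samples for x in s)       # Step 3: raw sum of squares
TSS = RSS - CF                                    # Step 4: total S.S.
T = [sum(s) for s in samples]                     # Step 5: class totals Ti
BSS = sum(t**2 / len(s) for t, s in zip(T, samples)) - CF   # Step 6: between S.S.
WSS = TSS - BSS                                   # Step 7: within (error) S.S.

MBSS = BSS / (k - 1)                              # Step 8: mean squares and F
MWSS = WSS / (n - k)
F = MBSS / MWSS
print(round(TSS, 1), round(BSS, 1), round(WSS, 1), round(F, 1))
# 50.4 44.4 6.0 25.9
```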


Step 9: Find the critical value of the test statistic F for (k − 1, n − k) d.f. at the desired level of significance, say α, from the F tables. If the computed value of the test statistic is greater than the tabulated value, reject H0; otherwise accept it.
Step 10: Write the conclusion in simple language.

CRITICAL DIFFERENCE:
If there is a significant difference between the means, we can find which pair(s) of means differ significantly. For this, we calculate the least significant difference (LSD) at the given level of significance. This least significant difference is known as the critical difference (CD). For the pair of means (x̄i, x̄j):

CD or LSD = t(n − k)(α/2) × √(MWSS × (1/ni + 1/nj))
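Step 9 can be carried out without printed tables; a sketch using SciPy's F distribution, where the BSS and WSS values come from a made-up example with k = 3 and n = 10:

```python
from scipy import stats

k, n = 3, 10
BSS, WSS = 44.4, 6.0                        # made-up sums of squares
F = (BSS / (k - 1)) / (WSS / (n - k))       # variance ratio
F_crit = stats.f.ppf(0.95, k - 1, n - k)    # tabulated F at alpha = 0.05
print(f"F = {F:.2f}, critical value = {F_crit:.2f}, reject H0: {F > F_crit}")
```

Here F is far above the critical value, so H0 (equal means) would be rejected at the 5% level.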

If the difference |x̄i − x̄j| between any two means is greater than the CD or LSD, it is said to be significant; otherwise it is not significant.
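A sketch of the critical-difference calculation with SciPy's t distribution, on the same kind of made-up numbers (error mean square 6/7 with 7 error d.f.; two classes of sizes 3 and 4 with means 5.0 and 8.0):

```python
from math import sqrt
from scipy import stats

n, k, alpha = 10, 3, 0.05
MWSS = 6.0 / (n - k)                      # error mean square from the ANOVA table
t = stats.t.ppf(1 - alpha / 2, n - k)     # t_{n-k}(alpha/2)
ni, nj = 3, 4                             # sizes of the two classes compared
CD = t * sqrt(MWSS * (1 / ni + 1 / nj))   # least significant difference
diff = abs(5.0 - 8.0)                     # |x-bar_i - x-bar_j|
print(f"CD = {CD:.3f}, |difference| = {diff}, significant: {diff > CD}")
```

Since the observed difference of 3.0 exceeds the CD, this pair of means would be declared significantly different.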