
FACTOR ANALYSIS

Introduction:
In research it is often desirable to collect as much data as possible in order to arrive at a result, so we tend to collect data on many variables. But after collecting the data, one faces the problem of how to analyze that many variables: as the number of variables increases, the number of correlations increases even faster. The problem then becomes one of compressing those variables into a manageable number. This task of data reduction, or summarization of data, is achieved by factor analysis.
Factor analysis is a multivariate statistical technique used for data reduction by identifying an underlying structure in the data. For example, suppose we have collected data on the causes of breakdowns in a bus depot; the causes become variables. Assume the number of variables has been found to be 40. Factor analysis can reduce these 40 variables to, say, 5 factors or components, each of which is a weighted combination of the original 40 variables.
Purpose of Factor Analysis:
Factor analysis is used as a data reduction technique while maintaining as much of the original information as possible. That is, the variance in the original data (say 100%) can be explained to a large extent (say around 80%) by an optimally reduced number of variables. If we insist on explaining 100% of the variance, then in most cases the number of reduced variables will equal the number of original variables.
A qualitative and quantitative distinction can be made in a large set of data (e.g. price, quality, speed). Factor analysis is used to find an underlying structure in such a set of data.
Factor analysis can also be used for scoring: the researcher can find the weights attached to the corresponding variables, and these weights, along with the responses, can be used to arrive at an overall score for each respondent.
Factor analysis can transform the data into variables that are uncorrelated or mostly uncorrelated with each other. Such data can then be used for further analysis, such as multiple regression.
STEPS IN ARRIVING AT PRINCIPAL COMPONENTS/FACTORS:
In general the process of factor analysis can be divided into three major steps:
i) Formulation of the data set.
ii) Estimation of the correlation/covariance matrix.
iii) Extraction and rotation of factors.
Now let us look at these steps and the working methodology in detail. A single example is used throughout this chapter to illustrate the various calculations involved.
i) Formulation of the data set:
The data set is to be formulated in accordance with the objective of the research. The variables must be measured on an interval or ratio scale, and an appropriate sample size has to be taken: generally it is better, though not mandatory, to take a sample size of about 4 or 5 times the number of variables.
The following example analyzes socioeconomic data provided by Harman (1976). The
five variables represent total population, median school years, total employment,
miscellaneous professional services, and median house value. Each observation
represents one of twelve census tracts in the Los Angeles Standard Metropolitan
Statistical Area.
Table 1

Total        Median school   Total         Miscellaneous            Median
population   years           employment    professional services    house value
5700         12.8            2500          270                      25000
1000         10.9             600           10                      10000
3400          8.8            1000           10                       9000
3800         13.6            1700          140                      25000
4000         12.8            1600          140                      25000
8200          8.3            2600           60                      12000
1200         11.4             400           10                      16000
9100         11.5            3300           60                      14000
9900         12.5            3400          180                      18000
9600         13.7            3600          390                      25000
9600          9.6            3300           80                      12000
9400         11.4            4000          100                      13000
ii) Formulation of the correlation or covariance matrix:
The data set from the previous step is converted into a correlation or a covariance matrix. Here we will see how to form the correlation matrix in our example. A correlation matrix shows how the variables are correlated with one another; its diagonal elements are the correlations of each variable with itself and are therefore equal to 1. Since factor analysis groups the variables which are correlated with each other, the correlation matrix should not be an identity matrix. This is tested using Bartlett's test of sphericity (discussed later in the chapter).
A is the 12 x 5 data matrix of Table 1, with one row per census tract and one column per variable (X1 = total population, X2 = median school years, X3 = total employment, X4 = miscellaneous professional services, X5 = median house value):

A =
5700   12.8   2500   270   25000
1000   10.9    600    10   10000
3400    8.8   1000    10    9000
3800   13.6   1700   140   25000
4000   12.8   1600   140   25000
8200    8.3   2600    60   12000
1200   11.4    400    10   16000
9100   11.5   3300    60   14000
9900   12.5   3400   180   18000
9600   13.7   3600   390   25000
9600    9.6   3300    80   12000
9400   11.4   4000   100   13000

mean(A) = [ 6242   11.44   2333   121   17000 ]
Centering each column about its mean gives the deviation matrix

x = A - mean(A) = 1.0e+003 *

-0.5417   0.0014   0.1667   0.1492   8.0000
-5.2417  -0.0005  -1.7333  -0.1108  -7.0000
-2.8417  -0.0026  -1.3333  -0.1108  -8.0000
-2.4417   0.0022  -0.6333   0.0192   8.0000
-2.2417   0.0014  -0.7333   0.0192   8.0000
 1.9583  -0.0031   0.2667  -0.0608  -5.0000
-5.0417  -0.0000  -1.9333  -0.1108  -1.0000
 2.8583   0.0001   0.9667  -0.0608  -3.0000
 3.6583   0.0011   1.0667   0.0592   1.0000
 3.3583   0.0023   1.2667   0.2692   8.0000
 3.3583  -0.0018   0.9667  -0.0408  -5.0000
 3.1583  -0.0000   1.6667  -0.0208  -4.0000
S = x'*x = 1.0e+008 *

1.3017   0.0000   0.4567   0.0191   0.0540
0.0000   0.0000   0.0000   0.0000   0.0011
0.4567   0.0000   0.1695   0.0081   0.1060
0.0191   0.0000   0.0081   0.0015   0.0626
0.0540   0.0011   0.1060   0.0626   4.4600

std(A) = 1.0e+003 * [ 6.3675   0.0018   1.2412   0.1149   3.4400 ]
D^(-1/2), the diagonal matrix of reciprocal standard deviations, is

D^(-1/2) =
0.0002   0        0        0        0
0        0.5597   0        0        0
0        0        0.0008   0        0
0        0        0        0.0087   0
0        0        0        0        0.0003
R = (1/11) * D^(-1/2) * S * D^(-1/2) =

1.0000   0.0098   0.9724   0.4389   0.0224
0.0098   1.0000   0.1543   0.6914   0.8631
0.9724   0.1543   1.0000   0.5147   0.1219
0.4389   0.6914   0.5147   1.0000   0.7777
0.0224   0.8631   0.1219   0.7777   1.0000
When using the covariance matrix the method is entirely similar, except that the covariance matrix of the variables is formulated instead of the correlation matrix.
1/11 * x'*x =

11833560.6    59.92        4152121.21   173507.58   490909.09
59.92         3.19         342.12       141.96      9818.18
4152121.21    342.12       1540606.06   73424.24    963636.36
173507.58     141.96       73424.24     13208.33    569090.91
490909.09     9818.18      963636.36    569090.91   40545454.5
Before finding the correlation matrix, we standardize the observations so that the result is scale invariant; the correlation matrix is then found from the standardized values.
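
The construction above is mechanical enough to script. A minimal sketch in NumPy (the variable names are illustrative, not part of the example):

    import numpy as np

    # Table 1: twelve census tracts. Columns: X1 = total population,
    # X2 = median school years, X3 = total employment,
    # X4 = miscellaneous professional services, X5 = median house value.
    A = np.array([
        [5700, 12.8, 2500, 270, 25000],
        [1000, 10.9,  600,  10, 10000],
        [3400,  8.8, 1000,  10,  9000],
        [3800, 13.6, 1700, 140, 25000],
        [4000, 12.8, 1600, 140, 25000],
        [8200,  8.3, 2600,  60, 12000],
        [1200, 11.4,  400,  10, 16000],
        [9100, 11.5, 3300,  60, 14000],
        [9900, 12.5, 3400, 180, 18000],
        [9600, 13.7, 3600, 390, 25000],
        [9600,  9.6, 3300,  80, 12000],
        [9400, 11.4, 4000, 100, 13000],
    ])

    n = A.shape[0]
    x = A - A.mean(axis=0)                        # centered data
    S = x.T @ x                                   # sums of squares and cross-products
    D_inv_sqrt = np.diag(1 / A.std(axis=0, ddof=1))
    R = D_inv_sqrt @ (S / (n - 1)) @ D_inv_sqrt   # correlation matrix
    print(np.round(R, 4))

Printing R reproduces, to rounding, the correlation matrix shown above.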
Method of Extraction
There are various methods of extracting factors from the correlation/covariance matrix.
They are
1. Principal component analysis
2. Common factor analysis
a) Principal factor analysis
b) Maximum likelihood method
c) Alpha method
d) Image factoring method
e) Unweighted least square method
f) Generalized least square method
In our context we will discuss principal component analysis, principal factor analysis and
the maximum likelihood method.
Principal component analysis
This method finds factors or components which are linear combinations, i.e. weighted sums, of the variables in the data set. Mathematically, the first component can be represented as

PC1 = W11 X1 + W12 X2 + W13 X3 + ... + W1p Xp

where PC1 is the first principal component and W11, W12, ..., W1p are the weights of the respective variables. The first principal component accounts for the maximum variance in the data set, and the variance accounted for by all p components together equals the total variance in the data. Thus an m-th component can be represented as

PCm = Wm1 X1 + Wm2 X2 + Wm3 X3 + ... + Wmp Xp

So, if there are p variables there will be p components or factors.
Thus the data set with p variables and n observations is laid out as

          X1     X2    ...    Xp
   1      x11    x12   ...    x1p
   2      x21    x22   ...    x2p
   3      x31    x32   ...    x3p
   :       :      :            :
   n      xn1    xn2   ...    xnp
Geometric Interpretation of Principal Components:
[Figure: N observations plotted against the axes X1, X2 and X3, with the first principal component Pc1 running along the major axis of the cloud]
Assume that we have N observations on three variables. The observations are
plotted on the three dimensions.
        X1     X2     X3
 1      X11    X12    X13
 2      X21    X22    X23
 :       :      :      :
 i      Xi1    Xi2    Xi3
 :       :      :      :
 N      XN1    XN2    XN3
mean    X̄1     X̄2     X̄3
The resultant figure will be spherical in nature as shown above. The objective is to capture the sample variance through the different principal components. From the figure, the Pc1 component captures most of the variance, i.e. it reflects the spread of the data. Suppose Pc1 passes through the mean point (X̄1, X̄2, X̄3) and makes angles θ1, θ2 and θ3 with the X1, X2 and X3 axes. The orientation of the principal component is determined completely by the cosines of these angles. The point equivalent to (Xi1, Xi2, Xi3) on Pc1, namely Yi1, is

Yi1 = cos θ1 (Xi1 - X̄1) + cos θ2 (Xi2 - X̄2) + cos θ3 (Xi3 - X̄3)

where (cos θ1)^2 + (cos θ2)^2 + (cos θ3)^2 = 1.

In this way a value Yi1 can be arrived at for each triplet (Xi1, Xi2, Xi3). The mean of the Y1 values is zero:

Ȳ1 = (1/N) Σ(i=1..N) Yi1
   = cos θ1 ((1/N) Σ Xi1 - X̄1) + cos θ2 ((1/N) Σ Xi2 - X̄2) + cos θ3 ((1/N) Σ Xi3 - X̄3)
   = cos θ1 (X̄1 - X̄1) + cos θ2 (X̄2 - X̄2) + cos θ3 (X̄3 - X̄3) = 0.
The variance along the principal component combines the variances of the X1, X2 and X3 variables. The variance of the principal component is

(1/(N-1)) Σ(i=1..N) (Yi1 - Ȳ1)^2 = (1/(N-1)) Σ(i=1..N) Yi1^2      [since Ȳ1 = 0]

Differentiate this variance with respect to the angles θ1, θ2 and θ3 and equate the derivatives to zero, because we want to maximize the variance.
The resulting direction cosines [cos θ1, cos θ2, cos θ3] constitute the principal eigen vector. This is the eigen vector of the maximum characteristic root, arrived at as the highest eigen value of the variance-covariance matrix S:

[cos θ1  cos θ2  cos θ3] S [cos θ1  cos θ2  cos θ3]' = λmax

In a similar manner we can take out the next highest major axis, name it Pc2, and, proceeding as above, arrive at the second highest variance:

λ2 = [cos φ1  cos φ2  cos φ3] S [cos φ1  cos φ2  cos φ3]'

where φ1, φ2 and φ3 are the angles the second highest major axis makes with the original axes X1, X2 and X3. λ2 is the second highest characteristic root of the variance-covariance matrix of the original data.
The same logic could be extended to arrive at further Principal Components.
Writing a1 for the vector of direction cosines and Σ for the variance-covariance matrix, the variance of Pc1 is

variance(Pc1) = a1' Σ a1,   subject to a1' a1 = 1.

To determine the coefficients, solve by the Lagrangian method:

(∂/∂a1) [ a1' Σ a1 + λ1 (1 - a1' a1) ] = 0
(Σ - λ1 I) a1 = 0

The value of λ1 is calculated from |Σ - λI| = 0. Premultiplying Σ a1 = λ1 a1 by a1' gives

a1' Σ a1 = λ1 a1' a1 = λ1 = variance.

Similarly, the same approach may be extended to arrive at the second, third and subsequent principal eigen vectors. But we should keep in mind that

ai' ai = 1   and   ai' aj = 0 for i ≠ j.

This property helps us to understand that the principal components are orthogonal.
The data set is thus converted into a correlation or covariance matrix of size p x p, which is further transformed into a table of component weights:

        Pc1    Pc2    ...    Pcp
X1      w11    w21    ...    wp1
X2      w12    w22    ...    wp2
X3      w13    w23    ...    wp3
:        :      :             :
Xp      w1p    w2p    ...    wpp

Since our objective is to reduce the number of variables, the number of components to be retained, i.e. those which account for the maximum variance with the fewest components, can be decided by various methods, e.g. the scree plot. They are discussed in further sections.
Principal Components from correlation matrix:
The correlation matrix as found in step 2 is
1.000   0.010   0.972   0.439   0.022
0.010   1.000   0.154   0.691   0.863
0.972   0.154   1.000   0.515   0.122
0.439   0.691   0.515   1.000   0.778
0.022   0.863   0.122   0.778   1.000
For this correlation matrix, eigen values and corresponding eigen vectors are calculated
from the equation
| R - λI | = 0
The eigen values are

λ = diag( 2.8732   1.7965   0.2151   0.0994   0.0159 )
For each eigen value the corresponding eigen vector is a column of

 0.3427  -0.6016  -0.0606  -0.2024   0.6899
 0.4524   0.4064  -0.6897   0.3531   0.1726
 0.3967  -0.5416  -0.2467  -0.0253  -0.6984
 0.5501   0.0776   0.6633   0.5014  -0.0002
 0.4668   0.4165   0.1409  -0.7631  -0.0801
Thus each principal component is one eigen vector.
The principal components are: -
PC1 =  0.34265 X1 + 0.45244 X2 + 0.39666 X3 + 0.55014 X4 + 0.4668 X5
PC2 = -0.60162 X1 + 0.40638 X2 - 0.54165 X3 + 0.077608 X4 + 0.41654 X5
PC3 = -0.06062 X1 - 0.6897 X2 - 0.2467 X3 + 0.66326 X4 + 0.14094 X5
PC4 = -0.2024 X1 + 0.35308 X2 - 0.02528 X3 + 0.50141 X4 - 0.7631 X5
PC5 =  0.68992 X1 + 0.17264 X2 - 0.69842 X3 - 0.00018 X4 - 0.08009 X5
The variance explained by each principal component is its eigen value expressed as a percentage of the sum of all the eigen values. The individual and cumulative variances explained by the different principal components are shown in the following table.

Total Variance Explained

             Initial Eigenvalues                    Extraction Sums of Squared Loadings
Component    Total    % of Variance  Cumulative %   Total    % of Variance  Cumulative %
1            2.8732   57.464          57.464        2.8732   57.464          57.464
2            1.7965   35.930          93.394        1.7965   35.930          93.394
3            0.2151    4.302          97.696
4            0.0994    1.987          99.683
5            0.0158    0.316         100.000
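
The extraction itself reduces to an eigen decomposition of R. A minimal NumPy sketch (R is the correlation matrix from step ii):

    import numpy as np

    R = np.array([
        [1.0000, 0.0098, 0.9724, 0.4389, 0.0224],
        [0.0098, 1.0000, 0.1543, 0.6914, 0.8631],
        [0.9724, 0.1543, 1.0000, 0.5147, 0.1219],
        [0.4389, 0.6914, 0.5147, 1.0000, 0.7777],
        [0.0224, 0.8631, 0.1219, 0.7777, 1.0000],
    ])

    eigvals, eigvecs = np.linalg.eigh(R)             # ascending for symmetric R
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    explained = 100 * eigvals / eigvals.sum()
    print(np.round(eigvals, 4))                      # 2.8732, 1.7965, 0.2151, ...
    print(np.round(np.cumsum(explained), 3))         # 57.464, 93.394, ...

Each column of eigvecs holds the weights of one principal component, up to an arbitrary sign.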
Principal Components from covariance matrix
Consider the covariance matrix obtained in step 2
S =
11833560.6    59.92        4152121.21   173507.58   490909.09
59.92         3.19         342.12       141.96      9818.18
4152121.21    342.12       1540606.06   73424.24    963636.36
173507.58     141.96       73424.24     13208.33    569090.91
490909.09     9818.18      963636.36    569090.91   40545454.5
Similar to what was done in the previous section, eigen values and the corresponding eigen vectors are calculated.
The eigen values obtained are

diag( 0.57   2835.13   60785.51   13279399.56   40589811.97 )
The eigen vectors (one per column, in the same order as the eigen values above) are

-0.00    0.01   -0.33    0.94    0.02
-1.00   -0.00    0.00   -0.00    0.00
 0.00    0.02    0.94    0.33    0.03
 0.00   -1.00    0.02    0.01    0.01
 0.00    0.01   -0.02   -0.03    1.00
Total Variance Explained

             Initial Eigenvalues                           Extraction Sums of Squared Loadings
Component    Total         % of Variance   Cumulative %    Total         % of Variance   Cumulative %
1            40589811.97   75.260          75.260          40589811.97   75.260          75.260
2            13279399.56   24.622          99.882          13279399.56   24.622          99.882
3            60785.51       0.113          99.995
4            2835.14        5.257e-5       99.995
5            0.57           1.057e-8      100.000
Test of significance:
If some eigen values are nearly zero, we cannot retain the principal components arising from those eigen values. So we need to test whether the near-zero eigen values are significant or not. For this kind of test Bartlett (1947) developed a procedure. Let us see how this test is used for variance-covariance input as well as correlation input.
Variance-Covariance Input
If, in the p-variable matrix, the p - k smallest eigen values are nearly zero, we should check whether it is acceptable to retain only k components. Bartlett's approximate χ² value for this test is

χ² = M [ -ln|S| + Σ(j=1..k) ln l(j) + q ln l̄ ]

where
l(j) = j-th eigen value of S,
q = p - k,
M = n - k - (1/6)(2q + 1 + 2/q),
l̄ = (1/q) [ tr S - Σ(j=1..k) l(j) ].

The degrees of freedom are (1/2)(p - k - 1)(p - k + 2).

For the given example,
p = 5, k = 3, q = p - k = 2, n = 12
M = 12 - 3 - (1/6)(2·2 + 1 + 2/2) = 8
|S| = 5.2689 x 10^22, so ln|S| = 52.32
Σ(j=1..3) ln l(j) = ln 40589811.97 + ln 13279399.56 + ln 60785.51 = 44.94
l̄ = (1/2)(53932832.74 - 53929997.04) = 1417.85, so q ln l̄ = 14.51
χ² = 8 (-52.32 + 44.94 + 14.51) = 57.0
d.f. = (1/2)(5 - 3 - 1)(5 - 3 + 2) = 2
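
The same arithmetic in a short NumPy sketch (eigen values taken from the worked example; the names are illustrative):

    import numpy as np

    # Bartlett's test that the q = p - k smallest eigen values of S are equal.
    l = np.array([40589811.97, 13279399.56, 60785.51, 2835.14, 0.57])
    n, p, k = 12, 5, 3
    q = p - k
    M = n - k - (2 * q + 1 + 2 / q) / 6

    lbar = (l.sum() - l[:k].sum()) / q               # mean of the discarded eigen values
    chi2 = M * (-np.log(l).sum() + np.log(l[:k]).sum() + q * np.log(lbar))
    df = (p - k - 1) * (p - k + 2) / 2
    print(round(chi2, 1), df)                        # about 57.0 on 2 d.f.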
Correlation Input
If the correlation matrix is arrived at from the standardized data matrix, we should test whether the population correlation matrix is an identity matrix. If we can show that it is not an identity matrix, we may conclude that it has some eigen values which are significant. The test used for this purpose is known as Bartlett's sphericity test.

H0: ρ = I

The test statistic is

χ² = -[ n - 1 - (1/6)(2p + 5) ] ln |R|

with (1/2) p (p - 1) degrees of freedom.

For the given example,

χ² = -[ 12 - 1 - (1/6)(2·5 + 5) ] ln 0.0018 = 53.72
degrees of freedom = (1/2)(5)(4) = 10

We can reject the null hypothesis that the population correlation matrix is the identity matrix at the 0.01 significance level.
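
A sketch of the sphericity test as a reusable function (NumPy assumed; bartlett_sphericity is an illustrative name):

    import numpy as np

    def bartlett_sphericity(R, n):
        # H0: the population correlation matrix is the identity.
        p = R.shape[0]
        chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
        df = p * (p - 1) / 2
        return chi2, df

For the example, |R| = 0.0018 with n = 12 and p = 5 gives χ² of about 53.7 on 10 degrees of freedom, as computed above.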
No. of principal components to be retained
Variance-covariance input
First identify the number of significant components using Bartlett's test. We cannot keep all the principal components, so out of the significant components we retain only a smaller number. The number of components m to be retained may be determined by the cumulative percentage of variation

[ Σ(j=1..m) λj ] / [ Σ(i=1..p) λi ]

where Σ(j=1..m) λj represents the cumulative variation accounted for by the retained components.
Correlation Input:
Kaiser (1958) argued that eigen values below one lack meaning. By this method a principal component should represent more variance than a single variable, so its eigen value should be greater than one; an eigen value of less than one means that the factor does not even represent the variation of one variable.
Scree Test
Cattell (1966) proposed a method, called the scree test, to decide the number of principal components to retain. In this test, after getting all the eigen values, they are arranged in descending order; the eigen values are then plotted on the Y-axis against the factor number on the X-axis. The plot usually gives an elbow-shaped curve. For the given example we get such a curve. Draw a straight line through the lowest eigen values as shown in the figure; then retain those eigen values that lie above the straight line, together with the highest value lying on the straight line. From the figure it is clear that we should keep three factors.
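
A sketch of the plot (Matplotlib assumed; the eigen values are those of the example):

    import matplotlib.pyplot as plt

    eigvals = [2.8732, 1.7965, 0.2151, 0.0994, 0.0159]   # descending order
    plt.plot(range(1, 6), eigvals, 'o-')
    plt.xlabel('No. of factors')
    plt.ylabel('Eigen values')
    plt.show()              # the elbow after the third value suggests keeping 3 factors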
Horn Test
Horn (1965) suggested another method to decide the number of principal components to retain. The procedure is as follows. First arrange the eigen values in descending order and plot them against the number of factors. Then generate m sets of n x p normally distributed random data and obtain the p eigen values of each set. Find the average eigen value of each component over the m sets and plot a second curve of these averages against the factors. The two curves intersect at one point; draw a line through that point parallel to the Y-axis, and retain the components up to where it meets the component axis.
[Figure: eigen values (Y-axis, 0 to 3.5) plotted against the number of factors (X-axis, 1 to 5)]
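
Horn's comparison curve can be simulated directly (a sketch, NumPy assumed; m, seed and the function name are illustrative):

    import numpy as np

    def horn_average_eigvals(n, p, m=100, seed=0):
        # Average eigen values of correlation matrices of m sets of
        # n x p purely random (standard normal) data.
        rng = np.random.default_rng(seed)
        acc = np.zeros(p)
        for _ in range(m):
            Z = rng.standard_normal((n, p))
            R_rand = np.corrcoef(Z, rowvar=False)
            acc += np.sort(np.linalg.eigvalsh(R_rand))[::-1]
        return acc / m

Components whose observed eigen value exceeds the corresponding random average, i.e. those to the left of the crossing point of the two curves, are retained.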
COMMON FACTOR ANALYTIC MODEL
The task is to identify the interdependency between the variables x1, x2, ..., xp and to represent that interdependency by m factors. Here the observable variables follow a multinormal distribution. In the common factor analytic model each variable can be written in the format given below:

x1 = λ11 F1 + λ12 F2 + ... + λ1m Fm + e1
x2 = λ21 F1 + λ22 F2 + ... + λ2m Fm + e2
 :
xp = λp1 F1 + λp2 F2 + ... + λpm Fm + ep

In matrix form this can be written as

X = Λ F + ε

where

X = (x1, x2, ..., xp)',   F = (F1, F2, ..., Fm)',   ε = (e1, e2, ..., ep)'

and Λ is the p x m matrix of factor loadings

Λ =
λ11   λ12   ...   λ1m
λ21   λ22   ...   λ2m
 :     :           :
λp1   λp2   ...   λpm

The factors are normally distributed with mean 0 and variance 1. Similarly the ei are normally distributed with mean 0 and variance Var(ei) = ψi, where ψi is called the unique or specific variance. These are collected in the diagonal matrix

Ψ =
ψ1    0    ...    0
0     ψ2   ...    0
:      :          :
0     0    ...    ψp

The variance of the i-th response variable can then be written as

σii = λi1² + λi2² + ... + λim² + ψi

which, if the correlation matrix is used as input, becomes 1 = hi² + ψi; for the off-diagonal elements,

σij = λi1 λj1 + λi2 λj2 + ... + λim λjm   (i ≠ j)

The above relations can be expressed in matrix form as

Σ = Λ Λ' + Ψ

and the communality of the i-th response is

hi² = Σ(j=1..m) λij²

This expression is used to find the communalities of the responses.
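
As a small numerical check of this decomposition, the two-factor loadings later obtained for our example (Table 2) can be plugged into Σ = ΛΛ' + Ψ. A sketch in NumPy:

    import numpy as np

    L = np.array([[0.6254, -0.7663],      # two-factor loadings of Table 2
                  [0.7136,  0.5551],
                  [0.7144, -0.6792],
                  [0.8791,  0.1586],
                  [0.7421,  0.5781]])
    h2 = (L ** 2).sum(axis=1)             # communalities: ~0.98, 0.82, 0.97, 0.80, 0.88
    Psi = np.diag(1 - h2)                 # specific variances on the diagonal
    R_hat = L @ L.T + Psi                 # reproduced correlation matrix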
PRINCIPAL FACTOR MODELS
Estimation of factor loadings by the principal factor method
We are not going into the detailed mathematics of the principal factor and maximum likelihood derivations; instead, how these two methods work is explained in flow-chart form in a lucid manner.

Principal factor method
1. Replace the diagonal of the correlation matrix by the communality values.
2. Find the eigen vector corresponding to the largest eigen value; multiplying it by the square root of the eigen value gives the first factor loading vector W.
3. Subtract W W' from the matrix. The resulting matrix is called the residue matrix.
4. For the extraction of the second factor, find the eigen vector of the residue matrix and proceed as before. Repeat this process if more factors are required.

For the given example (Table 1):
Rnew (diagonal elements replaced by the communalities) =

0.9690   0.0098   0.9724   0.4389   0.0224
0.0098   0.8220   0.1543   0.6914   0.8631
0.9724   0.1543   0.9690   0.5147   0.1219
0.4389   0.6914   0.5147   0.7860   0.7777
0.0224   0.8631   0.1219   0.7777   0.8470
Eigen vectors:

c =
 0.3782  -0.5850  -0.2205   0.6827   0.0069
 0.4316   0.4237   0.6373   0.3263   0.3487
 0.4320  -0.5185   0.4526  -0.5351  -0.2309
 0.5316   0.1210  -0.5384  -0.3701   0.5253
 0.4488   0.4413  -0.2249   0.0644  -0.7410

Eigen values:

d = diag( 2.7344   1.7161   0.0395   -0.0243   -0.0726 )
The first factor is found by multiplying the eigen vector corresponding to the highest eigen value by the square root of that eigen value, i.e. by √2.7344. We get

W =
0.6254
0.7136
0.7144
0.8791
0.7421
Residue = Rnew - W*W' =

 0.5778  -0.4365   0.5256  -0.1109  -0.4418
-0.4365   0.3128  -0.3555   0.0641   0.3335
 0.5256  -0.3555   0.4586  -0.1133  -0.4083
-0.1109   0.0641  -0.1133   0.0132   0.1253
-0.4418   0.3335  -0.4083   0.1253   0.2962
Eigen vectors of the residue:

c =
-0.5850   0.2205  -0.3782   0.6827  -0.0069
 0.4237  -0.6373  -0.4316   0.3263  -0.3487
-0.5185  -0.4526  -0.4320  -0.5351   0.2309
 0.1210   0.5384  -0.5316  -0.3701  -0.5253
 0.4413   0.2249  -0.4488   0.0644   0.7410

Eigen values of the residue:

d = diag( 1.7161   0.0395   0.0000   -0.0243   -0.0726 )
The second factor is found by multiplying the eigen vector corresponding to the highest eigen value of the residue by the square root of that eigen value, i.e. by √1.7161. We get

W =
-0.7663
 0.5551
-0.6792
 0.1586
 0.5781
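
The deflation loop just performed can be coded directly. A sketch (NumPy assumed; h2_init stands for the initial communality estimates placed on the diagonal):

    import numpy as np

    def principal_factors(R, h2_init, n_factors):
        Rw = R.copy()
        np.fill_diagonal(Rw, h2_init)              # reduced correlation matrix
        loadings = []
        for _ in range(n_factors):
            vals, vecs = np.linalg.eigh(Rw)        # ascending order
            w = vecs[:, -1] * np.sqrt(vals[-1])    # loading = eigvec * sqrt(eigval)
            loadings.append(w)
            Rw = Rw - np.outer(w, w)               # residue matrix
        return np.column_stack(loadings)

With the Rnew above and two factors, this reproduces the W vectors just computed, up to the arbitrary sign of each eigen vector.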
Summary calculations for the given example:

Table 2

                      Factors                                  Specific
Variable              F1        F2        Communality          variance
X1                    0.6254   -0.7663    0.98                 0.02
X2                    0.7136    0.5551    0.82                 0.18
X3                    0.7144   -0.6792    0.97                 0.03
X4                    0.8791    0.1586    0.80                 0.20
X5                    0.7421    0.5781    0.88                 0.12
Total variance %      54.68     34.32     89                   11
Common variance %     61.44     38.56
Eigen value           2.7344    1.7161
There is no change in the communality values between the unrotated factors and orthogonally rotated factors. In the case of oblique rotation, however, the communality values arrived at after rotation will differ from those of the unrotated factors. The reason is that the angles between the factors are not held constant, which results in different loadings and hence different communalities.
Maximum likelihood method:
The iterative procedure can be summarized as follows.

START: S is the initialization matrix, and λ is the single-factor loading of S. The residual is S1 = S - λλ'.

1. Find the eigen vectors and eigen values of the residual; scale the leading eigen vector W so that (C W)'(C W) = λmax.
2. Form Λ = [λ  CW]: the first column is the single-factor loading of S and the second column is the scaled vector obtained above.
3. Ψ = diagonal of (S - ΛΛ').
4. Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'.
5. Form Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) and find its eigen vectors and eigen values. The scaled vectors of λmax and of the next largest root are used as the W matrix of the next stage.
6. Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)', and Λ = Δ * W. Return to step 3 and repeat until the loadings converge.
START
This is the initialization matrix.
S=
1.0000 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 1.0000 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 1.0000 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 1.0000 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 1.0000 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 1.0000
The single factor solution is obtained to form the residual matrix.
λ =
0.6370
0.5840
0.9590
0.9600
0.9290
0.9360
The residual matrix is obtained by the formula S1 = S - λλ'.

λλ' =
0.4058 0.3720 0.6109 0.6115 0.5918 0.5962
0.3720 0.3411 0.5601 0.5606 0.5425 0.5466
0.6109 0.5601 0.9197 0.9206 0.8909 0.8976
0.6115 0.5606 0.9206 0.9216 0.8918 0.8986
0.5918 0.5425 0.8909 0.8918 0.8630 0.8695
0.5962 0.5466 0.8976 0.8986 0.8695 0.8761
S1=
0.5942 0.2120 0.0041 -0.0105 -0.0218 0.0038
0.2120 0.6589 0.0159 -0.0306 -0.0165 0.0084
0.0041 0.0159 0.0803 0.0194 -0.0159 -0.0196
-0.0105 -0.0306 0.0194 0.0784 -0.0148 -0.0126
-0.0218 -0.0165 -0.0159 -0.0148 0.1370 0.0545
0.0038 0.0084 -0.0196 -0.0126 0.0545 0.1239
Find the Eigen vector and Eigen values of S1
Eigen vectors of S1
0.0022 0.0237 0.0137 0.0331 -0.7589 0.6498
-0.0586 0.0041 0.0184 -0.0066 0.6494 0.7579
0.6848 0.0473 0.6862 -0.2394 0.0176 0.0188
-0.6905 -0.2669 0.6400 -0.1989 -0.0362 -0.0382
-0.1300 0.6133 0.3331 0.7029 0.0248 -0.0366
0.1843 -0.7415 0.0895 0.6387 0.0140 0.0097
Eigen values of S1
0.0571 0 0 0 0 0
0 0.0754 0 0 0 0
0 0 0.0886 0 0 0
0 0 0 0.1953 0 0
0 0 0 0 0.4128 0
0 0 0 0 0 0.8435
The scaled vector W is obtained from the eigen vector of the largest root; it satisfies (C W)'(C W) = λmax.
W=
0.6498
0.7580
0.0189
-0.0383
-0.0366
0.0096
The transpose of W is found out
0.6498 0.7580 0.0189 -0.0383 -0.0366 0.0096
W'W = 1.0001

Find the square root of (λmax / W'W): C = 0.9184, and (C W)'(C W) = 0.8435 = λmax.

Λ = [λ  C W] =
0.6370 0.5968
0.5840 0.6962
0.9590 0.0174
0.9600 -0.0352
0.9290 -0.0336
0.9360 0.0088
Λ' =
0.6370 0.5840 0.9590 0.9600 0.9290 0.9360
0.5968 0.6962 0.0174 -0.0352 -0.0336 0.0088
Iteration 1
Now Ψ = diagonal of (S - ΛΛ').

ΛΛ' =
0.7619 0.7875 0.6213 0.5905 0.5717 0.6015
0.7875 0.8258 0.5722 0.5361 0.5191 0.5528
0.6213 0.5722 0.9200 0.9200 0.8903 0.8978
0.5905 0.5361 0.9200 0.9228 0.8930 0.8983
0.5717 0.5191 0.8903 0.8930 0.8642 0.8692
0.6015 0.5528 0.8978 0.8983 0.8692 0.8762
Ψ =
0.2381 0 0 0 0 0
0 0.1742 0 0 0 0
0 0 0.0800 0 0 0
0 0 0 0.0772 0 0
0 0 0 0 0.1358 0
0 0 0 0 0 0.1238
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0772 0 0 0 0 0
0 0.0800 0 0 0 0
0 0 0.1238 0 0 0
0 0 0 0.1358 0 0
0 0 0 0 0.1742 0
0 0 0 0 0 0.2381
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
3.5991 0 0 0 0 0
0 3.5355 0 0 0 0
0 0 2.8421 0 0 0
0 0 0 2.7136 0 0
0 0 0 0 2.3959 0
0 0 0 0 0 2.0494
S - Ψ =
0.7619 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.8258 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9200 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9228 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8642 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8762


Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
9.8693 7.4312 6.2908 5.8697 4.9152 4.4256
7.4312 10.3223 5.7878 5.0848 4.4556 4.0213
6.2908 5.7878 7.4313 7.2496 5.9582 5.1140
5.8697 5.0848 7.2496 6.7952 5.7018 4.9273
4.9152 4.4556 5.9582 5.7018 4.9608 4.5370
4.4256 4.0213 5.1140 4.9273 4.5370 3.6801
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
0.4624 0.3688 0.8056 0.0243 -0.0191 0.0144
0.4428 0.6901 -0.5694 -0.0110 -0.0218 -0.0534
0.4418 -0.3178 -0.1019 -0.4643 0.1724 0.6694
0.4154 -0.3704 -0.0467 -0.4105 -0.1218 -0.7104
0.3546 -0.3076 -0.0980 0.5604 -0.6554 0.1626
0.3109 -0.2344 -0.0681 0.5487 0.7246 -0.1328
Eigen values =
35.0152 0 0 0 0 0
0 5.5485 0 0 0 0
0 0 2.5090 0 0 0
0 0 0 0.4149 0 0
0 0 0 0 -0.2722 0
0 0 0 0 0 -0.1564
Similarly find the C values for λmax and the next largest root: C1 = 5.9174 and C2 = 2.3555.
The new W matrix is
2.7362 0.8687
2.6202 1.6255
2.6143 -0.7486
2.4581 -0.8724
2.0983 -0.7246
1.8397 -0.5521
Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2778 0 0 0 0 0
0 0.2828 0 0 0 0
0 0 0.3519 0 0 0
0 0 0 0.3685 0 0
0 0 0 0 0.4174 0
0 0 0 0 0 0.4880

Finally find Λ, the product of Δ and W:

Λ =
0.7601 0.2413
0.7410 0.4597
0.9200 -0.2634
0.9058 -0.3215
0.8758 -0.3024
0.8978 -0.2694
This process is continued.
Iteration 2

Λ' =
0.7601 0.7410 0.9200 0.9058 0.8758 0.8978
0.2413 0.4597 -0.2634 -0.3215 -0.3024 -0.2694

ΛΛ' =
0.6360 0.6742 0.6357 0.6109 0.5927 0.6174
0.6742 0.7604 0.5606 0.5234 0.5100 0.5414
0.6357 0.5606 0.9158 0.9180 0.8854 0.8969
0.6109 0.5234 0.9180 0.9238 0.8905 0.8998
0.5927 0.5100 0.8854 0.8906 0.8585 0.8678
0.6174 0.5414 0.8969 0.8998 0.8677 0.8786
Ψ = diag( S - ΛΛ' ) =
0.3640 0 0 0 0 0
0 0.2396 0 0 0 0
0 0 0.0842 0 0 0
0 0 0 0.0762 0 0
0 0 0 0 0.1415 0
0 0 0 0 0 0.1214
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0762 0 0 0 0 0
0 0.0842 0 0 0 0
0 0 0.1214 0 0 0
0 0 0 0.1415 0 0
0 0 0 0 0.2396 0
0 0 0 0 0 0.3640
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
3.6226 0 0 0 0 0
0 3.4462 0 0 0 0
0 0 2.8701 0 0 0
0 0 0 2.6584 0 0
0 0 0 0 2.0429 0
0 0 0 0 0 1.6575
S - Ψ =
0.6360 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.7604 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9158 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9238 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8585 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8786
Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
8.3464 7.2908 6.3943 5.7878 4.2183 3.6027
7.2908 9.0307 5.6972 4.8555 3.7032 3.1702
6.3943 5.6972 7.5439 7.1721 5.1304 4.1768
5.7878 4.8555 7.1721 6.5286 4.7629 3.9040
4.2183 3.7032 5.1304 4.7629 3.5829 3.1288
3.6027 3.1702 4.1768 3.9040 3.1288 2.4138
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
0.4720 0.3301 0.8154 0.0405 -0.0405 -0.0112
0.4493 0.7056 -0.5419 -0.0275 0.0627 -0.0442
0.4700 -0.3431 -0.1344 -0.4047 -0.5472 0.4243
0.4290 -0.3986 -0.0429 -0.3710 0.6187 -0.3672
0.3171 -0.2774 -0.1273 0.5847 -0.3888 -0.5596
0.2642 -0.1991 -0.0736 0.5952 0.4012 0.6082
Eigen values =
31.7649 0 0 0 0 0
0 4.5782 0 0 0 0
0 0 1.1586 0 0 0
0 0 0 0.3121 0 0
0 0 0 0 -0.1625 0
0 0 0 0 0 -0.2051
Similarly find the C values for λmax and the next largest root: C1 = 5.6363 and C2 = 2.1397.
The new W matrix is
2.6603 0.7063
2.5324 1.5098
2.6491 -0.7341
2.4180 -0.8529
1.7873 -0.5936
1.4891 -0.4260
Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2760 0 0 0 0 0
0 0.2902 0 0 0 0
0 0 0.3484 0 0 0
0 0 0 0.3762 0 0
0 0 0 0 0.4895 0
0 0 0 0 0 0.6033
Finally find Λ, the product of Δ and W:

Λ =
0.7342 0.1949
0.7349 0.4381
0.9229 -0.2558
0.9097 -0.3209
0.8749 -0.2906
0.8984 -0.2570
Iteration 3:

Λ' =
0.7342 0.7349 0.9229 0.9097 0.8749 0.8984
0.1949 0.4381 -0.2558 -0.3209 -0.2906 -0.2570

ΛΛ' =
0.5771 0.6250 0.6278 0.6054 0.5857 0.6095
0.6250 0.7320 0.5662 0.5279 0.5156 0.5476
0.6278 0.5662 0.9172 0.9217 0.8818 0.8949
0.6053 0.5279 0.9216 0.9305 0.8891 0.8997
0.5857 0.5157 0.8818 0.8891 0.8499 0.8607
0.6095 0.5476 0.8949 0.8997 0.8607 0.8731
Ψ = diag( S - ΛΛ' ) =
0.4229 0 0 0 0 0
0 0.2680 0 0 0 0
0 0 0.0828 0 0 0
0 0 0 0.0695 0 0
0 0 0 0 0.1501 0
0 0 0 0 0 0.1269
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0695 0 0 0 0 0
0 0.0828 0 0 0 0
0 0 0.1269 0 0 0
0 0 0 0.1501 0 0
0 0 0 0 0.2680 0
0 0 0 0 0 0.4229
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
3.7932 0 0 0 0 0
0 3.4752 0 0 0 0
0 0 2.8072 0 0 0
0 0 0 2.5811 0 0
0 0 0 0 1.9317 0
0 0 0 0 0 1.5377
S - Ψ =
0.5771 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.7320 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9172 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9305 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8499 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8731

Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
8.3035 7.6984 6.5487 5.8842 4.1766 3.4997
7.6984 8.8404 5.6192 4.7540 3.5311 2.9658
6.5487 5.6192 7.2279 6.8109 4.7448 3.7900
5.8842 4.7540 6.8109 6.1991 4.3726 3.5165
4.1766 3.5311 4.7448 4.3726 3.1714 2.7446
3.4997 2.9658 3.7900 3.5165 2.7446 2.0645
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
0.4928 -0.3041 -0.8065 0.0915 -0.0712 -0.0293
0.4603 -0.7017 0.5303 -0.0660 0.0991 -0.0200
0.4646 0.3580 0.1509 -0.3932 -0.6377 0.2682
0.4209 0.4163 0.0033 -0.3688 0.6950 -0.1747
0.3020 0.2753 0.1878 0.5730 -0.2087 -0.6526
0.2473 0.1946 0.1016 0.6069 0.2277 0.6859
Eigen values =
31.0110 0 0 0 0 0
0 4.2810 0 0 0 0
0 0 0.5792 0 0 0
0 0 0 0.2692 0 0
0 0 0 0 -0.1368 0
0 0 0 0 0 -0.1968
Similarly find the C values for λmax and the next largest root: C1 = 5.5685 and C2 = 2.0692.
The new W matrix is
2.7442 -0.6292
2.5632 -1.4520
2.5871 0.7408
2.3438 0.8614
1.6817 0.5697
1.3771 0.4027
Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2636 0 0 0 0 0
0 0.2877 0 0 0 0
0 0 0.3562 0 0 0
0 0 0 0.3874 0 0
0 0 0 0 0.5177 0
0 0 0 0 0 0.6503
Finally find Λ, the product of Δ and W:

Λ =
0.7234 -0.1659
0.7374 -0.4177
0.9215 0.2639
0.9080 0.3337
0.8706 0.2949
0.8955 0.2619
Iteration 4:

Λ' =
0.7234 0.7374 0.9215 0.9080 0.8706 0.8955
-0.1659 -0.4177 0.2639 0.3337 0.2949 0.2619

ΛΛ' =
0.5508 0.6027 0.6228 0.6015 0.5809 0.6043
0.6028 0.7183 0.5693 0.5302 0.5188 0.5510
0.6229 0.5693 0.9188 0.9248 0.8801 0.8943
0.6015 0.5302 0.9248 0.9358 0.8889 0.9005
0.5809 0.5188 0.8801 0.8889 0.8449 0.8569
0.6044 0.5510 0.8943 0.9005 0.8569 0.8705
Ψ = diag( S - ΛΛ' ) =
0.4492 0 0 0 0 0
0 0.2817 0 0 0 0
0 0 0.0812 0 0 0
0 0 0 0.0642 0 0
0 0 0 0 0.1551 0
0 0 0 0 0 0.1295
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0642 0 0 0 0 0
0 0.0812 0 0 0 0
0 0 0.1295 0 0 0
0 0 0 0.1551 0 0
0 0 0 0 0.2817 0
0 0 0 0 0 0.4492
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
3.9467 0 0 0 0 0
0 3.5093 0 0 0 0
0 0 2.7789 0 0 0
0 0 0 2.5392 0 0
0 0 0 0 1.8841 0
0 0 0 0 0 1.4920
S - Ψ =
0.5508 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.7183 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9188 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9358 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8449 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8705

Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
8.5795 8.0885 6.7450 6.0229 4.2385 3.5331
8.0885 8.8460 5.6171 4.7227 3.4778 2.9059
6.7450 5.6171 7.0952 6.6328 4.5813 3.6403
6.0229 4.7227 6.6328 6.0336 4.1957 3.3566
4.2385 3.4778 4.5813 4.1957 2.9993 2.5974
3.5331 2.9059 3.6403 3.3566 2.5974 1.9378
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
0.5086 -0.2921 -0.6952 0.3936 -0.1233 -0.0519
0.4659 -0.6952 0.4560 -0.2683 0.1401 0.0021
0.4588 0.3680 0.0345 -0.4357 -0.6497 0.2023
0.4130 0.4267 -0.1779 -0.3202 0.7109 -0.0891
0.2932 0.2759 0.4181 0.4379 -0.1188 -0.6762
0.2392 0.1946 0.3180 0.5375 0.1538 0.7009
Eigen values =
31.0718 0 0 0 0 0
0 4.1792 0 0 0 0
0 0 0.3142 0 0 0
0 0 0 0.2402 0 0
0 0 0 0 -0.1173 0
0 0 0 0 0 -0.1967
Similarly find the C values for λmax and the next largest root: C1 = 5.5745 and C2 = 2.0442.
The new W matrix is
2.8352 -0.5971
2.5972 -1.4211
2.5576 0.7523
2.3023 0.8723
1.6344 0.5640
1.3334 0.3978
Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2534 0 0 0 0 0
0 0.2850 0 0 0 0
0 0 0.3599 0 0 0
0 0 0 0.3938 0 0
0 0 0 0 0.5308 0
0 0 0 0 0 0.6702
Finally find Λ, the product of Δ and W:

Λ =
0.7184 -0.1513
0.7402 -0.4050
0.9205 0.2708
0.9066 0.3435
0.8675 0.2994
0.8936 0.2666
Iteration 5:

Λ' =
0.7184 0.7402 0.9205 0.9066 0.8675 0.8936
-0.1513 -0.4050 0.2708 0.3435 0.2994 0.2666

ΛΛ' =
0.5390 0.5931 0.6204 0.5994 0.5779 0.6017
0.5930 0.7119 0.5717 0.5319 0.5209 0.5535
0.6203 0.5717 0.9206 0.9275 0.8796 0.8947
0.5994 0.5320 0.9276 0.9400 0.8894 0.9018
0.5779 0.5209 0.8796 0.8893 0.8422 0.8550
0.6017 0.5535 0.8948 0.9018 0.8551 0.8696
Ψ = diag( S - ΛΛ' ) =
0.4610 0 0 0 0 0
0 0.2881 0 0 0 0
0 0 0.0794 0 0 0
0 0 0 0.0600 0 0
0 0 0 0 0.1578 0
0 0 0 0 0 0.1304
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0600 0 0 0 0 0
0 0.0794 0 0 0 0
0 0 0.1304 0 0 0
0 0 0 0.1578 0 0
0 0 0 0 0.2881 0
0 0 0 0 0 0.4610
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
4.0825 0 0 0 0 0
0 3.5489 0 0 0 0
0 0 2.7692 0 0 0
0 0 0 2.5174 0 0
0 0 0 0 1.8631 0
0 0 0 0 0 1.4728
S - Ψ =
0.5390 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.7119 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9206 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9400 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8422 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8696

Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
8.9834 8.4612 6.9527 6.1766 4.3355 3.6076
8.4612 8.9662 5.6607 4.7350 3.4779 2.9009
6.9527 5.6607 7.0596 6.5529 4.5144 3.5809
6.1766 4.7350 6.5529 5.9571 4.1133 3.2850
4.3355 3.4779 4.5144 4.1133 2.9234 2.5354
3.6076 2.9009 3.5809 3.2850 2.5354 1.8863
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
-0.5211 -0.2847 0.2327 -0.7416 -0.1934 -0.0766
-0.4697 -0.6889 -0.1449 0.4968 0.1908 0.0234
-0.4535 0.3758 0.2500 0.4005 -0.6335 0.1701
-0.4060 0.4335 0.3655 0.0518 0.7136 -0.0421
-0.2872 0.2773 -0.6089 -0.0096 -0.0596 -0.6828
-0.2341 0.1958 -0.5983 -0.2000 0.1104 0.7048
Eigen values =
31.4847 0 0 0 0 0
0 4.1710 0 0 0 0
0 0 0.2635 0 0 0
0 0 0 0.1580 0 0
0 0 0 0 -0.1036 0
0 0 0 0 0 -0.1977
Similarly find the C values for λmax and the next largest root: C1 = 5.6116 and C2 = 2.0423.
The new W matrix is
-2.9242 -0.5814
-2.6358 -1.4069
-2.5449 0.7675
-2.2783 0.8853
-1.6117 0.5663
-1.3137 0.3999

Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2449 0 0 0 0 0
0 0.2818 0 0 0 0
0 0 0.3611 0 0 0
0 0 0 0.3972 0 0
0 0 0 0 0.5367 0
0 0 0 0 0 0.6790
Finally find Λ, the product of Δ and W:

Λ =
-0.1424 -0.7161
-0.3965 -0.7428
0.2771 -0.9190
0.3516 -0.9049
0.3039 -0.8650
0.2715 -0.8920
Iteration 6:

Λ' =
-0.1424 -0.3965 0.2771 0.3516 0.3039 0.2715
-0.7161 -0.7428 -0.9190 -0.9049 -0.8650 -0.8920

ΛΛ' =
0.5331 0.5884 0.6187 0.5980 0.5762 0.6001
0.5884 0.7089 0.5727 0.5327 0.5220 0.5549
0.6186 0.5727 0.9213 0.9290 0.8791 0.8950
0.5980 0.5328 0.9291 0.9425 0.8896 0.9027
0.5761 0.5220 0.8792 0.8896 0.8406 0.8541
0.6001 0.5549 0.8950 0.9026 0.8541 0.8694
Ψ = diag( S - ΛΛ' ) =
0.4669 0 0 0 0 0
0 0.2911 0 0 0 0
0 0 0.0787 0 0 0
0 0 0 0.0575 0 0
0 0 0 0 0.1594 0
0 0 0 0 0 0.1306
Find the eigen vectors and eigen values of Ψ.
Eigen vectors of Ψ:
0 0 0 0 0 1
0 0 0 0 1 0
0 1 0 0 0 0
1 0 0 0 0 0
0 0 0 1 0 0
0 0 1 0 0 0
Eigen values of Ψ:
0.0575 0 0 0 0 0
0 0.0787 0 0 0 0
0 0 0.1306 0 0 0
0 0 0 0.1594 0 0
0 0 0 0 0.2911 0
0 0 0 0 0 0.4669
Next we find the product Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2), where

Ψ^(-1/2) = (eigen vectors) * diag( 1/√eigen values ) * (eigen vectors)'

Ψ^(-1/2) =
4.1703 0 0 0 0 0
0 3.5646 0 0 0 0
0 0 2.7671 0 0 0
0 0 0 2.5047 0 0
0 0 0 0 1.8534 0
0 0 0 0 0 1.4635
S - Ψ =
0.5331 0.5840 0.6150 0.6010 0.5700 0.6000
0.5840 0.7089 0.5760 0.5300 0.5260 0.5550
0.6150 0.5760 0.9213 0.9400 0.8750 0.8780
0.6010 0.5300 0.9400 0.9425 0.8770 0.8860
0.5700 0.5260 0.8750 0.8770 0.8406 0.9240
0.6000 0.5550 0.8780 0.8860 0.9240 0.8694

Ψ^(-1/2) * (S - Ψ) * Ψ^(-1/2) =
9.2714 8.6814 7.0969 6.2777 4.4057 3.6619
8.6814 9.0075 5.6814 4.7320 3.4751 2.8953
7.0969 5.6814 7.0542 6.5149 4.4875 3.5556
6.2777 4.7320 6.5149 5.9128 4.0712 3.2477
4.4057 3.4751 4.4875 4.0712 2.8875 2.5063
3.6619 2.8953 3.5556 3.2477 2.5063 1.8621
Find the eigen vectors and eigen values of this above matrix
Eigen vectors=
-0.5290 -0.2817 0.1415 -0.7351 -0.2657 -0.0991
-0.4705 -0.6850 -0.0832 0.4919 0.2424 0.0416
-0.4507 0.3801 0.2815 0.4296 -0.6030 0.1577
-0.4018 0.4370 0.3797 -0.0584 0.7069 -0.0174
-0.2840 0.2781 -0.6067 0.0796 -0.0177 -0.6836
-0.2313 0.1965 -0.6177 -0.1529 0.0836 0.7043
Eigen values =
31.7739 0 0 0 0 0
0 4.1657 0 0 0 0
0 0 0.2578 0 0 0
0 0 0 0.0973 0 0
0 0 0 0 -0.1002 0
0 0 0 0 0 -0.1990
Similarly find the C values for λmax and the next largest root: C1 = 0.1774 and C2 = 0.4899.
The new W matrix is
-0.0938 -0.1380
-0.0835 -0.3356
-0.0800 0.1862
-0.0713 0.2141
-0.0504 0.1362
-0.0410 0.0963
Δ = (eigen vectors of Ψ) * diag( √eigen values of Ψ ) * (eigen vectors of Ψ)'

Δ =
0.2398 0 0 0 0 0
0 0.2805 0 0 0 0
0 0 0.3614 0 0 0
0 0 0 0.3992 0 0
0 0 0 0 0.5395 0
0 0 0 0 0 0.6833
Finally find Λ, the product of Δ and W:

Λ =
-0.0225 -0.0331
-0.0234 -0.0941
-0.0289 0.0673
-0.0285 0.0855
-0.0272 0.0735
-0.0280 0.0658
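
Each of the six iterations above repeats one cycle. A compact sketch of that cycle (NumPy assumed; this follows the flow chart given earlier, not a production maximum likelihood routine, and L0 stands for the starting two-column loading matrix):

    import numpy as np

    def ml_two_factor(S, L0, n_iter=6):
        L = L0
        for _ in range(n_iter):
            psi = np.diag(S - L @ L.T)               # specific variances
            d = 1 / np.sqrt(psi)                     # diagonal of Psi^(-1/2)
            M = (S - np.diag(psi)) * np.outer(d, d)  # Psi^(-1/2) (S - Psi) Psi^(-1/2)
            vals, vecs = np.linalg.eigh(M)
            W = vecs[:, -2:] * np.sqrt(vals[-2:])    # two largest roots, scaled
            L = W / d[:, None]                       # Lambda = Psi^(1/2) W
        return L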
Factor Rotation: -
After extraction of the factors one needs to discriminate and say which variables come under which factors. Ideally there should be no ambiguity, i.e. no single variable explained by more than one factor. One factor can explain variance in the data that was spread over more than one variable, but the variance in one variable should be explained by one factor. This is an ideal situation, and in most cases it cannot be obtained exactly; to approach it we go for rotation of factors. There are two broad categories of rotation: orthogonal rotation and oblique rotation. Under orthogonal rotation we have methods/algorithms like the varimax, quartimax and equimax methods; some oblique rotational methods are the oblimax, quartimin, covarimin, biquartimin and oblimin methods. In this chapter the varimax and oblimax methods are discussed.

When the attachment of a variable to a particular factor is a problem, i.e. when it is difficult to interpret which variable is attached to which factor, the factors are subjected to rotation.
It is time to ponder some features of the factor loadings. We know

Σ = Λ Λ' + Ψ      ... (1)

Consider an orthogonal matrix T; for an orthogonal matrix T' = T^(-1), so T T' = I. Rearranging expression (1),

Σ = Λ T T' Λ' + Ψ = (Λ T)(Λ T)' + Ψ

This helps us create further loading matrices ΛT, where ΛT and Λ have the equivalence property of reproducing the same Σ, although the elements inside ΛT are different from those of Λ. This provision is what makes factor rotation possible.
Rotation is done by multiplying the factor loadings by the orthogonal matrix T given below:

T =
cos φ   -sin φ
sin φ    cos φ

This will assist in getting higher values for the large factor loadings while the small factor loadings become further negligible. The rotated loadings will help us assign the variables to the factors without much problem.
Thurstone (1945, Chap. 14) has suggested the simple structure principle:
1. Each row of the loading matrix Λ should contain at least one zero.
2. Each column of Λ should contain at least q zeros.
3. Every pair of columns of Λ should contain several responses whose loadings vanish in one column but not in the other.
4. If the number of factors q is four or more, every pair of columns of Λ should contain a large number of responses with zero loadings in both columns.
5. Conversely, for every pair of columns of Λ only a small number of responses should have non-zero loadings in both columns.
In factor rotation we try to achieve the simple structure principle.
There are two types of rotation: orthogonal rotation and oblique rotation.
Orthogonal rotation - In orthogonal rotation all the factors are rotated by the same angle. For example, if there are two factors, the angle by which the first factor is rotated equals the angle by which the second factor is rotated, as shown in the figure below. Different methods are available for carrying out orthogonal rotation: varimax, quartimax and equimax. The varimax method is explained in detail in the next section.

Oblique rotation - In oblique rotation the angles of rotation of the different factors differ; in the case of two factors, the angle by which the first factor is rotated will be different from the angle by which the second factor is rotated. The methods available for oblique rotation are oblimax, quartimin, biquartimin and oblimin.
Varimax rotation
Kaiser (1956, 1958, 1959)
Step 1: Get the normal loadings for the factors. Create the mC2 pairs of factors and select the first pair.
Step 2: Calculate the angle (say φ) by which the pair of factors is to be rotated. Kaiser has shown that the angle must satisfy the relation given below.
[Figure: under orthogonal rotation, Factor 1 and Factor 2 are rotated by the same angle; under oblique rotation, the two factors are rotated by different angles]


For each response h, with loadings x_hr and x_hs on the pair of factors r and s, write u_h = x_hr² - x_hs² and v_h = 2 x_hr x_hs. Then

tan 4φ = [ 2 ( p Σ u_h v_h - (Σ u_h)(Σ v_h) ) ] / [ p Σ (u_h² - v_h²) - ( (Σ u_h)² - (Σ v_h)² ) ]

The summations are carried out over the p responses.
Identify the quadrant for 4φ. This is done with the help of the following table:

Sign of denominator    Sign of numerator +       Sign of numerator -
+                      I.   0° ≤ 4φ < 90°        IV.  -90° ≤ 4φ < 0°
-                      II.  90° ≤ 4φ < 180°      III. -180° ≤ 4φ < -90°

If the computed 4φ falls exactly in the required angle range, we keep that angle to calculate φ. Otherwise we subtract 180° from that angle and calculate φ from the value obtained after the subtraction.
Step 3: The pair is subjected to rotation by multiplying it by the orthogonal matrix T mentioned previously, using the angle arrived at in the previous step. Thus a new pair is created. When we consider the next pair, any old factors in it are replaced by the previously calculated new factors, and then this pair is subjected to rotation. This process is carried out for all the pairs to arrive at a completely new set of factors. This constitutes one cycle.
Step 4: Again, with the help of the new set of factors, steps 2 and 3 are repeated to create another set of new factors. This process continues until the factor loadings converge.
For example, if we have three factors then we have three pairs, namely (F1, F2), (F1, F3) and (F2, F3). First (F1, F2) is rotated to give (F1n, F2n). Then (F1n, F3) is rotated to give (F1nn, F3n). Now (F2n, F3n) is rotated to give (F2nn, F3nn), and one cycle is complete. This process is repeated until the angle converges to a specified limit.
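
The rotation of a single pair can be sketched in code (NumPy assumed; rotate_pair is an illustrative name). np.arctan2 applies the quadrant table automatically:

    import numpy as np

    def rotate_pair(x, y):
        # x, y: loading columns of the two factors over the p responses.
        p = len(x)
        u, v = x**2 - y**2, 2 * x * y
        num = 2 * (p * (u * v).sum() - u.sum() * v.sum())
        den = p * (u**2 - v**2).sum() - (u.sum()**2 - v.sum()**2)
        phi = np.arctan2(num, den) / 4           # Kaiser's angle
        c, s = np.cos(phi), np.sin(phi)
        return x * c + y * s, -x * s + y * c     # [x y] multiplied by T

Applied to the first pair of the three-factor example below, this gives φ = -28° and reproduces the rotated loadings shown there.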
Example of three factors: -
Factor 1 Factor 2 Factor 3
0.74 -0.16 -0.14
0.65 -0.19 -0.11
0.57 0.13 0.01
0.62 -0.16 -0.05
0.53 0.68 -0.1
0.8 -0.24 -0.26
0.42 0.07 0.1
0.64 -0.08 0.24
0.57 -0.01 0.52
0.54 -0.02 0.16
0.44 -0.05 0.43
Step 1: There are 3 factors, hence we get 3C2 = 3 pairs: (F1, F2), (F1, F3) and (F2, F3). First (F1, F2) is selected.
Step 2:

tan 4φ = (-12.1363) / (-4.90634) = 2.473593,   so 4φ = 68°

Here the numerator and the denominator are both negative, i.e. they lie in the third quadrant, but for the third quadrant the angle should be between -180° and -90°. Hence we subtract 180° from 68°:

4φ = 68° - 180° = -112°,   hence φ = -28°

This value is used in the transformation matrix T for the rotation.
Step 3: The first pair (F1, F2) is subjected to rotation. This gives

F1n        F2n
0.728497 0.206137
0.663116 0.137396
0.442249 0.382382
0.622543 0.149801
0.148722 0.849224
0.819031 0.16367
0.337975 0.258984
0.602644 0.229826
0.507975 0.258769
0.486181 0.235856
0.411971 0.16242
Now in the second pair (F1, F3), F1 is replaced by F1n; i.e. the pair (F1n, F3) is multiplied by the orthogonal matrix. This gives

F1nn       F3n
0.713404 0.203376
0.641519 0.200687
0.390273 0.208258
0.578258 0.23596
0.177831 -0.02223
0.848308 0.137062
0.256626 0.241599
0.429786 0.485864
0.219076 0.693141
0.361881 0.361959
0.173941 0.56953
Now in the third pair (F2, F3), both F2 and F3 are replaced by F2n and F3n respectively. After multiplication by the orthogonal matrix we get

F2nn       F3nn
0.169334 0.234905
0.102004 0.220791
0.342296 0.269111
0.108352 0.257638
0.84104 0.119701
0.13852 0.162438
0.215066 0.281406
0.145581 0.517388
0.139551 0.726589
0.172189 0.396223
0.065166 0.588641
Thus the first cycle is completed. After the first cycle the rotated factors are:

F1nn       F2nn       F3nn
0.713404   0.169334   0.234905
0.641519   0.102004   0.220791
0.390273   0.342296   0.269111
0.578258   0.108352   0.257638
0.177831   0.84104    0.119701
0.848308   0.13852    0.162438
0.256626   0.215066   0.281406
0.429786   0.145581   0.517388
0.219076   0.139551   0.726589
0.361881   0.172189   0.396223
0.173941   0.065166   0.588641

Step 4: The new set of factors arrived at in the previous step is used for the further operations described in steps 2 and 3. The results after the different cycles are shown below.

After the second cycle:

F1nnnn     F2nnnn     F3nnnn
0.716248   0.184315   0.214048
0.645188   0.115608   0.202649
0.391189   0.352227   0.254573
0.582806   0.121358   0.241059
0.1664     0.845375   0.104657
0.849734   0.154859   0.138458
0.260113   0.222929   0.271926
0.440511   0.159383   0.50413
0.235439   0.152467   0.718834
0.369036   0.183298   0.384467
0.188022   0.075581   0.58304

After the third cycle:

F1nnnnnn   F2nnnnnn     F3nnnnnn
0.713041   0.18362295   0.22506557
0.642082 0.11499448 0.21261845
0.3876 0.35197828 0.26034114
0.579113 0.12085801 0.25004213
0.165807 0.84526461 0.10646917
0.847657 0.15392973 0.15160171
0.256105 0.22286036 0.27575947
0.432766 0.15930866 0.51081767
0.224349 0.15285104 0.72229081
0.363203 0.18319995 0.39002783
0.178967 0.07589558 0.5858418
The evaluation of factors:
As mentioned, factor analysis is a data reduction technique in which a factor score is arrived at for each respondent for further analysis. The factor scores represent far less data than the responses of all the respondents to all the variables, and they can be input to further analysis. In the case of principal components the factor score is arrived at directly, as shown previously. But in the case of common factor analysis the factor scores cannot be obtained directly. For that, Thomson (1951) deduced a model that assists in getting the factor scores; the factor score vector for the i-th respondent is arrived at as

Λ' (Λ Λ' + Ψ)^(-1) x_i

If Λ and Ψ are obtained from the correlation matrix, then x_i is replaced by the standardized z values.
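
A sketch of the computation (NumPy assumed; factor_scores is an illustrative name):

    import numpy as np

    def factor_scores(L, Psi, x):
        # L: p x m loadings, Psi: p x p diagonal specific variances,
        # x: one respondent's (standardized, for correlation input) responses.
        Sigma = L @ L.T + Psi
        return L.T @ np.linalg.solve(Sigma, x)   # Lambda' Sigma^(-1) x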
Example:

Λ =
 0.44   -0.05    0.55
 0.26   -0.02    0.66
 0.48   -0.02    0.66
 0.28   -0.09    0.72
 0.22    0.05    0.63
-0.22   -0.27    0.86
-0.10    0.62    0.69
 0.01   -0.18    0.75
-0.03    0.03    0.70
-0.09   -0.22    0.75
-0.15   -0.20    0.84

Ψ =
0.5000 0 0 0 0 0 0 0 0 0 0
0 0.4900 0 0 0 0 0 0 0 0 0
0 0 0.3400 0 0 0 0 0 0 0 0
0 0 0 0.4000 0 0 0 0 0 0 0
0 0 0 0 0.5500 0 0 0 0 0 0
0 0 0 0 0 0.1500 0 0 0 0 0
0 0 0 0 0 0 0.1300 0 0 0 0
0 0 0 0 0 0 0 0.4100 0 0 0
0 0 0 0 0 0 0 0 0.5100 0 0
0 0 0 0 0 0 0 0 0 0.3900 0
0 0 0 0 0 0 0 0 0 0 0.2300
x_i = ( 37, 39, 33, 26, 59, -4, 24, 25, 1, 4, 4 )'
Λ' (Λ Λ' + Ψ)^(-1) x_i =
42.2322
19.9949
20.9435
20.9435
For the principal factor model also we can calculate the factor scores. We have already calculated the communality values and the specific variances, as given in Table 2, for the example given in Table 1.
We have,
We have,

1
1
1
1
1
1
]
1

0.5781 0.7421
0.1586 0.8791
0.6792 - 0.7144
0.5551 0.7136
0.7663 - 0.6254
Ψ =
0.98   0      0      0      0
0      0.82   0      0      0
0      0      0.97   0      0
0      0      0      0.80   0
0      0      0      0      0.88
x_i = ( 0.02, 0.18, 0.03, 0.20, 0.12 )'
Λ' (Λ Λ' + Ψ)^(-1) x_i =
0.0646
0.1214
