Los Angeles
by
Jia Chen
2010
© Copyright by
Jia Chen
2010
Hongquan Xu
Yingnian Wu
To my parents,
for their enduring love.
And to my friends and teachers,
who have given me precious memories and tremendous encouragement.
Table of Contents

1 Introduction
2 Factor Analysis
   2.1 Factor Models
      2.1.1 Random Factor Model
      2.1.2 Fixed Factor Model
   2.2 Estimation Methods
      2.2.1 Principal Factor Method
      2.2.2 Maximum Likelihood Method
      2.2.3 Least Squares Method
   2.3 Number of Factors
      2.3.1 Mathematical Approaches
      2.3.2 Statistical Approach
      2.3.3
3 Algorithms of Least Squares Methods
   3.1 Least Squares on the Covariance Matrix (LSFAC)
      3.1.1 Loss Function
      3.1.2 Projections
      3.1.3 Algorithms
   3.2 Least Squares on the Data Matrix (LSFAY)
      3.2.1 Loss Function
      3.2.2 Projection
      3.2.3 Algorithms
4 Examples
   4.1 9 Mental Tests from Holzinger-Swineford
   4.2 9 Mental Tests from Thurstone
   4.3 17 Mental Tests from Bechtoldt
   4.4 16 Health Satisfaction Items from Reise
   4.5 9 Emotional Variables from Burt
5 Discussion and Conclusion
A Augmented Procrustus
B Implementation Code
C Program
Bibliography

List of Tables

4.1 LSFA methods Summary - 9 Mental Tests from Holzinger-Swineford
4.2 Loss function Summary - 9 Mental Tests from Holzinger-Swineford
4.3 Loading Matrices Summary - 9 Mental Tests from Holzinger-Swineford
4.4 LSFA methods Summary - 9 Mental Tests from Thurstone
4.5 Loss function Summary - 9 Mental Tests from Thurstone
4.6 Loading Matrices Summary - 9 Mental Tests from Thurstone
4.7 LSFA methods Summary - 17 Mental Tests from Bechtoldt
4.8 Loss function Summary - 17 Mental Tests from Bechtoldt
4.9 Loading Matrices Summary - 17 Mental Tests from Bechtoldt
4.10 LSFA methods Summary - 16 Health Satisfaction items from Reise
4.11 Loss function Summary - 16 Health Satisfaction items from Reise
4.12 Loading Matrices Summary - 16 Health Satisfaction items from Reise
4.13 LSFA methods Summary - 9 Emotional Variables from Burt
4.14 Loading Matrices Summary - 9 Emotional Variables from Burt
Abstract of the Thesis

by
Jia Chen
Master of Science in Statistics
University of California, Los Angeles, 2010
Professor Jan de Leeuw, Chair
CHAPTER 1
Introduction
A major objective of scientific or social activities is to summarize, by theoretical
formulations, the empirical relationships among a given set of events and discover
the natural laws behind thousands of random events. The events that can be investigated are almost infinite, so it is difficult to make any general statement about
phenomena. However, it could be stated that scientists analyze the relationships
among a set of variables, while these relationships are evaluated across a set of
individuals under specified conditions. The variables are the characteristics being
measured and could be anything that can be objectively identified or scored.
Factor analysis can be used for theory and instrument development, and for assessing
the construct validity of an established instrument when administered to a specific
population. Through factor analysis, the original set of variables is reduced to
a few factors with minimum loss of information. Each factor represents an area
of generalization that is qualitatively distinct from that represented by any other
factors. Within an area where data can be summarized, factor analysis first represents that area by a factor and then seeks to make the degree of generalization
between each variable and the factor explicit [6].
There are many methods available to estimate a factor model, and the purpose
of this paper is to present and implement a new least squares algorithm, and
then compare its speed of convergence and model accuracy to some existing
approaches. To begin, we provide some matrix background and assumptions on the common factor model in Chapter 2.
CHAPTER 2
Factor Analysis
Many statistical methods are used to study the relation between independent and
dependent variables. Factor analysis is different; its purpose is data reduction and
summarization, with the goal of understanding causation. It aims
to describe the covariance relationships among a large set of observed variables in
terms of a few underlying, but unobservable, random quantities called factors.
Factor analysis is a branch of multivariate analysis that was invented by the psychologist Charles Spearman. He discovered that school children's scores on a
wide variety of seemingly unrelated subjects were positively correlated, which led
him to postulate that a general mental ability, or g, underlies and shapes human
cognitive performance. Raymond Cattell expanded on Spearman's idea of a two-factor theory of intelligence after performing his own tests and factor analysis.
He used a multi-factor theory to explain intelligence. Factor analysis was developed to analyze test scores so as to determine whether intelligence is made up of a single
underlying general factor or of several more limited factors measuring attributes
like mathematical ability. Today factor analysis is the most widely used branch
of multivariate analysis in the psychological field, and, helped by the advent of
electronic computers, it has been quickly spreading to economics, botany, biology
and the social sciences.
There are two main motivations for studying factor analysis. One purpose of
factor analysis is to reduce the number of variables: in multivariate analysis one
often works with many correlated variables, and replacing them with a small
number of factors simplifies further analysis.
2.1 Factor Models

Factor analysis is generally presented within the framework of the multivariate
linear model for data analysis. Two classical linear common factor models are
briefly reviewed here. For a more comprehensive discussion, the reader should refer
to Anderson and Rubin [1956] and Anderson [1984] [1] [2].
Common factor analysis (CFA) starts with the assumption that the variance
in a given variable can be explained by a small number of underlying common
factors. For the common factor model, the factor score matrix can be divided
into two parts, a common factor part and a unique factor part. In matrix algebra
form the model is

$$Y = F + U, \qquad F = HA', \qquad U = ED, \tag{2.1}$$

where $Y$ is $n \times m$, the common factor part $F$ is a linear combination of
$p$ common factors $H$ ($n \times p$) and factor loadings $A$ ($m \times p$), and
the unique factor part $U$ is the product of $m$ unique factors $E$ ($n \times m$)
and the diagonal $m \times m$ matrix $D$ of unique factor scores.

In the common factor model, the common and unique factors are assumed to be
orthogonal, to follow a multivariate normal distribution with mean zero, and to be
scaled to have unit length. The assumption of normality means they are
statistically independent random variables, and the common factors are assumed
to be independent of the unique factors. Therefore $E(H) = 0$, $H'H = I_p$,
$E(E) = 0$, $E'E = I_m$, $E'H = 0_{m \times p}$, and $D$ is a diagonal matrix.

The common factor model (2.1) and its assumptions imply the following
correlation structure for the observed variables:

$$\Sigma = AA' + D^2. \tag{2.2}$$
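To make the model concrete, the following R fragment simulates (2.1) and checks the covariance structure (2.2). This is an illustrative sketch only, not part of the Appendix B code; the dimensions n = 500, m = 6, p = 2 and the loading range are arbitrary choices.

    set.seed(1)
    n <- 500; m <- 6; p <- 2
    A <- matrix(runif(m * p, 0.2, 0.6), m, p)   # factor loadings, m x p
    D <- diag(sqrt(1 - rowSums(A^2)))           # unique sds, so variances are 1
    H <- matrix(rnorm(n * p), n, p)             # common factor scores
    E <- matrix(rnorm(n * m), n, m)             # unique factor scores
    Y <- H %*% t(A) + E %*% D                   # model (2.1)
    max(abs(cov(Y) - (tcrossprod(A) + D^2)))    # small for large n, as (2.2) predicts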
2.1.1 Random Factor Model

In the random factor model the common factor decomposition is, as in (2.1),

$$Y = F + U, \qquad F = HA', \qquad U = ED.$$
Each row of Y corresponds to an individual observation, and these observations are assumed
to be independent. Moreover, the specific parts are assumed to be uncorrelated with the
common factors and with the other specific parts.
2.1.2 Fixed Factor Model
The random factor model explained above was criticized soon after it was formally
introduced by Lawley. The point is that in factor analysis different individuals
are regarded as drawing their scores from different k-way distributions, and in
these distributions the mean for each test is the true score of the individual on
that test. Nothing is implied about the distribution of observed scores over
a population of individuals, and one makes assumptions only about the error
distributions [25].
There is also a fixed factor model, which assumes

$$Y = F + U.$$

The common part is a bilinear combination of common factor loadings and common
factor scores,

$$F = HA'.$$

In the fixed model we merely assume that the specific parts are uncorrelated with
the other specific parts.
2.2 Estimation Methods
In the correlation structure (2.2), A is the loading matrix of order m × p and D²
is the m × m matrix of unique variances, which is diagonal and non-negative
definite. These parameters are nearly always unknown and need to be estimated
from the sample data. Estimation is a relatively straightforward matter of
breaking down a covariance or correlation matrix into a set of orthogonal
components or axes equal in number to the number of variates. The sample
covariance matrix is occasionally used, but it is much more common to work with
the sample correlation matrix.

Many different methods have been developed for estimation; the best known of
these is the principal factor method. It extracts the maximum amount of variance
that can possibly be extracted by a given number of factors. This method chooses
the first factor so as to account for as much as possible of the variance in the
correlation matrix, the second factor to account for as much as possible of the
remaining variance, and so on.

In 1940 a major step forward was made by D. N. Lawley, who developed the
maximum likelihood equations. These are fairly complicated and difficult to solve.
2.2.1 Principal Factor Method

Let the correlation matrix $\Sigma$ have eigenvalue-eigenvector pairs $(\lambda_i, e_i)$
with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \ge 0$. The spectral
decomposition of $\Sigma$ is

$$\Sigma = \sum_{i=1}^{m} \lambda_i e_i e_i' = \bigl[\sqrt{\lambda_1}\,e_1 \;\; \sqrt{\lambda_2}\,e_2 \;\cdots\; \sqrt{\lambda_m}\,e_m\bigr]\bigl[\sqrt{\lambda_1}\,e_1 \;\; \sqrt{\lambda_2}\,e_2 \;\cdots\; \sqrt{\lambda_m}\,e_m\bigr]' = AA'. \tag{2.3}$$

Retaining only the first $p$ terms gives the approximation

$$\hat\Sigma = \sum_{i=1}^{p} \hat\lambda_i \hat e_i \hat e_i' = \hat A \hat A'. \tag{2.4}$$

Equation (2.4) yields the estimator for the factor loadings:

$$\hat A = \bigl[\sqrt{\hat\lambda_1}\,\hat e_1 \;\; \sqrt{\hat\lambda_2}\,\hat e_2 \;\cdots\; \sqrt{\hat\lambda_p}\,\hat e_p\bigr]. \tag{2.5}$$

The unique variances are then estimated from the residual diagonal:

$$\hat D^2 = \mathrm{diag}\bigl(\Sigma - \hat A \hat A'\bigr). \tag{2.6}$$
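A minimal R sketch of this extraction (assuming a correlation matrix C and a number of factors p are given; this is not the Appendix C implementation):

    principal_loadings <- function(C, p) {
      e <- eigen(C, symmetric = TRUE)
      # A-hat: first p eigenvectors scaled by the square roots of their
      # eigenvalues, as in (2.5)
      e$vectors[, 1:p, drop = FALSE] %*% diag(sqrt(pmax(0, e$values[1:p])), p)
    }
    A_hat  <- principal_loadings(C, p)
    D2_hat <- diag(C - tcrossprod(A_hat))   # residual diagonal, as in (2.6)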
2.2.2 Maximum Likelihood Method

Under the normality assumptions of Section 2.1, the log-likelihood of the
parameters (A, D) given the data Y is

$$\log L(A, D) = -\frac{nm}{2}\log 2\pi - \frac{n}{2}\log\left|AA' + D^2\right| - \frac{1}{2}\,\mathrm{tr}\;Y(AA' + D^2)^{-1}Y'. \tag{2.7}$$
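For illustration only (this is not one of the algorithms implemented in Appendix C): R's built-in factanal() maximizes a likelihood equivalent to (2.7). Here Y is assumed to be a data matrix, and the choice of two factors is arbitrary.

    fit <- factanal(Y, factors = 2, rotation = "none")
    fit$loadings      # maximum likelihood estimate of A
    fit$uniquenesses  # maximum likelihood estimate of diag(D^2)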
There are two types of maximum likelihood methods. One is the covariance matrix
method, first proposed by Lawley [14]. The other, due to Young, works directly
with the data matrix: the residuals Y − HA' are weighted and their sum of squares
is minimized, where D is a known diagonal matrix of column (variable) weights.
The solution is given by a weighted singular value decomposition of Y.
The basic problem with Young's method is that it assumes the weights to be
known. One solution, suggested by Lawley, is to estimate them along with the
loadings and uniquenesses [15]. If there are no person-weights, Lawley suggests
alternating minimization over (H, A), which is done by weighted singular value
decomposition, and minimization over the diagonal D, which simply amounts to
computing the average sum of squares of the residuals for each variable. However,
iterating the two minimizations produces a block relaxation algorithm intended to
minimize the negative log-likelihood that does not work, even though the
algorithm produces a decreasing sequence of loss function values. A rather
disconcerting feature of the new method is, however, that iterative numerical
solutions of the estimation equations either fail to converge, or else converge to
unacceptable solutions in which one or more of the measurements have zero error
variance. It is apparently impossible to estimate scale as well as location
parameters when so many unknowns are involved [24].

In fact, if we look at the loss function we can see it is unbounded below: we can
choose scores to fit one variable perfectly, and then let the corresponding
variance term approach zero [1].

In 1952, Whittle suggested taking D proportional to the variances of the
variables. This amounts to doing a singular value decomposition of the
standardized variables. Jöreskog makes the more reasonable choice of setting D
proportional to the reciprocals of the diagonal of the inverse of the covariance
matrix of the variables [11].
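Jöreskog's choice is a one-liner in R; here C is assumed to be a nonsingular covariance or correlation matrix, and the result is defined up to a proportionality constant.

    d2 <- 1 / diag(solve(C))   # unique variances, up to proportionality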
2.2.3 Least Squares Method

The least squares method is one of the most important estimation methods. It
attempts to find the values of the factor loadings A and the unique variances D²
that minimize a loss function; the least squares loss function is defined either
on the residuals of the covariance matrix or on the residuals of the data matrix.
2.3 Number of Factors

A factor model is not solvable without first determining the number of factors
p. How many common factors should be included in the model? This requires
determining how many parameters are going to be involved. There are statistical
and mathematical approaches to determining the number of factors.
2.3.1 Mathematical Approaches

The mathematical approach to the number of factors is concerned with the number
of factors for a particular sample of variables in the population, and its theory
is based on a population correlation matrix. Mathematically, the number of
factors underlying any given correlation matrix is a function of its rank;
estimating the minimum rank of the correlation matrix is the same as estimating
the number of factors [6].

1. The percentage of variance criterion. This method applies particularly to the
principal component method: factors are retained until the percentage of common
variance accounted for reaches a specified threshold.
2.3.2 Statistical Approach

The statistical approach tests hypotheses about the number of factors. A
well-known example is Bartlett's chi-square approximation for the correlation
matrix R,

$$\chi^2 = -\left(n - 1 - \frac{2m + 5}{6}\right)\ln|R|. \tag{2.10}$$
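As a hedged illustration of (2.10): the statistic can be computed directly, or with the psych package already used in Appendix B. Here R is assumed to be the sample correlation matrix and n the sample size.

    m    <- ncol(R)
    chi2 <- -(n - 1 - (2 * m + 5) / 6) * log(det(R))   # the statistic in (2.10)
    psych::cortest.bartlett(R, n = n)   # same statistic, with df and p value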
2.3.3
CHAPTER 3
Algorithms of Least Squares Methods
Given a multivariate sample of n independent observations on each of m variables,
collect these data in an n × m matrix Y. The common factor analysis model can
then be written as

$$Y = HA' + ED. \tag{3.1}$$

Minimizing the least squares loss function is a form of factor analysis, but it
is not the familiar one. In classical least squares factor analysis, as described
in Young [1941], Whittle [1952] and Jöreskog [1962], the unique factors E are not
parameters in the loss function [25] [24] [11]. Instead, the unique variances are
used to weight the residuals of each observed variable. Two different loss
functions will be illustrated.
3.1 Least Squares on the Covariance Matrix (LSFAC)

3.1.1 Loss Function

In LSFAC we fit the correlation structure directly, minimizing

$$\sigma(A, D) = \frac{1}{2}\,\mathrm{SSQ}(C - AA' - D^2) \tag{3.2}$$

over the m × p loading matrices A and the diagonal matrices D.

3.1.2 Projections

We also define the two projected or concentrated loss functions, in which one set
of parameters is minimized out,

$$\sigma(A) = \min_{D \in \mathcal{D}} \sigma(A, D) = \sum_{1 \le j < l \le m} (c_{jl} - a_j'a_l)^2, \tag{3.3}$$

and

$$\sigma(D) = \min_{A} \sigma(A, D) = \frac{1}{2}\sum_{s=p+1}^{m} \lambda_s^2(C - D^2). \tag{3.4}$$

Note that σ is used as a generic symbol for these LSFAC loss functions, because
it will be clear from the context which one we are using.
3.1.3 Algorithms

1. Principal Factor Analysis (PFA). The projected loss functions suggest
alternating least squares algorithms: minimizing over A for D fixed at its
current value and minimizing over D for A fixed at its current value.

The basic procedure starts with the choice of the number of factors p and the
selection of an arbitrary set of unique variance estimates D² for the m
variables. If C − D² = KΛK' is the eigen-decomposition of C − D², and we write
Λ_p and K_p for the p largest eigenvalues and the corresponding eigenvectors,
then the first p principal factors are determined as Â = K_p Λ_p^{1/2}, where
Λ_p is the p × p submatrix of Λ containing the p largest eigenvalues and K_p is
the m × p matrix of corresponding eigenvectors.
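A minimal self-contained sketch of this alternating procedure (C is a correlation matrix and p the number of factors; starting the uniquenesses from squared multiple correlations is a conventional choice, not one prescribed here):

    lsfac_pfa <- function(C, p, eps = 1e-6, itmax = 1000) {
      m  <- ncol(C)
      d2 <- 1 - 1 / diag(solve(C))           # starting uniquenesses (SMC)
      for (itel in 1:itmax) {
        e <- eigen(C - diag(d2), symmetric = TRUE)
        A <- e$vectors[, 1:p, drop = FALSE] %*%
               diag(sqrt(pmax(0, e$values[1:p])), p)   # A = Kp Lambda_p^(1/2)
        d2new <- pmax(0, diag(C - tcrossprod(A)))      # residual diagonal
        if (max(abs(d2new - d2)) < eps) break
        d2 <- d2new
      }
      list(a = A, d2 = d2, itel = itel)
    }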
2. Comrey's method. Comrey proposed minimizing, one column of A at a time, the
off-diagonal residual sum of squares

$$\sigma(A) = \frac{1}{2}\,\mathrm{SSQ}\bigl(C_0 - AA' + \mathrm{diag}(AA')\bigr), \tag{3.5}$$

where C₀ = C − diag(C). Zegers and ten Berge [1983] show that equation (3.5) is
minimized by taking

$$a_{ik} = \frac{\sum_{j \ne i} c^{(k)}_{ij} a_{jk}}{\sum_{j \ne i} a^2_{jk}}, \tag{3.6}$$

where $c^{(k)}_{ij}$ is the (i, j)-th element of the matrix of residuals
resulting from partialling all factors except the k-th from C.

The basic procedure is as follows. Given some starting matrix A, the elements of
the first column are, each in turn, replaced once according to equation (3.6);
then, in the same way, the elements of the second column are replaced, and so on,
until all elements of A have been replaced once. This constitutes one iteration
cycle. The iterative procedure is terminated when, in one iteration cycle, the
value of the function in equation (3.5) decreases by less than some specified
small value [26].
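A sketch of one iteration cycle of the update, as reconstructed in (3.6) above; A is a current m × p loading matrix, and the residual matrix with all factors except the k-th partialled out is recomputed for each column:

    comrey_cycle <- function(C, A) {
      m <- nrow(A); p <- ncol(A)
      for (k in 1:p) {
        Ck <- C - tcrossprod(A[, -k, drop = FALSE])  # partial out other factors
        for (i in 1:m) {
          j <- setdiff(1:m, i)                       # off-diagonal elements only
          A[i, k] <- sum(Ck[i, j] * A[j, k]) / sum(A[j, k]^2)  # update (3.6)
        }
      }
      A
    }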
3. Harman's MINRES. In the minimum residual method [Harman and Jones, 1966;
Harman and Fukuda, 1966] we project out D [8] [7]. Thus we define

$$\sigma(A) = \frac{1}{2}\min_{D}\,\mathrm{SSQ}(C - AA' - D^2) = \sum_{1 \le j < l \le m} (c_{jl} - a_j'a_l)^2,$$

and then use alternating least squares to minimize the projected loss function
over A, using the m rows as blocks.
4. Gradient and Newton Methods. Gradient methods [de Leeuw, 2010] can be applied
by projecting out A. Thus we define

$$\sigma(D) = \frac{1}{2}\min_{A}\,\mathrm{SSQ}(C - AA' - D^2) = \frac{1}{2}\sum_{s=p+1}^{m} \lambda_s^2(C - D^2).$$

Now use

$$\mathcal{D}_j \lambda_s(D) = -z_{js}^2,$$

where $z_s$ is the unit-length eigenvector of $C - D^2$ corresponding to the
eigenvalue $\lambda_s$, so that $z_s$ is in the null space of
$C - D^2 - \lambda_s I$. This directly gives formulas for the first and second
derivatives of the loss function:

$$\mathcal{D}_j \sigma(\lambda, D) = -\sum_{s=p+1}^{m} \lambda_s z_{js}^2,$$

$$\mathcal{D}_{jl} \sigma(\lambda, D) = \sum_{s=p+1}^{m} \Bigl( z_{js}^2 z_{ls}^2 - 2\lambda_s z_{js} z_{ls} \bigl((C - D^2 - \lambda_s I)^{+}\bigr)_{jl} \Bigr). \tag{3.7}$$

3.2 Least Squares on the Data Matrix (LSFAY)

3.2.1 Loss Function

In LSFAY the unique factor scores are parameters in the loss function, which is
defined on the residuals of the data matrix itself:

$$\sigma(H, A, E, D) = \frac{1}{2}\,\mathrm{SSQ}(Y - HA' - ED).$$
3.2.2 Projection

The result in Appendix A can be used to define a projected version of the LSFAY
loss function:

$$\sigma(A, D) = \frac{1}{2}\min_{H, E}\,\mathrm{SSQ}(Y - HA' - ED) = \frac{1}{2}\,\mathrm{SSQ}(Y) + \frac{1}{2}\,\mathrm{SSQ}(A \mid D) - \sum_{s=1}^{m} \sigma_s(YA \mid YD),$$

where the $\sigma_s(YA \mid YD)$ are the ordered singular values of
$(YA \mid YD)$. Note that $(YA \mid YD)$ is $n \times (m + p)$, but its rank is
less than or equal to $m$; thus at least $p$ of the singular values are zero.

The singular values are the square roots of the ordered eigenvalues
$\lambda_s$ of

$$U = \begin{bmatrix} A'CA & A'CD \\ DCA & DCD \end{bmatrix}.$$

Thus we can write

$$\sigma(A, D) = \frac{1}{2}\,\mathrm{tr}(C) + \frac{1}{2}\,\mathrm{SSQ}(A) + \frac{1}{2}\,\mathrm{SSQ}(D) - \sum_{s=1}^{m} \sqrt{\lambda_s(U)}.$$
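A short R sketch of this projected loss (Y, A, and a diagonal D are assumed given):

    lsfay_loss <- function(Y, A, D) {
      AD   <- cbind(A, D)                       # (A | D), of order m x (p + m)
      sval <- svd(Y %*% AD, nu = 0, nv = 0)$d   # singular values of (YA | YD)
      sum(Y^2) / 2 + sum(AD^2) / 2 - sum(sval)
    }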
3.2.3 Algorithms

1. Alternating Least Squares. Our approach may seem to be quite similar to the
approach proposed by Paul Horst in his book [9]. Where we differ from Horst is
in the additional assumptions that D is diagonal and that E has the same size as
the data Y. This puts us solidly in the common factor analysis framework. Horst,
on the contrary, only makes the assumption that there is a small number of common
and residual factors, and he then finds them by truncating the singular value
decomposition. Separating common and unique factors is then done later by using
rotation techniques. For Horst, factor analysis is just principal component
analysis with some additional interpretational tools.

The alternating least squares iteration is

$$\bigl[H^{(k)} \mid E^{(k)}\bigr] = \mathrm{Procrustus}\bigl[YA^{(k)} \mid YD^{(k)}\bigr], \tag{3.8}$$
$$A^{(k+1)} = Y'H^{(k)}, \tag{3.9}$$
$$D^{(k+1)} = \mathrm{diag}\bigl(Y'E^{(k)}\bigr). \tag{3.10}$$
To see why these updates work, note that for B = [H : E] we have

$$\mathrm{tr}\bigl(B'B\,[A : D]'[A : D]\bigr) = \mathrm{tr}\left(\begin{bmatrix} I_p & 0_{pm} \\ 0_{mp} & I_m \end{bmatrix}\begin{bmatrix} A'A & A'D \\ DA & D^2 \end{bmatrix}\right) = \mathrm{tr}(A'A) + \mathrm{tr}(D^2). \tag{3.11}$$

After solving the Procrustus problem for B = [H : E], one can update the values
of A and D by A = Y'H and D = diag(Y'E), using the identities

$$H'Y = H'(HA' + ED) = H'HA' + H'ED = A', \tag{3.12}$$
$$E'Y = E'(HA' + ED) = E'HA' + E'ED = D, \tag{3.13}$$

which follow from the model (3.1). The alternating least squares process is
continued until the loss function (3.2) cannot be reduced further.
Alternatively, one can use the Moore-Penrose inverse and the symmetric matrix
square root, because

$$\mathrm{Procrustus}(X) = \sqrt{(XX')^{+}}\,X = X\sqrt{(X'X)^{+}}.$$

Substituting this into the updates gives

$$A^{(k+1)} = CA^{(k)}\sqrt{(U^{(k)})^{+}}, \tag{3.14}$$
$$D^{(k+1)} = \mathrm{diag}\Bigl(CD^{(k)}\sqrt{(U^{(k)})^{+}}\Bigr). \tag{3.15}$$
2. Gradient and Newton Methods. Suppose the eigenvector $z_s$ of U corresponding
with $\lambda_s$ is partitioned by putting the first p elements in $v_s$ and the
last m elements in $w_s$. Then [De Leeuw, 2007]

$$\frac{\partial \sqrt{\lambda_s(U)}}{\partial a_{jr}} = \frac{1}{\sqrt{\lambda_s(U)}}\, v_{rs}\, c_j'(Av_s + Dw_s),$$

$$\frac{\partial \sqrt{\lambda_s(U)}}{\partial d_{jj}} = \frac{1}{\sqrt{\lambda_s(U)}}\, w_{js}\, c_j'(Av_s + Dw_s),$$

where $c_j$ is column j of C. Collecting terms gives

$$\mathcal{D}_1 \sigma(A, D) = A - CA\sqrt{U^{+}},$$
$$\mathcal{D}_2 \sigma(A, D) = D - \mathrm{diag}\bigl(CD\sqrt{U^{+}}\bigr),$$

which shows that the alternating least squares algorithm (3.14)-(3.15) can be
written as a gradient algorithm with constant step size:

$$A^{(k+1)} = A^{(k)} - \mathcal{D}_1\sigma(A^{(k)}, D^{(k)}),$$
$$D^{(k+1)} = D^{(k)} - \mathcal{D}_2\sigma(A^{(k)}, D^{(k)}).$$
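The following R fragment sketches one step (3.8)-(3.10), computing the Procrustus transformation as X√((X'X)⁺) through an eigendecomposition. It parallels, but is not identical to, the Appendix C implementation; Y, A, and a diagonal D are assumed given.

    sqrt_ginv <- function(S) {                  # symmetric square root of S^+
      e <- eigen(S, symmetric = TRUE)
      v <- ifelse(e$values > 1e-10, 1 / sqrt(e$values), 0)
      e$vectors %*% (v * t(e$vectors))
    }
    lsfay_step <- function(Y, A, D) {
      X  <- Y %*% cbind(A, D)                   # Y(A | D)
      HE <- X %*% sqrt_ginv(crossprod(X))       # Procrustus(X) = X sqrt((X'X)^+)
      p  <- ncol(A)
      list(A = crossprod(Y, HE[, 1:p, drop = FALSE]),      # (3.9): A = Y'H
           D = diag(diag(crossprod(Y, HE[, -(1:p)]))))     # (3.10): D = diag(Y'E)
    }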
CHAPTER 4
Examples
The algorithms derived in the earlier chapters have been programmed and applied
to a number of problems. In all programs the same stopping criterion was used:
iteration ends when either the decrease in the loss function is less than 1e-6 or
the iteration count reaches 1000. In this section we compare the solutions
obtained by applying the least squares methods to some classic data sets. Results
for these data sets are given here. Five sets are considered: a) 9 mental tests
from Holzinger and Swineford (1939); b) 9 mental tests from Thurstone (McDonald,
1999; Thurstone & Thurstone, 1941); c) 17 mental tests from Thurstone and
Bechtoldt (Bechtoldt, 1961); d) 14 tests from Holzinger and Swineford (1937);
e) 9 tests from Brigham (Thurstone, 1933). The first data set is included in the
HolzingerSwineford1939 data set in the lavaan package. The last four data sets
are included in the bifactor data set in the psych package.
4.1 9 Mental Tests from Holzinger-Swineford

We use a small subset, with 9 variables, of the classic Holzinger and Swineford
(1939) dataset, which is discussed in detail by Jöreskog (1969). This dataset
consists of mental ability test scores of seventh- and eighth-grade children from
two different schools (Pasteur and Grant-White). These nine tests were grouped
into three factors. Ten different solutions were computed; for six of them we
iterate until the decrease in the loss function is less than 1e-6.
1. The user time is the CPU time charged for the execution of user instructions
of the calling process, and the system time is the CPU time charged for execution
by the system on behalf of the calling process.
2. The BFGS method approximates Newton's method, a class of optimization
techniques that seeks a stationary point of a function.
3. Method CG is a conjugate gradients method based on that by Fletcher and
Reeves (1964).
In addition, every entry in the comparison of all pairs of loading matrices is
1.00, which verifies that the algorithms give essentially the same solution.
Table 4.1: LSFA methods Summary - 9 Mental Tests from Holzinger-Swineford

                                LSFAC                     |              LSFAY             |    ML
           Newton     PFA   Comrey  Harman    BFGS     CG | InvSqrt     SVD    BFGS     CG |
loss      0.01307 0.01307  0.01307 0.01307 0.01307 0.01307|  0.0074  0.0074  0.0074  0.0074|
iteration       3      57       17      16                |      94      94                |
user.self   0.020   0.072    0.017   0.058   0.590   0.727|   0.107   0.118   6.212   6.730| 0.085
sys.self    0.000   0.000    0.000   0.001   0.003   0.004|   0.003   0.006   0.025   0.029| 0.002
Table 4.2: Loss function Summary - 9 Mental Tests from Holzinger-Swineford

            LSFACf        LSFAYf
Newton      0.01306885    0.007427571
PFA         0.01306885    0.007427550
Comrey      0.01306885    0.007427556
Harman      0.01306885    0.007427547
InvSqrt     0.01313887    0.007396346
ML          0.01333244    0.007454523
Table 4.3: Loading Matrices Summary - 9 Mental Tests from Holzinger-Swineford

Every entry in the comparison of each pair of loading matrices (Newton, PFA,
Comrey, Harman, BFGS, and CG for LSFAC; InvSqrt, SVD, BFGS, and CG for LSFAY)
is 1.00.
4.2 9 Mental Tests from Thurstone

A classic data set is the 9-variable Thurstone problem, which is discussed in
detail by R. P. McDonald (1985, 1999). These nine tests were grouped by Thurstone
(1941) into three factors: Verbal Comprehension, Word Fluency, and Reasoning. The
original data came from Thurstone and Thurstone (1941) but were reanalyzed by
Bechtoldt (1961), who broke the data set into two. McDonald, in turn, selected
these nine variables from a larger set of 17. Nine different solutions were
computed; for five of them we iterate until the decrease in the loss function is
less than 1e-6.

For the LSFAC case, in table 4.4, the PFA beats BFGS and CG by a factor of 10.
In addition, going from PFA to the Comrey algorithm makes convergence 1.2 times
faster, and going from PFA to the Harman algorithm makes convergence 18% faster.
Furthermore, going from the Comrey algorithm to the Newton algorithm again makes
convergence 50% faster (observe that the Newton algorithm starts with a small
number of Comrey iterations to get into an area where the quadratic approximation
is safe). For the LSFAY case, direct alternating least squares beats BFGS and CG
by a factor of 50. The PFA algorithm is twice as fast as direct alternating least
squares.
In table 4.5 we compute the LSFAC loss function (LSFACf) at the solutions from
both the LSFAC and the LSFAY algorithms, and we do the same for the LSFAY loss
function (LSFAYf). The values of the LSFAC loss function for the different
algorithms are almost identical, and similarly for the LSFAY loss function. In
addition, every entry in the comparison of all pairs of loading matrices is 1.00,
which verifies that the algorithms give essentially the same solution.
Table 4.4: LSFA methods Summary - 9 Mental Tests from Thurstone

                                LSFAC                     |          LSFAY         |    ML
           Newton     PFA   Comrey  Harman    BFGS     CG | InvSqrt    BFGS     CG |
loss      0.00123 0.00123  0.00123 0.00123 0.00123 0.00123| 0.00098 0.00098 0.00098|
iteration       4      66       28      22                |     144                |
user.self   0.024   0.077    0.035   0.065   0.642   0.815|   0.146   6.723   7.237| 0.037
sys.self    0.001   0.001    0.000   0.000   0.003   0.004|   0.000   0.031   0.046| 0.000
Table 4.5: Loss function Summary - 9 Mental Tests from Thurstone

            LSFACf         LSFAYf
Newton      0.001228405    0.0009998733
PFA         0.001228405    0.0009998909
Comrey      0.001228405    0.0009998581
Harman      0.001228405    0.0009998984
InvSqrt     0.001266686    0.0009804552
ML          0.0013511019   0.0009924787
Table 4.6: Loading Matrices Summary - 9 Mental Tests from Thurstone

Every entry in the comparison of each pair of loading matrices (Newton, PFA,
Comrey, Harman, BFGS, and CG for LSFAC; InvSqrt, BFGS, and CG for LSFAY) is 1.00.

4.3 17 Mental Tests from Bechtoldt

This set is the 17 variables from which the clear 3-factor solution used by
McDonald (1999) is abstracted. Nine different solutions were computed; for five
of them we iterate until the decrease in the loss function is less than 1e-6.
Table 4.7: LSFA methods Summary - 17 Mental Tests from Bechtoldt

                          LSFAC                 |        LSFAY       |   ML
           Newton    PFA  Comrey Harman BFGS  CG| InvSqrt  BFGS    CG|
loss        0.496  0.496   0.496               |                    |
iteration       3     24      18     17        |      43            |
user.self   0.030  0.030   0.039               |                    |
sys.self    0.001  0.000   0.001               |                    |
31
LSFAYf
Newton
0.4959954
0.1916967
PFA
0.4959954
0.1916968
Comrey
0.4959954
0.1916968
Harman
0.4959954
0.1916968
InvSqrt 0.5071997
0.1873351
ML
0.5300513
0.1929767
Table 4.9: Loading Matrices Summary - 17 Mental Tests from Bechtoldt

Every entry in the comparison of each pair of loading matrices (Newton, PFA,
Comrey, Harman, BFGS, and CG for LSFAC; InvSqrt, BFGS, and CG for LSFAY) is 1.00.

4.4 16 Health Satisfaction Items from Reise
The Reise data set is a correlation matrix based upon more than 35,000 responses
to the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey
instrument. Reise, Morizot, and Hays (2007) describe a bifactor solution based
upon 1,000 cases. The five factors from Reise et al. reflect Getting care quickly
(1-3), Doctor communicates well (4-7), Courteous and helpful staff (8, 9),
Getting needed care (10-13), and Health plan customer service (14-16). In all
LSFAC cases we iterate until the decrease in the loss function is less than 1e-6,
and in the LSFAY cases we iterate until 1000 iterations are reached.

For the LSFAC case, in table 4.10, the PFA beats BFGS and CG by a factor of 5.
In addition, going from Comrey to the PFA algorithm makes convergence 13% faster,
and going from Harman to the PFA algorithm makes convergence 7% faster.
Furthermore, going from the Comrey algorithm to the Newton algorithm again makes
convergence 2.7 times faster (observe that the Newton algorithm starts with a
small number of Comrey iterations to get into an area where the quadratic
approximation is safe). For the LSFAY case, direct alternating least squares
beats BFGS and CG by a factor of 20. The PFA algorithm is 44% faster than direct
alternating least squares.

In table 4.11 we compute the LSFAC loss function (LSFACf) at the solutions from
both the LSFAC and the LSFAY algorithms, and we do the same for the LSFAY loss
function (LSFAYf). The values of the LSFAC loss function for the different
algorithms are almost identical, and similarly for the LSFAY loss function. In
addition, every entry in the comparison of all pairs of loading matrices is 1.00,
which verifies that the algorithms give essentially the same solution.
Table 4.10: LSFA methods Summary - 16 Health Satisfaction items from Reise

                                LSFAC                     |          LSFAY         |    ML
           Newton     PFA   Comrey  Harman    BFGS     CG | InvSqrt    BFGS     CG |
loss      0.00329 0.00329  0.00329 0.00329 0.00329 0.00329| 0.00179 0.00179 0.00179|
iteration       6     734      326     168                |    1000                |
user.self   0.284   0.931    1.052   1.000   2.223   4.157|   1.340  14.620  21.740| 0.063
sys.self    0.002   0.005    0.004   0.004   0.008   0.014|   0.005   0.136   0.129| 0.000
Table 4.11: Loss function Summary - 16 Health Satisfaction items from Reise

            LSFACf         LSFAYf
Newton      0.003287921    0.001813492
PFA         0.003287921    0.001813506
Comrey      0.003287921    0.001813501
Harman      0.003287921    0.001813467
InvSqrt     0.003339447    0.001789870
ML          0.003621737    0.001834006
Table 4.12: Loading Matrices Summary - 16 Health Satisfaction items from Reise

Every entry in the comparison of each pair of loading matrices (Newton, PFA,
Comrey, Harman, BFGS, and CG for LSFAC; InvSqrt, BFGS, and CG for LSFAY) is 1.00.
4.5 9 Emotional Variables from Burt

The Burt nine emotional variables are taken from Harman (1967, p. 164), who in
turn adapted them from Burt (1939). They are said to be from 172 normal children
aged nine to twelve. As pointed out by Harman, this correlation matrix is
singular and has squared multiple correlations greater than 1. Note that this
correlation matrix has a negative eigenvalue, but LSFAY still works. In all LSFAC
cases we iterate until the decrease in the loss function is less than 1e-6, and
in the LSFAY cases we iterate until 1000 iterations are reached.

For the LSFAC case, in table 4.13, the PFA beats BFGS and CG by a factor of 8.
In addition, going from PFA to the Comrey algorithm makes convergence 1.6 times
faster, and going from Harman to the PFA algorithm makes convergence 2% faster.
Furthermore, going from the Comrey algorithm to the Newton algorithm again makes
convergence 1.4 times faster (observe that the Newton algorithm starts with a
small number of Comrey iterations to get into an area where the quadratic
approximation is safe). For the LSFAY case, direct alternating least squares
beats BFGS and CG by a factor of 13. The PFA algorithm is 6 times faster than
direct alternating least squares.

In table 4.14, every entry in the comparison of pairs of loading matrices within
the same family (LSFAC or LSFAY) is 1.00, while comparisons across the two
families are 0.98; the algorithms within each family give essentially the same
solution.
Table 4.13: LSFA methods Summary - 9 Emotional Variables from Burt

                          LSFAC                 |        LSFAY       |   ML
           Newton    PFA  Comrey Harman BFGS  CG| InvSqrt  BFGS    CG|
loss       0.1696                              |                    |
iteration      48     39                       |    1000            |
user.self   0.060                              |                    |
sys.self    0.000                              |                    |
Table 4.14: Loading Matrices Summary - 9 Emotional Variables from Burt

          Newton  PFA Comrey Harman BFGS   CG | InvSqrt BFGS   CG
Newton      1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
PFA         1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
Comrey      1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
Harman      1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
BFGS        1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
CG          1.00 1.00   1.00   1.00 1.00 1.00 |   0.98  0.98 0.98
InvSqrt     0.98 0.98   0.98   0.98 0.98 0.98 |   1.00  1.00 1.00
BFGS        0.98 0.98   0.98   0.98 0.98 0.98 |   1.00  1.00 1.00
CG          0.98 0.98   0.98   0.98 0.98 0.98 |   1.00  1.00 1.00
CHAPTER 5
Discussion and Conclusion
This paper demonstrates the feasibility of using an alternating least squares
algorithm to minimize the least squares loss function from a common factor
analysis perspective. The algorithm leads to clean and solid convergence, and
accumulation points of the sequences it generates will be stationary points. In
addition, the Procrustus algorithm provides a means of verifying that the
solution obtained is at least a local minimum of the loss function.

The Procrustus algorithm was applied to the classical Holzinger-Swineford mental
tests problem, and a simple structure of the loadings was achieved. The
illustrative results verify that the iterates generated by the projection
algorithm are exactly the same as those from the Procrustus algorithm. The
optimization was easily carried out using the gradient projection algorithm. The
Procrustus algorithm still has to be compared to other least squares methods.

Current algorithms for fitting common factor analysis models to data are quite
good. In almost every case the least squares on the covariance matrix (LSFAC)
method has a faster convergence speed than the least squares on the data matrix
(LSFAY) method, and the results from the two different methods are almost
identical. The Newton algorithm (with a small number of Comrey iterations to get
into an area where the quadratic approximation is safe) is the fastest of the
least squares algorithms. However, LSFAY has two advantages over LSFAC: 1. its
independence assumptions are more realistic, and optimal scaling is easy
(FACTALS); and 2. the unique variances, being estimated as squares, always come
out nonnegative.

The illustrative results verify that a common factor analysis solution can be
surprisingly similar to the classical maximum likelihood and least squares
solutions on the data sets we analyzed earlier, suggesting that further research
into its properties may be of interest in the future.

Although we showed that the proposed factor analysis methodology can yield
results that are equivalent to those from standard methods, an important question
to consider is whether any variants of this methodology can actually yield
improvements over existing methods. If not, the results will be of interest
mainly in providing a new theoretical perspective on the relations between
components and factor analysis.
APPENDIX A
Augmented Procrustus
Suppose X is an n × m matrix of rank r. Consider the problem of maximizing
tr(U'X) over the n × m matrices U satisfying U'U = I. This is known as the
Procrustus problem, and it is usually studied for the case n ≥ m = r. We want to
generalize to n ≥ m ≥ r. For this, we use the singular value decomposition

$$X = \begin{bmatrix} K_1 & K_0 \end{bmatrix}\begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} L_1' \\ L_0' \end{bmatrix},$$

where $K_1$ is $n \times r$, $K_0$ is $n \times (n - r)$, $\Lambda$ is the
$r \times r$ diagonal matrix of positive singular values, $L_1$ is $m \times r$,
and $L_0$ is $m \times (m - r)$. Writing the maximizer in the partitioned form

$$U = \begin{bmatrix} K_1 & K_0 \end{bmatrix}\begin{bmatrix} U_1 \\ U_0 \end{bmatrix},$$

one finds $U_1 = L_1'$ and $U_0 = V L_0'$ for some V with V'V = I. Thus

$$U = K_1 L_1' + K_0 V L_0'.$$
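A small R sketch of this result; when the singular value decomposition is used directly, the arbitrary V is fixed implicitly by the decomposition's choice of singular vectors for the zero singular values.

    procrustus <- function(X) {
      s <- svd(X)              # X = K Lambda L'
      tcrossprod(s$u, s$v)     # U = K L', with U'U = I, maximizes tr(U'X)
    }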
APPENDIX B
Implementation Code
B.1. Examples Dataset

library(MASS)
library(psych)
library(optimx)
library(seriation)

data(Harman)
data(bifactor)
data(Psych24)

# haho are Holzinger's 9 psychological tests

# burt are the Burt (1915) emotional variables:
burt <- matrix(0, 11, 11)
# (the assignment for burt[2, 1] is missing in this copy)
burt[3, 1:2]   <- c(.81, .87)
burt[4, 1:3]   <- c(.80, .62, .63)
burt[5, 1:4]   <- c(.71, .59, .37, .49)
burt[6, 1:5]   <- c(.70, .44, .31, .54, .54)
burt[7, 1:6]   <- c(.54, .58, .30, .30, .34, .50)
burt[8, 1:7]   <- c(.53, .44, .12, .28, .55, .51, .38)
burt[9, 1:8]   <- c(.59, .23, .33, .42, .40, .31, .29, .53)
burt[10, 1:9]  <- c(.24, .45, .33, .29, .19, .11, .21, .10, .09)
burt[11, 1:10] <- c(.13, .21, .36, -.06, -.10, .10, .08, -.16, -.10, .41)
burt <- burt + t(burt) + diag(11)
rownames(burt) <- colnames(burt) <- c("Sociality", "Sorrow", "Tenderness",
  "Joy", "Wonder", "Elation", "Disgust", "Anger", "Sex", "Fear", "Subjection")

reise <- as.matrix(Reise)
beall <- list(
  gender = rep(1:2, each = 32),
  "pictorial absurdities" = c(15, 17, 15, 13, 20, 15, 15, 13, 14, 17, 17, 17,
    15, 18, 18, 15, 18, 10, 18, 18, 13, 16, 11, 16, 16, 18, 16, 15, 18, 18,
    17, 19, 13, 14, 12, 12, 11, 12, 10, 10, 12, 11, 12, 14, 14, 13, 14, 13,
    16, 14, 16, 13, 2, 14, 17, 16, 15, 12, 14, 13, 11, 7, 12, 6),
  "paper form boards" = c(17, 15, 14, 12, 17, 21, 13, 5, 7, 15, 17, 20, 15,
    19, 18, 14, 17, 14, 21, 21, 17, 16, 15, 13, 13, 18, 15, 16, 19, 16, 20,
    19, 14, 12, 19, 13, 20, 9, 13, 8, 20, 10, 18, 18, 10, 16, 8, 16, 21, 17,
    16, 16, 6, 16, 17, 13, 14, 10, 17, 15, 16, 7, 15, 5),
  "tool recognition" = c(24, 32, 29, 10, 26, 26, 26, 22, 30, 30, 26, 28, 29,
    32, 31, 26, 33, 19, 30, 34, 30, 16, 25, 26, 23, 34, 28, 29, 32, 33, 21,
    30, 12, 14, 21, 10, 16, 14, 18, 13, 19, 11, 25, 13, 25, 8, 13, 23, 26,
    14, 15, 23, 16, 22, 22, 16, 20, 12, 24, 18, 18, 19, 7, 6),
  vocabulary = c(14, 26, 23, 16, 28, 21, 22, 22, 17, 27, 20, 24, 24, 28, 27,
    21, 26, 17, 29, 26, 24, 16, 23, 16, 21, 24, 27, 24, 23, 23, 21, 28, 21,
    26, 21, 16, 16, 18, 24, 23, 23, 27, 25, 26, 28, 14, 25, 28, 26, 14, 23,
    24, 21, 26, 28, 14, 26, 9, 23, 20, 28, 18, 28, 13))
HS.name <- c("vis perc", "cubes", "lozenges", "par comp", "sen comp",
             "wordmean", "addition", "cont dot", "s c caps")
HS <- HolzingerSwineford1939[, 7:15]
APPENDIX C
Program
C.1. Main.
# driver: fm selects the fitting method
lsfa <- function(cov, p, fm = "lsfacPFA", y = NULL, method = "BFGS",
                 itmax = 1000, eps = 1e-6, verbose = FALSE) {
  cov <- as.matrix(cov)
  m <- ncol(cov)
  h <- initialAD(cov, p)
  olda <- h$a
  oldd <- h$d
  itel <- 1

  if (fm == "lsfacPFA") {
    repeat {
      edec <- eigen(cov - diag(oldd))
      ev <- edec$vectors[, 1:p]
      ea <- edec$values[1:p]
      newa <- ev * matrix(sqrt(pmax(0, ea)), m, p, byrow = TRUE)
      newd <- diag(cov - tcrossprod(newa))
      chg <- max(abs(oldd - newd))
      if (verbose) {
        cat("Iteration:", formatC(itel, digits = 4, width = 6),
            "Change:", formatC(chg, digits = 6, width = 10, format = "f"),
            "\n\n")
      }
      if ((chg < eps) || (itel == itmax)) break
      itel <- itel + 1
      olda <- newa
      oldd <- newd
    }
    loss <- sum((cov - diag(newd) - tcrossprod(newa))^2) / 2
    result <- list(itel = itel, loss = loss, a = newa, d = newd)
  }
  if (fm == "lsfacComrey") {
    repeat {
      newa <- olda
      for (s in 1:p) {
        for (i in 1:m) {
          cis <- newa[, s][-i]
          ris <- newa[i, ][-s]
          dis <- cov[i, ][-i]
          # elementwise update (3.6) on the residuals after partialling
          # out the other factors
          res <- dis - drop(newa[-i, -s, drop = FALSE] %*% ris)
          newa[i, s] <- sum(res * cis) / sum(cis^2)
        }
      }
      chg <- max(abs(olda - newa))
      if ((chg < eps) || (itel == itmax)) break
      itel <- itel + 1
      olda <- newa
    }
    newd <- diag(cov - tcrossprod(newa))
    loss <- sum((cov - diag(newd) - tcrossprod(newa))^2) / 2
    result <- list(itel = itel, loss = loss, a = newa, d = newd)
  }
  if (fm == "lsfacBFGS" || fm == "lsfacCG") {
    # optimize the projected loss (3.4) over the unique variances
    loss <- function(d) {
      eval <- eigen(cov - diag(d), only.values = TRUE)$values
      sum(eval[-(1:p)]^2) / 2
    }
    res <- optimx(oldd, loss, method = method)
    d <- unlist(res$par)
    loss <- as.numeric(res$fvalues)
    ec <- eigen(cov - diag(d))
    ed <- pmax(0, ec$values[1:p])
    ev <- ec$vectors[, 1:p]
    a <- ev * matrix(sqrt(ed), m, p, byrow = TRUE)
    result <- list(loss = loss, a = a, d = d)
  }
  if (fm == "lsfacHarman") {
    # MINRES: alternating least squares over A, using the m rows as blocks
    repeat {
      newa <- olda
      for (i in 1:m) {
        j <- setdiff(1:m, i)
        Aj <- newa[j, , drop = FALSE]
        newa[i, ] <- solve(crossprod(Aj), crossprod(Aj, cov[i, j]))
      }
      chg <- max(abs(olda - newa))
      if ((chg < eps) || (itel == itmax)) break
      itel <- itel + 1
      olda <- newa
    }
    newd <- diag(cov - tcrossprod(newa))
    loss <- sum((cov - diag(newd) - tcrossprod(newa))^2) / 2
    result <- list(itel = itel, loss = loss, a = newa, d = newd)
  }
  if (fm == "lsfacNewton") {
    pp <- p + (1:(m - p))
    repeat {
      edec <- eigen(cov - diag(oldd))
      ev <- edec$vectors
      ea <- edec$values
      eu <- ev[, pp]
      gg <- drop((eu^2) %*% ea[pp])             # gradient of (3.4)
      h1 <- tcrossprod(eu^2)
      h2 <- matrix(0, m, m)
      for (s in pp) {
        ee <- ea - ea[s]
        ee <- ifelse(ee == 0, 0, 1 / ee)
        ew <- ev[, s]
        h2 <- h2 + ea[s] * outer(ew, ew) * (ev %*% (ee * t(ev)))
      }
      newd <- oldd + solve(h1 - 2 * h2, gg)     # Newton step from (3.7)
      chg <- max(abs(oldd - newd))
      if (verbose) {
        cat("Iteration:", formatC(itel, digits = 4, width = 6),
            "Change:", formatC(chg, digits = 6, width = 10, format = "f"),
            "\n")
        cat("oldd:", formatC(oldd, digits = 6, width = 10, format = "f"), "\n")
        cat("newd:", formatC(newd, digits = 6, width = 10, format = "f"),
            "\n\n")
      }
      if ((chg < eps) || (itel == itmax)) break
      itel <- itel + 1
      oldd <- newd
    }
    a <- ev[, 1:p] * matrix(sqrt(pmax(0, ea[1:p])), m, p, byrow = TRUE)
    loss <- sum((cov - diag(newd) - tcrossprod(a))^2) / 2
    result <- list(itel = itel, loss = loss, a = a, d = newd)
  }
  if (fm == "lsfayInvSqrt") {
    result <- lsfayInvSqrt(cov, p, itmax = itmax, eps = eps,
                           verbose = verbose)
  }

  if (fm == "lsfaySVD") {
    y <- as.matrix(y)
    h <- initialAD(crossprod(y), p)
    olda <- h$a
    oldd <- h$d
    itel <- 1
    repeat {
      emat <- y %*% cbind(olda, diag(oldd))        # Y(A | D)
      s <- svd(emat)
      he <- tcrossprod(s$u, s$v)                   # Procrustus step (3.8)
      newa <- crossprod(y, he[, 1:p])              # (3.9)
      newd <- diag(crossprod(y, he[, p + (1:m)]))  # (3.10)
      chg <- max(abs(cbind(olda, diag(oldd)) - cbind(newa, diag(newd))))
      if (verbose) {
        cat("Iteration:", formatC(itel, digits = 4, width = 6),
            "Change:", formatC(chg, digits = 6, width = 10, format = "f"),
            "\n")
        cat("oldd:", formatC(oldd, digits = 6, width = 10, format = "f"), "\n")
        cat("newd:", formatC(newd, digits = 6, width = 10, format = "f"),
            "\n\n")
      }
      if ((chg < eps) || (itel == itmax)) break
      itel <- itel + 1
      oldd <- newd
      olda <- newa
    }
    result <- list(itel = itel, a = newa, d = newd)
  }
  if (fm == "lsfayOptim") {
    # projected LSFAY loss as a function of the stacked parameter vector
    loss <- function(x) {
      sx <- sum(x^2)
      a <- matrix(x[1:(m * p)], m, p)
      d <- x[-(1:(m * p))]
      ad <- cbind(a, diag(d))
      tc <- sum(diag(cov))
      lbd <- sqrt(pmax(0, eigen(crossprod(ad, cov %*% ad),
                                only.values = TRUE)$values))
      return(((sx + tc) / 2) - sum(lbd))
    }
    x <- c(as.vector(olda), oldd)
    res <- optimx(x, loss, method = method)
    rp <- unlist(res$par)
    a <- matrix(rp[1:(m * p)], m, p)
    d <- rp[-(1:(m * p))]
    names(d) <- rownames(cov)
    result <- list(loss = as.numeric(res$fvalues), a = a, d = d)
  }

  return(result)
}

lsfayInvSqrt <- function(cov, p, itmax = 1000, eps = 1e-6, verbose = FALSE) {
  m <- ncol(cov)
  h <- initialAD(cov, p)
  olda <- h$a
  oldd <- h$d
  itel <- 1
  repeat {
    ad <- cbind(olda, diag(oldd))
    e <- crossprod(ad, cov %*% ad)            # U = (A | D)'C(A | D)
    g <- cov %*% ad %*% invsqrt(e)            # C(A | D)sqrt(U^+), (3.14)-(3.15)
    newa <- g[, 1:p]
    newd <- diag(g[, p + (1:m)])
    chg <- max(abs(ad - cbind(newa, diag(newd))))
    if ((chg < eps) || (itel == itmax)) break
    itel <- itel + 1
    olda <- newa
    oldd <- newd
  }
  sval <- sqrt(pmax(0, eigen(e, only.values = TRUE)$values))
  loss <- ((sum(diag(cov)) + sum(ad^2)) / 2) - sum(sval)
  return(list(itel = itel, loss = loss, chg = chg, sval = sval,
              a = olda, d = oldd))
}
C.2. Auxiliaries.

initialAD <- function(cov, p) {
  m <- ncol(cov)
  d <- 1 - 1 / diag(solve(cov))    # starting uniquenesses (line reconstructed)
  ec <- eigen(cov - diag(d))
  ed <- pmax(0, ec$values)
  ev <- ec$vectors
  a <- ev[, 1:p] * matrix(sqrt(ed[1:p]), m, p, byrow = TRUE)
  return(list(a = a, d = d))
}
denman.beavers <- function(mat, itmax = 50, eps = 1e-6, verbose = FALSE) {
  stopifnot(nrow(mat) == ncol(mat))
  itel <- 1
  yold <- mat
  z <- diag(nrow(mat))
  repeat {
    ynew <- 0.5 * (yold + ginv(z))
    z    <- 0.5 * (z + ginv(yold))
    chg  <- max(abs(ynew - yold))
    if (verbose) {
      cat("Iteration:", itel, " Change:", chg, "\n")
    }
    if ((chg < eps) || (itel == itmax)) break
    itel <- itel + 1
    yold <- ynew
  }
  return(list(ysqrt = ynew, ysqinv = z))
}
# inverse square root of a symmetric matrix
invsqrt <- function(a) {
  e  <- eigen(a)
  ev <- e$vectors
  ea <- abs(e$values)
  ea <- ifelse(ea == 0, 0, 1 / sqrt(ea))
  return(ev %*% (ea * t(ev)))
}
# evaluate both loss functions (LSFACf and LSFAYf) at a given solution;
# the function name is reconstructed
lossValues <- function(result, fm, RHS, p) {
  if (fm == "LSFAC") {
    a <- result$a
    d <- sqrt(result$d)
  } else if (fm == "LSFAY") {
    a <- result$a
    d <- (result$d)^2
  } else if (fm == "ML") {
    a <- result$loadings
    d <- sqrt(result$uniquenesses)
  }
  ad <- cbind(a, diag(d))
  sval <- sqrt(pmax(0, eigen(crossprod(ad, RHS %*% ad),
                             only.values = TRUE)$values))
  LSFACf <- sum((RHS - tcrossprod(a) - diag(d^2))^2) / 2
  LSFAYf <- ((sum(diag(RHS)) + sum(ad^2)) / 2) - sum(sval)
  return(list(LSFACf = LSFACf, LSFAYf = LSFAYf))
}
# compare the k loading matrices stored side by side in A (each m x p)
loadingComparison <- function(A, k, p) {
  LMC <- NULL
  for (j in 1:k) {
    lmc <- NULL
    g <- 1 + (j - 1) * p
    X <- A[, g:(g + (p - 1))]
    for (i in 1:k) {
      t <- 1 + (i - 1) * p
      Y <- A[, t:(t + (p - 1))]
      # similarity after optimal rotation (the original measure was lost here)
      r <- svd(crossprod(X, Y))$d
      lmc <- c(lmc, round(sum(r) / sqrt(sum(X^2) * sum(Y^2)), 2))
    }
    LMC <- rbind(LMC, lmc)
  }
  return(LMC)
}
Bibliography

[11] K. G. Jöreskog. On the statistical treatment of residuals in factor
analysis. Psychometrika, 27:435-454, 1962.
[12] K. G. Jöreskog. Some contributions to maximum likelihood factor analysis.
Psychometrika, 32:443-482, 1967.
[13] Richard A. Johnson and Dean W. Wichern. Applied Multivariate Statistical
Analysis. Prentice Hall, New Jersey, 2002.
[14] D. N. Lawley. The estimation of factor loadings by the method of maximum
likelihood. Proceedings of the Royal Society of Edinburgh, 60:64-82, 1940.
[15] D. N. Lawley. Further investigations in factor estimation. Proceedings of
the Royal Society of Edinburgh, 62:176-185, 1942.
[16] J. De Leeuw. Least squares optimal scaling of partially observed linear
systems. In Recent Developments on Structural Equation Models: Theory and
Applications, pages 121-134, 2004.
[17] J. De Leeuw. Derivatives of generalized eigen systems with applications.
Preprint 528, Department of Statistics, UCLA, 2007.
[18] J. De Leeuw. Least squares methods for factor analysis. Preprint,
Department of Statistics, UCLA, 2010.
[19] Eni Pan Gao and Zhiwei Ren. Introduction to Multivariate Analysis for the
Social Sciences. W. H. Freeman and Company, New Jersey, 1971.
[20] H. Rutishauser. Computational aspects of F. L. Bauer's simultaneous
iteration method. Numerische Mathematik, 13:4-13, 1969.
[21] L. L. Thurstone. Multiple-Factor Analysis. University of Chicago Press,
1947.