1. Introduction
AX = P_R(A), (1)
XA = P_R(X). (2)
We recall that [14]
A*AA+ = A*. (3)
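Relations of this kind are easy to check numerically. The following is a minimal check (NumPy, real case, so A* is the transpose; an illustration, not part of the paper), using numpy.linalg.pinv for A+:

```python
import numpy as np

# A random rank-2, 5 x 3 real matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
X = np.linalg.pinv(A)            # X = A+

# (1): AX is the orthogonal projector on R(A): Hermitian and idempotent.
assert np.allclose(A @ X, (A @ X).T)
assert np.allclose((A @ X) @ (A @ X), A @ X)

# (2): XA is the orthogonal projector on R(X): Hermitian, idempotent,
# and it acts as the identity on R(X), i.e. (XA)X = X.
assert np.allclose(X @ A, (X @ A).T)
assert np.allclose((X @ A) @ (X @ A), X @ A)
assert np.allclose((X @ A) @ X, X)

# (3): A*AA+ = A*.
assert np.allclose(A.T @ A @ X, A.T)
```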
ELIMINATION METHOD FOR COMPUTING THE GENERALIZED INVERSE

Let E be a product of elementary row matrices and P a permutation matrix which together reduce A*A to the form

H = EA*AP, (4)

whose last (n − r) rows are zero and whose first r rows form the matrix (I_r | D1) for some r × (n − r) matrix D1. We also note that

xA*A = 0 if and only if xA* = 0. (5)

Multiplying (3) on the left by E and inserting PP* = I gives

EA*APP*A+ = EA*. (6)

Hence

P*A+ = (EA*AP)+EA* = H+EA*. (7)
From (4) and (5) it follows that the last (n − r) rows of EA* are zero; from the definition of H it follows that H*EA* involves only the first r rows of EA*. Therefore

HH+EA* = H+*H*EA* = EA*. (8)

From (7), (8) and the fact that P is a permutation matrix it follows that

A+ = PH+HEA*. (9)
Let D denote the n × (n − r) matrix whose columns span the null space of H; by the form of H it can be read off directly, and since it is of full column rank,

DD+ = D(D*D)^{-1}D*. (10)

Moreover H+H is the orthogonal projector on R(H*), so that

H+H = I_n − DD+. (11)

Substituting (11) in (9) gives

A+ = P(I_n − DD+)EA*. (12)
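In the real case the computation (4)–(12) can be sketched as follows. This is an illustrative NumPy implementation (the pivot selection and tolerance are our own choices, not the paper's), checked against numpy.linalg.pinv:

```python
import numpy as np

def elimination_pinv(A, tol=1e-9):
    """Sketch of the elimination method, eq. (12): A+ = P(I - DD+)EA*.

    Real case, so A* is the transpose.  E accumulates the elementary row
    operations and P the column permutation bringing A*A to the form
    H = [[I_r, D1], [0, 0]]; D below spans the null space of H.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    G = A.T @ A                     # A*A
    E = np.eye(n)                   # row operations, applied on the left
    P = np.eye(n)                   # column permutation, applied on the right
    r = 0
    while r < n:
        sub = np.abs(G[r:, r:])
        if sub.max() <= tol:        # remaining submatrix is zero: rank found
            break
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += r; j += r
        G[:, [r, j]] = G[:, [j, r]]; P[:, [r, j]] = P[:, [j, r]]
        G[[r, i], :] = G[[i, r], :]; E[[r, i], :] = E[[i, r], :]
        s = G[r, r]
        G[r, :] /= s; E[r, :] /= s
        for k in range(n):          # Gauss-Jordan: clear the pivot column
            if k != r:
                f = G[k, r]
                G[k, :] -= f * G[r, :]
                E[k, :] -= f * E[r, :]
        r += 1
    EAs = E @ A.T                   # EA*; its last n - r rows are zero
    if r == n:
        return P @ EAs              # H = I_n, so A+ = P E A*
    D1 = G[:r, r:]                  # H = [[I_r, D1], [0, 0]]
    D = np.vstack([-D1, np.eye(n - r)])   # columns span N(H)
    proj = np.eye(n) - D @ np.linalg.inv(D.T @ D) @ D.T   # I - DD+, eq. (11)
    return P @ proj @ EAs           # eq. (12)

A = np.array([[1., 0., -1.],
              [-1., 1., 0.],
              [0., -1., 1.],
              [1., 0., -1.]])
assert np.allclose(elimination_pinv(A), np.linalg.pinv(A))
```

Note that only A*A (n × n) is eliminated, in line with remark (ii) below.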
A. BEN-ISRAEL AND S. J. WERSAN
Method (7) rewritten as¹

A+ = PH*(HH*)+EA* (7a)

requires the inversion of the r × r matrix I_r + D1D1* (the only nonzero block of HH*, D1 denoting the nonzero off-diagonal block of H). Similarly, if A*A is singular, the method (12) rewritten as

A+ = P(I − D(D*D)^{-1}D*)EA* (12a)

requires the inversion of the (n − r) × (n − r) matrix D*D.
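The block structure behind (7a) and (12a) can be illustrated numerically; here H and its block D1 are hypothetical stand-ins, not the paper's example:

```python
import numpy as np

# Hypothetical Gauss-Jordan form of some A*A (r = 2, n = 4); D1 is an
# illustrative block, not taken from the paper.
D1 = np.array([[1., -2.],
               [0.5, 1.]])
r, k = D1.shape                    # k = n - r
n = r + k
H = np.block([[np.eye(r), D1],
              [np.zeros((k, r)), np.zeros((k, k))]])

# (7a): the only nonzero block of HH* is the r x r matrix I_r + D1 D1*.
assert np.allclose((H @ H.T)[:r, :r], np.eye(r) + D1 @ D1.T)

# (11)/(12a): H+H = I - D(D*D)^{-1}D*, where the columns of D span N(H).
D = np.vstack([-D1, np.eye(k)])
lhs = np.linalg.pinv(H) @ H
rhs = np.eye(n) - D @ np.linalg.inv(D.T @ D) @ D.T
assert np.allclose(lhs, rhs)
```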
(i) Zero rows [or columns] in A result in corresponding zero columns [or rows] in A+. Hence an obvious reduction: work with Ã, the matrix obtained from A by striking out all zero rows and columns; compute Ã+ by either (7a) or (12a); and insert zero columns and rows to obtain A+.
(ii) Another possible reduction in computations and space is obtained by working with A*A if m ≥ n (A is an m × n matrix), and with AA* if m < n. The latter case results in A*+, which must then be transposed to obtain A+.
(iii) For nonsingular matrices the above methods require more operations than the ordinary inversion methods, owing to the formation of A*A. Thus for the nonsingular case m = n = r both methods require roughly 4n³/3 multiplications, n²/2 divisions and 4n³/3 additions.
(iv) Because the last (n − r) rows of EA* are zero, in method (12a) one need not compute the last (n − r) columns of the matrix (I − DD+) [see example below].
(v) As in other elimination methods, the above methods depend critically on
the correct determination of the rank, which in turn depends on the approximation and roundoff errors.
(vi) By the fact that R(H*) = R(P*A*A) = R(P*A*), equation (9) can be rewritten as A+ = P·P_R(P*A*)·EA*, with E, P as above.
(vii) Equation (7) can be given an alternate proof by using the fact that A+ is the unique solution of the following extremum problem:

Minimize trace(X*X) subject to A*AX = A*. (13)

Multiplying the constraint on the left by E and inserting PP* = I transforms (13) into

(EA*AP)P*X = EA*. (14)

By [15] the unique solution of (14) is (7):

P*A+ = (EA*AP)+EA*.
¹ Equation (7a) is due to Professor A. Charnes and one of us; cf. Notices Amer. Math. Soc. 10, 1 (1963), 135.
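The extremum characterization in remark (vii) can be checked numerically; in the sketch below (NumPy, real case) the perturbation Z is an arbitrary illustrative choice:

```python
import numpy as np

# A random rank-2, 5 x 3 real matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
Ap = np.linalg.pinv(A)

# A+ satisfies the constraint of (13): A*AX = A*.
assert np.allclose(A.T @ A @ Ap, A.T)

# So does every X = A+ + (I - A+A)Z ...
Z = rng.standard_normal((3, 5))
X = Ap + (np.eye(3) - Ap @ A) @ Z
assert np.allclose(A.T @ A @ X, A.T)

# ... but A+ has the smallest trace(X*X) = ||X||_F^2 among all solutions,
# since the correction (I - A+A)Z is orthogonal to A+ in the Frobenius sense.
assert np.trace(Ap.T @ Ap) < np.trace(X.T @ X)
```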
Example

[Worked numerical example: A*A is diagonalized against A*, pivot elements circled; the tableaux give EA* and D, from which (D*D)^{-1} and DD+ = D(D*D)^{-1}D* are computed, and A+ is obtained from (12a). It is actually unnecessary to compute the last (n − r) columns of I − DD+.]
5. DEN BROEDER, G. G., JR., AND CHARNES, A. Contributions to the theory of a generalized inverse for matrices. Purdue University, Lafayette, Ind., 1957. Republished as ONR Research Memo No. 39, Northwestern Univ., The Tech. Inst., Evanston, Ill., 1962.
6. CLINE, R. E. On the computation of the generalized inverse A+ of an arbitrary matrix A, and the use of certain associated eigenvectors in solving the allocation problem. Preliminary report, Purdue Univ., Statistical and Computing Lab., Lafayette, Ind., 1958.
7. GREVILLE, T. N. E. The pseudoinverse of a rectangular or singular matrix and its applications to the solution of systems of linear equations. SIAM Rev. 1 (1959), 38-43.
8. ——. Some applications of the pseudoinverse of a matrix. SIAM Rev. 2, 1 (1960), 15-22.
9. HESTENES, M. R. Inversion of matrices by biorthogonalization and related results. J. SIAM 6 (1958), 84.
10. ——. Relative Hermitian matrices. Pacific J. Math. 11, 1 (1961), 225-245.
11. ——. A ternary algebra with applications to matrices and linear transformations. Arch. Rat. Mech. Anal. 11, 2 (1962), 138-194.
12. MOORE, E. H. Bull. Amer. Math. Soc. 26 (1920), 394-395.
13. ——. General Analysis, Pt. I. Memoir Amer. Philos. Soc. 1 (1935).
14. PENROSE, R. A generalized inverse for matrices. Proc. Camb. Philos. Soc. 51, 3 (1955), 406-413.
15. ——. On best approximate solutions of linear matrix equations. Proc. Camb. Philos. Soc. 52, 1 (1956), 17-19.
16. PYLE, L. D. A gradient projection method for solving programming problems using the generalized inverse A+ and the eigenvectors e(1), ..., e(n) of I − A+A. Preliminary report, Purdue Univ., Statistical and Computing Lab., Lafayette, Ind., 1958.