
In linear algebra, the Cauchy–Binet formula, named after Augustin-Louis Cauchy and Jacques Philippe Marie Binet, is an identity for the determinant of the product of two rectangular matrices of transpose shapes (so that the product is well-defined and square). It generalizes the statement that the determinant of a product of square matrices is equal to the product of their determinants. The formula is valid for matrices with entries from any commutative ring.

Contents

• 1 Statement
• 2 Special cases
o 2.1 In the case n = 3
• 3 A simple proof
• 4 Proof
• 5 Relation to the generalized Kronecker delta
• 6 Geometric interpretations
• 7 Generalization
• 8 References
• 9 External links

Statement

Let A be an m×n matrix and B an n×m matrix. Write [n] for the set {1, ..., n}, and $\tbinom{[n]}{m}$ for the set of m-combinations of [n] (i.e., subsets of [n] of size m; there are $\tbinom{n}{m}$ of them). For $S \in \tbinom{[n]}{m}$, write $A_{[m],S}$ for the m×m matrix whose columns are the columns of A at indices from S, and $B_{S,[m]}$ for the m×m matrix whose rows are the rows of B at indices from S. The Cauchy–Binet formula then states

$$\det(AB) = \sum_{S \in \tbinom{[n]}{m}} \det(A_{[m],S}) \det(B_{S,[m]}).$$

Example: taking m = 2 and n = 3, and matrices $A = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 1 & -1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 1 \\ 3 & 1 \\ 0 & 2 \end{pmatrix}$, the Cauchy–Binet formula gives the determinant:

$$\det(AB) = \det\begin{pmatrix} 1 & 1 \\ 3 & 1 \end{pmatrix}\det\begin{pmatrix} 1 & 1 \\ 3 & 1 \end{pmatrix} + \det\begin{pmatrix} 1 & 2 \\ 3 & -1 \end{pmatrix}\det\begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} + \det\begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix}\det\begin{pmatrix} 3 & 1 \\ 0 & 2 \end{pmatrix}.$$

Indeed $AB = \begin{pmatrix} 4 & 6 \\ 6 & 2 \end{pmatrix}$, and its determinant is −28, which is also the value given by the right hand side of the formula ($4 - 14 - 18 = -28$).
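Such identities are easy to check numerically by brute force over all m-element column subsets. Below is a minimal Python sketch (the helper name cauchy_binet_rhs is illustrative, not standard) that verifies the example above:

```python
# A brute-force check of the Cauchy–Binet formula on the example above.
# The helper name cauchy_binet_rhs is illustrative, not standard.
from itertools import combinations

import numpy as np

def cauchy_binet_rhs(A, B):
    """Sum det(A[:, S]) * det(B[S, :]) over all m-subsets S of {0, ..., n-1}."""
    m, n = A.shape
    return sum(
        np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
        for S in combinations(range(n), m)
    )

A = np.array([[1, 1, 2],
              [3, 1, -1]])
B = np.array([[1, 1],
              [3, 1],
              [0, 2]])

print(np.linalg.det(A @ B))    # -28.0 (up to rounding)
print(cauchy_binet_rhs(A, B))  # -28.0
```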

Special cases

If n < m then $\tbinom{[n]}{m}$ is the empty set, and the formula says that det(AB) = 0 (its right hand side is an empty sum); indeed in this case the rank of the m×m matrix AB is at most n, which implies that its determinant is zero. If n = m, the case where A and B are square matrices, $\tbinom{[n]}{m} = \{[n]\}$ is a singleton set, so the sum only involves S = [n], and the formula states that det(AB) = det(A)det(B).
For m = 0, A and B are empty matrices (but of different shapes if n > 0), as is their product AB; the summation involves a single term S = Ø, and the formula states 1 = 1, with both sides given by the determinant of the 0×0 matrix. For m = 1, the summation ranges over the collection $\tbinom{[n]}{1}$ of the n different singletons taken from [n], and both sides of the formula give $\sum_{j=1}^{n} A_{1,j} B_{j,1}$, the dot product of the pair of vectors represented by the matrices. The smallest value of m for which the formula states a non-trivial equality is m = 2; it is discussed in the article on the Binet–Cauchy identity.
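As a quick sanity check of these special cases, a small sketch with arbitrary integer test data can confirm that det(AB) vanishes when n < m and that the m = 1 case reduces to the dot product:

```python
# Sanity checks for the special cases, using arbitrary integer test data.
import numpy as np

rng = np.random.default_rng(0)

# n < m: AB is m x m of rank at most n < m, so det(AB) = 0.
A = rng.integers(-5, 6, size=(3, 2))  # m = 3, n = 2
B = rng.integers(-5, 6, size=(2, 3))
print(np.linalg.det(A @ B))  # 0 up to floating-point error

# m = 1: both sides reduce to the dot product sum_j A[0, j] * B[j, 0].
a = rng.integers(-5, 6, size=(1, 4))
b = rng.integers(-5, 6, size=(4, 1))
print((a @ b)[0, 0], int(a.ravel() @ b.ravel()))  # equal
```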

In the case n = 3

Let $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{d}$ be three-dimensional vectors. For m = 1 the formula reduces to the dot product, $\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{3} a_i b_i$. For m = 2 it gives the Binet–Cauchy identity for cross products,

$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}).$$

For m = 3 it expresses the multiplicativity of 3×3 determinants, i.e. the product of two scalar triple products. In the case m > 3, the right-hand side always equals 0.
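A minimal numeric check of the m = 2 case, with arbitrary test vectors:

```python
# Check of the m = 2 case (Binet–Cauchy identity) with arbitrary vectors.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])
d = np.array([1.0, 0.0, 2.0])

lhs = np.dot(np.cross(a, b), np.cross(c, d))
rhs = np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)
print(lhs, rhs)  # both sides agree
```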

A simple proof

The following simple proof presented in [1] relies on two facts that can be proven in several different ways:

1. For any $1 \le k \le n$, the coefficient of $z^{n-k}$ in the polynomial $\det(z I_n + X)$ is the sum of the $k \times k$ principal minors of the $n \times n$ matrix $X$.
2. If $m \le n$, $A$ is an $m \times n$ matrix and $B$ an $n \times m$ matrix, then
$$\det(z I_n + BA) = z^{n-m} \det(z I_m + AB).$$

Now, comparing the coefficient of $z^{n-m}$ on the two sides of this equation, the left hand side gives the sum of the $m \times m$ principal minors of $BA$, while the right hand side gives the constant term of $\det(z I_m + AB)$, which is simply $\det(AB)$. This is what the Cauchy–Binet formula states, i.e.

$$\det(AB) = \sum_{S \in \tbinom{[n]}{m}} \det((BA)_{S,S}) = \sum_{S \in \tbinom{[n]}{m}} \det(B_{S,[m]}) \det(A_{[m],S}),$$

since each principal minor $(BA)_{S,S}$ equals $B_{S,[m]} A_{[m],S}$, to which the square case of the formula applies.
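Both facts can be checked symbolically for small m and n. The sketch below (using sympy with generic symbolic entries; the variable names are illustrative) verifies fact 2 and then the coefficient comparison:

```python
# Symbolic check of the two facts for m = 2, n = 3, with generic entries.
from itertools import combinations

import sympy as sp

z = sp.symbols('z')
m, n = 2, 3
A = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'a{i}{j}'))
B = sp.Matrix(n, m, lambda i, j: sp.Symbol(f'b{i}{j}'))

# Fact 2: det(z*I_n + B*A) == z**(n-m) * det(z*I_m + A*B).
lhs = sp.det(z * sp.eye(n) + B * A)
rhs = z ** (n - m) * sp.det(z * sp.eye(m) + A * B)
print(sp.expand(lhs - rhs))  # 0

# Fact 1 applied: the z**(n-m) coefficient of the left side is the sum of
# the m x m principal minors of B*A, which Cauchy–Binet equates to det(A*B).
BA = B * A
minors = sum(BA.extract(list(S), list(S)).det()
             for S in combinations(range(n), m))
coeff = sp.Poly(lhs, z).coeff_monomial(z ** (n - m))
print(sp.expand(coeff - minors))           # 0
print(sp.expand(minors - sp.det(A * B)))   # 0: the Cauchy–Binet formula
```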

Proof
There are various kinds of proofs that can be given for the Cauchy−Binet
formula. The proof below is based on formal manipulations only, and avoids
using any particular interpretation of determinants, which may be taken to be
defined by the Leibniz formula. Only their multilinearity with respect to rows and
columns, and their alternating property (vanishing in the presence of equal rows
or columns) are used; in particular the multiplicative property of determinants for
square matrices is not used, but is rather established (the case n = m). The
proof is valid for arbitrary commutative coefficient rings.
The formula can be proved in two steps:

1. use the fact that both sides are multilinear (more precisely 2m-linear) in
the rows of A and the columns of B, to reduce to the case that each row
of A and each column of B has only one non-zero entry, which is 1.
2. handle that case using the functions [m] → [n] that map respectively the
row numbers of A to the column number of their nonzero entry, and the
column numbers of B to the row number of their nonzero entry.
For step 1, observe that for each row of A or column of B, and for each m-combination S, the values of det(AB) and $\det(A_{[m],S}) \det(B_{S,[m]})$ indeed depend
linearly on the row or column. For the latter this is immediate from the
multilinear property of the determinant; for the former one must in addition
check that taking a linear combination for the row of A or column of B while
leaving the rest unchanged only affects the corresponding row or column of the
product AB, and by the same linear combination. Thus one can work out both
sides of the Cauchy−Binet formula by linearity for every row of A and then also
every column of B, writing each of the rows and columns as a linear
combination of standard basis vectors. The resulting multiple summations are
huge, but they have the same form for both sides: corresponding terms involve
the same scalar factor (each is a product of entries of A and of B), and these
terms only differ by involving two different expressions in terms of constant
matrices of the kind described above, which expressions should be equal
according to the Cauchy−Binet formula. This achieves the reduction of the first
step.
Concretely, the multiple summations can be grouped into two summations, one over all functions f:[m] → [n] that for each row index of A gives a corresponding column index, and one over all functions g:[m] → [n] that for each column index of B gives a corresponding row index. The matrices associated to f and g are

$$L_f = \bigl(\delta_{f(i),j}\bigr)_{i \in [m],\, j \in [n]} \quad\text{and}\quad R_g = \bigl(\delta_{j,g(k)}\bigr)_{j \in [n],\, k \in [m]},$$

where $\delta$ is the Kronecker delta, and the Cauchy−Binet formula to prove has been rewritten as

$$\sum_{f:[m]\to[n]} \sum_{g:[m]\to[n]} p(f,g) \det(L_f R_g) = \sum_{f:[m]\to[n]} \sum_{g:[m]\to[n]} p(f,g) \sum_{S \in \tbinom{[n]}{m}} \det((L_f)_{[m],S}) \det((R_g)_{S,[m]}),$$

where $p(f,g)$ denotes the scalar factor $\prod_{i=1}^{m} A_{i,f(i)} B_{g(i),i}$. It remains to prove the Cauchy−Binet formula for $A = L_f$ and $B = R_g$, for all f,g:[m] → [n].
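Before carrying out step 2 on paper, the reduced identity can be checked exhaustively for small m and n. In the sketch below, the helper names make_Lf and make_Rg are hypothetical, not from the article:

```python
# Exhaustive check of the reduced case for m = 2, n = 3: Cauchy–Binet holds
# for every pair of 0/1 matrices L_f, R_g. Helper names are hypothetical.
from itertools import combinations, product

import numpy as np

def make_Lf(f, n):
    """m x n matrix with (L_f)[i, j] = 1 iff j == f(i)."""
    L = np.zeros((len(f), n), dtype=int)
    for i, fi in enumerate(f):
        L[i, fi] = 1
    return L

def make_Rg(g, n):
    """n x m matrix with (R_g)[j, k] = 1 iff j == g(k)."""
    return make_Lf(g, n).T

m, n = 2, 3
for f in product(range(n), repeat=m):
    for g in product(range(n), repeat=m):
        Lf, Rg = make_Lf(f, n), make_Rg(g, n)
        lhs = round(np.linalg.det(Lf @ Rg))
        rhs = round(sum(np.linalg.det(Lf[:, list(S)]) * np.linalg.det(Rg[list(S), :])
                        for S in combinations(range(n), m)))
        assert lhs == rhs  # both vanish unless f, g are injective with equal image
print("all", n ** m * n ** m, "pairs check out")
```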
For this step 2, if f fails to be injective then $L_f$ and $L_f R_g$ both have two identical rows, and if g fails to be injective then $R_g$ and $L_f R_g$ both have two identical columns; in either case both sides of the identity are zero. Supposing now that both f and g are injective maps [m] → [n], the factor $\det((L_f)_{[m],S})$ on the right is zero unless S = f([m]), while the factor $\det((R_g)_{S,[m]})$ is zero unless S = g([m]). So if the images of f and g are different, the right hand side has only null terms, and the left hand side is zero as well since $L_f R_g$ has a null row (for i with $f(i) \notin g([m])$). In the remaining case where the images of f and g are the same, say f([m]) = S = g([m]), we need to prove that

$$\det(L_f R_g) = \det((L_f)_{[m],S}) \det((R_g)_{S,[m]}).$$

Let h be the unique increasing bijection [m] → S, and π, σ the permutations of [m] such that $f = h \circ \pi$ and $g = h \circ \sigma$; then $(L_f)_{[m],S}$ is the permutation matrix for π, $(R_g)_{S,[m]}$ is the permutation matrix for $\sigma^{-1}$, and $L_f R_g$ is the permutation matrix for $\sigma^{-1} \circ \pi$, and since the determinant of a permutation matrix equals the signature of the permutation, the identity follows from the fact that signatures are multiplicative.
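Both ingredients of this last step, that the determinant of a permutation matrix is the signature and that signatures are multiplicative, can be checked directly; a small sketch with illustrative helper names:

```python
# Check that det of a permutation matrix is the permutation's signature,
# and that signatures multiply. Helper names are illustrative.
from itertools import permutations

import numpy as np

def perm_matrix(p):
    """Square 0/1 matrix P with P[i, p[i]] = 1, so det(P) = sgn(p)."""
    P = np.zeros((len(p), len(p)), dtype=int)
    for i, pi in enumerate(p):
        P[i, pi] = 1
    return P

def sign(p):
    """Signature of p, computed by counting inversions."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return (-1) ** inv

for p in permutations(range(3)):
    for q in permutations(range(3)):
        Pp, Pq = perm_matrix(p), perm_matrix(q)
        assert round(np.linalg.det(Pp)) == sign(p)
        assert round(np.linalg.det(Pp @ Pq)) == sign(p) * sign(q)
```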
Using multilinearity with respect to both the rows of A and the columns of B in the proof is not necessary; one could use just one of them, say the former, and use that a matrix product $L_f B$ either consists of a permutation of the rows of $B_{f([m]),[m]}$ (if f is injective), or has at least two equal rows.
