Academic year: 2021

Week 8, Monday: det(A) = det(A^t).

Elementary matrices. An n × n matrix is called an elementary matrix if it is obtained from the identity matrix, I_n, through a single elementary row operation (scaling a row by a nonzero scalar, swapping two rows, or adding a multiple of one row to another).

Here is why elementary matrices are interesting: Let E be an n × n elementary matrix corresponding to some row operation and let A be any n × k matrix. Then EA is the matrix obtained from A by performing that row operation. Thus, you can perform row operations through multiplication by elementary matrices.

Example. Let

    A = [ 1 2  3 4 ]
        [ 3 0 −1 2 ]
        [ 1 5  6 7 ].

To find the elementary matrix that will subtract 3 times the first row of A from the second row, we do that same operation to the identity matrix:

    [ 1 0 0 ]                             [  1 0 0 ]
    [ 0 1 0 ]   ──(r2 → r2 − 3r1)──>  E = [ −3 1 0 ]
    [ 0 0 1 ]                             [  0 0 1 ].

Multiplying by E on the left performs the same elementary row operation on A:

    EA = [  1 0 0 ] [ 1 2  3 4 ]   [ 1  2   3   4 ]
         [ −3 1 0 ] [ 3 0 −1 2 ] = [ 0 −6 −10 −10 ]
         [  0 0 1 ] [ 1 5  6 7 ]   [ 1  5   6   7 ].
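This multiplication is easy to replay mechanically. Here is a minimal plain-Python sketch (an illustration added alongside the notes; the `matmul` helper is written out so the snippet is self-contained):

```python
# Verify that left-multiplying by E performs r2 -> r2 - 3*r1 on A.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, 0, 0],
     [-3, 1, 0],   # row 2 of I_3 after subtracting 3 times row 1
     [0, 0, 1]]

A = [[1, 2, 3, 4],
     [3, 0, -1, 2],
     [1, 5, 6, 7]]

EA = matmul(E, A)
print(EA[1])   # the new second row r2 - 3*r1: [0, -6, -10, -10]
```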

Performing a sequence of row operations on a matrix is thus equivalent to multiplying on the left by a sequence of elementary matrices. In particular, if Ã is the reduced echelon form of A, then there are elementary matrices E_1, . . . , E_ℓ such that

    Ã = E_ℓ · · · E_2 E_1 A.


Determinant of the transpose. If A is an m × n matrix, recall that its transpose is the matrix A^t defined by (A^t)_{ij} := A_{ji}. Thus, the rows of A^t are the columns of A.

Our goal now is to prove the amazing fact that det(A) = det(A^t).

Example. We have seen that

    det [ a b ; c d ] = ad − bc.

Note that we also have

    det( [ a b ; c d ]^t ) = det [ a c ; b d ] = ad − bc.

Recall that we can compute the determinant of A by performing row operations and keeping track of swaps and scalings of rows. Once we have shown that det(A) = det(A^t), it follows that, in order to compute the determinant of A, we may also use column operations (again keeping track of swaps and scalings). That's because row operations applied to A^t are the same as column operations applied to A.
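Before the proof, the claim can be spot-checked numerically. A plain-Python sketch using the standard 3 × 3 determinant formula (the matrix here is an arbitrary choice, not one from the notes):

```python
# Check det(A) = det(A^t) on a 3x3 example, via the explicit 3x3 formula.

def det3(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[2, 5, 1], [3, 7, 4], [0, 2, 6]]
print(det3(A), det3(transpose(A)))   # -> -16 -16
```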

To prove this fact about determinants of transposes, we need the following three results:

Theorem. Let A and B be n × n matrices. Then det(AB) = det(A) det(B).

Proof. Upcoming homework. □
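The proof is deferred, but the 2 × 2 case is easy to spot-check with the formula det [ a b ; c d ] = ad − bc. A plain-Python sketch (the matrices are arbitrary choices):

```python
# Spot-check the product rule det(AB) = det(A) det(B) for 2x2 matrices.

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert det2(matmul2(A, B)) == det2(A) * det2(B)   # (-2) * (-2) == 4
```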

Proposition. Let A and B be n × n matrices. Then

1. (AB)^t = B^t A^t.

2. If A is invertible, then (A^t)^{−1} = (A^{−1})^t.

Proof. Upcoming homework. □

Lemma. Let E be an elementary matrix. Then det(E) = det(E^t) ≠ 0.

Proof. There are three cases to consider:


1. Suppose E is formed from I_n by swapping rows i and j. In this case, E^t is also formed from I_n by swapping rows i and j. So in this case, det(E) = −1 = det(E^t).

2. Suppose E is formed from I_n by scaling row i by some scalar λ ≠ 0. In this case, E^t is also formed from I_n by scaling row i by λ. So in this case, det(E) = λ = det(E^t).

3. Suppose E is formed from I_n by adding row i to row j, where i ≠ j. Then E^t is formed from I_n by adding row j to row i. So in this case, det(E^t) = det(E) = det(I_n) = 1.

□

We can now prove our main result:

Theorem. Let A be an n × n matrix. Then det(A) = det(A^t).

Proof. Let Ã be the reduced echelon form of A, and choose elementary matrices E_i such that

    Ã = E_ℓ · · · E_2 E_1 A.    (20.1)

Taking determinants and using the fact that the determinant preserves products:

    det(Ã) = det(E_ℓ) · · · det(E_2) det(E_1) det(A).

Since the determinant of an elementary matrix is nonzero, we get

    det(A) = det(E_ℓ)^{−1} · · · det(E_2)^{−1} det(E_1)^{−1} det(Ã).    (20.2)

Take transposes in equation (20.1):

    Ã^t = A^t E_1^t E_2^t · · · E_ℓ^t.

Take determinants, solve for det(A^t), and use the fact that det(E) = det(E^t) if E is an elementary matrix:

    det(A^t) = det(E_1^t)^{−1} det(E_2^t)^{−1} · · · det(E_ℓ^t)^{−1} det(Ã^t)
             = det(E_1)^{−1} det(E_2)^{−1} · · · det(E_ℓ)^{−1} det(Ã^t)
             = det(E_ℓ)^{−1} · · · det(E_2)^{−1} det(E_1)^{−1} det(Ã^t).    (20.3)

To finish the proof, we consider two cases. The first case is where rank(A) = n. This is the case if and only if Ã = I_n, in which case Ã^t = I_n as well, and hence det(Ã) = det(Ã^t) = det(I_n) = 1. From equations (20.2) and (20.3), it follows that

    det(A) = det(E_ℓ)^{−1} · · · det(E_2)^{−1} det(E_1)^{−1} = det(A^t).


The second case is where rank(A) < n. Since row and column rank are the same, we have rank(A^t) = rank(A) < n, as well. Recall that an n × n matrix has nonzero determinant if and only if its rank is n (since otherwise its reduced echelon form will have a row of 0s). So in this case, we deduce det(A) = det(A^t) = 0 without needing to consider elementary matrices. □

The following incredible result immediately follows:

Corollary. The determinant is a multilinear, alternating, normalized function of the columns of a square matrix.


WEEK 8, WEDNESDAY: PERMUTATION EXPANSION OF THE DETERMINANT

Definition. A permutation of a set X is a bijection X → X. If n is a positive integer, the set of all permutations of {1, . . . , n} is denoted by S_n and called the symmetric group of degree n.

We can represent a permutation σ ∈ S_n by the 2 × n array

    [ 1    2    · · ·  n    ]
    [ σ(1) σ(2) · · ·  σ(n) ].

We can thus think of σ as a rearrangement of the numbers 1, . . . , n. It follows that #S_n = n!, since the number of possible rearrangements of 1, . . . , n is n!.

Example. The six elements of S_3 are listed below.

    [ 1 2 3 ]  [ 1 2 3 ]  [ 1 2 3 ]  [ 1 2 3 ]  [ 1 2 3 ]  [ 1 2 3 ]
    [ 1 2 3 ]  [ 2 1 3 ]  [ 3 2 1 ]  [ 1 3 2 ]  [ 2 3 1 ]  [ 3 1 2 ]
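The bottom rows of these two-line arrays are exactly the orderings produced by a permutation generator. A short Python illustration:

```python
# Enumerate S_3 as all orderings of (1, 2, 3).

from itertools import permutations

S3 = list(permutations((1, 2, 3)))
print(len(S3))   # -> 6, matching #S_3 = 3! = 6
print(S3)        # each tuple is the bottom row of one two-line array
```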

Let us fix a field F for the remainder of the discussion. We will derive a formula for the determinant of any square matrix over F by using permutations.

Definition. Let e_1, . . . , e_n be the standard basis vectors in F^n and let σ ∈ S_n. The permutation matrix corresponding to σ is the matrix P_σ ∈ M_{n×n}(F) whose i-th column is e_{σ(i)}.

Example. Let σ ∈ S_3 be given by σ(1) = 2, σ(2) = 3, and σ(3) = 1. Then

    P_σ = [ 0 0 1 ]
          [ 1 0 0 ]
          [ 0 1 0 ].
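The defining rule "column i of P_σ is e_{σ(i)}" translates directly into code. A plain-Python sketch (the dict representation of σ is just a convenient choice for the snippet):

```python
# Build P_sigma from sigma: its i-th column is e_{sigma(i)}.

def perm_matrix(sigma):
    """sigma is a dict {i: sigma(i)} on {1, ..., n}; returns P_sigma as rows."""
    n = len(sigma)
    P = [[0] * n for _ in range(n)]
    for i in range(1, n + 1):
        P[sigma[i] - 1][i - 1] = 1   # column i is e_{sigma(i)} (0-based storage)
    return P

sigma = {1: 2, 2: 3, 3: 1}
for row in perm_matrix(sigma):
    print(row)
# -> [0, 0, 1]
#    [1, 0, 0]
#    [0, 1, 0]
```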

Proposition (Properties of permutation matrices).

(a) The σ(i)-th row of P_σ is e_i.

(b) P_σ is an orthogonal matrix, i.e., P_σ P_σ^t = I_n.

(c) The set {P_σ : σ ∈ S_n} is equal to the set of all matrices in M_{n×n}(F) having exactly one 1 in every row and column, and zeros elsewhere.

(d) P_{στ} = P_σ P_τ for all σ, τ ∈ S_n.

Proof. We will prove (a) and (b) and leave the rest as an exercise. By definition, the columns of P_σ are e_{σ(1)}, . . . , e_{σ(n)}. Letting j = σ(i), the j-th row of P_σ is therefore

    ( (e_{σ(1)})_j, . . . , (e_{σ(i)})_j, . . . , (e_{σ(n)})_j ) = (0, . . . , 1, . . . , 0) = e_i.

This proves (a). To prove (b), note that for all indices 1 ≤ a, b ≤ n we have

    (P_σ)_{ab} = (e_{σ(b)})_a = δ_{σ(b)a},

where δ is the Kronecker delta function. The definition of matrix multiplication now yields

    (P_σ P_σ^t)_{ij} = Σ_{k=1}^{n} (P_σ)_{ik} (P_σ^t)_{kj} = Σ_{k=1}^{n} (P_σ)_{ik} (P_σ)_{jk} = Σ_{k=1}^{n} δ_{σ(k)i} δ_{σ(k)j}.

If i ≠ j, then the latter sum is equal to 0; if i = j, the sum is 1. Thus, we have shown that P_σ P_σ^t = I_n. □
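Parts (b) and (d) can also be confirmed by brute force over all of S_3. A plain-Python sketch (the helpers are written out so the snippet stands alone):

```python
# Check orthogonality (b) and multiplicativity (d) for every permutation in S_3.

from itertools import permutations

def perm_matrix(p):
    """p is a tuple (p(1), ..., p(n)) with 1-based values; column i is e_{p(i)}."""
    n = len(p)
    P = [[0] * n for _ in range(n)]
    for i in range(n):
        P[p[i] - 1][i] = 1
    return P

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for s in permutations((1, 2, 3)):
    P = perm_matrix(s)
    assert matmul(P, transpose(P)) == I3            # part (b): P P^t = I_n
    for t in permutations((1, 2, 3)):
        st = tuple(s[t[i] - 1] for i in range(3))   # composition (s . t)(i) = s(t(i))
        assert perm_matrix(st) == matmul(perm_matrix(s), perm_matrix(t))  # part (d)
print("all checks passed")
```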


Remarks.

(1) It follows from part (a) of the proposition that P_σ can be obtained from I_n by permuting its rows according to σ. Thus, if σ maps i to j, then in order to construct P_σ we take the i-th row of I_n and place it in row j. Rethink the previous example in these terms.

(2) As shown previously, orthogonal matrices have determinant ±1. Thus, part (b) of the proposition implies that every permutation matrix has determinant ±1.

(3) If you’re familiar with the rules of chess, you may like to think of permutation matrices in the following way. By part (c) of the proposition, if we were to place a rook in every spot where P has a 1, then no rook would be attacking another. For this reason, permutation matrices are sometimes referred to as rook placements.

Definition. A transposition in S_n is a permutation which interchanges two elements of {1, . . . , n} and fixes all others. If a and b are distinct elements of {1, . . . , n}, we write (ab) to denote the transposition interchanging a and b.

Example. In S_2 there is only one nontrivial permutation, and it is the transposition (12). In S_3 the transpositions are (12), (13), and (23).

Definition. The sign of a permutation σ ∈ S_n is the scalar sign(σ) = det(P_σ). Note that the sign of any permutation is ±1 by the second remark above.

Proposition. Suppose that σ is a composition of k transpositions. Then sign(σ) = (−1)^k.

Proof. We prove the case k = 1 first. Thus, suppose that σ is a transposition. By the first remark above, P_σ is obtained from I_n by swapping two rows. Therefore

    sign(σ) = det(P_σ) = −det(I_n) = −1 = (−1)^k.

To prove the general case, write σ = τ_1 τ_2 · · · τ_k, where each τ_i is a transposition. By part (d) of the previous proposition, P_σ = P_{τ_1} P_{τ_2} · · · P_{τ_k}. Using the multiplicativity of the determinant and the case k = 1, we obtain det(P_σ) = det(P_{τ_1}) · · · det(P_{τ_k}) = (−1)^k. □

Remark. It is an important fact that every element of S_n can be written as a composition of transpositions. A proof of this is typically given in any introductory course on group theory.
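The proposition can be stress-tested numerically: compute sign(σ) from an inversion count (which agrees with the parity of σ) and check that a product of k transpositions always has sign (−1)^k. A plain-Python sketch in S_4:

```python
# Random products of k transpositions in S_4 always have sign (-1)^k.

import random

def sign(p):
    """p is a tuple (p(1), ..., p(n)); sign via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def apply_transposition(p, a, b):
    """Compose the transposition (ab) with p: swap the values a and b."""
    q = list(p)
    for i, v in enumerate(q):
        if v == a:
            q[i] = b
        elif v == b:
            q[i] = a
    return tuple(q)

random.seed(0)
for _ in range(100):
    k = random.randint(1, 6)
    p = (1, 2, 3, 4)                        # start from the identity
    for _ in range(k):
        a, b = random.sample([1, 2, 3, 4], 2)
        p = apply_transposition(p, a, b)
    assert sign(p) == (-1) ** k
print("all checks passed")
```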

Theorem (Permutation expansion of the determinant). For every matrix A ∈ M_{n×n}(F),

    det(A) = Σ_{σ ∈ S_n} sign(σ) A_{1σ(1)} A_{2σ(2)} · · · A_{nσ(n)}.

Proof. The i-th row of A is r_i = (A_{i1}, . . . , A_{in}) = A_{i1} e_1 + · · · + A_{in} e_n. Thus

    det(A) = det(r_1, . . . , r_n) = det(A_{11} e_1 + · · · + A_{1n} e_n, . . . , A_{n1} e_1 + · · · + A_{nn} e_n).

Using multilinearity to expand this determinant, we obtain n^n terms, each of the form

    A_{1j_1} · · · A_{nj_n} det(e_{j_1}, . . . , e_{j_n}).

(Consider the case n = 2 to see how this works.) If any two of the indices j_k are equal, then det(e_{j_1}, . . . , e_{j_n}) = 0. Therefore, only the sequences j_1, . . . , j_n corresponding to a permutation of {1, . . . , n} will contribute to the sum. Moreover, if σ ∈ S_n, the contribution of σ to the sum is

    A_{1σ(1)} A_{2σ(2)} · · · A_{nσ(n)} · det(e_{σ(1)}, . . . , e_{σ(n)}) = A_{1σ(1)} A_{2σ(2)} · · · A_{nσ(n)} · sign(σ),

where the equality follows from the fact that the matrix (e_{σ(1)}, . . . , e_{σ(n)}) is the transpose of P_σ. □
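The theorem's formula transcribes almost verbatim into code. A plain-Python sketch (indices shifted to 0-based):

```python
# det(A) = sum over permutations p of sign(p) * A[0][p(0)] * ... * A[n-1][p(n-1)].

from itertools import permutations
from math import prod

def sign(p):
    """Sign via inversion count (equals the transposition parity)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, 0], [0, 1]]) == 1               # normalized: det(I_2) = 1
assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3   # the familiar ad - bc
print(det([[1, 2, 3], [2, 0, 1], [1, 1, 1]]))   # -> 3
```

This brute-force sum has n! terms, so it is only practical for small n; the cofactor expansion below organizes the same sum recursively.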


WEEK 8, FRIDAY: COFACTOR EXPANSION OF THE DETERMINANT

Let F be a field and n a positive integer. Recall that we defined the determinant to be a function det : M_{n×n}(F) → F which is alternating, multilinear, and satisfies det(I_n) = 1. Up to this point we have not shown that such a function exists. However, our work on determinants does show that if such a function exists, then it satisfies

    det(A) = Σ_{σ ∈ S_n} sign(σ) A_{1σ(1)} A_{2σ(2)} · · · A_{nσ(n)}

for every matrix A ∈ M_{n×n}(F). (Recall that sign(σ) = 1 if σ is a composition of an even number of transpositions, and sign(σ) = −1 if σ is a composition of an odd number of transpositions.) Thus, if we show that a function with the above properties exists, then we will know that it is unique.

We will now prove the existence (and therefore uniqueness) of the determinant.

Notation. Let A be an n × n matrix. For indices 1 ≤ i, j ≤ n we denote by A(i|j) the matrix obtained by deleting the i-th row and the j-th column of A.

Lemma. Suppose that n > 1 and that D : M_{(n−1)×(n−1)}(F) → F is an alternating, multilinear map such that D(I_{n−1}) = 1. Fix j ∈ {1, . . . , n} and define a function d_j : M_{n×n}(F) → F by the formula

    d_j(A) = Σ_{i=1}^{n} (−1)^{i+j} A_{ij} D(A(i|j)).

Then d_j is alternating, multilinear, and satisfies d_j(I_n) = 1.

Proof. The proof is relatively straightforward and will be mostly left as an exercise. However, let us show that d_j(I_n) = 1. Note that I_n(j|j) = I_{n−1}, so that D(I_n(j|j)) = 1. Moreover, if i ≠ j, then the j-th row of I_n(i|j) is zero, so D(I_n(i|j)) = 0. Therefore d_j(I_n) = (−1)^{j+j} (I_n)_{jj} · 1 = 1. □

Proposition. There exists at least one determinant function d : M_{n×n}(F) → F.

Proof. The proof is by induction. The case n = 1 is trivial, so we proceed to the inductive step. Assume there is a determinant function D : M_{(n−1)×(n−1)}(F) → F, and let d be any one of the functions d_j defined in the lemma. Then d is a determinant function on n × n matrices. □

Theorem (Cofactor expansion of the determinant). For every n ≥ 1 there is a unique determinant function on n × n matrices. Moreover, this function satisfies

    det(A) = Σ_{i=1}^{n} (−1)^{i+j} A_{ij} det(A(i|j))

for every index j ∈ {1, . . . , n} and every matrix A ∈ M_{n×n}(F).

Proof. The existence follows from the previous proposition and the uniqueness follows from the permutation expansion. Since the determinant is unique, all the functions d_j in the previous lemma must be equal. □


By applying the theorem to A^t, and using the fact that det(A) = det(A^t), one can show the following related formula for the determinant.

Theorem. For every matrix A ∈ M_{n×n}(F) and every index i ∈ {1, . . . , n},

    det(A) = Σ_{j=1}^{n} (−1)^{i+j} A_{ij} det(A(i|j)).

Definition. The (i, j) cofactor of A is the scalar C_{ij} = (−1)^{i+j} det A(i|j).

The two theorems above prove the formulas

    det(A) = Σ_{j=1}^{n} A_{ij} C_{ij}   and   det(A) = Σ_{i=1}^{n} A_{ij} C_{ij}.

The first formula is called an expansion along the i-th row, and the second is called an expansion along the j-th column.
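The row expansion also gives a natural recursive implementation. A plain-Python sketch expanding along the first row (0-based indices, so the sign is (−1)^{0+j} = (−1)^j):

```python
# Recursive cofactor (Laplace) expansion along the first row.

def minor(A, i, j):
    """A(i|j): delete row i and column j (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # expansion along row 0: sum_j (-1)^j * A[0][j] * det(A(0|j))
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[1, 2, 3], [2, 0, 1], [1, 1, 1]]))   # -> 3
```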

Example. We will compute the determinant of the matrix

    A = [ 1 2 3 ]
        [ 2 0 1 ]
        [ 1 1 1 ]

in two ways. First, expanding along the second row:

    det(A) = −2 · det [ 2 3 ; 1 1 ] + 0 · det [ 1 3 ; 1 1 ] − 1 · det [ 1 2 ; 1 1 ]
           = (−2)(−1) − (−1) = 3.

Now expanding along the third column:

    det(A) = 3 · det [ 2 0 ; 1 1 ] − 1 · det [ 1 2 ; 1 1 ] + 1 · det [ 1 2 ; 2 0 ]
           = 3(2) − 1(−1) + 1(−4) = 3.

Remark. If a matrix has many zeros in some row or column, then expanding along that row or column will speed up the computation of the determinant. For instance, to compute the determinant of the matrix

    A = [ 1 3 0 ]
        [ 3 2 3 ]
        [ 1 4 0 ]

we expand along the third column to obtain

    det(A) = −3 · det [ 1 3 ; 1 4 ] = −3.

Next we will prove a useful formula for the inverse of a matrix.

Definition. Given a square matrix A, let C be the matrix of cofactors of A, so that C_{ij} is the (i, j) cofactor of A. The adjugate of A, also called the classical adjoint of A, is the matrix adj(A) = C^t.

Theorem. If A is an n × n matrix, then adj(A)A = (det A) I_n.

Proof. Let B = adj(A)A. We must show that B_{ij} = det(A) if i = j and B_{ij} = 0 otherwise. By the definition of matrix multiplication,

    B_{ij} = Σ_{k=1}^{n} adj(A)_{ik} A_{kj} = Σ_{k=1}^{n} C_{ki} A_{kj}.


If i = j, the latter sum corresponds to an expansion of det(A) along the j-th column, so B_{ij} = det(A). Now suppose that i ≠ j, and let M be the matrix obtained by replacing the i-th column of A with its j-th column. Note that M_{ki} = A_{kj} and M(k|i) = A(k|i) for every k. Thus, expanding det(M) along the i-th column we have

    det(M) = Σ_{k=1}^{n} M_{ki} (−1)^{k+i} det(M(k|i)) = Σ_{k=1}^{n} A_{kj} (−1)^{k+i} det(A(k|i)) = Σ_{k=1}^{n} A_{kj} C_{ki} = B_{ij}.

However, M has two equal columns, so det(M) = 0, and therefore B_{ij} = 0. □

Corollary. If A is an invertible matrix, then A^{−1} = (det A)^{−1} adj(A).

Proof. This is an immediate consequence of the theorem. □
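Both the theorem and the corollary are easy to check computationally. A plain-Python sketch with exact rational arithmetic, using the 3 × 3 matrix from the earlier worked example:

```python
# Build adj(A) = C^t from the cofactor definition, then verify
# adj(A) A = (det A) I and the inverse formula A^{-1} = adj(A) / det(A).

from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    n = len(A)
    C = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    return [[C[j][i] for j in range(n)] for i in range(n)]   # transpose of cofactors

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [2, 0, 1], [1, 1, 1]]
d = det(A)                                                 # 3
assert matmul(adjugate(A), A) == [[d, 0, 0], [0, d, 0], [0, 0, d]]

A_inv = [[Fraction(x, d) for x in row] for row in adjugate(A)]
assert matmul(A_inv, A) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print("all checks passed")
```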
