Section 4.3 The determinant of large matrices
In Definition 4.1.1 the determinant of matrices of size \(n \le 3\) was defined using simple formulas. For larger matrices, unfortunately, there is no such simple formula, and so we take a different approach. We reduce the problem of finding the determinant of one matrix of order \(n\) to the problem of finding \(n\) determinants of matrices of order \(n-1\text{.}\) So, for example, we find the determinant of a matrix of order \(4\) by evaluating the determinants of \(4\) matrices of order \(3\text{.}\) We have a formula for matrices of order \(3\text{,}\) so, in principle, it is possible to evaluate the determinant of any matrix of order \(4\text{.}\) We can then use this ability to find the determinant of any matrix of order \(5\text{:}\) reduce it to a problem involving \(5\) matrices of order \(4\text{,}\) which we already know how to solve. Continuing this line of reasoning gives us the ability to evaluate determinants of any size.
While this gives us the theoretical ability to compute determinants, the number of computations required quickly becomes unworkable. We need to improve our mathematical techniques to make the computations practical. The material developed in this section allows the easy evaluation of the determinants of many larger matrices.
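Although it is not part of the text's development, the recursive reduction just described can be sketched in code. The following Python function is only an illustration of the idea (the name det and the alternating signs across the first row anticipate the cofactor expansion defined later in this section):

def det(A):
    """Determinant of a square matrix A (a list of rows) by first-row expansion."""
    n = len(A)
    if n == 1:
        # A 1 x 1 matrix is just its single entry.
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, leaving an (n-1) x (n-1) matrix.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Signs alternate +, -, +, ... across the first row.
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2, 2, 3],
     [-1, 4, 5, 3],
     [3, 4, 8, -1],
     [1, 2, 2, 1]]
print(det(A))  # prints -52, the value found in Example 4.3.10

Applied blindly, this recursion performs roughly \(n!\) multiplications for a matrix of order \(n\text{,}\) which is exactly the impracticality referred to above.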
Subsection 4.3.1 A motivating computation
This subsection contains optional material. The goal is to motivate Theorem 4.3.4. It may be skipped on first reading if desired.
Definition 4.3.1. Hadamard product of two matrices.
If \(A=[a_{i,j}]\) and \(B=[b_{i,j}]\) are both \(m\times n\) matrices, then the Hadamard product \(A\circ B\) is the \(m\times n\) matrix defined by
\[
(A\circ B)_{i,j}=a_{i,j}b_{i,j}\text{.}
\]
In other words, multiplication is done element-wise.
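For instance, with two small matrices chosen only for illustration (they do not come from the text),
\[
\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}\circ\begin{bmatrix} 5 & 6\\ 7 & 8 \end{bmatrix}
=\begin{bmatrix} 1\cdot 5 & 2\cdot 6\\ 3\cdot 7 & 4\cdot 8 \end{bmatrix}
=\begin{bmatrix} 5 & 12\\ 21 & 32 \end{bmatrix}\text{.}
\]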
Lemma 4.3.2. \(C=P\circ M\).
If \(M\) is the matrix of minors of a square matrix, \(C\) is its cofactor matrix, and \(P\) is the matrix with entries \(p_{i,j}=(-1)^{i+j}\text{,}\) then
\[
C=P\circ M\text{.}
\]
Proof.
This is just a restatement of Observation 4.2.7.
Compare the results of Example 4.2.5 and Example 4.2.6.
Starting with a given square matrix \(A\text{,}\) we have defined the matrix of minors \(M\) and the cofactor matrix \(C\) using Definition 4.2.1 and Definition 4.2.2. We have also used Hadamard multiplication of matrices (Definition 4.3.1) to see that \(C=P\circ M\text{.}\) We now go one step further and evaluate \(A\circ P\circ M=A\circ C\text{.}\)
Example 4.3.3. \(A\circ C\) has constant row and column sums.
Let
The matrix of minors, \(M\text{,}\) is then
and the cofactor matrix \(C\) is then
As noted before, \(C=P\circ M\text{,}\) where \(P\) is the matrix with entries \(p_{i,j}=(-1)^{i+j}\text{.}\)
We continue by computing \(A\circ C=A\circ P\circ M\text{.}\)
Finally, we compute the sums of the entries in each row and each column.
An astonishing result appears in this example: adding the entries in any given row or any given column always gives the same number; the row sums and the column sums are all identical.
Theorem 4.3.4. \(A\circ C\) has constant row and column sums.
Let \(A\) be any square matrix, let \(M\) be its matrix of minors, and let \(P\) satisfy \(p_{i,j}=(-1)^{i+j}\text{.}\) Then the row sums and column sums of \(A\circ P\circ M\) are all identical.
Proof.
The equivalent Theorem 4.3.8 will be proven later.
We now use Theorem 4.3.4 to define the determinant for large matrices.
Definition 4.3.5. The determinant of a square matrix.
Let \(A\) be a square matrix with matrix of minors \(M\) and cofactor matrix \(C\text{.}\) Then the determinant of \(A\) is the common row and column sum of \(A\circ P\circ M= A\circ C\text{.}\)
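As a quick check (using a generic matrix of our own choosing, not one from the text), this definition reproduces the familiar formula for matrices of order \(2\text{.}\) If \(A=\begin{bmatrix} a & b\\ c & d \end{bmatrix}\text{,}\) then
\[
M=\begin{bmatrix} d & c\\ b & a \end{bmatrix},\qquad
C=P\circ M=\begin{bmatrix} d & -c\\ -b & a \end{bmatrix},\qquad
A\circ C=\begin{bmatrix} ad & -bc\\ -bc & ad \end{bmatrix}\text{,}
\]
and every row and every column of \(A\circ C\) sums to \(ad-bc\text{.}\)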
Subsection 4.3.2 The definition of the determinant
We can make the definition more explicit by focusing on a particular row. For any square matrix \(A\) of order \(n\text{,}\) the entries in the first row of the cofactor matrix are \(C_{1,1}, C_{1,2},\ldots,C_{1,n}\text{.}\) The entries of the first row of \(A\) are \(a_{1,1},a_{1,2},a_{1,3},\ldots,a_{1,n}\text{.}\) Hence the sum of the entries in the first row of \(A\circ C\) is
\[
a_{1,1}C_{1,1}+a_{1,2}C_{1,2}+\cdots+a_{1,n}C_{1,n}\text{.}
\]
This number, by definition, is the determinant of \(A\text{.}\) It is called the first row expansion of \(A\text{.}\) There is nothing special about the first row. An analogous definition exists for all rows.
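As a further check (again with generic notation of our own), expanding a matrix of order \(3\) along its first row gives
\[
a_{1,1}(a_{2,2}a_{3,3}-a_{2,3}a_{3,2})
-a_{1,2}(a_{2,1}a_{3,3}-a_{2,3}a_{3,1})
+a_{1,3}(a_{2,1}a_{3,2}-a_{2,2}a_{3,1})\text{,}
\]
which is the familiar formula for determinants of order \(3\text{.}\)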
Definition 4.3.6. The \(i\)-th row expansion of \(A\).
Let \(A\) be a square matrix of order \(n\text{.}\) Then the \(i\)-th row expansion of \(A\) is
\[
a_{i,1}C_{i,1}+a_{i,2}C_{i,2}+\cdots+a_{i,n}C_{i,n}\text{.}
\]
Columns are handled in exactly the same way.
Definition 4.3.7. The \(j\)-th column expansion of \(A\).
Let \(A\) be a square matrix of order \(n\text{.}\) Then the \(j\)-th column expansion of \(A\) is
\[
a_{1,j}C_{1,j}+a_{2,j}C_{2,j}+\cdots+a_{n,j}C_{n,j}\text{.}
\]
We now restate Theorem 4.3.4.
Theorem 4.3.8. Laplace expansion theorem.
For any square matrix \(A\text{,}\) the \(i\)-th row and \(j\)-th column expansions are all equal.
Proof.
The proof is difficult and needs further mathematical tools. To maintain the flow of our presentation, we put it off until Section 4.6.
The Laplace expansion theorem allows an alternative (and more usual) definition of the determinant.
Definition 4.3.9. The determinant of a matrix.
For any square matrix \(A\text{,}\) the determinant of \(A\) is the common value of the \(i\)-th row expansions and \(j\)-th column expansions of \(A\text{.}\)
Example 4.3.10. An example of cofactor expansion with \(n=4\).
Let \(A= \begin{bmatrix} 1 & 2 & 2 & 3\\ -1 & 4 & 5 & 3\\ 3 & 4 & 8 & -1\\ 1 & 2 & 2 & 1 \end{bmatrix}\text{.}\)
We will evaluate \(\det(A)\) by expanding along the first row. The formula for the first row expansion is
\[
a_{1,1}C_{1,1}+a_{1,2}C_{1,2}+a_{1,3}C_{1,3}+a_{1,4}C_{1,4}\text{.}
\]
Here is the computation of the individual pieces:
\begin{align*}
C_{1,1}&=+\det\begin{bmatrix} 4 & 5 & 3\\ 4 & 8 & -1\\ 2 & 2 & 1 \end{bmatrix}=-14, &
C_{1,2}&=-\det\begin{bmatrix} -1 & 5 & 3\\ 3 & 8 & -1\\ 1 & 2 & 1 \end{bmatrix}=36,\\
C_{1,3}&=+\det\begin{bmatrix} -1 & 4 & 3\\ 3 & 4 & -1\\ 1 & 2 & 1 \end{bmatrix}=-16, &
C_{1,4}&=-\det\begin{bmatrix} -1 & 4 & 5\\ 3 & 4 & 8\\ 1 & 2 & 2 \end{bmatrix}=-26.
\end{align*}
Now we can compute
\[
\det(A)=1(-14)+2(36)+2(-16)+3(-26)=-14+72-32-78=-52\text{.}
\]
The cofactor expansion along the third column is
\[
a_{1,3}C_{1,3}+a_{2,3}C_{2,3}+a_{3,3}C_{3,3}+a_{4,3}C_{4,3}\text{.}
\]
The individual pieces are
\begin{align*}
C_{1,3}&=+\det\begin{bmatrix} -1 & 4 & 3\\ 3 & 4 & -1\\ 1 & 2 & 1 \end{bmatrix}=-16, &
C_{2,3}&=-\det\begin{bmatrix} 1 & 2 & 3\\ 3 & 4 & -1\\ 1 & 2 & 1 \end{bmatrix}=-4,\\
C_{3,3}&=+\det\begin{bmatrix} 1 & 2 & 3\\ -1 & 4 & 3\\ 1 & 2 & 1 \end{bmatrix}=-12, &
C_{4,3}&=-\det\begin{bmatrix} 1 & 2 & 3\\ -1 & 4 & 3\\ 3 & 4 & -1 \end{bmatrix}=48,
\end{align*}
and so
\[
\det(A)=2(-16)+5(-4)+8(-12)+2(48)=-32-20-96+96=-52\text{.}
\]
Similarly, the cofactor expansion along the fourth column evaluates to \(3(-26)+3(0)+(-1)(0)+1(26)=-52\text{.}\)
All three cofactor expansions of \(A\) give the identical result, \(-52\text{,}\) just as Theorem 4.3.8 guarantees.