
Section 4.3 The determinant of large matrices

In Definition 4.1.1 the determinant of matrices of size \(n \le 3\) was defined using simple formulas. For larger matrices, unfortunately, there is no simple formula, and so we use a different approach. We reduce the problem of finding the determinant of one matrix of order \(n\) to the problem of finding \(n\) determinants of matrices of order \(n-1\text{.}\) So, for example, we find the determinant of a matrix of order \(4\) by evaluating the determinants of \(4\) matrices of order \(3\text{.}\) We have a formula for matrices of order \(3\text{,}\) so, in principle, it is possible to evaluate the determinant of any matrix of order \(4\text{.}\) We can then use this ability to find the determinant of any matrix of order \(5\text{:}\) reduce it to a problem of \(5\) matrices of order \(4\text{,}\) which we already know how to solve. Continuing this line of reasoning gives us the ability to evaluate determinants of any size.
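To make the recursion concrete, here is a minimal Python sketch of the reduction just described, using the minors and cofactor signs of Section 4.2. The function names and the use of nested lists are our own illustrative choices, not part of the text.

    # A sketch of the recursive reduction: the determinant of an order-n matrix
    # is computed from the determinants of n matrices of order n-1.

    def delete_row_col(A, i, j):
        # Remove row i and column j (0-indexed) to form the matrix of a minor.
        return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

    def det(A):
        n = len(A)
        if n == 1:
            return A[0][0]
        # Expand along the first row: sum of a_{1,j} times its cofactor.
        return sum((-1) ** j * A[0][j] * det(delete_row_col(A, 0, j))
                   for j in range(n))

    print(det([[1, 2], [3, 4]]))   # -2, matching the order-2 formula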

While this gives the theoretical ability to compute determinants, the number of computations quickly becomes unworkable. We need to improve our mathematical techniques to enable practical computations. The material developed in this section allows the easy evaluation of the determinants of many larger matrices.

Subsection 4.3.1 A motivating computation

This subsection contains optional material. The goal is to motivate Theorem 4.3.4. It may be skipped on first reading if desired.

Definition 4.3.1. Hadamard product of two matrices.

If \(A=[a_{i,j}]\) and \(B=[b_{i,j}]\) are both \(m\times n\) matrices, then the Hadamard product \(A\circ B\) is an \(m\times n\) matrix defined by

\begin{equation*} (A\circ B)_{i,j}= a_{i,j}b_{i,j} \end{equation*}

In other words, multiplication is done element-wise.
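For readers who want to experiment, element-wise multiplication is what the `*` operator does on NumPy arrays; the small matrices below are only an illustration.

    import numpy as np

    # Hadamard product: entries are multiplied position by position.
    A = np.array([[1, 0], [2, -1]])
    B = np.array([[3, 4], [-1, 5]])
    print(A * B)    # prints the element-wise products 3, 0, -2, -5
    print(A @ B)    # ordinary matrix product, shown for contrast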

Compare the results of Example 4.2.5 and Example 4.2.6.

Starting with a given square matrix \(A\text{,}\) we have defined the matrix of minors \(M\) and the cofactor matrix \(C\) using Definition 4.2.1 and Definition 4.2.2. Using Hadamard multiplication of matrices (Definition 4.3.1), we have also seen that \(C=P\circ M\text{,}\) where \(P\) is the matrix of signs \([(-1)^{i+j}]\text{.}\) We now wish to carry out the further evaluation of \(A\circ P\circ M=A\circ C\text{.}\)
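As an aside, the sign matrix \(P\) is easy to generate by machine; the NumPy snippet below is one way to do so and is purely illustrative.

    import numpy as np

    # Build the checkerboard sign matrix P = [(-1)^(i+j)] of order n.
    n = 4
    i, j = np.indices((n, n))
    P = (-1) ** (i + j)
    print(P)    # alternating +1 and -1, with +1 in the top left corner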

Example 4.3.3. \(A\circ C\) has constant row and column sums.

Let

\begin{equation*} A= \begin{bmatrix} 1\amp0\amp-1\amp2\\ 1\amp-1\amp1\amp0\\ 0\amp1\amp-2\amp1\\ -1\amp1\amp0\amp1 \end{bmatrix} \end{equation*}

The matrix of minors, \(M\text{,}\) is then

\begin{equation*} M= \begin{bmatrix} 2 \amp -3 \amp 1 \amp 1\\ 4\amp -5\amp 2\amp 1\\ -3\amp 4\amp -1\amp -1\\ 1\amp -2\amp 1\amp 0 \end{bmatrix} \end{equation*}

and the cofactor matrix \(C\) is then

\begin{equation*} C= \begin{bmatrix} 2 \amp 3 \amp 1 \amp -1\\ -4\amp -5\amp -2\amp 1\\ -3\amp -4\amp -1\amp 1\\ -1\amp -2\amp -1\amp 0 \end{bmatrix} \end{equation*}

As noted before, \(C=P\circ M\) where

\begin{equation*} P= \begin{bmatrix} 1 \amp -1 \amp 1\amp -1\\ -1 \amp 1 \amp -1\amp 1\\ 1 \amp -1 \amp 1\amp -1\\ -1 \amp 1 \amp -1\amp 1 \end{bmatrix} \end{equation*}

We continue by computing \(A\circ C=A\circ P\circ M\text{.}\)

\begin{align*} A\circ C \amp = \begin{bmatrix} 1\amp0\amp-1\amp2\\ 1\amp-1\amp1\amp0\\ 0\amp1\amp-2\amp1\\ -1\amp1\amp0\amp1 \end{bmatrix} \circ \begin{bmatrix} 2 \amp 3 \amp 1 \amp -1\\ -4\amp -5\amp -2\amp 1\\ -3\amp -4\amp -1\amp 1\\ -1\amp -2\amp -1\amp 0 \end{bmatrix}\\ \amp = \begin{bmatrix} 2 \amp 0 \amp -1 \amp -2 \\ -4 \amp 5 \amp -2 \amp 0 \\ 0 \amp -4 \amp 2 \amp 1 \\ 1 \amp -2 \amp 0 \amp 0 \end{bmatrix} \end{align*}

Finally, we compute the sums of the entries in each row and each column.

\begin{equation*} \begin{matrix} \amp \textrm{Row sums} \\ \begin{bmatrix} 2 \amp 0 \amp -1 \amp -2 \\ -4 \amp 5 \amp -2 \amp 0 \\ 0 \amp -4 \amp 2 \amp 1 \\ 1 \amp -2 \amp 0 \amp 0 \end{bmatrix} \amp \begin{matrix} -1\\-1\\-1\\-1 \end{matrix} \\ \begin{matrix} \llap{\textrm{Column sums: }}-1\amp -1\amp-1\amp-1 \end{matrix} \end{matrix} \end{equation*}

An astonishing result appears in this example: the sums of the entries in any given row and in any given column are all identical, each equal to \(-1\text{.}\)
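The pattern can also be checked numerically. Here is a minimal Python sketch, assuming NumPy is available; `cofactor_matrix` is our own helper, not a library function.

    import numpy as np

    def cofactor_matrix(A):
        # Illustrative helper: C[i, j] = (-1)^(i+j) times the determinant
        # of A with row i and column j removed.
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C

    A = np.array([[1, 0, -1, 2],
                  [1, -1, 1, 0],
                  [0, 1, -2, 1],
                  [-1, 1, 0, 1]])
    H = A * cofactor_matrix(A)          # the Hadamard product A∘C
    print(H.sum(axis=1).round())        # row sums: all equal to -1
    print(H.sum(axis=0).round())        # column sums: all equal to -1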

Theorem 4.3.4 asserts that the pattern in this example is no accident: for any square matrix \(A\) with cofactor matrix \(C\text{,}\) all of the row sums and all of the column sums of \(A\circ C\) are equal. We now use this common value to define the determinant for large matrices.

Definition 4.3.5. The determinant of a square matrix.

Let \(A\) be a square matrix with matrix of minors \(M\) and cofactor matrix \(C\text{.}\) Then the determinant of \(A\) is the common row and column sum of \(A\circ P\circ M= A\circ C\text{.}\)

Subsection 4.3.2 The definition of the determinant

We can make the definition more explicit by focusing on a particular row. For any square matrix \(A\) of order \(n\text{,}\) the entries in the first row of the cofactor matrix are \(C_{1,1}, C_{1,2},\ldots,C_{1,n}\text{.}\) The entries of the first row of \(A\) are \(a_{1,1},a_{1,2},a_{1,3},\ldots,a_{1,n}\text{.}\) Hence the sum of the entries in the first row of \(A\circ C\) is

\begin{equation*} a_{1,1}C_{1,1}+a_{1,2}C_{1,2}+a_{1,3}C_{1,3} +\cdots+a_{1,n}C_{1,n}=\sum_{j=1}^n a_{1,j}C_{1,j} \end{equation*}

This number, by definition, is the determinant of \(A\text{.}\) It is called the first row expansion of \(A\text{.}\) There is nothing special about the first row. An analogous definition exists for all rows.

Definition 4.3.6. The \(i\)-th row expansion of \(A\).

Let \(A\) be a square matrix of order \(n\text{.}\) Then the \(i\)-th row expansion of \(A\) is

\begin{equation*} \sum_{j=1}^n a_{i,j}C_{i,j}. \end{equation*}

Columns are handled in exactly the same way.

Definition 4.3.7. The \(j\)-th column expansion of \(A\).

Let \(A\) be a square matrix of order \(n\text{.}\) Then the \(j\)-th column expansion of \(A\) is

\begin{equation*} \sum_{i=1}^n a_{i,j}C_{i,j}. \end{equation*}
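Both definitions translate directly into short code. The Python sketch below assumes that \(A\) and its cofactor matrix \(C\) are given as nested lists or NumPy arrays; note that the code uses 0-based indices while the text uses 1-based indices.

    def row_expansion(A, C, i):
        # i-th row expansion: sum over j of a_{i,j} * C_{i,j}
        return sum(a * c for a, c in zip(A[i], C[i]))

    def column_expansion(A, C, j):
        # j-th column expansion: sum over i of a_{i,j} * C_{i,j}
        return sum(A[i][j] * C[i][j] for i in range(len(A)))

By the result discussed next, every one of these expansions returns the same value.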

We now restate Theorem 4.3.4 in this language: for any square matrix \(A\text{,}\) every row expansion and every column expansion of \(A\) gives the same value. This restatement is known as the Laplace expansion theorem.

The proof is difficult and needs further mathematical tools. To maintain the flow of our presentation, we put it off until Section 4.6.

The Laplace expansion theorem allows an alternative (and more common) definition of the determinant.

Definition 4.3.9. The determinant of a matrix.

For any square matrix \(A\text{,}\) the determinant of \(A\) is the common value of the \(i\)-th row expansions and \(j\)-th column expansions of \(A\text{.}\)

Example 4.3.10. An example of cofactor expansion with \(n=4\).

Let \(A= \begin{bmatrix} 1\amp 2\amp 2\amp 3\\ -1\amp 4\amp 5\amp 3\\ 3\amp 4\amp 8\amp -1\\ 1\amp 2\amp 2\amp 1 \end{bmatrix}\text{.}\)

We will evaluate \(\det(A)\) by expanding on the first row. The formula for the first row expansion is

\begin{equation*} \det(A)=a_{1,1}C_{1,1} + a_{1,2}C_{1,2}+ a_{1,3}C_{1,3}+ a_{1,4}C_{1,4}. \end{equation*}

Here is the computation of the individual pieces.

\begin{equation*} \begin{array}{|c|c|c|} \hline a_{1,1}=1 \amp M_{1,1}= \det\begin{bmatrix}4\amp 5\amp 3\\ 4\amp 8\amp -1\\ 2\amp 2\amp 1\end{bmatrix} =-14 \amp C_{1,1}=(-1)^2 M_{1,1}=-14 \\ \hline a_{1,2}=2 \amp M_{1,2}= \det\begin{bmatrix}-1\amp 5\amp 3\\ 3\amp 8\amp -1\\ 1\amp 2\amp 1\end{bmatrix}=-36 \amp C_{1,2}=(-1)^3 M_{1,2}=36 \\ \hline a_{1,3}=2 \amp M_{1,3}= \det\begin{bmatrix}-1\amp 4\amp 3\\ 3\amp 4\amp -1\\ 1\amp 2\amp 1\end{bmatrix}=-16 \amp C_{1,3}=(-1)^4 M_{1,3}=-16 \\ \hline a_{1,4}=3 \amp M_{1,4}= \det\begin{bmatrix}-1\amp 4\amp 5\\ 3\amp 4\amp 8\\ 1\amp 2\amp 2\end{bmatrix}=26 \amp C_{1,4}=(-1)^5 M_{1,4}=-26\\ \hline \end{array} \end{equation*}

Now we can compute

\begin{align*} a_{1,1}C_{1,1} \amp + a_{1,2}C_{1,2} + a_{1,3}C_{1,3} + a_{1,4}C_{1,4}\\ \amp = 1\cdot(-14) +2\cdot 36 + 2\cdot(-16) + 3\cdot(-26)\\ \amp =-52 \end{align*}

The cofactor expansion along the third column is

\begin{align*} a_{1,3}C_{1,3} + \amp a_{2,3}C_{2,3} +a_{3,3}C_{3,3} + a_{4,3}C_{4,3}\\ \amp = a_{1,3}M_{1,3} - a_{2,3}M_{2,3} +a_{3,3}M_{3,3} - a_{4,3}M_{4,3} \end{align*}

The individual pieces are

\begin{equation*} \begin{array}{c} M_{1,3}= \det\begin{bmatrix} -1\amp 4\amp 3\\ 3\amp 4\amp -1\\ 1\amp 2\amp 1 \end{bmatrix} =-16 \\ M_{2,3}= \det\begin{bmatrix} 1\amp 2\amp 3\\ 3\amp 4\amp -1\\ 1\amp 2\amp 1 \end{bmatrix} =4 \\ M_{3,3}= \det\begin{bmatrix} 1\amp 2\amp 3\\ -1\amp 4\amp 3\\ 1\amp 2\amp 1 \end{bmatrix} =-12 \\ M_{4,3}= \det\begin{bmatrix} 1\amp 2\amp 3\\ -1\amp 4\amp 3\\ 3\amp 4\amp -1 \end{bmatrix} =-48 \end{array} \end{equation*}

and so

\begin{align*} a_{1,3}M_{1,3} - \amp a_{2,3}M_{2,3}+a_{3,3}M_{3,3} - a_{4,3}M_{4,3}\\ \amp= 2(-16)-5(4)+8(-12)-2(-48)\\ \amp= -52 \end{align*}

Similarly, the cofactor expansion along the fourth column evaluates to \(3(-26)+3(0)+(-1)(0)+1(26)=-52\text{.}\)

All three cofactor expansions of \(A\) give the same value: \(\det(A)=-52\text{.}\)
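The hand computations in this example are easy to double-check by machine. The sketch below recomputes the three expansions and compares them with NumPy's built-in determinant; the helper `cof` is purely illustrative.

    import numpy as np

    A = np.array([[1, 2, 2, 3],
                  [-1, 4, 5, 3],
                  [3, 4, 8, -1],
                  [1, 2, 2, 1]])

    def cof(i, j):
        # Illustrative helper: cofactor C_{i,j} of A (0-indexed), via the minor.
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return (-1) ** (i + j) * np.linalg.det(minor)

    row1 = sum(A[0, j] * cof(0, j) for j in range(4))   # first row expansion
    col3 = sum(A[i, 2] * cof(i, 2) for i in range(4))   # third column expansion
    col4 = sum(A[i, 3] * cof(i, 3) for i in range(4))   # fourth column expansion
    print(round(row1), round(col3), round(col4), round(np.linalg.det(A)))   # -52 -52 -52 -52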