Section 4.5 The adjoint of a matrix and Cramer's rule
We have already used Definition 4.2.2 to define the cofactor matrix \(C\) of a matrix \(A\text{.}\) We use this to define the adjoint of a square matrix.
Definition 4.5.1. The adjoint of a matrix.
If a matrix \(A\) has \(C\) as its cofactor matrix, then the adjoint of \(A\) is \(C^T\text{.}\) We write this as \(\adj(A)=C^T\text{.}\)
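This definition translates directly into a short computation. The following is a minimal pure-Python sketch (the function names `det`, `minor`, `cofactor_matrix`, and `adjoint` are our own, not from the text): each cofactor is a signed determinant of a submatrix, and the adjoint transposes the resulting cofactor matrix.

```python
def minor(A, i, j):
    """Delete row i and column j of A."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def cofactor_matrix(A):
    """The matrix C with C[i][j] = (-1)^(i+j) det(minor(A, i, j))."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
            for i in range(n)]

def adjoint(A):
    """adj(A) is the transpose of the cofactor matrix C."""
    return [list(row) for row in zip(*cofactor_matrix(A))]
```

For instance, `adjoint([[1, 2], [3, 4]])` returns `[[4, -2], [-3, 1]]`.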
Example 4.5.2. The adjoint of a matrix.
Let
Then
and so
Theorem 4.5.3. The inverse and the adjoint of a matrix.
Let \(A\) be an invertible matrix. Then
\begin{equation*} A^{-1}=\frac1{\det A} \adj A. \end{equation*}
Proof.
Consider the \(i\)-\(j\) entry of \(A \adj A\text{,}\) which we write as
\begin{equation*} (A \adj A)_{i,j}=\sum_{k=1}^n a_{i,k} (\adj A)_{k,j}=\sum_{k=1}^n a_{i,k} C_{j,k}. \end{equation*}
There are two cases:
-
\(i=j\text{:}\) In this case, from the \(i\)-th row expansion of \(A\text{,}\)
\begin{equation*} (A \adj A)_{i,i}=\sum_{k=1}^n a_{i,k} C_{i,k}=\det A \end{equation*}
-
\(i\not=j\text{:}\) For this case we use a new matrix \(B\) constructed from \(A\) by replacing \(R_j\) with \(R_i\text{,}\) that is, \(R_j\gets R_i\) (this is not an elementary row operation). This means \(B_{j,k}=A_{i,k}=a_{i,k}\) for \(k=1,2,\ldots,n\text{.}\) Since \(B\) has two identical rows, Theorem 4.4.12 tells us that \(\det B=0\text{.}\) On the other hand, by expanding along \(R_j\) of \(B\text{,}\) we have
\begin{equation*} 0 =\sum_{k=1}^n B_{j,k} C_{j,k} =\sum_{k=1}^n a_{i,k} C_{j,k}= (A \adj A)_{i,j} \end{equation*}
Combining the two cases,
\begin{equation*} (A \adj A)_{i,j} = \begin{cases} \det A \amp \textrm{if } i=j\\ 0 \amp \textrm{otherwise,} \end{cases} \end{equation*}
that is, \(A \adj A = (\det A)I\text{.}\) Hence
\begin{equation*} A\left(\frac1{\det A} \adj A\right)=I \end{equation*}
and
\begin{equation*} \frac1{\det A} \adj A=A^{-1}. \end{equation*}
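The identity \(A \adj A = (\det A)I\) is easy to check numerically. Below is a NumPy sketch of our own (the matrix \(A\) and the function `adjugate` are illustrative choices, not from the text) that builds the adjoint entrywise from signed minors and verifies both the identity and the inverse formula.

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix, computed entrywise."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j, then sign it
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # det A = 8, so A is invertible

# A (adj A) = (det A) I, hence (1/det A) adj A = A^{-1}
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
assert np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A))
```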
Example 4.5.4. The inverse computed using the adjoint of \(A\).
Let
First we compute \(\det A=1\)
Next we compute the minors:
from which we deduce
from which follows
Proposition 4.5.5. Integral matrices with integral inverses.
If \(A\) is a square matrix with integer entries, then \(A^{-1}\) has all integer entries if and only if \(\det A=\pm 1\text{.}\)
Proof.
If \(\det A=\pm1\) then
\begin{equation*} A^{-1}=\frac1{\det A} \adj A=\pm C^T, \end{equation*}
where \(C\) is the cofactor matrix. But the entries of \(C\) are computed by taking the determinant of matrices with integer entries. Since this determinant is computed using products and sums of integers, \(C\) must have all integer entries, and hence so does \(A^{-1}\text{.}\)
Conversely, if both \(A\) and \(A^{-1}\) have all integer entries, then \(\det A\) and \(\det A^{-1}\) are both integers. But then
\begin{equation*} \det A \det A^{-1}=\det(AA^{-1})=\det I=1. \end{equation*}
Hence either \(\det A=\det A^{-1}=1\) or \(\det A=\det A^{-1}=-1\text{.}\)
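A small check of this proposition, with a \(2\times2\) example of our own choosing (the function `inverse_2x2` is illustrative): when an integer matrix has determinant \(\pm1\), the formula \(A^{-1}=\frac1{\det A}\adj A\) produces integer entries, since the adjugate itself is integral.

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    # adj A = [[d, -b], [-c, a]] for a 2x2 matrix
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [1, 1]]                        # det A = 2*1 - 1*1 = 1
assert inverse_2x2(A) == [[1.0, -1.0], [-1.0, 2.0]]   # all integers
```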
There is a nice application of the adjoint to the solution of \(n\) equations in \(n\) unknowns. We can write such a system of linear equations as
\begin{equation*} Ax=b. \end{equation*}
If \(A\) is nonsingular, then this system has a unique solution \(x=A^{-1}b= \frac1{\det A} (\adj A)b\text{.}\) We define new matrices \(A_1,A_2,\ldots,A_n\text{:}\) construct \(A_k\) by replacing the \(k\)-th column of \(A\) with \(b\text{.}\) More specifically, if the columns of \(A\) are \(C_1,C_2,\ldots,C_n\text{,}\) then
\begin{equation*} A_k=\begin{bmatrix} C_1 \amp \cdots \amp C_{k-1} \amp b \amp C_{k+1} \amp \cdots \amp C_n \end{bmatrix}. \end{equation*}
Theorem 4.5.6. Cramer's rule.
Let
\begin{equation*} Ax=b \end{equation*}
be a system of \(n\) linear equations in \(n\) unknowns, and let \(A_k\) be the matrix obtained by replacing the \(k\)-th column of \(A\) with \(b\text{.}\) If \(A\) is nonsingular, then the unique solution \(x\) satisfies
\begin{equation*} x_k=\frac{\det A_k}{\det A} \quad\textrm{for } k=1,2,\ldots,n. \end{equation*}
Proof.
Since \(A\) is invertible, we may use the cofactor matrix \(C\) to get
\begin{equation*} x=A^{-1}b=\frac1{\det A} (\adj A)b=\frac1{\det A} C^T b. \end{equation*}
Then
\begin{equation*} x_i=\frac1{\det A}\sum_{k=1}^n b_k C_{k,i}, \end{equation*}
since \(\sum_{k=1}^n b_kC_{k,i}\) is the \(i\)-th column expansion for the evaluation of \(\det A_i\text{.}\) Hence
\begin{equation*} x_i=\frac{\det A_i}{\det A}. \end{equation*}
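Cramer's rule can be sketched in a few lines of Python (our own illustration; the names `det` and `cramer` and the sample system are not from the text): build each \(A_k\) by splicing \(b\) into column \(k\), then take the ratio of determinants.

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_k = det(A_k) / det(A)."""
    d = det(A)
    n = len(A)
    x = []
    for k in range(n):
        # A_k: replace the k-th column of A with b
        Ak = [row[:k] + [b[i]] + row[k + 1:] for i, row in enumerate(A)]
        x.append(det(Ak) / d)
    return x

# Solve the (hypothetical) system 2x + y = 5, x + 3y = 5:
assert cramer([[2, 1], [1, 3]], [5, 5]) == [2.0, 1.0]
```

Cramer's rule is a clean theoretical statement, but note that computing \(n+1\) determinants is far more expensive than Gaussian elimination for large systems.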
Example 4.5.7. Application of Cramer's rule.
Consider the system of linear equations
We have
and so