
Section 3.7 Inverses and powers: Rules of Matrix Arithmetic

Subsection 3.7.1 What about division of matrices?

We have considered addition, subtraction and multiplication of matrices. What about division? When we consider real numbers, we can write \(\tfrac ba\) as \(b\cdot\tfrac 1a.\) In addition, we may think of \(\tfrac 1a\) as the multiplicative inverse of \(a\text{,}\) that is, the number which, when multiplied by \(a\text{,}\) yields \(1.\) In other words, if we set \(a^{-1}=\tfrac1a\text{,}\) then \(a\cdot a^{-1}=a^{-1}\cdot a=1.\) Finally, \(1\) is the multiplicative identity, that is, \(r1=1r=r\) for any real number \(r\text{.}\) While these concepts cannot be extended to matrices completely, there are some circumstances in which they do make sense.

First, we can note that \(1\times1\) matrices satisfy \([a] + [b] = [a+b]\) and \([a][b]=[ab]\text{.}\) This means that both addition and multiplication of these matrices are just like the addition and multiplication of the real numbers. In this sense, matrices may be thought of as a generalization of the real numbers.

Next we remember that if \(A\) is \(m\times n\text{,}\) then \(I_mA=A=AI_n.\) This means that the identity matrix (or, more properly, matrices) acts in the same way as \(1\) does for the real numbers. This also means that if we want there to be a (single) matrix \(I\) satisfying \(IA=A=AI\text{,}\) then we must have \(m=n\text{.}\) This means we have to restrict ourselves to square matrices.

If \(A\) is an \(n\times n\) matrix, then \(I_nA=A=AI_n,\) and so \(I_n\) acts in the same manner as does \(1\) for the real numbers. Indeed, that is the reason it is called the identity matrix.
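For instance, for a general \(2\times2\) matrix we can check directly that

\begin{equation*} I_2\begin{bmatrix} a\amp b\\c\amp d\end{bmatrix} = \begin{bmatrix} 1\amp0\\0\amp1\end{bmatrix} \begin{bmatrix} a\amp b\\c\amp d\end{bmatrix} = \begin{bmatrix} a\amp b\\c\amp d\end{bmatrix}, \end{equation*}

and multiplying by \(I_2\) on the right leaves the matrix unchanged in the same way.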

Finally, we want to find (if possible) a matrix \(A^{-1}\) so that \(A^{-1}A=AA^{-1}=I.\) When such a matrix exists, it is called the inverse of \(A\text{,}\) and the matrix \(A\) itself is called invertible.

Definition 3.7.1. The inverse of a matrix.

Let \(A\) be a square matrix. If there exists a matrix \(B\) so that

\begin{equation*} AB=BA=I \end{equation*}

then \(B\) is called the inverse of \(A\) and it is written as \(A^{-1}\text{.}\)

Definition 3.7.2. Matrix invertibility.

A matrix \(A\) is invertible if it has an inverse, that is, if the matrix \(A^{-1}\) exists.
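For example, if

\begin{equation*} A=\begin{bmatrix}1\amp2\\1\amp3\end{bmatrix} \hbox{ and } B=\begin{bmatrix}3\amp-2\\-1\amp1\end{bmatrix}, \end{equation*}

then

\begin{equation*} AB=\begin{bmatrix}1\amp2\\1\amp3\end{bmatrix} \begin{bmatrix}3\amp-2\\-1\amp1\end{bmatrix} = \begin{bmatrix}1\amp0\\0\amp1\end{bmatrix} \hbox{ and } BA=\begin{bmatrix}3\amp-2\\-1\amp1\end{bmatrix} \begin{bmatrix}1\amp2\\1\amp3\end{bmatrix} = \begin{bmatrix}1\amp0\\0\amp1\end{bmatrix}, \end{equation*}

and so \(A\) is invertible with \(A^{-1}=\begin{bmatrix}3\amp-2\\-1\amp1\end{bmatrix}\text{.}\)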

Subsection 3.7.2 Properties of the Inverse of a Matrix

We consistently refer to the inverse of \(A\) rather than an inverse of \(A,\) which would seem to imply that a matrix can have only one inverse. This is indeed true.

Suppose we have matrices \(B\) and \(C\) which both act as inverses, that is, \(AB=BA=I\) and \(AC=CA=I\text{.}\) We evaluate \(BAC\) in two different ways and equate the results:

\begin{equation*} BAC=(BA)C=IC=C\\ BAC=B(AC)=BI=B, \end{equation*}

and so \(B=C\text{.}\)

Inverse Test.

If \(A\) and \(B\) are square matrices of the same size, then \(B\) is a left inverse of \(A\) if \(BA=I.\) Similarly, it is a right inverse of \(A\) if \(AB=I\text{.}\)

By definition \(B\) is the inverse of \(A\) if \(AB=BA=I,\) that is, \(B\) is both a left inverse and a right inverse. We will show presently that if \(B\) is a right inverse of a square matrix \(A\text{,}\) then it is also a left inverse of \(A\) and hence the inverse of \(A\text{.}\)

We next make an observation about the reduced row echelon form of square matrices:

If every row in the reduced row echelon form of \(A\) has a leading one, then, since \(A\) has the same number of rows as columns, every column contains a leading one as well. This means that the leading ones must lie on the diagonal, and every other entry of the matrix is zero. In other words, the reduced row echelon form is \(I_n.\) If, on the other hand, some row does not have a leading one, then it is an all-zero row. Since such rows sit at the bottom of a matrix in reduced row echelon form, the last row, in particular, must be all zero.
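For example, the reduced row echelon form of

\begin{equation*} \begin{bmatrix}1\amp2\\2\amp4\end{bmatrix} \hbox{ is } \begin{bmatrix}1\amp2\\0\amp0\end{bmatrix}, \end{equation*}

so the second row has no leading one and the reduced row echelon form is not \(I_2\text{.}\)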

Definition 3.7.5. Matrix singularity.

A square matrix is nonsingular if its reduced row echelon form is \(I\text{.}\) Otherwise it is singular.

Next we give a criterion for nonsingularity. It is trivial that if \(\vec x=\vec0,\) then \(M\vec x=\vec0.\) We claim that \(M\) is nonsingular precisely when \(\vec x=\vec 0\) is the only vector for which \(M\vec x=\vec 0\text{.}\)

First, suppose that \(M\) is nonsingular. Then the equation \(M\vec x=\vec0\) has an augmented matrix which, in reduced row echelon form, gives the equation \(I\vec x=\vec0\text{.}\) Hence \(\vec x=\vec0\text{.}\)

Now suppose that \(M\) is singular. The reduced row echelon form is not \(I_n,\) and so some column does not contain a leading 1, that is, there must exist a free variable. It can be assigned a nonzero value, and thus provide a nonzero solution to \(M\vec x=\vec0.\)
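For example, the reduced row echelon form of the singular matrix \(M=\begin{bmatrix}1\amp2\\2\amp4\end{bmatrix}\) from above is \(\begin{bmatrix}1\amp2\\0\amp0\end{bmatrix}\text{,}\) so \(x_2\) is a free variable. Setting \(x_2=1\) gives \(x_1=-2\) and the nonzero solution

\begin{equation*} M\vec x= \begin{bmatrix}1\amp2\\2\amp4\end{bmatrix} \begin{bmatrix}-2\\1\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix} =\vec0. \end{equation*}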

Suppose that \(A\) and \(B\) are square matrices with \(AB=I\text{,}\) and suppose that \(B\vec x=\vec0.\) Multiply both sides of the equation by \(A\text{:}\)

\begin{equation*} A(B\vec x)=A(\vec0)=\vec0\\ A(B\vec x)=(AB)\vec x=I\vec x=\vec x \end{equation*}

and so \(\vec x=\vec0.\) Hence \(B\vec x=\vec0\) implies \(\vec x=\vec0\) and so \(B\) is nonsingular.

Suppose now that \(A\) and \(B\) are square matrices of the same size with \(AB=I\text{.}\) From the previous lemma we know that \(B\) is nonsingular. Hence we know how to find \(C\) which is a solution to the equation \(BX=I\text{,}\) that is, so that \(BC=I.\) We now evaluate \(BABC\) in two different ways and equate the results:

\begin{equation*} BABC=B(AB)C=BIC=BC=I\\ BABC=(BA)(BC)=(BA)I=BA \end{equation*}

and so \(BA=I\text{.}\)

We get an important result from this Proposition.

By the Proposition above, \(AB=I\) implies \(BA=I.\) Since the inverse of \(A\) is unique, \(B=A^{-1}.\)

New Inverse Test.

If \(A\) and \(B\) are square matrices then \(B\) is the inverse of \(A\) if and only if \(AB=I.\)

Here is an application of the previous theorem:

Let \(B=(A^{-1})^T.\) Then

\begin{equation*} A^TB=A^T(A^{-1})^T = (A^{-1}A)^T=I^T=I \end{equation*}

and so \(B=(A^T)^{-1}\text{.}\)
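For instance, with \(A=\begin{bmatrix}1\amp2\\1\amp3\end{bmatrix}\) and \(A^{-1}=\begin{bmatrix}3\amp-2\\-1\amp1\end{bmatrix}\) as before,

\begin{equation*} A^T(A^{-1})^T = \begin{bmatrix}1\amp1\\2\amp3\end{bmatrix} \begin{bmatrix}3\amp-1\\-2\amp1\end{bmatrix} = \begin{bmatrix}1\amp0\\0\amp1\end{bmatrix} =I, \end{equation*}

and so \((A^T)^{-1}=(A^{-1})^T\) in this case, as claimed.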

Here is another application of the previous theorem:

Since

\begin{equation*} (AB)(B^{-1}A^{-1})= A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I, \end{equation*}

it follows that \(B^{-1}A^{-1}\) is the inverse of \(AB\text{.}\)
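Note that the order of the factors is reversed. As a quick check, take \(A=\begin{bmatrix}1\amp1\\0\amp1\end{bmatrix}\) and \(B=\begin{bmatrix}1\amp0\\1\amp1\end{bmatrix}\text{,}\) for which \(A^{-1}=\begin{bmatrix}1\amp-1\\0\amp1\end{bmatrix}\) and \(B^{-1}=\begin{bmatrix}1\amp0\\-1\amp1\end{bmatrix}\text{.}\) Then

\begin{equation*} AB=\begin{bmatrix}2\amp1\\1\amp1\end{bmatrix}, \qquad B^{-1}A^{-1}=\begin{bmatrix}1\amp-1\\-1\amp2\end{bmatrix}, \qquad (AB)(B^{-1}A^{-1})=\begin{bmatrix}1\amp0\\0\amp1\end{bmatrix}=I. \end{equation*}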

Subsection 3.7.3 The Computation of the Inverse of a Matrix

Suppose we have a square matrix \(A\) and the reduced row echelon form of \(A\) is \(I\) (that is, \(A\) is nonsingular). The matrix \(X\) is the inverse of \(A\) if it satisfies the equation \(AX=I\text{,}\) and we have seen how to solve such equations. We conclude that if we start with the matrix \([A\mid I]\text{,}\) then its reduced row echelon form will be \([I\mid A^{-1}]\text{.}\) This not only allows us to compute the inverse of \(A\text{,}\) but it also shows that nonsingular matrices are invertible and vice versa.

Example: If we start with

\begin{equation*} A=\begin{bmatrix}1\amp2\amp1\\2\amp3\amp5\\1\amp2\amp0\end{bmatrix}, \end{equation*}

then

\begin{equation*} [A\mid I]= \left[\begin{array}{ccc|ccc} 1\amp2\amp1\amp1\amp0\amp0\\ 2\amp3\amp5\amp0\amp1\amp0\\ 1\amp2\amp0\amp0\amp0\amp1 \end{array}\right] \end{equation*}

has, as its reduced row echelon form,

\begin{equation*} [I\mid A^{-1}]= \left[\begin{array}{ccc|ccc} 1\amp0\amp0\amp-10\amp2\amp7\\ 0\amp1\amp0\amp5\amp-1\amp-3\\ 0\amp0\amp1\amp1\amp0\amp-1 \end{array}\right] \end{equation*}

and so we conclude that

\begin{equation*} A^{-1}=\begin{bmatrix} -10\amp2\amp7\\ 5\amp-1\amp-3\\ 1\amp0\amp-1 \end{bmatrix}. \end{equation*}
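Computations like this are convenient to verify with software. Here is a minimal sketch in Python using the SymPy library (an illustrative choice, not part of the text); `Matrix.rref()` returns the reduced row echelon form together with the indices of the pivot columns:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 1],
            [2, 3, 5],
            [1, 2, 0]])

# Form the augmented matrix [A | I] and row reduce it.
augmented = A.row_join(eye(3))
rref, pivots = augmented.rref()

# The right half of the reduced matrix is the inverse.
A_inv = rref[:, 3:]
print(A_inv)                # Matrix([[-10, 2, 7], [5, -1, -3], [1, 0, -1]])
print(A * A_inv == eye(3))  # True
```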
Example 3.7.12. The inverse of a \(2\times2\) matrix.

We start with the matrix

\begin{equation*} A=\begin{bmatrix} a\amp b\\c\amp d \end{bmatrix} \end{equation*}

Now we carry out row reduction:

\begin{equation*} \left[ \begin{array}{cc|cc} a\amp b\amp1\amp0\rlap{\hbox{$\qquad R_1\gets \frac1a R_1$}}\\ c\amp d\amp0\amp1 \end{array} \right] \\ \left[ \begin{array}{cc|cc} 1\amp\frac ba\amp\frac1a\amp0\\ c\amp d\amp0\amp1 \rlap{\hbox{$\qquad R_2\gets R_2-cR_1$}} \end{array} \right]\\ \left[ \begin{array}{cc|cc} 1\amp\frac ba\amp\frac1a\amp0 \\ 0\amp d-\frac{bc}a\amp-\frac ca\amp1 \end{array} \right] \rlap{\hbox{(Rewrite last row)}} \\ \left[\begin{array}{cc|cc} 1\amp\frac ba\amp\frac1a\amp0\\ 0\amp\frac{ad-bc}a\amp-\frac ca\amp1 \rlap{\hbox{$\qquad R_2\gets \frac a{ad-bc}R_2$}} \end{array} \right]\\ \left[ \begin{array}{cc|cc} 1\amp\frac ba\amp\frac1a\amp0\\ 0\amp1\amp-\frac c{ad-bc}\amp \frac a{ad-bc} \rlap{\hbox{$\qquad R_1\gets R_1-\frac ba R_2$}} \end{array}\right]\\ \left[ \begin{array}{cc|cc} 1\amp0\amp \frac d{ad-bc}\amp -\frac b{ad-bc} \\ 0\amp1\amp-\frac c{ad-bc}\amp \frac a{ad-bc} \end{array} \right] \end{equation*}

On the face of it, this seems to say

\begin{equation*} A^{-1} = \begin{bmatrix} \frac d{ad-bc}\amp -\frac b{ad-bc} \\ -\frac c{ad-bc}\amp \frac a{ad-bc} \end{bmatrix} = \frac1{ad-bc} \begin{bmatrix} d\amp-b\\ -c\amp a \end{bmatrix} \end{equation*}

But notice that we have blithely ignored the possibility that \(a=0\) or that \(ad-bc=0\text{.}\) Nonetheless we may compute:

\begin{equation*} \begin{bmatrix} a\amp b\\c\amp d \end{bmatrix} \begin{bmatrix} d\amp-b\\-c\amp a \end{bmatrix} = \begin{bmatrix} ad-bc\amp0\\0\amp ad-bc \end{bmatrix} =(ad-bc)I \end{equation*}

Hence, as long as \(ad-bc\neq 0\text{,}\) if

\begin{equation*} A= \begin{bmatrix} a\amp b\\c\amp d \end{bmatrix} \hbox{ and } B= \frac1{ad-bc}\begin{bmatrix} d\amp-b\\-c\amp a \end{bmatrix} \end{equation*}

then

\begin{equation*} AB=I \end{equation*}

and so

\begin{equation*} B=A^{-1} \end{equation*}

We conclude that if

\begin{equation*} A= \begin{bmatrix} a\amp b\\c\amp d \end{bmatrix} \end{equation*}

where \(ad-bc\neq 0\) then

\begin{equation*} A^{-1}= \frac1{ad-bc}\begin{bmatrix} d\amp-b\\-c\amp a \end{bmatrix} \end{equation*}
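For example, if \(A=\begin{bmatrix}1\amp2\\3\amp4\end{bmatrix}\text{,}\) then \(ad-bc=(1)(4)-(2)(3)=-2\neq0\text{,}\) and so

\begin{equation*} A^{-1}=\frac1{-2}\begin{bmatrix}4\amp-2\\-3\amp1\end{bmatrix} = \begin{bmatrix}-2\amp1\\ \frac32\amp-\frac12\end{bmatrix}. \end{equation*}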

Subsection 3.7.4 Applying the Inverse of a Matrix to Systems of Linear Equations

We suppose that \(A\) is invertible, take the equation \(A\vec x=\vec b\text{,}\) and multiply both sides by \(A^{-1}\text{:}\)

\begin{equation*} A^{-1}(A\vec x)=A^{-1}\vec b\\ A^{-1}(A\vec x)=(A^{-1}A)\vec x=I\vec x=\vec x \end{equation*}

and so \(\vec x=A^{-1}\vec b\text{.}\)

Example 3.7.14. Solving a system of linear equations using the matrix inverse.

Suppose we want to solve the system of equations

\begin{equation*} \begin{array}{rcl} x_1+2x_2+x_3\amp=\amp1\\ 2x_1+3x_2+5x_3\amp=\amp1\\ x_1+2x_2\phantom{+0x_3}\amp=\amp1 \end{array} \end{equation*}

Then let

\begin{equation*} A= \begin{bmatrix} 1\amp2\amp1\\ 2\amp3\amp5\\ 1\amp2\amp0 \end{bmatrix} \hbox{ and } \vec b= \begin{bmatrix} 1\\1\\1 \end{bmatrix} \end{equation*}

so that we are solving \(A\vec x=\vec b.\) We have already done the computation to determine that

\begin{equation*} A^{-1}= \begin{bmatrix} -10\amp2\amp7\\ 5\amp-1\amp-3\\ 1\amp0\amp-1 \end{bmatrix}. \end{equation*}

Hence

\begin{equation*} \vec x= \begin{bmatrix} -10\amp2\amp7\\ 5\amp-1\amp-3\\ 1\amp0\amp-1 \end{bmatrix} \begin{bmatrix}1\\1\\1\end{bmatrix} = \begin{bmatrix}-1\\1\\0\end{bmatrix}, \end{equation*}

and the (only) solution is \(x_1=-1, x_2=1, x_3=0.\)
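As with the inverse computation above, this is easy to check numerically. Here is a minimal sketch in Python with NumPy (again an illustrative choice); in practice one usually calls `numpy.linalg.solve` rather than forming \(A^{-1}\) explicitly:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 3, 5],
              [1, 2, 0]], dtype=float)
b = np.array([1, 1, 1], dtype=float)

# Solve A x = b by applying the inverse, as in the example above.
x = np.linalg.inv(A) @ b
print(x)  # approximately [-1.  1.  0.]

# The numerically preferred route solves the system directly.
print(np.linalg.solve(A, b))  # approximately [-1.  1.  0.]
```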