Section 6.3 Computing eigenspaces

We have already defined eigenspaces in Definition 6.1.7; in this section we show how to compute them.

Suppose we are given a square matrix \(A\) of order \(n\) and a real number \(\lambda\text{,}\) and we want to find all vectors in

\begin{equation*} E_\lambda=\{\vec x\in \R^n\mid A\vec x=\lambda\vec x\}\text{.} \end{equation*}

If \(A\vec x=\lambda\vec x\text{,}\) then \(A\vec x-\lambda\vec x=\vec 0\text{,}\) and so \((A-\lambda I)\vec x=\vec 0\text{.}\) Hence we need only solve a homogeneous system of linear equations.

Recall from Example 6.1.3 that the matrix

\begin{equation*} A=\begin{bmatrix}5\amp -1\amp -2\\ 1\amp 3\amp -2\\ -1\amp -1\amp 4 \end{bmatrix} \end{equation*}

has eigenvalues \(\lambda=2,4,6\text{.}\) We find the eigenspaces for each eigenvalue.

Example 6.3.1. \(\lambda=2\).
\begin{equation*} A-2I= \begin{bmatrix}5\amp -1\amp -2\\ 1\amp 3\amp -2\\ -1\amp -1\amp 4 \end{bmatrix} - 2\begin{bmatrix}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0\amp 1\end{bmatrix} =\begin{bmatrix}3\amp -1\amp -2\\ 1\amp 1\amp -2\\ -1\amp -1\amp 2 \end{bmatrix} \end{equation*}

As usual, we put the augmented matrix into reduced row echelon form:

\begin{equation*} \left[\begin{array}{ccc|c} 3\amp -1\amp -2\amp 0\\ 1\amp 1\amp -2\amp 0\\ -1\amp -1\amp 2\amp 0 \end{array}\right] \textrm{ reduces to } \left[\begin{array}{ccc|c} 1\amp 0\amp -1\amp 0\\ 0\amp 1\amp -1\amp 0\\ 0\amp 0\amp 0\amp 0 \end{array}\right] \end{equation*}

and so all solutions are of the form \((x,y,z)=(t,t,t)=t(1,1,1)\text{.}\)
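This row reduction is easy to confirm by machine. Here is a minimal sketch, assuming the Python library SymPy is available; its nullspace method carries out exactly the reduction above.

```python
from sympy import Matrix, eye

A = Matrix([[ 5, -1, -2],
            [ 1,  3, -2],
            [-1, -1,  4]])

# E_2 is the null space of A - 2I.
print((A - 2 * eye(3)).rref()[0])    # the coefficient part of the reduced matrix above
print((A - 2 * eye(3)).nullspace())  # [Matrix([[1], [1], [1]])]
```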

Example 6.3.2. \(\lambda=4\).
\begin{equation*} \left[\begin{array}{ccc|c} 1\amp -1\amp -2\amp 0\\ 1\amp -1\amp -2\amp 0\\ -1\amp -1\amp 0\amp 0 \end{array}\right] \textrm{ reduces to } \left[\begin{array}{ccc|c} 1\amp 0\amp -1\amp 0\\ 0\amp 1\amp 1\amp 0\\ 0\amp 0\amp 0\amp 0 \end{array}\right] \end{equation*}

and so all solutions are of the form \((x,y,z)=(t,-t,t)=t(1,-1,1)\text{.}\)

Example 6.3.3. \(\lambda=6\).
\begin{equation*} \left[\begin{array}{ccc|c} -1\amp -1\amp -2\amp 0\\ 1\amp -3\amp -2\amp 0\\ -1\amp -1\amp -2\amp 0 \end{array}\right] \textrm{ reduces to } \left[\begin{array}{ccc|c} 1\amp 0\amp 1\amp 0\\ 0\amp 1\amp 1\amp 0\\ 0\amp 0\amp 0\amp 0 \end{array}\right] \end{equation*}

and so all solutions are of the form \((x,y,z)=(-t,-t,t)=t(-1,-1,1)\text{.}\)

Notice that setting \(t=1\) in each case recovers the original eigenvectors of Example 6.1.3.
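Each eigenpair can also be verified directly from the definition \(A\vec x=\lambda\vec x\text{.}\) A quick sketch, again assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[ 5, -1, -2],
            [ 1,  3, -2],
            [-1, -1,  4]])

# Check A*v = lambda*v for the basis vector of each eigenspace.
for lam, v in [(2, [1, 1, 1]), (4, [1, -1, 1]), (6, [-1, -1, 1])]:
    assert A * Matrix(v) == lam * Matrix(v)
print("all three eigenpairs verified")
```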

We can use similar arguments for Example 6.1.4, in which \(A=\begin{bmatrix} 2\amp 1\amp 4\\ 0\amp 3\amp 0\\ 2\amp -2\amp -5 \end{bmatrix}\) and \(\lambda=-6,3\text{:}\)

Example 6.3.4. \(\lambda=-6\).
\begin{equation*} A-\lambda I= \begin{bmatrix}8\amp 1\amp 4\\ 0\amp 9\amp 0\\ 2\amp -2\amp 1 \end{bmatrix} \text{ reduces to } \begin{bmatrix}1\amp 0\amp \frac12\\ 0\amp 1\amp 0\\ 0\amp 0\amp 0 \end{bmatrix} \end{equation*}

and so all solutions are of the form \((x,y,z)=t(1,0,-2)\text{.}\)

Example 6.3.5. \(\lambda=3\).
\begin{equation*} A-\lambda I= \begin{bmatrix}-1\amp 1\amp 4\\ 0\amp 0\amp 0\\ 2\amp -2\amp -8 \end{bmatrix} \text{ reduces to } \begin{bmatrix}1\amp -1\amp -4\\ 0\amp 0\amp 0\\ 0\amp 0\amp 0 \end{bmatrix} \end{equation*}

and so all solutions are of the form \((x,y,z)=t(1,1,0)+u(4,0,1)\text{.}\)
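Unlike the earlier examples, the eigenvalue \(\lambda=3\) here has a two-dimensional eigenspace, and the same machine check (SymPy assumed, as before) returns two basis vectors:

```python
from sympy import Matrix, eye

A = Matrix([[2,  1,  4],
            [0,  3,  0],
            [2, -2, -5]])

# The null space of A - 3I is two-dimensional.
for v in (A - 3 * eye(3)).nullspace():
    print(v.T)  # prints Matrix([[1, 1, 0]]) and Matrix([[4, 0, 1]])
```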

Proposition 6.3.6.

If \(A\vec x=\lambda\vec x\text{,}\) then \(A^n\vec x=\lambda^n\vec x\) for every positive integer \(n\text{.}\)

Proof.

\begin{gather*} A^2\vec x=A(A\vec x)=A(\lambda \vec x) =\lambda A\vec x=\lambda^2\vec x\\ A^3\vec x=A(A^2\vec x)=A(\lambda^2 \vec x) =\lambda^2 A\vec x=\lambda^3\vec x \end{gather*}

Repeating this process yields the desired result.
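A spot check of this identity for the matrix of Example 6.1.3 (a sketch, again assuming SymPy):

```python
from sympy import Matrix

A = Matrix([[ 5, -1, -2],
            [ 1,  3, -2],
            [-1, -1,  4]])
v = Matrix([1, 1, 1])  # eigenvector with eigenvalue 2

# A^n v should equal 2^n v for every positive integer n.
for n in range(1, 6):
    assert A**n * v == 2**n * v
print("A^n v = 2^n v holds for n = 1, ..., 5")
```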

Proposition 6.3.7.

Suppose \(\vec x_1,\ldots,\vec x_m\) are eigenvectors of \(A\) with corresponding eigenvalues \(\lambda_1,\ldots,\lambda_m\text{,}\) and that \(\vec x\) is in the span of \(\{\vec x_1,\ldots,\vec x_m\}\text{.}\) Then \(A^n\vec x=\sum_{i=1}^m r_i\lambda_i^n\vec x_i\text{,}\) where \(r_1,\ldots,r_m\) are coefficients expressing \(\vec x\) as a linear combination of \(\vec x_1,\ldots,\vec x_m\text{.}\)

Proof.

By definition of the span of a set,

\begin{equation*} \vec x=\sum_{i=1}^m r_i\vec x_i \end{equation*}

and so

\begin{equation*} A(\vec x)=A(\sum_{i=1}^m r_i\vec x_i)=\sum_{i=1}^m r_iA(\vec x_i) =\sum_{i=1}^mr_i\lambda_i\vec x_i\text{.} \end{equation*}

Similarly,

\begin{equation*} A^n(\vec x)=A^n(\sum_{i=1}^m r_i\vec x_i)=\sum_{i=1}^m r_iA^n(\vec x_i) =\sum_{i=1}^mr_i\lambda_i^n\vec x_i\text{.} \end{equation*}
Example 6.3.8. Eigenspaces and the powers of a matrix.

Let \(A=\begin{bmatrix}0\amp1\amp1\\1\amp0\amp1\\1\amp1\amp0 \end{bmatrix}\text{,}\) \(\vec x_1=\begin{bmatrix}1\\1\\1\end{bmatrix}\text{,}\) \(\vec x_2=\begin{bmatrix}-1\\1\\0\end{bmatrix}\text{,}\) and \(\vec x_3=\begin{bmatrix}-1\\0\\1\end{bmatrix}\text{.}\) Then \(\vec x_1\text{,}\) \(\vec x_2\text{,}\) and \(\vec x_3\) are eigenvectors of \(A\) with corresponding eigenvalues \(2\text{,}\) \(-1\text{,}\) and \(-1\text{.}\) In fact, \(\{\vec x_1, \vec x_2, \vec x_3\}\) is a basis for \(\R^3\text{,}\) so for any given \(\vec x\in\R^3\) there is a unique choice of \(r_1,r_2,r_3\) with \(\vec x=r_1\vec x_1 + r_2\vec x_2 + r_3\vec x_3\text{.}\) From the previous proposition,

\begin{equation*} A^n\vec x=2^nr_1\vec x_1+ (-1)^n r_2\vec x_2 +(-1)^n r_3 \vec x_3\text{.} \end{equation*}

As \(n\) gets large, the coefficient \(2^nr_1\) of \(\vec x_1\) dominates the other two, so as long as \(r_1\ne 0\) the direction of \(A^n\vec x\) gets closer and closer to that of \(\vec x_1\text{;}\) that is, it approaches the eigenspace \(E_2\text{.}\)
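This observation is the idea behind the power method for approximating a dominant eigenvector. A minimal numerical sketch, assuming NumPy is available; the iterate is renormalized at each step, since only its direction matters:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
x = np.array([1.0, 2.0, 3.0])  # any starting vector with r1 != 0 works

# Apply A repeatedly and renormalize; the direction settles into E_2.
for _ in range(25):
    x = A @ x
    x /= np.linalg.norm(x)
print(x)  # approximately (1, 1, 1)/sqrt(3), i.e. (0.577, 0.577, 0.577)
```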