6 Linear Algebra
Topics:
- Working with Vectors
- Linear Independence
- Basics of Matrix Algebra
- Square Matrices
- Linear Equations
- Systems of Linear Equations
- Systems of Equations as Matrices
- Solving Augmented Matrices and Systems of Equations
- Rank
- The Inverse of a Matrix
- Inverse of Larger Matrices
6.1 Working with Vectors
Vector: A vector in \(n\)-space is an ordered list of \(n\) numbers. These numbers can be represented as either a row vector or a column vector: \[ {\bf v} = \begin{pmatrix} v_1 & v_2 & \dots & v_n\end{pmatrix}, \qquad {\bf v} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}\]
We can also think of a vector as defining a point in \(n\)-dimensional space, usually \({\bf R}^n\); each element of the vector defines the coordinate of the point in a particular direction.
Vector Addition and Subtraction: If two vectors, \({\bf u}\) and \({\bf v}\), have the same length (i.e. have the same number of elements), they can be added (subtracted) together: \[ {\bf u} + {\bf v} = \begin{pmatrix} u_1 + v_1 & u_2 + v_2 & \cdots & u_n + v_n \end{pmatrix}\] \[ {\bf u} - {\bf v} = \begin{pmatrix} u_1 - v_1 & u_2 - v_2 & \cdots & u_n - v_n \end{pmatrix}\]
Scalar Multiplication: The product of a scalar \(c\) (i.e. a constant) and vector \({\bf v}\) is:
\[ c{\bf v} = \begin{pmatrix} cv_1 & cv_2 & \dots & cv_n \end{pmatrix} \]
Vector Inner Product: The inner product (also called the dot product or scalar product) of two vectors \({\bf u}\) and \({\bf v}\) is again defined if and only if they have the same number of elements \[ {\bf u} \cdot {\bf v} = u_1v_1 + u_2v_2 + \cdots + u_nv_n = \sum_{i = 1}^n u_iv_i\] If \({\bf u} \cdot {\bf v} = 0\), the two vectors are orthogonal (or perpendicular).
Vector Norm: The norm of a vector is a measure of its length. There are many different ways to calculate the norm, but the most common is the Euclidean norm (which corresponds to our usual conception of distance in three-dimensional space): \[ ||{\bf v}|| = \sqrt{{\bf v}\cdot{\bf v}} = \sqrt{ v_1v_1 + v_2v_2 + \cdots + v_nv_n}\]
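These operations are easy to experiment with numerically. Below is a minimal sketch using Python with numpy (our illustration, not part of the original exposition):

```python
import numpy as np

u = np.array([1, 2, 2])
v = np.array([2, 0, 1])

print(u + v)              # elementwise addition: [3 2 3]
print(u - v)              # elementwise subtraction: [-1  2  1]
print(3 * u)              # scalar multiplication: [3 6 6]
print(np.dot(u, v))       # inner product: 1*2 + 2*0 + 2*1 = 4
print(np.linalg.norm(u))  # Euclidean norm: sqrt(1 + 4 + 4) = 3.0
```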
Example 6.1 (Vector Algebra) Let \(a = \begin{pmatrix} 2&1&2\end{pmatrix}\), \(b = \begin{pmatrix} 3&4&5 \end{pmatrix}\). Calculate the following:
\(a - b\)
\(a \cdot b\)
Exercise 6.1 (Vector Algebra) Let \(u = \begin{pmatrix} 7&1&-5&3\end{pmatrix}\), \(v = \begin{pmatrix} 9&-3&2&8 \end{pmatrix}\), \(w = \begin{pmatrix} 1&13& -7&2 &15 \end{pmatrix}\), and \(c = 2\). Calculate the following:
\(u-v\)
\(cw\)
\(u \cdot v\)
\(w \cdot v\)
6.2 Linear Independence
Linear combinations: The vector \({\bf u}\) is a linear combination of the vectors \({\bf v}_1, {\bf v}_2, \cdots , {\bf v}_k\) if \[{\bf u} = c_1{\bf v}_1 + c_2{\bf v}_2 + \cdots + c_k{\bf v}_k\]
For example, \(\begin{pmatrix}9 & 13 & 17 \end{pmatrix}\) is a linear combination of the following three vectors: \(\begin{pmatrix}1 & 2 & 3 \end{pmatrix}\), \(\begin{pmatrix} 2 & 3& 4\end{pmatrix}\), and \(\begin{pmatrix} 3 & 4 & 5 \end{pmatrix}\). This is because \(\begin{pmatrix}9 & 13 & 17 \end{pmatrix}\) = \((2)\begin{pmatrix}1 & 2 & 3 \end{pmatrix}\) \(+ (-1)\begin{pmatrix} 2 & 3& 4\end{pmatrix}\) + \(3\begin{pmatrix} 3 & 4 & 5 \end{pmatrix}\)
Linear independence: A set of vectors \({\bf v}_1, {\bf v}_2, \cdots , {\bf v}_k\) is linearly independent if the only solution to the equation \[c_1{\bf v}_1 + c_2{\bf v}_2 + \cdots + c_k{\bf v}_k = 0\] is \(c_1 = c_2 = \cdots = c_k = 0\). If another solution exists, the set of vectors is linearly dependent.
A set \(S\) of vectors is linearly dependent if and only if at least one of the vectors in \(S\) can be written as a linear combination of the other vectors in \(S\).
Linear independence is only defined for sets of vectors with the same number of elements; any linearly independent set of vectors in \(n\)-space contains at most \(n\) vectors.
Since \(\begin{pmatrix}9 & 13 & 17 \end{pmatrix}\) is a linear combination of \(\begin{pmatrix}1 & 2 & 3 \end{pmatrix}\), \(\begin{pmatrix} 2 & 3& 4\end{pmatrix}\), and \(\begin{pmatrix} 3 & 4 & 5 \end{pmatrix}\), these 4 vectors constitute a linearly dependent set.
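A convenient numerical test for linear independence uses the rank of the matrix whose rows are the vectors in question (rank is defined later in this chapter); the set is independent exactly when the rank equals the number of vectors. A sketch in Python with numpy, using the four vectors above:

```python
import numpy as np

# Stack the vectors as rows of a matrix.
vectors = np.array([[1, 2, 3],
                    [2, 3, 4],
                    [3, 4, 5],
                    [9, 13, 17]])

rank = np.linalg.matrix_rank(vectors)
print(rank)                      # 2
print(rank == vectors.shape[0])  # False: fewer independent rows than vectors,
                                 # so the set is linearly dependent
```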
Example 6.2 (Linear Independence) Are the following sets of vectors linearly independent?
- \(\begin{pmatrix}2 & 3 & 1 \end{pmatrix}\) and \(\begin{pmatrix}4 & 6 & 1 \end{pmatrix}\)
- \(\begin{pmatrix}1 & 0 & 0 \end{pmatrix}\), \(\begin{pmatrix}0 & 5 & 0 \end{pmatrix}\), and \(\begin{pmatrix}10 & 10 & 0 \end{pmatrix}\)
Exercise 6.2 (Linear Independence) Are the following sets of vectors linearly independent?
\[{\bf v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} , {\bf v}_2 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} , {\bf v}_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \]
\[{\bf v}_1 = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix} , {\bf v}_2 = \begin{pmatrix} -4 \\ 6 \\ 5 \end{pmatrix} , {\bf v}_3 = \begin{pmatrix} -2 \\ 8 \\ 6 \end{pmatrix} \]
6.3 Basics of Matrix Algebra
Matrix: A matrix is an array of real numbers arranged in \(m\) rows by \(n\) columns. The dimensionality of the matrix is defined as the number of rows by the number of columns, \(m \times n\).
\[{\bf A}=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}\]
Note that you can think of vectors as special cases of matrices; a column vector of length \(k\) is a \(k \times 1\) matrix, while a row vector of the same length is a \(1 \times k\) matrix.
It’s also useful to think of matrices as being made up of a collection of row or column vectors. For example, we can write \({\bf A}\) as a row of its \(n\) column vectors: \[\bf A = \begin{pmatrix} {\bf a}_1 & {\bf a}_2 & \cdots & {\bf a}_n \end{pmatrix}\]
Matrix Addition: Let \(\bf A\) and \(\bf B\) be two \(m\times n\) matrices. \[{\bf A+B}=\begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{pmatrix}\]
Note that matrices \({\bf A}\) and \({\bf B}\) must have the same dimensionality, in which case they are conformable for addition.
Example 6.3 \[{\bf A}=\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad {\bf B}=\begin{pmatrix} 1 & 2 & 1 \\ 2 & 1 & 2 \end{pmatrix}\] \({\bf A+B}=\)
Scalar Multiplication: Given the scalar \(s\), the scalar multiplication of \(s {\bf A}\) is \[ s {\bf A}= s \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} s a_{11} & s a_{12} & \cdots & s a_{1n} \\ s a_{21} & s a_{22} & \cdots & s a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ s a_{m1} & s a_{m2} & \cdots & s a_{mn} \end{pmatrix}\]
Example 6.4 \(s=2, \qquad {\bf A}=\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}\)
\(s {\bf A} =\)
Matrix Multiplication: If \({\bf A}\) is an \(m\times k\) matrix and \(\bf B\) is a \(k\times n\) matrix, then their product \(\bf C = A B\) is the \(m\times n\) matrix where \[c_{ij}=a_{i1}b_{1j}+a_{i2}b_{2j}+\cdots+a_{ik}b_{kj}\]
Example 6.5
\(\begin{pmatrix} a&b\\c&d\\e&f \end{pmatrix} \begin{pmatrix} A&B\\C&D \end{pmatrix} =\)
\(\begin{pmatrix} 1&2&-1\\3&1&4 \end{pmatrix} \begin{pmatrix} -2&5\\4&-3\\2&1\end{pmatrix} =\)
Note that the number of columns of the first matrix must equal the number of rows of the second matrix, in which case they are conformable for multiplication. The sizes of the matrices (including the resulting product) must be \[(m\times k)(k\times n)=(m\times n)\]
Also note that if \({\bf AB}\) exists, \({\bf BA}\) exists only if \(\dim({\bf A}) = m \times n\) and \(\dim({\bf B}) = n \times m\).
Even when both products exist, \({\bf AB} = {\bf BA}\) is true only in special circumstances, for example when \({\bf A}\) or \({\bf B}\) is an identity matrix or when \({\bf A} = {\bf B}^{-1}\).
Laws of Matrix Algebra:
Commutative law for multiplication does not hold – the order of multiplication matters: \[\bf AB\ne BA\]
For example, \[{\bf A}=\begin{pmatrix} 1&2\\-1&3\end{pmatrix}, \qquad {\bf B}=\begin{pmatrix} 2&1\\0&1\end{pmatrix}\] \[{\bf AB}=\begin{pmatrix} 2&3\\-2&2\end{pmatrix}, \qquad {\bf BA}=\begin{pmatrix} 1&7\\-1&3\end{pmatrix}\]
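A quick numerical check of this example (a Python/numpy sketch, included only for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [-1, 3]])
B = np.array([[2, 1],
              [0, 1]])

print(A @ B)                         # [[ 2  3]  [-2  2]]
print(B @ A)                         # [[ 1  7]  [-1  3]]
print(np.array_equal(A @ B, B @ A))  # False: AB != BA
```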
Transpose: The transpose of the \(m\times n\) matrix \(\bf A\) is the \(n\times m\) matrix \({\bf A}^T\) (also written \({\bf A}'\)) obtained by interchanging the rows and columns of \(\bf A\).
For example,
\({\bf A}=\begin{pmatrix} 4&-2&3\\0&5&-1\end{pmatrix}, \qquad {\bf A}^T=\begin{pmatrix} 4&0\\-2&5\\3&-1 \end{pmatrix}\)
\({\bf B}=\begin{pmatrix} 2\\-1\\3 \end{pmatrix}, \qquad {\bf B}^T=\begin{pmatrix} 2&-1&3\end{pmatrix}\)
The following rules apply for transposed matrices:
- \(({\bf A} + {\bf B})^T = {\bf A}^T + {\bf B}^T\)
- \(({\bf A}^T)^T = {\bf A}\)
- \((s{\bf A})^T = s{\bf A}^T\)
- \(({\bf AB})^T = {\bf B}^T{\bf A}^T\)
Example of \(({\bf AB})^T = {\bf B}^T{\bf A}^T\): \[{\bf A}=\begin{pmatrix} 1&3&2\\2&-1&3\end{pmatrix}, \qquad {\bf B}=\begin{pmatrix} 0&1\\2&2\\3&-1\end{pmatrix}\] \[ ({\bf AB})^T = \left[ \begin{pmatrix} 1&3&2\\2&-1&3\end{pmatrix} \begin{pmatrix} 0&1\\2&2\\3&-1\end{pmatrix} \right]^T = \begin{pmatrix} 12&7\\5&-3 \end{pmatrix}\] \[ {\bf B}^T{\bf A}^T= \begin{pmatrix} 0&2&3\\1&2&-1 \end{pmatrix} \begin{pmatrix} 1&2\\3&-1\\2&3 \end{pmatrix} = \begin{pmatrix} 12&7\\5&-3 \end{pmatrix}\]
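The same kind of numerical check confirms \(({\bf AB})^T = {\bf B}^T{\bf A}^T\) for these matrices (again just an illustrative sketch):

```python
import numpy as np

A = np.array([[1, 3, 2],
              [2, -1, 3]])
B = np.array([[0, 1],
              [2, 2],
              [3, -1]])

lhs = (A @ B).T                  # transpose of the product
rhs = B.T @ A.T                  # product of the transposes, reversed order
print(lhs)                       # [[12  7]  [ 5 -3]]
print(np.array_equal(lhs, rhs))  # True
```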
Exercise 6.3 (Matrix Multiplication) Let \[A = \begin{pmatrix} 2&0&-1&1\\1&2&0&1 \end{pmatrix}\]
\[B = \begin{pmatrix} 1&5&-7\\1&1&0\\0&-1&1\\2&0&0\end{pmatrix} \]
\[C = \begin{pmatrix} 3&2&-1\\0&4&6 \end{pmatrix}\]
Calculate the following:
- \[AB\]
- \[BA\]
- \[(BC)^T\]
- \[BC^T\]
6.4 Systems of Linear Equations
Linear Equation: \(a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b\)
\(a_i\) are parameters or coefficients. \(x_i\) are variables or unknowns.
The equation is linear because each term contains at most one variable, and every variable appears only to the first power.
We are often interested in solving linear systems like
\[\begin{matrix} x & - & 3y & = & -3\\ 2x & + & y & = & 8 \end{matrix}\]
More generally, we might have a system of \(m\) equations in \(n\) unknowns
\[\begin{matrix} a_{11}x_1 & + & a_{12}x_2 & + & \cdots & + & a_{1n}x_n & = & b_1\\ a_{21}x_1 & + & a_{22}x_2 & + & \cdots & + & a_{2n}x_n & = & b_2\\ \vdots & & & & \vdots & & & \vdots & \\ a_{m1}x_1 & + & a_{m2}x_2 & + & \cdots & + & a_{mn}x_n & = & b_m \end{matrix}\]
A solution to a linear system of \(m\) equations in \(n\) unknowns is a set of \(n\) numbers \(x_1, x_2, \cdots, x_n\) that satisfy each of the \(m\) equations.
Example: \(x=3\) and \(y=2\) is the solution to the above \(2\times 2\) linear system. If you graph the two lines, you will find that they intersect at \((3,2)\).
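Anticipating the matrix methods developed later in this chapter, the solution can also be checked numerically. A one-line sketch with numpy's built-in solver (our illustration, not one of the hand methods below):

```python
import numpy as np

# x - 3y = -3
# 2x + y =  8
A = np.array([[1, -3],
              [2, 1]])
b = np.array([-3, 8])

print(np.linalg.solve(A, b))  # [3. 2.], i.e. x = 3 and y = 2
```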
Does a linear system have one solution, no solution, or multiple solutions? For a system of 2 equations with 2 unknowns (i.e., two lines):
One solution: The lines intersect at exactly one point.
No solution: The lines are parallel.
Infinite solutions: The lines coincide.
Methods to solve linear systems:
- Substitution
- Elimination of variables
- Matrix methods
Exercise 6.4 (Linear Equations) Provide a system of 2 equations with 2 unknowns that has
one solution
no solution
infinite solutions
6.5 Systems of Equations as Matrices
Matrices provide an easy and efficient way to represent linear systems such as \[\begin{matrix} a_{11}x_1 & + & a_{12}x_2 & + & \cdots & + & a_{1n}x_n & = & b_1\\ a_{21}x_1 & + & a_{22}x_2 & + & \cdots & + & a_{2n}x_n & = & b_2\\ \vdots & & & & \vdots & & & \vdots & \\ a_{m1}x_1 & + & a_{m2}x_2 & + & \cdots & + & a_{mn}x_n & = & b_m \end{matrix}\]
as \[{\bf A x = b}\] where
The \(m \times n\) coefficient matrix \({\bf A}\) is an array of \(mn\) real numbers arranged in \(m\) rows by \(n\) columns: \[{\bf A}=\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}\]
The unknown quantities are represented by the vector \({\bf x}=\begin{pmatrix} x_1\\x_2\\\vdots\\x_n \end{pmatrix}\).
The right hand side of the linear system is represented by the vector \({\bf b}=\begin{pmatrix} b_1\\b_2\\\vdots\\b_m \end{pmatrix}\).
Augmented Matrix: When we append \(\bf b\) to the coefficient matrix \(\bf A\), we get the augmented matrix \(\widehat{\bf A}=[\bf A | b]\) \[\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} & | & b_1\\ a_{21} & a_{22} & \cdots & a_{2n} & | & b_2\\ \vdots & & \ddots & \vdots & | & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} & | & b_m \end{pmatrix}\]
Exercise 6.5 (Augmented Matrix) Create an augmented matrix that represents the following system of equations:
\[2x_1 -7x_2 + 9x_3 -4x_4 = 8\] \[41x_2 + 9x_3 -5x_6 = 11\] \[x_1 -15x_2 -11x_5 = 9\]
6.6 Finding Solutions to Augmented Matrices and Systems of Equations
Row Echelon Form: Our goal is to translate our augmented matrix or system of equations into row echelon form, from which the values of the vector \({\bf x}\) that solve the system can be recovered by back-substitution. We use elementary row operations to change the coefficients in the lower triangle of the augmented matrix to 0. An augmented matrix of the form
\[\begin{pmatrix} \fbox{$a'_{11}$}& a'_{12} & a'_{13}& \cdots & a'_{1n} & | & b'_1\\ 0 & \fbox{$a'_{22}$} & a'_{23}& \cdots & a'_{2n} & | & b'_2\\ 0 & 0 & \fbox{$a'_{33}$}& \cdots & a'_{3n} & | & b'_3\\ 0 & 0 &0 & \ddots & \vdots & | & \vdots \\ 0 & 0 &0 &0 & \fbox{$a'_{mn}$} & | & b'_m \end{pmatrix}\]
is said to be in row echelon form — each row has more leading zeros than the row preceding it.
Reduced Row Echelon Form: We can go one step further and put the matrix into reduced row echelon form. Reduced row echelon form makes the value of x which solves the system very obvious. For a system of \(m\) equations in \(m\) unknowns, with no all-zero rows, the reduced row echelon form would be
\[\begin{pmatrix} \fbox{$1$} & 0 & 0 & 0 & 0 & | & b^*_1\\ 0 & \fbox{$1$} & 0 & 0 & 0 & | & b^*_2\\ 0 & 0 & \fbox{$1$} & 0 & 0 & | & b^*_3\\ 0 & 0 & 0 &\ddots & 0 & | &\vdots\\ 0 & 0 & 0 & 0 & \fbox{$1$} & | & b^*_m \end{pmatrix}\]
Gaussian and Gauss-Jordan elimination: We can conduct elementary row operations to get our augmented matrix into row echelon or reduced row echelon form. The methods of transforming a matrix or system into row echelon and reduced row echelon form are referred to as Gaussian elimination and Gauss-Jordan elimination, respectively.
Elementary Row Operations: To do Gaussian and Gauss-Jordan elimination, we use three basic operations to transform the augmented matrix into another augmented matrix that represents an equivalent linear system – equivalent in the sense that the same values of \(x_j\) solve both the original and transformed matrix/system:
Interchanging Rows: Suppose we have the augmented matrix \[{\widehat{\bf A}}=\begin{pmatrix} a_{11} & a_{12} & | & b_1\\ a_{21} & a_{22} & | & b_2 \end{pmatrix}\] If we interchange the two rows, we get the augmented matrix \[\begin{pmatrix} a_{21} & a_{22} & | & b_2\\ a_{11} & a_{12} & | & b_1 \end{pmatrix}\] which represents a linear system equivalent to that represented by matrix \(\widehat{\bf A}\).
Multiplying by a Constant: If we multiply the second row of matrix \(\widehat{\bf A}\) by a constant \(c\), we get the augmented matrix \[\begin{pmatrix} a_{11} & a_{12} & | & b_1\\ c a_{21} & c a_{22} & | & c b_2 \end{pmatrix}\] which represents a linear system equivalent to that represented by matrix \(\widehat{\bf A}\).
Adding (subtracting) Rows: If we add (subtract) the first row of matrix \(\widehat{\bf A}\) to the second, we obtain the augmented matrix \[\begin{pmatrix} a_{11} & a_{12} & | & b_1\\ a_{11}+a_{21} & a_{12}+a_{22} & | & b_1+b_2 \end{pmatrix}\] which represents a linear system equivalent to that represented by matrix \(\widehat{\bf A}\).
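These three operations translate directly into code. Below is a rough sketch of Gauss-Jordan elimination in Python with numpy; the function name and the simple pivoting strategy are our own choices, and a production routine would handle singular or ill-conditioned systems more carefully:

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form
    using the three elementary row operations."""
    aug = aug.astype(float)
    m = aug.shape[0]
    for i in range(m):
        # interchange rows so the pivot entry is the largest in magnitude
        pivot = i + np.argmax(np.abs(aug[i:, i]))
        aug[[i, pivot]] = aug[[pivot, i]]
        # multiply the pivot row by a constant so the pivot becomes 1
        aug[i] = aug[i] / aug[i, i]
        # subtract multiples of the pivot row from every other row
        for j in range(m):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]
    return aug

# the system x - 3y = -3, 2x + y = 8 from earlier in the chapter
aug = np.array([[1, -3, -3],
                [2,  1,  8]])
print(gauss_jordan(aug))  # [[1. 0. 3.]  [0. 1. 2.]]  ->  x = 3, y = 2
```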
Example 6.6 Solve the following system of equations by using elementary row operations:
\(\begin{matrix} x & - & 3y & = & -3\\ 2x & + & y & = & 8 \end{matrix}\)
Exercise 6.6 (Solving Systems of Equations) Put the following system of equations into augmented matrix form. Then, using Gaussian or Gauss-Jordan elimination, solve the system of equations by putting the matrix into row echelon or reduced row echelon form.
\[ 1. \begin{cases} x + y + 2z = 2\\ 3x - 2y + z = 1\\ y - z = 3 \end{cases} \]
\[ 2. \begin{cases} 2x + 3y - z = -8\\ x + 2y - z = 12\\ -x -4y + z = -6 \end{cases} \]
6.7 Rank — and Whether a System Has One, Infinite, or No Solutions
To determine how many solutions a linear system has, we can use information about (1) the number of equations \(m\), (2) the number of unknowns \(n\), and (3) the rank of the matrix representing the linear system.

Rank: The rank of a matrix is the maximum number of linearly independent row or column vectors in the matrix. This is equivalent to the number of nonzero rows of the matrix in row echelon form. For any matrix \({\bf A}\), the row rank always equals the column rank, and we refer to this common number as the rank of \({\bf A}\).

In particular, for the system \({\bf Ax} = {\bf b}\) with augmented matrix \(\widehat{\bf A}=[{\bf A} | {\bf b}]\): if \(\text{rank}({\bf A}) < \text{rank}(\widehat{\bf A})\), the system has no solution; if \(\text{rank}({\bf A}) = \text{rank}(\widehat{\bf A}) = n\), it has exactly one solution; and if \(\text{rank}({\bf A}) = \text{rank}(\widehat{\bf A}) < n\), it has infinitely many solutions.
For example
\(\begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix}\)
Rank = 3
\(\begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 0 \end{pmatrix}\)
Rank = 2
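Both ranks can be verified numerically (a sketch; numpy computes rank from a singular value decomposition rather than row reduction, but the answer agrees):

```python
import numpy as np

M1 = np.array([[1, 2, 3],
               [0, 4, 5],
               [0, 0, 6]])
M2 = np.array([[1, 2, 3],
               [0, 4, 5],
               [0, 0, 0]])

print(np.linalg.matrix_rank(M1))  # 3
print(np.linalg.matrix_rank(M2))  # 2
```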
Exercise 6.7 (Rank of Matrices) Find the rank of each matrix below:
(Hint: transform the matrices into row echelon form. Remember that the number of nonzero rows of a matrix in row echelon form is the rank of that matrix)
1.\(\begin{pmatrix} 1 & 1 & 2 \\ 2 & 1 & 3 \\ 1 & 2 & 3 \end{pmatrix}\)
2.\(\begin{pmatrix} 1 & 3 & 3 & -3 & 3\\ 1 & 3 & 1 & 1 & 3 \\ 1 & 3 & 2 & -1 & -2 \\ 1 & 3 & 0 & 3 & -2 \end{pmatrix}\)
6.8 The Inverse of a Matrix
Identity Matrix: The \(n\times n\) identity matrix \({\bf I}_n\) is the matrix whose diagonal elements are 1 and all off-diagonal elements are 0. Examples: \[ {\bf I}_2=\begin{pmatrix} 1&0\\0&1 \end{pmatrix}, \qquad {\bf I}_3=\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix}\]
Inverse Matrix: An \(n\times n\) matrix \({\bf A}\) is nonsingular or invertible if there exists an \(n\times n\) matrix \({\bf A}^{-1}\) such that \[{\bf A} {\bf A}^{-1} = {\bf A}^{-1} {\bf A} = {\bf I}_n\] where \({\bf A}^{-1}\) is the inverse of \({\bf A}\). If there is no such \({\bf A}^{-1}\), then \({\bf A}\) is singular or not invertible.
Example: Let \[{\bf A} = \begin{pmatrix} 2&3\\2&2 \end{pmatrix}, \qquad {\bf B}=\begin{pmatrix} -1&\frac{3}{2}\\ 1&-1 \end{pmatrix}\] Since \[{\bf A} {\bf B} = {\bf B} {\bf A} = {\bf I}_n\] we conclude that \({\bf B}\) is the inverse, \({\bf A}^{-1}\), of \({\bf A}\) and that \({\bf A}\) is nonsingular.
Properties of the Inverse:
If the inverse exists, it is unique.
If \({\bf A}\) is nonsingular, then \({\bf A}^{-1}\) is nonsingular.
\(({\bf A}^{-1})^{-1} = {\bf A}\)
If \({\bf A}\) and \({\bf B}\) are nonsingular, then \({\bf A}{\bf B}\) is nonsingular
\(({\bf A}{\bf B})^{-1} = {\bf B}^{-1}{\bf A}^{-1}\)
If \({\bf A}\) is nonsingular, then \(({\bf A}^T)^{-1}=({\bf A}^{-1})^T\)
Procedure to Find \({\bf A}^{-1}\): We know that if \({\bf B}\) is the inverse of \({\bf A}\), then \[{\bf A} {\bf B} = {\bf B} {\bf A} = {\bf I}_n\] Looking only at the first and last parts of this equation, \[{\bf A} {\bf B} = {\bf I}_n\] Solving for \({\bf B}\) is equivalent to solving \(n\) linear systems, one for each column of \({\bf B}\), where the right hand side is the corresponding column of \({\bf I}_n\). We can solve the systems simultaneously by augmenting \({\bf A}\) with \({\bf I}_n\) and performing Gauss-Jordan elimination on the augmented matrix. If Gauss-Jordan elimination on \([{\bf A} | {\bf I}_n]\) results in \([{\bf I}_n | {\bf B} ]\), then \({\bf B}\) is the inverse of \({\bf A}\). Otherwise, \({\bf A}\) is singular.
To summarize: To calculate the inverse of \({\bf A}\)
Form the augmented matrix \([ {\bf A} | {\bf I}_n]\)
Using elementary row operations, transform the augmented matrix to reduced row echelon form.
The result of step 2 is an augmented matrix \([ {\bf C} | {\bf B} ]\).
If \({\bf C}={\bf I}_n\), then \({\bf B}={\bf A}^{-1}\).
If \({\bf C}\ne{\bf I}_n\), then \(\bf C\) has a row of zeros. This means \({\bf A}\) is singular and \({\bf A}^{-1}\) does not exist.
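This procedure translates almost line for line into code. A rough numpy sketch for the \(2 \times 2\) example above (the pivoting and tolerance handling are simplified; a singular \({\bf A}\) would show up as a zero, or nearly zero, pivot):

```python
import numpy as np

A = np.array([[2., 3.],
              [2., 2.]])
n = A.shape[0]
aug = np.hstack([A, np.eye(n)])        # step 1: form [A | I_n]

for i in range(n):                     # step 2: Gauss-Jordan elimination
    pivot = i + np.argmax(np.abs(aug[i:, i]))
    aug[[i, pivot]] = aug[[pivot, i]]  # interchange rows if needed
    aug[i] = aug[i] / aug[i, i]        # scale the pivot to 1
    for j in range(n):
        if j != i:
            aug[j] -= aug[j, i] * aug[i]   # clear the rest of column i

A_inv = aug[:, n:]                     # steps 3 and 4: right block is A^{-1}
print(A_inv)                           # [[-1.   1.5]  [ 1.  -1. ]]
print(np.allclose(A @ A_inv, np.eye(n)))  # True
```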
Example 6.7 Find the inverse of the following matrix:
- \({\bf A}=\begin{pmatrix} 1&1&1\\0&2&3\\5&5&1 \end{pmatrix}\)
Exercise 6.8 (Finding the inverse of matrices) Find the inverse of the following matrix:
- \({\bf A}=\begin{pmatrix} 1&0&4\\0&2&0\\0&0&1 \end{pmatrix}\)
6.9 Linear Systems and Inverses
Let’s return to the matrix representation of a linear system
\[\bf{Ax} = \bf{b}\]
If \(\bf{A}\) is an \(n\times n\) matrix, then \(\bf{Ax}=\bf{b}\) is a system of \(n\) equations in \(n\) unknowns. Suppose \(\bf{A}\) is nonsingular. Then \(\bf{A}^{-1}\) exists. To solve this system, we can multiply each side by \(\bf{A}^{-1}\) and reduce it as follows:
\[\begin{eqnarray*} \bf{A}^{-1} (\bf{A} \bf{x}) & = & \bf{A}^{-1} \bf{b} \\ (\bf{A}^{-1} \bf{A})\bf{x} & = & \bf{A}^{-1} \bf{b}\\ \bf{I}_n \bf{x} & = & \bf{A}^{-1} \bf{b}\\ \bf{x} & = & \bf{A}^{-1} \bf{b} \end{eqnarray*}\]Hence, given \(\bf{A}\) and \(\bf{b}\) and given that \(\bf{A}\) is nonsingular, then \(\bf{x} = \bf{A}^{-1} \bf{b}\) is a unique solution to this system.
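In code this is one line. The sketch below (numpy) also shows np.linalg.solve, which reaches the same answer without explicitly forming the inverse and is generally preferred numerically:

```python
import numpy as np

# x - 3y = -3
# 2x + y =  8
A = np.array([[1., -3.],
              [2.,  1.]])
b = np.array([-3., 8.])

x = np.linalg.inv(A) @ b      # x = A^{-1} b
print(x)                      # [3. 2.]
print(np.linalg.solve(A, b))  # same solution, computed without the inverse
```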
Exercise 6.9 (Solve linear system using inverses) Use the inverse matrix to solve the following linear system:
\[\begin{align*} -3x + 4y &= 5 \\ 2x - y &= -10 \end{align*}\]Hint: the linear system above can be written in the matrix form
\(\textbf{A}\textbf{z} = \textbf{b}\)
given \[\textbf{A} = \begin{pmatrix} -3&4\\2&-1 \end{pmatrix},\]
\[\textbf{z} = \begin{pmatrix} x\\y \end{pmatrix},\] and \[\textbf{b} = \begin{pmatrix} 5\\-10 \end{pmatrix}\]
6.10 Determinants
Singularity: Determinants can be used to determine whether a square matrix is nonsingular.
A square matrix is nonsingular if and only if its determinant is not zero.
Determinant of a \(1 \times 1\) matrix, A, equals \(a_{11}\)
Determinant of a \(2 \times 2\) matrix, A, \(\begin{vmatrix} a_{11}&a_{12}\\ a_{21}&a_{22} \end{vmatrix}\):
\[\begin{eqnarray*} \det({\bf A}) &=& |{\bf A}|\\ &=& a_{11}|a_{22}| - a_{12}|a_{21}|\\ &=& a_{11}a_{22} - a_{12}a_{21} \end{eqnarray*}\]We can extend the second to last equation above to get the definition of the determinant of a \(3 \times 3\) matrix:
\[\begin{eqnarray*} \begin{vmatrix} a_{11}&a_{12}&a_{13}\\ a_{21} & a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{vmatrix} &=& a_{11} \begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32} \end{vmatrix}\\ &=& a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) \end{eqnarray*}\]Let’s extend this now to any \(n\times n\) matrix. Let’s define \({\bf A}_{ij}\) as the \((n-1)\times (n-1)\) submatrix of \({\bf A}\) obtained by deleting row \(i\) and column \(j\). Let the \((i,j)\)th minor of \({\bf A}\) be the determinant of \({\bf A}_{ij}\): \[M_{ij}=|{\bf A}_{ij}|\] Then for any \(n\times n\) matrix \({\bf A}\) \[|{\bf A}|= a_{11}M_{11} - a_{12}M_{12} + \cdots + (-1)^{n+1} a_{1n} M_{1n}\]
For example, does the following matrix have an inverse? \[{\bf A}=\begin{pmatrix} 1&1&1\\0&2&3\\5&5&1 \end{pmatrix}\] 1. Calculate its determinant. \[\begin{eqnarray} |{\bf A}| &=& 1(2-15) - 1(0-15) + 1(0-10) \nonumber\\ &=& -13+15-10 \nonumber\\ &=& -8\nonumber \end{eqnarray}\] 2. Since \(|{\bf A}|\ne 0\), we conclude that \({\bf A}\) has an inverse.
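A direct, if inefficient, implementation of this cofactor expansion is sketched below in Python; real code would simply call np.linalg.det, which is included as a cross-check:

```python
import numpy as np

def det(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # delete row 1 and column j+1 to form the submatrix A_{1j}
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # signs alternate +, -, +, ... across the first row
        total += (-1) ** j * A[0, j] * det(minor)
    return total

A = [[1, 1, 1],
     [0, 2, 3],
     [5, 5, 1]]
print(det(A))            # -8.0
print(np.linalg.det(A))  # approximately -8.0
```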
Exercise 6.10 (Determinants and Inverses) Determine whether the following matrices are nonsingular:
\[1. \begin{pmatrix} 1 & 0 & 1\\ 2 & 1 & 2\\ 1 & 0 & -1 \end{pmatrix}\]
\[2. \begin{pmatrix} 2 & 1 & 2\\ 1 & 0 & 1\\ 4 & 1 & 4 \end{pmatrix}\]
6.11 Getting the Inverse of a Matrix Using Its Determinant
Thus far, we have a number of algorithms to
- Find the solution of a linear system,
- Find the inverse of a matrix
but these remain just that — algorithms. At this point, we have no way of telling how the solutions \(x_j\) change as the parameters \(a_{ij}\) and \(b_i\) change, except by changing the values and “rerunning” the algorithms.
With determinants, we can provide an explicit formula for the inverse and therefore provide an explicit formula for the solution of an \(n\times n\) linear system.
Hence, we can examine how changes in the parameters \(a_{ij}\) and \(b_i\) affect the solutions \(x_j\).
Determinant Formula for the Inverse of a \(2 \times 2\):
For a \(2 \times 2\) matrix \({\bf A} = \begin{pmatrix} a & b\\ c & d\\ \end{pmatrix}\) with \(\det({\bf A}) = ad - bc \ne 0\), the inverse is: \[{\bf A}^{-1} = \frac{1}{\det({\bf A})} \begin{pmatrix} d & -b\\ -c & a\\ \end{pmatrix}\]
For example, let’s calculate the inverse of matrix \({\bf A}\) from Exercise 6.9 using the determinant formula.
Recall,
\[A = \begin{pmatrix} -3 & 4\\ 2 & -1\\ \end{pmatrix}\]
\[\det({\bf A}) = (-3)(-1) - (4)(2) = 3 - 8 = -5\]
\[{\bf A}^{-1} = \frac{1}{\det({\bf A})} \begin{pmatrix} d & -b\\ -c & a\\ \end{pmatrix} = \frac{1}{-5} \begin{pmatrix} -1 & -4\\ -2 & -3\\ \end{pmatrix} = \begin{pmatrix} \frac{1}{5} & \frac{4}{5}\\ \frac{2}{5} & \frac{3}{5}\\ \end{pmatrix}\]
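A compact check of the \(2 \times 2\) formula against numpy's general-purpose inverse (a sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[-3., 4.],
              [ 2., -1.]])
(a, b), (c, d) = A

det_A = a * d - b * c                        # -5.0
A_inv = (1 / det_A) * np.array([[d, -b],
                                [-c, a]])
print(A_inv)                                 # [[0.2 0.8]  [0.4 0.6]]
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```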
Exercise 6.11 (Calculate Inverse using Determinant Formula) Calculate the inverse of \({\bf A}\)
\[A = \begin{pmatrix} 3 & 5\\ -7 & 2\\ \end{pmatrix}\]
Answers to Examples and Exercises
Answer to Example 6.1:
- \(\begin{pmatrix} -1 &-3&-3 \end{pmatrix}\)
- 6 + 4 + 10 = 20
Answer to Exercise 6.1:
- \(\begin{pmatrix} -2 &4&-7&-5 \end{pmatrix}\)
- \(\begin{pmatrix} 2 &26&-14&4&30 \end{pmatrix}\)
- 63 -3 -10 + 24 = 74
- undefined
Answer to Example 6.2:
- yes
- no
Answer to Exercise 6.2:
- yes
- no (\(-v_1 -v_2 + v_3 = 0\))
Answer to Example 6.3:
\({\bf A+B}=\begin{pmatrix} 2 & 4 & 4 \\ 6 & 6 & 8 \end{pmatrix}\)
Answer to Example 6.4:
\(s {\bf A} = \begin{pmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{pmatrix}\)
Answer to Example 6.5:
\(\begin{pmatrix} aA+bC&aB+bD\\cA+dC&cB+dD\\eA+fC&eB+fD \end{pmatrix}\)
\(\begin{pmatrix} 1(-2)+2(4)-1(2)&1(5)+2(-3)-1(1)\\ 3(-2)+1(4)+4(2)&3(5)+1(-3)+4(1)\end{pmatrix} = \begin{pmatrix} 4&-2\\6&16\end{pmatrix}\)
Answer to Exercise 6.3:
\(AB = \begin{pmatrix} 4 & 11 & -15 \\ 5 & 7 & -7 \end{pmatrix}\)
\(BA =\) undefined
\((BC)^T =\) undefined
\(BC^T = \begin{pmatrix} 1&5&-7\\1&1&0\\0&-1&1\\2&0&0\end{pmatrix}\begin{pmatrix} 3&0\\2&4\\-1&6 \end{pmatrix} =\begin{pmatrix}20 & -22 \\ 5 & 4 \\ -3 &2 \\6 & 0\end{pmatrix}\)
Answer to Exercise 6.4:
There are many answers to this. Some possible simple ones are as follows:
One solution: \[\begin{matrix} -x & + & y & = & 0\\ x & + & y & = & 2 \end{matrix}\]
No solution: \[\begin{matrix} -x & + & y & = & 0\\ x & - & y & = & 2 \end{matrix}\]
Infinite solutions: \[\begin{matrix} -x & + & y & = & 0\\ 2x & - & 2y & = & 0 \end{matrix}\]
Answer to Exercise 6.5:
\(\begin{pmatrix} 2 & -7 & 9 & -4 & 0 & 0 & | & 8\\ 0 & 41 & 9 & 0 & 0 & -5 & | & 11\\ 1 & -15 & 0 & 0 & -11 & 0 & | & 9 \end{pmatrix}\)
Answer to Example 6.6:
\[\begin{matrix} x & - & 3y & = & -3\\ 2x & + & y & = & 8 \end{matrix}\]
\[\begin{matrix} x & - & 3y & = & -3\\ & & 7y & = & 14\\ \end{matrix}\]
\[\begin{matrix} x & - & 3y & = & -3\\ & & y & = & 2\\ \end{matrix}\]
\[\begin{matrix} x & = & 3\\ y & = & 2\\ \end{matrix}\]
Answer to Exercise 6.6:
x = 2, y = 2, z = -1
x = -17, y = -3, z = -35
Answer to Exercise 6.7:
rank is 2
rank is 3
Answer to Example 6.7:
\(\left(\begin{array}{ccc|ccc} 1&1&1&1&0&0\\ 0&2&3&0&1&0\\ 5&5&1&0&0&1 \end{array} \right)\)
\(\left(\begin{array}{ccc|ccc} 1&1&1 &1 &0&0\\ 0&2&3 &0 &1&0\\ 0&0&-4&-5&0&1 \end{array} \right)\)
\(\left(\begin{array}{ccc|ccc} 1&1&1&1 &0&0\\ 0&2&3&0 &1&0\\ 0&0&1&5/4&0&-1/4 \end{array} \right)\)
\(\left(\begin{array}{ccc|ccc} 1&1&0&-1/4 &0&1/4\\ 0&2&0&-15/4&1&3/4\\ 0&0&1&5/4 &0&-1/4 \end{array} \right)\)
\(\left(\begin{array}{ccc|ccc} 1&1&0&-1/4 &0 &1/4\\ 0&1&0&-15/8&1/2&3/8\\ 0&0&1&5/4 &0 &-1/4 \end{array} \right)\)
\(\left(\begin{array}{ccc|ccc} 1&0&0&13/8 &-1/2&-1/8\\ 0&1&0&-15/8&1/2 &3/8\\ 0&0&1&5/4 &0 &-1/4 \end{array} \right)\)
\({\bf A}^{-1} = \left(\begin{array}{ccc} 13/8 &-1/2&-1/8\\ -15/8&1/2 &3/8\\ 5/4 &0 &-1/4 \end{array} \right)\)
Answer to Exercise 6.8:
- \({\bf A}^{-1}=\begin{pmatrix} 1&0&-4\\0&\frac{1}{2}&0\\0&0&1 \end{pmatrix}\)
Answer to Exercise 6.9:
\(\textbf{z} = \bf{A}^{-1} \bf{b} = \begin{pmatrix} 1/5 &4/5\\ 2/5&3/5 \end{pmatrix} \begin{pmatrix} 5 \\ -10 \end{pmatrix}= \begin{pmatrix} -7 \\ -4 \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}\)
Answer to Exercise 6.10:
nonsingular
singular
Answer to Exercise 6.11:
\(\begin{pmatrix} \frac{2}{41} & \frac{-5}{41}\\ \frac{7}{41} & \frac{3}{41}\\ \end{pmatrix}\)