WBCHSE Class 12 Maths Solutions For Matrices


Matrix A rectangular array of mn numbers in the form of m horizontal lines (called rows) and n vertical lines (called columns) is called a matrix of order m by n, written as an m x n matrix.

Such an array is enclosed by [ ] or ( ).

Each of the mn numbers constituting the matrix is called an element or an entry of the matrix.

Usually, we denote a matrix by a capital letter.

The plural of matrix is matrices.

Examples (1) A = \(\left[\begin{array}{rrr}
3 & 5 & -4 \\
0 & 1 & 9
\end{array}\right]\) is a matrix, having 2 rows and 3 columns.

Its order is 2 x 3 and it has 6 elements.

(2) B = \(\left[\begin{array}{rrrr}
9 & 4 & \sqrt{2} & -1 \\
1 & 8 & -3 & 2 \\
6 & 0 & 5 & 7
\end{array}\right]\) is a matrix, having 3 rows and 4 columns. Its order is 3 x 4 and it has 12 elements.

How to Describe a Matrix

To locate the position of a particular element of a matrix, we have to specify the number of the row and that of the column in which the element occurs.

An element occurring in the ith row and jth column of a matrix A will be called the (i,j)th element of A, to be denoted by aij.


In general, an m x n matrix A may be written as

A = \(\left[\begin{array}{ccccc}
a_{11} & a_{12} & a_{13} & \ldots & a_{1 n} \\
a_{21} & a_{22} & a_{23} & \ldots & a_{2 n} \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
a_{i 1} & a_{i 2} & a_{i 3} & \ldots & a_{i n} \\
\ldots & \ldots & \ldots & \ldots & \ldots \\
a_{m 1} & a_{m 2} & a_{m 3} & \ldots & a_{m n}
\end{array}\right]=\left[a_{i j}\right]_{m \times n} .\)

Example 1 Consider the matrix A = \(\left[\begin{array}{rrr}
3 & -2 & 5 \\
6 & 9 & 1
\end{array}\right]\)

Clearly, the element in the 1st row and 2nd column is -2.

So, we write a12 = -2.

Similarly, a11 = 3; a12 = -2; a13 = 5; a21 = 6; a22 = 9 and a23 = 1.

Example 2 Construct a 3 x 2 matrix whose elements are given by aij = (i+2j).

Solution

A 3 x 2 matrix has 3 rows and 2 columns.

In general, a 3 x 2 matrix is given by

A = \(\left[\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}\right]_{3 \times 2}\)

Thus aij = (i+2j) for i=1, 2,3 and j = 1,2.

∴ a11 = (1+2×1) = 3; a12 = (1+2×2) = 5;

a21 = (2+2×1) = 4; a22=(2+2×2) = 6;

a31 = (3+2×1) = 5; a32 = (3+2×2) = 7.

Hence, A = \(\left[\begin{array}{ll}
3 & 5 \\
4 & 6 \\
5 & 7
\end{array}\right]_{3 \times 2}\)
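The same construction can be reproduced as a quick side check in Python, assuming the NumPy library is available:

    import numpy as np

    # Build the 3 x 2 matrix whose (i, j)th element is i + 2j, with i, j counted from 1 as in the text
    A = np.array([[i + 2 * j for j in range(1, 3)] for i in range(1, 4)])
    print(A)
    # [[3 5]
    #  [4 6]
    #  [5 7]]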

Example 3 Construct a 2 x 3 matrix whose elements are given by aij = \(\frac{1}{2}|5 i-3 j| .\)

Solution

A 2 x 3 matrix has 2 rows and 3 columns.

In general, a 2 x 3 matrix is given by

A = \(\left[\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23}
\end{array}\right]_{2 \times 3}\)

Thus, \(a_{i j}=\frac{1}{2}|5 i-3 j|\), where i = 1,2 and j = 1,2,3.

∴ \(a_{11}=\frac{1}{2}|5 \times 1-3 \times 1|=\frac{1}{2} \cdot|2|=\frac{1}{2} \times 2=1 ;\)

\(a_{12}=\frac{1}{2}|5 \times 1-3 \times 2|=\frac{1}{2} \cdot|5-6|=\frac{1}{2} \cdot|-1|=\frac{1}{2} \times 1=\frac{1}{2};\)

\(a_{13}=\frac{1}{2}|5 \times 1-3 \times 3|=\frac{1}{2} \cdot|5-9|=\frac{1}{2} \cdot|-4|=\frac{1}{2} \times 4=2;\)

\(a_{21}=\frac{1}{2}|5 \times 2-3 \times 1|=\frac{1}{2} \cdot|10-3|=\frac{1}{2} \cdot|7|=\frac{1}{2} \times 7=\frac{7}{2};\)

\(a_{22}=\frac{1}{2}|5 \times 2-3 \times 2|=\frac{1}{2} \cdot|10-6|=\frac{1}{2} \cdot|4|=\frac{1}{2} \times 4=2;\)

\(a_{23}=\frac{1}{2}|5 \times 2-3 \times 3|=\frac{1}{2} \cdot|10-9|=\frac{1}{2} \cdot|1|=\frac{1}{2} \times 1=\frac{1}{2}.\)

Hence, A = \(\left[\begin{array}{lll}
1 & \frac{1}{2} & 2 \\
\frac{7}{2} & 2 & \frac{1}{2}
\end{array}\right]\).

Example 4 If a matrix has 12 elements, what are the possible orders it can have?

Solution

We know that a matrix of order m x n has mn elements.

Hence, all possible orders of a matrix having 12 elements are (12×1), (1×12), (6×2), (2×6), (4×3) and (3×4).

Various Types of Matrices

Row Matrix A matrix having only one row is known as a row matrix or a row vector.

Examples (1) A = [5 18] is a row matrix of order 1 x 2.

(2) B = [2 √5 -9 0] is a row matrix of order 1 x 4.

Column Matrix A matrix having only one column is known as a column matrix or a column vector.

Examples (1) A = \(\left[\begin{array}{r}
2 \\
7 \\
-3
\end{array}\right]\) is a column matrix of order 3 x 1.

(2) B = \(\left[\begin{array}{l}
6 \\
4
\end{array}\right]\) is a column matrix of order 2 x 1.

Zero Or Null Matrix A matrix each of whose elements is zero is called a zero matrix or a null matrix.

Example The matrices [0], [0 0], \(\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]\) and \(\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]\) are null matrices of order (1×1), (1×2), (2×2) and (2×3) respectively.

Square Matrix A matrix having the same number of rows and columns is called a square matrix.

A matrix of order (n x n) is called a square matrix of order n or an n-rowed square matrix.

A matrix of order m x n, where m ≠ n, is called a rectangular matrix.

Examples (1) The matrix \(\left[\begin{array}{rr}
3 & 2 \\
6 & -5
\end{array}\right]\) is a 2-rowed square matrix.

(2) The matrix \(\left[\begin{array}{ccc}
5 & 3 & 6 \\
7 & \sqrt{2} & -4 \\
-9 & \frac{1}{3} & 0
\end{array}\right]\) is a 3-rowed square matrix.

Diagonal Elements Of A Matrix Let A = [aij]mxn be an m x n matrix. Then, the elements aij for which i = j, are called the diagonal elements of A.

Thus, the diagonal elements of A = [aij]mxn are a11, a22, a33, a44, etc.

The line along which the diagonal elements lie is called the diagonal of the matrix.

Example Let A = \(\left[\begin{array}{rrr}
3 & 2 & -1 \\
\sqrt{5} & \frac{5}{8} & 7 \\
6 & -4 & \sqrt{2}
\end{array}\right]\) Then, the diagonal elements of A are: a11 = 3, a22 = \(\frac{5}{8}\), a33 = √2.

Diagonal Matrix A square matrix in which every nondiagonal element is zero is called a diagonal matrix.

If A = [aij]nxn is a diagonal matrix then aij = 0 when i ≠ j, and we write it as

A = diag[a11, a22, a33,…, ann].

Example Let A = \(\left[\begin{array}{rrr}
6 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & -2
\end{array}\right]\). Then, A is a diagonal matrix. We may write it as, A = diag[6,4,-2].

Scalar Matrix A square matrix in which every nondiagonal element is zero and all diagonal elements are equal is known as a scalar matrix.

Examples (1) A = \(\left[\begin{array}{ll}
5 & 0 \\
0 & 5
\end{array}\right]\) is a scalar matrix of order 2.

(2) B = \(\left[\begin{array}{rrr}
-3 & 0 & 0 \\
0 & -3 & 0 \\
0 & 0 & -3
\end{array}\right]\) is a scalar matrix of order 3.

Unit Matrix A square matrix in which every nondiagonal element is 0 and every diagonal element is 1 is called a unit matrix or an identity matrix.

Thus, a square matrix [aij]nxn is a unit matrix if

\(a_{i j}=\left\{\begin{array}{l}
0 \text { when } i \neq j, \\
1 \text { when } i=j .
\end{array}\right.\)

A unit matrix of order n will be denoted by In or simply by I.

Examples (1) I2 = \(\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]\) is a unit matrix of order 2.

(2) I3 = \(\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]\) is a unit matrix of order 3.
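Diagonal and unit matrices of any order can also be produced directly in Python with NumPy (assuming it is installed); this is only an illustrative sketch:

    import numpy as np

    D = np.diag([6, 4, -2])     # the diagonal matrix diag[6, 4, -2]
    I3 = np.eye(3, dtype=int)   # the unit matrix I3
    print(D)
    print(I3)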

Comparable Matrices Two matrices A and B are said to be comparable if they are of the same order, i.e., they have the same number of rows and the same number of columns.

Example A = \(\left[\begin{array}{rrr}
2 & -5 & 1 \\
0 & 3 & 6
\end{array}\right]\) and B = \(\left[\begin{array}{rrr}
3 & 7 & 0 \\
1 & 4 & -9
\end{array}\right]\) are comparable matrices, each being of order (2×3).

Equal Matrices Two matrices A and B are said to be equal, written as A = B, if they are of the same order and their corresponding elements are equal.

Example 1 Find x, y, z when \(\left[\begin{array}{ll}
5 & 3 \\
x & 7
\end{array}\right]=\left[\begin{array}{ll}
y & z \\
1 & 7
\end{array}\right]\)

Solution

Since the corresponding elements of equal matrices are equal, we have

\(\left[\begin{array}{ll}
5 & 3 \\
x & 7
\end{array}\right]=\left[\begin{array}{ll}
y & z \\
1 & 7
\end{array}\right]\) ⇔ x = 1, y = 5 and z = 3.

Example 2 Find x, y, z w when \(\left[\begin{array}{ll}
x-y & 2 x+z \\
2 x-y & 3 z+w
\end{array}\right]=\left[\begin{array}{rr}
-1 & 5 \\
0 & 13
\end{array}\right]\).

Solution

We know that in equal matrices, the corresponding elements are equal.

∴ \(\left[\begin{array}{rr}
x-y & 2 x+z \\
2 x-y & 3 z+w
\end{array}\right]=\left[\begin{array}{rr}
-1 & 5 \\
0 & 13
\end{array}\right]\)

⇔ x – y = -1, 2x – y = 0, 2x + z = 5 and 3z + w = 13.

Solving the first two equations, we get x = 1 and y = 2.

Putting x = 1 in 2x + z = 5, we get z = 3.

Putting z = 3 in 3z + w = 13, we get w = 4.

∴ x = 1, y = 2, z = 3 and w = 4.
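The values found above can be verified numerically with a short check in Python, assuming NumPy is available:

    import numpy as np

    x, y, z, w = 1, 2, 3, 4
    lhs = np.array([[x - y, 2 * x + z],
                    [2 * x - y, 3 * z + w]])
    rhs = np.array([[-1, 5],
                    [0, 13]])
    print(np.array_equal(lhs, rhs))   # True, so the corresponding elements agree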

Example 3 \(\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right] \neq\left[\begin{array}{ll}
0 & 0 \\
0 & 0 \\
0 & 0
\end{array}\right] \text {. }\) Why?

Solution

Since the given null matrices are not comparable, they are not equal.

Operations On Matrices

Mainly we have four operations on matrices, namely:

Transposition, Matrix Addition, Matrix multiplication and Scalar multiplication.

Out of these operations, transposition is a unary operation while matrix addition and matrix multiplication are both binary operations and scalar multiplication is an external composition.

Transposition

Transpose Of A Matrix Let A be an (mxn) matrix. Then, the matrix obtained by interchanging the rows and columns of A is called the transpose of A, denoted by A’ or \(A^t\).

If A = [aij]mxn, then A’ = [aji]nxm.

Remarks (1) If A is an (mxn) matrix, then A’ is an (nxm) matrix.

(2) (i,j)th element of A = (j,i)th element of A’.

Examples (1) If A = \(\left[\begin{array}{rrr}
2 & 3 & -1 \\
4 & -2 & 5
\end{array}\right]\), then \(A^t=\left[\begin{array}{rr}
2 & 4 \\
3 & -2 \\
-1 & 5
\end{array}\right]\)

Here A is a (2×3) matrix and A’ is a (3×2) matrix.

(2) If B = \(\left[\begin{array}{r}
3 \\
-4 \\
6
\end{array}\right]\), then Bt = [3 -4 6].

Here, B is a (3×1) matrix and Bt is a (1×3) matrix.

Theorem (Involution) For any matrix A, prove that (A’)’ = A.

Proof

Let A = [aij]mxn. Then, A is an (mxn) matrix ⇒ A’ is an (nxm) matrix.

⇒ (A’)’ is a (mxn) matrix.

∴ A and (A’)’ are matrices of the same order.

Also, (i,j)th element of A = (j,i)th element of A’

= (i,j)th element of (A’)’.

Thus, A and (A’)’ are comparable matrices having their corresponding elements equal.

Hence, (A’)’ = A.
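The involution property (A’)’ = A can also be seen numerically; the sketch below (assuming Python with NumPy) uses the .T attribute for the transpose:

    import numpy as np

    A = np.array([[2, 3, -1],
                  [4, -2, 5]])
    print(np.array_equal(A.T.T, A))   # True: transposing twice returns the original matrix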

Symmetric Matrix A square matrix A is said to be symmetric if A’ = A.

Thus, A is symmetric ⇔ aji = aij.

Examples (1) If A = \(\left[\begin{array}{rr}
4 & 2 \\
2 & -3
\end{array}\right]\), then A’ = \(\left[\begin{array}{rr}
4 & 2 \\
2 & -3
\end{array}\right]\) = A.

∴ A is symmetric.

(2) If B = \(\left[\begin{array}{rrr}
6 & 8 & -4 \\
8 & 3 & 0 \\
-4 & 0 & 5
\end{array}\right]\), then B’ = \(\left[\begin{array}{rrr}
6 & 8 & -4 \\
8 & 3 & 0 \\
-4 & 0 & 5
\end{array}\right]\) = B.

∴ B is symmetric.

Skew-Symmetric Matrix A square matrix A is said to be skew-symmetric if A’ = -A, where -A is the matrix obtained by replacing each element of A by its negative.

Remark A is skew-symmetric ⇒ A’ = -A

⇒ aji = -aij, where A = [aij]nxn

⇒ aii = -aii [putting j = i]

⇒ 2aii = 0 ⇒ aii = 0

⇒ every diagonal element of A is 0.

Thus, every diagonal element of a skew-symmetric matrix is 0.

Examples (1) If A = \(\left[\begin{array}{rr}
0 & 8 \\
-8 & 0
\end{array}\right]\), then A’ = \(\left[\begin{array}{rr}
0 & -8 \\
8 & 0
\end{array}\right]\) = -A.

Hence, A is skew-symmetric.

(2) Let B = \(\left[\begin{array}{ccc}
0 & h & -g \\
-h & 0 & f \\
g & -f & 0
\end{array}\right]\), then B’ = \(\left[\begin{array}{ccc}
0 & -h & g \\
h & 0 & -f \\
-g & f & 0
\end{array}\right]\) = -B.

Hence, B is skew-symmetric.
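Both definitions translate into one-line checks in NumPy (assuming it is available): A’ = A for a symmetric matrix and A’ = -A for a skew-symmetric one.

    import numpy as np

    B = np.array([[6, 8, -4],
                  [8, 3, 0],
                  [-4, 0, 5]])
    A = np.array([[0, 8],
                  [-8, 0]])
    print(np.array_equal(B.T, B))    # True: B is symmetric
    print(np.array_equal(A.T, -A))   # True: A is skew-symmetric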

Addition Of Matrices

Let A and B be two comparable matrices, each of order (mxn). Then, their sum (A+B) is a matrix of order (mxn), obtained by adding the corresponding elements of A and B.

Example 1 If A = \(\left[\begin{array}{ll}
2 & 1 \\
0 & 4
\end{array}\right]\) and B = \(\left[\begin{array}{lll}
3 & 4 & 5 \\
1 & 2 & 3
\end{array}\right]\) then A and B are matrices of order 2 x 2 and 2 x 3 respectively.

So, A and B are not comparable.

Hence, A+B is not defined.

Example 2 Let A = \(\left[\begin{array}{lll}
5 & 0 & -2 \\
3 & 2 & -7
\end{array}\right]\) and B = \(\left[\begin{array}{rrr}
4 & -3 & -6 \\
-1 & 0 & 4
\end{array}\right] \text {. }\)

Clearly, each one of A and B is a 2 x 3 matrix.

∴ A + B is defined.

We have: A+B = \(\left[\begin{array}{lll}
5+4 & 0+(-3) & -2+(-6) \\
3+(-1) & 2+0 & -7+4
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
9 & -3 & -8 \\
2 & 2 & -3
\end{array}\right]\)

Some Results on Addition of Matrices

Theorem 1 Matrix addition is commutative, i.e., A + B = B + A for all comparable matrices A and B.

Proof

Let A = [aij]mxn and B = [bij]mxn. Then,

A + B = [aij]mxn + [bij]mxn

= [aij + bij]mxn [by the definition of addition of matrices]

= [bij + aij]mxn [∵ addition of numbers is commutative]

= [bij]mxn + [aij]mxn = B + A.

Hence, A + B = B + A.

Theorem 2 Matrix addition is associative, i.e., (A + B)+C = A+(B+C) for all comparable matrices A, B and C.

Proof

Let A = [aij]mxn, B = [bij]mxn and C = [cij]mxn. Then,

(A+B)+C = ([aij]mxn + [bij]mxn) + [cij]mxn

= [aij + bij]mxn + [cij]mxn

= [(aij+bij)+cij]mxn

= [aij + (bij + cij)]mxn

[∵ addition of numbers is associative]

= [aij]mxn + [bij+cij]mxn

= [aij]mxn + ([bij]mxn + [cij]mxn) = A + (B+C).

Hence, (A+B)+C = A+(B+C).

Theorem 3 If A is an mxn matrix and O is an mxn null matrix, then A + O = O + A = A.

Proof

Let A = [aij]mxn and O = [bij]mxn,

where bij = 0 for all suffixes i and j.

Then, A + O = [aij]mxn + [bij]mxn = [aij+bij]mxn

= [aij+0]mxn [∵ bij = 0]

= [aij]mxn = A.

∴ A + O = A.

Similarly, O + A = A.

Hence, A + O = O + A = A.

Remark The null matrix O of order m x n is the additive identity in the set of all m x n matrices.

Example 3 Let A = \(\left[\begin{array}{rrr}
3 & 5 & 4 \\
1 & 2 & -3
\end{array}\right]\) and O = \(\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]\), then verify that A + O = O + A = A.

Solution

Clearly, each one of A and O is a matrix of order (2×3).

So, (A+O) and (O+A) are both defined.

Now, A + O = \(\left[\begin{array}{rrr}
3 & 5 & 4 \\
1 & 2 & -3
\end{array}\right]+\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
3+0 & 5+0 & 4+0 \\
1+0 & 2+0 & -3+0
\end{array}\right]=\left[\begin{array}{rrr}
3 & 5 & 4 \\
1 & 2 & -3
\end{array}\right]\) = A.

And, O + A = \(\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]+\left[\begin{array}{rrr}
3 & 5 & 4 \\
1 & 2 & -3
\end{array}\right]\)

= \(\left[\begin{array}{lll}
0+3 & 0+5 & 0+4 \\
0+1 & 0+2 & 0+(-3)
\end{array}\right]=\left[\begin{array}{llr}
3 & 5 & 4 \\
1 & 2 & -3
\end{array}\right]\) = A.

Hence, A + O = O + A = A.

Negative Of A Matrix Let A = [aij]mxn. Then, the negative of A is the matrix (-A) = [-aij]mxn, obtained by replacing each element of A with its corresponding additive inverse. (-A) is called the additive inverse of A.

Example 4 If A = \(\left[\begin{array}{rrr}
3 & -2 & 0 \\
-5 & 7 & \sqrt{2}
\end{array}\right]\), find (-A) and verify that A + (-A) = (-A) + A = O.

Solution

Clearly, we have

(-A) = \(\left[\begin{array}{rrr}
-3 & 2 & 0 \\
5 & -7 & -\sqrt{2}
\end{array}\right]\)

Now, A + (-A) = \(\left[\begin{array}{rrr}
3 & -2 & 0 \\
-5 & 7 & \sqrt{2}
\end{array}\right]+\left[\begin{array}{rrr}
-3 & 2 & 0 \\
5 & -7 & -\sqrt{2}
\end{array}\right]\)

= \(\left[\begin{array}{ccc}
3+(-3) & -2+2 & 0+0 \\
-5+5 & 7+(-7) & \sqrt{2}+(-\sqrt{2})
\end{array}\right]=\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]=O\)

and, (-A) + A = \(\left[\begin{array}{rrr}
-3 & 2 & 0 \\
5 & -7 & -\sqrt{2}
\end{array}\right]+\left[\begin{array}{rrr}
3 & -2 & 0 \\
-5 & 7 & \sqrt{2}
\end{array}\right]\)

= \(\left[\begin{array}{ccc}
-3+3 & 2+(-2) & 0+0 \\
5+(-5) & -7+7 & -\sqrt{2}+\sqrt{2}
\end{array}\right]=\left[\begin{array}{lll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right]\)

Hence, A + (-A) = (-A) + A = O.

Theorem 4 If A and B are two matrices of the same order then prove that (A+B)’ = (A’+B’).

Proof Let A = [aij]mxn and B = [bij]mxn. Then,

A is an (mxn) matrix, B is an (mxn) matrix

⇒ (A+B) is an (mxn) matrix

⇒ (A + B)’ is an (nxm) matrix.

Also, A is an (mxn) matrix and B is an (mxn) matrix

⇒ A’ is an (nxm) matrix and B’ is an (nxm) matrix

⇒ (A’ + B’) is an (nxm) matrix.

Thus, (A+B)’ and (A’+B’) are comparable matrices.

Also, (j,i)th element of (A+B)’

= (i,j)th element of (A+B)

= (i,j)th element of A + (i,j)th element of B

= (j, i)th element of A’ + (j, i)th element of B’

= (j, i)th element of (A’ + B’).

Thus, (A + B)’ and (A’ + B’) are comparable and their corresponding elements are equal.

Hence, (A+B)’ = (A’ + B’).
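Theorem 4 can be illustrated on any pair of comparable matrices; a small NumPy check (assuming Python with NumPy is available) follows:

    import numpy as np

    A = np.array([[2, 3, 5], [-1, 0, 4]])
    B = np.array([[4, -2, 3], [2, 6, -1]])
    print(np.array_equal((A + B).T, A.T + B.T))   # True: (A + B)' = A' + B'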

Solved Examples

Example 1 Let A = \(\left[\begin{array}{rrr}
2 & 3 & -5 \\
0 & -4 & 8
\end{array}\right]\) Verify that (A’)’ = A.

Solution

We have A’ = \(\left[\begin{array}{rrr}
2 & 3 & -5 \\
0 & -4 & 8
\end{array}\right]^t=\left[\begin{array}{rr}
2 & 0 \\
3 & -4 \\
-5 & 8
\end{array}\right]\)

⇒ \(\left(A^t\right)^t=\left[\begin{array}{rr}
2 & 0 \\
3 & -4 \\
-5 & 8
\end{array}\right]^t=\left[\begin{array}{rrr}
2 & 3 & -5 \\
0 & -4 & 8
\end{array}\right]=A\)

Hence, (A’)’ = A.

Example 2 Let A = \(\left[\begin{array}{rrr}
2 & 3 & 5 \\
-1 & 0 & 4
\end{array}\right]\) and B = \(\left[\begin{array}{rrr}
4 & -2 & 3 \\
2 & 6 & -1
\end{array}\right]\). Verify that A + B = B + A.

Solution

Here, A is a 2 x 3 matrix and B is a 2 x 3 matrix. So, A and B are comparable.

Therefore, (A+B) and (B+A) both exist and each is a 2 x 3 matrix.

Now, A + B = \(\left[\begin{array}{rrr}
2 & 3 & 5 \\
-1 & 0 & 4
\end{array}\right]+\left[\begin{array}{rrr}
4 & -2 & 3 \\
2 & 6 & -1
\end{array}\right]\)

= \(\left[\begin{array}{rll}
2+4 & 3+(-2) & 5+3 \\
-1+2 & 0+6 & 4+(-1)
\end{array}\right]=\left[\begin{array}{lll}
6 & 1 & 8 \\
1 & 6 & 3
\end{array}\right] .\)

And, B + A = \(\left[\begin{array}{rrr}
4 & -2 & 3 \\
2 & 6 & -1
\end{array}\right]+\left[\begin{array}{rrr}
2 & 3 & 5 \\
-1 & 0 & 4
\end{array}\right]\)

= \(\left[\begin{array}{lrr}
4+2 & -2+3 & 3+5 \\
2+(-1) & 6+0 & (-1)+4
\end{array}\right]=\left[\begin{array}{lll}
6 & 1 & 8 \\
1 & 6 & 3
\end{array}\right]\)

Hence, A + B = B + A.

Example 3 Let A = \(\left[\begin{array}{rr}
1 & -2 \\
5 & 4 \\
3 & 0
\end{array}\right]\), B = \(\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-3 & 5
\end{array}\right]\) and C = \(\left[\begin{array}{rr}
4 & 3 \\
-2 & 2 \\
1 & 6
\end{array}\right]\). Verify that (A + B) + C = A + (B + C).

Solution

Clearly, each one of the matrices A, B, C is a (3×2) matrix. So, (A + B) + C and A + (B + C) are both defined and each one is a 3 x 2 matrix.

Now, (A+B) = \(\left[\begin{array}{rr}
1 & -2 \\
5 & 4 \\
3 & 0
\end{array}\right]+\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-3 & 5
\end{array}\right]\)

= \(\left[\begin{array}{lr}
1+3 & -2+1 \\
5+0 & 4+2 \\
3+(-3) & 0+5
\end{array}\right]=\left[\begin{array}{rr}
4 & -1 \\
5 & 6 \\
0 & 5
\end{array}\right]\)

∴ (A+B)+C = \(\left[\begin{array}{rr}
4 & -1 \\
5 & 6 \\
0 & 5
\end{array}\right]+\left[\begin{array}{rr}
4 & 3 \\
-2 & 2 \\
1 & 6
\end{array}\right]\)

= \(\left[\begin{array}{lr}
4+4 & -1+3 \\
5+(-2) & 6+2 \\
0+1 & 5+6
\end{array}\right]=\left[\begin{array}{rr}
8 & 2 \\
3 & 8 \\
1 & 11
\end{array}\right]\)

Also, (B + C) = \(\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-3 & 5
\end{array}\right]+\left[\begin{array}{rr}
4 & 3 \\
-2 & 2 \\
1 & 6
\end{array}\right]\)

= \(\left[\begin{array}{ll}
3+4 & 1+3 \\
0+(-2) & 2+2 \\
-3+1 & 5+6
\end{array}\right]=\left[\begin{array}{rr}
7 & 4 \\
-2 & 4 \\
-2 & 11
\end{array}\right] \text {. }\)

∴ A + (B + C) = \(\left[\begin{array}{rr}
1 & -2 \\
5 & 4 \\
3 & 0
\end{array}\right]+\left[\begin{array}{rr}
7 & 4 \\
-2 & 4 \\
-2 & 11
\end{array}\right]\)

= \(\left[\begin{array}{ll}
1+7 & -2+4 \\
5+(-2) & 4+4 \\
3+(-2) & 0+11
\end{array}\right]=\left[\begin{array}{rr}
8 & 2 \\
3 & 8 \\
1 & 11
\end{array}\right] \text {. }\)

Hence, (A + B) + C = A + (B + C).


Example 4 Find the additive inverse of the matrix A = \(\left[\begin{array}{rrr}
2 & -5 & 0 \\
4 & 3 & -1
\end{array}\right]\).

Solution

The additive inverse of the given matrix A is the matrix -A, given by

\(-A=\left[\begin{array}{ccc}
-2 & -(-5) & 0 \\
-4 & -3 & -(-1)
\end{array}\right]=\left[\begin{array}{ccc}
-2 & 5 & 0 \\
-4 & -3 & 1
\end{array}\right]\)

Subtraction Of Matrices If A and B are two comparable matrices then we define (A – B) = A + (-B).

Example 5 If A = \(\left[\begin{array}{rrr}
2 & -3 & 1 \\
0 & 7 & -9
\end{array}\right]\) and B = \(\left[\begin{array}{lll}
1 & 2 & -3 \\
4 & 8 & -4
\end{array}\right]\), find (A – B).

Solution

We have, (-B) = \(\left[\begin{array}{lll}
-1 & -2 & 3 \\
-4 & -8 & 4
\end{array}\right]\)

∴ (A – B) = A + (-B)

= \(\left[\begin{array}{rrr}
2 & -3 & 1 \\
0 & 7 & -9
\end{array}\right]+\left[\begin{array}{lll}
-1 & -2 & 3 \\
-4 & -8 & 4
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
2+(-1) & -3+(-2) & 1+3 \\
0+(-4) & 7+(-8) & -9+4
\end{array}\right]=\left[\begin{array}{rrr}
1 & -5 & 4 \\
-4 & -1 & -5
\end{array}\right]\)

Hence, (A – B) = \(\left[\begin{array}{rrr}
1 & -5 & 4 \\
-4 & -1 & -5
\end{array}\right] \text {. }\)

Example 6 Prove that the sum of two symmetric matrices is symmetric.

Solution

Let A and B be two symmetric matrices of the same order.

Then, A’ = A and B’ = B.

∴ (A + B)’ = A’ + B’ = (A + B) [∵ A’ = A and B’ = B].

Hence, (A + B) is symmetric.

Example 7 Prove that the sum of two skew-symmetric matrices is a skew-symmetric matrix.

Solution

Let A and B be two skew-symmetric matrices. Then,

A’ = -A and B’ = -B.

∴ (A + B)’ = (A’ + B’) = (-A) + (-B) = -(A + B).

Hence, (A + B) is skew-symmetric.

Example 8 For any square matrix A with real entries, prove that (1) (A + A’) is symmetric (2) (A – A’) is skew-symmetric.

Solution

Let A be any square matrix with real entries. Then,

(1) (A + A’)’ = A’ + (A’)’ [∵ (A + B)’ = A’ + B’]

= A’ + A [∵ (A’)’ = A]

= (A + A’) [∵ A + B = B + A].

Hence, (A + A’) is symmetric.

(2) (A – A’)’ = A’ – (A’)’ [∵ (A – B)’ = A’ – B’]

= A’ – A [∵ (A’)’ = A]

= -(A – A’).

Thus, (A – A’)’ = -(A – A’).

Hence, (A – A’) is skew-symmetric.
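For a concrete square matrix, the two results of Example 8 can be checked together with NumPy (assuming it is installed):

    import numpy as np

    A = np.array([[1, 4, 7],
                  [2, 5, 8],
                  [3, 6, 9]])
    S = A + A.T
    K = A - A.T
    print(np.array_equal(S.T, S))    # True: (A + A') is symmetric
    print(np.array_equal(K.T, -K))   # True: (A - A') is skew-symmetric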

Multiplication of Matrices

For two given matrices A and B, we say that the product AB exists only when the number of columns in A equals the number of rows in B.

When AB exists, we say that A is conformable to B for multiplication.

Product Of Matrices

Let A = [aij]mxn and B = [bij]nxp be two matrices such that the number of columns in A equals the number of rows in B.

Then, AB exists and it is an (mxp) matrix, given by

\(A B=\left[c_{i k}\right]_{m \times p}, \text { where } c_{i k}=\left(a_{i 1} b_{1 k}+a_{i 2} b_{2 k}+\ldots+a_{i n} b_{n k}\right)=\sum_{j=1}^n a_{i j} b_{j k}\)

∴ (i,k)th element of AB

= sum of the products of corresponding elements of ith row of A and kth column of B.

Remarks For two given matrices A and B:

(1) AB may exist and BA may not exist;

(2) BA may exist and AB may not exist;

(3) AB and BA both may not exist;

(4) AB and BA both may exist.

Solved Examples

Example 1 If A = \(\left[\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23}
\end{array}\right]\) and B = \(\left[\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32}
\end{array}\right]\) then show that AB and BA both exist. Find AB and BA.

Solution

Here, A is a 2 x 3 matrix and B is a 3 x 2 matrix.

∴ AB exists and it is a 2 x 2 matrix.

Let AB = \(\left[\begin{array}{ll}
c_{11} & c_{12} \\
c_{21} & c_{22}
\end{array}\right]\). Then,

c11 = (1st row of A) x (1st column of B)

= a11b11 + a12b21 + a13b31;

c12 = (1st row of A)x(2nd column of B)

= a11b12 + a12b22 + a13b32;

c21 = (2nd row of A) x (1st column of B)

= a21b11 + a22b21 + a23b31;

and c22 = (2nd row of A) x (2nd column of B)

= a21b12 + a22b22 + a23b32.

∴ AB = \(\left[\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23}
\end{array}\right]\left[\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32}
\end{array}\right]\)

= \(\left[\begin{array}{ll}
a_{11} b_{11}+a_{12} b_{21}+a_{13} b_{31} & a_{11} b_{12}+a_{12} b_{22}+a_{13} b_{32} \\
a_{21} b_{11}+a_{22} b_{21}+a_{23} b_{31} & a_{21} b_{12}+a_{22} b_{22}+a_{23} b_{32}
\end{array}\right]\)

Again, B is a 3 x 2 matrix and A is a 2 x 3 matrix. So, BA exists and it is a 3 x 3 matrix.

Proceeding as above, we get

BA = \(\left[\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32}
\end{array}\right]\left[\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23}
\end{array}\right]\)

= \(\left[\begin{array}{lll}
b_{11} a_{11}+b_{12} a_{21} & b_{11} a_{12}+b_{12} a_{22} & b_{11} a_{13}+b_{12} a_{23} \\
b_{21} a_{11}+b_{22} a_{21} & b_{21} a_{12}+b_{22} a_{22} & b_{21} a_{13}+b_{22} a_{23} \\
b_{31} a_{11}+b_{32} a_{21} & b_{31} a_{12}+b_{32} a_{22} & b_{31} a_{13}+b_{32} a_{23}
\end{array}\right] .\)

Example 2 If A = \(\left[\begin{array}{rr}
2 & -1 \\
3 & 4 \\
1 & 5
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
-1 & 3 \\
2 & 1
\end{array}\right]\), find AB. Does BA exist?

Solution

Here A is a 3 x 2 matrix and B is a 2 x 2 matrix.

Clearly, the number of columns in A equals the number of rows in B.

∴ AB exists and it is a 3 x 2 matrix.

Now, AB = \(\left[\begin{array}{rr}
2 & -1 \\
3 & 4 \\
1 & 5
\end{array}\right]\left[\begin{array}{rr}
-1 & 3 \\
2 & 1
\end{array}\right]\)

= \(\left[\begin{array}{rl}
2 \cdot(-1)+(-1) \cdot 2 & 2 \cdot 3+(-1) \cdot 1 \\
3 \cdot(-1)+4 \cdot 2 & 3 \cdot 3+4 \cdot 1 \\
1 \cdot(-1)+5 \cdot 2 & 1 \cdot 3+5 \cdot 1
\end{array}\right]\)

= \(\left[\begin{array}{rr}
-4 & 5 \\
5 & 13 \\
9 & 8
\end{array}\right]\)

Further, B is a 2 x 2 matrix and A is a 3 x 2 matrix. So, the number of columns in B is not equal to the number of rows in A.

So, BA does not exist.

Example 3 Let A = \(\left[\begin{array}{rrr}
1 & -2 & 3 \\
-4 & 2 & 5
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
2 & 3 \\
4 & 5 \\
-2 & 1
\end{array}\right]\). Find AB and BA, and show that AB ≠ BA.

Solution

Here A is a 2 x 3 matrix and B is a 3 x 2 matrix.

So, AB exists and it is a 2 x 2 matrix.

Now, AB = \(\left[\begin{array}{rrr}
1 & -2 & 3 \\
-4 & 2 & 5
\end{array}\right]\left[\begin{array}{rr}
2 & 3 \\
4 & 5 \\
-2 & 1
\end{array}\right]\)

= \(\left[\begin{array}{ll}
1 \cdot 2+(-2) \cdot 4+3 \cdot(-2) & 1 \cdot 3+(-2) \cdot 5+3 \cdot 1 \\
(-4) \cdot 2+2 \cdot 4+5 \cdot(-2) & (-4) \cdot 3+2 \cdot 5+5 \cdot 1
\end{array}\right]\)

= \(\left[\begin{array}{rr}
-12 & -4 \\
-10 & 3
\end{array}\right]\)

Again, B is a 3 x 2 matrix and A is a 2 x 3 matrix.

So, BA exists and it is a 3 x 3 matrix.

Now, BA = \(\left[\begin{array}{rr}
2 & 3 \\
4 & 5 \\
-2 & 1
\end{array}\right]\left[\begin{array}{rrr}
1 & -2 & 3 \\
-4 & 2 & 5
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
2 \cdot 1+3 \cdot(-4) & 2 \cdot(-2)+3 \cdot 2 & 2 \cdot 3+3 \cdot 5 \\
4 \cdot 1+5 \cdot(-4) & 4 \cdot(-2)+5 \cdot 2 & 4 \cdot 3+5 \cdot 5 \\
(-2) \cdot 1+1 \cdot(-4) & (-2) \cdot(-2)+1 \cdot 2 & (-2) \cdot 3+1 \cdot 5
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
-10 & 2 & 21 \\
-16 & 2 & 37 \\
-6 & 6 & -1
\end{array}\right] \text {. }\)

Hence, AB ≠ BA.
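The products above can be reproduced with NumPy's @ operator (assuming Python with NumPy is available); note that AB and BA do not even have the same order here:

    import numpy as np

    A = np.array([[1, -2, 3], [-4, 2, 5]])
    B = np.array([[2, 3], [4, 5], [-2, 1]])
    print(A @ B)   # 2 x 2 matrix [[-12 -4] [-10 3]]
    print(B @ A)   # 3 x 3 matrix, so AB and BA cannot be equal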

Properties of Matrix Multiplication

1. Commutativity

Matrix multiplication is not commutative in general.

Proof Let A and B be two given matrices.

If AB exists then it is quite possible that BA may not exist.

For example, if A is a 3 x 2 matrix and B is a 2 x 2 matrix then clearly, AB exists but BA does not exist.

Similarly, if BA exists then AB may not exist.

For example, if A is a 2 x 3 matrix and B is a 2 x 2 matrix then clearly, BA exists but AB does not exist.

Further, if AB and BA both exist, they may not be comparable. For example, if A is a 2 x 3 matrix and B is a 3 x 2 matrix then clearly, both AB and BA exist. But, AB is a 2 x 2 matrix while BA is a 3 x 3 matrix.

Again, if AB and BA both exist and they are comparable, even then they may not be equal.

For example, if A = \(\left[\begin{array}{ll}
1 & 1 \\
1 & 2
\end{array}\right]\) and B = \(\left[\begin{array}{ll}
1 & 2 \\
0 & 3
\end{array}\right]\) then AB and BA are both defined and each one is a 2 x 2 matrix.

But, AB = \(\left[\begin{array}{ll}
1 & 5 \\
1 & 8
\end{array}\right]\) and BA = \(\left[\begin{array}{ll}
3 & 5 \\
3 & 6
\end{array}\right]\)

This shows that AB ≠ BA.

Hence, in general, AB ≠ BA.

Remarks (1) When AB = BA, we say that A and B commute.

(2) When AB = -BA, we say that A and B anticommute.

2. Associative law

For any matrices A, B, C for which (AB)C and A(BC) both exist, we have (AB)C = A(BC).

3. Distributive laws of multiplication over addition We have:

(1) A.(B + C) = (AB + AC)

(2) (A + B).C = (AC + BC)

4. The product of two nonzero matrices can be a zero matrix.

Example Let A = \(\left[\begin{array}{ll}
0 & 1 \\
0 & 2
\end{array}\right]\) and B = \(\left[\begin{array}{ll}
1 & 2 \\
0 & 0
\end{array}\right]\).

Then, A ≠ O and B ≠ O. But, AB = O.

Left Zero Divisor If AB = O and A ≠ O then A is called a left zero divisor of AB.

Right Zero Divisor If AB = O and B ≠ O then B is called a right zero divisor of AB.

5. If A is a given square matrix and I is an identity matrix of the same order as A then we have A.I = I.A = A.

6. If A is a given square matrix and O is the null matrix of the same order as A then O.A = A.O = O.
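Properties 4-6 are easy to verify on the example above with a short NumPy sketch (assuming it is available):

    import numpy as np

    A = np.array([[0, 1], [0, 2]])
    B = np.array([[1, 2], [0, 0]])
    I = np.eye(2, dtype=int)
    O = np.zeros((2, 2), dtype=int)
    print(A @ B)                                               # [[0 0] [0 0]]: AB = O although A != O, B != O
    print(np.array_equal(A @ I, A), np.array_equal(I @ A, A))  # True True: AI = IA = A
    print(np.array_equal(A @ O, O), np.array_equal(O @ A, O))  # True True: AO = OA = O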

Positive Integral Powers of a Square Matrix

Let A be a square matrix of order n. Then, we define:

A2 = A.A;

A3 = A.A.A = A2.A;

A4 = A.A.A.A = A3.A, and so on.

∴ An = (A. A. A……n times).

Theorem 1 If A and B are square matrices of the same order then (A+B)2 = A2 + AB + BA + B2. Also, when AB = BA then (A+B)2 = A2 + 2AB + B2.

Proof Let A and B be n-rowed square matrices.

Then, clearly, (A + B) is a square matrix of order n.

So, (A + B)2 is defined.

Now, (A + B)2 = (A + B).(A + B)

= A.(A + B) + B.(A + B) [by distributive law]

= AA + AB + BA + BB [by distributive law]

=A2 + AB + BA + B2.

Hence, (A + B)2 = (A2 + AB + BA + B2).

Particular case When AB = BA

In this case, we have

(A + B)2 = (A2 + AB + AB + B2) = (A2 + 2AB + B2) [∵ BA = AB].

Theorem 2 If A and B are square matrices of the same order then (A + B)(A – B) = A2 – AB + BA – B2.

Also, when AB = BA then (A + B)(A – B) = A2 – B2.

Proof

We have

(A + B).(A – B) = A(A – B) + B(A – B) [by distributive law]

= AA – AB + BA – BB [∵ A(B – C) = AB – AC]

= A2 – AB + BA – B2.

Hence, (A + B)(A – B) = A2 – AB + BA – B2.

Particular case When AB = BA

In this case, (A + B)(A – B) = (A2 – B2) [∵ BA = AB].
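Theorem 2 can be illustrated with the pair of matrices used earlier (for which AB ≠ BA); a NumPy check (assuming it is installed) shows why the BA term cannot be dropped in general:

    import numpy as np

    A = np.array([[1, 1], [1, 2]])
    B = np.array([[1, 2], [0, 3]])
    lhs = (A + B) @ (A - B)
    rhs = A @ A - A @ B + B @ A - B @ B
    print(np.array_equal(lhs, rhs))            # True: (A + B)(A - B) = A^2 - AB + BA - B^2
    print(np.array_equal(lhs, A @ A - B @ B))  # False here, because AB != BA for these matrices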

Solved Examples

Example 1 If A = \(\left[\begin{array}{ll}
5 & 4 \\
2 & 3
\end{array}\right]\) and B = \(\left[\begin{array}{lll}
3 & 5 & 1 \\
6 & 8 & 4
\end{array}\right]\), find AB and BA whichever exists.

Solution

Here, A is a 2 x 2 matrix and B is a 2 x 3 matrix.

Clearly, the number of columns in A = number of rows in B.

∴ AB exists and it is a 2 x 3 matrix.

AB = \(\left[\begin{array}{ll}
5 & 4 \\
2 & 3
\end{array}\right]\left[\begin{array}{lll}
3 & 5 & 1 \\
6 & 8 & 4
\end{array}\right]\)

= \(\left[\begin{array}{lll}
5 \cdot 3+4 \cdot 6 & 5 \cdot 5+4 \cdot 8 & 5 \cdot 1+4 \cdot 4 \\
2 \cdot 3+3 \cdot 6 & 2 \cdot 5+3 \cdot 8 & 2 \cdot 1+3 \cdot 4
\end{array}\right]\)

= \(\left[\begin{array}{ccc}
15+24 & 25+32 & 5+16 \\
6+18 & 10+24 & 2+12
\end{array}\right]=\left[\begin{array}{lll}
39 & 57 & 21 \\
24 & 34 & 14
\end{array}\right]\)

Again, B is a 2 x 3 matrix and A is a 2 x 2 matrix.

∴ number of columns in B ≠ number of rows in A.

So, BA does not exist.

Example 2 If A = \(\left[\begin{array}{rrr}
2 & -1 & 3 \\
-4 & 5 & 1
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
2 & 3 \\
4 & -2 \\
1 & 5
\end{array}\right]\) then find AB and BA. Show that AB ≠ BA.

Solution

Here, A is a 2 x 3 matrix and B is a 3 x 2 matrix.

So, number of columns in A = number of rows in B.

∴ AB exists and it is a 2 x 2 matrix.

AB = \(\left[\begin{array}{rrr}
2 & -1 & 3 \\
-4 & 5 & 1
\end{array}\right]\left[\begin{array}{rr}
2 & 3 \\
4 & -2 \\
1 & 5
\end{array}\right]\)

= \(\left[\begin{array}{ll}
2 \cdot 2+(-1) \cdot 4+3 \cdot 1 & 2 \cdot 3+(-1) \cdot(-2)+3 \cdot 5 \\
-4 \cdot 2+5 \cdot 4+1 \cdot 1 & -4 \cdot 3+5 \cdot(-2)+1 \cdot 5
\end{array}\right]\)

= \(\left[\begin{array}{rr}
4-4+3 & 6+2+15 \\
-8+20+1 & -12-10+5
\end{array}\right]=\left[\begin{array}{rr}
3 & 23 \\
13 & -17
\end{array}\right] .\)

Again, B is a 3 x 2 matrix and A is a 2 x 3 matrix.

So, number of columns in B = number of rows in A.

∴ BA exists and it is a 3 x 3 matrix.

BA = \(\left[\begin{array}{rr}
2 & 3 \\
4 & -2 \\
1 & 5
\end{array}\right]\left[\begin{array}{rrr}
2 & -1 & 3 \\
-4 & 5 & 1
\end{array}\right]\)

= \(\left[\begin{array}{lll}
2 \cdot 2+3 \cdot(-4) & 2 \cdot(-1)+3 \cdot 5 & 2 \cdot 3+3 \cdot 1 \\
4 \cdot 2+(-2) \cdot(-4) & 4 \cdot(-1)+(-2) \cdot 5 & 4 \cdot 3+(-2) \cdot 1 \\
1 \cdot 2+5 \cdot(-4) & 1 \cdot(-1)+5 \cdot 5 & 1 \cdot 3+5 \cdot 1
\end{array}\right]\)

= \(\left[\begin{array}{llr}
4-12 & -2+15 & 6+3 \\
8+8 & -4-10 & 12-2 \\
2-20 & -1+25 & 3+5
\end{array}\right]=\left[\begin{array}{rrr}
-8 & 13 & 9 \\
16 & -14 & 10 \\
-18 & 24 & 8
\end{array}\right]\)

Clearly, AB ≠ BA.

Example 3 If A = \(\left[\begin{array}{rrr}
1 & -1 & 2 \\
3 & 2 & 0 \\
-2 & 0 & 1
\end{array}\right]\), B = \(\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-2 & 5
\end{array}\right]\) and C = \(\left[\begin{array}{lll}
2 & 1 & -3 \\
3 & 0 & -1
\end{array}\right]\) then verify that (AB)C = A(BC).

Solution

We have

AB = \(\left[\begin{array}{rrr}
1 & -1 & 2 \\
3 & 2 & 0 \\
-2 & 0 & 1
\end{array}\right]\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-2 & 5
\end{array}\right]\)

= \(\left[\begin{array}{rr}
3-0-4 & 1-2+10 \\
9+0-0 & 3+4+0 \\
-6+0-2 & -2+0+5
\end{array}\right]=\left[\begin{array}{rr}
-1 & 9 \\
9 & 7 \\
-8 & 3
\end{array}\right]\)

⇒ (AB)C = \(\left[\begin{array}{rr}
-1 & 9 \\
9 & 7 \\
-8 & 3
\end{array}\right]\left[\begin{array}{lll}
2 & 1 & -3 \\
3 & 0 & -1
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
-2+27 & -1+0 & 3-9 \\
18+21 & 9+0 & -27-7 \\
-16+9 & -8+0 & 24-3
\end{array}\right]=\left[\begin{array}{rrr}
25 & -1 & -6 \\
39 & 9 & -34 \\
-7 & -8 & 21
\end{array}\right] \text {. }\)

Also, BC = \(\left[\begin{array}{rr}
3 & 1 \\
0 & 2 \\
-2 & 5
\end{array}\right]\left[\begin{array}{lll}
2 & 1 & -3 \\
3 & 0 & -1
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
6+3 & 3+0 & -9-1 \\
0+6 & 0+0 & 0-2 \\
-4+15 & -2+0 & 6-5
\end{array}\right]=\left[\begin{array}{rrr}
9 & 3 & -10 \\
6 & 0 & -2 \\
11 & -2 & 1
\end{array}\right]\)

⇒ A(BC) = \(\left[\begin{array}{rrr}
1 & -1 & 2 \\
3 & 2 & 0 \\
-2 & 0 & 1
\end{array}\right]\left[\begin{array}{rrr}
9 & 3 & -10 \\
6 & 0 & -2 \\
11 & -2 & 1
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
9-6+22 & 3-0-4 & -10+2+2 \\
27+12+0 & 9+0-0 & -30-4+0 \\
-18+0+11 & -6+0-2 & 20-0+1
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
25 & -1 & -6 \\
39 & 9 & -34 \\
-7 & -8 & 21
\end{array}\right]\)

Hence, (AB)C = A(BC).

Example 4 If A = \(\left[\begin{array}{ll}
3 & 2 \\
1 & 0
\end{array}\right]\), B = \(\left[\begin{array}{rrr}
1 & -2 & 5 \\
0 & 7 & 3
\end{array}\right]\) and C = \(\left[\begin{array}{rrr}
8 & 1 & -6 \\
2 & -5 & 0
\end{array}\right]\) verify that A(B + C) = (AB + AC).

Solution

We have

A(B + C) = \(\left[\begin{array}{ll}
3 & 2 \\
1 & 0
\end{array}\right] \cdot\left\{\left[\begin{array}{rrr}
1 & -2 & 5 \\
0 & 7 & 3
\end{array}\right]+\left[\begin{array}{rrr}
8 & 1 & -6 \\
2 & -5 & 0
\end{array}\right]\right\}\)

= \(\left[\begin{array}{ll}
3 & 2 \\
1 & 0
\end{array}\right]\left[\begin{array}{rrr}
9 & -1 & -1 \\
2 & 2 & 3
\end{array}\right]\)

= \(\left[\begin{array}{lll}
3 \cdot 9+2 \cdot 2 & 3 \cdot(-1)+2 \cdot 2 & 3 \cdot(-1)+2 \cdot 3 \\
1 \cdot 9+0 \cdot 2 & 1 \cdot(-1)+0 \cdot 2 & 1 \cdot(-1)+0 \cdot 3
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
31 & 1 & 3 \\
9 & -1 & -1
\end{array}\right] \text {. }\)

Now, AB = \(\left[\begin{array}{ll}
3 & 2 \\
1 & 0
\end{array}\right]\left[\begin{array}{rrr}
1 & -2 & 5 \\
0 & 7 & 3
\end{array}\right]\)

= \(\left[\begin{array}{lll}
3 \cdot 1+2 \cdot 0 & 3 \cdot(-2)+2 \cdot 7 & 3 \cdot 5+2 \cdot 3 \\
1 \cdot 1+0 \cdot 0 & 1 \cdot(-2)+0 \cdot 7 & 1 \cdot 5+0 \cdot 3
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
3 & 8 & 21 \\
1 & -2 & 5
\end{array}\right]\)

And, AC = \(\left[\begin{array}{ll}
3 & 2 \\
1 & 0
\end{array}\right]\left[\begin{array}{rrr}
8 & 1 & -6 \\
2 & -5 & 0
\end{array}\right]\)

= \(\left[\begin{array}{lll}
3 \cdot 8+2 \cdot 2 & 3 \cdot 1+2 \cdot(-5) & 3 \cdot(-6)+2 \cdot 0 \\
1 \cdot 8+0 \cdot 2 & 1 \cdot 1+0 \cdot(-5) & 1 \cdot(-6)+0 \cdot 0
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
28 & -7 & -18 \\
8 & 1 & -6
\end{array}\right] .\)

∴ (AB + AC) = \(\left[\begin{array}{rrr}
3 & 8 & 21 \\
1 & -2 & 5
\end{array}\right]+\left[\begin{array}{rrr}
28 & -7 & -18 \\
8 & 1 & -6
\end{array}\right]\)

= \(\left[\begin{array}{rrr}
31 & 1 & 3 \\
9 & -1 & -1
\end{array}\right]\)

Hence, A(B + C) = (AB + AC).

Example 5 Give an example of two matrices A and B such that A ≠ O, B ≠ O and AB = BA = O.

Solution

Let A = \(\left[\begin{array}{ll}
1 & 1 \\
1 & 1
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
1 & -1 \\
-1 & 1
\end{array}\right]\). Then,

AB = \(\left[\begin{array}{ll}
1 & 1 \\
1 & 1
\end{array}\right]\left[\begin{array}{rr}
1 & -1 \\
-1 & 1
\end{array}\right]=\left[\begin{array}{ll}
1 \cdot 1+1 \cdot(-1) & 1 \cdot(-1)+1 \cdot 1 \\
1 \cdot 1+1 \cdot(-1) & 1 \cdot(-1)+1 \cdot 1
\end{array}\right]=\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]\) = O.

BA = \(\left[\begin{array}{rr}
1 & -1 \\
-1 & 1
\end{array}\right]\left[\begin{array}{ll}
1 & 1 \\
1 & 1
\end{array}\right]=\left[\begin{array}{ll}
1 \cdot 1+1 \cdot(-1) & 1 \cdot 1+1 \cdot(-1) \\
(-1) \cdot 1+1 \cdot 1 & (-1) \cdot 1+1 \cdot 1
\end{array}\right]=\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]\)=O

Hence, AB = BA = O.

Example 6 Let A = \(\left[\begin{array}{cc}
0 & -\tan \frac{\alpha}{2} \\
\tan \frac{\alpha}{2} & 0
\end{array}\right]\) and I is the identity matrix of order 2. Show that (I + A) = \((I-A) \cdot\left[\begin{array}{rr}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha
\end{array}\right]\)

Solution

Let \(\tan \frac{\alpha}{2}=t\)

Then, \(\cos \alpha=\frac{1-\tan ^2(\alpha / 2)}{1+\tan ^2(\alpha / 2)}=\frac{1-t^2}{1+t^2}\)

and \(\sin \alpha=\frac{2 \tan (\alpha / 2)}{1+\tan ^2(\alpha / 2)}=\frac{2 t}{1+t^2}\)

∴ \((I+A)=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]+\left[\begin{array}{rr}
0 & -t \\
t & 0
\end{array}\right]=\left[\begin{array}{rr}
1 & -t \\
t & 1
\end{array}\right]\)

And, \((I-A)=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]-\left[\begin{array}{rr}
0 & -t \\
t & 0
\end{array}\right]=\left[\begin{array}{rr}
1 & t \\
-t & 1
\end{array}\right]\)

∴ \((I-A) \cdot\left[\begin{array}{rr}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha
\end{array}\right]\)

= \(\left[\begin{array}{rr}
1 & t \\
-t & 1
\end{array}\right]\left[\begin{array}{ll}
\frac{1-t^2}{1+t^2} & \frac{-2 t}{1+t^2} \\
\frac{2 t}{1+t^2} & \frac{1-t^2}{1+t^2}
\end{array}\right]\)

= \(\left[\begin{array}{cc}
\frac{1-t^2}{1+t^2}+\frac{2 t^2}{1+t^2} & \frac{-2 t}{1+t^2}+\frac{t\left(1-t^2\right)}{1+t^2} \\
\frac{-t\left(1-t^2\right)}{1+t^2}+\frac{2 t}{1+t^2} & \frac{2 t^2}{1+t^2}+\frac{1-t^2}{1+t^2}
\end{array}\right]\)

= \(\left[\begin{array}{rr}
1 & -t \\
t & 1
\end{array}\right]\)=(I+A) .

Hence, (I + A) = \((I-A) \cdot\left[\begin{array}{rr}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha
\end{array}\right]\)

Example 7 If A = \(\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right]\) then prove that \(A^n=\left[\begin{array}{rr}
\cos n \theta & \sin n \theta \\
-\sin n \theta & \cos n \theta
\end{array}\right], n \in N .\)

Solution

We shall prove the result by using the principle of mathematical induction.

When n = 1, we have

\(A^1=\left[\begin{array}{rr}
\cos 1 \cdot \theta & \sin 1 \cdot \theta \\
-\sin 1 \cdot \theta & \cos 1 \cdot \theta
\end{array}\right]=\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right] .\)

Thus, the result is true for n = 1.

Let the result be true for n = k.

Then, \(A^k=\left[\begin{array}{rr}
\cos k \theta & \sin k \theta \\
-\sin k \theta & \cos k \theta
\end{array}\right]\)

∴ \(A^{k+1}\)

= \(A \cdot A^k=\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right]\left[\begin{array}{rr}
\cos k \theta & \sin k \theta \\
-\sin k \theta & \cos k \theta
\end{array}\right]\)

= \(\left[\begin{array}{rr}
\cos \theta \cos k \theta-\sin \theta \sin k \theta & \cos \theta \sin k \theta+\sin \theta \cos k \theta \\
-\sin \theta \cos k \theta-\cos \theta \sin k \theta & -\sin \theta \sin k \theta+\cos \theta \cos k \theta
\end{array}\right]\)

= \(\left[\begin{array}{rr}
\cos (\theta+k \theta) & \sin (\theta+k \theta) \\
-\sin (\theta+k \theta) & \cos (\theta+k \theta)
\end{array}\right]\)

= \(\left[\begin{array}{rr}
\cos (k+1) \theta & \sin (k+1) \theta \\
-\sin (k+1) \theta & \cos (k+1) \theta
\end{array}\right]\)

Thus, the result is true for n = (k+1), whenever it is true for n = k.

Hence, \(A^n=\left[\begin{array}{rr}
\cos n \theta & \sin n \theta \\
-\sin n \theta & \cos n \theta
\end{array}\right]\) for all values of n ∈ N.
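The formula for \(A^n\) can be tested numerically for a particular θ and n; the sketch below (assuming Python with NumPy) uses np.linalg.matrix_power:

    import numpy as np

    theta, n = 0.7, 5
    A = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    An = np.linalg.matrix_power(A, n)
    expected = np.array([[np.cos(n * theta), np.sin(n * theta)],
                         [-np.sin(n * theta), np.cos(n * theta)]])
    print(np.allclose(An, expected))   # True (up to floating-point rounding)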

Example 8 Let A and B be symmetric matrices of the same order. Show that AB is symmetric if and only if AB = BA.

Solution

Since A and B are symmetric, we have A’ = A and B’ = B.

Let AB be symmetric. Then,

\((A B)^t=A B \Rightarrow B^t A^t=A B\) [∵ \((A B)^t=B^t A^t\)]

⇒ BA = AB [∵ B’ = B and A’ = A].

Thus, AB = BA.

Conversely, let AB = BA. Then,

AB = BA ⇒ \((A B)^t=(B A)^t=A^t B^t=A B\) [∵ A’ = A and B’ = B]

⇒ AB is symmetric.

Hence, AB is symmetric ⇔ AB = BA.

Example 9 If A and B are symmetric matrices, prove that (AB – BA) is skew-symmetric.

Solution

Since A and B are symmetric, we have At = A and Bt = B.

∴ \((A B-B A)^t=(A B)^t-(B A)^t\)

= \(B^t A^t-A^t B^t\) [∵ \((A B)^t=B^t A^t \text { and }(B A)^t=A^t B^t\)]

= BA – AB [∵ \(A^t=A \text { and } B^t=B\)]

= -(AB-BA).

This shows that (AB-BA) is skew-symmetric.

Example 10 Find the matrix A such that \(\left[\begin{array}{rr}
2 & -1 \\
1 & 0 \\
-3 & 4
\end{array}\right] \cdot A=\left[\begin{array}{rrr}
-1 & -8 & -10 \\
1 & -2 & -5 \\
9 & 22 & 15
\end{array}\right]\)

Solution

Clearly, the product is a (3×3) matrix and the prefactor is a (3×2) matrix. So, A must be a (2×3) matrix.

Let A = \(\left[\begin{array}{lll}
x & y & z \\
u & v & w
\end{array}\right]\)

Then, the given equation becomes

\(\left[\begin{array}{rr}
2 & -1 \\
1 & 0 \\
-3 & 4
\end{array}\right] \cdot\left[\begin{array}{lll}
x & y & z \\
u & v & w
\end{array}\right]=\left[\begin{array}{rrr}
-1 & -8 & -10 \\
1 & -2 & -5 \\
9 & 22 & 15
\end{array}\right]\)

⇒ \(\left[\begin{array}{ccc}
2 x-u & 2 y-v & 2 z-w \\
x & y & z \\
-3 x+4 u & -3 y+4 v & -3 z+4 w
\end{array}\right]=\left[\begin{array}{ccc}
-1 & -8 & -10 \\
1 & -2 & -5 \\
9 & 22 & 15
\end{array}\right]\)

⇒ 2x – u = -1, 2y – v = -8, 2z – w = -10, x = 1, y = -2, z = -5

⇒ x = 1, y = -2, z = -5, u = 3, v = 4 and w = 0.

Hence, A = \(\left[\begin{array}{rrr}
1 & -2 & -5 \\
3 & 4 & 0
\end{array}\right]\)

Scalar Multiplication

Let A be a given matrix and k be a nonzero real number. Then, the matrix obtained by multiplying each element of A by k is called the scalar multiple of A by k and it is denoted by kA.

Here, k is called a scalar.

If A is an (mxn) matrix then kA is also an (mxn) matrix.

If A = [aij]mxn then kA = [k.aij]mxn.

Example 1 If A = \(\left[\begin{array}{rrr}
5 & 6 & -4 \\
8 & -3 & 2
\end{array}\right]\), find (1) 3A, (2) -5A, (3) \(\frac{1}{2} A\).

Solution We have:

(1) 3A = \(\left[\begin{array}{ccc}
3 \times 5 & 3 \times 6 & 3 \times(-4) \\
3 \times 8 & 3 \times(-3) & 3 \times 2
\end{array}\right]=\left[\begin{array}{rrr}
15 & 18 & -12 \\
24 & -9 & 6
\end{array}\right]\)

(2) -5A = \(\left[\begin{array}{ccc}
(-5) \times 5 & (-5) \times 6 & (-5) \times(-4) \\
(-5) \times 8 & (-5) \times(-3) & (-5) \cdot 2
\end{array}\right]=\left[\begin{array}{rrr}
-25 & -30 & 20 \\
-40 & 15 & -10
\end{array}\right] \text {. }\)

(3) \(\frac{1}{2} A=\left[\begin{array}{lll}
\frac{1}{2} \times 5 & \frac{1}{2} \times 6 & \frac{1}{2} \times(-4) \\
\frac{1}{2} \times 8 & \frac{1}{2} \times(-3) & \frac{1}{2} \times 2
\end{array}\right]=\left[\begin{array}{rrr}
\frac{5}{2} & 3 & -2 \\
4 & -\frac{3}{2} & 1
\end{array}\right] \text {. }\)

Some Properties of Scalar Multiplication

Theorem 1 If A and B are two matrices of the same order and k is a scalar then prove that k(A+B) = kA + kB.

Proof Let A = [aij]mxn and B = [bij]mxn. Then,

k(A+B) = k.([aij]mxn + [bij]mxn)

= k.[aij + bij]mxn [by definition of addition of matrices]

= [k(aij+bij)]mxn [by definition of scalar multiplication]

= [k.aij + k.bij]mxn [by distributive law]

= kA + kB.

Hence, k(A + B) = kA + kB.

Theorem 2 If A is any matrix and k1, k2 are any scalars then prove that \(\left(k_1+k_2\right) A=k_1 A+k_2 A\).

Proof

Let A = [aij]mxn. Then,

\(\left(k_1+k_2\right) A=\left(k_1+k_2\right) \cdot\left[a_{i j}\right]_{m \times n}\)

= \(\left[\left(k_1+k_2\right) \cdot a_{i j}\right]_{m \times n}\) [by definition of scalar multiplication]

= \(\left[k_1 \cdot a_{i j}+k_2 \cdot a_{i j}\right]_{m \times n}\) [by distributive law]

= \(\left[k_1 \cdot a_{i j}\right]_{m \times n}+\left[k_2 \cdot a_{i j}\right]_{m \times n}\) [by definition of addition of matrices]

= \(k_1 A+k_2 A .\)

Hence, \(\left(k_1+k_2\right) A=k_1 A+k_2 A\).

Theorem 3 If A is any matrix and k1, k2 are any scalars then prove that \(k_1\left(k_2 A\right)=\left(k_1 k_2\right) A .\)

Proof

Let A = \(\left[a_{i j}\right]_{m \times n}\). Then,

\(k_1\left(k_2 A\right)=k_1\left[k_2 \cdot a_{i j}\right]_{m \times n}=\left[k_1\left(k_2 \cdot a_{i j}\right)\right]_{m \times n}\)

= \(\left[\left(k_1 k_2\right) \cdot a_{i j}\right]_{m \times n}\) [by associativity of multiplication in numbers]

= \(\left(k_1 k_2\right) \cdot\left[a_{i j}\right]_{m \times n}=\left(k_1 k_2\right) A .\)

Hence, \(k_1\left(k_2 A\right)=\left(k_1 k_2\right) A .\)

Theorem 4 Let A be a given matrix and let k be a scalar. Then, prove that \((k A)^t=k A^t .\)

Proof

Let A = [aij]mxn be a given matrix and let k be a scalar. Then,

A is an (mxn) matrix ⇒ kA is an (mxn) matrix

⇒ \((k A)^t\) is an (nxm) matrix.

Also, A is an (mxn) matrix ⇒ A^t is an (nxm) matrix

⇒ \(kA^t\) is an (nxm) matrix.

Thus, \((k A)^t\) and \(\left(k A^t\right)\) are comparable matrices.

Also, (j,i)th element of \((k A)^t\) = (i,j)th element of kA

= k times the (i,j)th element of A

= k times the (j,i)th element of \(A^t\)

= (j,i)th element of \(k A^t\).

Thus, \((k A)^t\) and \(k A^t\) are comparable matrices whose corresponding elements are equal.

Hence, \((k A)^t=k A^t .\)

Theorem 5 Let A and B be two matrices such that AB exists and let c be a nonzero scalar. Then, prove that c(A x B) = (cA) x B = A x (cB).

Proof

Let A = [aij]mxn and B = [bjk]nxp. Then,

A is an (mxn) matrix, B is an (nxp) matrix

⇒ (A x B) is an (m x p) matrix

⇒ c(A x B) is an (m x p) matrix.

Again, A is an (m x n) matrix, B is an (n x p) matrix

⇒ cA is an (m x n) matrix, B is an (n x p) matrix

⇒ (cA) x B is an (m x p) matrix.

∴ c(A x B) and (cA) x B are comparable matrices.

Now, (i,k)th element of c(A x B)

= c times the (i,k)th element of (A x B)

= \(c \cdot \sum_{j=1}^n a_{i j} b_{j k}\)

= \(c \cdot\left(a_{i 1} b_{1 k}+a_{i 2} b_{2 k}+\ldots+a_{i n} b_{n k}\right)\)

= \(\left(c a_{i 1}\right) b_{1 k}+\left(c a_{i 2}\right) b_{2 k}+\ldots+\left(c a_{i n}\right) b_{n k}\)

= (i,k)th element of (cA) x B.

Thus, c(A x B) and (cA) x B are comparable and their corresponding elements are equal.

Hence, c(A x B) = (cA) x B.

Again, A is an (m x n) matrix and B is an (n x p) matrix

⇒ A is an (m x n) matrix and cB is an (n x p) matrix

⇒ A x (cB) is an (m x p) matrix.

Thus, c(A x B) and A x (cB) are comparable.

Also, (i,k)th element of A x (cB)

= \(\sum_{j=1}^n a_{i j}\left(c b_{j k}\right)\)

= \(a_{i 1}\left(c b_{1 k}\right)+a_{i 2}\left(c b_{2 k}\right)+\ldots+a_{i n}\left(c b_{n k}\right)\)

= \(c\left(a_{i 1} b_{1 k}+a_{i 2} b_{2 k}+\ldots+a_{i n} b_{n k}\right)\)

= \(c \sum_{j=1}^n a_{i j} b_{j k}\) = (i,k)th element of c(A x B).

∴ c(A x B) = A x (cB)

Hence, c(A x B) = (cA) x B = A x (cB).
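Theorem 5 can be checked on any conformable pair of matrices; for instance (a NumPy sketch, assuming the library is available):

    import numpy as np

    c = 3
    A = np.array([[1, -2, 3], [-4, 2, 5]])
    B = np.array([[2, 3], [4, 5], [-2, 1]])
    print(np.array_equal(c * (A @ B), (c * A) @ B))   # True: c(A x B) = (cA) x B
    print(np.array_equal(c * (A @ B), A @ (c * B)))   # True: c(A x B) = A x (cB)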

Summary of properties of Matrix Operations

1. (1) \(\left(A^t\right)^t=A\)

(2) \((A+B)^t=\left(A^t+B^t\right) .\)

(3) \((A \times B)^t=\left(B^t \times A^t\right)\); in general, \((A \times B)^t \neq\left(A^t \times B^t\right)\).

2. (1) (A + B) = (B + A)

(2) (A + B) + C = A + (B + C)

(3) For every matrix A there exists a null matrix O of the same order such that A + O = O + A = A.

3. (1) In general, (A x B) ≠ (B x A).

(2) (A x B) x C = A x (B x C)

(3) A x (B + C) = (A x B) + (A x C)

(4) (A + B) x C = (A x C) + (B x C)

(5) For every square matrix A there exists a unit matrix I of the same order such that (A x I) = (I x A) = A.

4. (1) \((c A)^t=c A^t\).

(2) c(A + B) = (cA + cB)

(3) c(A x B) = (cA x B) = A x (cB).

(4) ∃ a matrix S such that St = S (called a symmetric matrix).

More Solved Examples

Example 2 If A = \(\left[\begin{array}{rr}
3 & 5 \\
7 & -9
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
6 & -4 \\
2 & 3
\end{array}\right]\), find (4A – 3B).

Solution

We have: 4A – 3B = 4A + (-3)B.

Now, \(4 A=\left[\begin{array}{ll}
4 \times 3 & 4 \times 5 \\
4 \times 7 & 4 \times(-9)
\end{array}\right]=\left[\begin{array}{rr}
12 & 20 \\
28 & -36
\end{array}\right]\)

and (-3)B = \(\left[\begin{array}{ll}
(-3) \times 6 & (-3) \times(-4) \\
(-3) \times 2 & (-3) \times 3
\end{array}\right]=\left[\begin{array}{rr}
-18 & 12 \\
-6 & -9
\end{array}\right] .\)

∴ 4A – 3B = 4A + (-3)B

= \(\left[\begin{array}{rr}
12 & 20 \\
28 & -36
\end{array}\right]+\left[\begin{array}{rr}
-18 & 12 \\
-6 & -9
\end{array}\right]\)

= \(\left[\begin{array}{cc}
12+(-18) & 20+12 \\
28+(-6) & (-36)+(-9)
\end{array}\right]=\left[\begin{array}{rr}
-6 & 32 \\
22 & -45
\end{array}\right] \text {. }\)

Hence, (4A – 3B) = \(\left[\begin{array}{rr}
-6 & 32 \\
22 & -45
\end{array}\right]\)

Example 3 Let A = diag[3,-5,7] and B = diag[-1, 2,4]. Find (1) (A + B), (2) (A – B), (3) -5A, (4) (2A + 3B).

Solution

We have

\(A=\left[\begin{array}{rrr}
3 & 0 & 0 \\
0 & -5 & 0 \\
0 & 0 & 7
\end{array}\right] \text { and } B=\left[\begin{array}{rrr}
-1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 4
\end{array}\right]\)

∴ (1) \(A+B=\left[\begin{array}{rrr}
3 & 0 & 0 \\
0 & -5 & 0 \\
0 & 0 & 7
\end{array}\right]+\left[\begin{array}{rrr}
-1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 4
\end{array}\right]=\left[\begin{array}{rrr}
2 & 0 & 0 \\
0 & -3 & 0 \\
0 & 0 & 11
\end{array}\right]\)

(2) (A – B) = A + (-B)

= \(\left[\begin{array}{rrr}
3 & 0 & 0 \\
0 & -5 & 0 \\
0 & 0 & 7
\end{array}\right]+\left[\begin{array}{rrr}
1 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & -4
\end{array}\right]=\left[\begin{array}{rrr}
4 & 0 & 0 \\
0 & -7 & 0 \\
0 & 0 & 3
\end{array}\right]\)

(3) -5A = (-5).A = \((-5) \cdot\left[\begin{array}{rrr}
3 & 0 & 0 \\
0 & -5 & 0 \\
0 & 0 & 7
\end{array}\right]=\left[\begin{array}{rrr}
-15 & 0 & 0 \\
0 & 25 & 0 \\
0 & 0 & -35
\end{array}\right]\)

(4) 2A + 3B = \(\left[\begin{array}{rrr}
6 & 0 & 0 \\
0 & -10 & 0 \\
0 & 0 & 14
\end{array}\right]+\left[\begin{array}{rrr}
-3 & 0 & 0 \\
0 & 6 & 0 \\
0 & 0 & 12
\end{array}\right]=\left[\begin{array}{rrr}
3 & 0 & 0 \\
0 & -4 & 0 \\
0 & 0 & 26
\end{array}\right]\)

Example 4 Simplify \(\cos \theta \cdot\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right]+\sin \theta \cdot\left[\begin{array}{rr}
\sin \theta & -\cos \theta \\
\cos \theta & \sin \theta
\end{array}\right] \text {. }\)

Solution

We have

\(\cos \theta \cdot\left[\begin{array}{rr}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{array}\right]+\sin \theta \cdot\left[\begin{array}{rr}
\sin \theta & -\cos \theta \\
\cos \theta & \sin \theta
\end{array}\right] \text {. }\)

= \(\left[\begin{array}{cc}
\cos ^2 \theta & \sin \theta \cos \theta \\
-\sin \theta \cos \theta & \cos ^2 \theta
\end{array}\right]+\left[\begin{array}{cc}
\sin ^2 \theta & -\sin \theta \cos \theta \\
\sin \theta \cos \theta & \sin ^2 \theta
\end{array}\right]\)

= \(\left[\begin{array}{cc}
\cos ^2 \theta+\sin ^2 \theta & \sin \theta \cos \theta+(-\sin \theta \cos \theta) \\
-\sin \theta \cos \theta+\sin \theta \cos \theta & \cos ^2 \theta+\sin ^2 \theta
\end{array}\right]\)

= \(\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] .\)

Example 5 If \(2\left[\begin{array}{cc}
x & 5 \\
7 & y-3
\end{array}\right]+\left[\begin{array}{ll}
3 & 4 \\
1 & 2
\end{array}\right]=\left[\begin{array}{rc}
7 & 14 \\
15 & 14
\end{array}\right]\), find the values of x and y.

Solution

We have

\(2\left[\begin{array}{cc}
x & 5 \\
7 & y-3
\end{array}\right]+\left[\begin{array}{ll}
3 & 4 \\
1 & 2
\end{array}\right]=\left[\begin{array}{rr}
7 & 14 \\
15 & 14
\end{array}\right]\)

⇒ \(\left[\begin{array}{cc}
2 x & 10 \\
14 & 2 y-6
\end{array}\right]+\left[\begin{array}{ll}
3 & 4 \\
1 & 2
\end{array}\right]=\left[\begin{array}{rc}
7 & 14 \\
15 & 14
\end{array}\right]\)

⇒ \(\left[\begin{array}{cc}
2 x+3 & 14 \\
15 & 2 y-4
\end{array}\right]=\left[\begin{array}{rr}
7 & 14 \\
15 & 14
\end{array}\right]\)

⇒ 2x + 3 = 7 and 2y – 4 = 14

⇒ x = 2 and y = 9.

Hence, the values of x and y are 2 and 9 respectively.

Example 6 Find matrix X such that \(X+\left[\begin{array}{rr}
4 & 6 \\
-3 & 7
\end{array}\right]=\left[\begin{array}{ll}
3 & -6 \\
5 & -8
\end{array}\right]\)

Solution

Let A = \(\left[\begin{array}{rr}
4 & 6 \\
-3 & 7
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
3 & -6 \\
5 & -8
\end{array}\right]\)

Then, the given matrix equation is X + A = B.

Now, X + A = B ⇒ X = B – A = B + (-A)

= \(\left[\begin{array}{rr}
3 & -6 \\
5 & -8
\end{array}\right]+\left[\begin{array}{rr}
-4 & -6 \\
3 & -7
\end{array}\right]\)

= \(\left[\begin{array}{ll}
3+(-4) & -6+(-6) \\
5+3 & -8+(-7)
\end{array}\right]=\left[\begin{array}{rr}
-1 & -12 \\
8 & -15
\end{array}\right] \text {. }\)

Hence, X = \(\left[\begin{array}{rr}
-1 & -12 \\
8 & -15
\end{array}\right]\)

Example 7 Find a matrix X such that 2A + B + X = O, where A = \(\left[\begin{array}{rr}
-1 & 2 \\
3 & 4
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
3 & -2 \\
1 & 5
\end{array}\right] \text {. }\)

Solution

We have

2A + B + X = O ⇒ X = -(2A + B).

Now, (2A + B) = \(2 \cdot\left[\begin{array}{rr}
-1 & 2 \\
3 & 4
\end{array}\right]+\left[\begin{array}{rr}
3 & -2 \\
1 & 5
\end{array}\right]\)

= \(\left[\begin{array}{rr}
-2 & 4 \\
6 & 8
\end{array}\right]+\left[\begin{array}{rr}
3 & -2 \\
1 & 5
\end{array}\right]\)

= \(\left[\begin{array}{rl}
-2+3 & 4+(-2) \\
6+1 & 8+5
\end{array}\right]=\left[\begin{array}{rr}
1 & 2 \\
7 & 13
\end{array}\right]\)

∴ X = -(2A + B) = \(\left[\begin{array}{cc}
-1 & -2 \\
-7 & -13
\end{array}\right]\)

Example 8 Find matrices X and Y, if X + Y = \(\left[\begin{array}{ll}
5 & 2 \\
0 & 9
\end{array}\right]\) and X – Y = \(\left[\begin{array}{rr}
3 & 6 \\
0 & -1
\end{array}\right]\).

Solution

Adding the given matrices, we get

(X + Y) + (X – Y) = \(\left[\begin{array}{ll}
5 & 2 \\
0 & 9
\end{array}\right]+\left[\begin{array}{rr}
3 & 6 \\
0 & -1
\end{array}\right]\)

⇒ \(2 X=\left[\begin{array}{ll}
5+3 & 2+6 \\
0+0 & 9+(-1)
\end{array}\right]\)

⇒ \(2 X=\left[\begin{array}{ll}
8 & 8 \\
0 & 8
\end{array}\right]\) ⇒ \(X=\frac{1}{2} \cdot\left[\begin{array}{ll}
8 & 8 \\
0 & 8
\end{array}\right]=\left[\begin{array}{ll}
4 & 4 \\
0 & 4
\end{array}\right]\)

On subtracting the given matrices, we get

(X + Y) – (X – Y)=\(\left[\begin{array}{ll}
5 & 2 \\
0 & 9
\end{array}\right]-\left[\begin{array}{rr}
3 & 6 \\
0 & -1
\end{array}\right]\)

⇒ \(2 Y=\left[\begin{array}{ll}
5-3 & 2-6 \\
0-0 & 9-(-1)
\end{array}\right]=\left[\begin{array}{ll}
2 & -4 \\
0 & 10
\end{array}\right]\)

⇒ \(Y=\frac{1}{2}\left[\begin{array}{rr}
2 & -4 \\
0 & 10
\end{array}\right]=\left[\begin{array}{rr}
1 & -2 \\
0 & 5
\end{array}\right] .\)

Hence, X = \(\left[\begin{array}{ll}
4 & 4 \\
0 & 4
\end{array}\right]\) and Y = \(\left[\begin{array}{rr}
1 & -2 \\
0 & 5
\end{array}\right]\).
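The same X and Y follow from X = ½[(X + Y) + (X – Y)] and Y = ½[(X + Y) – (X – Y)], which a brief NumPy computation (assuming it is installed) confirms:

    import numpy as np

    S = np.array([[5, 2], [0, 9]])    # X + Y
    D = np.array([[3, 6], [0, -1]])   # X - Y
    X = (S + D) / 2
    Y = (S - D) / 2
    print(X)   # [[4. 4.] [0. 4.]]
    print(Y)   # [[ 1. -2.] [ 0.  5.]]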

Matrix Polynomial Let f(x) = \(a_0 x^m+a_1 x^{m-1}+a_2 x^{m-2}+\ldots+a_{m-1} x+a_m\) be a polynomial of degree m and let A be a square matrix of order n. Then, the corresponding matrix polynomial is:

f(A) = \(a_0 A^m+a_1 A^{m-1}+a_2 A^{m-2}+\ldots+a_{m-1} A+a_m I\), where I is a unit matrix of order n.

Example 9 If f(x) = \(x^2-5 x+7\) and A = \(\left[\begin{array}{rr}
3 & 1 \\
-1 & 2
\end{array}\right]\), find f(A).

Solution

f(x) = \(x^2-5 x+7\) ⇒ f(A) = \(A^2-5 A+7 I\)

Now, \(A^2=\left[\begin{array}{rr}
3 & 1 \\
-1 & 2
\end{array}\right]\left[\begin{array}{rr}
3 & 1 \\
-1 & 2
\end{array}\right]\)

= \(\left[\begin{array}{rr}
9-1 & 3+2 \\
-3-2 & -1+4
\end{array}\right]=\left[\begin{array}{rr}
8 & 5 \\
-5 & 3
\end{array}\right]\)

-5A = \(\left[\begin{array}{ll}
(-5) \cdot 3 & (-5) \cdot 1 \\
(-5) \cdot(-1) & (-5) \cdot 2
\end{array}\right]=\left[\begin{array}{rr}
-15 & -5 \\
5 & -10
\end{array}\right]\)

7I = \(7 \cdot\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=\left[\begin{array}{ll}
7 & 0 \\
0 & 7
\end{array}\right] .\)

∴ f(A) = \(A^2-5 A+7 I\)

= \(\left[\begin{array}{rr}
8 & 5 \\
-5 & 3
\end{array}\right]+\left[\begin{array}{rr}
-15 & -5 \\
5 & -10
\end{array}\right]+\left[\begin{array}{ll}
7 & 0 \\
0 & 7
\end{array}\right]\)

= \(\left[\begin{array}{cc}
8+(-15)+7 & 5+(-5)+0 \\
-5+5+0 & 3+(-10)+7
\end{array}\right]=\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]\)

Hence, f(A) = \(\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]=O .\)

Example 10 If A = \(\left[\begin{array}{rr}
3 & -5 \\
-4 & 2
\end{array}\right]\), show that \(A^2-5 A-14 I=O .\)

Solution

We have

\(A^2=\left[\begin{array}{rr}
3 & -5 \\
-4 & 2
\end{array}\right]\left[\begin{array}{rr}
3 & -5 \\
-4 & 2
\end{array}\right]\)

= \(\left[\begin{array}{cc}
3 \cdot 3+(-5)(-4) & 3 \cdot(-5)+(-5) \cdot 2 \\
(-4) \cdot 3+2 \cdot(-4) & (-4) \cdot(-5)+2 \cdot 2
\end{array}\right]=\left[\begin{array}{rr}
29 & -25 \\
-20 & 24
\end{array}\right]\)

-5A = \((-5)\left[\begin{array}{rr}
3 & -5 \\
-4 & 2
\end{array}\right]=\left[\begin{array}{rr}
-15 & 25 \\
20 & -10
\end{array}\right] \text {; }\)

-14I = \((-14)\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=\left[\begin{array}{rr}
-14 & 0 \\
0 & -14
\end{array}\right] \text {. }\)

∴ \(A^2-5 A-14 I=A^2+(-5) A+(-14 I)\)

= \(\left[\begin{array}{rr}
29 & -25 \\
-20 & 24
\end{array}\right]+\left[\begin{array}{rr}
-15 & 25 \\
20 & -10
\end{array}\right]+\left[\begin{array}{rr}
-14 & 0 \\
0 & -14
\end{array}\right]\)

= \(\left[\begin{array}{cc}
29+(-15)+(-14) & -25+25+0 \\
-20+20+0 & 24+(-10)+(-14)
\end{array}\right]\)

= \(\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]=O \text {. }\)

Hence, \(A^2-5 A-14 I=O .\)

Example 11 If A = \(\left[\begin{array}{rr}
1 & 0 \\
-1 & 7
\end{array}\right]\), find k so that \(A^2=8 A+k I\).

Solution

We have

\(A^2=\left[\begin{array}{rr}
1 & 0 \\
-1 & 7
\end{array}\right]\left[\begin{array}{rr}
1 & 0 \\
-1 & 7
\end{array}\right]=\left[\begin{array}{rl}
1-0 & 0+0 \\
-1-7 & 0+49
\end{array}\right]=\left[\begin{array}{rr}
1 & 0 \\
-8 & 49
\end{array}\right]\)

(8A + kI) = \(8 \cdot\left[\begin{array}{rr}
1 & 0 \\
-1 & 7
\end{array}\right]+k \cdot\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]\)

= \(\left[\begin{array}{rr}
8 & 0 \\
-8 & 56
\end{array}\right]+\left[\begin{array}{ll}
k & 0 \\
0 & k
\end{array}\right]=\left[\begin{array}{cc}
8+k & 0 \\
-8 & 56+k
\end{array}\right]\)

∴ \(A^2=8 A+k I\) ⇒ \(\left[\begin{array}{rr}
1 & 0 \\
-8 & 49
\end{array}\right]=\left[\begin{array}{cc}
8+k & 0 \\
-8 & 56+k
\end{array}\right]\)

⇒ 8 + k = 1 and 56 + k = 49

⇒ k = -7.

Hence, k = -7.
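A quick NumPy check (illustrative only) that A² - 8A is indeed -7I:

```python
import numpy as np

# Example 11: A^2 - 8A should equal -7I, i.e. k = -7
A = np.array([[1, 0], [-1, 7]])
print(A @ A - 8 * A)   # expected: [[-7  0]
                       #            [ 0 -7]]
```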

Example 12 If A = \(\left[\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right]\), prove that for all n ∈ N, \((a I+b A)^n=a^n I+n a^{n-1} b A\), where I is the identity matrix of order 2.

Solution

We shall prove the result by mathematical induction.

When n = 1, we have:

LHS = \((a I+b A)^1=(a I+b A)=\left(a^1 I+1 a^0 b A\right)\) = RHS.

So, the result is true for n = 1.

Let it be true for n = m, so that

\((a I+b A)^m=a^m I+m a^{m-1} b A\) …(1)

∴ \((a I+b A)^{m+1}\)

= \((a I+b A) \cdot(a I+b A)^m=(a I+b A) \cdot\left(a^m I+m a^{m-1} b A\right)\) [using(1)]

= \(a I\left(a^m I+m a^{m-1} b A\right)+b A\left(a^m I+m a^{m-1} b A\right)\)

= \(a^{m+1} I+m a^m b A+a^m b A+m a^{m-1} b^2 A^2\) [∵ II = I, IA = A = AI]

= \(a^{m+1} I+(m+1) a^m b A\) [∵ \(A^2=\left[\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right] \times\left[\begin{array}{ll}
0 & 1 \\
0 & 0
\end{array}\right]=\left[\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}\right]=O]\).

Thus, whenever the result is true for n = m, then it is also true for n = (m + 1).

Hence, by mathematical induction, it is true for all n ∈ N.
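A numerical spot-check of the result for sample values of a, b and n (illustrative only, assuming NumPy):

```python
import numpy as np

# Spot-check (aI + bA)^n = a^n I + n a^(n-1) b A for sample values a, b, n
a, b, n = 2.0, 3.0, 5
I = np.eye(2)
A = np.array([[0.0, 1.0], [0.0, 0.0]])

lhs = np.linalg.matrix_power(a * I + b * A, n)
rhs = a**n * I + n * a**(n - 1) * b * A
print(np.allclose(lhs, rhs))   # expected: True
```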

Theorem 6 If A is a symmetric matrix, then prove that kA is symmetric.

Proof Since A is symmetric, we have \(A^t=A .\)

∴ \((k A)^t=k A^t=k A\) [∵ \(A^t=A\)].

Hence, (kA) is symmetric.

Theorem 7 If A is a skew-symmetric matrix, then prove that kA is skew-symmetric.

Proof Since A is skew-symmetric, we have \(A^t=-A .\)

∴ \((k A)^t=k \cdot A^t=k \cdot(-A)=-(k A)\) [∵ \(A^t=-A\)].

Hence, (kA) is skew-symmetric.

Theorem 8 Prove that every square matrix is expressible as the sum of a symmetric matrix and a skew-symmetric matrix.

Proof Let A be any square matrix. Then, we can write

\(A=\frac{1}{2}\left(A+A^t\right)+\frac{1}{2}\left(A-A^t\right)=P+Q \text { (say). }\)

Then, it is easy to verify that P is symmetric and Q is skew-symmetric.

Hence, the theorem follows.
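The decomposition of Theorem 8 is easy to compute numerically. A minimal sketch, assuming NumPy; the helper name symmetric_skew_parts is ours, and the test matrix is the one used in Example 13 below.

```python
import numpy as np

def symmetric_skew_parts(A):
    """Split a square matrix into P (symmetric) + Q (skew-symmetric)."""
    P = (A + A.T) / 2
    Q = (A - A.T) / 2
    return P, Q

# The matrix used in Example 13 below
A = np.array([[3, -4], [1, -1]])
P, Q = symmetric_skew_parts(A)
print(np.allclose(P, P.T))       # True: P is symmetric
print(np.allclose(Q, -Q.T))      # True: Q is skew-symmetric
print(np.allclose(P + Q, A))     # True: A = P + Q
```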

Example 13 Express the matrix A = \(\left[\begin{array}{ll}
3 & -4 \\
1 & -1
\end{array}\right]\) as the sum of a symmetric matrix and a skew-symmetric matrix.

Solution

We know that \(A=\frac{1}{2}\left(A+A^t\right)+\frac{1}{2}\left(A-A^t\right)=P+Q\), where P is symmetric and Q is skew-symmetric.

∴ \(P=\frac{1}{2}\left(A+A^t\right)=\frac{1}{2} \cdot\left\{\left[\begin{array}{ll}
3 & -4 \\
1 & -1
\end{array}\right]+\left[\begin{array}{rr}
3 & 1 \\
-4 & -1
\end{array}\right]\right\}\)

= \(\frac{1}{2} \cdot\left[\begin{array}{cc}
3+3 & -4+1 \\
1+(-4) & -1+(-1)
\end{array}\right]=\frac{1}{2} \cdot\left[\begin{array}{rr}
6 & -3 \\
-3 & -2
\end{array}\right]=\left[\begin{array}{cc}
3 & \frac{-3}{2} \\
\frac{-3}{2} & -1
\end{array}\right]\)

And, \(Q=\frac{1}{2}\left(A-A^t\right)=\frac{1}{2} \cdot\left\{\left[\begin{array}{ll}
3 & -4 \\
1 & -1
\end{array}\right]-\left[\begin{array}{rr}
3 & 1 \\
-4 & -1
\end{array}\right]\right\}\)

= \(\frac{1}{2} \cdot\left[\begin{array}{cc}
3-3 & -4-1 \\
1-(-4) & -1-(-1)
\end{array}\right]=\frac{1}{2} \cdot\left[\begin{array}{cc}
0 & -5 \\
1+4 & -1+1
\end{array}\right]\)

= \(\frac{1}{2} \cdot\left[\begin{array}{rr}
0 & -5 \\
5 & 0
\end{array}\right]=\left[\begin{array}{rr}
0 & -\frac{5}{2} \\
\frac{5}{2} & 0
\end{array}\right] \text {. }\)

Hence, A = P + Q, where P is symmetric and Q is skew-symmetric.

Example 14 Express the matrix A = \(\left[\begin{array}{rrr}
1 & 3 & 5 \\
-6 & 8 & 3 \\
-4 & 6 & 5
\end{array}\right]\) as the sum of a symmetric matrix and a skew-symmetric matrix.

Solution

We know that \(A=\frac{1}{2}\left(A+A^t\right)+\frac{1}{2}\left(A-A^t\right)=P+Q\), where P is symmetric and Q is skew-symmetric.

Now, \(P=\frac{1}{2}\left(A+A^t\right)\)

= \(\frac{1}{2} \cdot\left\{\left[\begin{array}{rrr}
1 & 3 & 5 \\
-6 & 8 & 3 \\
-4 & 6 & 5
\end{array}\right]+\left[\begin{array}{rrr}
1 & -6 & -4 \\
3 & 8 & 6 \\
5 & 3 & 5
\end{array}\right]\right\}\)

= \(\frac{1}{2} \cdot\left[\begin{array}{rrr}
2 & -3 & 1 \\
-3 & 16 & 9 \\
1 & 9 & 10
\end{array}\right]=\left[\begin{array}{rrr}
1 & -\frac{3}{2} & \frac{1}{2} \\
-\frac{3}{2} & 8 & \frac{9}{2} \\
\frac{1}{2} & \frac{9}{2} & 5
\end{array}\right]\)

And, \(Q=\frac{1}{2}\left(A-A^t\right)\)

= \(\frac{1}{2} \cdot\left\{\left[\begin{array}{rrr}
1 & 3 & 5 \\
-6 & 8 & 3 \\
-4 & 6 & 5
\end{array}\right]-\left[\begin{array}{rrr}
1 & -6 & -4 \\
3 & 8 & 6 \\
5 & 3 & 5
\end{array}\right]\right\}\)

= \(\frac{1}{2} \cdot\left[\begin{array}{rrr}
0 & 9 & 9 \\
-9 & 0 & -3 \\
-9 & 3 & 0
\end{array}\right]=\left[\begin{array}{rrr}
0 & \frac{9}{2} & \frac{9}{2} \\
-\frac{9}{2} & 0 & -\frac{3}{2} \\
-\frac{9}{2} & \frac{3}{2} & 0
\end{array}\right]\)

Hence, A = P + Q, where P is symmetric and Q is skew-symmetric.

Elementary Operations on Matrices

Given below are three row operations and three column operations on a matrix, which are called elementary operations or transformations.

Equivalent Matrices Two matrices A and B are said to be equivalent if one is obtained from the other by one or more elementary operations and we write, A ~ B.

Three Elementary Row Operations

(1) Interchange of any two rows The interchange of ith and jth rows is denoted by Ri ⟷ Rj.

Example Let A = \(\left[\begin{array}{rrr}
3 & 2 & -1 \\
\sqrt{2} & 4 & 6 \\
5 & -3 & 7
\end{array}\right]\)

Applying R2 ⟷ R3, we get A ~ \(\left[\begin{array}{rrr}
3 & 2 & -1 \\
5 & -3 & 7 \\
\sqrt{2} & 4 & 6
\end{array}\right] \text {. }\)

(2) Multiplication of the elements of a row by a nonzero number Suppose each element of ith row of a given matrix is multiplied by a nonzero number k.

Then, we denote it by Ri → kRi.

Example Let A = \(\left[\begin{array}{rrr}
3 & 2 & -1 \\
\sqrt{3} & -5 & 6 \\
1 & 8 & 4
\end{array}\right]\)

Applying R2 → 4R2, we get A ~ \(\left[\begin{array}{rrr}
3 & 2 & -1 \\
4 \sqrt{3} & -20 & 24 \\
1 & 8 & 4
\end{array}\right]\)

(3) Multiplying each element of a row by a nonzero number and then adding them to the corresponding elements of another row Suppose each element of jth row of a matrix A is multiplied by a nonzero number k and then added to the corresponding elements of ith row.

We denote it by Ri → Ri + kRj.

Example Let A = \(\left[\begin{array}{rrr}
2 & -1 & 5 \\
-3 & 4 & \sqrt{2} \\
7 & 6 & 3
\end{array}\right]\)

Applying R1 → R1 + 2R3, we get A ~ \(\left[\begin{array}{rrr}
16 & 11 & 11 \\
-3 & 4 & \sqrt{2} \\
7 & 6 & 3
\end{array}\right]\)
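For readers who want to experiment, the same three row operations can be mimicked on NumPy arrays. This is only an illustrative sketch: all three operations are applied to the matrix of the last example, so only the third result matches the text.

```python
import numpy as np

# Each operation is applied to a fresh copy of the matrix from the last
# example above (NumPy rows are 0-indexed)
A = np.array([[2.0, -1.0, 5.0],
              [-3.0, 4.0, np.sqrt(2)],
              [7.0, 6.0, 3.0]])

B = A.copy()
B[[1, 2]] = B[[2, 1]]        # R2 <-> R3

C = A.copy()
C[1] = 4 * C[1]              # R2 -> 4 R2

D = A.copy()
D[0] = D[0] + 2 * D[2]       # R1 -> R1 + 2 R3
print(D[0])                  # expected: [16. 11. 11.], as in the example above
```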

Three Elementary Column Operations

(1) Interchange of any two columns The interchange of ith and jth columns is denoted by \(C_i \leftrightarrow C_j .\)

Example Let A = \(\left[\begin{array}{rrr}
2 & 1 & -3 \\
-1 & 5 & 4 \\
6 & 3 & \frac{1}{2}
\end{array}\right]\)

Applying \(C_1 \leftrightarrow C_2\), we get A ~ \(\left[\begin{array}{rrr}
1 & 2 & -3 \\
5 & -1 & 4 \\
3 & 6 & \frac{1}{2}
\end{array}\right] \text {. }\)

(2) Multiplying each element of a column by a nonzero number Suppose each element of ith column of matrix A is multiplied by a nonzero number k.

Then, we write, \(C_i \rightarrow k C_i\).

Example Let A = \(\left[\begin{array}{rrr}
3 & 1 & -5 \\
\sqrt{2} & -2 & 4 \\
6 & 2 & 8
\end{array}\right]\)

Applying \(C_3 \rightarrow 2 C_3\), we get A ~ \(\left[\begin{array}{rrr}
3 & 1 & -10 \\
\sqrt{2} & -2 & 8 \\
6 & 2 & 16
\end{array}\right]\)

(3) Multiplying each element of a column of a given matrix A by a nonzero number and then adding to the corresponding elements of another column Suppose each element of the jth column of a given matrix A is multiplied by a nonzero number k and then added to the corresponding elements of the ith column.

Then we write, \(C_i \rightarrow C_i+k C_j .\)

Example Let A = \(\left[\begin{array}{rrr}
2 & 0 & 4 \\
-1 & 3 & 1 \\
5 & -2 & 6
\end{array}\right]\)

Applying \(C_3 \rightarrow C_3+2 C_1\), we get A ~ \(\left[\begin{array}{rrr}
2 & 0 & 8 \\
-1 & 3 & -1 \\
5 & -2 & 16
\end{array}\right]\)

Invertible Matrices A square matrix A of order n is said to be invertible if there exists a square matrix B of order n such that AB = BA = I.

Also, then B is called the inverse of A and we write, \(A^{-1}=B\).

Example Let A = \(\left[\begin{array}{ll}
3 & 5 \\
1 & 2
\end{array}\right]\) and B = \(\left[\begin{array}{rr}
2 & -5 \\
-1 & 3
\end{array}\right]\). Then,

AB = \(\left[\begin{array}{ll}
3 & 5 \\
1 & 2
\end{array}\right]\left[\begin{array}{rr}
2 & -5 \\
-1 & 3
\end{array}\right]=\left[\begin{array}{rr}
6-5 & -15+15 \\
2-2 & -5+6
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=I\),

BA = \(\left[\begin{array}{rr}
2 & -5 \\
-1 & 3
\end{array}\right]\left[\begin{array}{ll}
3 & 5 \\
1 & 2
\end{array}\right]=\left[\begin{array}{rr}
6-5 & 10-10 \\
-3+3 & -5+6
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=I\)

∴ AB = BA = I.

Hence, \(A^{-1}=B\).
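A quick NumPy check of the two products above (purely illustrative):

```python
import numpy as np

# Check AB = BA = I for the pair used above
A = np.array([[3, 5], [1, 2]])
B = np.array([[2, -5], [-1, 3]])
print(A @ B)   # expected: [[1 0] [0 1]]
print(B @ A)   # expected: [[1 0] [0 1]]
```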

Theorem 1 (Uniqueness of Inverse) Every invertible square matrix has a unique inverse.

Proof

Let A be an invertible square matrix of order n.

If possible, let B as well as C be the inverse of A.

Then, AB = BA = I and AC = CA = I.

Now, AC = I ⇒ B(AC) = B.I = B,

BA = I ⇒ (BA)C = I.C = C.

But, B(AC) = (BA)C [by associative law of multiplication]

∴ B = C.

Hence, an invertible matrix has a unique inverse.

Inverse Of A Matrix By Elementary Row Operations

Let A be a square matrix of order n.

We can write, A = I.A …(1)

Now, let a sequence of elementary row operations reduce A on LHS of (1) to I and I on RHS of (1) to a matrix B.

Then, I = BA ⇒ \(I \cdot A^{-1}=(B A) A^{-1}=B\left(A A^{-1}\right)=B I\)

⇒ \(A^{-1}=B\).

We can summarise the above method as given below.

Method Step 1. Write A = I.A.

Step 2. By using elementary row operations on A, transform it into a unit matrix.

Step 3. Apply the same elementary row operations, in the same order, to I so that it is converted into a matrix B.

Step 4. Then, \(A^{-1}=B\).

Remark If on applying one or more elementary row operations on A, we obtain all zeros in one or more rows, then we say that \(A^{-1}\) does not exist.
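The method above is, in effect, Gauss-Jordan elimination on the augmented matrix [A | I]. A minimal NumPy sketch of the idea is given below; the function name invert_by_row_ops is ours, and it adds row interchanges to secure a nonzero pivot (partial pivoting), which the hand computation does not always need.

```python
import numpy as np

def invert_by_row_ops(A, eps=1e-12):
    """Invert A by elementary row operations on the augmented matrix [A | I].

    Returns None when no nonzero pivot is available, i.e. A is not invertible.
    (Helper name and pivoting strategy are ours, not the textbook's.)
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])     # [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < eps:
            return None                             # zero pivot column: no inverse
        M[[col, pivot]] = M[[pivot, col]]           # R_col <-> R_pivot
        M[col] /= M[col, col]                       # R_col -> (1/pivot) R_col
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]          # R_r -> R_r - k R_col
    return M[:, n:]                                 # the right half is A^(-1)

A = np.array([[1, 3, -2], [-3, 0, -5], [2, 5, 0]])
print(invert_by_row_ops(A))   # expected: the inverse found in Example 4 below
```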

Solved Examples

Example 1 By using elementary row operations, find the inverse of the matrix A = \(\left[\begin{array}{ll}
1 & -2 \\
2 & -6
\end{array}\right]\)

Solution

We have

\(\left[\begin{array}{cc}
1 & -2 \\
2 & -6
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{ll}
1 & -2 \\
0 & -2
\end{array}\right]=\left[\begin{array}{rr}
1 & 0 \\
-2 & 1
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow R_2-2 R_1\right]\)

⇒ \(\left[\begin{array}{rr}
1 & -2 \\
0 & 1
\end{array}\right]=\left[\begin{array}{cc}
1 & 0 \\
1 & \frac{-1}{2}
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow\left(\frac{-1}{2}\right) R_2\right]\)

⇒ \(\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=\left[\begin{array}{rr}
3 & -1 \\
1 & -\frac{1}{2}
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow R_1+2 R_2\right] .\)

Hence, \(A^{-1}=\left[\begin{array}{rr}
3 & -1 \\
1 & -\frac{1}{2}
\end{array}\right]\)

Example 2 By using elementary row operations, find the inverse of the matrix A = \(\left[\begin{array}{rr}
3 & -1 \\
-4 & 2
\end{array}\right]\)

Solution

We have

A = \(\left[\begin{array}{rr}
3 & -1 \\
-4 & 2
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{cc}
-1 & 1 \\
-4 & 2
\end{array}\right]=\left[\begin{array}{ll}
1 & 1 \\
0 & 1
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow R_1+R_2\right]\)

⇒ \(\left[\begin{array}{rr}
1 & -1 \\
-4 & 2
\end{array}\right]=\left[\begin{array}{rr}
-1 & -1 \\
0 & 1
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow(-1) \cdot R_1\right]\)

⇒ \(\left[\begin{array}{ll}
1 & -1 \\
0 & -2
\end{array}\right]=\left[\begin{array}{cc}
-1 & -1 \\
-4 & -3
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow R_2+4 R_1\right]\)

⇒ \(\left[\begin{array}{rr}
1 & -1 \\
0 & 1
\end{array}\right]=\left[\begin{array}{rr}
-1 & -1 \\
2 & \frac{3}{2}
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow\left(\frac{-1}{2}\right) R_2\right]\)

⇒ \(\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right]=\left[\begin{array}{ll}
1 & \frac{1}{2} \\
2 & \frac{3}{2}
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow R_1+R_2\right] .\)

Hence, \(A^{-1}=\left[\begin{array}{ll}
1 & \frac{1}{2} \\
2 & \frac{3}{2}
\end{array}\right] \text {. }\)

Example 3 If A = \(\left[\begin{array}{rr}
6 & -3 \\
-2 & 1
\end{array}\right]\), show that \(A^{-1}\) does not exist.

Solution

We have

\(\left[\begin{array}{rr}
6 & -3 \\
-2 & 1
\end{array}\right]=\left[\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{rr}
1 & -\frac{1}{2} \\
-2 & 1
\end{array}\right]=\left[\begin{array}{ll}
\frac{1}{6} & 0 \\
0 & 1
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow \frac{1}{6} R_1\right]\)

⇒ \(\left[\begin{array}{rr}
1 & -\frac{1}{2} \\
0 & 0
\end{array}\right]=\left[\begin{array}{rr}
\frac{1}{6} & 0 \\
\frac{1}{3} & 1
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow R_2+2 R_1\right]\)

Thus, we have all zeros in second row of the left-hand side matrix. Hence, \(A^{-1}\) does not exist.
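As a numerical cross-check (assuming NumPy), note that the rows of A are proportional, so its determinant is zero and no inverse can exist:

```python
import numpy as np

# Example 3: the rows of A are proportional, so det(A) = 0 and A is not invertible
A = np.array([[6, -3], [-2, 1]])
print(np.linalg.det(A))   # expected: 0.0 (up to floating-point rounding)
```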

Example 4 By using elementary row operations, find the inverse of the matrix A = \(\left[\begin{array}{rrr}
1 & 3 & -2 \\
-3 & 0 & -5 \\
2 & 5 & 0
\end{array}\right]\)

Solution

We have

\(\left[\begin{array}{rrr}
1 & 3 & -2 \\
-3 & 0 & -5 \\
2 & 5 & 0
\end{array}\right]=\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{rrr}
1 & 3 & -2 \\
0 & 9 & -11 \\
0 & -1 & 4
\end{array}\right]=\left[\begin{array}{rrr}
1 & 0 & 0 \\
3 & 1 & 0 \\
-2 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_2 \rightarrow R_2+3 R_1 \\
R_3 \rightarrow R_3-2 R_1
\end{array}\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 3 & -2 \\
0 & -1 & 4 \\
0 & 9 & -11
\end{array}\right]=\left[\begin{array}{rrr}
1 & 0 & 0 \\
-2 & 0 & 1 \\
3 & 1 & 0
\end{array}\right] \cdot A\)

\(\left[R_2 \leftrightarrow R_3\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 0 & 10 \\
0 & -1 & 4 \\
0 & 0 & 25
\end{array}\right]=\left[\begin{array}{rrr}
-5 & 0 & 3 \\
-2 & 0 & 1 \\
-15 & 1 & 9
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_1 \rightarrow R_1+3 R_2 \\
R_3 \rightarrow R_3+9 R_2
\end{array}\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 0 & 10 \\
0 & 1 & -4 \\
0 & 0 & 25
\end{array}\right]=\left[\begin{array}{rrr}
-5 & 0 & 3 \\
2 & 0 & -1 \\
-15 & 1 & 9
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow(-1) \cdot R_2\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 0 & 10 \\
0 & 1 & -4 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
-5 & 0 & 3 \\
2 & 0 & -1 \\
\frac{-3}{5} & \frac{1}{25} & \frac{9}{25}
\end{array}\right] \cdot A\)

\(\left[R_3 \rightarrow \frac{1}{25} R_3\right]\)

⇒ \(\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
1 & \frac{-2}{5} & \frac{-3}{5} \\
\frac{-2}{5} & \frac{4}{25} & \frac{11}{25} \\
\frac{-3}{5} & \frac{1}{25} & \frac{9}{25}
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_1 \rightarrow R_1-10 R_3 \\
R_2 \rightarrow R_2+4 R_3
\end{array}\right]\)

Hence, \(A^{-1}=\left[\begin{array}{ccc}
1 & \frac{-2}{5} & \frac{-3}{5} \\
\frac{-2}{5} & \frac{4}{25} & \frac{11}{25} \\
\frac{-3}{5} & \frac{1}{25} & \frac{9}{25}
\end{array}\right] \text {. }\)

Example 5 By using elementary row operations, find the inverse of the matrix A = \(\left[\begin{array}{rrr}
3 & -1 & -2 \\
2 & 0 & -1 \\
3 & -5 & 0
\end{array}\right] .\)

Solution

We have

\(\left[\begin{array}{rrr}
3 & -1 & -2 \\
2 & 0 & -1 \\
3 & -5 & 0
\end{array}\right]=\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{rrr}
1 & -1 & -1 \\
2 & 0 & -1 \\
3 & -5 & 0
\end{array}\right]=\left[\begin{array}{rrr}
1 & -1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[R_1 \rightarrow R_1-R_2\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & -1 & -1 \\
0 & 2 & 1 \\
0 & -2 & 3
\end{array}\right]=\left[\begin{array}{rrr}
1 & -1 & 0 \\
-2 & 3 & 0 \\
-3 & 3 & 1
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_2 \rightarrow R_2-2 R_1 \\
R_3 \rightarrow R_3-3 R_1
\end{array}\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & -1 & -1 \\
0 & 1 & \frac{1}{2} \\
0 & -2 & 3
\end{array}\right]=\left[\begin{array}{rrr}
1 & -1 & 0 \\
-1 & \frac{3}{2} & 0 \\
-3 & 3 & 1
\end{array}\right] \cdot A\)

\(\left\{R_2 \rightarrow \frac{1}{2} R_2\right\}\)

⇒ \(\left[\begin{array}{ccc}
1 & 0 & \frac{-1}{2} \\
0 & 1 & \frac{1}{2} \\
0 & 0 & 4
\end{array}\right]=\left[\begin{array}{rrr}
0 & \frac{1}{2} & 0 \\
-1 & \frac{3}{2} & 0 \\
-5 & 6 & 1
\end{array}\right] \cdot A\)

\(\left\{\begin{array}{l}
R_1 \rightarrow R_1+R_2 \\
R_3 \rightarrow R_3+2 R_2
\end{array}\right\}\)

⇒ \(\left[\begin{array}{ccc}
1 & 0 & \frac{-1}{2} \\
0 & 1 & \frac{1}{2} \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{rrr}
0 & \frac{1}{2} & 0 \\
-1 & \frac{3}{2} & 0 \\
\frac{-5}{4} & \frac{3}{2} & \frac{1}{4}
\end{array}\right] \cdot A\)

\(\left\{R_3 \rightarrow \frac{1}{4} R_3\right\}\)

⇒ \(\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{ccc}
\frac{-5}{8} & \frac{5}{4} & \frac{1}{8} \\
\frac{-3}{8} & \frac{3}{4} & \frac{-1}{8} \\
\frac{-5}{4} & \frac{3}{2} & \frac{1}{4}
\end{array}\right] \cdot A\)

\(\left\{\begin{array}{l}
R_1 \rightarrow R_1+\frac{1}{2} R_3 \\
R_2 \rightarrow R_2-\frac{1}{2} R_3
\end{array}\right\} .\)

Hence, \(A^{-1}=\left[\begin{array}{ccc}
\frac{-5}{8} & \frac{5}{4} & \frac{1}{8} \\
\frac{-3}{8} & \frac{3}{4} & \frac{-1}{8} \\
\frac{-5}{4} & \frac{3}{2} & \frac{1}{4}
\end{array}\right] \text {. }\)

Example 6 By using elementary row transformations, find the inverse of the matrix A = \(\left[\begin{array}{rrr}
2 & 0 & -1 \\
5 & 1 & 0 \\
0 & 1 & 3
\end{array}\right]\)

Solution

We have

\(\left[\begin{array}{rrr}
2 & 0 & -1 \\
5 & 1 & 0 \\
0 & 1 & 3
\end{array}\right]=\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{rrr}
2 & 0 & -1 \\
1 & 1 & 2 \\
0 & 1 & 3
\end{array}\right]=\left[\begin{array}{rrr}
1 & 0 & 0 \\
-2 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow R_2-2 R_1\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 1 & 2 \\
2 & 0 & -1 \\
0 & 1 & 3
\end{array}\right]=\left[\begin{array}{rrr}
-2 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[R_1 \leftrightarrow R_2\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 1 & 2 \\
0 & -2 & -5 \\
0 & 1 & 3
\end{array}\right]=\left[\begin{array}{rrr}
-2 & 1 & 0 \\
5 & -2 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[R_2 \rightarrow R_2-2 R_1\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 1 & 2 \\
0 & 1 & 3 \\
0 & -2 & -5
\end{array}\right]=\left[\begin{array}{rrr}
-2 & 1 & 0 \\
0 & 0 & 1 \\
5 & -2 & 0
\end{array}\right] \cdot A\)

\(\left[R_2 \leftrightarrow R_3\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & 0 & -1 \\
0 & 1 & 3 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{rrr}
-2 & 1 & -1 \\
0 & 0 & 1 \\
5 & -2 & 2
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_1 \rightarrow R_1-R_2 \\
R_3 \rightarrow R_3+2 R_2
\end{array}\right]\)

⇒ \(\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right]=\left[\begin{array}{rrr}
3 & -1 & 1 \\
-15 & 6 & -5 \\
5 & -2 & 2
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_1 \rightarrow R_1+R_3 \\
R_2 \rightarrow R_2-3 R_3
\end{array}\right]\)

Hence, \(A^{-1}=\left[\begin{array}{rrr}
3 & -1 & 1 \\
-15 & 6 & -5 \\
5 & -2 & 2
\end{array}\right]\)

Example 7 If A = \(\left[\begin{array}{rrr}
1 & -1 & 1 \\
2 & 1 & -1 \\
-1 & -2 & 2
\end{array}\right]\), show that \(A^{-1}\) does not exist.

Solution

We have

\(\left[\begin{array}{rrr}
1 & -1 & 1 \\
2 & 1 & -1 \\
-1 & -2 & 2
\end{array}\right]=\left[\begin{array}{lll}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right] \cdot A\)

⇒ \(\left[\begin{array}{rrr}
1 & -1 & 1 \\
0 & 3 & -3 \\
0 & -3 & 3
\end{array}\right]=\left[\begin{array}{rrr}
1 & 0 & 0 \\
-2 & 1 & 0 \\
1 & 0 & 1
\end{array}\right] \cdot A\)

\(\left[\begin{array}{l}
R_2 \rightarrow R_2-2 R_1 \\
R_3 \rightarrow R_3+R_1
\end{array}\right]\)

⇒ \(\left[\begin{array}{rrr}
1 & -1 & 1 \\
0 & 3 & -3 \\
0 & 0 & 0
\end{array}\right]=\left[\begin{array}{rrr}
1 & 0 & 0 \\
-2 & 1 & 0 \\
-1 & 1 & 1
\end{array}\right] \cdot A\)

\(\left[R_3 \rightarrow R_3+R_2\right] .\)

Thus, we have all zeros in 3rd row of the left-hand side matrix.

Hence, \(A^{-1}\) does not exist.
