Inverse of a square matrix

The inverse of a square matrix M is defined as the matrix that transforms M into the unit matrix when multiplied with it. It is defined for square matrices only, and the inverse of the matrix M is written M⁻¹.

An interesting point is the usage of the inverse matrix to solve a matrix equation. In the Gaussian elimination algorithm, the elimination is normally carried out until we get an upper triangular matrix. But the elimination can just as well be carried on until we get a diagonal matrix. If we afterwards divide each row by its diagonal element, so that all diagonal elements become 1, we get the following form:


$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} y'_1 \\ y'_2 \\ y'_3 \end{pmatrix}$$




That means the solution vector y' already contains the solution of the matrix equation. The interesting point is that all the transformations that bring the matrix into this form amount to a multiplication by its inverse matrix, and if we apply the same multiplication to the solution vector, we get exactly the form above.

That means:

To solve a matrix equation we can build the inverse of the coefficient matrix, multiply it by the solution vector, and the result of this multiplication is the solution of the matrix equation.
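
As a small sketch of this idea: assuming a hypothetical InvertMatrix helper that stands for one of the inversion routines developed below, solving M * x = y could look like:

double[] SolveByInverse(double[,] m, double[] y, int MaxOrder)
{
     double[,] inv = InvertMatrix(m, MaxOrder);   // hypothetical helper returning M⁻¹
     double[] x = new double[MaxOrder];
     for (int i = 0; i < MaxOrder; i++)
     {
          x[i] = 0.0;
          for (int j = 0; j < MaxOrder; j++)
               x[i] = x[i] + inv[i, j] * y[j];    // x = M⁻¹ * y
     }
     return x;
}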



Gauss-Jordan algorithm

The idea of the Gauss-Jordan algorithm to invert a matrix M is to start with a matrix B that can be multiplied by M without changing it. Such a matrix is the unit matrix. We place this matrix beside the matrix M, transform M into a unit matrix, and perform exactly the same manipulations on matrix B. At the end, when matrix M has been transformed into a unit matrix, matrix B is the inverse matrix of M: as we performed all the transformations on B as well, multiplying M * B yields the unit matrix. To transform matrix M into a unit matrix I used my algorithm for the elimination of elements according to Gauss (see elimination algorithm by Gauss); I just had to extend it a bit.
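
Schematically, the algorithm carries the unit matrix along while M is reduced:

$$\left(\, M \mid I \,\right) \;\longrightarrow\; \left(\, I \mid M^{-1} \,\right)$$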

There is one point to take care of: we multiply entire rows for each elimination step, so we have to make sure that the elimination of the elements on the right side does not influence the elimination on the left side and vice versa. This can be avoided by starting the elimination on the left side and eliminating the elements of the first column (marked red here), then going to the right side and eliminating the elements of the last column (marked green), then going back to the left side for the elements of the second column and back to the right side for the elements of the second to last column (marked blue), and so on.

[Figure: order of elimination, with the first column marked red, the last column green and the second and second to last columns blue]

In this way the elimination works fine and we get a diagonal matrix at the end.

If we start by eliminating the first element m21, we multiply the first row by m21, multiply the second row by m11 and subtract row 1 from row 2. That eliminates element m21. We apply the same manipulations to matrix B: multiply row 1 by m21 and row 2 by m11 and subtract row 1 from row 2. We do the same for all the elements to be eliminated. The next one is m31: multiply row 1 by m31 and row 3 by m11 and subtract row 1 from row 3. Do the same on B and go on with m41.
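
Written as a row operation, the elimination of m21 is:

$$\text{row}_2 \;\leftarrow\; m_{11} \cdot \text{row}_2 \;-\; m_{21} \cdot \text{row}_1$$

applied to M and B alike.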

There is a small difference to the pure Gaussian elimination algorithm. If we just had to eliminate the elements of M, it would be sufficient to run from the element to be eliminated to the last element of the row when multiplying and subtracting. But on matrix B that is not sufficient: there we have to run through all elements of the current row. Therefore we have to save the value of the element to be eliminated in a temporary variable to use it on the other elements. In code that looks like:

   

for (k = 0; k <= MaxOrder - 2; k++)            // column to eliminate
{
     for (i = k; i <= MaxOrder - 2; i++)       // rows below the diagonal
     {
          if (Math.Abs(m[i + 1, k]) > 1e-8)    // skip elements that are already 0
          {
               tempEl = m[i + 1, k];           // save the element before it gets overwritten
               for (l = 0; l <= MaxOrder - 1; l++)
               {
                    if (l >= k)
                         m[i + 1, l] = m[i + 1, l] * m[k, k] - m[k, l] * tempEl;
                    inv[i + 1, l] = inv[i + 1, l] * m[k, k] - inv[k, l] * tempEl;
               }
          }
     }
}




for the left part.

The right part becomes a bit longer, as we have to count from the back and run through the rows bottom-up.

  
for (k = 0; k <= MaxOrder - 2; k++)                                // column to eliminate, counted from the right
{
     if (Math.Abs(m[MaxOrder - 1 - k, MaxOrder - 1 - k]) > 1e-8)   // pivot element must not be 0
     {
          for (i = k; i <= MaxOrder - 2; i++)                      // rows above the diagonal, counted from the bottom
          {
               if (Math.Abs(m[MaxOrder - i - 2, MaxOrder - 1 - k]) > 1e-8)  // skip elements that are already 0
               {
                    tempEl = m[MaxOrder - i - 2, MaxOrder - 1 - k];         // save the element before it gets overwritten
                    for (l = MaxOrder - 1; l >= 0; l--)
                    {
                         if (l <= MaxOrder - 1 - k)
                              m[MaxOrder - i - 2, l] = m[MaxOrder - i - 2, l] * m[MaxOrder - 1 - k, MaxOrder - 1 - k] - m[MaxOrder - 1 - k, l] * tempEl;
                         inv[MaxOrder - i - 2, l] = inv[MaxOrder - i - 2, l] * m[MaxOrder - 1 - k, MaxOrder - 1 - k] - inv[MaxOrder - 1 - k, l] * tempEl;
                    }
               }
          }
     }
}



At the end of these loops we should have a diagonal matrix. Now there are two situations we have to take care of. First: the element that we want to eliminate is already 0. In this case we don't have to do anything and can skip the step. This is done by the check if (Math.Abs(m[i + 1, k]) > 1e-8). I don't compare to 0 directly, as a floating point number almost never becomes exactly 0.

The second case occurs if the diagonal element m[k, k] is 0. In this case the elimination does not work. In the Gaussian elimination algorithm we could switch rows in such a case; here we can't, as the inverse matrix would become wrong if we did that. This case occurs less often than the first one, but some check is still required, and if m[k, k] or m[MaxOrder - 1 - k, MaxOrder - 1 - k] is 0, the calculation should return a calculation error.
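
A minimal sketch of such a check for the left part (how the error is reported depends on the surrounding code) could be:

if (Math.Abs(m[k, k]) < 1e-8)
     return false;   // calculation error: zero diagonal element, the matrix cannot be inverted this way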

If nothing went wrong and we have our diagonal matrix, we just have to divide each row by the value of its diagonal element m[k, k] to get the unit matrix in M and the inverse matrix in B. As we are only looking for the inverse matrix, we just have to do this for B:


for (k = 0; k < MaxOrder; k++)           // row index
{
     for (i = 0; i < MaxOrder; i++)      // column index
     {
          inv[k, i] = inv[k, i] / m[k, k];   // divide the row by its diagonal element
     }
}



It might not look very interesting to solve a matrix equation like this, as we are already quite close to the solution once we have the diagonal matrix: we would just have to divide the y values of the solution vector by the m[k, k] values to get the x values. But if we can calculate the inverse matrix a bit easier (and more elegantly, in my opinion :-) ), it could make sense.
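
For comparison, a compact textbook-style variant of the Gauss-Jordan inversion, which normalises each pivot row immediately instead of using the double-sided elimination shown above, could look like this sketch:

// Textbook-style Gauss-Jordan inversion (not the demo project code).
static double[,] Invert(double[,] a, int n)
{
     double[,] m = (double[,])a.Clone();
     double[,] inv = new double[n, n];
     for (int i = 0; i < n; i++) inv[i, i] = 1.0;   // start with the unit matrix

     for (int k = 0; k < n; k++)
     {
          double pivot = m[k, k];
          if (Math.Abs(pivot) < 1e-8)
               throw new ArithmeticException("zero pivot, matrix cannot be inverted this way");

          for (int l = 0; l < n; l++)               // normalise the pivot row
          {
               m[k, l] /= pivot;
               inv[k, l] /= pivot;
          }
          for (int i = 0; i < n; i++)               // eliminate column k in all other rows
          {
               if (i == k) continue;
               double factor = m[i, k];
               for (int l = 0; l < n; l++)
               {
                    m[i, l] -= factor * m[k, l];
                    inv[i, l] -= factor * inv[k, l];
               }
          }
     }
     return inv;                                    // m has become the unit matrix
}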


Cramer's method

Gabriel Cramer's method to calculate an inverse matrix is based on his rule for solving a matrix equation with the aid of determinants (see Matrix determinant).

To deduce this method we have to transform the multiplication of the matrix M by its inverse B (I use a 3 * 3 matrix to keep the calculations easy)

$$\begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \cdot \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

into the separate equations for all the elements of the unit matrix. That looks like

$$\begin{aligned}
m_{11}b_{11} + m_{12}b_{21} + m_{13}b_{31} &= 1 \\
m_{11}b_{12} + m_{12}b_{22} + m_{13}b_{32} &= 0 \\
m_{11}b_{13} + m_{12}b_{23} + m_{13}b_{33} &= 0 \\
m_{21}b_{11} + m_{22}b_{21} + m_{23}b_{31} &= 0 \\
m_{21}b_{12} + m_{22}b_{22} + m_{23}b_{32} &= 1 \\
m_{21}b_{13} + m_{22}b_{23} + m_{23}b_{33} &= 0 \\
m_{31}b_{11} + m_{32}b_{21} + m_{33}b_{31} &= 0 \\
m_{31}b_{12} + m_{32}b_{22} + m_{33}b_{32} &= 0 \\
m_{31}b_{13} + m_{32}b_{23} + m_{33}b_{33} &= 1
\end{aligned}$$

For each column of the inverse matrix we now use 3 of these equations. For the first column that's

$$\begin{aligned}
m_{11}b_{11} + m_{12}b_{21} + m_{13}b_{31} &= 1 \\
m_{21}b_{11} + m_{22}b_{21} + m_{23}b_{31} &= 0 \\
m_{31}b_{11} + m_{32}b_{21} + m_{33}b_{31} &= 0
\end{aligned}$$

According to Cramer we have to replace the first column of the matrix M by the solution vector, take the determinant of this matrix and divide it by the determinant of M to get b11. Then we replace the second column of M by the solution vector and divide the determinant of this matrix by the determinant of M to get b21, and so on.

$$b_{11} = \frac{\begin{vmatrix} 1 & m_{12} & m_{13} \\ 0 & m_{22} & m_{23} \\ 0 & m_{32} & m_{33} \end{vmatrix}}{\det M}
\qquad
b_{21} = \frac{\begin{vmatrix} m_{11} & 1 & m_{13} \\ m_{21} & 0 & m_{23} \\ m_{31} & 0 & m_{33} \end{vmatrix}}{\det M}
\qquad
b_{31} = \frac{\begin{vmatrix} m_{11} & m_{12} & 1 \\ m_{21} & m_{22} & 0 \\ m_{31} & m_{32} & 0 \end{vmatrix}}{\det M}$$

We do the same for the second column of B. There we use

$$\begin{aligned}
m_{11}b_{12} + m_{12}b_{22} + m_{13}b_{32} &= 0 \\
m_{21}b_{12} + m_{22}b_{22} + m_{23}b_{32} &= 1 \\
m_{31}b_{12} + m_{32}b_{22} + m_{33}b_{32} &= 0
\end{aligned}$$

to get b12, b22 and b32.

And for the third column we use

$$\begin{aligned}
m_{11}b_{13} + m_{12}b_{23} + m_{13}b_{33} &= 0 \\
m_{21}b_{13} + m_{22}b_{23} + m_{23}b_{33} &= 0 \\
m_{31}b_{13} + m_{32}b_{23} + m_{33}b_{33} &= 1
\end{aligned}$$

to get b13, b23 and b33.

The interesting thing for implementing this in an algorithm is that we only have to change the solution vector in the equations to switch to the next column.
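
This variant is not the one implemented in the demo projects (see below), but a sketch of it, assuming the CalcDet(matrix, order) helper used further down, could look like:

// Sketch of inversion by Cramer's rule; CalcDet is the determinant helper used below.
static double[,] InvertCramer(double[,] m, int n)
{
     double det = CalcDet(m, n);
     double[,] b = new double[n, n];
     double[,] tmp = new double[n, n];
     for (int col = 0; col < n; col++)         // column of the inverse, solution vector = unit vector e_col
     {
          for (int i = 0; i < n; i++)          // b[i, col] per Cramer's rule
          {
               for (int r = 0; r < n; r++)     // copy M and replace column i by e_col
                    for (int c = 0; c < n; c++)
                         tmp[r, c] = (c == i) ? (r == col ? 1.0 : 0.0) : m[r, c];
               b[i, col] = CalcDet(tmp, n) / det;
          }
     }
     return b;
}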

The complete matrix to calculate the inverse with Cramer's method looks like:

$$b_{ij} = \frac{\det M_{i \leftarrow e_j}}{\det M} \qquad i, j = 1 \ldots 3$$

where $M_{i \leftarrow e_j}$ denotes M with its column i replaced by the j-th unit vector.


But anyhow, I didn’t implement this particular algorithm as there is a better solution :-)



Using cofactor matrices

Deducing the method with the cofactor matrices cost me quite some calculation effort. Starting from the 9 equations for the unit matrix, I eliminated elements and got equations like

[equation: intermediate result of the elimination]


for instance for b31. Resolving this and converting it a bit, I got:

$$b_{31} = \frac{\begin{vmatrix} m_{21} & m_{22} \\ m_{31} & m_{32} \end{vmatrix}}{\det M} = \frac{m_{21}m_{32} - m_{22}m_{31}}{\det M}$$


The denominator is the determinant of M (see Matrix determinant) and the numerator is the determinant of the 2*2 matrix built by m21, m22, m31 and m32, which are all the elements of M excluding the row and column where m13 is. The elements for the whole matrix would look like:

$$B = \frac{1}{\det M}\begin{pmatrix}
+\begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{12} & m_{13} \\ m_{32} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{12} & m_{13} \\ m_{22} & m_{23} \end{vmatrix} \\
-\begin{vmatrix} m_{21} & m_{23} \\ m_{31} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{13} \\ m_{31} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{13} \\ m_{21} & m_{23} \end{vmatrix} \\
+\begin{vmatrix} m_{21} & m_{22} \\ m_{31} & m_{32} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{12} \\ m_{31} & m_{32} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{vmatrix}
\end{pmatrix}$$


Now I suddenly realised that this looks pretty much the same as the matrix of Cramer further above. And in fact it is the same :-) We only have to go one step further with Cramer's determinants:

If we look at the determinant that is used in the equation for b11:

$$\begin{vmatrix} 1 & m_{12} & m_{13} \\ 0 & m_{22} & m_{23} \\ 0 & m_{32} & m_{33} \end{vmatrix}$$


This can be written as

$$1 \cdot \begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix} - 0 \cdot \begin{vmatrix} m_{12} & m_{13} \\ m_{32} & m_{33} \end{vmatrix} + 0 \cdot \begin{vmatrix} m_{12} & m_{13} \\ m_{22} & m_{23} \end{vmatrix}$$


And there only remains

$$\begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix}$$


Or

$$m_{22}m_{33} - m_{23}m_{32}$$


The same works for b22. That's

$$b_{22} = \frac{\begin{vmatrix} m_{11} & 0 & m_{13} \\ m_{21} & 1 & m_{23} \\ m_{31} & 0 & m_{33} \end{vmatrix}}{\det M} = \frac{\begin{vmatrix} m_{11} & m_{13} \\ m_{31} & m_{33} \end{vmatrix}}{\det M} = \frac{m_{11}m_{33} - m_{13}m_{31}}{\det M}$$


This can be done for all the elements of Cramer's matrix, and we get the same matrix as we just got with the cofactor matrices:


$$B = \frac{1}{\det M}\begin{pmatrix}
+\begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{12} & m_{13} \\ m_{32} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{12} & m_{13} \\ m_{22} & m_{23} \end{vmatrix} \\
-\begin{vmatrix} m_{21} & m_{23} \\ m_{31} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{13} \\ m_{31} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{13} \\ m_{21} & m_{23} \end{vmatrix} \\
+\begin{vmatrix} m_{21} & m_{22} \\ m_{31} & m_{32} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{12} \\ m_{31} & m_{32} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{vmatrix}
\end{pmatrix}$$


But this matrix does not look too clear, and it does not seem to follow a useful system. However, if we mirror it at its main diagonal, i.e. build the transposed matrix, we get:


$$\frac{1}{\det M}\begin{pmatrix}
+\begin{vmatrix} m_{22} & m_{23} \\ m_{32} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{21} & m_{23} \\ m_{31} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{21} & m_{22} \\ m_{31} & m_{32} \end{vmatrix} \\
-\begin{vmatrix} m_{12} & m_{13} \\ m_{32} & m_{33} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{13} \\ m_{31} & m_{33} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{12} \\ m_{31} & m_{32} \end{vmatrix} \\
+\begin{vmatrix} m_{12} & m_{13} \\ m_{22} & m_{23} \end{vmatrix} & -\begin{vmatrix} m_{11} & m_{13} \\ m_{21} & m_{23} \end{vmatrix} & +\begin{vmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{vmatrix}
\end{pmatrix}$$


This is quite systematic: each element bij is the determinant of the (n-1)*(n-1) matrix built by the elements excluding column i and row j, with the sign (-1)^(i+j), divided by the determinant of M. With this, the inverse matrix can be calculated by an algorithm that builds the transposed matrix of the inverse first and then mirrors it at the main diagonal. That's it :-)


In code that looks like:

double det = CalcDet(m, MaxOrder);
double[,] mm = new double[MaxOrder - 1, MaxOrder - 1]; // temporary sub matrix
signx = 1.0;                                           // alternating sign, follows the x index
for (x = 0; x < MaxOrder; x++)
{
     signy = 1.0;                                      // alternating sign, follows the y index
     for (y = 0; y < MaxOrder; y++)
     {
          for (j = 0; j < MaxOrder - 1; j++)           // row index of the sub matrix
          {
               for (k = 0; k < MaxOrder - 1; k++)      // column index of the sub matrix
               {
                    if (j < y)                         // copy the elements of m, skipping row y and column x
                    {
                         if (k < x)
                              mm[j, k] = m[j, k];
                         else
                              mm[j, k] = m[j, k + 1];
                    }
                    else
                    {
                         if (k < x)
                              mm[j, k] = m[j + 1, k];
                         else
                              mm[j, k] = m[j + 1, k + 1];
                    }
               }
          }
          // switching the x and y indexes here mirrors the result at the main diagonal
          b[x, y] = signy * signx * CalcDet(mm, MaxOrder - 1) / det;
          signy = -signy;
     }
     signx = -signx;
}




At the end of this loop the inverse matrix of M is in B. I mirror the matrix simply by switching the x and y indexes when copying the result into b.
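
A hypothetical usage of the loop above, wrapped in a method InvertByCofactors (name made up for this sketch), together with a quick check that M * B gives the unit matrix:

double[,] m = { { 4.0, 7.0, 2.0 }, { 3.0, 6.0, 1.0 }, { 2.0, 5.0, 3.0 } };
double[,] b = InvertByCofactors(m, 3);   // hypothetical wrapper around the loop above
for (int i = 0; i < 3; i++)
{
     for (int j = 0; j < 3; j++)
     {
          double s = 0.0;
          for (int k = 0; k < 3; k++)
               s += m[i, k] * b[k, j];
          Console.Write("{0,8:F4}", s);  // should print the unit matrix
     }
     Console.WriteLine();
}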

But there is still one more point: this algorithm only works for matrices bigger than 2*2. For a 2*2 matrix the inverse looks like:


$$M^{-1} = \frac{1}{m_{11}m_{22} - m_{12}m_{21}} \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix}$$


To deduce this matrix, the 4 equations
$$\begin{aligned}
m_{11}b_{11} + m_{12}b_{21} &= 1 \\
m_{21}b_{11} + m_{22}b_{21} &= 0 \\
m_{11}b_{12} + m_{12}b_{22} &= 0 \\
m_{21}b_{12} + m_{22}b_{22} &= 1
\end{aligned}$$


must be resolved. No big deal :-)

This case has to be handled separately.
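
A minimal sketch of this separate case (method name hypothetical) could be:

// 2*2 special case, following the formula above.
static double[,] Invert2x2(double[,] m)
{
     double det = m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0];
     // det must be checked against 0 before dividing
     double[,] b = new double[2, 2];
     b[0, 0] =  m[1, 1] / det;
     b[0, 1] = -m[0, 1] / det;
     b[1, 0] = -m[1, 0] / det;
     b[1, 1] =  m[0, 0] / det;
     return b;
}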





I implemented an online solver to invert a matrix by the method of Cramer. This solver can be found here :-)



C# Demo Project solving a matrix equation by using the inverse matrix

  • Matrix_Invers.zip
  • Matrix_Inv_Cramer.zip


Java Demo Project solving a matrix equation by using the inverse matrix

  • MatrixCramer.zip