The program to find the determinant of a matrix

Here is the MATLAB program to find the determinant of an nxn matrix by the cofactor method.  I had to develop a separate function for each size of matrix.  I may be wrong about having to do that – is there a single function that can be written to find the determinant of any nxn matrix using the cofactor method?
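One possible answer, sketched below rather than taken from the downloadable mfile, is to let a single function call itself on the minors.  The function name detcof and its layout are my own illustration; it follows the same first-row cofactor expansion used by the det2 through det12 functions listed later in this post.

function detvalue=detcof(A)
% Determinant of a square matrix by cofactor expansion along the first row,
% calling itself recursively on the (n-1)x(n-1) minors
n=size(A,1);
if n==1
    detvalue=A(1,1);
elseif n==2
    detvalue=A(1,1)*A(2,2)-A(1,2)*A(2,1);
else
    detvalue=0;
    for j=1:1:n
        detvalue=detvalue+(-1)^(j+1)*A(1,j)*detcof(A(2:n,[1:j-1 j+1:n]));
    end
end
end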

The mfile can be downloaded here.  Try the program for a 10×10 matrix – it took about 6 seconds of CPU time on my PC.  A 12×12 matrix determinant would take about 13 minutes of CPU time.  I stopped at a 12×12 matrix.  For matrices of order 13×13 and higher, you can either write the functions by hand or generate them via a program.
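If you want to go the code-generation route, here is a minimal sketch of what that could look like.  It is my own illustration, not part of the downloadable mfile, and simply writes files det13.m, det14.m, ... that mirror the hand-written functions below.

% Generate detN.m files for orders 13 and higher (illustrative sketch)
for n=13:15
    fid=fopen(sprintf('det%d.m',n),'w');
    fprintf(fid,'function detvalue=det%d(A)\n',n);
    fprintf(fid,'n=%d;\n',n);
    fprintf(fid,'detvalue=0;\n');
    fprintf(fid,'for j=1:1:n\n');
    fprintf(fid,'    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det%d(A(2:n,[1:j-1 j+1:n]));\n',n-1);
    fprintf(fid,'end\n');
    fprintf(fid,'end\n');
    fclose(fid);
end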

Contents

Finding the determinant of a matrix using the cofactor method

and comparing the CPU time with the MATLAB det function

clc
clear all
format long

% n=Size of matrix
n=6;
% Choosing a matrix of nxn size with random numbers
A=rand(n,n);

% Calculating cputime for the cofactor method
tbegin=cputime;
detval=det6(A); % use the det function that matches the value of n above
TimeCofactor=cputime-tbegin;

% Calculating cputime for the MATLAB det function
tbegin=cputime;
MatlabDet=det(A);
TimeMatlab=cputime-tbegin;

% Printing the times
fprintf('Size of matrix is %gx%g \n',n,n)
fprintf('Determinant by cofactor method = %g \n', detval)
fprintf('Determinant by Matlab function = %g \n', MatlabDet)
fprintf('Approximate CPU time taken by cofactor method = %g seconds\n',TimeCofactor)
fprintf('Approximate CPU time taken by MATLAB function = %e seconds\n',TimeMatlab)

Individual functions for the determinant of an nxn matrix

function detvalue=det2(A)
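% 2x2 determinant computed directly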
detvalue=A(1,1)*A(2,2)-A(1,2)*A(2,1);
end

function detvalue=det3(A)
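% Cofactor expansion along the first row; det4 through det12 follow the same pattern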
n=3;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det2(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det4(A)
n=4;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det3(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det5(A)
n=5;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det4(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det6(A)
n=6;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det5(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det7(A)
n=7;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det6(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det8(A)
n=8;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det7(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det9(A)
n=9;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det8(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det10(A)
n=10;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det9(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det11(A)
n=11;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det10(A(2:n,[1:j-1 j+1:n]));
end
end

function detvalue=det12(A)
n=12;
detvalue=0;
for j=1:1:n
    detvalue=detvalue+(-1)^(j+1)*A(1,j)*det11(A(2:n,[1:j-1 j+1:n]));
end
end
Size of matrix is 6x6 
Determinant by cofactor method = -0.0431 
Determinant by Matlab function = -0.0431 
Approximate CPU time taken by cofactor method = 0.140625 seconds
Approximate CPU time taken by MATLAB function = 1.562500e-02 seconds

The above mfile can be downloaded here.



Time it takes to find a determinant

To make the point of how inefficient it is to find the determinant of a matrix by the cofactor method, I wrote a MATLAB program to demonstrate it.  A Google search did not turn up a program, in any language, that finds the determinant by the cofactor method for anything larger than a 4×4 matrix.  So, I wrote one that finds the determinant of matrices of up to 12×12 order.

I ran the program on an Intel(R) Core(TM) i5-8500 CPU @ 3.00 GHz PC.  Here is a table of the CPU time it took, in seconds, to find the determinant of a matrix as a function of its order.
Order of matrix     CPU time to find determinant (s)
----------------------------------------------------
6×6                 0.015625
7×7                 0.046875
8×8                 0.203125
9×9                 0.828125
10×10               5.14063
11×11               52.6406
12×12               623.953

If one continues this to a 25×25 matrix, it is estimated that it would take 8.1821198 \times 10^{17} seconds, which is more than 25 billion years – and we all know that the estimated age of the universe is less, at about 13.77 billion years.

The time it takes to find the determinant of an nxn matrix is approximately n times the time it takes to find the determinant of an (n-1)x(n-1) matrix.  For example, it took 52.6406 seconds to find the determinant of an 11×11 matrix, so finding the determinant of a 12×12 matrix would be estimated to take approximately 12×52.6406=631.7 seconds.  This is close to the 623.953 seconds it actually took.
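As a quick check of that growth rule, here is a back-of-the-envelope snippet of my own, using the measured times from the table above.

% Growth rule: t(n) is roughly n*t(n-1), checked on the 11x11 -> 12x12 step
t11=52.6406;            % measured CPU time for the 11x11 determinant, in seconds
t12_estimated=12*t11;   % estimate for the 12x12 determinant from the growth rule
t12_measured=623.953;   % measured CPU time for the 12x12 determinant, in seconds
fprintf('Estimated %g s, measured %g s\n',t12_estimated,t12_measured)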

The above approximate time calculations are in concurrence with a note written by Professor A.J. Wise in 1969, where he showed that the number of arithmetic operations required to evaluate an nxn determinant by the cofactor method is given by [n!e]-2, and hence approximately n!e for large n.  It follows that the time it takes to find the determinant of an nxn matrix is approximately n times the time it takes to find the determinant of an (n-1)x(n-1) matrix.
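To illustrate Wise's count (my own snippet, not from the note), one can tabulate [n!e]-2 and watch the roughly n-fold growth from one order to the next.

% Operation count [n!e]-2 for the cofactor method (Wise, 1969)
n=2:12;
ops=round(factorial(n)*exp(1))-2;   % number of additions, subtractions and multiplications
growth=ops(2:end)./ops(1:end-1);    % ratio between consecutive orders, roughly n for large n
disp([n(2:end); growth])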

Since the arithmetic operations required here are just additions, multiplications, and subtractions, the computation time can be crudely estimated as 4Tn!e for large n, where T is the clock cycle time and we assume that an addition, a multiplication, or a subtraction each takes 4 clock cycles.  Does it match?

For a 3.00 GHz machine, T=1/(3 \times 10^9) seconds, giving an approximate time for a 12×12 matrix determinant calculation of 4 \times 1/(3 \times 10^9) \times 12! e = 1.736 seconds.  That is not even of a similar order to the 623.953 seconds measured.  Many items go into the CPU time of a numerical algorithm, but for a comparative analysis, these calculations are quite helpful.
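A quick way to reproduce that back-of-the-envelope number (my own snippet; the 4-cycles-per-operation figure is the assumption stated above):

% Crude time estimate 4*T*n!*e for a 12x12 determinant on a 3.00 GHz machine
T=1/3e9;                      % clock cycle time, in seconds
n=12;
ops=factorial(n)*exp(1);      % approximate operation count, n!e
t_estimated=4*T*ops;          % assume 4 clock cycles per arithmetic operation
fprintf('Estimated time for a %dx%d determinant: %.3f s\n',n,n,t_estimated)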



Matrix Algebra: Eigenvalues and Eigenvectors

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum.  The content of Matrix Algebra is, in many cases, taught just in time, where needed.  This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra.  In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra.  Here is the tenth topic, where we talk about the eigenvalues and eigenvectors of a square matrix.  Learn how to define and find the eigenvalues and eigenvectors of a square matrix, and get introduced to some key theorems on eigenvalues and eigenvectors.  Get all the resources in the form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.  Eigenvalues and Eigenvectors

Matrix Algebra: Adequacy of Solutions

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum.  The content of Matrix Algebra is, in many cases, taught just in time, where needed.  This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra.  In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra.  Here is the ninth topic, where we talk about the adequacy of solutions of simultaneous linear equations.  Learn how this adequacy is measured through the condition number of the coefficient matrix, and how the condition number is calculated approximately as well.  Get all the resources in the form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.  Adequacy of Solutions

Matrix Algebra: Gauss-Seidel Method

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum.  The content of Matrix Algebra is, in many cases, taught just in time, where needed.  This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra.  In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra.  Here is the eighth topic, where we talk about solving a set of simultaneous linear equations using the Gauss-Seidel method.  Learn this iterative method of solving simultaneous linear equations, along with its pitfalls and advantages.  Get all the resources in the form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.  Gauss-Seidel Method

Matrix Algebra: LU Decomposition Method

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum.  The content of Matrix Algebra is, in many cases, taught just in time, where needed.  This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra.  In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra.  Here is the seventh topic, where we talk about solving a set of simultaneous linear equations using the LU decomposition method.  First, the LU decomposition method is discussed, along with its motivation.  Then, using LU decomposition to find the inverse of a square matrix is discussed.  Get all the resources in the form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.  LU Decomposition Method

Matrix Algebra: Gaussian Elimination Method

Many university STEM major programs have reduced the credit hours for a course in Matrix Algebra or have simply dropped the course from their curriculum.  The content of Matrix Algebra is, in many cases, taught just in time, where needed.  This approach can leave a student with many conceptual holes in the required knowledge of matrix algebra.  In this series of blogs, we bring to you ten topics that are of immediate and intermediate interest for Matrix Algebra.  Here is the sixth topic, where we talk about solving a set of simultaneous linear equations using the Gaussian elimination method – both the naive and partial pivoting versions are discussed.  How to find determinants using the forward elimination step of Gaussian elimination is also discussed.  Get all the resources in the form of textbook content, lecture videos, multiple choice test, problem set, and PowerPoint presentation.  Gaussian Elimination Method