
Math: The gradient of the determinant


Theorem: $\nabla(\det A) = \mathrm{cofactor}(A) = (\det A)A^{-T} = \mathrm{adj}(A^T)$

$d(\det A) = \operatorname{tr}(\det(A)A^{-1}\,dA) = \operatorname{tr}(\mathrm{adj}(A)\,dA) = \operatorname{tr}(\mathrm{cofactor}(A)^T\,dA)$

Note: the adjugate is defined as $(\det A)A^{-1}$; it is the transpose of the cofactor.

adj (generic function with 1 method)
cofactor (generic function with 1 method)
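The defining cells are hidden in this export; a minimal sketch of what `adj` and `cofactor` likely look like, built straight from the identities in the theorem above (the notebook's actual hidden code may differ):

```julia
using LinearAlgebra

# adjugate: adj(A) = det(A) * A⁻¹
adj(A) = det(A) * inv(A)

# cofactor matrix: the transpose of the adjugate, i.e. det(A) * A⁻ᵀ
cofactor(A) = det(A) * inv(A)'
```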

Formulas at a glance:
$A^{-1} = \mathrm{adj}(A)/\det(A) = \mathrm{cofactor}(A)^T/\det(A)$
$\mathrm{adj}(A) = \det(A)\,A^{-1} = \mathrm{cofactor}(A)^T$
$\mathrm{cofactor}(A) = \det(A)\,A^{-T} = \mathrm{adj}(A)^T$


The adjugate is the transpose of the cofactor matrix. You may remember that the cofactor matrix has as its $(i,j)$ entry $(-1)^{i+j}$ times the determinant obtained by deleting row $i$ and column $j$.
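That definition translates directly into code; a sketch (the helper name `cofactor_minors` is ours, not the notebook's):

```julia
using LinearAlgebra

# (i,j) cofactor: (-1)^(i+j) times the minor obtained by
# deleting row i and column j
function cofactor_minors(A)
    n = size(A, 1)
    [(-1)^(i + j) * det(A[setdiff(1:n, i), setdiff(1:n, j)]) for i in 1:n, j in 1:n]
end
```

For a 2×2 matrix $[a\ b;\ c\ d]$ this yields $[d\ {-c};\ {-b}\ a]$, exactly the gradient of $ad - bc$.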

$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$


$\mathrm{cofactor}(M) = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$


$\mathrm{adj}(M) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$


$M^{-1} = \dfrac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$


Numerical Demonstration

5×5 Matrix{Float64}:
 6.23738e-6  9.80317e-6  9.49748e-6  2.16775e-6  7.13224e-6
 4.29949e-7  1.04401e-6  9.43453e-6  3.49108e-6  3.30058e-6
 2.43275e-6  6.93099e-6  9.77647e-6  4.73139e-6  1.52485e-6
 4.08123e-6  5.79363e-6  5.63512e-6  9.05209e-6  5.3378e-6
 3.20542e-7  9.13855e-6  1.20064e-6  2.504e-6    3.50599e-6

ForwardDiff Autodiff

5×5 Matrix{Float64}:
  0.227566    -0.231878    0.00127616   0.0537115   -0.113963
 -0.441303     0.453903    0.109316    -0.188827     0.132947
  0.0485198   -0.0217404  -0.0335765    0.068198    -0.0835962
 -0.00220219  -0.0833639   0.0166832    0.0266121   -0.0155698
  0.124369    -0.124389   -0.105723     0.00347312   0.0503901
5×5 Matrix{Float64}:
  0.227566    -0.231878    0.00127616   0.0537115   -0.113963
 -0.441303     0.453903    0.109316    -0.188827     0.132947
  0.0485198   -0.0217404  -0.0335765    0.068198    -0.0835962
 -0.00220219  -0.0833639   0.0166832    0.0266121   -0.0155698
  0.124369    -0.124389   -0.105723     0.00347312   0.0503901
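The two matrices above agree entry for entry. The hidden cells presumably compare the autodiff gradient against the theorem; a sketch of that comparison:

```julia
using ForwardDiff, LinearAlgebra

A = rand(5, 5)

# gradient of det via forward-mode AD ...
g = ForwardDiff.gradient(det, A)

# ... matches the theorem: ∇(det A) = cofactor(A) = det(A) * A⁻ᵀ
g ≈ det(A) * inv(A)'
```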

Reverse Mode Autodiff

Zygote does reverse-mode AD.
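The Zygote cell is hidden in this export; a sketch of the reverse-mode version of the same check:

```julia
using Zygote, LinearAlgebra

A = rand(5, 5)

# Zygote.gradient returns a tuple, one entry per argument
g, = Zygote.gradient(det, A)

g ≈ det(A) * inv(A)'
```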

Symbolic Demonstration


$\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$


$\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$

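One way to reproduce the symbolic cofactor matrix above with Symbolics.jl (a sketch; the notebook's hidden cells may do it differently):

```julia
using Symbolics, LinearAlgebra

@variables a b c d
M = [a b; c d]

# gradient of the scalar det(M) with respect to each entry of M,
# reshaped back to matrix form (vec is column-major: a, c, b, d)
g = Symbolics.jacobian([det(M)], vec(M))
G = reshape(g, 2, 2)   # the cofactor matrix [d -c; -b a]
```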

Direct Proof

A direct proof, where you just differentiate the scalar with respect to every input, is a simple consequence of the cofactor expansion, a.k.a. the Laplace expansion of the determinant along the $i$th row.

$\det(A) = A_{i1}C_{i1} + A_{i2}C_{i2} + \cdots + A_{in}C_{in}$

Recall that the determinant is a linear (affine) function of any element with slope equal to the cofactor.

We then readily obtain $\dfrac{\partial \det(A)}{\partial A_{ij}} = C_{ij}$, from which we conclude $\nabla(\det A) = C$.


Example:


$\begin{pmatrix} 4 & 3 & 6 \\ 1 & 4 & 7 \\ 5 & 5 & a \end{pmatrix}$


$-125 + 13a$


13.0
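A quick check of this example with ForwardDiff (matrix entries as reconstructed from the display above; the 13 is the cofactor of the entry holding $a$):

```julia
using ForwardDiff, LinearAlgebra

# det of this matrix is 13a − 125, an affine function of the entry a
f(a) = det([4.0 3.0 6.0; 1.0 4.0 7.0; 5.0 5.0 a])

ForwardDiff.derivative(f, 2.0)   # 13.0, the cofactor of the a entry
```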


Fancy Proof

Figure out the linearization near the identity $I$:

$\det(I + dA) = 1 + \operatorname{tr}(dA)$ (think of the $n!$ terms in the determinant and drop higher-order terms)


$\det(A + dA) = \det\!\big(A + A(A^{-1}dA)\big) = \det(A)\det(I + A^{-1}dA) = \det(A)\big(1 + \operatorname{tr}(A^{-1}dA)\big)$, hence $d(\det A) = \det(A)\operatorname{tr}(A^{-1}dA) = \operatorname{tr}(\det(A)A^{-1}\,dA)$.
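A numerical sanity check of this linearization (the shift, matrix size, and perturbation scale are arbitrary choices of ours):

```julia
using LinearAlgebra

A  = rand(4, 4) + 4I      # shifted to keep A safely invertible
dA = 1e-6 * rand(4, 4)    # a small perturbation

# det(A + dA) ≈ det(A) * (1 + tr(A⁻¹ dA)), with O(‖dA‖²) error
det(A + dA) ≈ det(A) * (1 + tr(A \ dA))
```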


Application to derivative of the characteristic polynomial


Direct derivative (freshman calculus):

$\dfrac{d}{dx}\prod_i (x - \lambda_i) \;=\; \sum_i \prod_{j \ne i} (x - \lambda_j) \;=\; \prod_i (x - \lambda_i)\left\{\sum_i (x - \lambda_i)^{-1}\right\}$


Perfectly good simple proof, but if you want to show off...


With our new technology:

$d(\det(xI - A)) = \det(xI - A)\operatorname{tr}\!\big((xI - A)^{-1}\, d(xI - A)\big)$
$= \det(xI - A)\operatorname{tr}\!\big((xI - A)^{-1}\big)\, dx$

Note: $d(xI - A) = (dx)\,I$ when $A$ is constant, and $\operatorname{tr}(M\,dx) = \operatorname{tr}(M)\,dx$ since $dx$ is a scalar.


Check:

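The hidden check cell presumably does something like this (the matrix size and evaluation point are arbitrary choices of ours):

```julia
using ForwardDiff, LinearAlgebra

A = rand(3, 3)
p(x) = det(x * I - A)    # characteristic polynomial of A

x₀ = 5.0   # safely outside the eigenvalues of rand(3,3)
ForwardDiff.derivative(p, x₀) ≈ det(x₀ * I - A) * tr(inv(x₀ * I - A))
```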

Application: $d(\log(\det(A)))$

$d(\log(\det(A))) = \det(A)^{-1}\, d(\det(A)) = \operatorname{tr}(A^{-1}\, dA)$

The logarithmic derivative shows up a lot in applied mathematics. For example, the key term in Newton's method, $f(x)/f'(x)$, may be written $1/\{\log f(x)\}'$.


Math: The Jacobian of the Inverse

$A^{-1}A = I \;\Rightarrow\; d(A^{-1}A) = 0 = d(A^{-1})\,A + A^{-1}\,dA$ from the product rule
$d(A^{-1}) = -A^{-1}(dA)\,A^{-1} = -(A^{-T} \otimes A^{-1})\,dA$ (as a Jacobian acting on $\operatorname{vec}(dA)$)

true
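The `true` above presumably comes from a check like this sketch, using ForwardDiff and the vec/Kronecker convention (the shift keeping `A` invertible is our assumption):

```julia
using ForwardDiff, LinearAlgebra

A = rand(3, 3) + 3I    # shifted to keep A safely invertible

# 9×9 Jacobian of vec(inv(A)) with respect to vec(A)
J = ForwardDiff.jacobian(inv, A)

J ≈ -kron(inv(A)', inv(A))
```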