Matrix determinant from the exterior algebra viewpoint

We look at the ordinary matrix determinant from the exterior algebra point of view.
Author

Paweł Czyż

Published

July 26, 2023

In every linear algebra course, the matrix determinant is a must. Often it is introduced in the following form:

Definition 1 (Determinant) Let $A = (A^i_j)$ be an $n \times n$ matrix and $S_n$ be the permutation group of $\{1, 2, \dots, n\}$. The determinant is defined to be the number
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}\sigma\; A^{\sigma(1)}_1 A^{\sigma(2)}_2 \cdots A^{\sigma(n)}_n.$$
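This definition translates almost verbatim into code. Below is a minimal sketch in Python (`det_leibniz` and `sgn` are illustrative names, and indices are 0-based):

```python
from itertools import permutations
from math import prod

def sgn(sigma):
    """Sign of a permutation: (-1) raised to the number of inversions."""
    n = len(sigma)
    inversions = sum(sigma[i] > sigma[j] for i in range(n) for j in range(i + 1, n))
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Sum over permutations of sgn(sigma) * A[0][sigma[0]] * ... * A[n-1][sigma[n-1]]."""
    n = len(A)
    return sum(
        sgn(sigma) * prod(A[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )

print(det_leibniz([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

Note that the sum runs over all $n!$ permutations, which already hints that this formula is better suited for theory than for computation.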

The definition above has a lot of advantages, but it also has an important drawback — the “why” of this construction is hidden and appears only later in a long list of its properties.

We’ll take an alternative viewpoint, which I have learned from Darling (1994) and which is based on the exterior algebra.

Note

All our vector spaces will be finite-dimensional.

We will assume that all vector spaces considered are over the real numbers $\mathbb{R}$. Mutatis mutandis, everything also works over the complex numbers $\mathbb{C}$. However, not all theorems and exercises may work over finite fields, especially $\mathbb{F}_2 = \mathbb{Z}/2\mathbb{Z}$.

Motivational examples

Consider $V = \mathbb{R}^3$. For vectors $v$ and $w$ we can define their vector product $v \times w$ with the following properties:

  • Bilinearity: $(\lambda v + v') \times w = \lambda (v \times w) + v' \times w$ and $v \times (\lambda w + w') = \lambda (v \times w) + v \times w'$.
  • Antisymmetry: $v \times w = -w \times v$.

Geometrically, we can think of it as a signed area of the parallelogram spanned by $v$ and $w$.

For three vectors $v, w, u$ we can form the signed volume $\langle v, w, u \rangle = v \cdot (w \times u)$, which has similar properties:

  • Trilinearity: $\langle \lambda v + v', w, u \rangle = \lambda \langle v, w, u \rangle + \langle v', w, u \rangle$ (and similarly in the $w$ and $u$ arguments).
  • Antisymmetry: when we swap any two arguments the sign changes, e.g., $\langle v, w, u \rangle = -\langle w, v, u \rangle = \langle w, u, v \rangle = -\langle u, w, v \rangle$.
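These properties are easy to check numerically. Here is a quick sketch using numpy (the names and the random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
v, w, u = rng.standard_normal((3, 3))  # three random vectors in R^3

def volume(a, b, c):
    """Signed volume <a, b, c> = a . (b x c)."""
    return np.dot(a, np.cross(b, c))

# Swapping any two arguments flips the sign.
assert np.isclose(volume(v, w, u), -volume(w, v, u))
assert np.isclose(volume(v, w, u), -volume(v, u, w))
assert np.isclose(volume(v, w, u), -volume(u, w, v))
```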

Exterior algebra will be a generalisation of the above construction beyond the three-dimensional space $V = \mathbb{R}^3$.

Exterior algebra

Let’s start with the natural definition:

Definition 2 (Antisymmetric multilinear function) Let $V$ and $U$ be vector spaces and $f\colon V \times V \times \cdots \times V \to U$ be a function. We will say that it is multilinear if for all $i = 1, 2, \dots, n$ it holds that
$$f(v_1, v_2, \dots, \lambda v_i + v_i', v_{i+1}, \dots, v_n) = \lambda f(v_1, \dots, v_i, \dots, v_n) + f(v_1, \dots, v_i', \dots, v_n).$$
We will say that it is antisymmetric if it changes the sign whenever we swap any two arguments:
$$f(v_1, \dots, v_i, \dots, v_j, \dots, v_n) = -f(v_1, \dots, v_j, \dots, v_i, \dots, v_n).$$

As we have seen above, both $(v, w) \mapsto v \times w$ and $(v, w, u) \mapsto v \cdot (w \times u)$ are antisymmetric multilinear functions.

Note that for every $\sigma \in S_n$ it holds that $f(v_1, \dots, v_n) = \operatorname{sgn}\sigma\; f(v_{\sigma(1)}, \dots, v_{\sigma(n)})$, as $\operatorname{sgn}\sigma$ counts transpositions modulo 2.

Exercise 1 Let $f\colon V \times V \to U$ be multilinear. Show that the following are equivalent:

  • $f$ is antisymmetric, i.e., $f(v, w) = -f(w, v)$ for every $v, w \in V$.
  • $f$ is alternating, i.e., $f(v, v) = 0$ for every $v \in V$.

Generalise to multilinear mappings $f\colon V \times V \times \cdots \times V \to U$.

Hint: Expand $f(v + w, v + w)$ using multilinearity.

Now we are ready to construct (a particular) exterior algebra.

Definition 3 (Second exterior power) Let $V$ be a vector space. Its second exterior power $\Lambda^2 V$ will be the vector space of expressions $\lambda_1\, v_1 \wedge w_1 + \cdots + \lambda_n\, v_n \wedge w_n$ with the following rules:

  1. The wedge operator $\wedge$ is bilinear, i.e., $(\lambda v + v') \wedge w = \lambda\, v \wedge w + v' \wedge w$ and $v \wedge (\lambda w + w') = \lambda\, v \wedge w + v \wedge w'$.
  2. $\wedge$ is antisymmetric, i.e., $v \wedge w = -w \wedge v$ (or, equivalently, $v \wedge v = 0$).
  3. If $e_1, \dots, e_n$ is a basis of $V$, then $e_1 \wedge e_2, e_1 \wedge e_3, \dots, e_1 \wedge e_n, e_2 \wedge e_3, \dots, e_2 \wedge e_n, \dots, e_{n-1} \wedge e_n$ is a basis of $\Lambda^2 V$.

Note that $v \wedge w$ has the interpretation of a signed area of the parallelogram spanned by $v$ and $w$. Such parallelograms can be formally added, and there is a resemblance between the wedge product and the vector product in $\mathbb{R}^3$.

We just need to prove that such a space actually exists (this construction can be skipped at the first reading): similarly to the tensor space, build the free vector space on the set $V \times V$. Now quotient it by the subspace spanned by expressions like $(v, v)$, $(\lambda v, w) - \lambda (v, w)$, $(v, \lambda w) - \lambda (v, w)$, $(v + v', w) - (v, w) - (v', w)$ and $(v, w + w') - (v, w) - (v, w')$.

Then define $v \wedge w$ to be the equivalence class $[(v, w)]$.

If we had introduced the determinant by other means, we could also construct the exterior power $\Lambda^k V$ as the space of antisymmetric multilinear functions $V^* \times \cdots \times V^* \to \mathbb{R}$ (where $V^*$ is the dual space); for $k = 2$ this reads

$$(v_1 \wedge v_2)(\alpha, \beta) := \det \begin{pmatrix} \alpha(v_1) & \alpha(v_2) \\ \beta(v_1) & \beta(v_2) \end{pmatrix}.$$

Analogously we can construct:

Definition 4 (Exterior power) Let $V$ be a vector space. We define $\Lambda^0 V = \mathbb{R}$, $\Lambda^1 V = V$, and for $k \ge 2$ its $k$th exterior power $\Lambda^k V$ as the vector space of expressions $\lambda_1\, v_1 \wedge v_2 \wedge \cdots \wedge v_k + \cdots + \lambda_m\, w_1 \wedge w_2 \wedge \cdots \wedge w_k$ such that the wedge operator is multilinear and antisymmetric (alternating) and that if $e_1, \dots, e_n$ is a basis of $V$, then the set
$$\{e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_k} \mid i_1 < i_2 < \cdots < i_k\}$$

is a basis of $\Lambda^k V$.

Exercise 2 Show that if $\dim V = n$, then $\dim \Lambda^k V = \binom{n}{k}$. (And that in particular for $k > n$ we have $\Lambda^k V = 0$, the trivial vector space.)
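For a quick sanity check of the dimension count (not a proof), one can enumerate the index tuples labelling the basis:

```python
from itertools import combinations
from math import comb

n, k = 5, 3
# Tuples i1 < i2 < ... < ik labelling the basis elements e_{i1} ^ ... ^ e_{ik}.
basis = list(combinations(range(1, n + 1), k))
assert len(basis) == comb(n, k)  # 10
```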

The introduced space can be used to convert between antisymmetric multilinear and linear functions by means of the universal property:

Theorem 1 (Universal property) Let $f\colon V \times V \times \cdots \times V \to U$ be an antisymmetric multilinear function. Then, there exists a unique linear mapping $\tilde f\colon \Lambda^k V \to U$ such that for every set of vectors $v_1, \dots, v_k$:
$$f(v_1, \dots, v_k) = \tilde f(v_1 \wedge \cdots \wedge v_k).$$

Proof. (Can be skipped at the first reading.)

As $f$ is multilinear, its values are determined by the values on the tuples $(e_{i_1}, \dots, e_{i_k})$, where $\{e_1, \dots, e_n\}$ is a basis of $V$.

We can use antisymmetry to show that by “sorting” the indices so that $i_1 < i_2 < \cdots < i_k$ and defining $\tilde f(e_{i_1} \wedge \cdots \wedge e_{i_k}) = f(e_{i_1}, \dots, e_{i_k})$ we obtain a well-defined mapping. Linearity is easy to prove.

Now the uniqueness is proven by observing that antisymmetry and multilinearity uniquely prescribe the values at the basis elements of $\Lambda^k V$.

Its importance is the following: to show that a linear map $\Lambda^k V \to U$ is well-defined, one can construct a multilinear antisymmetric map $V \times V \times \cdots \times V \to U$.
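The “sorting” argument from the proof can also be made concrete in code. The sketch below expands $v_1 \wedge \cdots \wedge v_k$ in the basis $e_{i_1} \wedge \cdots \wedge e_{i_k}$ using only multilinearity and antisymmetry (`wedge` is an illustrative name; vectors are lists of coordinates and indices are 0-based):

```python
from itertools import product

def wedge(vectors):
    """Coefficients of v1 ^ ... ^ vk on the basis wedges with sorted indices."""
    k, n = len(vectors), len(vectors[0])
    coeffs = {}
    for indices in product(range(n), repeat=k):
        if len(set(indices)) < k:
            continue  # a repeated index gives a zero wedge
        coeff = 1.0
        for v, i in zip(vectors, indices):
            coeff *= v[i]  # multilinearity: pick one coordinate per factor
        # Sort the indices, flipping the sign once per transposition.
        idx, sign = list(indices), 1
        for a in range(k):
            for b in range(a + 1, k):
                if idx[a] > idx[b]:
                    idx[a], idx[b] = idx[b], idx[a]
                    sign = -sign
        key = tuple(idx)
        coeffs[key] = coeffs.get(key, 0.0) + sign * coeff
    return coeffs

print(wedge([[1, 0, 2], [0, 1, 3]]))  # {(0, 1): 1.0, (0, 2): 3.0, (1, 2): -2.0}
```

For $k = n$ the dictionary has a single entry, on $e_1 \wedge \cdots \wedge e_n$, and its value is precisely the determinant we are about to define.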

Determinants

Finally, we can define the determinant. Note that if $\dim V = n$, then $\dim \Lambda^n V = 1$.

Definition 5 (Determinant) Let $n = \dim V$ and $A\colon V \to V$ be a linear mapping. We consider the mapping $(v_1, \dots, v_n) \mapsto (Av_1) \wedge \cdots \wedge (Av_n)$.

As it is antisymmetric and multilinear, we know that it induces a unique linear mapping $\Lambda^n V \to \Lambda^n V$.

Because $\Lambda^n V$ is one-dimensional, this mapping must be multiplication by a number. Namely, we define the determinant $\det A$ to be the number such that for every set of vectors $v_1, \dots, v_n$:
$$Av_1 \wedge \cdots \wedge Av_n = \det A\; (v_1 \wedge \cdots \wedge v_n).$$

In other words, the determinant measures the volume stretch of the parallelepiped spanned by the vectors after they are transformed by the mapping.

I like this geometric intuition, especially as it makes clear that the determinant depends only on the linear map rather than on a particular matrix representation: it is independent of the chosen basis.
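This picture is easy to test numerically, using numpy’s determinant as the signed volume of the parallelepiped spanned by the columns (a spot check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3))  # columns are v1, v2, v3

vol_before = np.linalg.det(V)      # volume of (v1, v2, v3)
vol_after = np.linalg.det(A @ V)   # volume of (A v1, A v2, A v3)
assert np.isclose(vol_after, np.linalg.det(A) * vol_before)
```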

We can now show a number of lemmata.

Proposition 1 If $\mathrm{id}_V\colon V \to V$ is the identity mapping, then $\det \mathrm{id}_V = 1$.

Proof. Obvious from the definition! Similarly, it’s clear that $\det(\lambda\, \mathrm{id}_V) = \lambda^{\dim V}$.

Proposition 2 For every two mappings $A, B\colon V \to V$ it holds that $\det(BA) = \det B \cdot \det A$.

Proof. For every set of vectors we have
$$\begin{aligned}
\det(BA)\; v_1 \wedge \cdots \wedge v_n &= (BAv_1) \wedge \cdots \wedge (BAv_n) \\
&= B(Av_1) \wedge \cdots \wedge B(Av_n) \\
&= \det B\; (Av_1) \wedge \cdots \wedge (Av_n) \\
&= \det B \det A\; v_1 \wedge \cdots \wedge v_n.
\end{aligned}$$
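A quick numerical spot check of this proposition:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((2, 4, 4))  # two random 4x4 matrices
assert np.isclose(np.linalg.det(B @ A), np.linalg.det(B) * np.linalg.det(A))
```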

Proposition 3 (Only invertible matrices have non-zero determinants) A mapping is an isomorphism if and only if it has non-zero determinant.

Proof. If the mapping is invertible, then $AA^{-1} = \mathrm{id}_V$ and we have $\det A \cdot \det A^{-1} = 1$, so its determinant must be non-zero.

Now assume that the mapping is non-invertible. This means that there exists a non-zero vector $k \in \ker A$ such that $Ak = 0$. Let’s complete $k$ to a basis $k, e_1, \dots, e_{n-1}$. Then
$$\det A\; (k \wedge e_1 \wedge \cdots \wedge e_{n-1}) = (Ak) \wedge (Ae_1) \wedge \cdots \wedge (Ae_{n-1}) = 0,$$
which means that $\det A = 0$, as $\{k \wedge e_1 \wedge \cdots \wedge e_{n-1}\}$ is a basis of $\Lambda^n V$.
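And a spot check on a visibly non-invertible map:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0],
              [3.0, 6.0, 5.0]])  # second column = 2 * first column
assert np.isclose(np.linalg.det(A), 0.0)
```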

Let’s now connect the usual definition of the determinant to the one coming from exterior algebra:

Proposition 4 (Recovering the standard expression) Let $e_1, \dots, e_n$ be a basis of $V$ and $(A^i_j)$ be the matrix of coordinates, i.e., $Ae_k = \sum_i A^i_k e_i$. Then the determinant $\det A$ can be calculated as
$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}\sigma\; A^{\sigma(1)}_1 A^{\sigma(2)}_2 \cdots A^{\sigma(n)}_n.$$

Proof. Observe that
$$\det A\; e_1 \wedge \cdots \wedge e_n = Ae_1 \wedge \cdots \wedge Ae_n = \left(\sum_{i_1} A^{i_1}_1 e_{i_1}\right) \wedge \cdots \wedge \left(\sum_{i_n} A^{i_n}_n e_{i_n}\right) = \sum_{i_1, \dots, i_n} A^{i_1}_1 A^{i_2}_2 \cdots A^{i_n}_n\; e_{i_1} \wedge \cdots \wedge e_{i_n}.$$

Now we see that repeated indices give zero contribution to this sum, so we only need to consider indices which are permutations of $1, 2, \dots, n$. We also see that $e_{i_1} \wedge \cdots \wedge e_{i_n}$ can then be written as $\pm e_1 \wedge \cdots \wedge e_n$, where the sign is $(-1)$ raised to the number of required transpositions, that is, the sign of the permutation. This ends the proof.
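We can cross-check this proposition numerically by comparing the permutation sum (the sketch from Definition 1, repeated here in compact form) against numpy’s determinant:

```python
from itertools import permutations
import numpy as np

def det_leibniz(A):
    """Permutation-sum formula; the sign is (-1) to the number of inversions."""
    n = len(A)
    return sum(
        (-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
        * np.prod([A[i][s[i]] for i in range(n)])
        for s in permutations(range(n))
    )

A = np.random.default_rng(3).standard_normal((4, 4))
assert np.isclose(det_leibniz(A), np.linalg.det(A))
```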

Going just a bit further into exterior algebra we can also show that matrix transposition does not change the determinant.

To represent matrix transposition, we will use the dual mapping: if $A\colon V \to V$, there is the dual mapping $A^*\colon V^* \to V^*$, given as $(A^*\omega)(v) := \omega(Av)$.

We can therefore build the $n$th exterior power of $V^*$, namely $\Lambda^n(V^*)$, and consider the determinant $\det A^*$.

We will formally show the following:

Proposition 5 (Determinant of the transpose) Let $A\colon V \to V$ be a linear map and $A^*\colon V^* \to V^*$ be its dual. Then $\det A^* = \det A$.

Proof. To do this we will need an isomorphism $\iota\colon \Lambda^n(V^*) \to (\Lambda^n V)^*$ given on basis elements by $\iota(\omega_1 \wedge \cdots \wedge \omega_n)(v_1 \wedge \cdots \wedge v_n) = \det\big(\omega_i(v_j)\big)_{i,j=1,\dots,n}$, where on the right side we use any already known formula for the determinant. It is easy to show that this mapping is well-defined and linear, as it descends from a multilinear alternating mapping.

Having this, the proof becomes a straightforward calculation:
$$\begin{aligned}
\det A^*\; \iota(\omega_1 \wedge \cdots \wedge \omega_n)(v_1 \wedge \cdots \wedge v_n) &= \iota(\det A^*\; \omega_1 \wedge \cdots \wedge \omega_n)(v_1 \wedge \cdots \wedge v_n) \\
&= \iota(A^*\omega_1 \wedge \cdots \wedge A^*\omega_n)(v_1 \wedge \cdots \wedge v_n) \\
&= \det\big((A^*\omega_i)(v_j)\big) = \det\big(\omega_i(Av_j)\big) \\
&= \iota(\omega_1 \wedge \cdots \wedge \omega_n)(Av_1 \wedge \cdots \wedge Av_n) \\
&= \iota(\omega_1 \wedge \cdots \wedge \omega_n)(\det A\; v_1 \wedge \cdots \wedge v_n) \\
&= \det A\; \iota(\omega_1 \wedge \cdots \wedge \omega_n)(v_1 \wedge \cdots \wedge v_n).
\end{aligned}$$
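In coordinates the dual map is represented by the transposed matrix, so the statement reduces to a familiar check:

```python
import numpy as np

A = np.random.default_rng(4).standard_normal((5, 5))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```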

Establishing such isomorphisms is quite a nice technique, which can also be used to prove:

Proposition 6 (Determinant of a block-diagonal matrix) Let $A\colon V \to V$ and $B\colon W \to W$ be two linear mappings and $A \oplus B\colon V \oplus W \to V \oplus W$ be the mapping given by $(A \oplus B)(v, w) = (Av, Bw)$.

Then $\det(A \oplus B) = \det A \cdot \det B$.

Proof. We will use this approach: there exists an isomorphism $\Lambda^p(V \oplus W) \cong \bigoplus_k \Lambda^k V \otimes \Lambda^{p-k} W$, so if we take $n = \dim V$ and $m = \dim W$ and note that $\Lambda^p V = 0$ for $p > n$ (and similarly for $W$), we have $\iota\colon \Lambda^{n+m}(V \oplus W) \to \Lambda^n V \otimes \Lambda^m W$. If $i\colon V \to V \oplus W$ and $j\colon W \to V \oplus W$ are the two “canonical” inclusions, this isomorphism is given as $\iota(iv_1 \wedge \cdots \wedge iv_n \wedge jw_1 \wedge \cdots \wedge jw_m) = (v_1 \wedge \cdots \wedge v_n) \otimes (w_1 \wedge \cdots \wedge w_m)$. Now we calculate:
$$\begin{aligned}
(A \oplus B)(iv_1 \wedge \cdots \wedge iv_n \wedge jw_1 \wedge \cdots \wedge jw_m) &= i(Av_1) \wedge \cdots \wedge i(Av_n) \wedge j(Bw_1) \wedge \cdots \wedge j(Bw_m) \\
&= \iota^{-1}\big((Av_1 \wedge \cdots \wedge Av_n) \otimes (Bw_1 \wedge \cdots \wedge Bw_m)\big) \\
&= \iota^{-1}\big(\det A \det B\; (v_1 \wedge \cdots \wedge v_n) \otimes (w_1 \wedge \cdots \wedge w_m)\big) \\
&= \det A \det B\; \iota^{-1}\big((v_1 \wedge \cdots \wedge v_n) \otimes (w_1 \wedge \cdots \wedge w_m)\big) \\
&= \det A \det B\; iv_1 \wedge \cdots \wedge iv_n \wedge jw_1 \wedge \cdots \wedge jw_m.
\end{aligned}$$
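In coordinates $A \oplus B$ is a block-diagonal matrix, which gives another easy spot check:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
AB = np.block([
    [A, np.zeros((3, 2))],
    [np.zeros((2, 3)), B],
])  # the matrix of A ⊕ B
assert np.isclose(np.linalg.det(AB), np.linalg.det(A) * np.linalg.det(B))
```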

Proposition 7 (Determinant of an upper-triangular matrix) Let $A\colon V \to V$ be a linear mapping and $e_1, \dots, e_n$ be a basis of $V$ such that the matrix $(A^i_j)$ is upper-triangular, that is,
$$\begin{aligned}
Ae_1 &= A^1_1 e_1 \\
Ae_2 &= A^1_2 e_1 + A^2_2 e_2 \\
&\;\;\vdots \\
Ae_n &= A^1_n e_1 + A^2_n e_2 + \cdots + A^n_n e_n.
\end{aligned}$$
Then $\det A = \prod_{i=1}^n A^i_i$.

Once proven, this result can also be used for lower-triangular matrices due to Proposition 5.

Proof. Recall that whenever $i_j = i_k$ for some $j \ne k$, we have $e_{i_1} \wedge \cdots \wedge e_{i_n} = 0$. Hence, when we expand the product by multilinearity, there is only one term that may be non-zero (from $Ae_1$ we must pick $e_1$, which then forces picking $e_2$ from $Ae_2$, and so on):
$$Ae_1 \wedge Ae_2 \wedge \cdots \wedge Ae_n = A^1_1 e_1 \wedge \cdots \wedge A^n_n e_n = \prod_{i=1}^n A^i_i\; e_1 \wedge \cdots \wedge e_n.$$
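A final spot check, with `np.triu` producing a random upper-triangular matrix:

```python
import numpy as np

A = np.triu(np.random.default_rng(6).standard_normal((4, 4)))
assert np.isclose(np.linalg.det(A), np.prod(np.diag(A)))
```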

Acknowledgements

I would like to thank Adam Klukowski for helpful editing suggestions.

References

Darling, R. W. R. 1994. Differential Forms and Connections. Cambridge University Press. https://books.google.pl/books?id=TdCaahMK0z4C.