determinant

Idea

The determinant is the (essentially unique) universal alternating multilinear map.

Definition

Preliminaries on exterior algebra

Let $Vect_k$ be the category of vector spaces over a field $k$, and assume for the moment that the characteristic $char(k) \neq 2$. For each $j \geq 0$, let

$sgn_j \colon S_j \to \hom(k, k)$

be the 1-dimensional sign representation on the symmetric group $S_j$, taking each transposition $(i\, j)$ to $-1 \in k^\ast$. We may linearly extend the sign action of $S_j$, so that $sgn$ names a (right) $k S_j$-module with underlying vector space $k$. At the same time, $S_j$ acts on the $j^{th}$ tensor product of a vector space $V$ by permuting tensor factors, giving a left $k S_j$-module structure on $V^{\otimes j}$. We define the Schur functor

$\Lambda^j \colon Vect_k \to Vect_k$

by the formula

$\Lambda^j(V) = sgn_j \otimes_{k S_j} V^{\otimes j}.$

It is called the $j^{th}$ alternating power (of $V$).

Proposition

If $V$ is $n$-dimensional, then $\Lambda^j(V)$ has dimension $\binom{n}{j}$. In particular, $\Lambda^n(V)$ is 1-dimensional.

Proof

If $e_1, \ldots, e_n$ is a basis for $V$, then expressions of the form $e_{n_1} \otimes \ldots \otimes e_{n_j}$ form a basis for $V^{\otimes j}$. Let $e_{n_1} \wedge \ldots \wedge e_{n_j}$ denote the image of this element under the quotient map $V^{\otimes j} \to \Lambda^j(V)$. We have

$e_{n_1} \wedge \ldots \wedge e_{n_i} \wedge e_{n_{i+1}} \wedge \ldots \wedge e_{n_j} = - e_{n_1} \wedge \ldots \wedge e_{n_{i+1}} \wedge e_{n_i} \wedge \ldots \wedge e_{n_j}$

(consider the transposition in $S_j$ which swaps $i$ and $i+1$), and so the expressions on the left with $n_1 \lt \ldots \lt n_j$ form a spanning set for $\Lambda^j(V)$; indeed, they form a basis. The number of such expressions is $\binom{n}{j}$.
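
As a concrete illustration, here is a minimal Python sketch (the helper name `wedge_basis` is purely illustrative) that enumerates the strictly increasing index tuples indexing this basis and confirms the count $\binom{n}{j}$:

```python
from itertools import combinations
from math import comb

def wedge_basis(n, j):
    """Strictly increasing index tuples (n_1 < ... < n_j), indexing
    the basis elements e_{n_1} ^ ... ^ e_{n_j} of Lambda^j(V)."""
    return list(combinations(range(1, n + 1), j))

n, j = 5, 3
assert len(wedge_basis(n, j)) == comb(n, j)  # C(5,3) = 10
```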

Remark

In the case where $char(k) = 2$, the same development may be carried out by simply decreeing that $e_{n_1} \wedge \ldots \wedge e_{n_j} = 0$ whenever $n_i = n_{i'}$ for some pair of distinct indices $i$, $i'$.

Now let $V$ be an $n$-dimensional space, and let $f \colon V \to V$ be a linear map. By the proposition, the map

$\Lambda^n(f) \colon \Lambda^n(V) \to \Lambda^n(V),$

being an endomorphism of a 1-dimensional space, is given by multiplication by a scalar $D(f) \in k$. It is manifestly functorial since $\Lambda^n$ is, i.e., $D(f g) = D(f) D(g)$. The quantity $D(f)$ is called the determinant of $f$.
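
As a quick numerical sanity check of the multiplicativity $D(f g) = D(f) D(g)$, here is a minimal sketch in Python with numpy (the random matrices stand in for arbitrary endomorphisms of $\mathbb{R}^4$):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 4))  # an endomorphism of R^4, as a matrix
g = rng.standard_normal((4, 4))

# functoriality of the top alternating power gives det(f g) = det(f) det(g)
assert np.isclose(np.linalg.det(f @ g), np.linalg.det(f) * np.linalg.det(g))
```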

Determinant of a matrix

We see then that if $V$ is of dimension $n$,

$\det \colon End(V) \to k$

is a homomorphism of multiplicative monoids; by commutativity of multiplication in $k$, we infer that

$\det(U A U^{-1}) = \det(A)$

for each invertible linear map $U \in GL(V)$.

If we choose a basis of $V$ so that we have an identification $End(V) \cong Mat_n(k)$, then the determinant gives a function

$\det \colon Mat_n(k) \to k$

that takes products of $n \times n$ matrices to products in $k$. The determinant is of course independent of the choice of basis, since any two choices are related by a change-of-basis matrix $U$, and $A$ and its transform $U A U^{-1}$ have the same determinant.

By following the definitions above, we can give an explicit formula:

$\det(A) = \sum_{\sigma \in S_n} sgn(\sigma) \prod_{i = 1}^n a_{i \sigma(i)}.$
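
This sum over permutations can be transcribed directly into code. Below is a minimal Python sketch (the helper names `sign` and `det_leibniz` are illustrative; the sum has $n!$ terms, so it is only sensible for small $n$), checked against numpy:

```python
import itertools
import numpy as np

def sign(perm):
    """Sign of a permutation, given as a tuple of 0-based indices,
    computed by counting inversions."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """det(a) as the signed sum over permutations of products of entries."""
    n = len(a)
    return sum(sign(p) * np.prod([a[i][p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(det_leibniz(a), np.linalg.det(a))
```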

Properties

We work over fields of arbitrary characteristic. The determinant satisfies the following properties, which taken together uniquely characterize the determinant. Write a square matrix $A$ as a row of column vectors $(v_1, \ldots, v_n)$.

  1. $\det$ is separately linear in each column vector:

    $\det(v_1, \ldots, a v + b w, \ldots, v_n) = a \det(v_1, \ldots, v, \ldots, v_n) + b \det(v_1, \ldots, w, \ldots, v_n)$

  2. $\det(v_1, \ldots, v_n) = 0$ whenever $v_i = v_j$ for distinct $i, j$.

  3. $\det(I) = 1$, where $I$ is the identity matrix.
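
These three axioms are easy to spot-check numerically over $k = \mathbb{R}$; a minimal Python sketch (the inline `det` helper just stacks the given column vectors into a matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v, w, v3 = (rng.standard_normal(3) for _ in range(4))
a, b = 2.0, -3.0

det = lambda *cols: np.linalg.det(np.column_stack(cols))

# 1. linearity in (say) the middle column
assert np.isclose(det(v1, a * v + b * w, v3),
                  a * det(v1, v, v3) + b * det(v1, w, v3))
# 2. vanishing when two columns coincide
assert np.isclose(det(v1, v, v), 0.0)
# 3. normalization
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)
```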

Other properties may be worked out, starting from the explicit formula or otherwise:

  • If $A$ is a diagonal matrix, then $\det(A)$ is the product of its diagonal entries.

  • More generally, if $A$ is an upper (or lower) triangular matrix, then $\det(A)$ is the product of the diagonal entries.

  • If $E/k$ is an extension field and $f$ is a $k$-linear map $V \to V$, then $\det(f) = \det(E \otimes_k f)$. Using the preceding properties and the Jordan normal form of a matrix, this means that $\det(f)$ is the product of its eigenvalues (counted with multiplicity), as computed in the algebraic closure of $k$.

  • If $A^t$ is the transpose of $A$, then $\det(A^t) = \det(A)$.
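
Several of these are likewise easy to verify numerically over $\mathbb{R}$ (a minimal Python sketch; the eigenvalue product is taken over $\mathbb{C}$, the algebraic closure):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4))

# upper triangular: determinant is the product of the diagonal entries
t = np.triu(a)
assert np.isclose(np.linalg.det(t), np.prod(np.diag(t)))

# determinant is the product of the eigenvalues, computed over C
assert np.isclose(np.linalg.det(a), np.prod(np.linalg.eigvals(a)).real)

# determinant is invariant under transpose
assert np.isclose(np.linalg.det(a.T), np.linalg.det(a))
```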

Cramer’s rule

A simple observation which flows from these basic properties is

Proposition

(Cramer’s Rule)

Let $v_1, \ldots, v_n$ be column vectors of dimension $n$, and suppose

$w = \sum_j a_j v_j.$

Then for each $i$ we have

$a_i \det(v_1, \ldots, v_i, \ldots, v_n) = \det(v_1, \ldots, w, \ldots, v_n)$

where $w$ occurs as the $i^{th}$ column vector on the right.

Proof

This follows straightforwardly from properties 1 and 2 above: expanding the right-hand side by linearity in the $i^{th}$ column gives $\sum_j a_j \det(v_1, \ldots, v_j, \ldots, v_n)$ (with $v_j$ in the $i^{th}$ place), and every term with $j \neq i$ has a repeated column and so vanishes.

For instance, given a square matrix $A$ such that $\det(A) \neq 0$, and writing $A = (v_1, \ldots, v_n)$, this allows us to solve the equation

$A \cdot a = w$

by taking $a_i = \det(v_1, \ldots, w, \ldots, v_n) / \det(A)$ for each $i$, and we easily conclude that $A$ is invertible if $\det(A) \neq 0$.
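
A minimal Python sketch of this procedure (the name `cramer_solve` is illustrative; with $n + 1$ determinant evaluations this is far less efficient than elimination, and is shown only to mirror the formula):

```python
import numpy as np

def cramer_solve(A, w):
    """Solve A a = w via Cramer's rule:
    a_i = det(A with column i replaced by w) / det(A)."""
    d = np.linalg.det(A)
    a = np.empty(len(w))
    for i in range(len(w)):
        Ai = A.copy()
        Ai[:, i] = w  # put w in the i-th column
        a[i] = np.linalg.det(Ai) / d
    return a

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
w = np.array([1.0, 2.0, 3.0])
assert np.allclose(cramer_solve(A, w), np.linalg.solve(A, w))
```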

Remark

This holds true even if we replace the field $k$ by an arbitrary commutative ring $R$, and we replace the condition $\det(A) \neq 0$ by the condition that $\det(A)$ is a unit. (The entire development given above goes through, mutatis mutandis.)

Characteristic polynomial and Cayley-Hamilton theorem

Given a linear endomorphism $f \colon M \to M$ of a finite rank free unital module over a commutative unital ring, one can consider the zeros of the characteristic polynomial $\det(t \cdot 1_M - f)$. The coefficients of this polynomial are the concern of the Cayley-Hamilton theorem.
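
Over $\mathbb{R}$, both the coefficients and the resulting Cayley-Hamilton identity $p(f) = 0$ can be checked numerically; a minimal Python sketch (numpy's `poly` returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# coefficients of det(t I - A) = t^2 - trace(A) t + det(A), i.e. [1, -5, 6]
coeffs = np.poly(A)

# Cayley-Hamilton: A satisfies its own characteristic polynomial
p_of_A = sum(c * np.linalg.matrix_power(A, k)
             for k, c in enumerate(reversed(coeffs)))
assert np.allclose(p_of_A, np.zeros((2, 2)))
```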

Over the real numbers: volume and orientation

A useful intuition to have for determinants of real matrices is that they measure change of volume. That is, an $n \times n$ matrix with real entries maps the standard unit cube in $\mathbb{R}^n$ to a parallelepiped in $\mathbb{R}^n$ (squashed to lie in a hyperplane if the matrix is singular), and the determinant is, up to sign, the volume of that parallelepiped. It is easy to convince oneself of this in the planar case by a simple dissection of a parallelogram, rearranging the dissected pieces in the style of Euclid to form a rectangle. In algebraic terms, the dissection and rearrangement amount to applying shearing or elementary column operations to the matrix; by the properties discussed earlier, these leave the determinant unchanged. Such operations transform the matrix into a diagonal matrix whose determinant is the area of the corresponding rectangle. This procedure generalizes easily to $n$ dimensions.
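
For example, in the plane (a minimal Python sketch):

```python
import numpy as np

# columns span a parallelogram in R^2; |det| is its area
u = np.array([3.0, 0.0])
v = np.array([1.0, 2.0])
area = abs(np.linalg.det(np.column_stack([u, v])))
assert np.isclose(area, 6.0)  # base 3, height 2
```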

The sign itself is a matter of interest. An invertible transformation $f \colon V \to V$ is said to be orientation-preserving if $\det(f)$ is positive, and orientation-reversing if $\det(f)$ is negative. Orientations play an important role throughout geometry and algebraic topology, for example in the study of orientable manifolds (where the tangent bundle, as a $GL(n)$-bundle, can be lifted to a $GL_+(n)$-bundle structure, $GL_+(n) \hookrightarrow GL(n)$ being the subgroup of matrices of positive determinant). See also KO-theory.

Finally, we include one more property of determinants which pertains to matrices with real coefficients (which works slightly more generally for matrices with coefficients in a local field):

  • If $A$ is an $n \times n$ matrix, then $\det(\exp(A)) = \exp(trace(A))$.
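
This identity is straightforward to verify numerically using the matrix exponential from scipy (a minimal Python sketch):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```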

In terms of Berezinian integrals

see Pfaffian for the moment
