determinant


Idea

The determinant is the (essentially unique) universal alternating multilinear map.

Definition

Preliminaries on exterior algebra

Let $Vect_k$ be the category of vector spaces over a field $k$, and assume for the moment that the characteristic $char(k) \neq 2$. For each $j \geq 0$, let

$$sgn_j \colon S_j \to \hom(k, k)$$

be the 1-dimensional sign representation of the symmetric group $S_j$, taking each transposition $(i\ j)$ to $-1 \in k^\times$. We may linearly extend the sign action of $S_j$, so that $sgn_j$ names a (right) $k S_j$-module with underlying vector space $k$. At the same time, $S_j$ acts on the $j^{th}$ tensor power of a vector space $V$ by permuting tensor factors, giving a left $k S_j$-module structure on $V^{\otimes j}$. We define the Schur functor

$$\Lambda^j \colon Vect_k \to Vect_k$$

by the formula

$$\Lambda^j(V) = sgn_j \otimes_{k S_j} V^{\otimes j}.$$

It is called the $j^{th}$ alternating power (of $V$).

Another point of view on the alternating power is via superalgebra. For any cosmos $\mathbf{V}$, let $CMon(\mathbf{V})$ be the category of commutative monoid objects in $\mathbf{V}$. The forgetful functor $CMon(\mathbf{V}) \to \mathbf{V}$ has a left adjoint

$$\exp(V) = \sum_{n \geq 0} V^{\otimes n}/S_n$$

whose values are naturally regarded as graded by degree $n$.

This applies in particular to $\mathbf{V}$ the category of supervector spaces; if $V$ is a supervector space concentrated in odd degree, say with component $V_{odd}$, then the symmetry $\sigma \colon V \otimes V \to V \otimes V$ maps $v \otimes w \mapsto -w \otimes v$ for elements $v, w \in V_{odd}$. It follows that the graded component $\exp(V)_n$ is concentrated in degree $parity(n)$, with component $\Lambda^n(V_{odd})$.

Proposition

There is a canonical natural isomorphism $\Lambda^n(V \oplus W) \cong \sum_{j + k = n} \Lambda^j(V) \otimes \Lambda^k(W)$.

Proof

Again take $\mathbf{V}$ to be the category of supervector spaces. Since the left adjoint $\exp \colon \mathbf{V} \to CMon(\mathbf{V})$ preserves coproducts, and since the tensor product $\otimes$ of $\mathbf{V}$ provides the coproduct for commutative monoid objects, we have a natural isomorphism

$$\exp(V \oplus W) \cong \exp(V) \otimes \exp(W).$$

Examining the grade $n$ component $\exp(V \oplus W)_n$, this leads to an identification

$$\exp(V \oplus W)_n = \sum_{j + k = n} \exp(V)_j \otimes \exp(W)_k,$$

and now the result follows by considering the case where $V$ and $W$ are concentrated in odd degree.

Corollary

If $V$ is $n$-dimensional, then $\Lambda^j(V)$ has dimension $\binom{n}{j}$. In particular, $\Lambda^n(V)$ is 1-dimensional.

Proof

By induction on dimension. If $\dim(V) = 1$, we have that $\Lambda^0(V)$ and $\Lambda^1(V)$ are 1-dimensional, and clearly $\Lambda^n(V) = 0$ for $n \geq 2$, at least when $char(k) \neq 2$.

We then infer

$$\begin{array}{rcl} \Lambda^j(V \oplus k) & \cong & \sum_{p + q = j} \Lambda^p(V) \otimes \Lambda^q(k) \\ & \cong & \Lambda^j(V) \oplus \Lambda^{j-1}(V) \end{array}$$

where the dimensions satisfy the same recurrence as the binomial coefficients: $\binom{n+1}{j} = \binom{n}{j} + \binom{n}{j-1}$.

More concretely: if $e_1, \ldots, e_n$ is a basis for $V$, then expressions of the form $e_{n_1} \otimes \ldots \otimes e_{n_j}$ form a basis for $V^{\otimes j}$. Let $e_{n_1} \wedge \ldots \wedge e_{n_j}$ denote the image of this element under the quotient map $V^{\otimes j} \to \Lambda^j(V)$. We have

$$e_{n_1} \wedge \ldots \wedge e_{n_i} \wedge e_{n_{i+1}} \wedge \ldots \wedge e_{n_j} = -e_{n_1} \wedge \ldots \wedge e_{n_{i+1}} \wedge e_{n_i} \wedge \ldots \wedge e_{n_j}$$

(consider the transposition in $S_j$ which swaps $i$ and $i+1$), so we may take the expressions with $n_1 < \ldots < n_j$ as a spanning set for $\Lambda^j(V)$; indeed, these form a basis. The number of such expressions is $\binom{n}{j}$.

Remark

In the case where $char(k) = 2$, the same development may be carried out by simply decreeing that $e_{n_1} \wedge \ldots \wedge e_{n_j} = 0$ whenever $n_i = n_{i'}$ for some pair of distinct indices $i, i'$.
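The count of basis elements is easy to check by machine. Here is a minimal sketch in Python (the function name `alt_power_basis` is ours, not standard): it enumerates the strictly increasing index tuples labelling the basis of $\Lambda^j(V)$ and compares their number against $\binom{n}{j}$.

```python
from itertools import combinations
from math import comb

def alt_power_basis(n, j):
    """Strictly increasing index tuples n_1 < ... < n_j, labelling
    the basis elements e_{n_1} ^ ... ^ e_{n_j} of Lambda^j(V), dim V = n."""
    return list(combinations(range(1, n + 1), j))

n = 4
for j in range(n + 1):
    basis = alt_power_basis(n, j)
    assert len(basis) == comb(n, j)  # dim Lambda^j(V) = C(n, j)
    print(j, basis)
```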

Now let $V$ be an $n$-dimensional space, and let $f \colon V \to V$ be a linear map. By the corollary, the map

$$\Lambda^n(f) \colon \Lambda^n(V) \to \Lambda^n(V),$$

being an endomorphism of a 1-dimensional space, is given by multiplication by a scalar $D(f) \in k$. It is manifestly functorial since $\Lambda^n$ is, i.e., $D(f g) = D(f) D(g)$. The quantity $D(f)$ is called the determinant of $f$.
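To make the definition concrete, one can expand $f(e_1) \wedge \ldots \wedge f(e_n)$ by multilinearity and read off $D(f)$ as the coefficient of $e_1 \wedge \ldots \wedge e_n$. The following Python sketch (our own illustration; $A$ is a matrix representing $f$ in the chosen basis) does exactly this, using the alternating relations to sort each wedge monomial:

```python
from itertools import product

def sort_sign(idx):
    """Sort indices by adjacent swaps, tracking the sign from the
    alternating relation; a repeated index kills the wedge monomial."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] == idx[j + 1]:
                return 0, tuple(idx)
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def det_via_top_power(A):
    """Coefficient of e_1 ^ ... ^ e_n in f(e_1) ^ ... ^ f(e_n),
    where f(e_j) = sum_i A[i][j] e_i."""
    n, total = len(A), 0
    for idx in product(range(n), repeat=n):   # expand by multilinearity
        sign, _ = sort_sign(idx)
        if sign:
            coeff = 1
            for col, row in enumerate(idx):
                coeff *= A[row][col]
            total += sign * coeff
    return total

print(det_via_top_power([[1, 2], [3, 4]]))   # -2 = 1*4 - 2*3
```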

Determinant of a matrix

We see then that if $V$ is of dimension $n$,

$$\det \colon End(V) \to k$$

is a homomorphism of multiplicative monoids; by commutativity of multiplication in $k$, we infer that

$$\det(U A U^{-1}) = \det(A)$$

for each invertible linear map $U \in GL(V)$.

If we choose a basis of $V$ so that we have an identification $End(V) \cong Mat_n(k)$, then the determinant gives a function

$$\det \colon Mat_n(k) \to k$$

or, by restriction to invertible matrices (whose determinants are invertible), a function

$$\det \colon GL_n(k) \to k^*$$

that takes products of $n \times n$ matrices to products in $k$. The determinant is of course independent of the choice of basis, since any two choices are related by a change-of-basis matrix $U$, and $A$ and its transform $U A U^{-1}$ have the same determinant. The above map is furthermore natural in $k$, hence is a natural transformation $\det \colon GL_n \to (-)^*$ from the general linear group to the group of units of a field (or more generally a ring), both regarded as functors from Field (or more generally Ring) to Grp.

By following the definitions above, we can give an explicit formula:

$$\det(A) \;=\; \sum_{\sigma \in S_n} sgn(\sigma) \prod_{i = 1}^n a_{i \sigma(i)}.$$

This may equivalently be written using the Levi-Civita symbol $\epsilon$ and the Einstein summation convention as

$$\det(A) \;=\; a_{1 j_1} a_{2 j_2} \cdots a_{n j_n} \, \epsilon^{j_1 j_2 \cdots j_n} \tag{1}$$

which in turn may be re-written more symmetrically as

$$\det(A) \;=\; \frac{1}{n!} \, \epsilon^{i_1 i_2 \cdots i_n} \, a_{i_1 j_1} a_{i_2 j_2} \cdots a_{i_n j_n} \, \epsilon^{j_1 j_2 \cdots j_n} \tag{2}$$
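The permutation sum is directly executable. Here is a small Python sketch (function names ours), feasible only for small $n$ since the sum has $n!$ terms:

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation of 0..n-1, via its number of inversions."""
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def det_leibniz(A):
    """det(A) = sum_{sigma in S_n} sgn(sigma) prod_i a_{i, sigma(i)}."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = perm_sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det_leibniz([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))   # 25
```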

Properties

We work over fields of arbitrary characteristic. The determinant satisfies the following properties, which taken together uniquely characterize it. Write a square matrix $A$ as a row of column vectors $(v_1, \ldots, v_n)$.

  1. $\det$ is separately linear in each column vector:

     $$\det(v_1, \ldots, a v + b w, \ldots, v_n) = a\,\det(v_1, \ldots, v, \ldots, v_n) + b\,\det(v_1, \ldots, w, \ldots, v_n)$$

  2. $\det(v_1, \ldots, v_n) = 0$ whenever $v_i = v_j$ for distinct $i, j$.

  3. $\det(I) = 1$, where $I$ is the identity matrix.

Other properties may be worked out, starting from the explicit formula or otherwise:

  • If $A$ is a diagonal matrix, then $\det(A)$ is the product of its diagonal entries.

  • More generally, if $A$ is an upper (or lower) triangular matrix, then $\det(A)$ is the product of the diagonal entries.

  • If $E/k$ is a field extension and $f$ is a $k$-linear map $V \to V$, then $\det(f) = \det(E \otimes_k f)$. Using the preceding properties and the Jordan normal form of a matrix, this means that $\det(f)$ is the product of the eigenvalues of $f$ (counted with multiplicity), as computed in the algebraic closure of $k$.

  • If $A^t$ is the transpose of $A$, then $\det(A^t) = \det(A)$. (Several of these properties are spot-checked numerically in the sketch below.)
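For instance, over the real numbers the triangular, transpose, and eigenvalue properties can be verified with NumPy; this is a quick numerical sanity check (up to floating-point tolerance), not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# det of a triangular matrix is the product of its diagonal entries
T = np.triu(A)
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))

# det is invariant under transposition
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# det is the product of the eigenvalues (computed over the complex numbers)
assert np.isclose(np.linalg.det(A), np.prod(np.linalg.eigvals(A)).real)
```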

Cramer’s rule

A simple observation which flows from these basic properties is

Proposition

(Cramer’s Rule)

Let $v_1, \ldots, v_n$ be column vectors of dimension $n$, and suppose

$$w = \sum_j a_j v_j.$$

Then for each $i$ we have

$$a_i \det(v_1, \ldots, v_i, \ldots, v_n) = \det(v_1, \ldots, w, \ldots, v_n)$$

where $w$ occurs as the $i^{th}$ column vector on the right.

Proof

This follows straightforwardly from properties 1 and 2 above.

For instance, given a square matrix $A$ such that $\det(A) \neq 0$, and writing $A = (v_1, \ldots, v_n)$, this allows us to solve for a vector $a$ in an equation

$$A \cdot a = w,$$

and we easily conclude that $A$ is invertible if $\det(A) \neq 0$.
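Spelled out as an algorithm, Cramer's rule solves $A \cdot a = w$ by replacing one column of $A$ at a time. A sketch (the helper name `cramer_solve` is ours; practical solvers prefer Gaussian elimination, since computing $n+1$ determinants costs more and is numerically less stable):

```python
import numpy as np

def cramer_solve(A, w):
    """Solve A a = w via a_i = det(A with column i replaced by w) / det(A),
    assuming det(A) != 0."""
    A = np.asarray(A, dtype=float)
    d = np.linalg.det(A)
    a = np.empty(len(w))
    for i in range(len(w)):
        Ai = A.copy()
        Ai[:, i] = w                    # put w in the i-th column
        a[i] = np.linalg.det(Ai) / d
    return a

A = [[2.0, 1.0], [1.0, 3.0]]
w = [5.0, 10.0]
print(cramer_solve(A, w))               # [1. 3.]
print(np.linalg.solve(A, w))            # agrees
```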

Remark

This holds true even if we replace the field $k$ by an arbitrary commutative ring $R$, and we replace the condition $\det(A) \neq 0$ by the condition that $\det(A)$ is a unit. (The entire development given above goes through, mutatis mutandis.)

Characteristic polynomial and Cayley-Hamilton theorem

Given a linear endomorphism $f \colon M \to M$ of a finite rank free unital module over a commutative unital ring, one can consider the zeros of the characteristic polynomial $\det(t \cdot 1_M - f)$. The coefficients of this polynomial are the concern of the Cayley-Hamilton theorem.
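A numerical illustration of the Cayley-Hamilton theorem (a sketch using NumPy; `np.poly` returns the coefficients of the characteristic polynomial $\det(t \cdot 1 - A)$, highest degree first):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)

coeffs = np.poly(A)   # characteristic polynomial of A

# Cayley-Hamilton: A is a root of its own characteristic polynomial
p_of_A = sum(c * np.linalg.matrix_power(A, k)
             for k, c in enumerate(reversed(coeffs)))
assert np.allclose(p_of_A, np.zeros((3, 3)))
```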

Over the real numbers: volume and orientation

A useful intuition to have for determinants of real matrices is that they measure change of volume. That is, an $n \times n$ matrix with real entries maps the standard unit cube in $\mathbb{R}^n$ to a parallelepiped in $\mathbb{R}^n$ (squashed to lie in a hyperplane if the matrix is singular), and the determinant is, up to sign, the volume of this parallelepiped. It is easy to convince oneself of this in the planar case by a simple dissection of a parallelogram, rearranging the dissected pieces in the style of Euclid to form a rectangle. In algebraic terms, the dissection and rearrangement amount to applying shearing, or elementary column operations, to the matrix; by the properties discussed earlier, these leave the determinant unchanged, and they transform the matrix into a diagonal matrix whose determinant is the area of the corresponding rectangle. This procedure generalizes readily to $n$ dimensions.
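For a concrete check of the planar case (our own numerical illustration): map the unit square through a $2 \times 2$ matrix and compare the area of the image parallelogram, computed by the shoelace formula, with $|\det|$.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # det A = 5

# images of the unit square's corners, in counterclockwise order
corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]) @ A.T

# shoelace formula for the area of the image parallelogram
x, y = corners[:, 0], corners[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

assert np.isclose(area, abs(np.linalg.det(A)))   # area = |det A|
```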

The sign itself is a matter of interest. An invertible transformation $f \colon V \to V$ is said to be orientation-preserving if $\det(f)$ is positive, and orientation-reversing if $\det(f)$ is negative. Orientations play an important role throughout geometry and algebraic topology, for example in the study of orientable manifolds (where the tangent bundle, as a $GL(n)$-bundle, can be lifted to a $GL_+(n)$-bundle structure, $GL_+(n) \hookrightarrow GL(n)$ being the subgroup of matrices of positive determinant). See also KO-theory.

Finally, we include one more property of determinants, which pertains to matrices with real coefficients (and works slightly more generally for matrices with coefficients in a local field):

As a polynomial in traces of powers

On the relation between determinant and trace:

If $A$ is an $n \times n$ matrix, the determinant of its exponential equals the exponential of its trace:

$$\det(\exp(A)) = \exp(tr(A)) \,.$$
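A quick numerical check (assuming SciPy's `expm` for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(2).standard_normal((4, 4))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```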

More generally, the determinant of $A$ is a polynomial in the traces of the powers of $A$:

For $2 \times 2$-matrices:

$$\det(A) \;=\; \tfrac{1}{2}\left( tr(A)^2 - tr(A^2) \right)$$

For $3 \times 3$-matrices:

$$\det(A) \;=\; \tfrac{1}{6} \left( (tr(A))^3 - 3 tr(A^2) tr(A) + 2 tr(A^3) \right)$$

For $4 \times 4$-matrices:

$$\det(A) \;=\; \tfrac{1}{24} \left( (tr(A))^4 - 6 tr(A^2)(tr(A))^2 + 3 (tr(A^2))^2 + 8 tr(A^3) tr(A) - 6 tr(A^4) \right)$$

Generally, for $n \times n$-matrices (Kondratyuk-Krivoruchenko 92, appendix B):

$$\det(A) \;=\; \sum_{ { k_1, \ldots, k_n \in \mathbb{N} } \atop { \sum_{l = 1}^{n} l k_l = n } } \; \prod_{l = 1}^{n} \frac{(-1)^{k_l + 1}}{l^{k_l} \, k_l!} \left( tr(A^l) \right)^{k_l} \tag{3}$$
Proof of (3)

It is enough to prove this for semisimple matrices $A$ (matrices that become diagonalizable upon passing to the algebraic closure of the ground field), because this subset of matrices is Zariski dense (using, for example, the nonvanishing of the discriminant of the characteristic polynomial) and the set of $A$ for which the equation holds is Zariski closed.

Thus, without loss of generality we may suppose that $A$ is diagonal, with $n$ eigenvalues $\lambda_1, \ldots, \lambda_n$ along the diagonal, in which case the statement can be rewritten as follows. Letting $p_k = tr(A^k) = \lambda_1^k + \ldots + \lambda_n^k$, the following identity holds:

$$\prod_{i=1}^n \lambda_i = \sum_{ { k_1, \ldots, k_n \in \mathbb{N} } \atop { \sum_{l = 1}^{n} l k_l = n } } \; \prod_{l = 1}^{n} \frac{(-1)^{k_l + 1}}{l^{k_l} \, k_l!} \, p_l^{k_l}$$

This of course is just a polynomial identity, one closely related to various of the Newton identities that concern symmetric polynomials in indeterminates $x_1, \ldots, x_n$. Thus we again let $p_k = x_1^k + \ldots + x_n^k$, and define the elementary symmetric polynomials $\sigma_k = \sigma_k(x_1, \ldots, x_n)$ via the generating function identity

$$\sum_{k \geq 0} \sigma_k t^k = \prod_{i=1}^n (1 + x_i t).$$

Then we compute

$$\begin{array}{rcl} \sum_{k \geq 0} \sigma_k t^k & = & \prod_{i=1}^n (1 + x_i t) \\ & = & \exp\left( \sum_{i=1}^n \log(1 + x_i t) \right) \\ & = & \exp\left( \sum_{i=1}^n \sum_{k \geq 1} (-1)^{k+1} \frac{x_i^k}{k} t^k \right) \\ & = & \exp\left( \sum_{k \geq 1} (-1)^{k+1} \frac{p_k}{k} t^k \right) \end{array}$$

and simply match coefficients of $t^n$ in the initial and final series expansions, where we easily compute

$$x_1 x_2 \ldots x_n = \sum_{n = k_1 + 2 k_2 + \ldots + n k_n} \; \prod_{l=1}^n \frac{(-1)^{k_l+1}}{k_l!} \left( \frac{p_l}{l} \right)^{k_l}.$$

This completes the proof.
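Formula (3) can also be checked directly by machine. The sketch below (our own; the function name `det_from_traces` is not standard) enumerates the tuples $(k_1, \ldots, k_n)$ with $\sum_l l k_l = n$ and compares the result with an ordinary determinant; note that factors with $k_l = 0$ contribute $-1$, which is what makes the signs come out right.

```python
import numpy as np
from itertools import product
from math import factorial

def det_from_traces(A):
    """det(A) as a polynomial in tr(A^l), per formula (3)."""
    n = A.shape[0]
    p = {l: np.trace(np.linalg.matrix_power(A, l)) for l in range(1, n + 1)}
    total = 0.0
    # all (k_1, ..., k_n) with 1*k_1 + 2*k_2 + ... + n*k_n = n
    for ks in product(*(range(n // l + 1) for l in range(1, n + 1))):
        if sum(l * k for l, k in zip(range(1, n + 1), ks)) != n:
            continue
        term = 1.0
        for l, k in zip(range(1, n + 1), ks):
            term *= (-1) ** (k + 1) / (l ** k * factorial(k)) * p[l] ** k
        total += term
    return total

A = np.random.default_rng(3).standard_normal((4, 4))
assert np.isclose(det_from_traces(A), np.linalg.det(A))
```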

In terms of Berezinian integrals

see Pfaffian for the moment

References


One derivation of the formula (3) for the determinant as a polynomial in traces of powers is spelled out in appendix B of

  • L. A. Kondratyuk, I. Krivoruchenko, Superconducting quark matter in $SU(2)$ colour group, Zeitschrift für Physik A: Hadrons and Nuclei 344:1 (1992) 99–115 (doi:10.1007/BF01291027)
