# John Baez Circuit theory

This is a draft of a never-completed paper by John Baez. Much of this material, though not the part on cohomology, later found its way into other work.

## Abstract

There is a dagger-compact category whose morphisms are equivalence classes of electrical circuits made of linear resistors. To construct this category, we begin by recalling work going back to Weyl which expresses Kirchhoff’s laws and Ohm’s law in terms of chains and cochains on a graph. We show that a ‘lumped’ circuit made of linear resistors—that is, a circuit of this sort treated as a ‘black box’ whose inside workings we cannot see—amounts mathematically to a Dirichlet form: a finite-dimensional real vector space with a chosen basis and a quadratic form obeying some conditions. There are rules for composing and tensoring Dirichlet forms, which correspond to the operations of composing circuits in series and setting circuits side by side. However, these rules do not give a category, because the would-be identity morphisms are made of wires with zero resistance, which fall outside the Dirichlet form framework. The most elegant solution is to treat Dirichlet forms as a special case of Lagrangian correspondences. This leads to a dagger-compact category of electrical circuits that can include wires with zero resistance.

# Contents

## Basic Concepts

The concept of an electrical circuit made of linear resistors is well-known in electrical engineering, but we need to formalize it with more precision than usual. The basic idea is that an electrical circuit is a graph whose edges are labelled by positive real numbers called ‘resistances’, and whose set of vertices is equipped with two subsets: the ‘inputs’ and the ‘outputs’.

### Circuits

All graphs in this paper will be directed. So, define a graph to be a pair of functions $s,t : E \to V$ where $E$ and $V$ are finite sets. We call elements of $E$ edges and elements of $V$ vertices. We say that the edge $e \in E$ has source $s(e)$ and target $t(e)$, and we also say that $e$ is an edge from $s(e)$ to $t(e)$.

Define an open graph to be a graph where the set of vertices is equipped with subsets $V_-$ and $V_+$, called inputs and outputs. We do not require that $V_-$ and $V_+$ are disjoint. Often the difference between inputs and outputs will not matter, so we define $\partial V = V_- \cup V_+$, and call elements of this set terminals.

Define a circuit (made of linear resistors) to be an open graph together with a function called the resistance

$R : E \to (0,+\infty)$

assigning to each edge a positive real number. We will use $\Gamma$ to stand for a circuit:

$\Gamma = \left(s,t : E \to V, V_{\pm}, R: E \to (0,+\infty) \right)$

Suppose we have another circuit

$\Gamma' = \left(s',t' : E' \to V', V'_{\pm}, R': E' \to (0,+\infty) \right)$

Then there is an obvious notion of a map of circuits $f : \Gamma \to \Gamma'$. Such a map consists of a function sending vertices to vertices and a function sending edges to edges, both called $f$:

$f : V \to V'$
$f : E \to E'$

which preserve sources and targets, inputs and outputs, and resistances:

$s'(f(e)) = f(s(e)), \qquad t'(f(e)) = f(t(e))$
$v \in V_+ \implies f(v) \in V'_+$
$v \in V_- \implies f(v) \in V'_-$
$R'(f(e)) = R(e)$

This definition makes circuits into the objects of a category.

Given any circuit $\Gamma$, there are three other circuits we can build from it. They are all rather trivial, since they have no edges, only vertices. Nonetheless they are very important in what follows.

First, we have a circuit $\Gamma_+$ whose set of vertices is $V_+$ and whose set of edges is empty. We call this the output of $\Gamma$. There is an obvious map of circuits

$\iota_+ : \Gamma_+ \to \Gamma$

coming from the inclusion $V_+ \hookrightarrow V$ and the inclusion $\emptyset \hookrightarrow E$.

Similarly, there is a circuit $\Gamma_-$, called the input of $\Gamma$, whose set of vertices is $V_-$ and whose set of edges is empty. There is an obvious map

$\iota_- : \Gamma_- \to \Gamma$

Finally, there is a circuit $\partial \Gamma$ whose set of vertices is $\partial V$ and whose set of edges is empty. We call this the boundary of $\Gamma$. Yet again there is an obvious map

$\iota : \partial \Gamma \to \Gamma$

### Chain Complexes from Circuits

In 1923, Hermann Weyl published a paper in Spanish which described electrical circuits in terms of the homology and cohomology of graphs (W). In this approach, Kirchhoff’s voltage and current laws simply say that voltage is a 1-coboundary and current is a 1-cycle. Furthermore, the electrical resistances labelling edges of the graph put an inner product on the space of 1-chains, allowing us to identify them with 1-cochains. Ohm’s law then says that the voltage may be identified with the current.

In the late 1960’s and early 1970’s, these ideas were further developed by authors including Paul Slepian (Sl), G. E. Ching (C), J. P. Roth (R) and Stephen Smale (Sm). By now they are well-known. The textbook by Bamberg and Sternberg (BS) uses electrical circuits to motivate homology, cohomology and the beginnings of Hodge theory. The text by Gross and Kotiuga (GK) uses chain and cochain complexes to tackle a wide variety of problems in electromagnetism. What follows is a terse review of the basics.

Any circuit $\Gamma$ determines a chain complex of real vector spaces, $C_*(\Gamma)$. As we shall see, a 1-chain in this complex can be used to describe the electrical current flowing through wires (that is, edges) of our circuit.

In fact, $C_*(\Gamma)$ is just the usual chain complex associated to a graph. So, it has only two nonzero terms:

$C_0(\Gamma) = \mathbb{R}^V$
$C_1(\Gamma) = \mathbb{R}^E$

with differential

$\partial : C_1(\Gamma) \to C_0(\Gamma)$

given by

$\partial(e) = t(e) - s(e)$

We can make $C_*(\Gamma)$ a chain complex of finite-dimensional real Hilbert spaces, since the resistance $R : E \to (0,+\infty)$ defines an inner product on $C_1(\Gamma)$ by

$\langle e, e' \rangle = R(e) \delta_{e,e'}$

and there is also an inner product on $C_0(\Gamma)$ for which the vertices form an orthonormal basis:

$\langle v, v' \rangle = \delta_{v,v'}$
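Everything here is finite-dimensional linear algebra, so it can be sketched concretely. The following Python fragment builds this data for a small hypothetical circuit (the vertex and edge numbering and the resistance values are illustrative assumptions, not taken from the text): the differential $\partial$ becomes the incidence matrix of the graph, and the resistances give the inner product on 1-chains.

```python
import numpy as np

# A hypothetical circuit: vertices {0,1,2}, edges 0->1, 1->2, 0->2,
# with resistances 1, 2, 3 (all choices here are illustrative).
source = [0, 1, 0]
target = [1, 2, 2]
R = np.array([1.0, 2.0, 3.0])
num_vertices, num_edges = 3, len(source)

# The differential ∂ : C_1(Γ) -> C_0(Γ), ∂(e) = t(e) - s(e),
# written as a matrix in the bases of edges and vertices.
partial = np.zeros((num_vertices, num_edges))
for e in range(num_edges):
    partial[target[e], e] += 1.0
    partial[source[e], e] -= 1.0

# Inner products: <e, e'> = R(e) δ_{e,e'} on 1-chains,
# while the vertices form an orthonormal basis of C_0(Γ).
inner_C1 = np.diag(R)
inner_C0 = np.eye(num_vertices)

# ∂ of the edge from vertex 0 to vertex 1 is the 0-chain v_1 - v_0.
print(partial[:, 0])   # -> [-1.  1.  0.]
```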

### Cochain Complexes From Circuits

The dual of the chain complex $C_*(\Gamma)$ is a cochain complex of finite-dimensional real Hilbert spaces, $C^*(\Gamma)$. As we shall see, a 1-cochain in this complex can be used to describe the voltage across wires of our circuit.

We call the differential in this cochain complex $d$. It is given by

$(d\phi)(e) = \phi(t(e)) - \phi(s(e))$

But since any real Hilbert space is equipped with a canonical isomorphism to its dual, we get isomorphisms

$r: C_0(\Gamma) \to C^0(\Gamma)$
$r: C_1(\Gamma) \to C^1(\Gamma)$

Explicitly, these are given by:

(1)$a(\beta) = \langle r(a), \beta \rangle$

where $a \in C_i(\Gamma)$ and $\beta \in C^i(\Gamma)$.

Using these isomorphisms, we can transfer the differential $\partial$ on $C_*(\Gamma)$ to a differential on $C^*(\Gamma)$, which we call

$d^\dagger : C^1(\Gamma) \to C^0(\Gamma)$

In other words, we define $d^\dagger$ so that this diagram commutes:

$\begin{array}{ccc} C_0(\Gamma) & \stackrel{\partial}{\leftarrow} & C_1(\Gamma) \\ r\downarrow && \downarrow r \\ C^0(\Gamma) & \stackrel{d^\dagger}{\leftarrow} & C^1(\Gamma) \end{array}$

or in other words:

(2)$d^\dagger r = r \partial$

We use the dagger notation because $d^\dagger$ really is the Hilbert space adjoint of $d$:

(3)$\langle d^\dagger \alpha, \beta \rangle = \langle \alpha, d \beta \rangle$

for all $\alpha \in C^1(\Gamma)$, $\beta \in C^0(\Gamma)$. This follows immediately from (1) and (2) if we choose $a$ with $r a = \alpha$:

$\begin{array}{ccl} \langle d^\dagger \alpha, \beta \rangle &=& \langle d^\dagger r a, \beta \rangle \\ &=& \langle r \partial a , \beta \rangle \\ &=& (\partial a)(\beta) \\ &=& a(d \beta) \\ &=& \langle r a, d \beta \rangle \\ &=& \langle \alpha, d \beta \rangle \end{array}$

The inclusion of circuits

$\iota : \partial \Gamma \to \Gamma$

gives an inclusion of chain complexes

$\iota_* : C_*(\partial \Gamma) \to C_*(\Gamma)$

and then, by taking duals, a map of cochain complexes $\iota^* : C^*(\Gamma) \to C^*(\partial \Gamma)$. Henceforth we call this map

$p: C^*(\Gamma) \to C^*(\partial \Gamma)$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to restricting a function on the set of vertices $V$ to a function on the set of terminals.

Since we have cochain complexes of finite-dimensional real Hilbert spaces, we can also take the Hilbert space adjoint to get a map $p^\dagger : C^*(\partial \Gamma) \to C^*(\Gamma)$. We write this map as

$i: C^*(\partial \Gamma) \to C^*(\Gamma)$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to extending a function on the set of terminals to a function on the set of vertices that is zero on the vertices that are not terminals.

The following standard facts will come in handy:

###### Proposition

If the maps $i,r,p$ and $\iota_*$ are defined as above, then

(4)$i r = r \iota_*$
(5)$p i = 1$

and

(6)$(ker p)^\perp = im i$
###### Proof

Equations (4) and (5) can be checked directly on basis vectors. For (4), both $i r$ and $r \iota_*$ vanish in degree 1, and in degree 0 both send a terminal $v \in \partial V$, viewed as a basis vector of $C_0(\partial \Gamma)$, to the 0-cochain on $\Gamma$ equal to 1 at $v$ and 0 at all other vertices. For (5), $p i$ extends a 0-cochain on the terminals by zero to all of $V$ and then restricts it back to the terminals, returning the original cochain.

Since $i = p^\dagger$, Equation (6) follows from a general fact about a linear map $T$ between finite-dimensional Hilbert spaces: $(\mathrm{ker}\, T)^\perp = \mathrm{im}\, T^\dagger$.
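In coordinates, $p$ is just a selection matrix and $i$ is its transpose, so the proposition can be checked numerically. A minimal sketch (the choice of three vertices with terminals $\{0, 2\}$ is an assumption for illustration):

```python
import numpy as np

# Hypothetical example: vertices {0,1,2}, terminals {0,2}.
num_vertices = 3
terminals = [0, 2]

# p restricts a 0-cochain on all vertices to the terminals;
# since the bases are orthonormal, i = p† is just the transpose,
# extending a 0-cochain on the terminals by zero.
p = np.zeros((len(terminals), num_vertices))
for row, v in enumerate(terminals):
    p[row, v] = 1.0
i = p.T

# Equation (5): p i = 1 on C^0(∂Γ).
assert np.allclose(p @ i, np.eye(len(terminals)))

# Equation (6): ker p is spanned by the non-terminal vertices, and its
# orthogonal complement is the span of the terminal vertices, i.e. im i.
ker_vector = np.array([0.0, 1.0, 0.0])       # supported off the terminals
assert np.allclose(p @ ker_vector, 0.0)      # lies in ker p
assert np.allclose(i.T @ ker_vector, 0.0)    # orthogonal to im i
```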

### Kirchhoff’s Laws

Given a circuit, we shall focus on two quantities: a 1-chain $I \in C_1(\Gamma)$ called the current and a 1-cochain $V \in C^1(\Gamma)$ called the voltage. In 1847, Gustav Kirchhoff formulated two laws governing these quantities.

We say Kirchhoff’s voltage law holds if

$V = d \phi$

for some $\phi \in C^0(\Gamma)$ called the potential. If Kirchhoff’s voltage law holds for some voltage $V$, the potential $\phi$ is hardly ever unique. But we can say exactly how much it fails to be unique: given $\phi_1, \phi_2 \in C^0(\Gamma)$, we have $d \phi_1 = d \phi_2$ if and only if their difference is constant on each connected component of the graph $\Gamma$.

We say Kirchhoff’s current law holds if

(7)$\partial I = \iota_* J$

for some $J \in C_0(\partial \Gamma)$, called the boundary current. This says that the total current flowing in or out of any vertex is zero unless that vertex is a terminal. If Kirchhoff’s current law holds for $I$, the boundary current $J$ is unique, since $\iota_* : C_0(\partial \Gamma) \to C_0(\Gamma)$ is one-to-one.

### Ohm’s Law

In 1827 Georg Ohm published a book which included a relation between the voltage and current for circuits made of resistors (O). At the time, the critical reception was harsh: one contemporary called Ohm’s work “a web of naked fancies, which can never find the semblance of support from even the most superficial of observations”, and the German Minister of Education said that a professor who preached such heresies was unworthy to teach science (D,H). However, a simplified version of his relation is now widely used under the name of “Ohm’s law”.

As we have seen, the resistance lets us define an inner product on the vector space $C_1(\Gamma)$, which gives an isomorphism $r: C_1(\Gamma) \to C^1(\Gamma)$ as defined in (1). We say Ohm’s law holds if the voltage $V$ and current $I$ are related as follows:

(8)$V = r I$

This allows us to express $I$ in terms of $V$:

$I = r^{-1} V$

Kirchhoff’s voltage law then lets us write $I$ in terms of $\phi$:

$I = r^{-1} d \phi$

Given this, what does Kirchhoff’s current law say in terms of $\phi$? The answer is this:

###### Proposition

Kirchhoff’s current law holds for $I = r^{-1} d \phi$ if and only if

(9)$d^\dagger d \phi = i \chi$

for some $\chi \in C^0(\partial \Gamma)$. Moreover, in this case we can take $\chi$ to be given by

(10)$\chi = r J$

where $J$ is the boundary current given by Kirchhoff’s current law.

###### Proof

Assume Kirchhoff’s current law: $\partial I = \iota_* J$ for some $J$. Then we have

(11)$d^\dagger d \phi = d^\dagger V = d^\dagger r I = r \partial I = r \iota_* J = i r J$

Here the first step uses Kirchhoff’s voltage law, the second uses Ohm’s law; the third uses (2), the fourth uses Kirchhoff’s current law, and the last step uses (4). Thus $d^\dagger d \phi = i \chi$ if we take $\chi = r J$.

Conversely, suppose $d^\dagger d \phi = i \chi$. Then taking $J = r^{-1} \chi$, the same sort of reasoning run in reverse shows that $\partial I = \iota_* J$, so Kirchhoff’s current law holds.
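For concreteness, here is the proposition checked on the simplest interesting example: two resistors in series. All the numbers below (resistances $1$ and $2$, the potential $\phi$) are illustrative assumptions. In coordinates, the operator $d^\dagger d$ is the weighted graph Laplacian built from the conductances $1/R(e)$.

```python
import numpy as np

# Hypothetical series circuit: vertices {0,1,2}, edges 0->1 and 1->2,
# resistances R = (1, 2), terminals ∂V = {0, 2}.
source, target = [0, 1], [1, 2]
R = np.array([1.0, 2.0])

# Incidence matrix of ∂; its transpose is the matrix of d.
D = np.zeros((3, 2))
for e in range(2):
    D[target[e], e] += 1.0
    D[source[e], e] -= 1.0

# In these coordinates d†d is the weighted graph Laplacian
# with conductances 1/R(e) on the edges.
L = D @ np.diag(1.0 / R) @ D.T

# A potential obeying Kirchhoff's voltage law and Ohm's law:
# φ = (0, 1, 3) gives V = dφ = (1, 2) and current I = r⁻¹V = (1, 1).
phi = np.array([0.0, 1.0, 3.0])
lap_phi = L @ phi

# d†dφ vanishes at the interior vertex 1 (Kirchhoff's current law)...
assert np.isclose(lap_phi[1], 0.0)

# ...and at the terminals it equals i χ with χ = r J,
# where J = (-1, 1) is the boundary current at the terminals {0, 2}.
assert np.allclose(lap_phi, [-1.0, 0.0, 1.0])
```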

### The Principle of Minimum Power

In this section we always assume Kirchhoff’s voltage law and Ohm’s law.

Given a circuit $\Gamma$ with voltage $V$ and current $I$, the power dissipated by the circuit is defined to be

$P = V(I)$

where we are pairing the 1-chain $I$ and the 1-cochain $V$ to get a real number. Ohm’s law allows us to rewrite $I$ as $r^{-1} V$, so the power can be expressed in terms of the voltage:

$P = V(r^{-1} V) = \langle V, V \rangle$

Kirchhoff’s voltage law allows us to write $V$ as $d \phi$, so the power can also be expressed in terms of the potential:

$P = \langle d \phi, d \phi \rangle$

This expression lets us formulate the ‘principle of minimum power’, which gives us information about the potential $\phi$ given its restriction to the boundary of $\Gamma$. This restriction is an element of $C^0(\partial \Gamma)$, and in general we call any element of this space a boundary potential.

###### Definition

We say a potential $\phi \in C^0(\Gamma)$ obeys the principle of minimum power for a boundary potential $\psi \in C^0(\partial \Gamma)$ if $\phi$ minimizes the power $\langle d \phi, d \phi \rangle$ subject to the constraint that $p \phi = \psi$.

###### Proposition

A potential $\phi$ obeys the principle of minimum power for some boundary potential $\psi$ if and only if $I = r^{-1} d \phi$ obeys Kirchhoff’s current law.

###### Proof

If $\phi$ obeys the principle of minimum power for some boundary potential $\psi$, then for any $\phi' \in C^0(\Gamma)$ with $p \phi' = 0$ we must have

$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$

or in other words:

$\langle d \phi', d \phi \rangle = 0$

or

$\langle \phi' , d^\dagger d \phi \rangle = 0$

This means that $d^\dagger d \phi \in (ker p)^\perp$, so by (6) we have

$d^\dagger d \phi = i \chi$

for some $\chi \in C^0(\partial \Gamma)$. By the proposition of the previous section, this equation implies Kirchhoff’s current law for $I = r^{-1} d \phi$. Conversely, Kirchhoff’s current law for $I$ implies the above equation and thus, running the above calculation backwards,

$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$

It follows that $\phi$ is a critical point for the power as a function on potentials satisfying the constraint $p \phi = \psi$. But since the power is a nonnegative quadratic form, $\phi$ must minimize the power among such potentials.

### The Dirichlet problem

We have seen that a potential $\phi$ gives a solution of all three basic equations governing electric circuits made from linear resistors—Kirchhoff’s voltage law, Kirchhoff’s current law and Ohm’s law—if and only if this equation holds:

(12)$d^\dagger d \phi = i \chi$

Our next task is to solve this equation. But first, some remarks are in order.

The operator

$d^\dagger d : C^0(\Gamma) \to C^0(\Gamma)$

acts as a discrete analogue of the Laplacian for the graph $\Gamma$, so we call this operator the Laplacian of $\Gamma$. Equation (12) is thus a version of Laplace’s equation with boundary conditions. It says the Laplacian of the potential $\phi \in C^0(\Gamma)$ equals zero except on the boundary of $\Gamma$, where it equals $\chi$.

We could try to solve for $\phi$ given $\chi$. However, we prefer a slightly different approach, which emphasizes the role of the boundary potential $\psi = p \phi$. After all, we have seen that $\phi$ solves Equation (12) for some $\chi$ if and only if $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. We call the problem of finding a potential $\phi$ that minimizes the power for a fixed value of $\psi = p \phi$ a discrete version of the Dirichlet problem.

As we shall see, this version of the Dirichlet problem always has a solution. However, the solution is not necessarily unique. If we take a solution $\phi$ and add to it some $\alpha \in C^0(\Gamma)$ with $d \alpha = 0$ and $p \alpha = 0$, we clearly get another solution. It should be intuitively clear that such an $\alpha$ is a function on the vertices of $\Gamma$ that is constant on each connected component and vanishes on the boundary of $\Gamma$. To make this precise we need some standard concepts from graph theory:

###### Definition

Given two vertices $v, w$ of a graph $\Gamma$, a path from $v$ to $w$ is a finite sequence of vertices $v = v_0, v_1, \dots , v_n = w$ and edges $e_1, \dots , e_n$ such that for each $1 \le i \le n$, either $e_i$ is an edge from $v_{i-1}$ to $v_i$, or an edge from $v_i$ to $v_{i-1}$.

###### Definition

A subset $S$ of the vertices of a graph $\Gamma$ is connected if for each pair of vertices in $S$, there is a path from one to the other.

###### Definition

A connected component of a graph $\Gamma$ is a maximal connected subset of the vertices of $\Gamma$.

In the theory of directed graphs, the qualifier ‘strongly’ is commonly used before the word ‘connected’ in the last two definitions. However, we never consider any other sort of connectedness, so we omit this qualifier.

###### Definition

A connected component of $\Gamma$ touches the boundary if it contains a vertex in $\partial \Gamma$.

It is easy to see that $\alpha \in C^0(\Gamma)$ obeys $d \alpha = 0$ if and only if it is constant on each connected component of $\Gamma$. If moreover $p \alpha = 0$, then $\alpha$ must vanish on all connected components touching the boundary.

With these preliminaries in hand, we can solve the Dirichlet problem:

###### Proposition

For any boundary potential $\psi \in C^0(\partial \Gamma)$ there exists a potential $\phi$ obeying the principle of minimum power for $\psi$. If we also demand that $\phi$ vanish on every connected component of $\Gamma$ not touching the boundary, then $\phi$ is unique, and depends linearly on $\psi$.

###### Proof

For existence, note that a nonnegative quadratic form restricted to an affine subspace of a real vector space must reach a minimum somewhere on this subspace. So, because the power $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the space $C^0(\Gamma)$, for any $\psi \in C^0(\partial \Gamma)$ the power must reach a minimum somewhere on the affine subspace

$X = \{ \phi : p \phi = \psi \} .$

For uniqueness, suppose that $\phi, \phi' \in X$ both minimize the power. Let

$\alpha = \phi' - \phi .$

Then $p \alpha = 0$, so $\phi + t \alpha$ lies in $X$ for all $t \in \mathbb{R}$. Thus, the function

$P(t) = \langle d(\phi + t \alpha), d(\phi + t \alpha) \rangle$

attains its minimum value both at $t = 0$ and at $t = 1$. Since this function is smooth, we must have $P'(0) = 0$. Since

$P(t) = \langle d \phi, d \phi \rangle + 2 t \langle d \phi, d \alpha \rangle + t^2 \langle d \alpha, d \alpha \rangle,$

it follows that $\langle d \phi, d \alpha \rangle = 0$. Thus

$P(t) = \langle d \phi, d \phi \rangle + t^2 \langle d \alpha, d \alpha \rangle .$

Since this function takes on the same value at $t = 0$ and $t = 1$, we must have $d \alpha = 0$. This implies that $\alpha$ is constant on each connected component of $\Gamma$. Furthermore, since $p \alpha = 0$, $\alpha$ vanishes on each connected component of $\Gamma$ touching the boundary.

Thus, if we demand that both $\phi$ and $\phi'$ vanish on every connected component of $\Gamma$ that does not touch the boundary, $\alpha = \phi' - \phi$ vanishes on every connected component of $\Gamma$. It follows that $\phi = \phi'$, giving the desired uniqueness.

To prove that $\phi$ depends linearly on $\psi$, suppose that for $i = 1,2$ the potential $\phi_i$ obeys the principle of minimum power for $\psi_i$ and vanishes on every component of $\Gamma$ not touching the boundary. Then by the propositions of the previous two sections, we have

$d^\dagger d \phi_i = i \chi_i$

for some $\chi_i \in C^0(\partial \Gamma)$. It follows that for any real numbers $c_1$ and $c_2$, the potential $\phi = c_1 \phi_1 + c_2 \phi_2$ obeys

$d^\dagger d \phi = i \chi$

where $\chi = c_1 \chi_1 + c_2 \chi_2$. By another application of these propositions, it follows that $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. But since

$p \phi = p(c_1 \phi_1 + c_2 \phi_2) = c_1 \psi_1 + c_2 \psi_2 ,$

we must have $\psi = c_1 \psi_1 + c_2 \psi_2$. So, $\phi$ depends linearly on $\psi$.
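The proof above is constructive enough to implement. Splitting the vertices into terminals and interior vertices, minimizing the power for a fixed $\psi$ amounts to solving a linear system for the interior values. A sketch on a hypothetical two-resistor series circuit (all numbers illustrative; when every component touches the boundary, the interior block of the Laplacian is invertible):

```python
import numpy as np

# Series circuit: vertices {0,1,2}, edges 0->1 and 1->2,
# resistances (1, 2), terminals B = {0, 2}, interior N = {1}.
D = np.array([[-1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])
L = D @ np.diag([1.0, 0.5]) @ D.T      # Laplacian d†d, conductances 1/R
B, N = [0, 2], [1]

def solve_dirichlet(psi):
    """Minimize <dφ, dφ> subject to φ|B = ψ.

    Setting the gradient to zero at the interior vertices gives
    L_NN φ_N = -L_NB ψ, a positive definite system here."""
    L_NN = L[np.ix_(N, N)]
    L_NB = L[np.ix_(N, B)]
    phi = np.zeros(3)
    phi[B] = psi
    phi[N] = np.linalg.solve(L_NN, -L_NB @ psi)
    return phi

# Boundary potential 0 at vertex 0 and 3 at vertex 2: the interior
# vertex sits at the conductance-weighted average, φ(1) = 1.
phi = solve_dirichlet(np.array([0.0, 3.0]))
assert np.allclose(phi, [0.0, 1.0, 3.0])
```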

Note from the proof of the above proposition that:

###### Proposition

Suppose $\psi \in C^0(\partial \Gamma)$ and $\phi$ is a potential obeying the principle of minimum power for $\psi$. Then $\phi'$ obeys the principle of minimum power for $\psi$ if and only if the difference $\phi' - \phi$ is constant on every connected component of $\Gamma$ and vanishes on every connected component touching the boundary of $\Gamma$.

Bamberg and Sternberg (BS) describe another way to solve the Dirichlet problem, going back to Weyl (W).

### Lumped Circuits

In this section we always assume that the principle of minimum power holds, as well as Kirchhoff’s voltage law and Ohm’s law.

Under these circumstances, we shall see that the boundary potential determines the boundary current. A ‘lumped circuit’ is an equivalence class of circuits, where two are considered equivalent when the boundary current is the same function of the boundary potential. The idea is that the boundary current and boundary potential are all that can be observed ‘from outside’, i.e. by making measurements at the terminals. Restricting our attention to what can be observed by making measurements at the terminals amounts to treating a circuit as a ‘black box’: that is, treating its interior as hidden from view. So, two circuits give the same lumped circuit when they behave the same as ‘black boxes’.

First let us check that the boundary current is a function of the boundary potential. For this we introduce an important quadratic form on the space of boundary potentials:

###### Definition

For any $\psi \in C^0(\partial \Gamma)$, let

$Q(\psi) = \frac{1}{2} \inf_{\{\phi : p \phi = \psi\} } \; \langle d \phi, d \phi \rangle$

Since $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the finite-dimensional vector space $C^0(\Gamma)$ and the constraint $p \phi = \psi$ picks out an affine subspace of this space, the infimum above is actually attained. One can check that $Q(\psi)$ is a nonnegative quadratic form on $C^0(\partial \Gamma)$.

Up to a factor of $\frac{1}{2}$, $Q(\psi)$ is just the power dissipated by the circuit when the boundary potential is $\psi$, thanks to the principle of minimum power. The factor of $\frac{1}{2}$ simplifies the next proposition, which uses $Q$ to compute the boundary current as a function of the boundary potential.

Since $Q$ is a smooth real-valued function on $C^0(\partial \Gamma)$, its differential $d Q$ at any given point $\psi \in C^0(\partial \Gamma)$ defines an element of the dual space $C_0(\partial \Gamma)$, which we denote by $d Q_\psi$. In fact, this element is equal to the boundary current $J$ corresponding to the boundary potential $\psi$:

###### Proposition

Suppose $\psi \in C^0(\partial \Gamma)$. Suppose $\phi$ is any potential minimizing the power $\langle d \phi , d \phi \rangle$ subject to the constraint $p \phi = \psi$. Let $V = d \phi$ be the corresponding voltage, $I = r^{-1} V$ the current, and $\partial I = \iota_* J$ where $J$ is the corresponding boundary current. Then

(13)$d Q_\psi = J .$
###### Proof

Note first that while there may be several choices of $\phi$ minimizing the power subject to the constraint that $p \phi = \psi$, the previous proposition says that the difference between any two choices vanishes on all components touching the boundary of $\Gamma$. Thus, these two choices give the same value for $J$. So, with no loss of generality we may assume $\phi$ is the unique choice that vanishes on all components not touching the boundary. By the proposition solving the Dirichlet problem, there is a linear operator

$f: C^0(\partial \Gamma) \to C^0(\Gamma)$

sending $\psi \in C^0(\partial \Gamma)$ to this choice of $\phi$, and then

$Q(\psi) = \frac{1}{2} \langle d f \psi, d f \psi \rangle .$

Given any $\psi' \in C^0(\partial \Gamma)$, we thus have

$\begin{array}{ccl} d Q_\psi (\psi') &=& \left. \frac{d}{d t} Q(\psi + t \psi') \right|_{t = 0} \\ &=& \frac{1}{2} \left. \frac{d}{d t} \langle d f(\psi + t \psi'), d f (\psi + t \psi') \rangle \right|_{t = 0} \\ &=& \langle d f \psi, d f \psi' \rangle \\ &=& \langle d^\dagger d f \psi , f \psi' \rangle \\ &=& \langle i r J , f \psi' \rangle \end{array}$

where in the last step we use Equation (11). Since $i^\dagger = p$, we obtain

$\begin{array}{ccl} d Q_\psi (\psi') &=& \langle r J, p f \psi' \rangle \\ &=& \langle r J , \psi' \rangle \\ &=& J(\psi') \end{array}$

where in the last step we use Equation (1). It follows that $d Q_\psi = J$.
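In coordinates this proposition says that $2Q$ is the quadratic form of the Schur complement of the Laplacian with respect to the interior vertices (often called the response matrix or Dirichlet-to-Neumann map), and its gradient at $\psi$ is the boundary current. A numeric check on a hypothetical two-resistor series circuit (its Laplacian is hard-coded below; all numbers illustrative):

```python
import numpy as np

# Series circuit with resistances 1 and 2: Laplacian with conductances
# 1 and 1/2, terminals B = {0, 2}, interior N = {1}.
L = np.array([[1.0, -1.0, 0.0], [-1.0, 1.5, -0.5], [0.0, -0.5, 0.5]])
B, N = [0, 2], [1]

# Eliminating the interior vertices gives the Schur complement S,
# whose quadratic form is 2 Q(ψ) = ψᵀ S ψ.
L_BB, L_BN = L[np.ix_(B, B)], L[np.ix_(B, N)]
L_NB, L_NN = L[np.ix_(N, B)], L[np.ix_(N, N)]
S = L_BB - L_BN @ np.linalg.inv(L_NN) @ L_NB

def Q(psi):
    return 0.5 * psi @ S @ psi

# Two resistors in series act like one of resistance 3 = 1 + 2,
# so S is the Laplacian of a single edge with conductance 1/3.
assert np.allclose(S, [[1/3, -1/3], [-1/3, 1/3]])

# dQ_ψ = S ψ equals the boundary current J = (-1, 1)...
psi = np.array([0.0, 3.0])
assert np.allclose(S @ psi, [-1.0, 1.0])

# ...and matches a finite-difference gradient of Q.
eps = 1e-6
grad = [(Q(psi + eps*e) - Q(psi - eps*e)) / (2*eps) for e in np.eye(2)]
assert np.allclose(grad, S @ psi)
```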

## Categories of Circuits

In this section we define a category of circuits, and also a category of lumped circuits. Both of these are dagger-compact categories.

There is a category where objects are finite sets of points, and a morphism $f : S \to T$ is an equivalence class of circuits

$\Gamma = \left(s,t : E \to V, V_{\pm}, R: E \to (0,+\infty) \right)$

equipped with bijections

$i: S \to V_-, \qquad j: T \to V_+ .$

The equivalence relation is as follows: $(\Gamma, i, j)$ is equivalent to $(\Gamma', i', j')$ if there is an isomorphism of circuits $f : \Gamma \to \Gamma'$ such that

$f i = i' , \qquad f j = j'.$

The composition of circuits is given by pushout of cospans…

This category is symmetric monoidal, and in fact a dagger-compact category. WHY???

### Dirichlet forms

We have seen that a lumped circuit is completely specified by the vector space $C^0(\partial \Gamma)$ along with its distinguished basis and the quadratic form $Q$. Now we describe which quadratic forms can arise this way. They are known as ‘Dirichlet forms’, and they admit a number of equivalent characterizations. We start with the simplest.

Given a finite set $S$, let $\mathbb{R}^S$ be the vector space of functions $\psi: S \to \mathbb{R}$. A Dirichlet form on $S$ will be a certain sort of quadratic form on $\mathbb{R}^S$:

###### Definition

Given a finite set $S$, a Dirichlet form on $S$ is a quadratic form $Q: \mathbb{R}^S \to \mathbb{R}$ given by the formula

$Q(\psi) = \sum_{i,j} c_{i j} (\psi_i - \psi_j)^2$

for some nonnegative real numbers $c_{i j}$.

Note that we may assume without loss of generality that $c_{i i} = 0$ and $c_{i j} = c_{j i}$; we do this henceforth. Any Dirichlet form is nonnegative: $Q(\psi) \ge 0$ for all $\psi \in \mathbb{R}^S$. However, not all nonnegative quadratic forms are Dirichlet forms. For example, if $S = \{1, 2\}$:

$Q(\psi) = (\psi_1 + \psi_2)^2$

is not a Dirichlet form.
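One way to see this concretely: every Dirichlet form vanishes on constant functions, since it depends only on the differences $\psi_i - \psi_j$, but this quadratic form does not. A minimal check (the weight $2.5$ is an arbitrary illustrative choice):

```python
# Every Dirichlet form Σ c_ij (ψ_i - ψ_j)² vanishes when ψ is constant,
# so a quadratic form that is nonzero on constants cannot be Dirichlet.

def dirichlet_form(c):
    """Build Q(ψ) = Σ_{i,j} c[i][j] (ψ_i - ψ_j)² from nonnegative weights."""
    def Q(psi):
        n = len(psi)
        return sum(c[i][j] * (psi[i] - psi[j]) ** 2
                   for i in range(n) for j in range(n))
    return Q

Q_good = dirichlet_form([[0.0, 2.5], [2.5, 0.0]])   # a Dirichlet form
Q_bad = lambda psi: (psi[0] + psi[1]) ** 2          # the example above

constant = [1.0, 1.0]
assert Q_good(constant) == 0.0     # as for every Dirichlet form
assert Q_bad(constant) == 4.0      # so Q_bad is not a Dirichlet form
```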

In fact, the concept of Dirichlet form is vastly more general: such quadratic forms are studied not just on finite-dimensional vector spaces $\mathbb{R}^S$ but on $L^2$ of any measure space. When this measure space is just a finite set, the concept of Dirichlet form reduces to the definition above. For a thorough introduction to Dirichlet forms, see the text by Fukushima (F). For a fun tour of the underlying ideas, see the paper by Doyle and Snell (DS).

We will not really need any other characterizations of Dirichlet forms, but they do help illuminate the concept:

###### Proposition

Given a finite set $S$ and a quadratic form $Q : \mathbb{R}^S \to \mathbb{R}$, the following are equivalent:

1. $Q$ is a Dirichlet form.

2. $Q(\phi) \le Q(\psi)$ whenever $|\phi_i - \phi_j| \le |\psi_i - \psi_j|$ for all $i, j$.

3. $Q(\phi) = 0$ whenever $\phi_i$ is independent of $i$, and $Q$ obeys the Markov property: $Q(\psi) \le Q(\phi)$ when $\psi_i = \min (\phi_i, 1)$.

###### Proof

See Fukushima (F).

## A Category of Lumped Circuits

We begin with a naive attempt to construct a category where the morphisms are lumped circuits. This naive attempt doesn’t quite work, because it doesn’t include identity morphisms. However, it points in the right direction.

Given finite sets $S$ and $T$, let $S+T$ denote their disjoint union. Let $D(S,T)$ be the set of Dirichlet forms on $\mathbb{R}^{S + T}$. There is a way to compose these Dirichlet forms:

$\circ : D(T,U) \times D(S,T) \to D(S,U)$

defined as follows. Given $Q \in D(S,T)$ and $R \in D(T,U)$, let

$(R \circ Q)(\alpha, \gamma) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\alpha, \beta) + R(\beta, \gamma) \right)$

where $\alpha \in \mathbb{R}^S$, $\gamma \in \mathbb{R}^U$. Moreover, this composition is associative:

$(P \circ Q) \circ R = P \circ (Q \circ R)$

However, there is typically no Dirichlet form $1_S \in D(S,S)$ playing the role of the identity for this composition. A ‘category without identity morphisms’ is called a semicategory, so we see

###### Proposition

There is a semicategory where:

• the objects are finite sets,

• a morphism from $S$ to $T$ is a Dirichlet form $Q \in D(S,T)$,

• composition of morphisms is given by

$(R \circ Q)(\alpha, \gamma) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\alpha, \beta) + R(\beta, \gamma) \right) .$
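The infimum in this composition rule can be computed in closed form, since it is the minimum of a quadratic in $\beta$. Here is a sketch for one-element sets $S$, $T$, $U$, where $Q$ and $R$ are single-resistor forms with conductances $c_1$, $c_2$ (all illustrative choices); the composite is the form of two resistors in series:

```python
# Compose Q(α, β) = c1 (α - β)² with R(β, γ) = c2 (β - γ)² by
# minimizing over the middle variable β, as in the definition above.
# Here S, T, U are one-element sets, so α, β, γ are scalars.

def compose(c1, c2):
    def composite(alpha, gamma):
        # The minimizing β is the conductance-weighted average.
        beta = (c1 * alpha + c2 * gamma) / (c1 + c2)
        return c1 * (alpha - beta) ** 2 + c2 * (beta - gamma) ** 2
    return composite

# Conductances c = 1/R for resistances 1 and 2: in series they act
# like resistance 3, i.e. the composite form is (1/3)(α - γ)².
Q_comp = compose(1.0, 0.5)
assert abs(Q_comp(0.0, 3.0) - 3.0) < 1e-12
```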

We would like to make this into a category. The easy way is to formally adjoin identity morphisms; this trick works for any semicategory. This amounts to introducing some circuits that contain wires with zero resistance. However, we obtain a better category if we include more morphisms: more circuits having wires with zero resistance.

• S. Abramsky and B. Coecke, A categorical semantics of quantum protocols, in Proceedings of the 19th IEEE conference on Logic in Computer Science (LiCS’04), IEEE Computer Science Press, ????, 2004. Also available as http://arxiv.org/abs/quant-ph/0402130.
• P. Bamberg and S. Sternberg, A Course of Mathematics for Students of Physics 2, Chap. 12: The theory of electrical circuits, Cambridge University, Cambridge, 1982.
• G. E. Ching, Topological concepts in networks; an application of homology theory to network analysis, Proc. 11th. Midwest Conference on Circuit Theory, University of Notre Dame, 1968, pp. 165-175.
• B. Davies, A web of naked fancies?, Phys. Educ. 15 (1980), 57-61.
• P. G. Doyle and J. L. Snell, Random Walks and Electric Networks, Carus Mathematical Monographs 22, Mathematical Association of America, Washington DC, 1984. Also available as arXiv:math/0001057.
• M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.
• P. W. Gross and P. R. Kotiuga, Electromagnetic Theory and Computation: A Topological Approach, Cambridge University Press, 2004.
• I. B. Hart, Makers of Science, Oxford U. Press, London, 1923, p. 243.
• P. Katis, N. Sabadini, R. F. C. Walters, On the algebra of systems with feedback and boundary, Rendiconti del Circolo Matematico di Palermo Serie II, Suppl. 63 (2000), 123–156.
• J. Kigami, Analysis on Fractals, Cambridge U. Press. First 60 pages available at http://www-an.acs.i.kyoto-u.ac.jp/~kigami/AOF.pdf.
• Z.-M. Ma and M. Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1991.
• G. Ohm, Die Galvanische Kette, Mathematisch Bearbeitet, T. H. Riemann, Berlin, 1827. Also available at http://www.ohm-hochschule.de/bib/textarchiv/Ohm.Die_galvanische_Kette.pdf.
• J. P. Roth, Existence and uniqueness of solutions to electrical network problems via homology sequences, Mathematical Aspects of Electrical Network Theory, SIAM-AMS Proceedings III, 1971, pp. 113-118.
• C. Sabot, Existence and uniqueness of diffusions on finitely ramified self-similar fractals, Section 1: Dirichlet forms on finite sets and electrical networks, Annales Scientifiques de l’École Normale Supérieure, Sér. 4, 30 (1997), 605-673. Also available at http://www.numdam.org/numdam-bin/item?id=ASENS_1997_4_30_5_605_0.
• C. Sabot, Electrical networks, symplectic reductions, and application to the renormalization map of self-similar lattices, Proc. Sympos. Pure Math. 72 (2004), 155-205. Also available as arXiv:math-ph/0304015.
• P. Selinger, Dagger compact closed categories and completely positive maps, in Proceedings of the 3rd International Workshop on Quantum Programming Languages (QPL 2005), ENTCS 170 (2007), 139–163. Also available at http://www.mscs.dal.ca/~selinger/papers/dagger.pdf.
• P. Slepian, Mathematical Foundations of Network Analysis, Springer, Berlin, 1968.
• S. Smale, On the mathematical foundations of electrical network theory, J. Diff. Geom. 7 (1972), 193-210.
• H. Weyl, Repartición de corriente en una red conductora, Revista Matemática Hispano-Americana 5 (1923), 153-164.