In computer science, polymorphism is the definition of more than one function with the same name. One usually distinguishes two types of polymorphism: ad hoc polymorphism and parametric polymorphism.

Ad hoc polymorphism

In ad hoc polymorphism, one simply defines multiple functions with the same name and different types, relying on the compiler (or, in some cases, the run-time system) to determine the correct function to call based on the types of its arguments and return value. This is also called overloading. For instance, using a mathematical notation, one might define functions

add : \mathbb{N} \times \mathbb{N} \to \mathbb{N}
add : \mathbb{R} \times \mathbb{R} \to \mathbb{R}

and then when add(3,2) is invoked, the compiler knows to call the first function since 3 and 2 are natural numbers, whereas when add(4.2,\pi) is invoked it calls the second function since 4.2 and \pi are real numbers.
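The dispatch described above can be sketched in Rust, where ad hoc polymorphism is expressed through trait implementations: the name `add` is shared, but each type supplies its own independent definition. (The trait `MyAdd` and the numeric values are illustrative assumptions, not from the text; the article's own examples use Haskell/ML-style notation.)

```rust
// A sketch of ad hoc polymorphism (overloading) via a trait:
// one shared name, a separate implementation per type.
trait MyAdd {
    fn add(self, other: Self) -> Self;
}

// "add : N x N -> N"
impl MyAdd for u32 {
    fn add(self, other: Self) -> Self { self + other }
}

// "add : R x R -> R"
impl MyAdd for f64 {
    fn add(self, other: Self) -> Self { self + other }
}

fn main() {
    // The compiler selects the implementation from the argument types:
    assert_eq!(3u32.add(2), 5);        // u32 implementation
    assert_eq!(4.5f64.add(0.25), 4.75); // f64 implementation
}
```

Here the resolution happens entirely at compile time, matching the article's description of the compiler choosing the correct function from the argument types.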

Note that there is nothing which stipulates that the behavior of a class of ad hoc polymorphic functions with the same name should be at all similar. Nothing prevents us from defining add : \mathbb{N} \times \mathbb{N} \to \mathbb{N} to add its arguments but add : \mathbb{R} \times \mathbb{R} \to \mathbb{R} to subtract its arguments. Of course, it is good programming practice to make overloaded functions similar in their behavior.
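This lack of any coherence guarantee can be made concrete: in the following sketch (a deliberately bad, hypothetical example, in the spirit of the one in the text) the natural-number version adds while the real-number version subtracts, and the compiler accepts both without complaint.

```rust
// Nothing forces overloaded implementations to agree in behavior:
// here, against good practice, the f64 version subtracts.
trait BadAdd {
    fn add(self, other: Self) -> Self;
}

impl BadAdd for u32 {
    fn add(self, other: Self) -> Self { self + other } // adds
}

impl BadAdd for f64 {
    fn add(self, other: Self) -> Self { self - other } // subtracts!
}

fn main() {
    assert_eq!(3u32.add(2), 5);         // behaves as "add"
    assert_eq!(4.5f64.add(0.25), 4.25); // behaves as "subtract"
}
```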

In the example above, there might even be a coercion function c : \mathbb{N} \to \mathbb{R}, to be invoked whenever a natural number appears where the compiler expects a real number, giving a commutative diagram

\array{ \mathbb{N} \times \mathbb{N} & \overset{add}{\to} & \mathbb{N} \\ \mathllap{c \times c} \downarrow & & \downarrow \mathrlap{c} \\ \mathbb{R} \times \mathbb{R} & \underset{add}{\to} & \mathbb{R} }

But things don't always work out this way.
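The commutativity of the coercion square can be checked pointwise in a sketch: converting to the reals before adding, or adding in the naturals and then converting, should give the same result. (The function names `add_nat`, `add_real`, and `c` are illustrative; in Rust the coercion is an explicit `as` cast.)

```rust
// The two paths around the coercion square, checked on sample inputs.
fn add_nat(x: u32, y: u32) -> u32 { x + y }   // add : N x N -> N
fn add_real(x: f64, y: f64) -> f64 { x + y }  // add : R x R -> R
fn c(n: u32) -> f64 { n as f64 }              // coercion c : N -> R

fn main() {
    let (m, n) = (3u32, 2u32);
    // top-then-right path: add in N, then coerce into R
    let top_right = c(add_nat(m, n));
    // left-then-bottom path: coerce both arguments, then add in R
    let left_bottom = add_real(c(m), c(n));
    assert_eq!(top_right, left_bottom); // the square commutes here
}
```

For small natural numbers both paths agree exactly; for large enough inputs the `f64` path loses precision, which is one way "things don't always work out".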

Parametric polymorphism

In parametric polymorphism, one writes code to define a function once, which contains a “type variable” that can be instantiated at many different types to produce different functions. For instance, we can define a function

first : A \times A \to A

where A is a type variable (or parameter), by

first(x,y) \coloneqq x.

Now the compiler automatically instantiates a copy of this function, with identical code, for any type at which it is called. Thus we can behave as if we had functions

first : \mathbb{N} \times \mathbb{N} \to \mathbb{N}
first : \mathbb{R} \times \mathbb{R} \to \mathbb{R}

and so on, for any types we wish. In contrast to ad hoc polymorphism, in this case we do have a guarantee that all these same-named functions are doing “the same thing”, because they are all instantiated by the same original polymorphic code.
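The same `first` function can be sketched in Rust, where the type variable A becomes a generic parameter: there is one body of code, and the compiler instantiates it at each type where it is used (Rust calls this monomorphization).

```rust
// Parametric polymorphism: one definition, one body of code,
// instantiated at any type A at which it is called.
fn first<A>(x: A, _y: A) -> A {
    x
}

fn main() {
    assert_eq!(first(3u32, 2), 3);        // instantiated at u32
    assert_eq!(first(4.5f64, 0.25), 4.5); // instantiated at f64
    assert_eq!(first("a", "b"), "a");     // instantiated at &str
}
```

Because every instance comes from the same source code, all of them provably "do the same thing" (project out the first component), in contrast to the ad hoc case above.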

In a dependently typed programming language with a type of types, such as Coq or Agda, a parametrically polymorphic family of functions can simply be considered to be a single dependently typed function whose first argument is a type. Thus our function above would be typed as

first : \prod_{A:Type} A \times A \to A

However, parametric polymorphism makes sense and is very useful even in languages with less rich type systems, such as Haskell and ML.



Revised on February 16, 2014 12:53:18 by Urs Schreiber