nLab natural deduction

Idea

Formalization of mathematical reasoning can be presented in different forms. The framework of natural deduction describes a particular class of deductive systems which is supposed to be close to "natural" deductive reasoning insofar as it is based on the idea of reasoning from assumptions, in contrast to proof systems in the tradition of Hilbertian axiomatics, which reason from 'truths'. It is used particularly to present the syntax of formal logic and type theory.

Remark

The phrase “natural deduction” is not always used to mean the same thing. Some people use it semi-informally to refer to a “sort of reasoning” that involves making assumptions, perhaps formalized using hypothetical judgments. Other people take it to refer specifically to deductive systems which are presented using introduction and elimination rules, which is the meaning we adopt on this page. There is significant overlap between the two meanings, but they are not identical.

Summary

Introduction and elimination

A system of natural deduction is a deductive system containing a class of judgments generated by some “constructor” operations, and for which each constructor comes with two relevant classes of rules:

  • Introduction rules, which allow us to conclude a judgment built using the constructor from simpler judgments; and

  • Elimination rules, which allow us to conclude some other judgment by using a judgment involving the constructor.

For instance, in a system of propositional logic whose judgments are the truth of propositions, one such constructor might be the "conjunction" connective $\wedge$. The introduction rule for $\wedge$ would be

$$\frac{A \; true \qquad B \; true}{A \wedge B \; true}$$

while its elimination rules might be

$$\frac{A \wedge B \; true}{A \; true} \qquad \frac{A \wedge B \; true}{B \; true}.$$

The introduction and elimination rules must fit together in an appropriate way, sometimes referred to as “harmony”.
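For a concrete sense of how this pair of rules behaves, here is a minimal sketch of conjunction in a proof assistant; the use of Lean 4 (and the names `A`, `B`, `a`, `b`, `p`) is our own illustrative assumption, not part of the deductive system described above.

```lean
-- Conjunction, read via propositions as types (illustrative Lean 4 sketch).
-- Introduction: from proofs of A and of B, conclude A ∧ B.
example (A B : Prop) (a : A) (b : B) : A ∧ B := And.intro a b
-- Elimination: from a proof of A ∧ B, conclude A (respectively B).
example (A B : Prop) (p : A ∧ B) : A := p.left
example (A B : Prop) (p : A ∧ B) : B := p.right
```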

Similarly, in a system of type theory, the relevant judgments are typing judgments of the form $a : A$, meaning that the term $a$ belongs to the type $A$. In this case, an analogous constructor might be the cartesian product type, whose rules are analogous, but keeping track of the specific terms involved (see propositions as types):

$$\frac{a : A \qquad b : B}{(a,b) : A \times B}$$

and

$$\frac{p : A \times B}{\pi_1(p) : A} \qquad \frac{p : A \times B}{\pi_2(p) : B}.$$
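The same pattern, now with the terms recorded, can be sketched for the cartesian product type; again the Lean syntax is only an assumed notation, with `p.1` and `p.2` playing the role of $\pi_1$ and $\pi_2$.

```lean
-- Term introduction for A × B is pairing; term elimination is projection.
example (A B : Type) (a : A) (b : B) : A × B := (a, b)
example (A B : Type) (p : A × B) : A := p.1   -- π₁
example (A B : Type) (p : A × B) : B := p.2   -- π₂
```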

Formation and computation

In natural deduction systems for type theory, there are usually two other classes of rules:

  • Formation rules, which declare on which basis a new type can be introduced; and

  • Computation rules or conversion rules, which constrain the result of combining term introduction with term elimination.

These two rules refer to different classes of judgments than the introduction and elimination rules: judgments that a certain thing is a type for the formation rules, and judgments that one term reduces to, or converts with, another, for the computation rules.

For instance, the cartesian product type would come with a formation rule

$$\frac{A \; type \qquad B \; type}{A \times B \; type}$$

and computation rules such as beta-reduction

$$\frac{a : A \qquad b : B}{\pi_1(a,b) \to_\beta a} \qquad \frac{a : A \qquad b : B}{\pi_2(a,b) \to_\beta b}$$

(and possibly eta-conversion).
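In a proof assistant these computation rules typically hold as definitional equalities, so a reflexivity proof suffices; the following Lean fragment is a hedged illustration of the beta rules just stated.

```lean
-- Beta rules for the product: projecting out of a pair computes to the component.
example (A B : Type) (a : A) (b : B) : (a, b).1 = a := rfl
example (A B : Type) (a : A) (b : B) : (a, b).2 = b := rfl
```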

Technically, the propositional logic system could also come with a formation rule involving a judgment “is a proposition”:

$$\frac{A \; prop \qquad B \; prop}{A \wedge B \; prop}$$

But it would not have a computation rule, as there are no terms inhabiting its propositions. (It could, however, be embedded into a type theory via propositions as types, where there would be terms inhabiting its propositions regarded as types).

More generally, natural deduction with computation rules gives a formulation of computation. See computational trinitarianism for discussion of this unification of concepts.

On the other hand, in type theories that have a type of types, there may be no need for a separate judgment "$A \; type$", as it can be replaced by a typing judgment "$A : Type$" asserting that $A$ belongs to a type of types.
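As a small illustration of this point (assuming Lean's universe conventions, which are not part of the discussion above), the judgment that something is a type is itself expressed as a typing judgment into a universe:

```lean
-- "Nat type" becomes the typing judgment Nat : Type;
-- the universe Type itself inhabits the next universe.
#check (Nat : Type)      -- Nat : Type
#check (Type : Type 1)   -- Type : Type 1
```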

Hypothetical reasoning

Natural deduction also generally involves hypothetical judgments or reasoning from assumptions. For instance, the introduction rule for implication in a system of propositional logic says that if, assuming "$A$", we can derive "$B$", then we can derive "$A \Rightarrow B$". This is sometimes written as

$$(1)\qquad \frac{\array{[A] \\ \vdots \\ B}}{A \Rightarrow B}$$

(We now follow the common practice of writing the judgment "$A \; true$" as simply "$A$".) Here the $\vdots$ indicates an arbitrary derivation tree, while the brackets around $A$ indicate that this assumption has been "discharged" and is no longer an assumption in the conclusion $A \Rightarrow B$. To be precise, we should annotate each bracket somehow to indicate which rule discharged that assumption.

Another way to indicate hypothetical reasoning (which also allows it to fit once again within the general notion of deductive system) is to indicate in each judgment the hypotheses on which it depends. Thus, we might write "$A \vdash B$" to mean a hypothetical judgment of $B$ assuming $A$. With this notation, the introduction rule for $\Rightarrow$ can be written as:

$$\frac{A \vdash B}{\vdash A \Rightarrow B}.$$

The rule (1), where $\vdots$ indicates an arbitrary derivation, can then be written as

$$\frac{\array{\frac{}{A \vdash A} \\ \vdots \\ A \vdash B}}{\vdash A \Rightarrow B}.$$

Here we have prefixed the entire derivation $\vdots$ in (1) by "$A \vdash$" to indicate that it is all performed with $A$ as an assumption, and the discharge of that assumption is indicated by removing this prefix from the final conclusion. We also begin the deduction with the axiom $A \vdash A$ (the identity rule).

To be fully precise, we should now allow all rules to take place under arbitrary additional hypotheses; thus the introduction rule of $\Rightarrow$ should really be

$$\frac{\Gamma, A \vdash B}{\Gamma \vdash A \Rightarrow B}$$

where $\Gamma$ denotes an arbitrary list of hypothesis truth judgments. Often this ambient context is left unstated, but it is implicitly assumed to be present.
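To see the discharge of a hypothesis concretely, here is a hedged Lean sketch of the implication introduction rule: the hypothesis $A$ appears as a bound variable, lambda abstraction discharges it, and the rule applies just as well in the presence of an ambient context $\Gamma$ of further hypotheses.

```lean
-- Under the assumption a : A we derive A ∧ A; abstraction discharges it,
-- yielding A → A ∧ A with no remaining assumptions about A.
example (A : Prop) : A → A ∧ A := fun a => And.intro a a
-- The same rule in an ambient context Γ (here the extra hypotheses B and b : B).
example (A B : Prop) (b : B) : A → A ∧ B := fun a => And.intro a b
```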

Remark

Hypothetical judgments of this sort are very similar to the sequents which appear in sequent calculus, and are (as is evident) written with very similar notation. However, the rules of natural deduction are different from the rules of sequent calculus, as are its formal properties.

This treatment of hypothetical judgments applies also to type theories, where the individual judgments are typing judgments. In this case, the assumed judgments on the left of the $\vdash$ are generally restricted to be of the form $x : A$ with $x$ a variable (rather than an arbitrary term), and these assumed judgments are referred to as the context. When the judgment on the right of the $\vdash$ may involve variables that occur to its left, one speaks of generic judgments rather than "hypothetical" ones. Thus, a generic judgment is of the form

$$(x : A) \;\vdash\; (b(x) : B)$$

to be read as

In a context where we have a term $x$ of a type $A$, there is a term $b(x)$ of type $B$.

Thus, for instance, the introduction rule for the function type is

$$\frac{(x : A) \;\vdash\; (b : B)}{\vdash\; (\lambda x.b : A \to B)}.$$

In natural deduction for dependent type theory, we can also have type judgments with hypotheses, such as

$$(x : A) \;\vdash\; B(x) \; type$$

to be read as

For each $x$ in $A$ there is a type $B(x)$. Or: $B$ is a type in context $A$, a type dependent on $A$.

In a type theory with a type of types, this judgment could be written as $(x : A) \;\vdash\; (B(x) : Type)$.
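A hedged Lean rendering of these generic judgments: a type family `B : A → Type` plays the role of the hypothetical type judgment $x : A \vdash B(x)\;type$, and a term in that context is a function assigning to each $x : A$ an element of $B(x)$.

```lean
-- b witnesses the generic judgment  x : A ⊢ b(x) : B(x);
-- applying it to a particular a : A instantiates the hypothesis.
example (A : Type) (B : A → Type) (b : (x : A) → B x) (a : A) : B a := b a
```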

Remark

So far, we have been considering hypothetical judgments such as $A \vdash B$ and generic judgments such as $(x : A) \vdash (b(x) : B)$ to be "atomic" judgments in a particular deductive system. In particular, the turnstile symbol $\vdash$ has been simply another symbol that we use to build judgments according to a particular syntax, analogous to the colon $:$. As remarked at deductive system, this usage of $\vdash$ is a priori completely unrelated to its use to indicate provability of theorems in a particular deductive system (such as a system of natural deduction), and therefore perhaps ought to be denoted by a different symbol.

However, it is also possible to incorporate some “knowledge” about the meaning of hypothetical and generic judgments into the deductive system, and thereby bring the two meanings of \vdash back into alignment. See logical framework for a development of this idea.

Properties

Relation to category theory

The four classes of rules of natural deduction combined give alternative specifications of universal constructions in category theory. In categorical semantics one considers categories whose objects are regarded as types and whose generalized elements are regarded as terms; the rules of natural deduction then describe the possible constructions of morphisms in such a category.

For instance, the connectives of type theory are presentations of functors, but their action on morphisms need not be specified explicitly, because it is definable from the introduction and elimination rules.
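For example, the action of the product type on morphisms is not postulated as a separate rule; it can be written out from pairing and the projections. The following Lean sketch (with our own hypothetical name `prodMap`) spells this out.

```lean
-- The functorial action of × on morphisms, defined purely from
-- elimination (the projections p.1, p.2) followed by introduction (pairing).
def prodMap {A A' B B' : Type} (f : A → A') (g : B → B') : A × B → A' × B' :=
  fun p => (f p.1, g p.2)
```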

(…)

Decidability

(…)

References

In a broad sense, the idea of natural deduction can be traced back to Aristotle's Prior Analytics: classical work by J. Bacon, J. Corcoran and T. Smiley around 1970 has argued that his theory of syllogisms can be viewed as a fragment of natural deduction. For more on this see e.g.

  • J. Bacon, Natural-deduction rules for syllogistic, JSL 31 (1966) pp. 686-687.

  • J. Corcoran, Completeness of an ancient logic, JSL 37 (1972) pp. 696-702.

  • T. Smiley, What is a syllogism, JPL 2 (1973) pp. 136-154.

The next step towards the modern conception was the observation by the Polish logician (and Aristotle scholar) J. Lukasiewicz in 1926 that the formalization of mathematical reasoning proposed by D. Hilbert and in Principia Mathematica fails to comply with the actual reasoning style of mathematicians. This was taken up by S. Jaskowski, who published a system of natural deduction in 1934:

  • S. Jaskowski, On the rules of suppositions in formal logic, Studia Logica 1, Warsaw 1934. (pdf)

At about the same time G. Gentzen developed a system of natural deduction in his thesis^1 - a landmark of 20th century mathematical logic introducing proof systems for predicate logic (in both classical and intuitionistic versions) and the concept of cut-elimination. It appeared in English translation as:

  • Gerhard Gentzen, Investigations into Logical Deduction, in M. E. Szabo (ed.), The Collected Papers of Gerhard Gentzen, Studies in Logic and the Foundations of Mathematics 55, Springer (1969) 68-131 [ISBN:978-0-444-53419-4, pdf]

In order to prove the famous Hauptsatz on proof normalization, Gentzen invented the sequent calculus, though he sketched a similar result for natural deduction in his later work on the consistency of arithmetic. Normalization for systems of natural deduction was established by D. Prawitz in his thesis in 1965:

  • D. Prawitz, Natural Deduction - A Proof-Theoretical Study, Dover reprint, New York 2006 [1965].

Textbook accounts of proof systems in natural deduction format, sequent calculi or Hilbert-style systems:

Inference rules in type theory, for type formation, term introduction and term elimination:

Introductions:

A formalization of the general logical framework of natural deduction is discussed in section 3 of

A good account is at

The standard rules for type-formation, term introduction/elimination and computation in dependent type theory are listed for instance in part I of

Introductory textbook account:


  1. "It can be said of Gentzen that it was he who first showed how proof theory should be done." (M. Dummett, Frege - Philosophy of Language, Harvard UP, 2nd edition 1981, p. 434)
