Gentzen liberated the quantifiers from their old metaphysical interpretation as infinitary truth-functions and expressed them as elementary sequent rules. In Chapter I we showed that his natural deduction really does agree with the mathematical vernacular. Dag Prawitz, Nicolaas de Bruijn and Per Martin-Löf developed calculi which their followers have used to formalise many mathematical arguments, and even for programming. These help us to understand very clearly the extent to which equality, the quantifiers, powersets and the other components of logic are actually employed.
We have claimed that universal properties in category theory deserve to be called ``foundational'' in the same sense as the logical connectives. In a tradition which began with the axiomatisation of Abelian categories, Chapter V showed that sum types and relational algebra also have an intimate connection with the idioms of mathematics. Employing a technology from algebraic geometry, Bill Lawvere saw that the rules for the quantifiers characterise them, too, by universal properties: they are adjoint to substitution. By similar methods, Jean Bénabou reduced the infinitary limits and colimits of category theory to elementary form. Robert Seely concluded this phase of development by extending the logic of Remark 5.2.8 to locally cartesian closed categories.
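As a minimal illustration of Lawvere's observation (in a notation chosen here only for this sketch, not necessarily that of the chapter), quantification along a product projection $\pi:\Gamma\times X\to\Gamma$ is left and right adjoint to weakening, $\exists_X\dashv\pi^*\dashv\forall_X$, which amounts to two invertible rules:
\[
  \frac{\phi \;\vdash_{\Gamma,\,x:X}\; \pi^*\psi}{\exists x.\,\phi \;\vdash_{\Gamma}\; \psi}
  \qquad\qquad
  \frac{\pi^*\psi \;\vdash_{\Gamma,\,x:X}\; \phi}{\psi \;\vdash_{\Gamma}\; \forall x.\,\phi}
\]
Read downwards these are the familiar rules of $\exists$-elimination and $\forall$-introduction; it is their invertibility that expresses the universal property.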
The introduction by Jean-Yves Girard and Thierry Coquand of calculi that have quantification over all types, and even universal types, forced a rethink of Lawvere's ideas. These calculi do not have direct set-theoretic interpretations, so many sorts of domain, sometimes themselves categories, were investigated in the search for their semantics. As a result, we now have very efficient ways of calculating quantifiers in semantic categories, which we shall illustrate for Sp (such a quantifier was found in 1965). Together with the previous one, this chapter makes the link between categories and syntax precise, to the point that we cease to distinguish between them notationally.
The study of these powerful calculi has given us ways to tackle other ``big'' questions in logic. In category theory, what does it mean to treat categories such as Set and functors between them as single entities? Are the axiom of replacement and large cardinals in set theory of relevance to mathematical constructions? How can we understand Gödel's incompleteness theorem on the one hand, and the proofs of consistency afforded by the gluing construction on the other?
Although this book has kept its promise to remain within, or equivalent to, Zermelo-Fraenkel set theory, our study of the roles of propositions and types in the quantifiers enables us to change the rules of engagement between them. Semantically, we may look for pullback-stable classes of maps in topology and elsewhere which obey the rules for the quantifiers, and then start to use the notation of logic to describe geometric and other constructions. Syntactically, the blanket assumption of such a clumsy framework as ZF for mathematical reasoning can be fine-tuned to capture exactly the arguments which we want to express, and thereby to apply them in novel situations. It seems to me that the quantifiers and equalities on which Cantor's diagonalisation argument and the Burali-Forti paradox rely are stronger than logical intuition justifies. We now have both the syntax and the semantics to weaken them.
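To sketch what such semantic rules of engagement involve (the square and its labels $f$, $g$, $h$, $k$ are chosen here only for illustration), the adjoints that interpret the quantifiers along maps of a pullback-stable class must commute with substitution across pullback squares, the Beck-Chevalley condition:
\[
  \begin{array}{ccc}
    W & \xrightarrow{\;k\;} & X \\[2pt]
    \scriptstyle h\,\downarrow & & \downarrow\,\scriptstyle f \\[2pt]
    Z & \xrightarrow{\;g\;} & Y
  \end{array}
  \qquad\text{a pullback}
  \qquad\Longrightarrow\qquad
  g^{*}\circ\exists_{f}\;\cong\;\exists_{h}\circ k^{*}.
\]
Stability under pullback is what makes the square available in the first place; the isomorphism then says that quantification may be computed fibrewise, independently of the context into which it is substituted.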
Such a revolution in mathematical presentation may be another century away: excluded middle and Choice were already on the agenda in 1908, but the consensus of that debate has yet to swing around to the position on which such developments depend. On the other hand, the tide of technology will drive mathematicians into publishing their arguments in computer-encoded form. Theorems which promise nirvana will lose out to those that are programs and do calculations for their users on demand. A new philosophical and semantic basis is needed to save mathematics from being reduced yet again to programming.