
3  Topology as lambda-calculus

Now we shall introduce Abstract Stone Duality. It is a direct axiomatisation of general topology whose aim is to integrate it with computability theory and denotational semantics, without sacrificing important properties such as the Heine–Borel theorem. The basic building blocks — spaces and maps — are taken as fundamental, rather than being manufactured from sets by imposing extra structure.

The ASD calculus formalises the notation that we have already used for the set-theoretic constructions in the previous section (Notation 2.11ff):

  1. the Sierpiński space Σ and its lattice structure;
  2. λ-terms of type ΣX to encode open subspaces of X;
  3. ∃ for combinatorial structure;
  4. ∀ to say when the direct image of a closed bounded interval is contained in an open interval;
  5. real numbers represented as Dedekind cuts, considered as pairs (δ,υ) of λ-terms; and
  6. the Σ-splitting I that is related to compactness of the interval [d,u].


As a type theory, ASD has types, terms and equations between terms. The types are also called spaces, and the terms maps or functions. The three basic spaces 1, ℕ and Σ are axiomatised by their universal properties. From them, we may construct products, X× Y, and exponentials of the form ΣX, but not arbitrary YX. The theory also provides certain “Σ-split” subspaces, and it is our main objective to construct R in this way, using the idea of Proposition 2.15.

As Notation 2.11 suggests, terms of type ΣX behave very much like predicates on X, and, more basically, those of type Σ like propositions. We can form conjunctions and disjunctions of such terms, but not implications or negations — these become equations between terms. In some cases there are also operators ∃X : ΣX → Σ and ∀X : ΣX → Σ that satisfy the same formal properties as the quantifiers (cf. Remark 4.16). A space X is called compact when it has a ∀X operator, and overt when it has ∃X.
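
To give an informal computational feeling for these operators before the formal axioms appear, here is a small Haskell sketch. It is only an illustration, not part of the calculus, and all of the names (Sigma, Top, Later, bot, orS, andS, existsN) are invented for it: Σ is modelled naively as observable termination, ∧ and ∨ by running two computations together, and ∃ℕ by a dovetailed search, which is what makes ℕ overt.

  data Sigma = Top           -- ⊤: observable termination ("yes")
             | Later Sigma   -- still running; an answer may yet come

  bot :: Sigma               -- ⊥: runs for ever ("wait")
  bot = Later bot

  orS, andS :: Sigma -> Sigma -> Sigma
  orS Top _ = Top            -- ∨: race the two computations
  orS _ Top = Top
  orS (Later p) (Later q) = Later (orS p q)
  andS Top q = q             -- ∧: run both to completion
  andS p Top = p
  andS (Later p) (Later q) = Later (andS p q)

  -- ∃ℕ: dovetail the search, starting one more instance of phi in each
  -- round and advancing all of those already running by one step
  existsN :: (Integer -> Sigma) -> Sigma
  existsN phi = go 0 []
    where
      go n running
        | any finished running = Top
        | otherwise = Later (go (n + 1) (phi n : map advance running))
      finished Top = True
      finished _   = False
      advance (Later p) = p
      advance Top       = Top

For instance, existsN (\n -> if n * n == 1764 then Top else bot) reduces to Top after finitely many Later steps, whereas existsN (const bot) produces Later for ever, i.e. it is ⊥, as a failed search should be.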

Statements in the theory are expressed as equations between terms, but we use ⊑ or ⇒ as syntactic sugar in the lattices ΣX and Σ. Since not every formula of (for example) the predicate calculus is of this form, the theory imposes limitations on what can be said within it.


You may think that we are making a rod for our own backs by restricting ourselves to such a weak calculus. This criticism puts us in good company with constructive mathematicians, who often face the same lack of understanding. Indeed, one of the reasons why the papers on ASD are so long is that proofs in weak calculi — where they exist at all — necessarily have far more steps than those in stronger systems: it’s like digging a trench with a teaspoon.

But the ASD calculus is the calculus of spaces and maps. If some feature is missing from it, this is not because of our asceticism, but because spaces and maps do not possess it. The justification of this claim is that, starting from the axioms of ASD, we may reconstruct the categories of computably based locally compact locales [G] and of general locales over an elementary topos [H].

In such a weak calculus, it will not surprise you to hear that, as a rule, it is very difficult to know how to say anything at all. But experience has shown something very remarkable, namely that, once we have some way of expressing an idea, it usually turns out to be the right way. In contrast, stronger calculi, i.e. those that take advantage of the logic of sets, type theory or a topos, offer numerous candidates for formulating an idea, but these then often lead to distracting counterexamples.


It will be useful to state the formal relationship between ASD and traditional topology.



Definition 3.1 In the classical interpretation of ASD, each type X denotes a locally compact topological space. This space need not be Hausdorff, so by “local compactness” we mean that each x ∈ U ⊂ X has some x ∈ V ⊂ K ⊂ U ⊂ X, where U and V are open and K is compact [GHK++80]. Similarly, each term x:X ⊢ f x : Y denotes a continuous function f:X→Y. All maps between spaces are continuous, simply because the calculus never introduces discontinuous functions.

The exponential ΣX is the topology (lattice of open subspaces) of X, regarded not as a set but as another topological space, equipped with the Scott topology (Definition 2.6); since X is locally compact, ΣX is a continuous lattice. Maps ΣX→ΣY are Scott-continuous functions. The connectives ⊤, ⊥, ∧ and ∨ are interpreted in the obvious way in the lattice, and ∃ℕ by ℕ-indexed unions or joins. When K is a compact space, ∀K:ΣK→Σ denotes the Scott-continuous function in Exercise 2.7.
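
Continuing the informal Haskell sketch above (again just an illustration, reusing Sigma, Top and andS from it): for a finite discrete space, which is compact, ∀K is simply a finite meet in the lattice, whereas the corresponding infinite meet over all of ℕ can never announce ⊤, reflecting the fact that ℕ is not compact and has no Scott-continuous ∀ℕ.

  -- ∀ over a finite discrete space, presented as a list of its points:
  -- a finite meet in Sigma (with Top for the empty space)
  forallFin :: [a] -> (a -> Sigma) -> Sigma
  forallFin xs phi = foldr (andS . phi) Top xs

  -- the analogous attempt for ℕ, foldr (andS . phi) Top [0 ..], never
  -- reaches Top even when every phi n is Top: it would have to inspect
  -- infinitely many values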



Remark 3.2 We usually think of a map φ:X→Σ as representing an open subspace of X, namely the inverse image of the open point ⊤∈Σ. But, by considering the inverse image of the closed point ⊥∈Σ instead, the same term also corresponds to a closed subspace.

Hence there is a bijection between open and closed subspaces, but it arises from their common classifiers, not by complementation. Indeed, there is no such thing as complementation, as there is no underlying theory of sets (or negation).

This extensional correspondence between terms and both open and closed subspaces gives rise to Axiom 4.7 and Lemma 4.9, which are known as the Phoa principle. This makes the difference between a λ-notation for topology that must still refer to set theory to prove its theorems, and a calculus that can prove theorems for itself, indeed in a computational setting. It was the subject of the first published paper on ASD [C].


Constructive or intuitionistic logicians may, however, feel uncomfortable using this axiom, because it looks like classical logic. Nevertheless, the “classical” interpretation above is also valid for intuitionistic locally compact locales (LKLoc) over any elementary topos.

The reason for this is that ASD is a higher-order logic of closed subspaces, just as much as it is of open ones. When it is functioning in its “closed” role, we are essentially using the symbols the wrong way round: for example, ∨ denotes intersection.

The discomfort will be particularly acute in this paper, as it deals with a Hausdorff space, where the diagonal R ⊂ R×R is closed (Definition 4.12). This means that we can discuss equality of two real numbers, a,b:R, using a predicate of type Σ. But since the logic of closed predicates is upside down, the (open) predicate is a ≠ b, and for equality we have to write the equation (a ≠ b)⇔⊥ between terms of type Σ. For the integers, on the other hand, we can state equality as (n=m)⇔⊤.

That equality of real numbers is “doubly negated” like this was recognised by Brouwer, cf. [Hey56], and arguably even by Archimedes. However, we stress that this is not what we are doing in ASD: ≠ is primitive, not derived from equality, and there is no negation of predicates. The equational statements (−)⇔ ⊤ and (−)⇔ ⊥ merely say that the parameters of the expression (−) belong respectively to certain open or closed subspaces. These are of the same status in the logic.
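
To make this asymmetry concrete, here is a further instalment of the informal Haskell sketch, reusing Sigma, Top, bot, orS, andS and existsN from before; the types and names (Cut, allRationals, existsQ, apart, rationalCut) are again our own inventions, not part of the calculus. A real number is represented as a pair of Σ-valued predicates on the rationals, inequality is an honest Σ-predicate that searches for a rational separating the two cuts, and equality can only be expressed by asserting that this search fails.

  import Data.List  (genericIndex)
  import Data.Ratio ((%))

  type Cut = (Rational -> Sigma, Rational -> Sigma)   -- (lower δ, upper υ)

  allRationals :: [Rational]                 -- every rational occurs somewhere
  allRationals = [ s * (p % q) | n <- [1 ..] :: [Integer]
                               , p <- [0 .. n], let q = n - p, q > 0
                               , s <- [1, -1] ]

  existsQ :: (Rational -> Sigma) -> Sigma    -- ∃ over ℚ, again by dovetailing
  existsQ phi = existsN (\n -> phi (allRationals `genericIndex` n))

  -- the open predicate a ≠ b: some rational lies strictly between the two
  -- cuts, on one side or the other; equality is its closed complement,
  -- expressed by the equation (a ≠ b) ⇔ ⊥
  apart :: Cut -> Cut -> Sigma
  apart (da, ua) (db, ub) =
    existsQ (\q -> orS (andS (ua q) (db q)) (andS (ub q) (da q)))

  rationalCut :: Rational -> Cut             -- the cut of a rational number r
  rationalCut r = (\q -> if q < r then Top else bot,
                   \q -> if q > r then Top else bot)

With these definitions, apart (rationalCut 1) (rationalCut 2) reduces to Top after finitely many steps, whereas apart (rationalCut 1) (rationalCut 1) runs for ever: the program can report that two reals differ, but can never announce that they are equal.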



Remark 3.3 As we shall see, it is the part of the calculus that concerns (sub)types that is crucial to the compactness of the closed interval.

There are many approaches to topology that are based on sets of points, although the word “set” may be replaced by (Martin–Löf) type or object (of a topos), and “functions” may be accompanied by programs. In all of these systems, a new object X is constructed by specifying a predicate on a suitable ambient object Y, and then carving out the desired subobject i:X↣Y using the subset-forming tools provided by the foundations of the system. For example, the real line may be constructed as the set (type, object) of those pairs (D,U) or (δ,υ) that satisfy the properties required of a Dedekind cut.

What is the topology on such a subset? We are at the mercy of the underlying logic of sets of points to determine this, and so to prove important theorems of mathematics such as compactness of the closed real interval. As we shall see in Section 15, in many interesting logical systems, especially those that are designed to capture recursion, the logic may not oblige us with a proof, but may yield a counterexample instead.

We consider that this topological problem is not about the underlying logic but about the existence of certain continuous functions of higher type. We saw in Proposition 2.15 that, in the classical situation, we may use the Heine–Borel theorem to define a Scott-continuous map I that extends any open subset of X ≡ ℝ to one of Y ≡ Σℚ×Σℚ. Other foundational systems in which Heine–Borel is provable can also define this map, whilst those in which it fails can do neither.


The key feature of Abstract Stone Duality is that it provides the map I:ΣX↣ΣY (called a Σ-splitting) axiomatically whenever we define a subspace i:X↣Y, thereby taking account of the intended topology. It is important to understand that the types that are generated by the language that we describe are therefore abstract ones, not sets of points.

The essence of our construction in the rest of the paper is then that the quantifier ∀[0,1] that makes [0,1] compact may conversely be derived from this map I.

Beware, however, that there are some subspaces (equalisers, Definition 5.3) that have no Σ-splitting in ASD or any other system of topology. For example, Baire space ℕℕ, with its inclusion i:ℕℕ↣Σℕ×ℕ, can be expressed as an equaliser of topological spaces or locales, but it does not have a Scott-continuous Σ-splitting I. The direct image map, called i_*, does satisfy the relevant equation, i^*·i_* = id, where i^* is localic notation for our Σi, but i_* is only monotone, not Scott-continuous. Consequently, Baire space ℕℕ is not definable in ASD as the calculus is described here. On the other hand, a particular inclusion may have many different Σ-splittings, as we shall find in Section 7.

We formalise the subspace calculus in Section 5. However, as we shall see in Sections 7 and 8, it is not as easy to use as set-theoretic comprehension — the rule has been to introduce only one subspace per paper. So far, it only gives a robust account of locally compact spaces, but there is work in progress to extend it.



Remark 3.4 Since ASD is presented as a λ-calculus, it is reasonable to ask whether

  1. it is powerful enough for us to write programs in it;
  2. it has some computational interpretation, at least in principle; and
  3. we can actually print out digits of real numbers using it.

Each of these questions deserves a paper in itself. You will find several remarks here that are motivated by practical computation, and we are actively thinking about a prototype implementation [Bau08], but we do not say very much about it in this paper.


The language was indeed partly inspired by ideas from theoretical computer science. Computation is a natural part of the calculus, and there is no need to bolt a clumsy, old-fashioned theory of recursion on to the front of it. Domain theory can be developed within ASD [F], and then programming languages such as Plotkin’s PCF can be translated into this, using the denotational semantics in [Plo77].

Computationally, we understand the elements ⊤ and ⊥ of Σ as termination (“yes”) and non-termination (“wait”) respectively; Σ is not a “Boolean” type, as there is no definite “no”. The naïve interpretation of the Phoa principle is then that any function F:Σ→Σ is specified by its effect on these two possible inputs. It cannot interchange them, as that would solve the Halting Problem, so F⊥⇒ F⊤, leaving essentially only three possible behaviours. More generally, all maps ΣY→ΣX are monotone with respect to the lattice order in both topology and computation.
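
Continuing the same informal Haskell sketch (the names are again ours): the three behaviours in question are the constantly-⊥ map, the identity and the constantly-⊤ map, and the Phoa principle (Axiom 4.7) can be read as saying that any such F is recovered from its two values by the formula F σ ⇔ F⊥ ∨ (σ ∧ F⊤).

  -- the three monotone behaviours available to F : Σ → Σ; swapping ⊤ and ⊥
  -- is absent, because it would decide the Halting Problem
  constBot, identityS, constTop :: Sigma -> Sigma
  constBot  _ = bot          -- F⊥ ⇔ F⊤ ⇔ ⊥
  identityS s = s            -- F⊥ ⇔ ⊥ and F⊤ ⇔ ⊤
  constTop  _ = Top          -- F⊥ ⇔ F⊤ ⇔ ⊤

  -- the right-hand side of the Phoa principle, reconstructing F from its
  -- values at ⊥ and ⊤
  phoa :: (Sigma -> Sigma) -> Sigma -> Sigma
  phoa f sigma = orS (f bot) (andS sigma (f Top))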

In fact, ASD is topologically more expressive than domain theory: its types denote exactly the “computably based” locally compact locales, and the maps are the computable continuous functions [G].


Conversely, the topological features of the calculus can be “normalised out” of its terms (Section A 11). Given that the most important type, Σ, captures propositions rather than functions, the result of this normalisation is a kind of logic program. However, to put this into practice would require a combination of three different generalisations of Prolog, namely the addition of λ-calculus, parallelism (for ∨) and the manipulation of intervals or constraints.

In logic programming, a predicate is interpreted as a program whose objective is to find a proof and report instantiations for the free and existentially bound variables. If the predicate is false, the program never halts (in a “successful” way: it might abort). The basic process for doing this is unification, in which variables are assigned values according to the constraints imposed by the rest of the program, i.e. the values that they must have if the program is ever to terminate with a proof of the original predicate. The usual notion of unification only makes sense for combinatorial structures, but [Cle87] suggests an analogue for the reals.



Remark 3.5 Applying this to the issues of constructivity in the Introduction, logic programming gives an effective meaning to the existential quantifier. However, in ASD this is weaker than that found in other constructive foundational systems: in our version the free variables of the predicate must also be of overt discrete Hausdorff type. That is, they must be either natural numbers or something very similar, such as rationals. They cannot be real numbers, functions or predicates. Our choice principle is therefore only ℕ–ℕ for Σ⁰₁-predicates. This is topologically significant, because it constrains the sense in which the closed interval has the Heine–Borel “finite sub-cover” property in Section 10.

As you would expect, “de Morgan” duality between ∧ and ∨ extends to the quantifiers, i.e. from ∃ to ∀. Once again, however, this is different in ASD from proof theory: whilst ∃ is true more often in ASD, ∀ is true less often, as we shall explain in Section 15.

