We shall show how to translate the formal rules into the vernacular (English, French, etc. prose) and back. But it is important to understand that this translation cannot be a precise one, like those amongst the logical, programming and categorical languages which we give elsewhere in the book. This is because the meaning of a vernacular sentence (if it has one) is not given structurally in terms of its component parts, but depends heavily on unstated contextual information. Sentences which a classical grammarian would parse alike may have very different logical meanings.
The art of translation between human languages lies in idiom: particular phrases in the two languages match, not literally, but as a whole. The theory of descriptions (Definition 1.2.10) has already illustrated this.
REMARK 1.6.1 The simplest idiom is the equational one:

Little more needs to be said for the other direct rules: the formal style places each assertion below the previous one, spelling out which rule justifies the step. By contrast, for reasons of space, it is usual to present a long argument as running text, divided into paragraphs and sentences to indicate the milestones in our progress. Arguments of this form are hardly literary prose, and we save hence, thence and whence from the grammatical graveyard simply to avoid the monotony of therefore.
Proof boxes delimited by keywords The vernacular has its own ways of accommodating simple departures from direct logic.
But the most interesting idioms are those corresponding to ∃:
Boxes can be avoided by dividing the presentation of a topic into lemmas, each of which deals with a single box. The idiomatic proof of a lemma with ∀x.Ξ[x] as its conclusion or ∃y.φ[y] as its hypothesis would not bother to state the quantified formulae or use any kind of box: the variables are simply global to the proof. The same applies to the proof of ∃z.Ξ[z]: the witness z has to be introduced at some point and its properties developed. So only when the proof is complex, with heavily nested (⇒I)- and (∀I)-rules and no natural way of packaging the parts, must we use a formal style to make the argument clear.
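As an illustrative sketch of this idiom (in Lean 4; the lemma names and statements here are our own examples, not from the book), a proof whose conclusion is universally quantified simply fixes its variable globally, and one with an existential hypothesis introduces the witness once and uses it throughout, with no visible box:

```lean
-- A lemma with ∀n.Ξ[n] as its conclusion: no box is drawn;
-- the variable n is simply global to the proof.
theorem succ_larger : ∀ n : Nat, n ≀ n + 1 := by
  intro n                -- (∀I): fix an arbitrary n
  exact Nat.le_succ n

-- A lemma with ∃y.φ[y] as a hypothesis: the witness is
-- introduced at the start and its properties developed.
theorem shadow (P Q : Nat → Prop) (h : ∃ y, P y ∧ Q y) : ∃ z, Q z := by
  cases h with
  | intro y hy =>        -- (∃E): the witness y is global from here on
    exact ⟹y, hy.2⟩
```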
Although nested boxes can be handled by additional lemmas, they make us deaf to anything logic may have to say about a problem: for example in (ε-ÎŽ)-proofs in analysis, information (the degree ÎŽ of approximation) flows backwards from output to input. Induction, with its nested ∀, involves nested (∀E)-boxes, of which Section 2.6 gives examples.
It is easy to translate formal proofs mechanically into the vernacular, though some creativity is needed to make them readable, cf. word-by-word translations from a foreign language using a dictionary. The other way is much more difficult: it would often be quicker to reconstruct the proof from scratch (as I frequently find myself doing). This raises questions about the usefulness of proofs in printed journals in future.
Often the conceptual structure of an argument may already be present in a simpler version of the result, the more substantial one involving some difficult calculation. If the former, which explains the theorem, had been laid out, the readers could have supplied or omitted the details of the calculation for themselves.
Importing and exporting formulae One should think of a box as a separate logical world, interacting with our own only across a membrane, which allows any hypothesis to enter from above (unless, of course, this means taking it out of another nested box).
A formula within a box which has no proof there is a hypothesis; this is replaced by evidence if the formula is imported from outside (β-reduction, Remark 1.5.10). Indeed, the weakening rule specifically allows this for external hypotheses, and nothing stops us from repeating parts of the development; so we may as well import the conclusion instead. The weakening rule for variables is given in Remark 2.3.8.
These are exactly the rules for scoping in block-structured programs.
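The analogy can be made concrete with a small sketch (Python; the function names are our own illustration): a nested function may freely use names from the enclosing scope, just as a box may import external hypotheses, but its own locals never escape; only an explicitly returned result is "exported".

```python
def outer():
    phi = "an external hypothesis"

    def box():
        # Importing: a name from the enclosing scope may freely be
        # used inside the box, like a hypothesis crossing the membrane.
        assert phi == "an external hypothesis"
        witness = 42        # local to the box, like the witness of (∃E)
        return witness + 1  # only the packaged conclusion is exported

    result = box()
    # `witness` is not in scope here: nothing local escapes the box
    # except the value it explicitly exported.
    return result
```

Declaring `phi` inside `box` instead would shadow the outer name, which corresponds to taking a hypothesis out of one nested box and into another.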
LEMMA 1.6.3 In the case of √ we have the distributive law

(φ √ ψ) ∧ γ ⊣⊢ (φ ∧ γ) √ (ψ ∧ γ).

PROOF: To show LHS ⊢ RHS we need to import γ.
[two proof boxes omitted]
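The forward direction of this derivation can be rendered as a sketch in Lean 4 (our illustration, not the book's box notation); the comments mark where γ is imported into each arm of the (√E)-box:

```lean
theorem distrib_fwd (φ ψ γ : Prop) :
    (φ √ ψ) ∧ γ → (φ ∧ γ) √ (ψ ∧ γ) := by
  intro h                               -- h : (φ √ ψ) ∧ γ
  cases h.1 with                        -- open the (√E)-box on φ √ ψ
  | inl hφ => exact Or.inl ⟹hφ, h.2⟩    -- import γ (as h.2) into this case
  | inr hψ => exact Or.inr ⟹hψ, h.2⟩    -- and into the other
```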
In general, the ability to use formulae from earlier in the proof is known as referential transparency; it was discussed by Willard Quine for natural language [Qui60]. For us, it is a manifestation of invariance under substitution. This phenomenon will arise again.
REMARK 1.6.4 The box rules allow us to export Ξ from an (⇒I)-box in the form φ ⇒ Ξ, and Ξ[x] from (∀I) as ∀x.Ξ[x]. Although we presented these rules with just one such formula, and wrote it on the last line of the box, in fact any number of formulae may be exported from any line of the box, if they are appropriately qualified (by ∀x or φ ⇒). A formula may be exported unaltered from an (√E)-box if it occurs on both sides.
Open-ended boxes The (√E)-rule provides a proof in each of the two cases, without prejudice as to which holds. Similarly the (∃E)-rule gives a demonstration in terms of an unspecified witness. Dependence on the alternative or witness means that a box is needed, but we shall now show that the (∃E)-box is open-ended below (we have just seen that all boxes are open above). This rule is the least well understood in the practice of mathematics, although it has a bearing on the use of structure such as products (Remark 4.5.12) and the meaning of finiteness (Remark 6.6.5).
REMARK 1.6.5 Since the conclusion (Ξ) is arbitrary, the closing of the box may be postponed indefinitely, i.e. until the end of the enclosing box or proof. This is because any Ξ′ which we deduce from Ξ outside the box (necessarily not mentioning the witness) may equally be deduced inside and exported as the conclusion instead of Ξ.

Some unfinished business from Section 1.3 provides an example.
LEMMA 1.6.6 Composition preserves functionality and totality.
PROOF:
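The totality half of this lemma can be sketched in Lean 4 (our illustrative formalisation, with a relation represented as a binary predicate and the composite unfolded; functionality is handled similarly):

```lean
-- If R and S are total relations, so is their composite,
-- where (S ∘ R) relates x to z when ∃ y, R x y ∧ S y z.
theorem comp_total {X Y Z : Type} (R : X → Y → Prop) (S : Y → Z → Prop)
    (hR : ∀ x, ∃ y, R x y) (hS : ∀ y, ∃ z, S y z) :
    ∀ x, ∃ z, ∃ y, R x y ∧ S y z :=
  fun x =>
    match hR x with
    | ⟹y, hxy⟩ =>                      -- (∃E): an unspecified witness y
      match hS y with
      | ⟹z, hyz⟩ => ⟹z, y, hxy, hyz⟩   -- the boxes stay open to the end
```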

The properties of the (∃E)-box make it notationally redundant, and explain the idiomatic phrase ``there exists.'' However, the conclusions of such an argument cannot be exported from enclosing boxes, unless the witness is unique (φ[x] is a description, Definition 1.2.10), in which case a function-symbol may be introduced.
Enlarging the box as much as possible is appropriate for the existential quantifier. The conclusion of (√E) is also indeterminate, but this is normally exploited in such a way as to shorten the proof, by closing the box as soon as the alternatives can be reunited. Remark 2.3.13 sets out the continuation rules which are needed in type theory to handle the open-endedness, and Example 7.2.6 explains it categorically.
REMARK 1.6.7 Notice that the (∃E)-rule alone (one of the two halves of the meaning of the quantifier) suffices to give a formal justification of the introduction of an unspecified witness for any existentially quantified statement, and of the continued use of this witness until the end of the enclosing box. It is quite unnecessary to postulate, as Hilbert and later Bourbaki [Bou57] did, a global process which selects such witnesses; indeed, to do so amounts to the axiom of choice (Definition 1.8.8).
DEFINITION 1.6.8 A very useful application of the (∃E)-box and its open-endedness is the definition or declaration. For any well-formed term t we may always introduce a fragment of proof of the form

t = t by reflexivity, whence ∃x.(x = t) by (∃I); then open an (∃E)-box with witness x and hypothesis x = t.
Since the box is open-ended, we don't bother to write it, and condense the three steps above to ``put x = t.'' Although after the declaration the relationship between t and x is symmetrical, during the defining step they play different roles; some authors indicate this by writing x := t. The symbol x is called the definiendum or head and the term t the definiens or body. Similarly, when t is a proposition we say ``if'' rather than ``iff'' or ``if and only if'' in a declaration.
Notice that any variables or hypotheses in the context are parameters or preconditions, so declarations cannot be exported from enclosing boxes.
REMARK 1.6.9 The phrase without loss of generality (wlog) is a variant of declaration which is analogous to assignment in imperative programming languages (Remark 4.3.3). For example, in proving a statement which is symmetric in x and y, we may assume that x ≀ y.
These assumptions are desirable because the problem becomes simpler. They are permissible because the general problem may be transformed into the special case and its solution back again. One ought to take care to distinguish the general and special cases by different notation, such as x and x′, but commonly this is not done. Nevertheless, when we say ``without loss of generality'' we must always state the two-way translation involved (the means of exchange).
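The analogy with imperative assignment can be sketched as follows (Python; the function is our own example): the wlog assumption is realised by an assignment, and the means of exchange is stated in the comment.

```python
def gap(x, y):
    """The distance between two numbers, computed wlog with x <= y."""
    # Without loss of generality assume x <= y. The means of exchange:
    # the general case is transformed into the special one by the swap
    # below, and no reverse translation is needed afterwards because
    # the result is symmetric in x and y.
    if x > y:
        x, y = y, x
    return y - x
```

Note that the swap re-uses the names x and y for the special case, exactly the common abuse of notation (x for x′) that the remark warns about.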
Alternative methods Until Frege, Aristotle's syllogisms (Exercise 1.25) were the standard treatment of the quantifiers, and they are still taught to unfortunate philosophy students, despite significant advances in logic in both ancient times and the twelfth to fourteenth centuries.
Gottlob Frege was by far the best logician between the Renaissance and the First World War. The distinction in Section 1.2 between an expression and its value is due to him. In 1879, while others such as de Morgan and Schröder were still battling with the propositional connectives, he developed a modern theory of quantifiers, understood bound variables and used second order logic with confidence. His verbal explanations are crystal clear, but his space-consuming notation (Begriffsschrift, concept-writing) would have consigned his work to oblivion but for Russell's attention. It must have caused nightmares for his printer, but Frege argued (in a letter to Peano, who printed his own books) that this was not the important consideration: one ought to take advantage of the second dimension to explain mathematics. We use Frege's theory of sequences in Sections 3.8 and 6.4.
REMARK 1.6.10 Hilbert's logic, like that of Frege and Russell, had steps like
if ∧Γ ⇒ φ is true then ∧Δ ⇒ Ξ is also true,
and axioms (instead of rules) such as

φ ⇒ (ψ ⇒ φ) and (φ ⇒ (ψ ⇒ Ξ)) ⇒ ((φ ⇒ ψ) ⇒ (φ ⇒ Ξ)).
All of the assertions in a Hilbert-style proof are facts, whereas idiomatic arguments in mathematics use hypotheses. The first formal account of natural deduction, together with its equivalence with the Hilbert style, was given by Stanisław Jaśkowski [Jaś34], based on ideas of Jan Ɓukasiewicz. It treated not only implication and conjunction but also the universal quantifier (and substitution), noting and correctly handling the problem of empty domains (Remark 1.5.6).
Gerhard Gentzen [Gen35], a student of Hilbert, treated the connectives individually (whereas previous authors had defined some in terms of others), recognising the symmetry in their rules. He gave translations amongst natural deduction (NK), sequent calculus (LK) and Hilbert-style (LHK) classical and intuitionistic (NJ, LJ, LHJ) logic.
Frederic Fitch wrote the first textbook [Fit52] to make routine use of natural deduction using our proof boxes, modulo some syntactic sugar, and Nicolaas de Bruijn developed the notation for AUTOMATH.
REMARK 1.6.11 Gentzen used the tree notation (Remark 1.4.5) for his natural deduction. This style remains the prevalent one amongst logicians, but it is highly unsatisfactory for the indirect rules, especially for extended proofs. The formulae at the leaves (with no rule above them) are hypotheses; when φ ⇒ Ξ has been deduced from Ξ, φ need no longer be a hypothesis and so may be discharged. The (√E)-rule is similar. We have done this by closing a box, but it is traditionally indicated by striking through the formula:

Model theory Surely the simplest interpretation of proof is that
φ is true, therefore Ξ is true.
However, the verity of a formula such as

REMARK 1.6.12 A formula is not true or false of itself, but only when interpreted in a model M. This specifies an interpretation of each type as an object (classically, a set), together with interpretations of the constants, operation-symbols and predicates of the language.
The meaning of general terms and formulae is defined by structural recursion; this is straightforward for the connectives ∧ and √, and we write M ⊚ φ if φ is valid in the interpretation. However, a quantified formula ∀x.φ[x] is valid if φ[a] holds for each individual a, which is an infinitary condition, so the naive metalanguage is no longer adequate. The arity of ∀x is the semantic object which interprets the type of the syntactic variable x. We shall consider infinitary operations, i.e. those whose arities are objects of the world under mathematical study, in Section 6.1. The quantification over individuals is then performed by the (∀I)-rule in the metalogic, as we can never escape completely from finitary proof.
Having set this up, one can show that each of the logical rules preserves validity, i.e. whenever the premises are true, so is the conclusion. This is called soundness. To an algebraist, it says that the interpretation is a homomorphism for ∧, ∀, etc. By structural induction on a proof Γ ⊢ φ, if its hypotheses are valid (M ⊚ Γ) then so is its conclusion (M ⊚ φ).
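For the propositional fragment the models are just finite truth-assignments, so semantic entailment can even be checked mechanically. The following minimal sketch (Python; the tuple encoding of formulae is our own, not the book's) evaluates a formula in a model and tests Γ ⊚ φ over all 2ⁿ assignments:

```python
from itertools import product

# Formulae as nested tuples:
# ("var", name), ("and", f, g), ("or", f, g), ("imp", f, g).
def holds(f, model):
    """M ⊚ f, by structural recursion on the formula."""
    tag = f[0]
    if tag == "var":
        return model[f[1]]
    left, right = holds(f[1], model), holds(f[2], model)
    if tag == "and":
        return left and right
    if tag == "or":
        return left or right
    if tag == "imp":
        return (not left) or right

def entails(hyps, concl, names):
    """Semantic entailment Γ ⊚ φ: every model of Γ satisfies φ."""
    return all(
        holds(concl, model)
        for vals in product([False, True], repeat=len(names))
        for model in [dict(zip(names, vals))]
        if all(holds(h, model) for h in hyps)
    )
```

For example, with p and q as variables, φ ∧ ψ ⊚ φ holds, while φ √ ψ ⊚ φ fails (the assignment making only ψ true is a countermodel).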
REMARK 1.6.13 On the other hand, suppose that every model M for which M ⊚ Γ also satisfies φ. In this case we say that Γ (semantically) entails φ and write Γ ⊚ φ. As we have said, when there is a proof Γ ⊢ φ, then Γ ⊚ φ. But as the notions of proof and validity have been defined independently, Γ ⊚ φ may perhaps happen without a proof, either if φ is supposed to be true but we have forgotten to state some rule of deduction needed to prove it, or if it ought to be false but our class of models is too poor to furnish a counterexample. The proof theory is said to be complete for its semantics if proof and truth do coincide. Kurt Gödel showed that first order logic (what we have considered so far) is complete, but second or higher order logic (Section 2.8) is not.
Both of these famous theorems raise a number of deep questions, many of which are beyond the scope and viewpoint of this book. As we have noted, there was a tendency in traditional logic to study propositions, regarding variables, individuals and terms as secondary. To correct this attitude we must add more detail, and begin with the algebraic fragment alone. The connectives and quantifiers are added one by one, so we shall not reach the classic model theoretic results. On the other hand, models in the old framework had to be discrete sets, whereas for us they may perhaps be topological spaces, or come from some more exotic world. In fact it is possible to fashion such worlds out of the syntactic theories themselves, and they contain generic models. Conversely, Section 7.6 shows how to design your syntax to fit your semantics.
Model theory, with the completeness theorem at its heart, is a ``three sides of a square'' approach to logic: given the axioms for, say, groups, instead of deducing theorems directly from the axioms, it seeks to find out what is true about the proper class of models. We aim to do things directly, in particular recognising the Lindenbaum algebra or ``classifying category'' of a theory as a syntactic construction.
REMARK 1.6.16 I have seen logic introduced to first year undergraduates both in the form of truth-assignments and using proof boxes, and firmly believe that the box method is preferable. As this section has shown, it is a formal version of the way mathematicians (and informaticians) actually reason, even if they claim to use Boolean algebra when asked.
If you want to teach both interpretations, they have to be shown to be equivalent. The formal soundness of (√E) and (⇒I) is more difficult to explain than one might suppose, as it depends on hypotheses, i.e. the notion of scope (cf. proof boxes). Soundness for a whole proof makes use of structural induction, which, whilst students should learn it during the first year, is unfamiliar to those just out of school. Although they need to be aware of the truth-values interpretation, together with the statement and explanation of soundness and completeness, this should be given at the end of a first logic course, after the students have learned to construct some actual proofs.
Vernacular idioms for induction will be discussed in Sections 2.5–2.8 and 3.7–3.9, and declarative programming in Section 4.3.