Theorem Proving in Lean

8 Building Theories and Proofs

In this chapter, we return to a discussion of some of the pragmatic features of Lean that support the development of structured theories and proofs.

8.1 More on Coercions

In Section Coercions, we discussed coercions briefly. The goal of this section is to provide a more precise account.

The most basic type of coercion maps elements of one type to another. For example, a coercion from nat to int allows us to view any element n : nat as an element of int. But some coercions depend on parameters; for example, for any type A, we can view any element l : list A as an element of set A, namely, the set of elements occurring in the list. The corresponding coercion is defined on the "family" of types list A, parameterized by A.
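For example (a minimal sketch, assuming the standard library's coercion from ℕ to ℤ is in effect):

import standard
open int

variables (n : ℕ) (i : ℤ)

check i + n   -- the coercion is inserted around n, yielding an addition in ℤ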

In fact, Lean allows us to declare three kinds of coercions:

  • from a family of types to another family of types
  • from a family of types to the class of sorts
  • from a family of types to the class of function types

The first kind of coercion allows us to view any element of a member of the source family as an element of a corresponding member of the target family. The second kind of coercion allows us to view any element of a member of the source family as a type. The third kind of coercion allows us to view any element of the source family as a function. Let us consider each of these in turn.

In type theory terminology, an element F : Π x1 : A1, ..., xn : An, Type is called a family of types. For every sequence of arguments a1 : A1, ..., an : An, F a1 ... an is a type, so we think of F as being a family parameterized by these arguments. A coercion of the first kind is of the form

c : Π x1 : A1, ..., xn : An, y : F x1 ... xn, G b1 ... bm

where G is another family of types, and the terms b1, ..., bm may depend on x1, ..., xn, and y. This allows us to write f t where t is of type F a1 ... an but f expects an argument of type G y1 ... ym, for some y1 ... ym. For example, if F is list : Π A : Type, Type and G is set : Π A : Type, Type, then a coercion c : Π A : Type, list A → set A allows us to pass an argument of type list T for some T any time an element of type set T is expected. These are the types of coercions we considered in Section Coercions.
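To make this concrete, here is a sketch of how such a coercion could be declared by hand. The definition to_set below is an illustration written for this example (using the fact that set A is, definitionally, A → Prop); it is not the library's actual declaration:

import data.list data.set
open list

-- view a list as the predicate that holds of exactly its elements
definition to_set {A : Type} (l : list A) : set A := λ a, a ∈ l

attribute to_set [coercion]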

Let us now consider the second kind of coercion. By the class of sorts, we mean the collection of universes Type.{i}. A coercion of the second kind is of the form

c : Π x1 : A1, ..., xn : An, F x1 ... xn → Type

where F is a family of types as above. This allows us to write s : t whenever t is of type F a1 ... an. In other words, the coercion allows us to view the elements of F a1 ... an as types. We will see in a later chapter that this is very useful when defining algebraic structures in which one component, the carrier of the structure, is a Type. For example, we can define a semigroup as follows:

structure Semigroup : Type :=
(carrier : Type)
(mul : carrier → carrier → carrier)
(mul_assoc : ∀ a b c : carrier, mul (mul a b) c = mul a (mul b c))

notation a `*` b := Semigroup.mul _ a b

In other words, a semigroup consists of a type, carrier, and a multiplication, mul, with the property that the multiplication is associative. The notation command allows us to write a * b instead of Semigroup.mul S a b whenever we have a b : carrier S; notice that Lean can infer the argument S from the types of a and b. The function Semigroup.carrier maps the class Semigroup to the sort Type:

check Semigroup.carrier

If we declare this function to be a coercion, then whenever we have a semigroup S : Semigroup, we can write a : S instead of a : Semigroup.carrier S:

attribute Semigroup.carrier [coercion]

example (S : Semigroup) (a b : S) : a * b * a = a * (b * a) :=
!Semigroup.mul_assoc

It is the coercion that makes it possible to write (a b : S).

By the class of function types, we mean the collection of Pi types Π z : B, C. The third kind of coercion has the form

c : Π x1 : A1, ..., xn : An, y : F x1 ... xn, Π z : B, C

where F is again a family of types and B and C can depend on x1, ..., xn, y. This makes it possible to write t s whenever t is an element of F a1 ... an. In other words, the coercion enables us to view elements of F a1 ... an as functions. Continuing the example above, we can define the notion of a morphism between semigroups:

structure morphism (S1 S2 : Semigroup) : Type :=
(mor : S1 → S2)
(resp_mul : ∀ a b : S1, mor (a * b) = (mor a) * (mor b))

In other words, a morphism from S1 to S2 is a function from the carrier of S1 to the carrier of S2 (note the implicit coercion) that respects the multiplication. The projection morphism.mor takes a morphism to the underlying function:

check morphism.mor -- morphism ?S1 ?S2 → ?S1 → ?S2

As a result, it is a prime candidate for the third type of coercion.

attribute morphism.mor [coercion]

example (S1 S2 : Semigroup) (f : morphism S1 S2) (a : S1) :
  f (a * a * a) = f a * f a * f a :=
calc
  f (a * a * a) = f (a * a) * f a : morphism.resp_mul f
            ... = f a * f a * f a : morphism.resp_mul f

With the coercion in place, we can write f (a * a * a) instead of morphism.mor f (a * a * a). When the morphism, f, is used where a function is expected, Lean inserts the coercion.

Remember that you can create a coercion whose scope is limited to the current namespace or section using the local modifier:

local attribute morphism.mor [coercion]

You can also declare a persistent coercion by assigning the attribute when you define the function initially, as described in Section Coercions. Coercions that are defined in a namespace "live" in that namespace, and are made active when the namespace is opened. If you want a coercion to be active as soon as a module is imported, be sure to declare it at the "top level," i.e. outside any namespace.
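For example, the attribute can be assigned inline when the function is defined. The definition below is a hypothetical illustration (the library already provides its own coercion from ℕ to ℤ):

import data.int

-- declared as a coercion at the point of definition
definition nat_to_int [coercion] (n : ℕ) : ℤ := int.of_nat n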

Remember also that you can instruct Lean's pretty-printer to show coercions with set_option, and you can print all the coercions in the environment using print coercions:

theorem test (S1 S2 : Semigroup) (f : morphism S1 S2) (a : S1) :
  f (a * a * a) = f a * f a * f a :=
calc
  f (a * a * a) = f (a * a) * f a : morphism.resp_mul f
            ... = f a * f a * f a : morphism.resp_mul f

set_option pp.coercions true
check test
print coercions

Lean will also chain coercions as necessary. You can think of the coercion declarations as forming a directed graph where the nodes are families of types and the edges are the coercions between them. More precisely, each node is either a family of types, the class of sorts, or the class of function types. The latter two are sinks in the graph. Internally, Lean automatically computes the transitive closure of this graph, in which the "paths" correspond to chains of coercions.
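For instance, assuming coercions from ℕ to ℤ and from ℤ to ℚ are both active, as in the standard library, a type ascription can trigger the composite coercion (a sketch):

import data.rat

variables (n : ℕ)

check (n : ℚ)   -- elaborated via the chain ℕ → ℤ → ℚ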

8.2 More on Implicit Arguments

In Section Implicit Arguments, we discussed implicit arguments. For example, if a term t has type Π {x : A}, P x, the variable x is implicit in t, which means that whenever you write t, a placeholder, or "hole," is inserted, so that t is replaced by @t _. If you don't want that to happen, you have to write @t instead.
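For example, the first argument of eq.refl is implicit, so Lean normally inserts a hole for it; prefixing @ turns that off (a minimal illustration):

variables (A : Type) (a : A)

check eq.refl a      -- a = a, with the type argument filled in by a hole
check @eq.refl A a   -- the same term, with every argument written explicitly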

Dual to the @ symbol is the exclamation mark, !, which essentially makes explicit arguments implicit by inserting underscores for them. Look at the terms that result from the following definitions to see this in action:

definition foo (n m k l : ℕ) : (n - m) * (k + l) = (k + l) * (n - m) :=
!mul.comm

print foo
-- definition foo : ∀ (n m k l : ℕ), (n - m) * (k + l) = (k + l) * (n - m)
-- λ (n m k l : ℕ), mul.comm (n - m) (k + l)

definition foo2 (n m k l : ℕ) : (n + k) + l = (k + l) + n :=
!add.assoc ⬝ !add.comm

print foo2
-- definition foo2 : ∀ (n : ℕ), ℕ → (∀ (k l : ℕ), n + k + l = k + l + n)
-- λ (n m k l : ℕ), add.assoc n k l ⬝ add.comm n (k + l)

definition foo3 (l : ℕ) (H : ∀ (n : ℕ), l + 2 ≠ 2 * (n + 1)) (n : ℕ) :
  l ≠ 2 * n :=
assume K : l = 2 * n,
absurd (show l + 2 = 2 * n + 2, from K ▸ rfl) !H

print foo3
-- definition foo3 : ∀ (l : ℕ),
--   (∀ (n : ℕ), l + 2 ≠ 2 * (n + 1)) → (∀ (n : ℕ), l ≠ 2 * n)
-- λ (l : ℕ) (H : ∀ (n : ℕ), l + 2 ≠ 2 * (n + 1)) (n : ℕ)
--   (K : l = 2 * n),
--   absurd (show l + 2 = 2 * n + 2, from K ▸ rfl) (H n)

In the first two examples, the exclamation mark indicates that the arguments to mul.comm, add.assoc, and add.comm should be made implicit, saving us the trouble of having to write lots of underscores. Note, by the way, that in the last example we use a neat trick: to show l + 2 = 2 * n + 2 we take the reflexivity proof rfl : l + 2 = l + 2 and then substitute 2 * n for the second l.

More precisely, if t is of a function type, the expression !t makes all the arguments implicit up until the first argument that cannot be inferred from later arguments or the return type. This is usually what you want; for example, when applying a theorem, the arguments can often be inferred from context, but the hypotheses need to be provided explicitly.

In the following example, we declare P and p without implicit arguments, and then use the exclamation mark to make them implicit after the fact.

variables (P : Π (n m : ℕ) (v : vector bool n) (w : vector bool m), Type)
          (p : Π (n m : ℕ) (v : vector bool n) (w : vector bool m), P n m v w)
          (n m : ℕ) (v : vector bool n) (w : vector bool m)

set_option pp.metavar_args false

eval (!p : P n m v w)   -- p n m v w
eval (!p : P n n v v)   -- p n n v v
check !p                -- p ?n ?m ?v ?w : P ?n ?m ?v ?w
eval (!P v w : Type)    -- P n m v w
eval (!p : !P w v)      -- p m n w v

Notice that we set pp.metavar_args to simplify the output. In the first eval, the expression !p inserts underscores for all explicit arguments of p, because the values of all of the placeholders in p _ _ _ _ can be inferred from its type P n m v w. The same is true in the second example. In the subsequent check statement, holes are inserted for the arguments of p, but there is no expected type from which to infer them. Hence there are still metavariables in the output.

For P things are different: if we know that the type of P _ _ _ _ is Type, we don't have enough information to assign values to the holes. However, we can fill the first two holes if we are given the last two arguments. Thus !P v w is interpreted as P _ _ v w, and from this we can infer that the holes must be n and m, respectively.

Here are some more examples of this behavior.

check @add_lt_add_right

definition foo (n m k : ℕ) (H : n < m) : n + k < m + k :=
!(add_lt_add_right H)

example {n m k l : ℕ} (H : n < m) (K : m + l < k + l) : n < k + l :=
calc
  n ≤ n + l : !le_add_right
    ... < m + l : !foo H
    ... < k + l : K

In the following example we show that a reflexive euclidean relation is both symmetric and transitive. Notice that we set the variable R to be an explicit argument of reflexive, symmetric, transitive and euclidean. However, for the theorems it is more convenient to make R implicit. We can do this with the command variable {R}, which makes R implicit from that point on.

variables {A : Type} (R : A → A → Prop)

definition reflexive : Prop := ∀ (a : A), R a a
definition symmetric : Prop := ∀ {a b : A}, R a b → R b a
definition transitive : Prop := ∀ {a b c : A}, R a b → R b c → R a c
definition euclidean : Prop := ∀ {a b c : A}, R a b → R a c → R b c

variable {R}

theorem th1 (refl : reflexive R) (eucl : euclidean R) : symmetric R :=
take a b : A, assume (H : R a b),
show R b a, from eucl H !refl

theorem th2 (symm : symmetric R) (eucl : euclidean R) : transitive R :=
take (a b c : A), assume (H : R a b) (K : R b c),
have H' : R b a, from symm H,
show R a c, from eucl H' K

-- ERROR:
/-
theorem th3 (refl : reflexive R) (eucl : euclidean R) : transitive R :=
th2 (th1 refl eucl) eucl
-/

theorem th3 (refl : reflexive R) (eucl : euclidean R) : transitive R :=
@th2 _ _ (@th1 _ _ @refl @eucl) @eucl

However, when we want to combine th1 and th2 into th3 we notice something funny. If we just write the proof th2 (th1 refl eucl) eucl we get an error. The reason is that eucl has type ∀ {a b c : A}, R a b → R a c → R b c, hence eucl is interpreted as @eucl _ _ _. Similarly, the types of th1 and th2 start with a quantification over implicit arguments, hence they are interpreted as th1 _ _ and th2 _ _ _, respectively. We can solve this by writing @eucl, @th1 and @th2, but this is very inconvenient.

A better solution is to use a weaker form of implicit argument, marked with the binders ⦃ and ⦄, or, equivalently, {{ and }}. The first two can be inserted by typing \{{ and \}}, respectively.

definition symmetric : Prop := ∀ ⦃a b : A⦄, R a b → R b a
definition transitive : Prop := ∀ ⦃a b c : A⦄, R a b → R b c → R a c
definition euclidean : Prop := ∀ ⦃a b c : A⦄, R a b → R a c → R b c

Arguments in these binders are still implicit, but they are not added to a term t until t is applied to something. In other words, given an expression t of function type, if the next argument to t is a strong implicit argument, marked with { and }, that implicit argument is inserted eagerly; if the next argument to t is a weak implicit argument, marked with ⦃ and ⦄, the implicit argument is not inserted unless the term is applied to something else. With H : symmetric R, this is what we want. Because we now have H : ∀ ⦃a b : A⦄, R a b → R b a, the expression H is interpreted as @H, but H p is interpreted as @H _ _ p. This allows us to prove th3 in the expected way.

theorem th3 (refl : reflexive R) (eucl : euclidean R) : transitive R :=
th2 (th1 refl eucl) eucl

There is a third kind of implicit argument, used for type classes, and denoted with square brackets, [ and ]. We will explain these kinds of arguments in Chapter Type Classes.
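As a preview, an argument in square brackets is filled in by class inference rather than by ordinary unification. The following sketch assumes the standard library's has_mul class:

import standard

definition square {A : Type} [s : has_mul A] (a : A) : A := a * a

variables (n : ℕ)
check square n   -- the has_mul ℕ instance is found by class inference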

8.3 Elaboration and Unification

When you enter an expression like λ x y z, f (x + y) z for Lean to process, you are leaving information implicit. For example, the types of x, y, and z have to be inferred from the context, the notation + may be overloaded, and there may be implicit arguments to f that need to be filled in as well.

The process of taking a partially-specified expression and inferring what is left implicit is known as elaboration. Lean's elaboration algorithm is powerful, but at the same time, subtle and complex. Working in a system of dependent type theory requires knowing what sorts of information the elaborator can reliably infer, as well as knowing how to respond to error messages that are raised when the elaborator fails. To that end, it is helpful to have a general idea of how Lean's elaborator works.

When Lean is parsing an expression, it first enters a preprocessing phase. First, Lean inserts "holes" for implicit arguments. If term t has type Π {x : A}, P x, then t is replaced by @t _ everywhere. Then, the holes — either the ones inserted in the previous step or the ones explicitly written by the user — in a term are instantiated by metavariables ?M1, ?M2, ?M3, .... Each overloaded notation is associated with a list of choices, that is, the possible interpretations. Similarly, Lean tries to detect the points where a coercion may need to be inserted in an application s t, to make the inferred type of t match the argument type of s. These become choice points too. If one possible outcome of the elaboration procedure is that no coercion is needed, then one of the choices on the list is the identity.

After preprocessing, Lean extracts a list of constraints that need to be solved in order for the term to have a valid type. Each application term s t gives rise to a constraint T1 = T2, where t has type T1 and s has type Π x : T2, T3. Notice that the expressions T1 and T2 will often contain metavariables; they may even be metavariables themselves. Moreover, a definition of the form definition foo : T := t or a theorem of the form theorem bar : T := t generates the constraint that the inferred type of t should be T.

The elaborator now has a straightforward task: find expressions to substitute for all the metavariables so that all of the constraints are simultaneously satisfied. An assignment of terms to metavariables is known as a substitution, and the general task of finding a substitution that makes two expressions coincide is known as a unification problem. (If only one of the expressions contains metavariables, the task is a special case known as a matching problem.)

Some constraints are straightforwardly handled. If f and g are distinct constants, it is clear that there is no way to unify the terms f s_1 ... s_m and g t_1 ... t_n. On the other hand, one can unify f s_1 ... s_m and f t_1 ... t_m by unifying s_1 with t_1, s_2 with t_2, and so on. If ?M is a metavariable, one can unify ?M with any term t simply by assigning t to ?M. These are all aspects of first-order unification, and such constraints are solved first.

In contrast, higher-order unification is much more tricky. Consider, for example, the expressions ?M a b and f (g a) b b. All of the following assignments to ?M are among the possible solutions:

  • λ x y, f (g x) y y
  • λ x y, f (g x) y b
  • λ x y, f (g a) b y
  • λ x y, f (g a) b b

Such problems arise in many ways. For example:

  • When you use induction_on x for an inductively defined type, Lean has to infer the relevant induction predicate.
  • When you write eq.subst e p with an equation e : a = b to convert a proposition P a to a proposition P b, Lean has to infer the relevant predicate (a concrete instance is shown after this list).
  • When you write sigma.mk a b to build an element of Σ x : A, B x from an element a : A and an element b : B a, Lean has to infer the relevant B. (And notice that there is an ambiguity; sigma.mk a b could also denote an element of Σ x : A, B a, which is essentially the same as A × B a.)
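To make the second of these concrete, here is a small instance of the eq.subst problem: Lean has to solve for the implicit predicate by higher-order unification, and the intended solution here is λ x, f x = 0 (a minimal sketch):

import standard

variables (f : ℕ → ℕ) (a b : ℕ)

example (e : a = b) (H : f a = 0) : f b = 0 :=
eq.subst e H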

In cases like this, Lean has to perform a backtracking search to find a suitable value of a higher-order metavariable. It is known that even second-order unification is generally undecidable. The algorithm that Lean uses is not complete (which means that it can fail to find a solution even if one exists) and potentially nonterminating. Nonetheless, it performs quite well in ordinary situations.

Moreover, the elaborator performs a global backtracking search over all the nondeterministic choice points introduced by overloads and coercions. In other words, the elaborator starts by trying to solve the equations with the first choice on each list. Each time the procedure fails, it analyzes the failure, and determines the next viable choice to try.

To complicate matters even further, sometimes the elaborator has to reduce terms using the internal computation rules of the formal system. For example, if it happens to be the case that f is defined to be λ x, g x x, we can unify the expressions f ?M and g a a by assigning a to ?M. In general, any number of computation steps may be needed to unify terms. It is computationally infeasible to try all possible reductions in the search, so, once again, Lean's elaborator relies on an incomplete strategy.

The interaction of computation with higher-order unification is particularly knotty. For the most part, Lean avoids performing computational reduction when trying to solve higher-order constraints. You can override this, however, by marking some symbols with the reducible attribute, as described in Section Reducible Definitions below.

The elaborator relies on additional tricks and gadgets to solve a list of constraints and instantiate metavariables. Below we will see that users can specify that some parts of terms should be filled in by tactics, which can, in turn, invoke arbitrary automated procedures. In the next chapter, we will discuss the mechanism of class inference, which can be configured to execute a Prolog-like search for appropriate instantiations of an implicit argument. These can be used to help the elaborator find implicit facts on the fly, such as the fact that a particular set is finite, as well as implicit data, such as a default element of a type, or the appropriate multiplication in an algebraic structure.

It is important to keep in mind that all these mechanisms interact. The elaborator processes its list of constraints, trying to solve the easier ones first, postponing others until more information is available, and branching and backtracking at choice points. Even small proofs can generate hundreds or thousands of constraints. The elaboration process continues until the elaborator fails to solve a constraint and has exhausted all its backtracking options, or until all the constraints are solved. In the first case, it returns an error message which tries to provide the user with helpful information as to where and why it failed. In the second case, the type checker is asked to confirm that the assignment that the elaborator has found does indeed make the term type check. If all the metavariables in the original expression have been assigned, the result is a fully elaborated, type-correct expression. Otherwise, Lean flags the sources of the remaining metavariables as "placeholders" or "goals" that could not be filled.

8.4 Reducible Definitions

Transparent identifiers can be declared to be reducible, irreducible, or semireducible. By default, a definition is semireducible. This status provides hints that govern the way the elaborator tries to solve higher-order unification problems. As with other attributes, the reducibility status of an identifier has no bearing on type checking at all: once a fully elaborated term is type correct, marking one of the constants it contains reducible does not change its correctness. The type checker in the kernel of Lean ignores such attributes, and there is no problem marking a constant reducible at one point, and then irreducible later on, or vice versa.

The purpose of the annotation is to help Lean's unification procedure decide which declarations should be unfolded. The higher-order unification procedure has to perform case analysis, implementing a backtracking search. At various stages, the procedure has to decide whether a definition C should be unfolded or not.

  • An irreducible definition will never be unfolded during higher-order unification (but can still be unfolded in other situations, for example during type checking).
  • A reducible definition will always be eligible for unfolding.
  • A definition which is semireducible can be unfolded during simple decisions and won't be unfolded during complex decisions. An unfolding decision is simple if the unfolding does not require the procedure to consider an extra case split. It is complex if the unfolding produces at least one extra case, and consequently increases the search space.

You can assign the reducible attribute when a symbol is defined:

definition pr1 [reducible] (A : Type) (a b : A) : A := a

The assignment persists to other modules. You can achieve the same result with the attribute command:

definition pr2 (A : Type) (a b : A) : A := b

-- mark pr2 as reducible
attribute pr2 [reducible]

-- mark pr2 as irreducible
attribute pr2 [irreducible]

The local modifier can be used to instruct Lean to limit the scope to the current namespace or section.

definition pr2 (A : Type) (a b : A) : A := b

local attribute pr2 [irreducible]

When reducibility hints are declared in a namespace, their scope is restricted to the namespace. In other words, even if you import the module in which the attributes are declared, they do not take effect until the namespace is opened. As with coercions, if you want a reducibility attribute to be set whenever a module is imported, be sure to declare it at the top level. See also Section More on Namespaces below for more information on how to import only the reducibility attributes, without exposing other aspects of the namespace.

Finally, we can go back to semireducible using the attribute command:

-- pr2 is semireducible
definition pr2 (A : Type) (a b : A) : A := b

-- mark pr2 as reducible
attribute pr2 [reducible]

-- ...

-- make it semireducible again
attribute pr2 [semireducible]

8.5 Helping the Elaborator

Because proof terms and expressions in dependent type theory can become quite complex, working in dependent type theory effectively involves relying on the system to fill in details automatically. When the elaborator fails to elaborate a term, there are two possibilities. One possibility is that there is an error in the term, and no solution is possible. In that case, your goal, as the user, is to find the error and correct it. The second possibility is that the term has a valid elaboration, but the elaborator failed to find it. In that case, you have to help the elaborator along by providing information. This section provides some guidance in both situations.

If the error message is not sufficient to allow you to identify the problem, a first strategy is to ask Lean's pretty printer to show more information, as discussed in Section Setting Options, using some or all of the following options:

set_option pp.implicit true
set_option pp.universes true
set_option pp.notation false
set_option pp.coercions true
set_option pp.numerals false
set_option pp.full_names true

The following option subsumes all of those settings:

set_option pp.all true

Sometimes, the elaborator will fail with the message that the unifier has exceeded its maximum number of steps. As we noted in the last section, some elaboration problems can lead to nonterminating behavior, and so Lean simply gives up after it has reached a pre-set maximum. You can change this with the set_option command:

set_option unifier.max_steps 100000

This can sometimes help you determine whether there is an error in the term or whether the elaboration problem has simply grown too complex. In the latter case, there are steps you can take to cut down the complexity.

To start with, Lean provides a mechanism to break large elaboration problems down into simpler ones, with a proof ... qed block. Here is the sample proof from Section Examples of Propositional Validities, with additional proof ... qed annotations:

example (p q r : Prop) : p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r) :=
iff.intro
  (assume H : p ∧ (q ∨ r),
    show (p ∧ q) ∨ (p ∧ r), from
      proof
        have Hp : p, from and.elim_left H,
        or.elim (and.elim_right H)
          (assume Hq : q,
            show (p ∧ q) ∨ (p ∧ r), from or.inl (and.intro Hp Hq))
          (assume Hr : r,
            show (p ∧ q) ∨ (p ∧ r), from or.inr (and.intro Hp Hr))
      qed)
  (assume H : (p ∧ q) ∨ (p ∧ r),
    show p ∧ (q ∨ r), from
      proof
        or.elim H
          (assume Hpq : p ∧ q,
            have Hp : p, from and.elim_left Hpq,
            have Hq : q, from and.elim_right Hpq,
            show p ∧ (q ∨ r), from and.intro Hp (or.inl Hq))
          (assume Hpr : p ∧ r,
            have Hp : p, from and.elim_left Hpr,
            have Hr : r, from and.elim_right Hpr,
            show p ∧ (q ∨ r), from and.intro Hp (or.inr Hr))
      qed)

Writing proof t qed as a subterm of a larger term breaks up the elaboration problem as follows: first, the elaborator tries to elaborate the surrounding term, independent of t. If it succeeds, that solution is used to constrain the type of t, and the elaborator processes that term independently. The net result is that a big elaboration problem gets broken down into smaller elaboration problems. This "localizes" the elaboration procedure, which has both positive and negative effects. A disadvantage is that information is insulated, so that the solution to one problem cannot inform the solution to another. The key advantage is that it can simplify the elaborator's task. For example, backtracking points within a proof ... qed do not become backtracking points for the outside term; the elaborator either succeeds or fails to elaborate each independently. As another benefit, error messages are often improved; an error that ultimately stems from an incorrect choice of an overload in one subterm is not "blamed" on another part of the term.

In principle, one can write proof t qed for any term t, but it is used most effectively following a have or show, as in the example above. This is because have and show specify the intended type of the proof ... qed block, reducing any ambiguity about the subproblem the elaborator needs to solve.

The use of proof ... qed blocks with have and show illustrates two general strategies that can help the elaborator: first, breaking large problems into smaller problems, and, second, providing additional information. The first strategy can also be achieved by breaking a large definition into smaller definitions, or breaking a theorem with a large proof into auxiliary lemmas. Even breaking up long terms internal to a proof using auxiliary have statements can help locate the source of an error.

The second strategy, providing additional information, can be achieved by using have, show, (t : T) notation, and #<namespace> (see Section Coercions) to indicate expected types. More directly, it often helps to specify the implicit arguments. When Lean cannot solve for the value of a metavariable corresponding to an implicit argument, you can always use @ to provide that argument explicitly. Doing so will either help the elaborator solve the elaboration problem, or help you find an error in the term that is blocking the intended solution.
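For example, if a term built with eq.symm fails to elaborate, writing the implicit arguments with @ pins down exactly the problem the elaborator has to solve, or exposes the mistake (a minimal sketch):

import standard

example (a b : ℕ) (H : a = b) : b = a :=
@eq.symm ℕ a b H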

In Lean, tactics not only allow us to invoke arbitrary automated procedures, but also provide an alternative approach to construct proofs and terms. For many users, this is one of the most effective mechanisms to help the elaborator. A tactic can be viewed as a "recipe", a sequence of commands or instructions, that describes how to build a proof. This recipe may be as detailed as we want. A tactic T can be embedded into proof terms by writing by T or begin T end. These annotations instruct Lean that tactic T should be invoked to construct the term in the given location. As with proof ... qed, the elaborator tries to elaborate the surrounding terms before executing T. In fact, the expression proof t qed is just syntactic sugar for by exact t, which invokes the exact tactic. We will discuss tactics in Chapter Tactic-Style Proofs.
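For illustration, the following proofs should all elaborate to the same term: a plain proof term, the same term passed to the exact tactic with by, and the equivalent proof ... qed form (a minimal sketch):

example (p q : Prop) (Hp : p) (Hq : q) : p ∧ q :=
and.intro Hp Hq

example (p q : Prop) (Hp : p) (Hq : q) : p ∧ q :=
by exact (and.intro Hp Hq)

example (p q : Prop) (Hp : p) (Hq : q) : p ∧ q :=
proof and.intro Hp Hq qed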

If you are running Lean using Emacs, you can "profile" the elaborator and type checker, to find out where they are spending all their time. Type M-x lean-execute to run an independent Lean process manually and add the option --profile. The output buffer will then report the times required by the elaborator and type checker, for each definition and theorem processed. If you ever find the system slowing down while processing a file, this can help you locate the source of the problem.

8.6 Sections

Lean provides various sectioning mechanisms that help structure a theory. We saw in Section Variables and Sections that the section command makes it possible not only to group together elements of a theory that go together, but also to declare variables that are inserted as arguments to theorems and definitions, as necessary. In fact, Lean has two ways of introducing local elements into the sections, namely, as variables or as parameters.

Remember that the point of the variable command is to declare variables for use in theorems, as in the following example:

import standard
open nat algebra

section
  variables x y : ℕ

  definition double := x + x

  check double y
  check double (2 * x)

  theorem t1 : double x = 2 * x :=
  calc
    double x = x + x         : rfl
         ... = 1 * x + x     : one_mul
         ... = 1 * x + 1 * x : one_mul
         ... = (1 + 1) * x   : right_distrib
         ... = 2 * x         : rfl

  check t1 y
  check t1 (2 * x)

  theorem t2 : double (2 * x) = 4 * x :=
  calc
    double (2 * x) = 2 * (2 * x) : t1
               ... = 2 * 2 * x   : mul.assoc
               ... = 4 * x       : rfl
end

The definition of double does not have to declare x as an argument; Lean detects the dependence and inserts it automatically. Similarly, Lean detects the occurrence of x in t1 and t2, and inserts it automatically there, too. Note that double does not have y as argument. Variables are only included in declarations where they are actually mentioned. To ask Lean to include a variable in every definition in a section, use the include command. This is often useful with type classes, and is discussed in Section Instances in Sections in the next chapter.

Notice that the variable x is generalized immediately, so that even within the section double is a function of x, and t1 and t2 depend explicitly on x. This is what makes it possible to apply double and t1 to other expressions, like y and 2 * x. It corresponds to the ordinary mathematical locution "in this section, let x and y range over the natural numbers." Whenever x and y occur, we assume they denote natural numbers.

Sometimes, however, we wish to fix a single value in a section. For example, in an ordinary mathematical text, we might say "in this section, we fix a type, A, and a binary relation on A." The notion of a parameter captures this usage:

import standard

section
  parameters {A : Type} (R : A → A → Type)
  hypothesis transR : ∀ {x y z}, R x y → R y z → R x z

  variables {a b c d e : A}

  theorem t1 (H1 : R a b) (H2 : R b c) (H3 : R c d) : R a d :=
  transR (transR H1 H2) H3

  theorem t2 (H1 : R a b) (H2 : R b c) (H3 : R c d) (H4 : R d e) : R a e :=
  transR H1 (t1 H2 H3 H4)

  check t1
  check t2
end

check t1
check t2

Here, hypothesis functions as a synonym for parameter, so that A, R, and transR are all parameters in the section. This means that, as before, they are inserted as arguments to definitions and theorems as needed. But there is a difference: within the section, t1 is an abbreviation for @t1 A R transR, which is to say, these arguments are fixed until the section is closed. This means that you do not have to specify the explicit arguments R and transR when you write t1 H2 H3 H4, in contrast to the previous example. But it also means that you cannot specify other arguments in their place. In this example, making R a parameter is appropriate if R is the only binary relation you want to reason about in the section. If you want to apply your theorems to arbitrary binary relations within the section, make R a variable.

Notice that Lean is consistent when it comes to providing alternative syntax for Prop-valued variants of declarations:

Type         Prop
---------    ----------
constant     axiom
variable     premise
parameter    hypothesis
take         assume

Lean also allows you to use conjecture in place of hypothesis.

The possibility of declaring parameters in a section also makes it possible to define "local notation" that depends on those parameters. In the example below, as long as the parameter m is fixed, we can write a ≡ b for equivalence modulo m. As soon as the section is closed, however, the dependence on m becomes explicit, and the notation a ≡ b is no longer valid.

import data.int
open int eq.ops algebra

section mod_m
  parameter (m : ℤ)
  variables (a b c : ℤ)

  definition mod_equiv := (m ∣ b - a)

  local infix ≡ := mod_equiv

  theorem mod_refl : a ≡ a :=
  show m ∣ a - a, from (sub_self a)⁻¹ ▸ !dvd_zero

  theorem mod_symm (H : a ≡ b) : b ≡ a :=
  have H1 : m ∣ -(b - a), from iff.mpr !dvd_neg_iff_dvd H,
  have H2 : m ∣ a - b, from neg_sub b a ▸ H1,
  H2

  theorem mod_trans (H1 : a ≡ b) (H2 : b ≡ c) : a ≡ c :=
  have H1 : m ∣ (c - b) + (b - a), from !dvd_add H2 H1,
  have H2 : m ∣ c - a, from
    eq.subst
      (calc
        (c - b) + (b - a) = c - b + b - a  : add.assoc
                      ... = c + -b + b - a : rfl
                      ... = c - a          : neg_add_cancel_right)
      H1,
  H2
end mod_m

check mod_refl   -- ∀ (m a : ℤ), mod_equiv m a a
check mod_symm   -- ∀ (m a b : ℤ), mod_equiv m a b → mod_equiv m b a
check mod_trans  -- ∀ (m a b c : ℤ), mod_equiv m a b → mod_equiv m b c → mod_equiv m a c

8.7 More on Namespaces

Recall from Section Namespaces that namespaces not only package shorter names for theorems and identifiers, but also things like notation, coercions, classes, rewrite rules, and so on. You can ask Lean to display a list of these "metaclasses":

print metaclasses

These can be opened independently using modifiers to the open command:

import data.nat

open [declaration] nat
open [notation] nat
open [coercion] nat
open [class] nat
open [abbreviation] nat

For example, open [coercion] nat makes the coercions in the namespace nat available (and nothing else). You can open multiple metaclasses on one line:

import data.nat

open [declaration] [notation] [coercion] nat

You can also open a namespace while excluding certain metaclasses. For example,

import data.nat

open - [notation] [coercion] nat

imports all metaclasses but [notation] and [coercion]. You can limit the scope of an open command by putting it in a section. For example, here we temporarily import notation from nat:

import data.nat

section
  open [notation] nat
  /- ... -/
end

You can also import only certain theorems by providing an explicit list in parentheses:

import data.nat
open nat (add sub_sub zero_div)

check add
check sub_sub
check zero_div

The open command above imports all metaobjects from nat, but limits the shortened identifiers to the ones listed. If you want to import only the shortened identifiers, use the following:

import data.nat

open [declaration] nat (add sub_sub zero_div)

When you open a namespace, you can rename identifiers on the fly:

import data.nat
open nat (renaming add -> plus)

check plus

Or you can exclude a list of items from being imported:

import data.nat
open nat (hiding add)

Within a namespace, you can declare certain identifiers to be protected. This means that when the namespace is opened, the short versions of these names are not made available:

namespace foo
  protected definition bar (A : Type) (x : A) := x
end foo

open foo

check foo.bar
-- "check bar" yields an error

In the Lean library, common names are protected to avoid clashes. For example, we want to write nat.rec_on, int.rec_on, and list.rec_on, even when all of these namespaces are open, to avoid ambiguity and overloading. You can always define a local abbreviation to use the shorter name:

import data.list
open list

local abbreviation induction_on := @list.induction_on

check induction_on

Alternatively, you can "unprotect" the definition by renaming it when you open the namespace:

import data.list
open list (renaming induction_on → induction_on)

check induction_on

As a third alternative, you can obtain an alias for the shorter name by opening the namespace for that identifier only:

import data.list
open list (induction_on)

check induction_on

You may find that at times you want to cobble together a namespace, with notation, rewrite rules, or whatever, from existing namespaces. Lean provides an export command for that. The export command supports the same options and modifiers as the open command: when you export to a namespace, aliases for all the items you export become part of the new namespace. For example, below we define a new namespace, my_namespace, which includes items from bool, nat, and list.

import standard

namespace my_namespace
  export bool (hiding measurable)
  export nat
  export list
end my_namespace

check my_namespace.band
check my_namespace.zero
check my_namespace.append

open my_namespace

check band
check zero
check append

This makes it possible for you to define nicely prepackaged configurations for those who will use your theories later on.

Sometimes it is useful to hide auxiliary definitions and theorems from the outside world, for example, so that they do not clutter up the namespace. The private keyword allows you to do this. The name of a private definition is only visible in the module/file where it was declared.

import data.nat
open nat

private definition inc (x : nat) := x + 1

private theorem inc_eq_succ (x : nat) : succ x = inc x :=
rfl

In this example, the definition inc and theorem inc_eq_succ are not visible or accessible in modules that import this one.