This is a chapter-by-chapter overview of the dissertation of Jonathan Sterling:

Jonathan Sterling.
**First Steps in Synthetic Tait Computability: The Objective Metatheory of Cubical Type Theory**.
Sep 13, 2021.
10.5281/zenodo.6990769.
[sterling-2021-thesis]


The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics.

In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.

# Background on Homotopy and Cubical Type Theory
[jms-000R]


For more than four decades, dependent type theory has been positioned as the “common language” that can finally unify mathematics and computer programming: while it has never been controversial that a computer program is a form of mathematical construction, the running hypothesis of the type theoretic community has been the converse to this claim, namely that mathematical constructions should be viewed as programs that can in principle be executed by a physical machine — roughly, **sets** = **types** and **elements** = **programs**. Thus the struggle to realize this type theoretic hypothesis has been a two-way process, punctuated by moments at which the mathematical meaning of a programming construct is elucidated, or at which the computational content of a mathematical construct is uncovered.

In the current millennium, a new identification has been taking shape in which **types** = **\infty-groupoids** (homotopy types), which are an infinite-dimensional generalization of sets; the origins of this new perspective on type theory lie with Hofmann and Streicher’s 1998 groupoid interpretation of type theory, combined with the revolutionary contributions of Voevodsky and of Awodey and Warren respectively. The main feature of the new language, dubbed **homotopy type theory** or **HoTT**, is that isomorphisms between types are equipped with a new induction rule called univalence stating that all type theoretic constructs respect isomorphisms: to a first approximation, if A \cong B then P(A) \cong P(B) for any P. The univalence principle is motivated by the phenomenon of homotopy invariance that pervades the large-scale structure of modern-day mathematics, from algebraic topology to algebraic geometry to mathematical physics; as a programming construct, univalence suggests new approaches to both generic and modular programming.

Thus one of the main projects for the first decade of homotopy type theory was to substantiate the relationship between HoTT and mathematics on the one hand, and between HoTT and computer programming on the other. The question of whether homotopy type theoretic language can be interpreted in sheaves on arbitrary infinite-dimensional spaces (\infty-topoi) was finally resolved satisfactorily by Shulman in 2019. On the other hand, the computational interpretation of homotopy type theory has involved a reformulation of HoTT called **cubical type theory** that reorganizes the higher-dimensional structure discussed above by considering all the points, lines, squares, cubes, hypercubes, and so on that one can draw in a given type. The computational interpretation of the new cubical type theory can be split into two different conjectures:

**Conjecture** (cubical canonicity). [jms-000S] Every closed term of boolean type in cubical type theory is definitionally equal to a constant.

**Conjecture** (decidability of cubical type theory). [jms-000T] Definitional equality and typing of cubical type theory are decidable.

The canonicity conjecture ensures that terms written in cubical type theory can be thought of as computer programs, and was verified independently by Huber and Angiuli for different variants of cubical type theory. The decidability conjecture is no less important, as it is a necessary ingredient to implement a *typechecker* or a *compiler* for a programming language based on cubical type theory.
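
To make the canonicity statement concrete, here is a minimal runnable sketch in Python; the toy language and all names in it are ours, and it is drastically simpler than cubical type theory, but it illustrates the property: every closed boolean term evaluates to a constant.

```python
# Toy illustration of canonicity (names ours, not the dissertation's):
# every closed term of boolean type evaluates to one of the two constants.
from dataclasses import dataclass

@dataclass(frozen=True)
class TT:
    """The constant true."""

@dataclass(frozen=True)
class FF:
    """The constant false."""

@dataclass(frozen=True)
class If:
    """Boolean elimination: if scrut then then_ else else_."""
    scrut: object
    then_: object
    else_: object

def evaluate(tm):
    """Evaluate a closed boolean term to canonical form (TT or FF)."""
    if isinstance(tm, (TT, FF)):
        return tm
    if isinstance(tm, If):
        scrut = evaluate(tm.scrut)
        return evaluate(tm.then_ if isinstance(scrut, TT) else tm.else_)
    raise ValueError("not a closed boolean term")

# Canonicity in action: a closed term computes to a constant.
assert evaluate(If(TT(), FF(), TT())) == FF()
```

Decidability is analogous in spirit: comparing canonical (normal) forms of two terms decides their equality, which is exactly what a typechecker needs.
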

# Contributions of *First Steps in Synthetic Tait Computability*
[jms-000U]

The main contribution of this dissertation is a new framework for the syntactic metatheory of type theory, **synthetic Tait computability**, that generalizes and abstracts the method of *Tait computability* or *logical predicates*; in the past two years, synthetic Tait computability has played a central role in solving several problems in both type theory and core programming languages, suggesting that this dissertation presents a lasting and transformative contribution to the state of the art.

# Detailed Overview of *First Steps in Synthetic Tait Computability*
[jms-000V]


We give a chapter-by-chapter overview of *First Steps in Synthetic Tait Computability*. The novel contributions are contained in Chapters 4, 5, 7, and 8; the remaining Chapters 0-3 and 6 are primarily expository.

# Part I: Dependent Type Theory
[jms-000W]


# Chapter 0: Conspectus on Type Theory
[jms-000X]


Chapter 0 is an expository account of the development of dependent type theory and its computational interpretation, closing with the *Cubical Hypothesis* that the results of this dissertation see confirmed.

# Chapter 1: Objective Syntax of Dependent Types
[jms-000Y]


To state and prove theorems like canonicity and decidability for a type theory, we must have a mathematical definition of the *syntax* of type theory. Conventionally, the syntax of type theory has been studied in several layers: one starts with a definition of “raw” syntax as trees labeled by the names of the generating operations, quotients these trees under permutation of bound variables, and then layers on top of this an additional inductively defined formalism expressing the well-formedness of types, well-formedness of terms, definitional equality of types, and definitional equality of terms. After this, one verifies that definitional equivalence classes of well-formed types and terms can be used as the raw materials to construct a universal model of the type theory that has a universal property: any other model of type theory can be equipped with a unique structure-preserving homomorphism from the universal model. The described universal property determines the universal model up to unique isomorphism, if such a model exists.
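
The first layers of this process can be sketched concretely; the following Python fragment (names and grammar ours, for an untyped toy calculus) represents raw terms as labeled trees and computes the quotient by renaming of bound variables via conversion to de Bruijn indices.

```python
# Raw syntax as labeled trees, quotiented by renaming of bound variables.
# Two raw terms are alpha-equivalent iff their de Bruijn forms are equal.
# Grammar (illustrative): tm ::= ('var', x) | ('lam', x, body) | ('app', f, a)

def to_debruijn(tm, env=()):
    """Replace named variables by their distance to the binding 'lam'."""
    tag = tm[0]
    if tag == 'var':
        return ('var', env.index(tm[1]))
    if tag == 'lam':
        return ('lam', to_debruijn(tm[2], (tm[1],) + env))
    if tag == 'app':
        return ('app', to_debruijn(tm[1], env), to_debruijn(tm[2], env))
    raise ValueError(f"unknown tag: {tag}")

# Permuting bound variable names does not change the quotiented tree:
assert to_debruijn(('lam', 'x', ('var', 'x'))) == \
       to_debruijn(('lam', 'y', ('var', 'y'))) == ('lam', ('var', 0))
```

The well-formedness judgments and definitional equality would then be layered on top of such trees, and only their quotient enters the universal model.
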

We refer to the painstaking process described above as the subjective metatheory, building on the Hegel–Lawvere distinction between objective and subjective approaches to logic. The objective metatheory, in contrast, involves stating and proving results about type theories and programming languages relying only on the universal property of the universal model and not on any specifics of its presentation; the advantage of the objective metatheory is that it is simpler, more direct, more modular, and more composable.

Chapter 1 argues that the subjective metatheory in the sense described is redundant: the decidability conjecture can be stated with respect to any representative of the universal model and does not depend in any way on the raw syntax of type theory, and moreover, for all the type theories considered in this dissertation the existence of at least one representative of the universal model is guaranteed for somewhat trivial reasons that have nothing to do with the specifics of type theory. In this chapter, we develop a logical framework for specifying type theories modularly and working with their universal models in an objective fashion.

# Part II: Mathematical Background
[jms-000Z]


# Chapter 2: The Language of Topoi
[jms-0010]


A topos may be viewed both as a kind of generalized universe of sets *and* as a kind of generalized topology. The viewpoint of topos theory *qua* generalized topology plays an important role in this dissertation, and yet it nonetheless remains unfamiliar to most computer scientists and logicians. For this reason, Chapter 2 provides sufficient exposition to understand the use of topoi in the remainder of the dissertation, focusing on the *recollement* or *gluing* of a topos from a pair of complementary open and closed subtopoi, a classical construction that provides the geometrical basis for synthetic Tait computability.
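
As a schematic reminder (this is standard topos theory; the notation below is ours, not the dissertation's), the gluing of a topos along a lex functor is a comma category:

```latex
% Artin gluing along a lex functor \rho : E_o -> E_c: the glued topos is
% the comma category whose objects are triples (A, B, f) with A in E_o,
% B in E_c, and f : B -> \rho A.
\[
  \mathsf{Gl}(\rho) \;=\; \mathcal{E}_{c} \downarrow \rho,
  \qquad
  \text{objects:}\quad \prn{A \in \mathcal{E}_{o},\; B \in \mathcal{E}_{c},\; f : B \to \rho A}
\]
% Here E_o plays the role of the open ("syntactic") subtopos and E_c the
% closed complement; a glued object is thus a syntactic object A together
% with a proof-relevant predicate B lying over it.
```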

# Chapter 3: The Theory of Universes
[jms-0011]


# Part III: Synthetic Tait Computability
[jms-0012]


# Chapter 4: Tait’s Method of Computability
[jms-0013]


It is simple enough to verify **negative** properties of a formal system, e.g. the non-derivability of a given assertion \Phi: find a mathematical object that models all the rules of the formal system and yet refutes \Phi. In contrast, it is much harder to verify any non-trivial **positive** property of a formal system (such as canonicity, normalization, decidability, etc.). To handle such results, new techniques were needed — and delivered in the late 1960s by Tait, Martin-Löf, and others under the name of Tait’s method of computability or logical predicates. Since its inception, Tait’s method has been the primary tool for verifying positive properties of logics, programming languages, and type theories. Early on, Freyd noticed that logical predicates arguments can be rephrased as model constructions that glue together geometrical objects corresponding to *syntax* (object) and *set theory* (meta), setting the stage for this thesis. Thus, despite appearances, both positive and negative properties can be proved using semantic methods.
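
The shape of Tait's method can be sketched in a few lines of Python (a drastically simplified toy, with all names ours): the computability predicate is defined by recursion on types, and positive properties such as canonicity fall out at base type.

```python
# Toy "logical predicate" by recursion on types (illustrative only):
#   ty ::= 'bool' | ('arr', dom, cod)
# At 'bool', a closed program is computable when it is a canonical constant;
# at ('arr', dom, cod), when it maps computable inputs to computable outputs.

def computable(ty, prog):
    if ty == 'bool':
        return prog is True or prog is False
    _, dom, cod = ty
    # A genuine proof quantifies over *all* computable arguments; a sketch
    # can only spot-check finitely many of them.
    samples = [True, False] if dom == 'bool' else [lambda x: x]
    return all(computable(cod, prog(a)) for a in samples)

# "Fundamental lemma" flavour: well-typed closed programs are computable,
# so closed boolean programs are canonical constants.
negate = lambda b: not b
assert computable(('arr', 'bool', 'bool'), negate)
assert computable('bool', negate(True))
```
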

In the subsequent development of the computability method for applications in computer science, *indexed* variants of the logical predicates have proved to be fundamental and a number of variations on indexed logical predicates have appeared including the Kripke logical predicates of Jung and Tiuryn and the much more sophisticated *Grothendieck* logical predicates of Fiore and Simpson as well as Altenkirch, Dybjer, Hofmann, and Scott. This chapter points out that all of these forms of indexing arise in the same way from what is referred to as a **figure shape**, a continuous map into the classifying space of “Henkin models” of a given theory. Then the (Kripke, Grothendieck, etc.) logical predicates model is presented much more simply as the Artin gluing of this morphism’s inverse image.

An explicit proof of canonicity for the simply typed \lambda-calculus motivates the abstraction and axiomatization of the geometry of figure shapes and their gluings as a new language for syntactic metatheory, namely **synthetic Tait computability**. The idea of synthetic Tait computability is to treat both object-level notions (e.g. the collection of terms of type \TpBool) and meta-level notions (e.g. a normal form of a given term) in the same language by means of a pair of lex idempotent modalities. One strength of this presentation is that both object-level and meta-level notions can be treated using higher-order abstract syntax (HOAS) in the sense of Hofmann, which greatly simplifies the manipulation of variables.
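
Schematically, the two modalities are the standard open and closed modalities associated with a proposition; the notation below is ours, not the dissertation's:

```latex
% Fix a proposition P classifying the "syntactic" open subtopos. The two
% lex idempotent modalities are then the open and closed modalities for P:
\[
  \bigcirc A \;=\; \prn{P \to A}
  \qquad\qquad
  \bullet A \;=\; A \ast P \;=\; A +_{A \times P} P
\]
% Object-level (syntactic) notions are \bigcirc-modal, meta-level notions
% (e.g. normal forms) are \bullet-modal, and a computability structure is
% a type carrying both aspects at once.
```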

The first demonstration of the power and modularity of synthetic Tait computability is a new proof of the canonicity property for Martin-Löf type theory. Unlike traditional proofs of canonicity via non-synthetic Tait computability, the synthetic version is completely modular and broken up into general-purpose lemmas that are stated at a high level of abstraction and can be reused in proofs of *different properties* for *different type theories*. (Indeed, some of the constructions isolated in this chapter are used off the shelf in Chapter 7 to prove normalization for cubical type theory.) The modularization of syntactic metatheory is one of the main contributions of this dissertation.

# Chapter 5: Synthetic Normalization by Evaluation
[jms-0015]


Chapter 5 develops the *Tait saturation yoga* for dependent types in the synthetic setting, an important closure condition on logical predicates that comprised one of the main innovations of Tait in the context of simply typed combinators. Although this chapter is intended only as a “dry run” for the main result (to be exposed in Chapter 7), the normalization argument presented here has intrinsic value: it is the simplest and most direct proof of normalization and decidability for Martin-Löf type theory with \eta-laws and cumulative universes that has appeared in the literature so far.

# Part IV: Cubical Type Theory
[jms-0016]


# Chapter 6: Cartesian Cubical Type Theory
[jms-0017]


Cubical type theory extends dependent type theory with an abstract interval \II and a notion of *path* or *identification* \TpPath{A}{u}{v} between two elements of any type; for instance, to identify u,v:A is the same as to construct a function p:\II\to A such that p\,0 = u and p\,1 = v. Terms involving *variables* of type \II can exhibit complex computational behavior that is difficult to account for: for instance, if p:\TpPath{A}{u}{v} and i:\II are variables, then the application p\,i:A is a normal form, but it must nonetheless reduce to either u or v when i is substituted for by a constant. Despite appearances, this scenario is fundamentally different from the way that p\,i must reduce when p is replaced by a \lambda-abstraction, as normal forms must *a priori* be closed under arbitrary \II-substitutions — a necessity, because the normal form of an n-cube must be an n-cube of normal forms. One of the main technical contributions of this dissertation, introduced in the next chapter, is a generalization of the notion of neutral form and the Tait saturation yoga that smoothly accommodates the problematic computational behavior of the interval.

# Chapter 7: Normalization for Cubical Type Theory
[jms-0014]


This chapter reports the main result of the dissertation, normalization for cubical type theory and its corollaries: injectivity of type constructors, and decidability of equality & typing. These results were first obtained by Sterling and Angiuli for the fragment of cubical type theory *without* universes; the present chapter extends the results of *op. cit.* to support a cumulative hierarchy of universes.

The central innovation of this chapter is to generalize the notion of neutral form to accommodate the computational behavior of terms that have free variables of type \II discussed in our synopsis of Chapter 6. In the conventional account of neutral and normal forms, neutrals \FmtNe{e} are built up inductively from x for term variables x : A, function applications to normal forms \NeApp{\FmtNe{e}}{\FmtNf{m}}, and projections from neutral pairs \NeFst{\FmtNe{e}}, \NeSnd{\FmtNe{e}}; our account of neutrals is much the same, except that each neutral form \FmtNe{e} comes equipped with a **“frontier of instability”** \Frontier{\FmtNe{e}}, a predicate on its free \II-variables that indicates when it “needs to compute further”. We think of a neutral form for an n-cube as being *undefined* on its frontier of instability; the process of restricting a neutral to its frontier of instability is then referred to as *destabilization*.

When x : A is a variable of an ordinary type, the frontier of instability \Frontier\,\prn{\NeVar\,{x}} is empty because variables never need to compute further. Where something new happens is the path type: given a neutral form \FmtNe{e} : \Ne\,\prn{\TpPath{A}{u}{v}} of path type, we have for each term r : \II a neutral form \NePathApp{\FmtNe{e}}{r} : \Ne\,A whose frontier of instability is defined like so: \Frontier\,\prn{\NePathApp{\FmtNe{e}}{r}} = \Frontier\,\FmtNe{e} \lor \prn{r = 0} \lor \prn{r = 1}.

In other words, the path neutral application \NePathApp{\FmtNe{e}}{r} needs to compute as soon as \FmtNe{e} needs to compute, and as soon as the interval term r : \II becomes equal to a constant. Prior to the introduction of the frontier of instability, neutrals are embedded into the normals at base types unconditionally, i.e. for each neutral form \FmtNe{e} : \Ne\,\TpBool, we have a normal form \NeEmb{e} : \Nf\,\TpBool.
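
A tiny Python sketch of this computation (the representation and all names are ours): a frontier is rendered as a finite set of interval equations, empty for variables and extended with the endpoint equations at each path application.

```python
# Illustrative frontiers of instability: a frontier is a set of equations
# (interval_term, endpoint) under which the neutral must compute further.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    """A term variable: its frontier is empty."""
    name: str

@dataclass(frozen=True)
class PathApp:
    """Application of a path-type neutral to an interval term."""
    neutral: object
    r: str

def frontier(ne):
    """Frontier(PathApp(e, r)) = Frontier(e) \\/ (r = 0) \\/ (r = 1)."""
    if isinstance(ne, Var):
        return frozenset()
    if isinstance(ne, PathApp):
        return frontier(ne.neutral) | {(ne.r, 0), (ne.r, 1)}
    raise ValueError(ne)

# p i must compute further exactly when i becomes a constant:
assert frontier(PathApp(Var('p'), 'i')) == {('i', 0), ('i', 1)}
```
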

Now that neutrals are equipped with frontiers of instability, a more refined notion of normal form is needed: when \FmtNe{e} is a neutral form, the corresponding normal form should contain (recursively) normal forms for e that are defined under the frontier of instability \Frontier\,\FmtNe{e}. To be more concrete, let x : \TpPath{\TpBool}{\TmTt}{\TmTt} be a variable of path type and r : \II a term; then \NePathApp{\NeVar\,x}{r} : \Ne\,\TpBool is a neutral form for the term x\,r whose frontier of instability is the boundary (r = 0) \lor (r = 1); the corresponding normal form must therefore glue onto \NePathApp{\NeVar\,x}{r} additional normal forms for x\,0 and x\,1. We refer to the process of completing a neutral with additional data defined on its frontier of instability as *stabilization*; the *stabilized* normal form of x\,r thus comprises the neutral \NePathApp{\NeVar\,x}{r} together with these boundary data.

Just as the embedding of neutrals into normals is “stabilized” by a compatible normal form defined on the neutral’s frontier of instability, so too must the Tait saturation yoga be adjusted. Conventionally one requires the computability predicate for a type A to be equipped with a function that takes neutral forms \FmtNe{e} of terms e : A to computability witnesses for the same term. In the **stabilized Tait saturation yoga**, we strengthen the induction hypothesis to require for each neutral form \FmtNe{e} a function that extends a computability witness defined only on the frontier of instability \Frontier\,\FmtNe{e} to a computability witness defined everywhere.

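
Schematically, writing \mathsf{Comp}\,A for the computability structure over A (the notation is ours, loosely following the document's macros):

```latex
% Conventional saturation reflects every neutral into computability:
%   reflect : \Ne\,A -> Comp A.
% Stabilized saturation instead takes a neutral together with a
% computability witness already defined on its frontier of instability:
\[
  \mathsf{reflect}_{A} \;:\;
  \textstyle\prod_{\FmtNe{e} \,:\, \Ne\,A}
  \prn{\Frontier\,\FmtNe{e} \to \mathsf{Comp}\,A}
  \to \mathsf{Comp}\,A
\]
% i.e. a partial witness on the frontier is extended to a total one.
```
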
The twin innovations of *frontiers of instability* and *stabilization* then suffice to adapt the synthetic normalization argument of Chapter 5 to a proof of normalization (and thus decidability) for cubical type theory.

# Part V: Prospects
[jms-0018]


# Chapter 8: A Plan for PL
[jms-0019]


This dissertation has focused almost solely on the development and applications of synthetic Tait computability in the context of pure type theory, but the author originally invented synthetic Tait computability to solve problems in core programming languages, as part of Sterling and Harper’s re-analysis of the *phase distinction* in ML-style module systems between static (compile-time) and dynamic (run-time) code. The purpose of this chapter is to identify several applications of synthetic Tait computability to core programming languages and to set an agenda for future work — some of which has been executed and published following the completion of this dissertation.

# § 8.1: Two Phase Distinctions for Program Modules
[jms-001A]


# § 8.2: Type Refinements and Program Extraction
[jms-001B]


This section outlines an account of *refinement types* and *program extraction* via a phase distinction between *computation* and *specification*. Refinement types are often thought of as a kind of subtype, but there is a fundamental difference: when \Rfn{\phi}{A} and \Rfn{\psi}{B} are refinements of types A and B respectively, then \phi\to\psi refines A\to B. In contrast, subtyping laws for function spaces are contravariant in the domain. The refinements available in synthetic Tait computability are moreover proof-relevant in the sense that specification-level code can contain data in addition to properties. The application to proof-relevant refinement types is employed by Niu, Sterling, Grodin, and Harper to develop a logical framework for simultaneously verifying the behavior and complexity of functional programs.
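
The contrast can be displayed side by side (standard rules; the refinement rule restates the claim above in the document's notation):

```latex
% Subtyping of functions is contravariant in the domain ...
\[
  \frac{A' <: A \qquad B <: B'}{A \to B \;<:\; A' \to B'}
\]
% ... whereas refinement of functions is covariant in both positions:
\[
  \frac{\Rfn{\phi}{A} \qquad \Rfn{\psi}{B}}{\Rfn{\prn{\phi \to \psi}}{A \to B}}
\]
```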

# § 8.3: Information-Flow and Noninterference
[jms-001C]


In this section, a phase-distinction account of *security* and *information flow control* is identified: a security class is a phase distinction between low and high security. The preliminary results presented in this section have been substantially improved upon by Sterling and Harper, adding support for general recursion and termination-insensitive noninterference by combining synthetic Tait computability with synthetic domain theory.