This is a chapter-by-chapter overview of the dissertation of Jonathan Sterling:
The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics. In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase-separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.
For more than four decades, dependent type theory has been positioned as the “common language” that can finally unify mathematics and computer programming: while it has never been controversial that a computer program is a form of mathematical construction, the running hypothesis of the type theoretic community has been the converse, namely that mathematical constructions should be viewed as programs that can in principle be executed by a physical machine — roughly, sets = types and elements = programs. Thus the struggle to realize this type theoretic hypothesis has been a two-way process, punctuated by moments at which the mathematical meaning of a programming construct is elucidated, or at which the computational content of a mathematical construct is uncovered.

In the current millennium, a new identification has been taking shape in which types = $\infty$-groupoids (homotopy types), an infinite-dimensional generalization of sets; the origins of this new perspective on type theory lie in Hofmann and Streicher’s 1998 groupoid interpretation of type theory, combined with the revolutionary contributions of Voevodsky and of Awodey and Warren. The main feature of the new language, dubbed homotopy type theory or HoTT, is that isomorphisms between types are equipped with a new induction rule called univalence stating that all type theoretic constructs respect isomorphisms: to a first approximation, if $A \cong B$ then $P(A) \cong P(B)$ for any $P$. The univalence principle is motivated by the phenomenon of homotopy invariance that pervades the large-scale structure of modern-day mathematics, from algebraic topology to algebraic geometry to mathematical physics; as a programming construct, univalence suggests new approaches to both generic and modular programming.
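To spell out the approximation (a standard HoTT derivation, sketched in our notation rather than quoted from the dissertation): univalence provides a map $\mathsf{ua} : (A \cong B) \to (A = B)$, and every type family respects paths via $\mathsf{ap}_P : (A = B) \to (P(A) = P(B))$; composing,
\[
e : A \cong B \quad\vdash\quad \mathsf{transport}\,\prn{\mathsf{ap}_P\,\prn{\mathsf{ua}\,e}} : P(A) \cong P(B),
\]
so every construct $P$ automatically transfers along isomorphisms of types.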
Thus one of the main projects for the first decade of homotopy type theory was to substantiate the relationship between HoTT and mathematics on the one hand, and between HoTT and computer programming on the other. The question of whether homotopy type theoretic language can be interpreted in sheaves on arbitrary infinite-dimensional spaces ($\infty$-topoi) was finally resolved satisfactorily by Shulman in 2019. The computational interpretation of homotopy type theory, on the other hand, has involved a reformulation of HoTT called cubical type theory, which reorganizes the higher-dimensional structure just discussed by considering all the points, lines, squares, cubes, hypercubes, and so on that one can draw in a given type. The computational interpretation of cubical type theory can be split into two conjectures. The canonicity conjecture ensures that terms written in cubical type theory can be thought of as computer programs; it was verified independently by Huber and by Angiuli for different variants of cubical type theory. The decidability conjecture is no less important, as it is a necessary ingredient for implementing a typechecker or a compiler for a programming language based on cubical type theory.

We give a chapter-by-chapter overview of First Steps in Synthetic Tait Computability. The novel contributions are contained in Chapters 4, 5, 7, and 8; the remaining Chapters 0-3 and 6 are primarily expository.

To state and prove theorems like canonicity and decidability for a type theory, we must have a mathematical definition of the syntax of type theory. Conventionally, the syntax of type theory has been studied in several layers: one starts with a definition of “raw” syntax as trees labeled by the names of the generating operations, quotients these trees under permutation of bound variables, and then layers on top an additional inductively defined formalism expressing the well-formedness of types, the well-formedness of terms, definitional equality of types, and definitional equality of terms. After this, one verifies that definitional equivalence classes of well-formed types and terms can be used as the raw materials to construct a universal model of the type theory, characterized by a universal property: any other model of type theory can be equipped with a unique structure-preserving homomorphism from the universal model. This universal property determines the universal model up to unique isomorphism, if such a model exists.
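Stated categorically (a standard rendering of this universal property, not a quotation from the dissertation): writing $\mathbb{T}$ for the universal model, we require that for every model $\mathcal{M}$ of the type theory there exist a unique homomorphism
\[
!_{\mathcal{M}} : \mathbb{T} \to \mathcal{M},
\]
i.e. $\mathbb{T}$ is an initial object in the category of models; any two initial objects are isomorphic along a unique isomorphism.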
We refer to the painstaking process described above as the subjective metatheory, building on the Hegel–Lawvere distinction between objective and subjective approaches to logic. The objective metatheory, in contrast, involves stating and proving results about type theories and programming languages relying only on the universal property of the universal model, and not on any specifics of its presentation; the advantage of the objective metatheory is that it is simpler, more direct, more modular, and more composable. Chapter 1 argues that the subjective metatheory in the sense described is redundant: the decidability conjecture can be stated with respect to any representative of the universal model and does not depend in any way on the raw syntax of type theory; moreover, for all the type theories considered in this dissertation, the existence of at least one representative of the universal model is guaranteed for somewhat trivial reasons that have nothing to do with the specifics of type theory. In this chapter, we develop a logical framework for specifying type theories modularly and working with their universal models in an objective fashion.

It is simple enough to verify negative properties of a formal system, e.g. the non-derivability of a given assertion $\Phi$: find a mathematical object that models all the rules of the formal system and yet refutes $\Phi$.
In contrast, it is much harder to verify any non-trivial positive property of a formal system (such as canonicity, normalization, or decidability). To handle such results, new techniques were needed — and delivered in the late 1960s by Tait, Martin-Löf, and others under the name of Tait’s method of computability, or logical predicates. Since its inception, Tait’s method has been the primary tool for verifying positive properties of logics, programming languages, and type theories. Early on, Freyd noticed that logical predicates arguments can be rephrased as model constructions that glue together geometrical objects corresponding to syntax (object) and set theory (meta), setting the stage for this thesis. Thus, despite appearances, both positive and negative properties can be proved using semantic methods.

In the subsequent development of the computability method for applications in computer science, indexed variants of the logical predicates have proved fundamental, and a number of variations on indexed logical predicates have appeared, including the Kripke logical predicates of Jung and Tiuryn and the much more sophisticated Grothendieck logical predicates of Fiore and Simpson as well as Altenkirch, Dybjer, Hofmann, and Scott. This chapter points out that all of these forms of indexing arise in the same way from what is referred to as a figure shape, a continuous map into the classifying space of “Henkin models” of a given theory. Then the (Kripke, Grothendieck, etc.) logical predicates model is presented much more simply as the Artin gluing of this morphism’s inverse image. An explicit proof of canonicity for the simply typed $\lambda$-calculus motivates the abstraction and axiomatization of the geometry of figure shapes and their gluings as a new language for syntactic metatheory, namely synthetic Tait computability.

The idea of synthetic Tait computability is to treat both object-level notions (e.g. the collection of terms of type $\TpBool$) and meta-level notions (e.g. a normal form of a given term) in the same language by means of a pair of lex idempotent modalities. One strength of this presentation is that both object-level and meta-level notions can be treated using higher-order abstract syntax (HOAS) in the sense of Hofmann, which greatly simplifies the manipulation of variables.
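As a toy illustration of the computability method (a minimal, self-contained Haskell sketch with invented names, not the dissertation's gluing construction): interpret each type as a set of semantic values, so that every closed boolean term evaluates to a canonical value.

```haskell
-- Toy simply typed lambda-calculus with booleans (de Bruijn indices).
data Tm
  = Var Int
  | Lam Tm
  | App Tm Tm
  | Tt
  | Ff
  | If Tm Tm Tm

-- Semantic values: the computability witnesses. Functions are interpreted
-- by Haskell functions, so every well-typed application computes.
data Val = VTt | VFf | VFun (Val -> Val)

-- Environment-based evaluator.
eval :: [Val] -> Tm -> Val
eval env (Var i)    = env !! i
eval env (Lam b)    = VFun (\v -> eval (v : env) b)
eval env (App f a)  = case eval env f of
                        VFun g -> g (eval env a)
                        _      -> error "ill-typed term"
eval env (If b t e) = case eval env b of
                        VTt -> eval env t
                        VFf -> eval env e
                        _   -> error "ill-typed term"
eval _   Tt         = VTt
eval _   Ff         = VFf

-- Canonicity, in this toy setting: every closed term of boolean type
-- evaluates to one of the two canonical values.
canonicalBool :: Tm -> Bool
canonicalBool t = case eval [] t of
  VTt -> True
  VFf -> True
  _   -> False
```

For example, `canonicalBool (App (Lam (Var 0)) Tt)` computes to `True`. The real theorem is proved proof-relevantly, for open terms and dependent types, which is where the gluing machinery earns its keep.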
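Concretely, these are the open and closed modalities attached to a distinguished proposition; writing $\mathsf{syn}$ for the proposition carving out the syntactic (object-level) part of the glued topos (our notation, following the standard presentation of these modalities rather than the dissertation's typography), one may take
\[
\bigcirc A := \prn{\mathsf{syn} \to A},
\qquad
\bullet A := \mathsf{syn} \sqcup_{\mathsf{syn} \times A} A.
\]
The open modality $\bigcirc$ restricts a type to its object-level component, the closed modality $\bullet$ trivializes that component, and every type fractures into a meta-level part glued over an object-level part.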
The first demonstration of the power and modularity of synthetic Tait computability is a new proof of the canonicity property for Martin-Löf type theory. Unlike traditional proofs of canonicity via non-synthetic Tait computability, the synthetic version is completely modular, broken up into general-purpose lemmas that are stated at a high level of abstraction and can be reused in proofs of different properties for different type theories. (Indeed, some of the constructions isolated in this chapter are used off the shelf in Chapter 7 to prove normalization for cubical type theory.) The modularization of syntactic metatheory is one of the main contributions of this dissertation.

Chapter 7 reports the main result of the dissertation, normalization for cubical type theory, and its corollaries: injectivity of type constructors and decidability of equality and typing. These results were first obtained by Sterling and Angiuli for the fragment of cubical type theory without universes; the present chapter extends the results of op. cit. to support a cumulative hierarchy of universes. The central innovation of this chapter is to generalize the notion of neutral form to accommodate the computational behavior of terms that have free variables of type $\II$, discussed in our synopsis of Chapter 6. In the conventional account of neutral and normal forms, neutrals $\FmtNe{e}$ are built up inductively from variables $x$ for term variables $x : A$, function applications to normal forms $\NeApp{\FmtNe{e}}{\FmtNf{m}}$, and projections from neutral pairs $\NeFst{\FmtNe{e}}$ and $\NeSnd{\FmtNe{e}}$.
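The grammar just described can be rendered as a pair of mutually defined datatypes (a Haskell sketch; the constructor names are ours):

```haskell
-- Conventional neutral and normal forms, as mutually defined datatypes.
data Ne
  = NeVar String   -- term variables x
  | NeApp Ne Nf    -- application of a neutral to a normal form
  | NeFst Ne       -- first projection from a neutral pair
  | NeSnd Ne       -- second projection from a neutral pair

data Nf
  = NfNe Ne        -- embedding of neutrals at base type
  | NfLam String Nf
  | NfPair Nf Nf
  | NfTt
  | NfFf
```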
Our account of neutrals is much the same, except that each neutral form $\FmtNe{e}$ comes equipped with a “frontier of instability” $\Frontier\,\FmtNe{e}$, a predicate on its free $\II$-variables that indicates when it “needs to compute further”. We think of a neutral form for an $n$-cube as being undefined on its frontier of instability; the process of restricting a neutral to its frontier of instability is then referred to as destabilization.
When $x : A$ is a variable of an ordinary type, the frontier of instability $\Frontier\,\prn{\NeVar\,x}$ is empty, because variables never need to compute further. Where something new happens is at the path type: given a neutral form $\FmtNe{e} : \Ne\,\prn{\TpPath{A}{u}{v}}$ of path type, we have for each term $r : \II$ a neutral form $\NePathApp{\FmtNe{e}}{r} : \Ne\,A$ whose frontier of instability is defined like so:
\[
\Frontier\,\prn{\NePathApp{\FmtNe{e}}{r}} = \Frontier\,\FmtNe{e} \lor \prn{r = 0} \lor \prn{r = 1}
\]
In other words, the path neutral application $\NePathApp{\FmtNe{e}}{r}$ needs to compute as soon as $\FmtNe{e}$ needs to compute, and as soon as the interval term $r : \II$ becomes equal to a constant.
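In executable form (a self-contained Haskell sketch; `Dim`, `Cof`, and the constructor names are our inventions for interval terms and cofibrations):

```haskell
-- Dimension (interval) terms and cofibrations over them.
data Dim = DimVar String | Dim0 | Dim1

data Cof = CofFalse | CofEq Dim Dim | CofOr Cof Cof

-- A fragment of cubical neutrals: variables and path application.
data Ne = NeVar String | NePathApp Ne Dim

-- The frontier of instability, computed structurally: variables never
-- compute; a path application destabilizes where its head does, or where
-- the dimension argument becomes a constant endpoint.
frontier :: Ne -> Cof
frontier (NeVar _)       = CofFalse
frontier (NePathApp e r) =
  CofOr (frontier e) (CofOr (CofEq r Dim0) (CofEq r Dim1))
```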
Prior to the introduction of the frontier of instability, the neutrals are embedded into the normals at base types unconditionally, i.e. for each neutral form $\FmtNe{e} : \Ne\,\TpBool$ we have a normal form $\NeEmb{\FmtNe{e}} : \Nf\,\TpBool$. Now that neutrals are equipped with frontiers of instability, a more refined notion of normal form is needed: when $\FmtNe{e}$ is a neutral form, the corresponding normal form should contain (recursively) normal forms for $\FmtNe{e}$ that are defined under the frontier of instability $\Frontier\,\FmtNe{e}$.
To be more concrete, let $x : \TpPath{\TpBool}{\TmTt}{\TmTt}$ be a variable of path type and let $r : \II$ be a term; then $\NePathApp{\NeVar\,x}{r} : \Ne\,\TpBool$ is a neutral form for the term $x\,r$ whose frontier of instability is the boundary $(r = 0) \lor (r = 1)$; the corresponding normal form must therefore glue onto $\NePathApp{\NeVar\,x}{r}$ additional normal forms for $x\,0$ and $x\,1$. We refer to the process of completing a neutral with additional data defined on its frontier of instability as stabilization; the stabilized normal form of $x\,r$ thus pairs the neutral with compatible normal forms on its boundary, as in the sketch below.
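One way to render stabilized normal forms (a self-contained Haskell sketch with invented names; the dissertation's actual notation is not reproduced here):

```haskell
data Dim = DimVar String | Dim0 | Dim1
data Cof = CofFalse | CofEq Dim Dim | CofOr Cof Cof
data Ne  = NeVar String | NePathApp Ne Dim

-- A "system": a partial normal form, given case-by-case on cofibrations.
type Sys = [(Cof, Nf)]

data Nf
  = NfStab Ne Sys  -- a neutral stabilized by normal forms on its frontier
  | NfTt
  | NfFf

-- The stabilized normal form of x r, for x : Path Bool tt tt: the neutral
-- x r together with its canonical values at the endpoints r = 0 and r = 1.
exampleNf :: Nf
exampleNf =
  NfStab (NePathApp (NeVar "x") (DimVar "r"))
         [ (CofEq (DimVar "r") Dim0, NfTt)
         , (CofEq (DimVar "r") Dim1, NfTt) ]
```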
Just as the embedding of neutrals into normals is “stabilized” by a compatible normal form defined on the neutral’s frontier of instability, so too must the Tait saturation yoga be adjusted. Conventionally, one requires the computability predicate for a type $A$ to be equipped with a function that takes neutral forms $\FmtNe{e}$ of terms $e : A$ to computability witnesses for the same term. In the stabilized Tait saturation yoga, we strengthen the induction hypothesis to require, for each neutral form $\FmtNe{e}$, a function that extends a computability witness defined only on the frontier of instability $\Frontier\,\FmtNe{e}$ to a computability witness defined everywhere.
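Schematically, writing $\llbracket A \rrbracket$ for the computability structure over $A$ (our notation): the conventional yoga asks for a reflection map $\Ne\,A \to \llbracket A \rrbracket$, whereas the stabilized yoga asks for
\[
\prn{\FmtNe{e} : \Ne\,A} \times \prn{\Frontier\,\FmtNe{e} \to \llbracket A \rrbracket} \longrightarrow \llbracket A \rrbracket,
\]
subject to the condition that the resulting total witness restricts, on $\Frontier\,\FmtNe{e}$, to the given partial one.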
The twin innovations of frontiers of instability and stabilization then suffice to adapt the synthetic normalization argument of Chapter 5 to a proof of normalization (and thus decidability) for cubical type theory.

This dissertation has focused almost solely on the development and applications of synthetic Tait computability in the context of pure type theory, but the author originally invented synthetic Tait computability to solve problems in core programming languages, as part of Sterling and Harper’s re-analysis of the phase distinction in ML-style module systems between static (compile-time) and dynamic (runtime) code. The purpose of Chapter 8 is to identify several applications of synthetic Tait computability to core programming languages and to set an agenda for future work — some of which has been executed and published following the completion of this dissertation.
Contents of the overview:

- Background on Homotopy and Cubical Type Theory [jms-000R]
- Contributions of First Steps in Synthetic Tait Computability [jms-000U]
- Detailed Overview of First Steps in Synthetic Tait Computability [jms-000V]
  - Part I: Dependent Type Theory [jms-000W]
    - Chapter 0: Conspectus on Type Theory [jms-000X]
    - Chapter 1: Objective Syntax of Dependent Types [jms-000Y]
  - Part II: Mathematical Background [jms-000Z]
    - Chapter 2: The Language of Topoi [jms-0010]
    - Chapter 3: The Theory of Universes [jms-0011]
  - Part III: Synthetic Tait Computability [jms-0012]
    - Chapter 4: Tait’s Method of Computability [jms-0013]
    - Chapter 5: Synthetic Normalization by Evaluation [jms-0015]
  - Part IV: Cubical Type Theory [jms-0016]
    - Chapter 6: Cartesian Cubical Type Theory [jms-0017]
    - Chapter 7: Normalization for Cubical Type Theory [jms-0014]
  - Part V: Prospects [jms-0018]
    - Chapter 8: A Plan for PL [jms-0019]
      - § 8.1: Two Phase Distinctions for Program Modules [jms-001A]
      - § 8.2: Type Refinements and Program Extraction [jms-001B]
      - § 8.3: Information-Flow and Noninterference [jms-001C]