Software engineer, dilettante type theorist, syntactician, and philologist. Student of Ancient Greek, Sumerian, Hittite, Akkadian, Latin, Old English and German. I also co-host The Type Theory Podcast, and created the JonPRL proof assistant. Ideas written up here are rough at best.

I work in software engineering (functional programming preferred). I also offer tutoring in any of the languages listed above, as well as in type theory & proof theory. See my Curriculum Vitae.

This post is an introduction to the container-oriented generalization of the core Brouwerian (co)data structures, inspired by Ghani, Hancock and Pattinson.^{1} I am not introducing anything novel; I’m merely taking their framework and showing how to round up the usual suspects of Brouwerian mathematics in their generalized, “family-friendly” setting. I am using Constable et al.’s Computational Type Theory + Induction-Recursion as my metalanguage,^{2} but other variants of type theory may be used as well.
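To fix intuitions, here is a minimal sketch of the container idea in Haskell, under the simplifying assumption that positions do not depend on shapes (all names here are illustrative, not taken from the post): a container is a type of shapes together with a position-indexed fill, and streams, the archetypal Brouwerian codata, arise from the container with one shape and natural-number positions.

```haskell
-- A simply-typed approximation of a container: a shape paired with a
-- function from positions to contents. (Illustrative names; in the
-- dependently typed setting, positions vary with the shape.)
data Container s p x = Container s (p -> x)

type Nat = Integer

-- Streams: one shape, Nat-many positions, matching the Brouwerian view
-- of a choice sequence as a function on the naturals.
type Stream x = Container () Nat x

headS :: Stream x -> x
headS (Container _ f) = f 0

tailS :: Stream x -> Stream x
tailS (Container s f) = Container s (f . succ)

-- The stream 0, 1, 2, ...
nats :: Stream Nat
nats = Container () id
```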

Here, I present a judgemental reconstruction of LCF-style tactic systems, called Modernized LCF, which admits various extensions including validations (checkable certificates). The purpose is to present the structure and meaning of refinement proof directly as a logical theory with a meaning explanation, in contrast to the standard practice of taking refinement proof as an extra-logical “finishing touch”.
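The classic LCF shape of tactic-with-validation can be sketched as follows in Haskell (a hedged approximation with illustrative types, not the judgemental presentation of the post): a tactic decomposes a goal into subgoals together with a validation that assembles a proof of the original goal from proofs of the subgoals.

```haskell
-- Toy goal and proof representations (illustrative only).
type Goal  = String
type Proof = String

-- A tactic yields subgoals plus a validation: a checkable certificate
-- constructor mapping subgoal proofs back to a proof of the goal.
type Tactic = Goal -> ([Goal], [Proof] -> Proof)

-- The identity tactic: one subgoal, validation passes the proof through.
idTac :: Tactic
idTac g = ([g], \[p] -> p)

-- Sequencing: run t1, then t2 on each resulting subgoal, composing the
-- validations so certificates check all the way down.
thenTac :: Tactic -> Tactic -> Tactic
thenTac t1 t2 g =
  let (gs, v1)  = t1 g
      results   = map t2 gs
      subgoals  = concatMap fst results
      validate ps =
        let chunks = splitInto (map (length . fst) results) ps
        in v1 (zipWith (\(_, v2) c -> v2 c) results chunks)
  in (subgoals, validate)

-- Partition a list into chunks of the given lengths.
splitInto :: [Int] -> [a] -> [[a]]
splitInto []     _  = []
splitInto (n:ns) xs = let (h, t) = splitAt n xs in h : splitInto ns t
```

The key design point is that validations are produced lazily: refinement proceeds top-down on goals, while certificates are assembled bottom-up only when the proof is complete.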

This year’s Oregon Programming Languages Summer School, organized by Bob Harper, Greg Morrisett and Zena Ariola, has come to a close after two amazing weeks. I wanted to take a moment to thank everyone who made my time here so pleasant, with special regards to Bob Harper and Mark Bickford, who patiently answered my numerous questions about Nuprl and Brouwerian mathematics.

Recent conversations have convinced me that there are several misconceptions about the status of side effects in a type theoretic setting, and how to reason about their benignity. Briefly, I will clarify this matter by proposing how one might go about integrating free assignables and non-determinism into Computational Type Theory.^{1} I do not answer all questions raised by this proposal, but I intend to be provocative and suggest a rough path toward realizing this goal.

Briefly, Diaconescu’s theorem states that the axiom of choice & extensionality together imply the principle of the excluded middle (PEM). In 100 Years of Zermelo’s Axiom of Choice, Martin-Löf distinguishes an intensional version of AC from an extensional one, and argues that whilst the former is a theorem of intuitionistic type theory, the latter leads to taboo. In this note, I hope to clarify a few matters, and demonstrate that the culprit is not so much extensionality as an infelicitous interpretation of the quantifiers. When viewed through this lens, it becomes evident that the problem all along was setoids, not extensionality.
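For reference, the intensional axiom of choice that Martin-Löf identifies as a theorem of intuitionistic type theory is standardly stated as follows (this is the usual formulation, not quoted from the post):

```latex
(\Pi x : A)\,(\Sigma y : B)\,C(x, y) \to (\Sigma f : A \to B)\,(\Pi x : A)\,C(x, f(x))
```

It holds because a proof of the antecedent is already a function delivering, for each $x$, a witness together with its verification; the extensional version, which demands that the choice function respect a setoid equivalence on $A$, is what leads to taboo.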

There are two standard techniques for representing dependently typed calculi in the LF. The first is to re-use the LF contexts for the object contexts, which can lead to difficulties when the meaning of hypothetico-general judgement in the object language is stronger than in the LF; this is, for instance, the case in MLTT 1979, where non-trivial functionality obligations are incurred in the sequent judgement.

Another technique advanced by Crary is to represent contexts explicitly, and then define a sequent judgement over them; this can be used to resolve the problem described above (and several others), but it comes at the cost of verbosity, and introduces certain other difficulties for my purpose. In this paper, I demonstrate an alternative, higher-order encoding of telescopes which can be used to faithfully encode the functional sequent judgement for MLTT 1979. You may also view a parallel Twelf development.
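The higher-order telescope idea can be approximated in Haskell as follows (the actual development is in LF/Twelf; this sketch uses an illustrative toy object language): a telescope is either empty, or a type together with a function from terms of that type to the remaining telescope, so that binding and substitution are inherited from the metalanguage, as in any higher-order encoding.

```haskell
-- Toy object-language terms and types (illustrative only).
data Tm = Var String | Unit
data Ty = TUnit | TEq Tm Tm

-- A higher-order telescope: each entry binds a term that may occur in
-- the types of all later entries.
data Tele
  = TNil
  | TCons Ty (Tm -> Tele)

-- Count the entries, instantiating each binder with a fresh variable.
teleLen :: Int -> Tele -> Int
teleLen _ TNil           = 0
teleLen n (TCons _ rest) = 1 + teleLen (n + 1) (rest (Var ("x" ++ show n)))

-- Example: x : Unit, _ : Eq(x, unit) -- the second type depends on x.
example :: Tele
example = TCons TUnit (\x -> TCons (TEq x Unit) (\_ -> TNil))
```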

The intrinsic and extrinsic views on types are unified by considering two categories: one of syntactic types and terms, and another of semantic types and derivations, and then a forgetful functor from the latter to the former. Following Melliès and Zeilberger’s Functors are Type Refinement Systems, we might attempt to provide a similar characterization for computational type theories with universes, but there are some wrinkles; I attempt to provide a partial resolution to these here.
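A toy rendering of the forgetful functor's action on objects, in Haskell (illustrative names and types, not the categories of the post): each semantic type refines a syntactic one, and erasure forgets the refinement.

```haskell
-- Syntactic types: the "extrinsic" side.
data SynTy = SNat | SArr SynTy SynTy deriving (Eq, Show)

-- Semantic refinements: the "intrinsic" side, carrying more information.
data SemTy = Even | Odd | Arr SemTy SemTy

-- The forgetful functor on objects: erase a refinement to the syntactic
-- type it refines.
erase :: SemTy -> SynTy
erase Even      = SNat
erase Odd       = SNat
erase (Arr a b) = SArr (erase a) (erase b)
```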

In 1994, Per Martin-Löf wrote Analytic and Synthetic Judgement in Type Theory, in which he convincingly showed that undecidability phenomena should be understood in terms of synthetic judgement, and demonstrated how the judgements of one theory may be made the propositions of another.