I am an Associate Professor in Logical Foundations and Formal Methods at the University of Cambridge. I study programming languages and semantics using type theory, category theory, domain theory, and topos theory as a guide. My other interests include Near Eastern, Classical, and Germanic philology. If you want to be my PhD student, read this. Find my contact information here.
Central to both the design of programming languages and the practice of software engineering is the tension between abstraction and composition. I employ semantic methods from category theory and type theory to design, verify, and implement languages that enable both programmers and mathematicians to negotiate the different levels of abstraction that arise in their work. I apply my research to global safety and security properties of programming languages as well as the design and implementation of interactive theorem provers for higher-dimensional mathematics. I develop foundational and practical mathematical tools to deftly weave together verifications that cut across multiple levels of abstraction.
This website is a “forest” created using the Forester tool. I organize my thoughts here on a variety of topics at a granular level; sometimes these thoughts are self-contained, and at times I may organize them into larger notebooks or lecture notes. My nascent ideas about the design of tools for scientific thought are here. I welcome collaboration on any of the topics represented in my forest. To navigate my forest, press Ctrl-K.
I maintain several public bibliographies on topics of interest to me. You can also access my personal bibliography or my curriculum vitæ.
This is my blog, in which I write about a variety of topics including computer science, mathematics, and the design of tools for scientists.
Separately from my day-job at the University of Cambridge, I am independently researching tools for scientific thought and developing software like Forester that you can use to unlock your brain. If you have benefited from this work or the writings on my blog, please consider supporting me with a sponsorship on Ko-fi.
I have been thinking about monoidal closed structures induced by slicing over a monoid, which has been considered by Combette and Munch-Maccagnoni as a potential denotational semantics of destructors à la C++. It occurred to me that this construction is, in fact, almost a degenerate case of Day convolution on an internal monoidal category — and this made me realize that there might be a nice way to understand Day convolution in the language of fibered categories. In fact, many of these results (in particular the relativization to internal monoidal categories) are probably a special case of Theorem 11.22 of Shulman’s paper on enriched indexed categories.
Let
To understand the construction of the Day tensor, we will go through it step-by-step. Let
Under appropriate assumptions, we may also compute a “Day hom” by adjointness.
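For orientation, here is the classical end formula for the internal hom accompanying Day convolution of presheaves, recalled as standard background rather than as part of the fibrational construction above (the notation is my own for this aside):

```latex
% Day internal hom of presheaves F, G on a monoidal category (C, ⊗, I):
[F, G](c) \;=\; \int_{a \in \mathcal{C}} \mathbf{Set}\bigl( F(a),\; G(c \otimes a) \bigr)
```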
Let
First of all, we note that the pullback functor
Let
If
I believe, but did not check carefully, that when
There remain some interesting directions to explore. First of all, the claims above would obviously lead to a new construction of the Day convolution monoidal structure on the 1-category of discrete fibrations on
Let
The conjecture above is highly nontrivial, as monoidal bicategories are extremely difficult to construct explicitly. I am hoping that Michael Shulman’s ideas involving monoidal double categories could potentially help.
I have been thinking again about the relationship between quantitative type theory and synthetic Tait computability and other approaches to type refinements. One of the defining characteristics of QTT that I thought distinguished it from STC was the treatment of types: in QTT, types only depend on the “computational” / unrefined aspect of their context, whereas types in STC are allowed to depend on everything. In the past, I mistakenly believed that this was due to the realizability-style interpretation of QTT, in contrast with STC’s gluing interpretation. It is now clear to me that (1) QTT is actually glued (in the sense of q-realizability, no pun intended), and (2) the nonstandard interpretation of types in QTT corresponds to adding an additional axiom to STC, namely the tininess of the generic proposition.
It has been suggested to me by Neel Krishnaswami that this property of QTT may not be desirable in all cases (sometimes you want the types to depend on quantitative information), and that for this reason, graded type theories might be a better way forward in some applications. My results today show that STC is, in essence, what you get when you relax QTT’s assumption that types do not depend on quantitative information. This suggests that we should explore the idea of multiplicities within the context of STC — as any monoidal product on the subuniverse spanned by closed-modal types induces quite directly a form of variable multiplicity in STC, I expect this direction to be fruitful.
My thoughts on the precise relationship between the QTT models and Artin gluing will be elucidated at a different time. Today, I will restrict myself to sketching an interpretation of a QTT-style language in STC assuming the generic proposition is internally tiny.
Let
We now consider the interpretation of a language of (potentially quantitative) refinements into
So far we have not needed anything beyond the base structure of STC in order to give an interpretation of types in QTT’s style. But to extend this interpretation to a universe, we must additionally assume that
As
Although
We will now see how to use the adjunction
Thus we see that if
Finally, we comment that the tininess of
For any small category
We will assume here that
For any small category
The universal property of
Given a morphism of topoi
In case
In particular, let
To give an explicit computation of the tensor product, we first compute the inverse image of any
We are now prepared to compute the tensor product of any
Finally, we can relate the computation above to that of Fiore in terms of coends.
Above, we have written
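For comparison, the classical coend formula for Day convolution of presheaves on a small monoidal category is recalled below; this is standard background from the literature, not a new claim of this note:

```latex
% Day tensor of presheaves F, G on a monoidal category (C, ⊗, I):
(F \otimes_{\mathrm{Day}} G)(c)
  \;=\; \int^{a, b \in \mathcal{C}} \mathcal{C}(c, a \otimes b) \times F(a) \times G(b)
```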
Thanks to Marcelo Fiore and Daniel Gratzer for helpful discussions.
I have long been an enthusiast for outliners, a genre of computer software that deserves more than almost any other to be called an “elegant weapon for a more civilized age”. Recently I have been enjoying experimenting with Jesse Grosjean’s highly innovative outliner for macOS called Bike, which builds on a lot of his previous (highly impressive) work in the area with a level of fit and finish that is rare even in the world of macOS software. Bike costs $29.99 and is well worth it; watch the introductory video or try the demo to see for yourself.
The purpose of outliners is to provide room to actively think; Grothendieck is said to have been unable to think at all without a pen in his hand, and I think of outliners as one way to elevate the tactile aspect of active thinking using the unique capabilities of software. Tools for thinking must combat stress and mental weight, and the most immediate way that outliners achieve this is through the ability to focus on a subtree — narrowing into a portion of the hierarchy and treating it as if it were the entire document, putting its context aside. This feature, which some of my readers may recognize from Emacs org-mode, is well-represented in Bike — without, of course, suffering the noticeable quirks that come from the Emacs environment, nor the ill-advised absolute/top-down model of hierarchy sadly adopted by org-mode.
As a scientist in academia, one of the most frequent things I am doing when I am not writing my own papers or working with students is refereeing other scientists’ papers. For those who are unfamiliar, this means carefully studying a paper and then producing a detailed and well-structured report that includes a summary of the paper, my personal assessment of its scientific validity and value, and a long list of corrections, questions, comments, and suggestions. Referee reports of this kind are then used by journal editors and conference program committees to decide which papers deserve to be published.
In this post, I will give an overview of my refereeing workflow with Bike, and how I overcame the challenges of transferring finished referee reports from Bike into the text-based formats used by conference refereeing platforms like HotCRP and EasyChair using a combination of XSLT 2.0 and Pandoc. This tutorial applies to Bike 1.14; I hope the format will not change too much, but I cannot make promises about what I do not control.
Most scientific conferences solicit and organize reviews for papers using a web platform such as HotCRP and EasyChair; although these are not the same, the idea is similar. Once you have been assigned to referee a paper, you will receive a web form with several sections and large text areas in which to put the components of your review; not all conferences ask for the same components, but usually one is expected to include the following in addition to your (numerical) assessment of the paper’s merit and your expertise:
Usually you will be asked to enter your comments under each section in a plain text format like Markdown. The first thing a new referee learns is not to type answers directly into the web interface, because this is an extremely reliable way to lose hours of your time when a browser or server glitch deletes all your work. Most of us instead write out our answers in a separate text editor, and paste them into the web interface when we are satisfied with them. In the past, I have done this with text files on my computer, but today I want to discuss how to draft referee reports as outlines in Bike; then I will show you how to convert them to the correct plain text format that can be pasted into your conference’s preferred web-based refereeing platform.
To start off, have a look at the figure below, which shows my refereeing template outline in Bike.
As you can see, there is a healthy combination of hierarchy and formatting in a Bike outline.
Bike is a rich text editor, but one that (much like GNU TeXmacs) avoids the classic pitfalls of nearly all rich text editors, such as the ubiquitous and dreaded “Is the space italic?!” user-experience failure; I will not outline Bike’s innovative approach to rich text editing here, but I suggest you check it out for yourself.
One of the most useful features of Bike’s approach to formatting is the concept of a row type, which is a semantic property of a row that has consequences for its visual presentation. Bike currently supports the following row types:
My refereeing template uses several of these row types (headings, notes, tasks) as well as some of the rich text formatting (highlighting). When I fill out the refereeing outline, I will use other row types as well — including quotes, ordered, and unordered rows. I will create a subheading under Detailed comments for the author to contain my questions and comments, which I enter as ordered rows; then I make a separate subheading at the same level for Typographical errors, which I populate with unordered rows. Unordered rows are best for typos, because these are already accompanied by line numbers. If I need to quote an extended portion of the paper, I will use a quote row.
When working on the report outline, I will constantly be focusing on individual sections to avoid not only distractions but also the intense mental weight of unfinished sections. Focusing means that the entire outline is narrowed to a subtree that can be edited away from its context; this is achieved in Bike by pressing the gray southeasterly arrows to the right of each heading, as seen in the figure.
Although we have seen how pleasant it is to use an outliner like Bike to draft a referee report, we obviously cannot submit a .bike file to a conference reviewing website or a journal editor. Most conference reviewing systems accept plain text or Markdown responses, so our goal will be to convert a Bike outline into reasonably formatted Markdown.
It happens that Bike’s underlying format is HTML, so one idea would be to use Pandoc to process this HTML into Markdown. This would work, except that Bike’s model is sufficiently structured that it must make deeply idiosyncratic use of HTML tags, as can be seen from the listing below.
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta charset="utf-8"/>
  </head>
  <body>
    <ul id="2sbcmmms">
      <li id="3C" data-type="heading">
        <p>Tasks</p>
        <ul>
          <li id="cs" data-type="task">
            <p>read through paper on iPad and highlight</p>
          </li>
        </ul>
      </li>
      <li id="XZ" data-type="heading">
        <p>Syntax and semantics of foo bar baz</p>
        <ul>
          <li id="Kp">
            <p><mark>Overall merit:</mark><span> </span></p>
          </li>
          <li id="d9">
            <p><mark>Reviewer Expertise:</mark></p>
          </li>
          <li id="uM" data-type="heading">
            <p>Summary of the paper</p>
            <ul>
              <li id="oB" data-type="note">
                <p>Please give a brief summary of the paper</p>
              </li>
              <li id="TWZ">
                <p>This paper describes the syntax and semantics of foo, bar, and baz.</p>
              </li>
            </ul>
          </li>
          <li id="V8" data-type="heading">
            <p>Assessment of the paper</p>
            <ul>
              <li id="vD" data-type="note">
                <p>Please give a balanced assessment of the paper's strengths and weaknesses and a clear justification for your review score.</p>
              </li>
            </ul>
          </li>
          <li id="zo" data-type="heading">
            <p>Detailed comments for authors</p>
            <ul>
              <li id="o0" data-type="note">
                <p>Please give here any additional detailed comments or questions that you would like the authors to address in revising the paper.</p>
              </li>
              <li id="bgy" data-type="heading">
                <p>Minor comments</p>
                <ul>
                  <li id="tMq" data-type="unordered">
                    <p>line 23: "teh" => "the"</p>
                  </li>
                  <li id="EmX" data-type="unordered">
                    <p>line 99: "fou" => "foo"</p>
                  </li>
                </ul>
              </li>
            </ul>
          </li>
          <li id="aN" data-type="heading">
            <p>Questions to be addressed by author response</p>
            <ul>
              <li id="7s" data-type="note">
                <p>Please list here any specific questions you would like the authors to address in their author response. Since authors have limited time in which to prepare their response, please only ask questions here that are likely to affect your accept/reject decision.</p>
              </li>
            </ul>
          </li>
          <li id="4S" data-type="heading">
            <p>Comments for PC and other reviewers</p>
            <ul>
              <li id="bN" data-type="note">
                <p>Please list here any additional comments you have which you want the PC and other reviewers to see, but not the authors.</p>
              </li>
              <li id="i2b">
                <p>In case any one is wondering, I am an expert in foo, but not in bar nor baz.</p>
              </li>
            </ul>
          </li>
        </ul>
      </li>
    </ul>
  </body>
</html>
The Markdown that would result from processing a Bike outline directly with Pandoc would be deeply unsuitable for submission. We will, however, use a version of this idea: first we will preprocess the Bike format into more conventional (unstructured) HTML using XSLT 2.0, and then we will use Pandoc to convert this into Markdown.
XSLT 2.0 is unfortunately only implemented by proprietary tools like Saxon, developed by Saxonica. Nonetheless, it is possible to freely install Saxon on macOS using Homebrew:
brew install saxon
You must also install Pandoc, which is conveniently available as a binary on Homebrew:
brew install pandoc
With the system requirements out of the way, we can proceed to prepare an XSLT stylesheet that will convert Bike’s idiosyncratic use of HTML tags to more conventional HTML that can be processed into Markdown by Pandoc. The stylesheet bike-to-html.xsl is described and explained in the listing below.
We can convert Bike outlines to reasonable HTML using an XSLT 2.0 stylesheet, bike-to-html.xsl, detailed below.
<?xml version="1.0"?>
<xsl:stylesheet version="2.0"
  xmlns="http://www.w3.org/1999/xhtml"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xhtml="http://www.w3.org/1999/xhtml"
  exclude-result-prefixes="xhtml">

  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
  <xsl:strip-space elements="*" />
We will allow several tags to be copied verbatim into the output, as Bike uses these in the same way that idiomatic HTML does.
<xsl:template match="xhtml:html | xhtml:body | xhtml:code | xhtml:strong | xhtml:em | xhtml:mark">
  <xsl:copy>
    <xsl:apply-templates select="node() | @*" />
  </xsl:copy>
</xsl:template>
Bike leaves behind a lot of empty span elements; we drop these.
<xsl:template match="xhtml:span">
  <xsl:apply-templates />
</xsl:template>
Bike uses ul for all lists; the list type is determined not at this level, but rather by each individual item’s @data-type attribute. To get this data into the HTML list model, we must group items that have the same @data-type and wrap them in an appropriate list-forming element.
To do this, we use XSLT 2.0’s xsl:for-each-group instruction to group adjacent li elements by their @data-type attribute. (It is extremely difficult and error-prone to write equivalent code in the more widely available XSLT 1.0.) We must convert @data-type to a string: otherwise, the transformer will crash when it encounters an item without a @data-type attribute.
<xsl:template match="xhtml:ul">
  <xsl:for-each-group select="xhtml:li" group-adjacent="string(@data-type)">
    <xsl:choose>
      <xsl:when test="@data-type='ordered' or @data-type='task'">
        <ol>
          <xsl:apply-templates select="current-group()" />
        </ol>
      </xsl:when>
      <xsl:when test="@data-type='unordered'">
        <ul>
          <xsl:apply-templates select="current-group()" />
        </ul>
      </xsl:when>
      <xsl:otherwise>
        <xsl:apply-templates select="current-group()" />
      </xsl:otherwise>
    </xsl:choose>
  </xsl:for-each-group>
</xsl:template>
Next, we match each individual li element; the content of a list item is stored in a p element directly under li, so we let the transformer fall through the parent and then format the content underneath according to the @data-type of the item.
<xsl:template match="xhtml:li">
  <xsl:apply-templates />
</xsl:template>

<xsl:template match="xhtml:li[@data-type='ordered' or @data-type='unordered' or @data-type='task']/xhtml:p">
  <li>
    <xsl:apply-templates />
  </li>
</xsl:template>
Bike has correctly adopted the optimal explicit and relative model of hierarchy, in contrast to HTML; this means that the depth of a heading is not reflected in the element that introduces it, but is instead inferred from its actual position in the outline hierarchy. To convert Bike outlines to idiomatic HTML, we must flatten the hierarchy and introduce explicit heading levels; luckily, this is easy to accomplish in XSLT by counting the ancestors of heading type.
<xsl:template match="xhtml:li[@data-type='heading']/xhtml:p">
  <xsl:element name="h{count(ancestor::xhtml:li[@data-type='heading'])}">
    <xsl:apply-templates />
  </xsl:element>
</xsl:template>
The remainder of the row types are not difficult to render; you may prefer alternative formatting depending on your goals.
<xsl:template match="xhtml:li[@data-type='quote']/xhtml:p">
  <blockquote>
    <xsl:apply-templates />
  </blockquote>
</xsl:template>

<xsl:template match="xhtml:li[@data-type='note']/xhtml:p">
  <p>
    <em>
      <xsl:apply-templates />
    </em>
  </p>
</xsl:template>

<xsl:template match="xhtml:li[not(@data-type)]/xhtml:p">
  <p>
    <xsl:apply-templates />
  </p>
</xsl:template>

</xsl:stylesheet>
Next, we can use Saxon to convert a Bike outline to idiomatic HTML using the stylesheet above.
cat review.bike | saxon -xsl:bike-to-html.xsl - > review.html
Go ahead and open the resulting HTML file in a text editor and a browser to see the results.
The following is the result of transforming an example Bike outline to idiomatic HTML using an XSLT stylesheet.
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>Tasks</h1>
    <ol>
      <li>read through paper on iPad and highlight</li>
    </ol>
    <h1>Syntax and semantics of foo bar baz</h1>
    <p><mark>Overall merit:</mark></p>
    <p><mark>Reviewer Expertise:</mark></p>
    <h2>Summary of the paper</h2>
    <p><em>Please give a brief summary of the paper</em></p>
    <p>This paper describes the syntax and semantics of foo, bar, and baz.</p>
    <h2>Assessment of the paper</h2>
    <p><em>Please give a balanced assessment of the paper's strengths and weaknesses and a clear justification for your review score.</em></p>
    <h2>Detailed comments for authors</h2>
    <p><em>Please give here any additional detailed comments or questions that you would like the authors to address in revising the paper.</em></p>
    <h3>Minor comments</h3>
    <ul>
      <li>line 23: "teh" => "the"</li>
      <li>line 99: "fou" => "foo"</li>
    </ul>
    <h2>Questions to be addressed by author response</h2>
    <p><em>Please list here any specific questions you would like the authors to address in their author response. Since authors have limited time in which to prepare their response, please only ask questions here that are likely to affect your accept/reject decision.</em></p>
    <h2>Comments for PC and other reviewers</h2>
    <p><em>Please list here any additional comments you have which you want the PC and other reviewers to see, but not the authors.</em></p>
    <p>In case any one is wondering, I am an expert in foo, but not in bar nor baz.</p>
  </body>
</html>
Next, we will process this HTML file using Pandoc; unfortunately, Pandoc leaves behind a lot of garbage character escapes that are not suitable for submission anywhere, so we must filter those out using sed.
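To see what the sed filter does in isolation, here is a minimal sketch; the sample line is a made-up example of the backslash escaping that Pandoc emits for Markdown-significant characters:

```shell
# Pandoc escapes characters like > with a backslash; the sed
# substitution 's/\\//g' deletes every backslash from its input.
printf '%s\n' 'line 23: "teh" =\> "the"' | sed 's/\\//g'
# prints: line 23: "teh" => "the"
```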
cat review.html | pandoc -f html -t markdown-raw_html-native_divs-native_spans-fenced_divs-bracketed_spans-smart | sed 's/\\//g'
The following is the result of converting the idiomatic HTML representation of a Bike outline to Markdown using Pandoc, with some light postprocessing by sed.
# Tasks

1. read through paper on iPad and highlight

# Syntax and semantics of foo bar baz

Overall merit:

Reviewer Expertise:

## Summary of the paper

*Please give a brief summary of the paper*

This paper describes the syntax and semantics of foo, bar, and baz.

## Assessment of the paper

*Please give a balanced assessment of the paper's strengths and weaknesses and a clear justification for your review score.*

## Detailed comments for authors

*Please give here any additional detailed comments or questions that you would like the authors to address in revising the paper.*

### Minor comments

- line 23: "teh" => "the"
- line 99: "fou" => "foo"

## Questions to be addressed by author response

*Please list here any specific questions you would like the authors to address in their author response. Since authors have limited time in which to prepare their response, please only ask questions here that are likely to affect your accept/reject decision.*

## Comments for PC and other reviewers

*Please list here any additional comments you have which you want the PC and other reviewers to see, but not the authors.*

In case any one is wondering, I am an expert in foo, but not in bar nor baz.
We can group these tasks into a one-liner as follows:
cat review.bike | saxon -xsl:bike-to-html.xsl - | pandoc -f html -t markdown-raw_html-native_divs-native_spans-fenced_divs-bracketed_spans-smart | sed 's/\\//g'
I have gathered the scripts to convert Bike outlines into Markdown via idiomatic HTML in a Git repository where they can be easily downloaded. If you have any improvements to these scripts, please submit them as a patch to my public inbox! I am also interested in whether it is possible to write the XSLT 2.0 stylesheet as equivalent XSLT 1.0, to avoid requiring the proprietary Saxon tool. Feel free also to send comments on this post to my public inbox, or discuss with me on Mastodon.
Lurie states the following result as Proposition 7.2.1.14 of Higher Topos Theory:
Let $\mathcal{X}$ be an ∞-topos and let $\tau_{\leq 0} \colon \mathcal{X} \to \tau_{\leq 0}\mathcal{X}$ be the left adjoint to the inclusion. A morphism $\phi \colon U \to X$ in $\mathcal{X}$ is an effective epimorphism if and only if $\tau_{\leq 0}(\phi)$ is an effective epimorphism in the ordinary topos $\mathrm{h}(\tau_{\leq 0}\mathcal{X})$.
Several users of MathOverflow have noticed that the proof of this result given by Lurie is circular. The result is true and can be recovered in a variety of ways, as pointed out in the MathOverflow discussion. For expository purposes, I would like to show how to prove this result directly in univalent foundations using standard results about
First we observe a trivial lemma about truncations.
Any proposition is orthogonal to the map
It follows from the above that
A function
We deduce that
by definition  
by induction  
by Lemma  
by HoTT Book, Theorem 7.3.12  
by HoTT Book, Theorem 7.3.9  
by definition 
Incidentally, the appeal to Theorem 7.3.12 of the HoTT Book can be replaced by the more general Proposition 2.26 of Christensen et al., which applies to an arbitrary reflective subuniverse and the corresponding subuniverse of separated types.
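For completeness, recall the characterization of effective epimorphisms used implicitly throughout; this is standard in the HoTT literature (my paraphrase, stated here as background):

```latex
% In univalent foundations, a map f : A -> B is an effective epimorphism
% (equivalently, (-1)-connected) precisely when it is surjective:
\mathsf{isSurjective}(f) \;:\equiv\; \prod_{b : B} \bigl\| \mathsf{fib}_f(b) \bigr\|_{-1}
```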
The Stacks project is the most successful scientific hypertext project in history. Its goal is to lay the foundations for the theory of algebraic stacks; to facilitate its scalable and sustainable development, several important innovations have been introduced, with the tags system being the most striking.
Each tag refers to a unique item (section, lemma, theorem, etc.) in order for this project to be referenceable. These tags don't change even if the item moves within the text. (Tags explained, The Stacks Project).
Many working scientists, students, and hobbyists have wished to create their own tag-based hypertext knowledge base, but the combination of tools historically required to make this happen is extremely daunting. Both the Stacks project and Kerodon use a cluster of software called Gerby, but bitrot has set in and it is no longer possible to build its dependencies in a modern environment without significant difficulty, raising questions of longevity.
Moreover, Gerby’s deployment involves running a database on a server (in spite of the fact that almost the entire functionality is static HTML), an architecture that is incompatible with the constraints of the everyday working scientist or student who knows at most how to upload static files to their university-provided public storage. The recent experience of the nLab’s pandemic-era hiatus and near-death experience has demonstrated with some urgency the precarity faced by any project relying heavily on volunteer system administrators.
After spending two years exploring the design of tools for scientific thought that meet the unique needs of real, scalable scientific writing in hypertext, I have created a tool called Forester which has the following benefits:
Forester combines associative and hierarchical networks of evergreen notes (called “trees”) into hypertext sites called “forests”.
A forest of evergreen notes (or a forest for short) is loosely defined to be a collection of evergreen notes in which multiple hierarchical structures are allowed to emerge and evolve over time. Concretely, one note may contextualize several other notes via transclusion within its textual structure; in the context of a forest, we refer to an individual note as a tree. Of course, a tree can be viewed as a forest that has a root node.
Trees correspond roughly to what are referred to as “tags” in the Stacks Project.
In this article, I will show you how to set up your own forest using the Forester software. These instructions pertain to Forester 2.3.
In this section, we will walk through the installation of the Forester software.
Forester requires a unix-based system to run; it has been tested on both macOS and Linux. Windows support is desirable, but there are no concrete plans to implement it at this time.
Forester is a tool written in the OCaml programming language, and makes use of the latest features of OCaml 5. Most users should install Forester through OCaml's opam package manager; instructions to install opam and OCaml simultaneously can be found here.
If you intend to embed
It is best practice to maintain your forest inside of distributed version control. This serves not only as a way to prevent data loss (because you will be pushing frequently to a remote repository); it also allows you to easily roll back to an earlier version of your forest, or to create “branches” in which you prepare trees that are not yet ready to be integrated into the forest.
The recommended distributed version control system is git, which comes pre-installed on many unix-based systems and is easy to install otherwise. Git is not the most user-friendly piece of software, unfortunately, but it is ubiquitous. It is possible (but not recommended) to use Forester without version control, but note that the simplest way to initialize your own forest involves cloning a git repository.
Once you have met the system requirements, installing Forester requires only a single shell command:
opam install forester
To verify that Forester is installed, please run forester version in your shell.
Now that you have installed the Forester software, it is time to set up your forest. Currently, the simplest way to set up a new forest is to clone your own copy of the forest-template repository, which has been prepared for this purpose. In the future, I plan to have a more user-friendly way to set up new forests; please contact the mailing list if this is an urgent need, and it will be prioritized accordingly.
We will initialize a new forest in a folder called forest. Open your shell to a suitable parent directory (e.g. your documents folder), and type the following commands:
git init forest
cd forest
The first command above creates a new folder called forest and initializes a git repository inside that folder. The second command instructs your shell to enter the forest directory. Next, we will initialize your new forest with the contents of the default template by pulling from the template repository:
git pull https://git.sr.ht/~jonsterling/forest-template
A tree in Forester is associated to an address of the form xxx-NNNN, where xxx is your chosen “namespace” (most likely your initials) and NNNN is a four-digit base-36 number. The purpose of the namespace and the base-36 code is to uniquely identify a tree, not only within your forest but across all forests. A tree with address xxx-NNNN is always stored in a file named xxx-NNNN.tree.
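As a concrete sketch, a tree file can be created by hand as follows; the address xxx-0002 is a made-up example (substitute your own namespace), and the \title and \p commands follow the markup conventions visible in the template:

```shell
# Create a new tree by hand in the trees/ directory.
# The address xxx-0002 is a hypothetical example; use your own namespace.
mkdir -p trees
cat > trees/xxx-0002.tree <<'EOF'
\title{My first tree}
\p{Hello, forest!}
EOF
```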
Note that the format of tree addresses is purely a matter of convention, and is not forced by the Forester tool. Users are free to use their own format for tree addresses, and in some cases alternative (humanreadable) formats may be desirable: this includes trees representing bibliographic references, as well as biographical trees.
The template forest assumes that your chosen namespace prefix is xxx, and comes equipped with a “root” tree located at trees/xxx-0001.tree. You should choose your own namespace prefix, likely your personal initials, and then rename trees/xxx-0001.tree accordingly.
To build your forest, run the following command in your shell, where xxx-0001 is the address of your root tree:
forester build --dev --root=xxx-0001 trees/
The --dev flag is optional; when activated, it supplies metadata to the generated website to support an “edit button” on each tree. This flag is meant to be used when developing your forest locally, and should not be used when building the forest to be uploaded to your public web host. The --root=xxx-0001 option has the effect of rendering the tree xxx-0001 to a file named output/index.xml rather than to output/xxx-0001.xml.
Forester renders your forest to some XML files in the output/ directory; XML is, like HTML, a format for structured documents that can be displayed by web browsers. The forest template comes equipped with a built-in XSLT stylesheet (assets/forest.xsl) which instructs web browsers how to render your forest into a pleasing and readable format.
The recommended and most secure way to view your forest while editing it is to serve it from a local web server. To do this, first ensure that you have Python 3 correctly installed. Then run the following command from the root directory of your forest:

python3 -m http.server 1313 -d output

While this command is running, you will be able to access your forest by navigating to localhost:1313/index.xml in your preferred web browser.
In the future, Forester will be able to run its own local server to avoid the dependency on external tools like Python.
It is also possible to open the generated file output/index.xml directly in your web browser. Unfortunately, modern web browsers by default prevent the use of XSLT stylesheets on the local file system for security reasons. Because Forester's output format is XML, the output cannot be viewed directly in your web browser without disabling this security feature (at your own risk). Users who do not understand the risks involved should turn back and use a local web server instead, which is more secure; if you understand and are willing to accept the risks, you may proceed as follows depending on your browser.
To configure Firefox for viewing your local forest, navigate to about:config in your address bar and search for the setting security.fileuri.strict_origin_policy. The setting will appear set to true; double-click on the word true to toggle it to false.

To configure Safari for viewing your local forest, you must activate the Develop menu and then toggle one setting.
The first tree that you should create is a biographical tree to represent your own identity; ultimately you will link to this tree when you set the authors of other trees that you create later on. Although most trees will be addressed by identifiers of the form xxx-NNNN, it is convenient to simply use a person's full name to address a biographical tree. My own biographical tree is located at trees/people/jonmsterling.tree and contains the following source code:
\title{Jon Sterling}
\taxon{person}
\meta{external}{https://www.jonmsterling.com/}
\meta{institution}{University of Cambridge}
\meta{orcid}{0000-0002-0585-5564}
\meta{position}{Associate Professor}

\p{Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a [Marie Skłodowska-Curie Postdoctoral Fellow](jms-0061) hosted at Aarhus University by [Lars Birkedal](larsbirkedal), and before this a PhD student of [Robert Harper](robertharper).}
Let’s break this code down to understand what it does.

\title{Jon Sterling} sets the title of the tree to my name.

The \taxon{person} declaration informs Forester that the tree is biographical. Not every tree needs to have a taxon; common taxa include person, theorem, definition, lemma, blog, etc. You are free to use whatever you want, but some taxa are treated specially by Forester.

The \meta declarations attach additional information to the tree that can be used during rendering. These declarations are optional, and you are free to put whatever metadata you want.

The prose of the tree is given in paragraphs, each wrapped in \p{...}.

Do not hard-wrap your text, as this can have a visible impact on how trees are rendered; it is recommended that you use a text editor with good support for soft-wrapping, like Visual Studio Code.
You can see that the concrete syntax of Forester's trees looks superficially like a combination of LaTeX-style commands and Markdown-style links.
Creating a new tree in your forest is as simple as adding a .tree file to the trees folder. Because it is hard to manually choose the next incremental tree address, Forester provides a command to do this automatically. If your chosen namespace prefix is xxx, then you should use the following command in your shell to create a new tree:

forester new --dir=trees --prefix=xxx

In return, Forester should output a message like Created tree xxx-007J; this means that there is a new file located at trees/xxx-007J.tree which you can populate. If we look at the contents of this new file, we will see that it is empty except for metadata assigning a date to the tree:
\date{2023-08-15}
Most trees should have a \date annotation; this date is meant to be the date of the tree's creation. You should proceed by adding further metadata: the title and the author; for the latter, you will use the address of your personal biographical tree.
\title{my first tree}
\author{jonmsterling}
Tree titles should be given in lower case (except for proper names, etc.); these titles will be rendered by Forester in sentence case. A tree can have as many \author declarations as it has authors; these will be rendered in their order of appearance.
Now you can begin to populate the tree with its content, written in the Forester markup language. Think carefully about keeping each tree relatively independent and atomic.
You may be used to writing documents whose section hierarchy is imposed top-down. Forester's bottom-up approach to section hierarchy instead works via something called transclusion. The idea is that at any time, you can include ("transclude") the full contents of another tree into the current tree as a subsection by adding the following code:
\transclude{xxx-NNNN}
This is kind of like LaTeX's \input command, but much better behaved: for instance, section numbers are computed on the fly. This entire tutorial is cobbled together by transcluding many smaller trees, each with its own independent existence. For example, the following two sections are transcluded from an entirely different part of my forest:
It is easy to make the mistake of prematurely imposing a complex hierarchical structure on a network of notes, which leads to excessive refactoring. Hierarchy should be used sparingly, and its strength is for the large-scale organization of ideas. The best structure to impose on a network of many small related ideas is a relatively flat one. I believe that this is one of the mistakes made in the writing of the foundations of relative category theory, whose hierarchical nesting was too complex and quite beholden to my experience with pre-hypertext media.
One of the immediate impacts and strengths of Forester’s transclusion model is that a given tree has no canonical “geographic” location in the forest. One tree can appear as a child of many other trees, which allows the same content to be incorporated into different textual and intellectual narratives.
Multiple hierarchical structures can be imposed on the same associative network of nodes; a hierarchical structure amounts to a “narrative” that contextualizes a given subgraph of the network. One example could be the construction of lecture notes; another example could be a homework sheet; a further example could be a book chapter or scientific article. Although these may draw from the same body of definitions, theorems, examples, and exercises, these objects are contextualized within a different narrative, often toward fundamentally different ends.
As a result, any interface for navigating the neighbor relation in hierarchically organized notes would need to take account of the multiplicity of parent nodes. Most hypertext tools assume that the position of a node in the hierarchy is unique, and therefore have a single "next/previous" navigation interface; we must investigate the design of interfaces that surface all parent/neighbor relations.
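Returning to the mechanics of transclusion, here is a hypothetical parent tree (all addresses are invented for this example) that assembles lecture notes from three independent notes; each note continues to exist on its own and may be transcluded into other parent trees as well:

```
\title{lecture notes on my favorite topic}
\author{jonmsterling}
\date{2023-08-15}

\transclude{xxx-0002}
\transclude{xxx-0003}
\transclude{xxx-0004}
```

Because section numbers are computed on the fly, the same three notes can be transcluded in a different order, or into a different parent, without any change to the notes themselves.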
A tree in Forester is a single file written in a markup language designed specifically for scientific writing, with bottom-up hierarchy via transclusion. A tree has two components: the frontmatter and the mainmatter.
The frontmatter of a Forester tree is a sequence of declarations that we summarize below.
Declaration | Meaning
\title{...} | sets the title of the tree; can contain mainmatter markup
\author{name} | sets the author of the tree to be the biographical tree at address name
\date{YYYY-MM-DD} | sets the creation date of the tree
\taxon{taxon} | sets the taxon of the tree; example taxa include lemma, theorem, person, reference; the latter two taxa are treated specially by Forester for tracking biographical and bibliographical trees respectively
\def\ident[x][y]{body} | defines and exports from the current tree a function named \ident with two arguments; subsequently, the expression \ident{u}{v} would expand to body with the values of u, v substituted for \x, \y
\import{xxx-NNNN} | brings the functions exported by the tree xxx-NNNN into scope
\export{xxx-NNNN} | brings the functions exported by the tree xxx-NNNN into scope, and exports them from the current tree
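As a hypothetical illustration of \def and \import (the macro name \Hom and the addresses below are invented for this example), a tree xxx-0002 might define and export a hom-set macro:

```
\def\Hom[c][a][b]{#{\mathrm{Hom}_{\c}(\a,\b)}}
```

Another tree can then bring the macro into scope and use it:

```
\import{xxx-0002}

\p{Consider the set \Hom{C}{X}{Y} of morphisms from #{X} to #{Y} in #{C}.}
```

Here \Hom{C}{X}{Y} expands to #{\mathrm{Hom}_{C}(X,Y)}, with the arguments substituted for \c, \a, and \b; declaring \export{xxx-0002} instead of \import{xxx-0002} would additionally re-export the macro to trees that import this one.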
Below we summarize the concrete syntax of the mainmatter in a Forester tree.
Function | Meaning
\p{...} | creates a paragraph containing ...; unlike Markdown, it is mandatory to annotate paragraphs explicitly
\em{...} | typesets the content in italics
\b{...} | typesets the content in boldface
#{...} | typesets the content in (inline) math mode using KaTeX
##{...} | typesets the content in (display) math mode using KaTeX
\transclude{xxx-NNNN} | transcludes the tree at address xxx-NNNN as a subsection
[title](address) | formats the text title as a hyperlink to address; if address is the address of a tree, the link will point to that tree, and otherwise it is treated as a URL
\let\ident[x][y]{body} | defines a local function named \ident with two arguments; subsequently, the expression \ident{u}{v} would expand to body with the values of u, v substituted for \x, \y
\code{...} | typesets the content in monospace
\tex{...} | typesets the value of the body externally using \texpackage{pkgname} declarations in the frontmatter
An example of a complete tree in the Forester markup language can be seen below.
\title{creation of (co)limits}
\date{2023-02-11}
\taxon{definition}
\author{jonmsterling}

\def\CCat{#{\mathcal{C}}}
\def\DCat{#{\mathcal{D}}}
\def\ICat{#{\mathcal{I}}}
\def\Mor[arg1][arg2][arg3]{#{{\arg2}\xrightarrow{\arg1}{\arg3}}}

\p{Let \Mor{U}{\CCat}{\DCat} be a functor and let \ICat be a category. The functor #{U} is said to \em{create (co)limits of #{\ICat}-figures} when for any diagram \Mor{C_\bullet}{\ICat}{\CCat} such that #{\ICat\xrightarrow{C_\bullet}\CCat\xrightarrow{F}\DCat} has a (co)limit, then #{C_\bullet} has a (co)limit that is both preserved and reflected by #{F}.}
When built, the code above is rendered by Forester as a complete definition tree, with the title, metadata, and mathematics typeset.
Now that you have created your forest and added a few trees of your own, it is time to upload it to your web host. Many users of Forester will have university-supplied static web hosting, and others may prefer to use GitHub Pages; deploying a forest works the same way in either case.
Simply take the contents of the output directory and upload them to your preferred web host.

I am eager to see the new forests that people create using Forester. I am happy to offer personal assistance via the mailing list.
Many aspects of Forester are in flux and not fully documented; it will often be instructive to consult the source of existing forests, such as my own. Please be aware that my own forest tends to use the latest (often unreleased) version of Forester, so there may be slight incompatibilities.
Have fun, and be sure to send me links to your forests when you have made them!
It is easy to teach a student how to give a naïve denotational semantics to a language like System T, and then use it to reason about the equational theory: a type might as well be a set, and a program might as well be a function, and equational adequacy at base type is established using a logical relation between the initial model and the category of sets. Adding any nontrivial feature to this language (e.g. general recursion, polymorphism, state, etc.) immediately increases the difficulty beyond the facility of a beginner: to add recursion, one must replace sets and functions with domains and continuous maps, and to accommodate polymorphism and state, one must pass to increasingly inaccessible variations on this basic picture.
The dream of the 1990s was to find a category that behaves like
In this talk, I will explain some important classical results in synthetic domain theory as well as more recent results that illustrate the potential impact of “naïve denotational semantics” on the life of a workaday computer scientist.
Impredicative guarded dependent type theory (iGDTT) is a new version of type theory that combines guarded recursion (the "later" modality) with impredicative polymorphism (universal and existential types). It turns out that these two features are sufficient to define a very simple denotational semantics for System F with recursive types and higher-order store. We believe that the expressivity of iGDTT brings us one step closer to a general metalanguage for realistic denotational semantics, and provides a compelling strategy to elude the burden of operational semantics. As a further benefit, we are now able to justify the extension of full dependent type theory with a Haskell-style IO monad and IORef types.
Jacobs has proposed definitions for (weak, strong, split) generic objects for a fibered category; building on his definition of (split) generic objects, Jacobs develops a menagerie of important fibrational structures with applications to categorical logic and computer science, including higher-order fibrations, polymorphic fibrations, 𝜆2-fibrations, triposes, and others. We observe that a split generic object need not in particular be a generic object under the given definitions, and that the definitions of polymorphic fibrations, triposes, etc. are strict enough to rule out some fundamental examples: for instance, the fibered preorder induced by a partial combinatory algebra in realizability is not a tripos in this sense. We propose a new alignment of terminology that emphasizes the forms of generic object appearing most commonly in nature, i.e. in the study of internal categories, triposes, and the denotational semantics of polymorphism. In addition, we propose a new class of acyclic generic objects inspired by recent developments in higher category theory and the semantics of homotopy type theory, generalizing the realignment property of universes to the setting of an arbitrary fibration.
Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
As type theories have become increasingly more sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have facilitated a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly non-trivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple “clocks” parameterizing different time-streams, leading to more complex and difficult to understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bagtopos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.
We present XTT, a version of Cartesian cubical type theory specialized for Bishop sets à la Coquand, in which every type enjoys a definitional version of the uniqueness of identity proofs. Using cubical notions, XTT reconstructs many of the ideas underlying Observational Type Theory, a version of intensional type theory that supports function extensionality. We prove the canonicity property of XTT (that every closed boolean is definitionally equal to a constant) by Artin gluing.
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants.
We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and physicist’s method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid’s algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state-of-the-art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper’s recent reinterpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between “higher” and “lower” security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.’s dependency core calculus to which we have added a construct for declassifying termination channels.
In the published version of this paper, there were a few mistakes that have been corrected in the local copy hosted here.
A serious (and as-yet-unfixed) problem was discovered in July of 2023 by Yue Niu, which undermines the proof of adequacy given; in particular, the proof that the logical relation on free algebras is admissible is not correct. I believe there is a different proof of adequacy for the calculus described, but it will have a different structure from what currently appears in the paper. We thank Yue Niu for his attention to detail and careful reading of this paper.
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh “synthetic” take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis.
Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be non-trivial evidence witnessing the relatedness of two programs; this simplifies the metatheory of strong sums over the collection of types, for although there can be no “relation classifying relations,” one easily accommodates a “family classifying small families.”
Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase-separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos-theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
After going to press, we fixed a number of mistakes; the local copy hosted here has the corrections implemented.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of
Extending Martín Hötzel Escardó’s effectful forcing technique, we give a new proof of a well-known result: Brouwer’s monotone bar theorem holds for any bar that can be realized by a functional of type
We contribute XTT, a cubical reconstruction of Observational Type Theory [Altenkirch et al., 2007] which extends Martin-Löf's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity proofs principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel extension of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either true or false.
Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT🔒 supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT🔒 supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT🔒, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT🔒, demonstrating the ease of applying our techniques.
Nakano’s later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a hierarchy of universes. We present an operational account of guarded dependent type theory with clocks called Guarded Computational Type Theory, featuring a novel clock intersection connective that enjoys the clock irrelevance principle, as well as a predicative hierarchy of universes which does not require any indexing in clock contexts. Guarded Computational Type Theory is simultaneously a programming language with a rich specification logic, as well as a computational metalanguage that can be used to develop semantics of other languages and logics.
In this paper, we present an extension to Martin-Löf’s Intuitionistic Type Theory which gives natural solutions to problems in pragmatics, such as pronominal reference and presupposition. Our approach also gives a simple account of donkey anaphora without resorting to exotic scope extension of the sort used in Discourse Representation Theory and Dynamic Semantics, thanks to the proof-relevant nature of type theory.
We develop a denotational semantics for general reference types in an impredicative version of guarded homotopy type theory, an adaptation of synthetic guarded domain theory to Voevodsky’s univalent foundations. We observe for the first time the profound impact of univalence on the denotational semantics of mutable state. Univalence automatically ensures that all computations are invariant under symmetries of the heap—a bountiful source of free theorems. In particular, even the most simplistic univalent model enjoys many new program equivalences that do not hold when the same constructions are carried out in the universes of traditional set-level (extensional) type theory.
We present decalf, a directed, effectful cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of calf’s approach to cost accounting: rather than rely on a “separable” notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost in a manner that readily extends to higher-order, effectful programs.
The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higherorder functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
As type theories have become increasingly more sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have facilitated a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly non-trivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
We present a novel mechanism for controlling the unfolding of definitions in dependent type theory. Traditionally, proof assistants let users specify whether each definition can or cannot be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core type theory with extension types, a connective first introduced in the context of homotopy type theory. We prove a normalization theorem for our core calculus and have implemented our system in the cooltt proof assistant, providing both theoretical and practical evidence for it.
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. We then extend our language with modal constructs for proof-relevant relational reasoning based on the logical relations as types principle, in which equivalences between imperative abstract datatypes can be established synthetically. Finally we develop a decomposition of the store model as a general construction that extends an arbitrary polymorphic call-by-push-value adjunction with higher-order store, improving on Levy's possible worlds model construction; what is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
Existential types are reconstructed in terms of small reflective subuniverses and dependent sums. The folklore decomposition detailed here gives rise to a particularly simple account of first-class modules as a mode of use of traditional second-class modules in connection with the modal operator induced by a reflective subuniverse, leading to a semantic justification for the rules of first-class modules in languages like OCaml and Moscow ML. Additionally, we expose several constructions that give rise to semantic models of ML-style programming languages with both first-class modules and realistic computational effects, culminating in a model that accommodates higher-order first-class recursive modules and higher-order store.
Logical relations are the main tool for proving positive properties of logics, type theories, and programming languages: canonicity, normalization, decidability, conservativity, computational adequacy, and more. Logical relations combine pure syntax with non-syntactic objects that are parameterized in syntax in a somewhat complex way; the sophistication of possible parameterizations makes logical relations a tool that is primarily accessible to specialists. In the spirit of Halmos' book Naïve Set Theory, I advocate for a new viewpoint on logical relations based on synthetic Tait computability, the internal language of categories of logical relations. In synthetic Tait computability, logical relations are manipulated as if they were sets, making the essence of many complex logical relations arguments accessible to non-specialists.
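The archetypal case of such a parameterization is the clause for function types, which can be stated in one line:

```latex
% The unary logical predicate at a function type: a term is computable
% at A -> B exactly when it maps computable arguments to computable
% results.
R_{A \to B}(t) \iff \forall a.\; R_A(a) \Rightarrow R_B(t\,a)
```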
Hofmann and Streicher famously showed how to lift Grothendieck universes into presheaf topoi, and Streicher has extended their result to the case of sheaf topoi by sheafification. In parallel, van den Berg and Moerdijk have shown in the context of algebraic set theory that similar constructions continue to apply even in weaker metatheories. Unfortunately, sheafification seems not to preserve an important realignment property enjoyed by the presheaf universes that plays a critical role in models of univalent type theory as well as synthetic Tait computability, a recent technique to establish syntactic properties of type theories and programming languages. In the context of multiple universes, the realignment property also implies a coherent choice of codes for connectives at each universe level, thereby interpreting the cumulativity laws present in popular formulations of Martin-Löf type theory.
We observe that a slight adjustment to an argument of Shulman constructs a cumulative universe hierarchy satisfying the realignment property at every level in any Grothendieck topos. Hence one has direct-style interpretations of Martin-Löf type theory with cumulative universes into all Grothendieck topoi. A further implication is to extend the reach of recent synthetic methods in the semantics of cubical type theory and the syntactic metatheory of type theory and programming languages to all Grothendieck topoi.
I have served as an external advisor for the following bachelor students:
I have served as a committee member for the following PhD students:
PhD student of Robert Harper.
The course aims to introduce the mathematics of discrete structures, showing it as an essential tool for computer science that can be clever and beautiful.
As I have taken up a faculty position at Cambridge in Autumn ’23, I frequently receive queries from students who are interested in becoming my PhD student. I am still learning the processes here, so my advice on administrative matters may be a little vague, but I have collected here a few points for your attention.
Note that this is a living document, and will be updated as I learn new lessons.
It depends. For example, if you did a four-year Bachelor's degree with a research component in the final year (such as a high-quality honors thesis), this could be similar in some respects to doing an undergraduate degree at Cambridge followed by an MPhil. In the end, having or not having a Master's degree is not going to be the thing that convinces me that you should do a PhD; on the other hand, it can help with your application for admission and funding, a side of the process that I am not in control of.
I have been informed by my colleague Peter Sewell that for students with untraditional backgrounds, many things become possible if your supervisor has external funding that can be used for students. I do not currently have this kind of funding, but there may be possibilities in the future. Please write to me if you would like to discuss this.
It will of course depend on the student, but I prefer to recruit students with some existing mathematical maturity (and this could be in any area of mathematics, not just category theory). The purpose of Bachelor's- and Master's-level mathematical training is not to learn a specific topic but rather to learn how to learn any topic whatsoever. In short, I do not care if you do not know what a natural transformation is, but it will be a problem if it would take you more than twenty minutes to find out what a natural transformation is.
If you aren’t sure where you stand, please contact me for a chat; I promise not to bite.
I do not recruit students to work on a specific project of mine. I do not run a “lab” or partake in “lab science”; for example, it would not be a successful outcome for you to make the fifteenth extension to a large project of mine that has consumed the lives of several of your predecessors. My job is to train you to be an independent researcher, drawn forward by your own scientific interest and burning questions. In doing so, I will do my best to guide you toward good questions, and provide technical training through collaboration on my projects, but in the end the responsibility for your scientific program is yours.
I will fight for your right to pursue scientific life in a collegial environment that accepts and respects your whole personhood. I have made clear my point of view on diversity, equity, and inclusion in the academic setting in my 2022 statement.
Central to both the design of programming languages and the practice of software engineering is the tension between abstraction and composition. I employ semantic methods from category theory and type theory to design, verify, and implement languages that enable both programmers and mathematicians to negotiate the different levels of abstraction that arise in their work. I apply my research to global safety and security properties of programming languages as well as the design and implementation of interactive theorem provers for higher-dimensional mathematics. I develop foundational and practical mathematical tools to deftly weave together verifications that cut across multiple levels of abstraction.
From September 2023, I am an Associate Professor in Logical Foundations and Formal Methods at the University of Cambridge.
From 2022 to 2023, I was a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University, working with Professor Lars Birkedal.
From 2016 to 2021, I was a PhD student of Professor Robert Harper at Carnegie Mellon University, where I wrote my doctoral thesis on synthetic Tait computability and its application to normalization for cubical type theory.
I have served on the following conference program committees:
The annual Symposium on Principles of Programming Languages is a forum for the discussion of all aspects of programming languages and programming systems. Both theoretical and experimental papers are welcome on topics ranging from formal frameworks to experience reports. We seek submissions that make principled, enduring contributions to the theory, design, understanding, implementation or application of programming languages.
Applied category theory is a topic of interest for a growing community of researchers, interested in studying many different kinds of systems using categorytheoretic tools. These systems are found across computer science, mathematics, and physics, as well as in social science, linguistics, cognition, and neuroscience. The background and experience of our members is as varied as the systems being studied. The goal of Applied Category Theory is to bring researchers in the field together, disseminate the latest results, and facilitate further development of the field.
I have served on program committees for the following workshops:
The Second Workshop on the Implementation of Type Systems (WITS 2023) will be held on August 28, 2023, in Braga, Portugal, co-located with IFL 2023. The goal of this workshop is to bring together the implementors of a variety of languages with advanced type systems. The main focus is on the practical issues that come up in the implementation of these systems, rather than the theoretical frameworks that underlie them. In particular, we want to encourage exchanging ideas between the communities around specific systems that would otherwise be accessible to only a very select group. The workshop will have a mix of invited and contributed talks, organized discussion times, and informal collaboration time. We invite participants to share their experiences, study differences among the implementations, and generalize lessons from those. We also want to promote the creation of a shared vocabulary and set of best practices for implementing type systems.
Homotopy Type Theory is a young area of logic, combining ideas from several established fields: the use of dependent type theory as a foundation for mathematics, inspired by ideas and tools from abstract homotopy theory. Univalent Foundations are foundations of mathematics based on the homotopical interpretation of type theory.
The goal of this workshop is to bring together researchers interested in all aspects of Homotopy Type Theory/Univalent Foundations: from the study of syntax and semantics of type theory to practical formalization in proof assistants based on univalent type theory.
The Workshop on Type-Driven Development (TyDe) aims to show how static type information may be used effectively in the development of computer programs. Co-located with ICFP, this workshop brings together leading researchers and practitioners who are using or exploring types as a means of program development.
Jacobs has proposed definitions for (weak, strong, split) generic objects for a fibered category; building on his definition of (split) generic objects, Jacobs develops a menagerie of important fibrational structures with applications to categorical logic and computer science, including higher-order fibrations, polymorphic fibrations, 𝜆2-fibrations, triposes, and others. We observe that a split generic object need not in particular be a generic object under the given definitions, and that the definitions of polymorphic fibrations, triposes, etc. are strict enough to rule out some fundamental examples: for instance, the fibered preorder induced by a partial combinatory algebra in realizability is not a tripos in this sense. We propose a new alignment of terminology that emphasizes the forms of generic object appearing most commonly in nature, i.e. in the study of internal categories, triposes, and the denotational semantics of polymorphism. In addition, we propose a new class of acyclic generic objects inspired by recent developments in higher category theory and the semantics of homotopy type theory, generalizing the realignment property of universes to the setting of an arbitrary fibration.
Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
As type theories have become increasingly sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have prompted a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly non-trivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple “clocks” parameterizing different time-streams, leading to more complex and difficult-to-understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bag topos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.
We present XTT, a version of Cartesian cubical type theory specialized for Bishop sets à la Coquand, in which every type enjoys a definitional version of the uniqueness of identity proofs. Using cubical notions, XTT reconstructs many of the ideas underlying Observational Type Theory, a version of intensional type theory that supports function extensionality. We prove the canonicity property of XTT (that every closed boolean is definitionally equal to a constant) by Artin gluing.
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants.
We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid’s algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
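The batched-queue case study can be sketched informally in Python. The calf development proves the amortized bound formally in Agda; the class below (names and structure ours) merely instruments the classic two-list queue with an explicit step counter in the spirit of calf's cost effect.

```python
# Illustrative only: a batched (two-list) queue with an explicit cost
# counter. Amortized, each element pays for its own later move from
# the back list to the front list at enqueue time, so any sequence of
# n enqueues and n dequeues charges at most 2n steps in total.
class BatchedQueue:
    def __init__(self):
        self.front, self.back = [], []
        self.cost = 0  # number of "steps" charged so far

    def enqueue(self, x):
        self.back.append(x)
        self.cost += 1  # one step per enqueue

    def dequeue(self):
        if not self.front:
            # Reversal charges one step per moved element.
            while self.back:
                self.front.append(self.back.pop())
                self.cost += 1
        return self.front.pop()
```

Running five enqueues followed by five dequeues yields the elements in FIFO order while the counter stays within the 2n amortized bound.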
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state-of-the-art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper’s recent reinterpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between “higher” and “lower” security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.’s dependency core calculus to which we have added a construct for declassifying termination channels.
In the published version of this paper, there were a few mistakes that have been corrected in the local copy hosted here.
A serious (and as-yet-unfixed) problem was discovered in July of 2023 by Yue Niu, which undermines the proof of adequacy given; in particular, the proof that the logical relation on free algebras is admissible is not correct. I believe there is a different proof of adequacy for the calculus described, but it will have a different structure from what currently appears in the paper. We thank Yue Niu for his attention to detail and careful reading of this paper.
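The noninterference property at stake can be stated very concretely, independently of the sheaf semantics: a program's low-security output may not vary with its high-security input. The Python sketch below (names ours, illustrative only) checks the property pointwise.

```python
# Noninterference, concretely: varying the high-security input must
# leave the low-observable output fixed. This tiny pointwise checker
# is illustrative; the sheaf model validates the property intrinsically.
def program(high: int, low: int) -> int:
    # A well-behaved program: the result depends only on `low`.
    _audit = high * 42  # high data may be used internally...
    return low + 1      # ...but must not flow to the low output

def noninterfering(f, low, highs):
    # Termination-insensitive noninterference, tested on finitely
    # many high inputs: the set of observed outputs is a singleton.
    outputs = {f(h, low) for h in highs}
    return len(outputs) == 1
```

A leaky variant, such as `lambda h, l: h + l`, fails the check immediately.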
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh “synthetic” take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal typetheoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a typetheoretic basis.
Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be nontrivial evidence witnessing the relatedness of two programs, simplifying the metatheory of strong sums over the collection of types, for although there can be no “relation classifying relations,” one easily accommodates a “family classifying small families.”
Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase-separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos-theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
After going to press, we have fixed the following mistakes:
The local copy hosted here has the corrections implemented.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of
Extending Martín Hötzel Escardó’s effectful forcing technique, we give a new proof of a well-known result: Brouwer’s monotone bar theorem holds for any bar that can be realized by a functional of type
We contribute XTT, a cubical reconstruction of Observational Type Theory [Altenkirch et al., 2007] which extends Martin-Löf's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity proofs principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel extension of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either true or false.
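A judgmental form of UIP can be glimpsed in Lean 4, whose definitional proof irrelevance for propositions plays a role analogous to XTT's principle: any two proofs of the same equality are judgmentally equal, so no eliminator is needed. This is an analogy only, not the XTT mechanism itself.

```lean
-- Definitional proof irrelevance in Lean 4: `p` and `q` inhabit the
-- same equality proposition, so they are judgmentally equal and `rfl`
-- closes the goal outright.
example {A : Type} {a b : A} (p q : a = b) : p = q := rfl
```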
Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT🔒 supporting the connectives of standard MartinLöf Type Theory as well as an S4style necessity operator. MLTT🔒 supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT🔒, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT🔒, demonstrating the ease of applying our techniques.
Nakano’s later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a hierarchy of universes. We present an operational account of guarded dependent type theory with clocks called Guarded Computational Type Theory, featuring a novel clock intersection connective that enjoys the clock irrelevance principle, as well as a predicative hierarchy of universes which does not require any indexing in clock contexts. Guarded Computational Type Theory is simultaneously a programming language with a rich specification logic, as well as a computational metalanguage that can be used to develop semantics of other languages and logics.
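Productivity, which the later modality enforces by typing, can be mimicked dynamically with thunks. The Python sketch below (ours, illustrative only) guards the tail of an infinite stream behind a lambda, the moral analogue of placing it "later".

```python
# A productive stream of natural numbers: each element is computable in
# finite time even though the stream is infinite. The tail is guarded
# behind a thunk and not demanded until one step in the future; guarded
# type theories make this discipline part of the type system.
def nats(n=0):
    return (n, lambda: nats(n + 1))

def take(k, stream):
    # Force exactly k steps of the stream, collecting the heads.
    out = []
    for _ in range(k):
        head, tail = stream
        out.append(head)
        stream = tail()
    return out
```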
In this paper, we present an extension to Martin-Löf’s Intuitionistic Type Theory which gives natural solutions to problems in pragmatics, such as pronominal reference and presupposition. Our approach also gives a simple account of donkey anaphora without resorting to exotic scope extension of the sort used in Discourse Representation Theory and Dynamic Semantics, thanks to the proof-relevant nature of type theory.
PI:  Jonathan Sterling 
Funding Agency:  United States Air Force Office of Scientific Research 
Program Officer:  Dr. Tristan Nguyen 
Years:  2023–2028 
Status:  recommended for sponsorship 
What does it mean for a piece of software to be correct? There are many possible degrees and dimensions of correctness, e.g. safety, functional correctness, computational complexity, and security. To grapple with this diversity of verification requirements, semanticists develop mathematical models of program behavior that put into relief different aspects of the physical reality of program execution on hardware, just as physicists create many idealized mathematical models to study different aspects of the material reality of the universe.
Mathematical models enable us to reason about program behavior by viewing highly complex objects as being glued together from smaller, simpler objects that are easier to study in isolation. For instance, operational models aim to reduce the behavior of a process to that of individual steps of discrete computation that take place on an idealized computer; in contrast, denotational models reduce the complex global behavior of a process to the simpler local behavior of its constituent subroutines. One advantage of operational methods is that they are applicable even in situations that challenge the modularity of denotational semantics, e.g. where it is not yet understood how to reduce the global behavior of a program to that of its components. On the other hand, denotational methods provide vastly stronger and simpler reasoning principles for program verification when available.
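The contrast can be made concrete with a toy language of numerals and sums; this sketch is ours and not part of the project. The operational model rewrites one step at a time on an idealized machine, while the denotational model maps each term directly to its meaning, reducing global behavior to the behavior of the parts.

```python
# Terms are integers or nested tuples ('add', left, right).

def step(t):
    # Small-step operational semantics: perform one rewrite.
    if isinstance(t, int):
        return t
    op, l, r = t
    if isinstance(l, int) and isinstance(r, int):
        return l + r
    if not isinstance(l, int):
        return (op, step(l), r)
    return (op, l, step(r))

def run(t):
    # Iterate steps until a value (an integer) is reached.
    while not isinstance(t, int):
        t = step(t)
    return t

def denote(t):
    # Denotational semantics: the meaning of a sum is the sum of the
    # meanings of its parts -- global behavior from local behavior.
    if isinstance(t, int):
        return t
    _, l, r = t
    return denote(l) + denote(r)
```

For this simple language the two models agree on every term, which is the adequacy property that becomes hard to establish once effects and concurrency enter the picture.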
The central thesis of denotational semantics is that programs arrange themselves into geometrical spaces called computational domains, and that a computational process can be thought of as the limit of a sequence of continuous transformations on these domains. Although this thesis has been amply borne out for simple kinds of program, today’s most urgent verification requirements pertain to program constructs like concurrency and side effects whose treatment requires the introduction of new kinds of space: for instance, the correct treatment of branching behavior for concurrent processes requires the introduction of higher-dimensional computational domains in which programs can “remember” the specific way that they were glued together.
This project will extend the reach of denotational semantics and its attendant advantages for program verification into terrains where scientists have historically struggled to enact the reduction of global behavior to local behavior, making essential use of new advances in the homotopical and geometrical understanding of computation via higher-dimensional category theory and topos theory. I will investigate two areas that are ripe for reaping the benefits of a modern denotational semantics: the semantics of side effects which govern the interaction of a program with the computer’s memory, and the semantics of concurrent processes.
Beneficiary:  Jonathan Sterling 
Award:  Marie Skłodowska-Curie Actions Postdoctoral Fellowship 
Funder:  European Commission, Horizon Europe Framework Programme (HORIZON) 
Host:  Aarhus University, Center for Basic Research in Program Verification 
Years:  2022–2024 (terminated 2023) 
Amount:  214,934.4 EUR 
See the Final Report.
Abstract. Software systems mediate a growing proportion of human activity, e.g. communication, transport, medicine, industrial and agricultural production, etc. As a result, it is urgent to understand and better control both the correctness and security properties of these increasingly complex software systems. The diversity of verification requirements speaks to a need for models of program execution that smoothly interpolate between many different levels of abstraction.
Models of program execution vary in expressiveness along the spectrum of possible programming languages and specification logics. At one extreme, dependent type theory is a language for mathematically inspired functional programming that is sufficiently expressive to serve as its own specification logic. Dependent type theory has struggled, however, to incorporate several computational effects that are common in everyday programming languages, such as state and concurrency. Languages that support these features require very sophisticated specification logics due to the myriad details that must be surfaced in their semantic models.
In the context of dependent type theory, I have recently developed a new technique called Synthetic Tait Computability or STC that smoothly combines multiple levels of abstraction into a single language. Inspired by sophisticated mathematical techniques invented in topos theory and category theory for entirely different purposes, STC enables low-level details (even down to execution steps) to be manipulated in a simpler and more abstract way than ever before, making them easier to control mathematically. Perhaps more importantly, the STC method makes it possible to import ideas and techniques from other mathematical fields that are comparatively more developed than programming languages.
The goal of the TypeSynth project is to extend the successful STC approach to a wider class of programming models, in particular programming languages with effects.
We develop a denotational semantics for general reference types in an impredicative version of guarded homotopy type theory, an adaptation of synthetic guarded domain theory to Voevodsky’s univalent foundations. We observe for the first time the profound impact of univalence on the denotational semantics of mutable state. Univalence automatically ensures that all computations are invariant under symmetries of the heap—a bountiful source of free theorems. In particular, even the most simplistic univalent model enjoys many new program equivalences that do not hold when the same constructions are carried out in the universes of traditional setlevel (extensional) type theory.
We present decalf, a directed, effectful cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of calf’s approach to cost accounting: rather than rely on a “separable” notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost in a manner that readily extends to higher-order, effectful programs.
The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higherorder functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.
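The idea that a cost bound is "simply another program" can be illustrated with a minimal sketch in plain Python (this is an illustration of the general idea only, not decalf itself; all names here are hypothetical): a computation is a pair of a value and a step count, `step` charges one unit, and sequencing adds costs, so the intensional phase sees the cost while the extensional phase sees only the value.

```python
# A writer-monad sketch of step counting: a computation is (value, cost).
def ret(v):
    return (v, 0)

def step(comp):
    v, c = comp
    return (v, c + 1)

def bind(comp, k):
    v, c = comp
    v2, c2 = k(v)
    return (v2, c + c2)

# Insertion sort, charging one step per comparison.
def insert(x, xs):
    if not xs:
        return ret([x])
    if x <= xs[0]:
        return step(ret([x] + xs))
    return step(bind(insert(x, xs[1:]), lambda rest: ret([xs[0]] + rest)))

def isort(xs):
    if not xs:
        return ret([])
    return bind(isort(xs[1:]), lambda rest: insert(xs[0], rest))

value, cost = isort([3, 1, 2])
# The extensional phase observes only `value`; the intensional phase also
# observes `cost`, which for a list of length n is at most n*(n-1)/2.
```

Here the "cost bound" n(n-1)/2 is itself expressible as a program of the same shape, which is the relaxation the intrinsic preorder makes precise.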
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
We present a novel mechanism for controlling the unfolding of definitions in dependent type theory. Traditionally, proof assistants let users specify whether each definition can or cannot be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core type theory with extension types, a connective first introduced in the context of homotopy type theory. We prove a normalization theorem for our core calculus and have implemented our system in the cooltt proof assistant, providing both theoretical and practical evidence for it.
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. We then extend our language with modal constructs for proof-relevant relational reasoning based on the logical relations as types principle, in which equivalences between imperative abstract datatypes can be established synthetically. Finally we develop a decomposition of the store model as a general construction that extends an arbitrary polymorphic call-by-push-value adjunction with higher-order store, improving on Levy's possible worlds model construction; what is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
Existential types are reconstructed in terms of small reflective subuniverses and dependent sums. The folklore decomposition detailed here gives rise to a particularly simple account of first-class modules as a mode of use of traditional second-class modules in connection with the modal operator induced by a reflective subuniverse, leading to a semantic justification for the rules of first-class modules in languages like OCaml and Moscow ML. Additionally, we expose several constructions that give rise to semantic models of ML-style programming languages with both first-class modules and realistic computational effects, culminating in a model that accommodates higher-order first-class recursive modules and higher-order store.
Logical relations are the main tool for proving positive properties of logics, type theories, and programming languages: canonicity, normalization, decidability, conservativity, computational adequacy, and more. Logical relations combine pure syntax with non-syntactic objects that are parameterized in syntax in a somewhat complex way; the sophistication of possible parameterizations makes logical relations a tool that is primarily accessible to specialists. In the spirit of Halmos' book Naïve Set Theory, I advocate for a new viewpoint on logical relations based on synthetic Tait computability, the internal language of categories of logical relations. In synthetic Tait computability, logical relations are manipulated as if they were sets, making the essence of many complex logical relations arguments accessible to non-specialists.
It is easy to teach a student how to give a naïve denotational semantics to a language like System T, and then use it to reason about the equational theory: a type might as well be a set, and a program might as well be a function, and equational adequacy at base type is established using a logical relation between the initial model and the category of sets. Adding any nontrivial feature to this language (e.g. general recursion, polymorphism, state, etc.) immediately increases the difficulty beyond the facility of a beginner: to add recursion, one must replace sets and functions with domains and continuous maps, and to accommodate polymorphism and state, one must pass to increasingly inaccessible variations on this basic picture.
The dream of the 1990s was to find a category that behaves like
In this talk, I will explain some important classical results in synthetic domain theory as well as more recent results that illustrate the potential impact of “naïve denotational semantics” on the life of a workaday computer scientist.
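The "naïve" picture for System T mentioned above can be made concrete with a small sketch in Python (a hypothetical illustration, not part of the talk): the natural-number recursor is interpreted directly as an honest function, and programs like addition and multiplication are just its instances.

```python
# Naïve semantics: a type is a set, a term is a function.  The System T
# recursor satisfies rec(z, s, 0) = z and rec(z, s, n+1) = s(n, rec(z, s, n)).
def rec(z, s, n):
    acc = z
    for i in range(n):
        acc = s(i, acc)
    return acc

# Addition and multiplication defined by recursion, as in System T.
def add(m, n):
    return rec(m, lambda _, acc: acc + 1, n)

def mul(m, n):
    return rec(0, lambda _, acc: add(m, acc), n)
```

Since every System T program denotes a total function in this way, equational reasoning at base type reduces to checking equality of ordinary functions on the naturals.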
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. What is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
Impredicative guarded dependent type theory (iGDTT) is a new version of type theory that combines guarded recursion (the "later" modality) with impredicative polymorphism (universal and existential types). It turns out that these two features are sufficient to define a very simple denotational semantics for System F with recursive types and higher-order store. We believe that the expressivity of iGDTT brings us one step closer to a general metalanguage for realistic denotational semantics, and provides a compelling strategy to elude the burden of operational semantics. As a further benefit, we are now able to justify the extension of full dependent type theory with a Haskell-style IO monad and IORef types.
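To see in miniature why higher-order store is semantically delicate, consider Landin's knot, sketched here in Python (a hypothetical `Ref` class standing in for a reference cell like IORef; this illustrates the phenomenon, not iGDTT itself): a cell may store a function that reads that very cell, tying a recursive knot through the heap.

```python
# A one-slot mutable cell, playing the role of a reference.
class Ref:
    def __init__(self, v):
        self.v = v

def landin_factorial():
    r = Ref(lambda n: 0)      # dummy initial contents
    def fact(n):
        # reads r: the heap now participates in a circular definition
        return 1 if n == 0 else n * r.v(n - 1)
    r.v = fact                # back-patch the cell with fact itself
    return fact

fact = landin_factorial()
# fact(5) == 120, although no function was defined by explicit recursion.
```

It is exactly this heap-mediated circularity that forces semantic worlds to be recursively defined, which guarded recursion makes well-founded.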
The great semanticist John Reynolds famously wrote in 1983 that “type structure is a syntactic discipline for enforcing levels of abstraction”. If the last twenty years of programming language semantics and verification have taught us anything, it is that we also need a syntactic discipline for breaking abstraction — in other words, a way to glue together programs and verifications that cut across abstraction barriers.
In programming language semantics and verification, the problem of combining multiple levels of abstraction arises when choosing a “level of detail” at which to view program execution: for instance, one could look at program execution as a detailed operational process of discrete steps evincing the cost or complexity of an algorithm, or one could think of it more abstractly as a black box that only sends inputs to outputs. The difficulty is that in practice, verifications tend to cut across this barrier between complexity and functional correctness: for instance, complexity bounds often depend on the functional correctness of subroutines, and the existence of such a bound implies termination (a correctness property).
For this reason, it is crucial to develop integrated logical foundations for soundly reasoning using multiple models of execution at the same time, even when they expose different facets of a program's meaning. For the past three years, my research program has been to uncover and exploit the basic “laws of motion” governing all such abstraction barriers, which has led to the solution of a few significant open problems in homotopy type theory and modal type theory, as well as some preliminary applications to security and cost analysis.
PIs:  Stephanie Balzer and Robert Harper 
Unfunded:  Jonathan Sterling 
Funding Agency:  United States Air Force Office of Scientific Research 
Program Officer:  Dr. Tristan Nguyen 
Award No.:  FA9550-21-1-0385 
Years:  2021–2022 
The purpose of this research is to investigate the development of programming language techniques to express and enforce constraints on the flow of information in a program.
Type systems are the most widely applicable tool for enforcing such restrictions within and among programs. The aim of the project is to investigate the development of suitable type systems for information flow security in two settings, and to understand their interrelationship. In each case the goal is to state and prove noninterference properties of programs, ensuring the independence of non-sensitive outputs from sensitive inputs to a system. Both settings draw on the method of logical relations to establish these properties.
The first setting is that of session types for communicating programs. In their most basic form, session types express the protocols for interaction among programs along data-carrying communication channels. A key characteristic of session types is that they are able to track changes of state in a computation using methods drawn from substructural logic. The intent of the investigation is to extend session types to track information flow in a composite program using refinement types that encode security levels of data.
The second setting is that of program modules which govern the construction of programs from separable, reusable components. Type systems for modularity are primarily concerned with the interfaces between components, which ensure that the effects of changes to a module implementation on other modules in a system can be tightly controlled. The project will investigate the extension of module type systems to express information flow dependencies among components using a generalization of the phase distinction between static and dynamic aspects of a program to account for a richer hierarchy of security levels.
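The noninterference property at stake can be illustrated with a toy sketch in Python (hypothetical labels and names, not the project's type system): an observer at level Low sees the same output however the High input varies, because the public output depends only on public data.

```python
# Two security levels; an observer at a level sees values at or below it.
LOW, HIGH = 0, 1

class Labelled:
    def __init__(self, value, level):
        self.value, self.level = value, level

def release(x, observer_level):
    """What an observer at observer_level can see of x (None if classified)."""
    return x.value if x.level <= observer_level else None

def program(public, secret):
    # The Low output is computed from the Low input only.
    return Labelled(public.value + 1, LOW)

# Noninterference check: vary the High input, observe at Low.
out1 = release(program(Labelled(3, LOW), Labelled(0, HIGH)), LOW)
out2 = release(program(Labelled(3, LOW), Labelled(99, HIGH)), LOW)
```

A logical-relations proof generalizes this spot check: it relates any two runs of a well-typed program that agree on Low inputs and shows their Low outputs agree.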
I have served as an external advisor for the following bachelor students:
I have served as a committee member for the following PhD students:
PhD student of Robert Harper.
The course aims to introduce the mathematics of discrete structures, showing it as an essential tool for computer science that can be clever and beautiful.
This award, established in 2018, is endowed with a generous contribution from Emeritus Professor Ed Clarke and his wife Martha. It is presented annually to an outstanding dissertation in the Computer Science Department.
Dear friends,
I am delighted to announce that Jonathan Sterling has been selected as the winner of the 2022 Edmund M. Clarke Dissertation Award in CSD for his thesis:
First Steps in Synthetic Tait Computability, October 2021
Jonathan’s PhD research was advised by Robert Harper.
This award, endowed by a generous contribution from Professor Ed Clarke and his wife Martha, is given annually to an outstanding dissertation in the Computer Science Department.
Congratulations Jonathan!
The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics.
In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase-separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.
Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT🔒 supporting the connectives of standard Martin-Löf Type Theory as well as an S4-style necessity operator. MLTT🔒 supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT🔒, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT🔒, demonstrating the ease of applying our techniques.
We contribute XTT, a cubical reconstruction of Observational Type Theory [Altenkirch et al., 2007] which extends Martin-Löf's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity proofs principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel extension of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either true or false.
Named upon his retirement in honor of Professor W. Kendrick Pritchett, a distinguished scholar and devoted teacher of Greek, this monetary prize is awarded to the enrolled student who performs best in the first year of Greek (sometimes to co-winners). Awarded annually since 1977 and administered by the Department. Each May, the instructors of Greek 2 and (when offered) 10 meet after final exams to discuss the best students in each class; in consultation with the Department Chair or a delegate of the Chair, one or two winners are selected.
I maintain several public bibliographies.
This page collects papers and dissertations about Cubical Type Theory; if you have written a paper or dissertation on this topic, please write to me to have it added to this list.
We present XTT, a version of Cartesian cubical type theory specialized for Bishop sets à la Coquand, in which every type enjoys a definitional version of the uniqueness of identity proofs. Using cubical notions, XTT reconstructs many of the ideas underlying Observational Type Theory, a version of intensional type theory that supports function extensionality. We prove the canonicity property of XTT (that every closed boolean is definitionally equal to a constant) by Artin gluing.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of beta/eta-normal forms.
In their usual form, representation independence metatheorems provide an external guarantee that two implementations of an abstract interface are interchangeable when they are related by an operation-preserving correspondence. If our programming language is dependently typed, however, we would like to appeal to such invariance results within the language itself, in order to obtain correctness theorems for complex implementations by transferring them from simpler, related implementations. Recent work in proof assistants has shown that Voevodsky’s univalence principle allows transferring theorems between isomorphic types, but many instances of representation independence in programming involve non-isomorphic representations.
In this paper, we develop techniques for establishing internal relational representation independence results in dependent type theory, by using higher inductive types to simultaneously quotient two related implementation types by a heterogeneous correspondence between them. The correspondence becomes an isomorphism between the quotiented types, thereby allowing us to obtain an equality of implementations by univalence. We illustrate our techniques by considering applications to matrices, queues, and finite multisets. Our results are all formalized in Cubical Agda, a recent extension of Agda which supports univalence and higher inductive types in a computationally wellbehaved way.
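The queue example can be sketched in plain Python rather than Cubical Agda (an illustration of the phenomenon, not the paper's construction): two non-isomorphic queue representations related by an operation-preserving correspondence give identical observable answers on every sequence of operations.

```python
# Representation 1: a queue as a single list.
class ListQueue:
    def __init__(self): self.xs = []
    def enqueue(self, x): self.xs.append(x)
    def dequeue(self): return self.xs.pop(0)

# Representation 2: a queue as two stacks (amortized O(1) dequeue).
class TwoStackQueue:
    def __init__(self): self.front, self.back = [], []
    def enqueue(self, x): self.back.append(x)
    def dequeue(self):
        if not self.front:
            while self.back:
                self.front.append(self.back.pop())
        return self.front.pop()

def run(queue_cls, ops):
    """Observable behavior: the results of a sequence of operations."""
    q, out = queue_cls(), []
    for op, *args in ops:
        out.append(q.dequeue() if op == "deq" else q.enqueue(*args))
    return out

ops = [("enq", 1), ("enq", 2), ("deq",), ("enq", 3), ("deq",), ("deq",)]
# Both representations produce the same observations on this trace.
```

The two classes are not isomorphic as data, yet are observationally interchangeable; the paper's technique internalizes this by quotienting both representations by the correspondence and then applying univalence.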
This paper improves the treatment of equality in guarded dependent type theory (𝖦𝖣𝖳𝖳), by combining it with cubical type theory (𝖢𝖳𝖳). 𝖦𝖣𝖳𝖳 is an extensional type theory with guarded recursive types, which are useful for building models of program logics, and for programming and reasoning with coinductive types. We wish to implement 𝖦𝖣𝖳𝖳 with decidable type checking, while still supporting nontrivial equality proofs that reason about the extensions of guarded recursive constructions. 𝖢𝖳𝖳 is a variation of Martin-Löf type theory in which the identity type is replaced by abstract paths between terms. 𝖢𝖳𝖳 provides a computational interpretation of functional extensionality, enjoys canonicity for the natural numbers type, and is conjectured to support decidable type checking. Our new type theory, guarded cubical type theory (𝖦𝖢𝖳𝖳), provides a computational interpretation of extensionality for guarded recursive types. This further expands the foundations of 𝖢𝖳𝖳 as a basis for formalisation in mathematics and computer science. We present examples to demonstrate the expressivity of our type theory, all of which have been checked using a prototype typechecker implementation. We show that 𝖢𝖳𝖳 can be given semantics in presheaves on
For six years, I have served as the founder and technical leader of the RedPRL Development Team which has produced three interactive proof assistants for variants of cubical type theory: RedPRL, redtt, and cooltt. I will share several lessons that we have learned about the design and implementation of homotopical proof assistants along this journey. This talk discusses joint work with Carlo Angiuli, Evan Cavallo, Favonia, and Reed Mullanix.
We contribute XTT, a cubical reconstruction of Observational Type Theory which extends intensional type theory with a dependent equality type that enjoys function extensionality and judgmental unicity of identity proofs. XTT employs a variant of the Cartesian cubical Kan operations satisfying regularity (i.e., transport in constant type families is judgmentally constant), allowing its equality type to model Martin-Löf’s identity type judgmentally. We prove canonicity for the initial model of XTT (i.e., any closed term of boolean type is equal to either true or false) using a novel cubical extension (independently proposed by Awodey) of the categorical gluing technique inspired by Coquand and Shulman, in which we glue the fundamental fibration of a category of augmented Cartesian cubical sets along a cubical nerve. We conjecture that our methods will extend to open terms, allowing us to establish normalization and decidability of the typing relation.
RedPRL is an experimental proof assistant based on Cartesian cubical computational type theory, a new type theory for higher-dimensional constructions inspired by homotopy type theory. In the style of Nuprl, RedPRL users employ tactics to establish behavioral properties of cubical functional programs embodying the constructive content of proofs. Notably, RedPRL implements a two-level type theory, allowing an extensional, proof-irrelevant notion of exact equality to coexist with a higher-dimensional proof-relevant notion of paths.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application (p @ i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application p(i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
(j.w.w. Carlo Angiuli.)
redtt is an interactive proof assistant for Cartesian cubical type theory, a version of Martin-Löf type theory featuring computational versions of function extensionality, higher inductive types, and univalence. Building on ideas from Epigram, Agda, and Idris, redtt introduces a new cubical take on interactive proof development with holes. We will first introduce the basics of cubical type theory and then dive into an interactive demonstration of redtt’s features and its mathematical library.
After this we will catch a first public glimpse of the future of redtt, a new prototype that our team is building currently codenamed "cooltt": cooltt introduces syntax to split on disjunctions of cofibrations in arbitrary positions, implementing the full definitional eta law for disjunction. While cooltt is still in the early stages, it already has full support for univalence and cubical interactive proof development.
My research makes use of techniques from category theory, mathematical logic and type theory to advance the foundations of programming language semantics and theorem proving systems. I have a longstanding interest in the semantics and logic of names, locality and binding. My aim is to develop mathematical models and methods which aid language design and the development of formal logics for specifying and reasoning about programs. I am particularly interested in higher-order typed programming languages and in dependently typed logics.
Dan Licata works on type theory (especially homotopy type theory), logic, category theory, functional programming, and programming languages.
Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University by Lars Birkedal, and before this a PhD student of Robert Harper.
PhD student of Robert Harper.
I am a postdoc at the Department of Mathematics at Stockholm University. My main research interests are in Homotopy Type Theory, in particular its semantics using (higher) category theory. I am also interested in type theory in general and pure category theory.
This page collects papers and dissertations about Guarded Domain Theory; if you have written a paper or dissertation on this topic, please write to me to have it added to this list.
Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System
Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple “clocks” parameterizing different time-streams, leading to more complex and difficult-to-understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bagtopos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.
Nakano’s later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a hierarchy of universes. We present an operational account of guarded dependent type theory with clocks called Guarded Computational Type Theory, featuring a novel clock intersection connective that enjoys the clock irrelevance principle, as well as a predicative hierarchy of universes which does not require any indexing in clock contexts. Guarded Computational Type Theory is simultaneously a programming language with a rich specification logic, as well as a computational metalanguage that can be used to develop semantics of other languages and logics.
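The productivity discipline that the later modality makes explicit can be previewed in ordinary Haskell, where laziness stands in (without any static guarantee) for the guarded type discipline. This is an illustrative sketch, not Guarded Computational Type Theory; all names below are made up for the example.

```haskell
-- A stream type: in guarded type theory the tail would live under the
-- later modality, forcing definitions to be productive. Here laziness
-- merely permits the same definitions without enforcing productivity.
data Stream a = Cons a (Stream a)

-- 'nats' is productive: each element is available after finitely many steps.
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- A causal stream function: the n-th output depends only on inputs 0..n.
smap :: (a -> b) -> Stream a -> Stream b
smap f (Cons x xs) = Cons (f x) (smap f xs)

-- Observe finitely many elements of an infinite stream.
takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n (Cons x xs) = x : takeS (n - 1) xs
```

The point of the guarded discipline is that a definition like `bad = Cons 0 bad'` with `bad' = smap id bad` type-checks in Haskell whether or not it is productive; the later modality rules out the unproductive cases by typing.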
We present the topos
We develop a denotational semantics for general reference types in an impredicative version of guarded homotopy type theory, an adaptation of synthetic guarded domain theory to Voevodsky’s univalent foundations. We observe for the first time the profound impact of univalence on the denotational semantics of mutable state. Univalence automatically ensures that all computations are invariant under symmetries of the heap—a bountiful source of free theorems. In particular, even the most simplistic univalent model enjoys many new program equivalences that do not hold when the same constructions are carried out in the universes of traditional set-level (extensional) type theory.
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. We then extend our language with modal constructs for proof-relevant relational reasoning based on the logical relations as types principle, in which equivalences between imperative abstract datatypes can be established synthetically. Finally we develop a decomposition of the store model as a general construction that extends an arbitrary polymorphic call-by-push-value adjunction with higher-order store, improving on Levy's possible worlds model construction; what is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. What is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
Impredicative guarded dependent type theory (iGDTT) is a new version of type theory that combines guarded recursion (the "later" modality) with impredicative polymorphism (universal and existential types). It turns out that these two features are sufficient to define a very simple denotational semantics for System F with recursive types and higher-order store. We believe that the expressivity of iGDTT brings us one step closer to a general metalanguage for realistic denotational semantics, and provides a compelling strategy to elude the burden of operational semantics. As a further benefit, we are now able to justify the extension of full dependent type theory with a Haskell-style IO monad and IORef types.
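To fix intuitions about the programs in question, here is a small Haskell example of the IORef-style higher-order store whose semantics the abstract describes: a reference cell hidden inside an IO action, so that the store itself is captured by a first-class computation. The code is a routine illustration, not iGDTT syntax.

```haskell
import Data.IORef

-- A counter whose mutable state is encapsulated behind an IO action.
-- Each call to 'newCounter' allocates a fresh IORef; the returned action
-- increments the cell and reads it back. Storing the action itself is an
-- instance of higher-order store.
newCounter :: IO (IO Int)
newCounter = do
  r <- newIORef 0
  pure $ do
    modifyIORef' r (+ 1)
    readIORef r
```

Two counters created by separate calls to `newCounter` share no state, which is the kind of heap-locality fact a denotational model with semantic worlds must validate.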
I am currently a software engineer. Previously I was a postdoctoral researcher at the Department of Computer Science of Aarhus University where I worked on program logics for concurrency, and semantics of programming languages, logics, and type theories.
I study programming languages, type theories, and logics. I am particularly interested in applying semantic methods to prove syntactic properties of modal type theories and programming languages. I am also involved in the development of program logics for concurrent programming languages through the Iris project.
A PhD student of Lars Birkedal.
Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University by Lars Birkedal, and before this a PhD student of Robert Harper.
Villum Investigator; Head of Logic and Semantics Group.
I study programming languages, type theories, and logics. I am particularly interested in applying semantic methods to prove syntactic properties of modal type theories and programming languages. I am also involved in the development of program logics for concurrent programming languages through the Iris project.
A PhD student of Lars Birkedal.
I am currently a software engineer. Previously I was a postdoctoral researcher at the Department of Computer Science of Aarhus University where I worked on program logics for concurrency, and semantics of programming languages, logics, and type theories.
My interests include formal proofs of programs and functional programming. I enjoy figuring out why complex code actually works and formalizing that it indeed does; I also like the process of building better abstractions to manage software complexity.
Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University by Lars Birkedal, and before this a PhD student of Robert Harper.
I am interested in semantics of programming languages, functional programming, recursion schemes, category theory, mathematical logic and type theory.
This page collects papers and dissertations about Synthetic Tait Computability, also known as the logical relations as types / LRAT principle; if you have written a paper or dissertation on this topic, please write to me to have it added to this list.
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
As type theories have become increasingly sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have prompted a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly nontrivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants.
We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist’s method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid’s algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
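The flavor of such a cost analysis can be sketched in plain Haskell (a toy illustration, not calf's type theory): instrument a sort so that it returns its comparison count alongside its behavior, so that behavioral correctness and cost can be stated about the same program. The function names are invented for the example.

```haskell
-- Insertion sort instrumented with a comparison counter: the result pairs
-- the sorted list (the "extension") with the number of comparisons
-- performed (the "intension").
isort :: Ord a => [a] -> ([a], Int)
isort = foldr step ([], 0)
  where
    step x (ys, c) = let (zs, c') = insert x ys in (zs, c + c')
    -- insert returns the comparisons it alone performed
    insert x [] = ([x], 0)
    insert x (y : ys)
      | x <= y    = (x : y : ys, 1)
      | otherwise = let (zs, c) = insert x ys in (y : zs, c + 1)
```

On an already-sorted list of length n this performs n − 1 comparisons, and on a reversed list it performs n(n − 1)/2, which is exactly the kind of closed bound that calf lets one prove once cost is made internally observable.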
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state-of-the-art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper’s recent reinterpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between “higher” and “lower” security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.’s dependency core calculus to which we have added a construct for declassifying termination channels.
In the published version of this paper, there were a few mistakes that have been corrected in the local copy hosted here.
A serious (and as-yet unfixed) problem was discovered in July of 2023 by Yue Niu, which undermines the proof of adequacy given; in particular, the proof that the logical relation on free algebras is admissible is not correct. I believe there is a different proof of adequacy for the calculus described, but it will have a different structure from what currently appears in the paper. We thank Yue Niu for his attention to detail and careful reading of this paper.
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh “synthetic” take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis.
Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be nontrivial evidence witnessing the relatedness of two programs—simplifying the metatheory of strong sums over the collection of types, for although there can be no “relation classifying relations,” one easily accommodates a “family classifying small families.”
Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase-separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos-theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
After going to press, we have fixed the following mistakes:
The local copy hosted here has the corrections implemented.
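The representation-independence phenomenon at stake can be previewed with a deliberately simple Haskell sketch, with existential quantification standing in for the module system and all names invented for the example: two queue implementations that no interface-respecting client can distinguish.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An abstract queue: an existentially quantified representation together
-- with push and pop operations. Clients see only this interface.
data QueueImpl = forall q. QueueImpl q (Int -> q -> q) (q -> Maybe (Int, q))

-- Naive implementation: a plain list, O(n) push.
listQueue :: QueueImpl
listQueue = QueueImpl [] (\x xs -> xs ++ [x]) pop
  where
    pop []       = Nothing
    pop (x : xs) = Just (x, xs)

-- Batched implementation: a pair of front/back lists, amortized O(1).
batchedQueue :: QueueImpl
batchedQueue = QueueImpl ([], []) push pop
  where
    push x (f, b)    = (f, x : b)
    pop ([], [])     = Nothing
    pop ([], b)      = pop (reverse b, [])
    pop (x : f, b)   = Just (x, (f, b))

-- A client: enqueue a list of elements, then dequeue everything.
-- Because it is parametric in the hidden representation q, it cannot
-- observe any difference between the two implementations.
drain :: QueueImpl -> [Int] -> [Int]
drain (QueueImpl e push pop) xs = go (foldl (flip push) e xs)
  where
    go q = case pop q of
      Nothing      -> []
      Just (y, q') -> y : go q'
```

Establishing that the two implementations agree on all such clients is precisely a simulation between two implementations of an abstract type, which the parametricity-structure approach lets one exhibit as a third implementation carrying the representation invariant.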
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of beta/eta-normal forms.
We present decalf, a directed, effectful cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of calf’s approach to cost accounting: rather than rely on a “separable” notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost in a manner that readily extends to higher-order, effectful programs.
The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higherorder functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.
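The slogan that a cost bound is itself a program, compared via an intrinsic preorder, can be rendered as a toy Haskell sketch (this is not decalf; the names `Prog`, `charge`, and `refines` are invented): a program carries a step count, and the preorder relates a program to any bound with equal behavior and no smaller cost.

```haskell
-- A program paired with its cost; 'ret' is cost-free and 'charge' bills
-- one unit of cost.
newtype Prog a = Prog (a, Int)

ret :: a -> Prog a
ret a = Prog (a, 0)

charge :: Prog a -> Prog a
charge (Prog (a, n)) = Prog (a, n + 1)

-- The intrinsic preorder: same behavior, cost at most that of the bound.
-- A "cost bound" for p is just another Prog that p refines.
refines :: Eq a => Prog a -> Prog a -> Bool
refines (Prog (a, m)) (Prog (b, n)) = a == b && m <= n
```

Here a program of cost 1 refines a bound of cost 2 but not vice versa, and no program refines a bound with different behavior, mirroring the restriction of the preorder to extensional equality in the extensional phase.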
We present a novel mechanism for controlling the unfolding of definitions in dependent type theory. Traditionally, proof assistants let users specify whether each definition can or cannot be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core type theory with extension types, a connective first introduced in the context of homotopy type theory. We prove a normalization theorem for our core calculus and have implemented our system in the cooltt proof assistant, providing both theoretical and practical evidence for it.
Logical relations are the main tool for proving positive properties of logics, type theories, and programming languages: canonicity, normalization, decidability, conservativity, computational adequacy, and more. Logical relations combine pure syntax with non-syntactic objects that are parameterized in syntax in a somewhat complex way; the sophistication of possible parameterizations makes logical relations a tool that is primarily accessible to specialists. In the spirit of Halmos' book Naïve Set Theory, I advocate for a new viewpoint on logical relations based on synthetic Tait computability, the internal language of categories of logical relations. In synthetic Tait computability, logical relations are manipulated as if they were sets, making the essence of many complex logical relations arguments accessible to non-specialists.
The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics.
In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase-separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.
Joint work with Harrison Grodin (Carnegie Mellon), Yue Niu (Carnegie Mellon), and Jon Sterling (Cambridge).
The computational view of intuitionistic dependent type theory is as an intrinsic logic of (functional) programs in which types are viewed as specifications of their behavior. Equational reasoning is particularly relevant in the functional case, where correctness can be formulated as equality between two implementations of the same behavior. Besides behavior, it is also important to specify and verify the cost of programs, measured in terms of their resource usage, with respect to both sequential and parallel evaluation. Although program cost can—and has been—verified in type theory using an extrinsic formulation of programs as data objects, what we seek here is, instead, an intrinsic account within type theory itself.
In this talk we discuss Calf, the Cost-Aware Logical Framework, which is an extension of dependent call-by-push-value type theory that provides an intrinsic account of both parallel and sequential resource usage for a variety of problem-specific measures of cost. Thus, for example, it is possible to prove that insertion sort and merge sort are equal as regards behavior, but differ in terms of the number of comparisons required to achieve the same results. But how can equal functions have different cost? To provide an intrinsic account of both intensional and extensional properties of programs, we make use of Sterling's notion of Synthetic Tait Computability, a generalization of Tait's method originally developed for the study of higher type theory. In STC the concept of a “phase” plays a central role: originally as the distinction between the syntactic and semantic aspects of a computability structure, but more recently applied to the formulation of type theories for program modules and for information flow properties of programs. In Calf we distinguish two phases, the intensional and extensional, which differ as regards the significance of cost accounting—extensionally it is neglected, intensionally it is of paramount importance. Thus, in the extensional phase insertion sort and merge sort are equal, but in the intensional phase they are distinct, and indeed one is proved to have optimal behavior as regards comparisons, and the other not. Importantly, both phases are needed in a cost verification—the proof of the complexity of an algorithm usually relies on aspects of its correctness.
We will provide an overview of Calf itself, and of its application in the verification of the cost and behavior of a variety of programs. So far we have been able to verify cost bounds on Euclid's Algorithm, amortized bounds on batched queues, parallel cost bounds on a joinable form of red-black trees, and the equivalence and cost of the aforementioned sorting methods. In a companion paper at this meeting Grodin and I develop an account of amortization that relates the standard inductive view of instruction sequences with the coinductive view of data structures characterized by the same operations. In ongoing work we are extending the base of verified deterministic algorithms to those taught in the undergraduate parallel algorithms course at Carnegie Mellon, and are extending Calf itself to account for probabilistic methods, which are also used in that course.
Type abstraction, the phase distinction, and computational effects all play an important role in the design and implementation of ML-style module systems. We propose a simple type-theoretic metalanguage φML for multi-phase modularity in which these concepts are treated individually, supporting the definition of high-level modular constructs such as generative and applicative functors, as well as all extant forms of structure sharing.
Security-typed programming languages aim to control the flow of high-security information to low-security clients. Starting with Abadi et al.'s dependency core calculus, the denotational semantics of such languages has been dominated by an extrinsic approach in which an existing insecure model of computation (e.g. ordinary domains) is restricted by a logical relation of "indistinguishability" to prevent low-security outputs from depending on high-security inputs (noninterference). Thus in the extrinsic approach, security properties are bolted onto an insecure model by brute force, as it were. A more refined information flow policy called termination-insensitive noninterference allows high-security bits to be leaked through termination channels but not through return values; unfortunately, the adaptation of the extrinsic/relational semantics to this more relaxed policy is incompatible with the transitivity of the logical relation, contradicting the intuition of "indistinguishability".
In contrast, an intrinsic semantics of security typing would involve a new computational model that evinces secure information flow and noninterference properties directly without any post hoc restriction by a logical relation. We contribute the first such intrinsic semantics of security typing in this sense by considering sheaves of dcpos on a space of abstract behaviors on which security classes arise as open/closed partitions; the security monads then arise as the closed modalities of topos theory that restrict a sheaf to its component over a closed subspace.
An advantage of our intrinsic semantics is that termination-insensitive noninterference arises automatically from our computational model, namely the fact that the Sierpiński domain is not a constant sheaf; a further advantage is that our semantics is an instance of standard domain-theoretic denotational semantics, albeit over a richer category of domains.
(j.w.w. R. Harper)
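The typing discipline whose semantics the talk describes can be approximated in Haskell with a phantom security label (a rough sketch, not the paper's calculus or its sheaf model; `Sec`, `High`, and `Low` are invented names): information may flow up the lattice or be transformed in place, but there is no typed path from `High` to `Low` except an explicit declassifier.

```haskell
-- Security labels as phantom types.
data High
data Low

-- A labeled value; in a real library the constructor would be hidden
-- behind a module boundary so clients cannot unwrap it freely.
newtype Sec l a = Sec a

label :: a -> Sec l a
label = Sec

-- Information may flow upward in the lattice: Low -> High.
classify :: Sec Low a -> Sec High a
classify (Sec a) = Sec a

-- Computation within a level is unrestricted.
instance Functor (Sec l) where
  fmap f (Sec a) = Sec (f a)

-- Declassification is an explicit, auditable escape hatch; omitting it
-- leaves no function of type Sec High a -> Sec Low a, which is the
-- typed shadow of noninterference.
declassify :: Sec High a -> a
declassify (Sec a) = a
```

The intrinsic sheaf semantics makes the analogous guarantee hold by construction in the model, rather than by a post hoc logical relation over an insecure one.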
(Joint work with Jon Sterling and Yue Niu)
The informal phase distinction between compile-time and run-time in programming languages is formally manifested by the distinction between kinds, which classify types, and types, which classify code. The distinction underpins standard programming methodology whereby code is first type-checked for consistency before being compiled for execution. When used effectively, types help eliminate bugs before they occur.
Program modules, in even the most rudimentary form, threaten the distinction, comprising as they do both types and programs in a single unit. Matters worsen when considering “open” modules, with free module variables standing for their imports. To maintain the separation in their presence it is necessary to limit the dependency of types, the static parts of a module, to their imported types. Such restrictions are fundamental for using dependent types to express modular structure, as originally suggested by MacQueen.
To address this question Moggi gave an “analytic” formulation of program modules in which modules are explicitly separated into their static and dynamic components using tools from category theory. Recent work by Dreyer, Rossberg, and Russo develops this approach fully in their account of ML-like module systems. In this talk we consider instead a “synthetic” formulation using a proposition to segregate the static from the dynamic, in particular to define static equivalence to manage type sharing and type dependency.
The purpose of this talk is to pose the question, “What are Euclid’s postulates for syntactic metatheory?”
In the fourth century B.C.E., the Greek mathematician Euclid set down his famous postulates for plane geometry, explaining geometric shapes in terms of rules that govern their construction and incidence. The dialectical relationship between theories (axioms) and their models (coordinate systems) has been the driving force in the last two millennia of geometrical investigation.
In logic and computer science, workers in the “syntactic metatheory” investigate questions that lie on the fringe between a theory and its models — definability, normalization, decidability, conservativity, computational adequacy, parametricity, type safety, etc. Dominant methods attack these questions by means of explicit computations (e.g. Kripke logical relations) which practitioners have found to be both reliable and somewhat opaque. In this talk, I introduce Synthetic Tait computability — a new system of axioms that transforms these explicit computations into synthetic manipulations; classical Kripke logical relations can be seen as models or “coordinate systems” for the new geometry of syntax that is beginning to unfold.
Synthetic Tait computability has already been employed to positively resolve the normalization and decidability conjectures for cubical type theory, as well as a number of other recent results.
The fundamental contradiction of programming and program verification can be located in the tension between abstraction and composition. We make programs more abstract in order to prevent bad interactions between components; on the other side of the coin, we impede the composition of components when we abstract them. Modern programming practice evinces many distinct levels of abstraction that must be considered at the same time — for instance, compilers break module boundaries during linking, complexity analysis breaks the abstraction of extensional equivalence, and logical relations proofs break the abstraction of closure under substitution. What is needed to meet this challenge is linguistic tools that smoothly interpolate between these different levels of abstraction. Building on my doctoral dissertation and joint work with Bob Harper, I introduce a new plan for modal programming languages and logics that treat the transition between different abstraction levels as a first-class notion.
Programmers use abstraction to hide representation details from themselves: either to avoid mistakes (e.g. confusing a list index with a length) or to exploit representation invariants (e.g. two implementations of the QUEUE interface are indistinguishable). These abstraction boundaries can unfortunately impede the linking of smaller program units into efficient composite programs, because compilers must exploit representation details in order to produce efficient code. Sometimes seen as the "Waterloo of separate compilation", the need to break abstraction is answered by whole-program analysis techniques that break all abstractions (as in the MLton compiler for Standard ML). Separate compilation, however, has a number of advantages, including speed, parallelization, and elegance.
We present an alternative type-theoretic account of abstraction-breaking during compilation based on the famous phase distinction of ML languages; rather than distinguishing between compile-time and run-time, we focus on separating "dev-time" from compile-time. Our framework allows the programmer to selectively reveal representation details to the compiler without giving up the representation independence properties guaranteed by "dev-time" type correctness. We also describe an application to the problem of printf-debugging, which is ordinarily obstructed by abstraction.
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application (p @ i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically-inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
This is joint work with Robert Harper.
How do you prove that two implementations of an abstract type behave the same in all configurations? Reynolds famously employed logical relations to establish such results; roughly, a logical relation is a structure-respecting relation between two interpretations of a theory that evinces, in the base case, a desirable invariant.
We present a synthetic way to understand and interact with logical relations, related to classical logical relations in the same way that Euclidean geometry relates to point-sets. Previously a logical relation was defined in terms of the (complicated) details of how it is constructed as a certain kind of relation over syntax. We instead take the simpler view that everything in sight is a logical relation, and then use modalities to isolate those logical relations that are degenerate in either the syntactic or the semantic direction.
Our “logical relations as types” principle has led to a new account of modules and representation independence (S., Harper), as well as the first proofs of normalization for cubical type theory (S., Angiuli) and general multimodal dependent type theory (Gratzer).
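The classical picture that the synthetic view abstracts can be sketched in a few lines of OCaml (an illustrative toy of my own, not the paper's formalism): two implementations of integer queues, and the logical relation between their representations, which every operation respects.

```ocaml
(* Two implementations of integer FIFO queues. *)
module L = struct
  type q = int list                    (* whole queue, front at the head *)
  let empty = []
  let enq q x = q @ [x]
  let deq = function [] -> None | x :: q' -> Some (x, q')
end

module B = struct
  type q = int list * int list         (* batched: front list, reversed back list *)
  let empty = ([], [])
  let enq (f, b) x = (f, x :: b)
  let deq = function
    | [], [] -> None
    | x :: f, b -> Some (x, (f, b))
    | [], b ->
      (match List.rev b with
       | x :: f -> Some (x, (f, []))
       | [] -> None)
end

(* The logical relation at the queue type: a batched queue (f, b)
   represents the plain list f @ List.rev b. *)
let rel (l : L.q) ((f, b) : B.q) = l = f @ List.rev b

(* The relation is preserved by enqueue and dequeue, and related
   queues dequeue equal elements. *)
let preserved_by_enq l q x = rel (L.enq l x) (B.enq q x)
let preserved_by_deq l q =
  match L.deq l, B.deq q with
  | None, None -> true
  | Some (x, l'), Some (y, q') -> x = y && rel l' q'
  | _ -> false
```

In the synthetic setting one never writes `rel` by hand; the two modules and the relation between them are bundled into a single object, and the modalities recover each projection.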
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application p(i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically-inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
(j.w.w. Carlo Angiuli.)
I study programming languages, type theories, and logics. I am particularly interested in applying semantic methods to prove syntactic properties of modal type theories and programming languages. I am also involved in the development of program logics for concurrent programming languages through the Iris project.
Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University by Lars Birkedal, and before this a PhD student of Robert Harper.
Dan Licata works on type theory (especially homotopy type theory), logic, category theory, functional programming, and programming languages.
Associate Professor in Logical Foundations and Formal Methods at University of Cambridge. Formerly a Marie Skłodowska-Curie Postdoctoral Fellow hosted at Aarhus University by Lars Birkedal, and before this a PhD student of Robert Harper.
PhD student of Robert Harper.
I am a postdoc at the Department of Mathematics at Stockholm University. My main research interests are in Homotopy Type Theory, in particular its semantics using (higher) category theory. I am also interested in type theory in general and pure category theory.
PhD student of Robert Harper.
Jacobs has proposed definitions for (weak, strong, split) generic objects for a fibered category; building on his definition of (split) generic objects, Jacobs develops a menagerie of important fibrational structures with applications to categorical logic and computer science, including higher order fibrations, polymorphic fibrations, 𝜆2-fibrations, triposes, and others. We observe that a split generic object need not in particular be a generic object under the given definitions, and that the definitions of polymorphic fibrations, triposes, etc. are strict enough to rule out some fundamental examples: for instance, the fibered preorder induced by a partial combinatory algebra in realizability is not a tripos in this sense. We propose a new alignment of terminology that emphasizes the forms of generic object appearing most commonly in nature, i.e. in the study of internal categories, triposes, and the denotational semantics of polymorphism. In addition, we propose a new class of acyclic generic objects inspired by recent developments in higher category theory and the semantics of homotopy type theory, generalizing the realignment property of universes to the setting of an arbitrary fibration.
Separation logic is used to reason locally about stateful programs. State-of-the-art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
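One standard instance of this phenomenon, chosen here purely for illustration (it is not necessarily the example the abstract intends): the free monoid on a set, i.e. lists under append, is left-cancellative, but cancellativity is not preserved by monoid homomorphisms; the monoid (ℕ, max, 0), a quotient of a free monoid, fails it. A small OCaml check:

```ocaml
(* The free monoid on int is int list under (@), with unit [].
   Free monoids satisfy left cancellation: a @ b = a @ c iff b = c. *)
let cancels_in_free a b c = (a @ b = a @ c) = (b = c)

(* Every monoid is a homomorphic image of a free monoid, yet
   cancellation need not survive the quotient: in (int, max, 0),
   max 5 1 = max 5 2 although 1 <> 2. *)
let cancels_in_max a b c = (max a b = max a c) = (b = c)
```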
As type theories have become increasingly sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have prompted a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly nontrivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
Several different topoi have played an important role in the development and applications of synthetic guarded domain theory (SGDT), a new kind of synthetic domain theory that abstracts the concept of guarded recursion frequently employed in the semantics of programming languages. In order to unify the accounts of guarded recursion and coinduction, several authors have enriched SGDT with multiple “clocks” parameterizing different timestreams, leading to more complex and difficult-to-understand topos models. Until now these topoi have been understood very concretely qua categories of presheaves, and the logico-geometrical question of what theories these topoi classify has remained open. We show that several important topos models of SGDT classify very simple geometric theories, and that the passage to various forms of multi-clock guarded recursion can be rephrased more compositionally in terms of the lower bagtopos construction of Vickers and variations thereon due to Johnstone. We contribute to the consolidation of SGDT by isolating the universal property of multi-clock guarded recursion as a modular construction that applies to any topos model of single-clock guarded recursion.
We present XTT, a version of Cartesian cubical type theory specialized for Bishop sets à la Coquand, in which every type enjoys a definitional version of the uniqueness of identity proofs. Using cubical notions, XTT reconstructs many of the ideas underlying Observational Type Theory, a version of intensional type theory that supports function extensionality. We prove the canonicity property of XTT (that every closed boolean is definitionally equal to a constant) by Artin gluing.
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants.
We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist’s method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid’s algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
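As a back-of-the-envelope analogue of the kind of cost accounting calf mechanizes (a plain OCaml sketch with a mutable comparison counter, not the Agda development), one can instrument merge sort and check its count against the familiar n·⌈log₂ n⌉ bound:

```ocaml
(* Count comparisons performed by merge sort, in the style of a
   step-counting effect. *)
let count = ref 0

let rec merge xs ys =
  match xs, ys with
  | [], l | l, [] -> l
  | x :: xs', y :: ys' ->
    incr count;                        (* one comparison *)
    if x <= y then x :: merge xs' ys else y :: merge xs ys'

let rec msort = function
  | ([] | [ _ ]) as l -> l
  | l ->
    let rec split = function
      | [] -> [], []
      | [x] -> [x], []
      | x :: y :: rest -> let a, b = split rest in x :: a, y :: b
    in
    let a, b = split l in
    merge (msort a) (msort b)

(* ceil(log2 n) for n >= 1 *)
let rec clog2 n = if n <= 1 then 0 else 1 + clog2 ((n + 1) / 2)
```

Running `msort` on a list of length n and comparing `!count` with `n * clog2 n` is the extensional shadow of the intensional reasoning that calf carries out inside the type theory.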
We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state-of-the-art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper’s recent reinterpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between “higher” and “lower” security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.’s dependency core calculus to which we have added a construct for declassifying termination channels.
In the published version of this paper, there were a few mistakes that have been corrected in the local copy hosted here.
A serious (and asyet unfixed) problem was discovered in July of 2023 by Yue Niu, which undermines the proof of adequacy given; in particular, the proof that the logical relation on free algebras is admissible is not correct. I believe there is a different proof of adequacy for the calculus described, but it will have a different structure from what currently appears in the paper. We thank Yue Niu for his attention to detail and careful reading of this paper.
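The noninterference discipline at stake can be approximated in ordinary OCaml with phantom security levels (an illustrative sketch of my own; the sheaf semantics above is far more general, and in particular handles termination channels, which this toy does not):

```ocaml
(* Security levels as phantom types: the interface provides a way to
   move data from low to high, but none from high to low, so well-typed
   clients cannot leak secrets to a low observer. *)
type low
type high

module Sec : sig
  type ('lvl, 'a) t
  val return : 'a -> ('lvl, 'a) t
  val bind : ('lvl, 'a) t -> ('a -> ('lvl, 'b) t) -> ('lvl, 'b) t
  val classify : (low, 'a) t -> (high, 'a) t   (* low may flow upward *)
  val observe : (low, 'a) t -> 'a              (* only low output is observable *)
end = struct
  type ('lvl, 'a) t = 'a
  let return a = a
  let bind a f = f a
  let classify a = a
  let observe a = a
end

(* A low computation may be observed... *)
let public : (low, int) Sec.t = Sec.return 17

(* ...and classified upward, after which no observation is possible:
   the signature exports no function (high, 'a) Sec.t -> (low, 'a) Sec.t. *)
let secret : (high, int) Sec.t = Sec.classify public
```

The declassification construct of the paper corresponds to deliberately adding such a missing downward map for a restricted class of observations.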
The theory of program modules is of interest to language designers not only for its practical importance to programming, but also because it lies at the nexus of three fundamental concerns in language design: the phase distinction, computational effects, and type abstraction. We contribute a fresh “synthetic” take on program modules that treats modules as the fundamental constructs, in which the usual suspects of prior module calculi (kinds, constructors, dynamic programs) are rendered as derived notions in terms of a modal type-theoretic account of the phase distinction. We simplify the account of type abstraction (embodied in the generativity of module functors) through a lax modality that encapsulates computational effects, placing projectibility of module expressions on a type-theoretic basis.
Our main result is a (significant) proof-relevant and phase-sensitive generalization of the Reynolds abstraction theorem for a calculus of program modules, based on a new kind of logical relation called a parametricity structure. Parametricity structures generalize the proof-irrelevant relations of classical parametricity to proof-relevant families, where there may be nontrivial evidence witnessing the relatedness of two programs—simplifying the metatheory of strong sums over the collection of types, for although there can be no “relation classifying relations,” one easily accommodates a “family classifying small families.”
Using the insight that logical relations/parametricity is itself a form of phase distinction between the syntactic and the semantic, we contribute a new synthetic approach to phase-separated parametricity based on the slogan logical relations as types, by iterating our modal account of the phase distinction. We axiomatize a dependent type theory of parametricity structures using two pairs of complementary modalities (syntactic, semantic) and (static, dynamic), substantiated using the topos-theoretic Artin gluing construction. Then, to construct a simulation between two implementations of an abstract type, one simply programs a third implementation whose type component carries the representation invariant.
After going to press, we fixed a few mistakes; the local copy hosted here has the corrections implemented.
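The "third implementation" recipe can be seen in miniature in OCaml (a toy of my own devising, with none of the modal machinery): the simulation between two counter implementations is itself a counter, whose carrier consists of pairs lying in the graph of the relation.

```ocaml
module type COUNTER = sig
  type t
  val init : t
  val tick : t -> t
  val read : t -> int
end

(* Two implementations: counting up, and counting down with a negated read. *)
module Up : COUNTER = struct
  type t = int
  let init = 0
  let tick n = n + 1
  let read n = n
end

module Down : COUNTER = struct
  type t = int
  let init = 0
  let tick n = n - 1
  let read n = -n
end

(* The simulation is a third implementation: its states are pairs (a, b)
   satisfying the representation invariant b = -a, and each operation
   acts as Up on the first component and as Down on the second. *)
module Rel = struct
  type t = int * int                   (* invariant: snd = - fst *)
  let init = (0, 0)
  let tick (a, b) = (a + 1, b - 1)
  let read (a, _) = a
end

let rec iter n f x = if n = 0 then x else iter (n - 1) f (f x)
```

Projecting `Rel` onto either component recovers the original implementations, which is the shape of the abstraction theorem in this degenerate case.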
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. Our normalization result is reduction-free, in the sense of yielding a bijection between equivalence classes of terms in context and a tractable language of
Extending Martín Hötzel Escardó’s effectful forcing technique, we give a new proof of a wellknown result: Brouwer’s monotone bar theorem holds for any bar that can be realized by a functional of type
We contribute XTT, a cubical reconstruction of Observational Type Theory [Altenkirch et al., 2007] which extends Martin-Löf's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity proofs principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel extension of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either true or false.
Modalities are everywhere in programming and mathematics! Despite this, however, there are still significant technical challenges in formulating a core dependent type theory with modalities. We present a dependent type theory MLTT🔒 supporting the connectives of standard MartinLöf Type Theory as well as an S4style necessity operator. MLTT🔒 supports a smooth interaction between modal and dependent types and provides a common basis for the use of modalities in programming and in synthetic mathematics. We design and prove the soundness and completeness of a type checking algorithm for MLTT🔒, using a novel extension of normalization by evaluation. We have also implemented our algorithm in a prototype proof assistant for MLTT🔒, demonstrating the ease of applying our techniques.
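The core of normalization by evaluation, which the paper extends to the modal and dependent setting, can be sketched for the pure lambda calculus in a few lines of OCaml (de Bruijn indices; this toy omits types, modalities, and open-term subtleties handled by the real algorithm):

```ocaml
(* Syntax with de Bruijn indices. *)
type tm = Var of int | Lam of tm | App of tm * tm

(* Semantic values: functions, or neutrals blocked on a variable. *)
type vl = VLam of (vl -> vl) | VNe of ne
and ne = NVar of int | NApp of ne * vl

(* Evaluate syntax into the semantic domain. *)
let rec eval env = function
  | Var i -> List.nth env i
  | Lam b -> VLam (fun v -> eval (v :: env) b)
  | App (f, a) ->
    (match eval env f with
     | VLam g -> g (eval env a)
     | VNe n -> VNe (NApp (n, eval env a)))

(* Quote semantic values back to beta-normal syntax, inventing
   fresh variables (levels) as we go under binders. *)
let rec quote depth = function
  | VLam f -> Lam (quote (depth + 1) (f (VNe (NVar depth))))
  | VNe n -> quote_ne depth n
and quote_ne depth = function
  | NVar i -> Var (depth - i - 1)     (* convert level back to index *)
  | NApp (n, v) -> App (quote_ne depth n, quote depth v)

let normalize t = quote 0 (eval [] t)
```

Normalization is evaluation followed by quotation; the paper's contribution is making this recipe sound in the presence of the 🔒 modality and dependent types.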
Nakano’s later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a hierarchy of universes. We present an operational account of guarded dependent type theory with clocks called Guarded Computational Type Theory, featuring a novel clock intersection connective that enjoys the clock irrelevance principle, as well as a predicative hierarchy of universes which does not require any indexing in clock contexts. Guarded Computational Type Theory is simultaneously a programming language with a rich specification logic, as well as a computational metalanguage that can be used to develop semantics of other languages and logics.
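The guarded style can be approximated in OCaml by modelling the later modality as a thunk (an informal analogy of mine, not the paper's type theory: OCaml does not check productivity, whereas the later modality enforces it):

```ocaml
(* A stream whose tail is guarded by a thunk, an ersatz "later" modality:
   a definition must produce a head before it may recurse. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

(* Guarded definitions: each recursive occurrence sits under lazy,
   so every unfolding is productive. *)
let rec nats_from n = Cons (n, lazy (nats_from (n + 1)))

let rec map f (Cons (x, xs)) = Cons (f x, lazy (map f (Lazy.force xs)))

let rec take n (Cons (x, xs)) =
  if n = 0 then [] else x :: take (n - 1) (Lazy.force xs)
```

In Guarded Computational Type Theory the guardedness is tracked in the types themselves, and clock quantification recovers ordinary coinductive streams from guarded ones.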
In this paper, we present an extension to Martin-Löf’s Intuitionistic Type Theory which gives natural solutions to problems in pragmatics, such as pronominal reference and presupposition. Our approach also gives a simple account of donkey anaphora without resorting to exotic scope extension of the sort used in Discourse Representation Theory and Dynamic Semantics, thanks to the proof-relevant nature of type theory.
We develop a denotational semantics for general reference types in an impredicative version of guarded homotopy type theory, an adaptation of synthetic guarded domain theory to Voevodsky’s univalent foundations. We observe for the first time the profound impact of univalence on the denotational semantics of mutable state. Univalence automatically ensures that all computations are invariant under symmetries of the heap—a bountiful source of free theorems. In particular, even the most simplistic univalent model enjoys many new program equivalences that do not hold when the same constructions are carried out in the universes of traditional setlevel (extensional) type theory.
We present decalf, a directed, effectful cost-aware logical framework for studying quantitative aspects of functional programs with effects. Like calf, the language is based on a formal phase distinction between the extension and the intension of a program, its pure behavior as distinct from its cost measured by an effectful step-counting primitive. The type theory ensures that the behavior is unaffected by the cost accounting. Unlike calf, the present language takes account of effects, such as probabilistic choice and mutable state; this extension requires a reformulation of calf’s approach to cost accounting: rather than rely on a “separable” notion of cost, here a cost bound is simply another program. To make this formal, we equip every type with an intrinsic preorder, relaxing the precise cost accounting intrinsic to a program to a looser but nevertheless informative estimate. For example, the cost bound of a probabilistic program is itself a probabilistic program that specifies the distribution of costs. This approach serves as a streamlined alternative to the standard method of isolating a recurrence that bounds the cost in a manner that readily extends to higher-order, effectful programs.
The development proceeds by first introducing the decalf type system, which is based on an intrinsic ordering among terms that restricts in the extensional phase to extensional equality, but in the intensional phase reflects an approximation of the cost of a program of interest. This formulation is then applied to a number of illustrative examples, including pure and effectful sorting algorithms, simple probabilistic programs, and higherorder functions. Finally, we justify decalf via a model in the topos of augmented simplicial sets.
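The slogan "a cost bound is simply another program" can be illustrated with a small writer-style cost monad in OCaml (a toy of my own; decalf's intrinsic preorder and phase structure are not modelled here):

```ocaml
(* A value together with the number of comparison steps taken to compute it. *)
type 'a cost = 'a * int
let ret a = (a, 0)
let step (a, c) = (a, c + 1)
let ( let* ) (a, c) f = let b, d = f a in (b, c + d)

(* Insertion sort, charging one step per comparison. *)
let rec insert x = function
  | [] -> ret [x]
  | y :: ys ->
    if x <= y then step (ret (x :: y :: ys))
    else step (let* zs = insert x ys in ret (y :: zs))

let rec isort = function
  | [] -> ret []
  | x :: xs -> let* s = isort xs in insert x s

(* The bound is itself a program: insertion sort performs at most
   n (n - 1) / 2 comparisons on a list of length n. *)
let bound n = n * (n - 1) / 2
```

In decalf the inequality between `isort` and its bound is expressed by the intrinsic preorder rather than checked externally as here.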
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if
As type theories have become increasingly sophisticated, it has become more and more difficult to establish the useful properties of their free models that facilitate effective implementation. These obstructions have prompted a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly nontrivial injectivity law for free models of Martin-Löf type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
We present a novel mechanism for controlling the unfolding of definitions in dependent type theory. Traditionally, proof assistants let users specify whether each definition can or cannot be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core type theory with extension types, a connective first introduced in the context of homotopy type theory. We prove a normalization theorem for our core calculus and have implemented our system in the cooltt proof assistant, providing both theoretical and practical evidence for it.
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. We then extend our language with modal constructs for proof-relevant relational reasoning based on the logical relations as types principle, in which equivalences between imperative abstract datatypes can be established synthetically. Finally we develop a decomposition of the store model as a general construction that extends an arbitrary polymorphic call-by-push-value adjunction with higher-order store, improving on Levy's possible worlds model construction; what is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
Existential types are reconstructed in terms of small reflective subuniverses and dependent sums. The folklore decomposition detailed here gives rise to a particularly simple account of first-class modules as a mode of use of traditional second-class modules in connection with the modal operator induced by a reflective subuniverse, leading to a semantic justification for the rules of first-class modules in languages like OCaml and Moscow ML. Additionally, we expose several constructions that give rise to semantic models of ML-style programming languages with both first-class modules and realistic computational effects, culminating in a model that accommodates higher-order first-class recursive modules and higher-order store.
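In OCaml's concrete syntax the decomposition is visible: packing a structure into a first-class value introduces the existential, and `(val ...)` eliminates it, with the abstract type scoping only over the body, as with a dependent sum. (The module type and implementation below are my own minimal example.)

```ocaml
module type COUNTER = sig
  type t
  val zero : t
  val step : t -> t
  val read : t -> int
end

(* A second-class structure... *)
module Ints : COUNTER = struct
  type t = int
  let zero = 0
  let step n = n + 1
  let read n = n
end

(* ...packed as a first-class value of the existential type (module COUNTER)... *)
let packed : (module COUNTER) = (module Ints)

(* ...and unpacked by the eliminator (val ...). *)
let run (c : (module COUNTER)) =
  let module C = (val c) in
  C.read (C.step (C.step C.zero))
```

The paper's point is that `(module COUNTER)` behaves as the image of the second-class module under a reflective modality, which is what justifies these typing rules semantically.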
Logical relations are the main tool for proving positive properties of logics, type theories, and programming languages: canonicity, normalization, decidability, conservativity, computational adequacy, and more. Logical relations combine pure syntax with non-syntactic objects that are parameterized in syntax in a somewhat complex way; the sophistication of possible parameterizations makes logical relations a tool that is primarily accessible to specialists. In the spirit of Halmos' book Naïve Set Theory, I advocate for a new viewpoint on logical relations based on synthetic Tait computability, the internal language of categories of logical relations. In synthetic Tait computability, logical relations are manipulated as if they were sets, making the essence of many complex logical relations arguments accessible to non-specialists.
Hofmann and Streicher famously showed how to lift Grothendieck universes into presheaf topoi, and Streicher has extended their result to the case of sheaf topoi by sheafification. In parallel, van den Berg and Moerdijk have shown in the context of algebraic set theory that similar constructions continue to apply even in weaker metatheories. Unfortunately, sheafification seems not to preserve an important realignment property enjoyed by the presheaf universes that plays a critical role in models of univalent type theory as well as synthetic Tait computability, a recent technique to establish syntactic properties of type theories and programming languages. In the context of multiple universes, the realignment property also implies a coherent choice of codes for connectives at each universe level, thereby interpreting the cumulativity laws present in popular formulations of Martin-Löf type theory.
We observe that a slight adjustment to an argument of Shulman constructs a cumulative universe hierarchy satisfying the realignment property at every level in any Grothendieck topos. Hence one has direct-style interpretations of Martin-Löf type theory with cumulative universes into all Grothendieck topoi. A further implication is to extend the reach of recent synthetic methods in the semantics of cubical type theory and the syntactic metatheory of type theory and programming languages to all Grothendieck topoi.
The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics.
In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.
It is easy to teach a student how to give a naïve denotational semantics to a language like System T, and then use it to reason about the equational theory: a type might as well be a set, and a program might as well be a function, and equational adequacy at base type is established using a logical relation between the initial model and the category of sets. Adding any nontrivial feature to this language (e.g. general recursion, polymorphism, state, etc.) immediately increases the difficulty beyond the facility of a beginner: to add recursion, one must replace sets and functions with domains and continuous maps, and to accommodate polymorphism and state, one must pass to increasingly inaccessible variations on this basic picture.
The dream of the 1990s was to find a category that behaves like
In this talk, I will explain some important classical results in synthetic domain theory as well as more recent results that illustrate the potential impact of “naïve denotational semantics” on the life of a workaday computer scientist.
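The "naïve denotational semantics" of the opening paragraph can be written down directly (a toy interpreter of my own for a System T fragment, reading types off as OCaml sets):

```ocaml
(* Syntax of a System T fragment, with de Bruijn indices. *)
type tm =
  | Var of int
  | Lam of tm
  | App of tm * tm
  | Zero
  | Suc of tm
  | Rec of tm * tm * tm                (* Rec (z, s, n): iterate s, n times, from z *)

(* The naive semantic domain: a type might as well be a set,
   and a program might as well be a function. *)
type dom = N of int | F of (dom -> dom)

let apply f a = match f with F g -> g a | N _ -> failwith "ill-typed"

let rec den env = function
  | Var i -> List.nth env i
  | Lam b -> F (fun v -> den (v :: env) b)
  | App (f, a) -> apply (den env f) (den env a)
  | Zero -> N 0
  | Suc t -> (match den env t with N n -> N (n + 1) | F _ -> failwith "ill-typed")
  | Rec (z, s, n) ->
    (match den env n with
     | N k ->
       let rec go k =
         if k = 0 then den env z
         else apply (apply (den env s) (N (k - 1))) (go (k - 1))
       in
       go k
     | F _ -> failwith "ill-typed")

(* Doubling via primitive recursion: λx. rec(0, λp. λr. suc (suc r), x). *)
let double = Lam (Rec (Zero, Lam (Lam (Suc (Suc (Var 0)))), Var 0))
```

It is exactly this style of semantics that stops working once recursion, polymorphism, or state is added, and which synthetic domain theory aims to rehabilitate.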
We contribute the first denotational semantics of polymorphic dependent type theory extended by an equational theory for general (higher-order) reference types and recursive types, based on a combination of guarded recursion and impredicative polymorphism; because our model is based on recursively defined semantic worlds, it is compatible with polymorphism and relational reasoning about stateful abstract datatypes. What is new in relation to prior typed denotational models of higher-order store is that our Kripke worlds need not be syntactically definable, and are thus compatible with relational reasoning in the heap. Our work combines recent advances in the operational semantics of state with the purely denotational viewpoint of synthetic guarded domain theory.
We present a novel mechanism for controlling the unfolding of definitions in dependent type theory. Traditionally, proof assistants let users specify whether each definition can or cannot be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core type theory with extension types, a connective first introduced in the context of homotopy type theory. We prove a normalization theorem for our core calculus and have implemented our system in the cooltt proof assistant, providing both theoretical and practical evidence for it.
A serious (and asyet unfixed) problem was discovered in July of 2023 by Yue Niu, which undermines the proof of adequacy given; in particular, the proof that the logical relation on free algebras is admissible is not correct. I believe there is a different proof of adequacy for the calculus described, but it will have a different structure from what currently appears in the paper. We thank Yue Niu for his attention to detail and careful reading of this paper.
Logical relations are the main tool for proving positive properties of logics, type theories, and programming languages: canonicity, normalization, decidability, conservativity, computational adequacy, and more. Logical relations combine pure syntax with non-syntactic objects that are parameterized in syntax in a somewhat complex way; the sophistication of possible parameterizations makes logical relations a tool that is primarily accessible to specialists. In the spirit of Halmos' book Naïve Set Theory, I advocate for a new viewpoint on logical relations based on synthetic Tait computability, the internal language of categories of logical relations. In synthetic Tait computability, logical relations are manipulated as if they were sets, making the essence of many complex logical relations arguments accessible to non-specialists.
For six years, I have served as the founder and technical leader of the RedPRL Development Team, which has produced three interactive proof assistants for variants of cubical type theory: RedPRL, redtt, and cooltt. I will share several lessons that we have learned about the design and implementation of homotopical proof assistants along this journey. This talk discusses joint work with Carlo Angiuli, Evan Cavallo, Favonia, and Reed Mullanix.
Type abstraction, the phase distinction, and computational effects all play an important role in the design and implementation of ML-style module systems. We propose a simple type-theoretic metalanguage φML for multi-phase modularity in which these concepts are treated individually, supporting the definition of high-level modular constructs such as generative and applicative functors, as well as all extant forms of structure sharing.
We contribute XTT, a cubical reconstruction of Observational Type Theory which extends intensional type theory with a dependent equality type that enjoys function extensionality and judgmental unicity of identity proofs. XTT employs a variant of the Cartesian cubical Kan operations satisfying regularity (i.e., transport in constant type families is judgmentally constant), allowing its equality type to model Martin-Löf's identity type judgmentally. We prove canonicity for the initial model of XTT (i.e., any closed term of boolean type is equal to either true or false) using a novel cubical extension (independently proposed by Awodey) of the categorical gluing technique inspired by Coquand and Shulman, in which we glue the fundamental fibration of a category of augmented Cartesian cubical sets along a cubical nerve. We conjecture that our methods will extend to open terms, allowing us to establish normalization and decidability of the typing relation.
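The regularity equation can be stated schematically as follows (the notation approximates the paper's Kan operations; it is not quoted from the paper):

```latex
\mathsf{coe}^{r \rightsquigarrow r'}_{i.A}(M) \;=\; M
\qquad \text{when } i \text{ does not occur in } A
```

That is, coercion along a degenerate line of types is judgmentally the identity, which is what allows the equality type to validate Martin-Löf's identity type rules on the nose.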
RedPRL is an experimental proof assistant based on Cartesian cubical computational type theory, a new type theory for higher-dimensional constructions inspired by homotopy type theory. In the style of Nuprl, RedPRL users employ tactics to establish behavioral properties of cubical functional programs embodying the constructive content of proofs. Notably, RedPRL implements a two-level type theory, allowing an extensional, proof-irrelevant notion of exact equality to coexist with a higher-dimensional proof-relevant notion of paths.
Impredicative guarded dependent type theory (iGDTT) is a new version of type theory that combines guarded recursion (the "later" modality) with impredicative polymorphism (universal and existential types). It turns out that these two features are sufficient to define a very simple denotational semantics for System F with recursive types and higher-order store. We believe that the expressivity of iGDTT brings us one step closer to a general metalanguage for realistic denotational semantics, and provides a compelling strategy to elude the burden of operational semantics. As a further benefit, we are now able to justify the extension of full dependent type theory with a Haskell-style IO monad and IORef types.
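The key ingredient is the "later" modality ▷, whose guarded fixed-point (Löb) principle solves recursive definitions, and under which guarded recursive types unfold up to a ▷-shift (standard notation from guarded type theory, not quoted from the paper):

```latex
\mathsf{l\ddot{o}b} : (\mathop{\triangleright} A \to A) \to A
\qquad\qquad
\mu\alpha.\, A \;\cong\; A\,[\mathop{\triangleright}(\mu\alpha.\, A) \,/\, \alpha]
```

Impredicativity is what lets the semantic worlds for higher-order store quantify over all types, including the world-indexed types being defined.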
The great semanticist John Reynolds famously wrote in 1983 that “type structure is a syntactic discipline for enforcing levels of abstraction”. If the last twenty years of programming language semantics and verification have taught us anything, it is that we also need a syntactic discipline for breaking abstraction — in other words, a way to glue together programs and verifications that cut across abstraction barriers.
In programming language semantics and verification, the problem of combining multiple levels of abstraction arises when choosing a “level of detail” at which to view program execution: for instance, one could look at program execution as a detailed operational process of discrete steps evincing the cost or complexity of an algorithm, or one could think of it more abstractly as a black box that only sends inputs to outputs. The difficulty is that in practice, verifications tend to cut across this barrier between complexity and functional correctness: for instance, complexity bounds often depend on the functional correctness of subroutines, and the existence of such a bound implies termination (a correctness property).
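Concretely, the two views can be phrased as a step-indexed big-step evaluation relation versus its cost-erased quotient (the notation here is a standard schematic, not drawn from any particular paper):

```latex
\frac{e_1 \Downarrow^{n_1} \lambda x.\, e \qquad e_2 \Downarrow^{n_2} v_2 \qquad e[v_2/x] \Downarrow^{n_3} v}
     {e_1\, e_2 \Downarrow^{n_1 + n_2 + n_3 + 1} v}
\qquad\qquad
e \Downarrow v \;:\Longleftrightarrow\; \exists n.\; e \Downarrow^{n} v
```

A complexity bound is a statement about the index n, while functional correctness is a statement about the erased relation; verifications that mention both must work across this boundary.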
For this reason, it is crucial to develop integrated logical foundations for soundly reasoning using multiple models of execution at the same time, even when they expose different facets of a program's meaning. For the past three years, my research program has been to uncover and exploit the basic “laws of motion” governing all such abstraction barriers, which has led to the solution of a few significant open problems in homotopy type theory and modal type theory, as well as some preliminary applications to security and cost analysis.
Security-typed programming languages aim to control the flow of high-security information to low-security clients. Starting with Abadi et al.'s dependency core calculus, the denotational semantics of such languages has been dominated by an extrinsic approach in which an existing insecure model of computation (e.g. ordinary domains) is restricted by a logical relation of "indistinguishability" to prevent low-security outputs from depending on high-security inputs (noninterference). Thus in the extrinsic approach, security properties are bolted onto an insecure model by brute force, as it were. A more refined information flow policy called termination-insensitive noninterference allows high-security bits to be leaked through termination channels but not through return values; unfortunately, the adaptation of the extrinsic/relational semantics to this more relaxed policy is incompatible with the transitivity of the logical relation, contradicting the intuition of "indistinguishability".
In contrast, an intrinsic semantics of security typing would involve a new computational model that evinces secure information flow and noninterference properties directly without any post hoc restriction by a logical relation. We contribute the first such intrinsic semantics of security typing in this sense by considering sheaves of dcpos on a space of abstract behaviors on which security classes arise as open/closed partitions; the security monads then arise as the closed modalities of topos theory that restrict a sheaf to its component over a closed subspace.
An advantage of our intrinsic semantics is that termination-insensitive noninterference arises automatically from our computational model, namely the fact that the Sierpiński domain is not a constant sheaf; a further advantage is that our semantics is an instance of standard domain-theoretic denotational semantics, albeit over a richer category of domains.
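As a toy operational contrast, far removed from the sheaf semantics above, the extrinsic picture can be mimicked dynamically by sealing high-security values so that low observers can transform but never inspect them; the class and function names here are purely illustrative:

```python
class Sec:
    """A toy 'high-security' wrapper: the payload is reachable only
    through map, so a low-security observer can transform the secret
    but never branch on it (a crude, dynamically enforced analogue
    of noninterference)."""

    def __init__(self, value):
        self._value = value

    def map(self, f):
        # Computation inside the high-security compartment may
        # depend on the secret...
        return Sec(f(self._value))


def low_view(sec):
    # ...but the low-security projection is constant in the secret.
    return "<classified>"


secret_a = Sec(42).map(lambda n: n + 1)
secret_b = Sec(7).map(lambda n: n + 1)

# Two runs differing only in high-security inputs are
# indistinguishable to the low-security observer.
assert low_view(secret_a) == low_view(secret_b)
```

In the intrinsic semantics, by contrast, no such post hoc sealing is needed: the security monads arise directly as closed modalities of the sheaf topos.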
(j.w.w. R. Harper)
The purpose of this talk is to pose the question, “What are Euclid’s postulates for syntactic metatheory?”
Around 300 B.C.E., the Greek mathematician Euclid set down his famous postulates for plane geometry, explaining geometric shapes in terms of rules that govern their construction and incidence. The dialectical relationship between theories (axioms) and their models (coordinate systems) has been the driving force in the last two millennia of geometrical investigation.
In logic and computer science, workers in the “syntactic metatheory” investigate questions that lie on the fringe between a theory and its models — definability, normalization, decidability, conservativity, computational adequacy, parametricity, type safety, etc. Dominant methods attack these questions by means of explicit computations (e.g. Kripke logical relations) which practitioners have found to be both reliable and somewhat opaque. In this talk, I introduce Synthetic Tait computability — a new system of axioms that transforms these explicit computations into synthetic manipulations; classical Kripke logical relations can be seen as models or “coordinate systems” for the new geometry of syntax that is beginning to unfold.
Synthetic Tait computability has already been employed to positively resolve the normalization and decidability conjectures for cubical type theory, as well as a number of other recent results.
The fundamental contradiction of programming and program verification can be located in the tension between abstraction and composition. We make programs more abstract in order to prevent bad interactions between components; on the other side of the coin, we impede the composition of components when we abstract them. Modern programming practice evinces many distinct levels of abstraction that must be considered at the same time — for instance, compilers break module boundaries during linking, complexity analysis breaks the abstraction of extensional equivalence, and logical relations proofs break the abstraction of closure under substitution. What is needed to meet this challenge are linguistic tools that smoothly interpolate between these different levels of abstraction. Building on my doctoral dissertation and joint work with Bob Harper, I introduce a new plan for modal programming languages and logics that treat the transition between different abstraction levels as a first-class notion.
Programmers use abstraction to hide representation details from themselves: either to avoid mistakes (e.g. confusing a list index with a length) or to exploit representation invariants (e.g. two implementations of the QUEUE interface are indistinguishable). These abstraction boundaries can unfortunately impede the linking of smaller program units into efficient composite programs, because compilers must exploit representation details in order to produce efficient code. Sometimes seen as the "Waterloo of separate compilation", the need to break abstraction is answered by whole-program analysis techniques that break all abstractions (as in the MLton compiler for Standard ML). Separate compilation, however, has a number of advantages including speed, parallelization, and elegance.
We present an alternative type-theoretic account of abstraction-breaking during compilation based on the famous phase distinction of ML languages; rather than distinguishing between compile-time and run-time, we focus on separating "dev-time" from compile-time. Our framework allows the programmer to selectively reveal representation details to the compiler without giving up the representation independence properties guaranteed by "dev-time" type correctness. We also describe an application to the problem of printf-debugging, which is ordinarily obstructed by abstraction.
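Schematically, the phase distinction says that a module signature splits into a static part and a dynamic part indexed over it, in the tradition of Harper, Mitchell, and Moggi (our notation, not drawn from the paper):

```latex
\mathsf{Sig} \;\simeq\; \sum_{s \,:\, \mathsf{Sig}^{\mathrm{st}}} \mathsf{Sig}^{\mathrm{dyn}}(s)
```

Adding a "dev-time" phase refines this splitting further, so that representation details can be visible to the compiler's phase while remaining hidden at dev-time.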
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application (p @ i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
This is joint work with Robert Harper.
How do you prove that two implementations of an abstract type behave the same in all configurations? Reynolds famously employed logical relations to establish such results; roughly, a logical relation is a structure-respecting relation between two interpretations of a theory that evinces, in the base case, a desirable invariant.
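For instance, in standard notation (not specific to this talk), the function-type clause of a binary logical relation reads:

```latex
(f, g) \in R_{A \to B} \;:\Longleftrightarrow\; \forall (a, a') \in R_A.\; (f\,a,\, g\,a') \in R_B
```

Related arguments are sent to related results, so relatedness at base type propagates up through all higher types of the theory.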
We present a synthetic way to understand and interact with logical relations, related to classical logical relations in the same way that Euclidean geometry relates to point-sets. Previously a logical relation was defined in terms of the (complicated) details of how it is constructed as a certain kind of relation over syntax. We instead take the simpler view that everything in sight is a logical relation, and then use modalities to isolate those logical relations that are degenerate in either the syntactic or the semantic direction.
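Concretely, in a topos equipped with a distinguished proposition φ marking the syntactic direction, these are the standard open and closed modalities (standard topos-theoretic notation, offered here as background):

```latex
\bigcirc_{\phi} A \;:=\; (\phi \to A)
\qquad\qquad
\bullet_{\phi} A \;:=\; \phi \ast A \;=\; \phi \sqcup_{\phi \times A} A
```

A logical relation degenerate in the semantic direction is one fixed by the open modality, and one degenerate in the syntactic direction is fixed by the closed modality.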
Our “logical relations as types” principle has led to a new account of modules and representation independence (S., Harper), as well as the first proofs of normalization for cubical type theory (S., Angiuli) and general multimodal dependent type theory (Gratzer).
We prove normalization for (univalent, Cartesian) cubical type theory, closing the last major open problem in the syntactic metatheory of cubical type theory. The main difficulty in comparison to conventional type theory is located in a new feature of cubical type theories, the absence of a stable notion of neutral term: for instance, the path application p(i) ceases to be neutral within its “locus of instability” ∂(i) and must compute to an endpoint. We introduce a new, geometrically inspired generalization of the notion of neutral term, stabilizing neutrals by gluing them together with partial computability data along their loci of instability — when the locus of instability is nowhere, a stabilized neutral is a conventional neutral, and when the locus of instability is everywhere, a stabilized neutral is just computability data. Our normalization result is based on a reduction-free Artin gluing argument, and yields an injective function from equivalence classes of terms in context to a tractable language of beta/eta-normal forms. As corollaries we obtain both decidability of judgmental equality, as well as injectivity of type constructors in contexts formed by assuming variables x : A and dimensions i : 𝕀.
(j.w.w. Carlo Angiuli.)
redtt is an interactive proof assistant for Cartesian cubical type theory, a version of Martin-Löf type theory featuring computational versions of function extensionality, higher inductive types, and univalence. Building on ideas from Epigram, Agda, and Idris, redtt introduces a new cubical take on interactive proof development with holes. We will first introduce the basics of cubical type theory and then dive into an interactive demonstration of redtt’s features and its mathematical library.
After this we will catch a first public glimpse of the future of redtt: a new prototype that our team is currently building, codenamed "cooltt". cooltt introduces syntax to split on disjunctions of cofibrations in arbitrary positions, implementing the full definitional eta law for disjunction. While cooltt is still in the early stages, it already has full support for univalence and cubical interactive proof development.
What type theorists and other researchers in type theory have in common is that they study theorems that hold of the initial model of type theory; but type theorists especially emphasize the theorems whose statements are sufficiently non-type-theoretic that they need not be preserved by homomorphisms of models. These theorems, sometimes called "metatheorems" or "admissibilities", are the means by which we conceive and justify computerized implementations of type theory, including canonicity, normalization, and decidability of type checking and judgmental equality.
The main tool for proving such theorems is Tait's method of computability, which has in the past several years been subject to a rapid campaign of rectification using the categorical language of Artin gluing. I will give an overview of this brave new "objective computability", emphasizing my recent joint work with Angiuli and Gratzer on a general gluing theorem for models of MLTT along flat functors into Grothendieck topoi, with an application to cubical type theory.
We extend the theory of Artin gluing to strict dependent type theory: given a flat functor F : C → E from the category of contexts of a model of Martin-Löf type theory into a Grothendieck topos E, we may construct the F-computability families model of type theory. Our theorem extends to MLTT with a (strict, weak) universe à la Tarski if E may be equipped with a (strict, weak) universe à la Tarski.
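In the classical gluing construction, the computability families live in the comma category of E over F (standard notation, sketched here for orientation):

```latex
\mathbf{Gl}(F) \;=\; \mathcal{E} \downarrow F,
\qquad
\text{objects: triples } (\Gamma \in \mathcal{C},\; E \in \mathcal{E},\; E \to F\Gamma)
```

The enlargement described below relaxes the requirement that every such family be tracked by a syntactic entity, asking only that the tracked families be dense in the larger category.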
We introduce a more tractable approach to gluing models of type theory, working primarily within a suitably enlarged category of computability families which are not all tracked by syntactical entities, but which is densely generated by the subcategory of such computability families.
This tree defines the basic notational macros used across this forest.
Use Bike to think, write, organize.
Make lists, take notes, create documents.
Bike’s an unusually fast outliner designed for your Mac.
Kerodon is an online textbook on categorical homotopy theory and related mathematics. It currently consists of a handful of chapters, but should grow (slowly) over time. It is modeled on the Stacks project, and is maintained by Jacob Lurie.
Its infrastructure uses Gerby, which is maintained by Pieter Belmans. The design was created by Opus Design.
If you need to convert files from one markup format into another, pandoc is your swiss-army knife.
The implementation and semantics of dependent type theories can be studied in a syntax-independent way: the objective metatheory of dependent type theories exploits the universal properties of their syntactic categories to endow them with computational content, mathematical meaning, and practical implementation (normalization, type checking, elaboration). The semantic methods of the objective metatheory inform the design and implementation of correct-by-construction elaboration algorithms, promising a principled interface between real proof assistants and ideal mathematics.
In this dissertation, I add synthetic Tait computability to the arsenal of the objective metatheorist. Synthetic Tait computability is a mathematical machine to reduce difficult problems of type theory and programming languages to trivial theorems of topos theory. First employed by Sterling and Harper to reconstruct the theory of program modules and their phase-separated parametricity, synthetic Tait computability is deployed here to resolve the last major open question in the syntactic metatheory of cubical type theory: normalization of open terms.
Do you wish to write beautiful scientific documents with GNU TeXmacs? Then be guided by The Jolly Writer on your way to mastering the secrets of elegant layout and mathematical typography.
The book covers the main features of the latest version of TeXmacs: its WYSIWYG user interface, mathematics, tables, professional typography, technical pictures, laptop presentations, structured editing, interfaces to systems for symbolic and numeric computations, customizable style sheets, the versioning tool, and much more.
The primary aim of The Jolly Writer is to guide you through this world of possibilities. While hacking your way through the jungle of style tags, keyboard shortcuts, mathematical homoglyphs, hanging punctuation, structured insertions, morphing animations, among others, you will slowly but surely unravel their mysteries and deeper why. At the end of the road, maybe you will become a jolly writer yourself.
We study localization at a prime in homotopy type theory, using self maps of the circle. Our main result is that for a pointed, simply connected type
We model exceptions in a linear, effectful setting by relaxing the notion of monadic strength to contexts that are discardable, in the spirit of C++ destructors. This gives rise to a resource modality reminiscent of unique_ptr and move semantics in C++11. We explore consequences in language design for resource management in functional programming languages.
The Stacks project is an ever growing open source textbook and reference work on algebraic stacks and the algebraic geometry needed to define them. Here are some quick facts:
For a longer discussion, please read the blog post What is the stacks project?.
Homotopy type theory is a new branch of mathematics that combines aspects of several different fields in a surprising way. It is based on a recently discovered connection between homotopy theory and type theory. It touches on topics as seemingly distant as the homotopy groups of spheres, the algorithms for type checking, and the definition of weak ∞-groupoids.
We develop a theory of categories which are simultaneously (1) indexed over a base category
We present a method of constructing symmetric monoidal bicategories from symmetric monoidal double categories that satisfy a lifting condition. Such symmetric monoidal double categories frequently occur in nature, so the method is widely applicable, though not universally so.
The paper develops a mathematical theory in the spirit of categorical algebra that provides a model theory for second-order and dependently-sorted syntax. The theory embodies notions such as alpha-equivalence, variable binding, capture-avoiding simultaneous substitution, term metavariables, metasubstitution, mono- and multi-sorting, and sort dependency. As a matter of illustration, a model is used to extract a second-order syntactic theory, which is thus guaranteed to be correct by construction.
If you have a LaTeX document which
you will run into the problem that
Gerby addresses these problems by providing an online tag-based view, instead of just having a big PDF. Gerby is tailored towards making large online textbooks and reference works more accessible.
In case you were wondering, a gerbe is a kind of stack (in the mathematical sense), and the software was originally meant for the Stacks project.
Higher category theory is generally regarded as technical and forbidding, but part of it is considerably more tractable: the theory of ∞-categories, higher categories in which all higher morphisms are assumed to be invertible. In Higher Topos Theory, Jacob Lurie presents the foundations of this theory, using the language of weak Kan complexes introduced by Boardman and Vogt, and shows how existing theorems in algebraic topology can be reformulated and generalized in the theory’s new language. The result is a powerful theory with applications in many areas of mathematics.
The book’s first five chapters give an exposition of the theory of ∞-categories that emphasizes their role as a generalization of ordinary categories. Many of the fundamental ideas from classical category theory are generalized to the ∞-categorical setting, such as limits and colimits, adjoint functors, ind-objects and pro-objects, locally accessible and presentable categories, Grothendieck fibrations, presheaves, and Yoneda’s lemma. A sixth chapter presents an ∞-categorical version of the theory of Grothendieck topoi, introducing the notion of an ∞-topos, an ∞-category that resembles the ∞-category of topological spaces in the sense that it satisfies certain axioms that codify some of the basic principles of algebraic topology. A seventh and final chapter presents applications that illustrate connections between the theory of higher topoi and ideas from classical topology.
We present Quantitative Type Theory, a Type Theory that records usage information for each variable in a judgement, based on a previous system by McBride. The usage information is used to give a realizability semantics using a variant of Linear Combinatory Algebras, refining the usual realizability semantics of Type Theory by accurately tracking resource behaviour. We define the semantics in terms of Quantitative Categories with Families, a novel extension of Categories with Families for modelling resource sensitive type theories.
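Schematically, a QTT judgement annotates each variable with a usage drawn from a resource semiring, for example {0, 1, ω} (our notation, intended only to convey the shape of the system):

```latex
x_1 :^{\rho_1} A_1,\; \ldots,\; x_n :^{\rho_n} A_n \;\vdash\; M :^{\sigma} B
\qquad \rho_i, \sigma \in \{0, 1, \omega\}
```

Usage 0 marks variables available only for type formation, 1 marks linear runtime use, and ω marks unrestricted use; the realizability semantics tracks these annotations to account for resources accurately.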