This is a list of projects that I would love to see happen. Alas, lacking infinite time, I don’t have any concrete plans to make them happen, but I’d be happy to advise and mentor others undertaking these projects. Some of these might serve as good bachelor’s or master’s thesis projects, while others are likely too ambitious or not adequately ambitious. I’ve included estimates of how much work I think each project would take me, so adjust accordingly for your greater or lesser experience with the domain. The bottom of the list has some less ambitious projects.

Proving well-typedness of quotation in MetaCoq
- Context: Löb’s theorem says `□(□X → X) → □X`, where `□T` means `{ t : Ast.term & Σ ;;; [] ⊢ t : <% T %> }`. The two biggest undertakings in getting such a theorem about a substantial AST are quotation (`□X → □□X`) and decidable equality of ASTs. MetaCoq largely has decidable equality of ASTs already, and this project is about completing quotation, and in particular the second projection: that the function `quote : { t : Ast.term & Σ ;;; [] ⊢ t : T } → Ast.term`, which adds a layer of quotation, gives something that is well-typed, i.e., that we have `∀ v, Σ ;;; [] ⊢ quote v : <% { t : Ast.term & Σ ;;; [] ⊢ t : T } %>`.
- Partial work: JasonGross/metacoq@coq-8.16+quotation-typing, MetaCoq/metacoq/quotation/
- Estimated commitment: 1–2 major design decisions + 8k–800k loc (defining `quote` took about 8k loc and 200 hours)
- Notes:
  - There are two major blockers at the current state of this project:
    - Having a version of the safechecker that produces unsquashed typing derivations. I believe Théo Winterhalter may be working on this.
    - Figuring out how to deal with universes / universe polymorphism of typing judgments. Roughly the problem is: given the definition of `quote_prod : (A → Ast.term) → (B → Ast.term) → A × B → Ast.term`, how do we formulate the proof that, whenever the two arguments always generate well-typed `Ast.term`s, the result of `quote_prod` is also well-typed, given that I may use this across different universes? (See also `qpair_cps` for a slightly simpler version of this problem.)
  - The next major issue after these is likely:
    - Figuring out how to abstract over Gallina context variables so that we can adequately deduplicate work across safechecker invocations (probably we can turn external quantifications into internal ones and safecheck the abstracted term, but some details need to be worked out).
  - And finally, if that’s solved, the next two major issues I foresee are:
    - The axioms used by the various parts of MetaCoq, and whether or not they cause problems when duplicated into the quoted `Γ`.
    - I’m a bit worried about the size of terms and performance; things might be painful.
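To make the universe question concrete, here is a rough Coq sketch of what `quote_prod` and its well-typedness statement might look like. This is schematic only: applying `<% ... %>` to type variables does not literally work, and `quote_prod_wt` is a hypothetical name, not worked-out MetaCoq code.

```coq
(* Sketch: quoting a pair, given quoters for the components.
   mkApps and <% ... %> are MetaCoq Template notation; quoting the
   type variables A and B as written here is schematic only. *)
Definition quote_prod {A B : Type}
           (qA : A -> Ast.term) (qB : B -> Ast.term)
           (p : A * B) : Ast.term :=
  Ast.mkApps <% @pair %> [<% A %>; <% B %>; qA (fst p); qB (snd p)].

(* The statement we would like, at every universe instantiation:
Lemma quote_prod_wt {A B} qA qB
      (HA : forall a, Σ ;;; [] ⊢ qA a : <% A %>)
      (HB : forall b, Σ ;;; [] ⊢ qB b : <% B %>)
  : forall p, Σ ;;; [] ⊢ quote_prod qA qB p : <% (A * B)%type %>.
*)
```

The difficulty is precisely that `A` and `B` may live at different universes in different uses, so this lemma must be stated once per universe instantiation, or in some universe-polymorphic form that MetaCoq's typing judgment can absorb.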
Löb’s theorem in MetaCoq
- For those who think the above project is not ambitious enough.
- This Agda code might be a good reference for how to build the abstractions.
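Statement-wise, the goal would look something like the following Coq sketch, with `box` left abstract and all names illustrative:

```coq
(* □X as in the previous project: closed syntax t together with a
   typing derivation Σ ;;; [] ⊢ t : ⌜X⌝. Left abstract here. *)
Definition box (X : Type) : Type. Admitted.

(* Löb's theorem: from an internal proof that "provability of X
   implies X", produce an internal proof of X. *)
Theorem lob : forall X : Type, box (box X -> X) -> box X.
Admitted.

(* Sanity check on the shape: with X := False this specializes to
   box (box False -> False) -> box False, i.e., the internal
   statement that consistency is unprovable. *)
```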
 Denotation in MetaCoq
- On pain of inconsistency, we cannot write a function `denote : { T : Ast.term & { t : Ast.term & Σ ;;; [] ⊢ t : T } } → { T : Type & T }`. I argue in slides 23–28 and 38 of the presentation of Accelerating Verified-Compiler Development with a Verified Rewriting Engine that it should be possible to parameterize a denotation function over collections in a way that effectively allows having such a denotation function for any subset of the language. The domain of any given instantiation of the denotation function would not, of course, include its own quotation, only those of more restrictive families. MetaCoq’s Template Coq automation could automatically emit specialized instantiations on the fly, though, allowing effective use of the denotation function. See this Zulip thread for some more discussion.
- Estimated commitment: maybe a couple of years? Probably PhD-thesis worthy if fully accomplished.
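As a toy illustration of denotation over a restricted fragment (plain Coq over a tiny standalone expression type, not MetaCoq's `Ast.term`):

```coq
(* A tiny typed fragment with a total denotation function. The point:
   for a carved-out sub-language, denote is perfectly definable; the
   fragment simply cannot describe its own quotations. *)
Inductive ty := TNat | TBool.

Inductive exp : ty -> Type :=
| ENat  : nat -> exp TNat
| EBool : bool -> exp TBool
| EAdd  : exp TNat -> exp TNat -> exp TNat
| EIf   : forall t, exp TBool -> exp t -> exp t -> exp t.

Definition denote_ty (t : ty) : Type :=
  match t with TNat => nat | TBool => bool end.

Fixpoint denote {t} (e : exp t) : denote_ty t :=
  match e in exp t0 return denote_ty t0 with
  | ENat n => n
  | EBool b => b
  | EAdd a b => denote a + denote b
  | EIf _ c x y => if denote c then denote x else denote y
  end.
```

The project is about doing this parametrically: one denotation function indexed by a collection of allowed constants/types, with Template Coq emitting the instantiation for whatever fragment a given term actually uses.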
 Reflective rewriting based on MetaCoq
- My reflective rewriter (GitHub, publication, YouTube presentation) is very much a research prototype and needs to be rewritten from scratch. The ideas should be generalizable, though, and if MetaCoq can be given an on-the-fly-specializable family of denotation functions from the bullet above, this project could plausibly become a drop-in replacement for the default rewriting in Coq.
- See slides 26 and 38 of the presentation of Accelerating Verified-Compiler Development with a Verified Rewriting Engine.
- Estimated commitment: maybe a month to a year on top of the years for denotation in MetaCoq? Very likely PhD-thesis worthy in combination with denotation, if fully accomplished.
 Integrating abstract interpretation and reflective rewriting
- See Appendix D, Fusing Compiler Passes, of the paper Accelerating Verified-Compiler Development with a Verified Rewriting Engine.
- Estimated commitment: will probably require a rewrite of much of the 12k lines of rewriter code and the 4.7k lines of abstract interpretation code, plus design decisions.
Removing needless nonzero assumptions from Coq’s `ZArith` library
- Many of the lemmas have premises of the form `x <> 0` even when the conclusion holds without such premises. Automating goals in `Z` is much nicer when the relevant lemmas don’t require such assumptions.
- Existing work: coq/coq#16203 performs this reorganization for `N`; see also this comment.
- Estimated commitment: the work on `N` was +765/−270; the work on `Z` should be 1×–2× as involved.
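As one concrete instance of such a needless premise (relying on Coq's convention that `a mod 0 = a` on `Z`; lemma names are from the stdlib, though exact availability varies by Coq version):

```coq
Require Import ZArith.
Open Scope Z_scope.

(* Stdlib: Z.mod_mod : forall a n, n <> 0 -> (a mod n) mod n = a mod n.
   The premise is needless: the n = 0 case holds too, since a mod 0 = a. *)
Lemma mod_mod_no_hyp (a n : Z) : (a mod n) mod n = a mod n.
Proof.
  destruct (Z.eq_dec n 0) as [->|Hn].
  - now rewrite Zmod_0_r.
  - now apply Z.mod_mod.
Qed.
```

With the hypothesis-free version in place, `rewrite mod_mod_no_hyp` works unconditionally, with no side condition for automation to discharge.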
Fill out Coq’s theory of primitive integers
- There’s some work in progress on a theory of integers modulo `n`; I’d love to see this extended/completed and applied to Coq’s `Uint63` and `Sint63`.
- Additionally, it would be nice to have the basic order modules and relation structure (such as transitivity of comparison) filled out.
- Finally, `zify`, `lia`, and `nia` don’t work quite as well on `int` as on `Z`, `N`, and `nat`, due to all the moduli. It would be nice to figure out an efficient strategy for deciding linear and nonlinear equations modulo a fixed modulus.
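To illustrate why the moduli get in the way (`add_spec` and `to_Z` are from Coq's `Uint63` library; treat the exact proof script as a sketch):

```coq
From Coq Require Import Uint63 ZArith Lia.

(* Every primitive-int operation is specified modulo wB = 2^63, e.g.
   add_spec : to_Z (x + y) = ((to_Z x + to_Z y) mod wB)%Z, so even a
   trivially "linear" goal turns into a goal about mod: *)
Goal forall x y : int, to_Z (x + y)%uint63 = to_Z (y + x)%uint63.
Proof.
  intros x y.
  rewrite !add_spec.   (* both sides become (... mod wB)%Z *)
  f_equal; lia.        (* identical moduli; lia finishes commutativity *)
Qed.
```

Here the moduli cancel syntactically; in general each operation contributes its own `mod wB`, and `lia`/`nia` must reason about all of them at once, which is where a dedicated fixed-modulus decision strategy would help.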
Extending printing reduction effects to timing effects
- Estimated commitment: 10–100 loc, probably at most a couple of days of work

Extending reduction effects to the VM and native compiler
Extending `rewrite_strat` with `cbv` and `lazy` reduction-ordering strategies
- Currently `rewrite_strat` supports only `topdown`, `bottomup`, `subterms`, `innermost`, and `outermost`. See slide 32 of the presentation of Accelerating Verified-Compiler Development with a Verified Rewriting Engine (15m50s into the presentation on YouTube) or page 14 of Towards a Scalable Proof Engine: A Performant Prototype Rewriting Primitive for Coq for some explanation of why we want reduction ordering for rewriting.
- Estimated commitment: the current implementation of `subterm` is 241 loc, `lazy` is about 150 loc + 1250 loc, and `cbv` is about 550 loc. Adapting `rewrite_strat` will probably take 0.5–3 hours of designing how reduction ordering should work in theory, and then some amount of implementation time (depending on how much code can be reused, how much has to be redone from scratch, how long it takes to understand the existing code, etc.).
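For reference, here is what the existing orderings look like in use (standard `rewrite_strat` syntax; the example is arbitrary):

```coq
Require Import Arith.

Goal forall x : nat, (x + 0) * (0 + x) = x * x.
Proof.
  intro x.
  (* topdown tries the rules outside-in; bottomup, innermost, outermost,
     and subterms give the other available traversal orders. None of
     them is a cbv- or lazy-style reduction ordering. *)
  rewrite_strat (topdown (choice Nat.add_0_r Nat.add_0_l)).
  reflexivity.
Qed.
```

The proposal is to add strategies whose traversal order mirrors `cbv` and `lazy` evaluation, so that rewriting visits redexes in the same order a reduction engine would.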
Establish a methodology for performing PHOAS passes that need to produce both `expr`-like output and data-like output simultaneously without exponential blowup
- Concretely:
  - solve mit-plv/fiat-crypto#1604 with option (2) without exponential blowup; and
  - rework mit-plv/fiat-crypto#1761 to avoid exponential blowup
- Idea: Currently we’re trying to write a pass of type `expr var1 * expr var2 -> A * expr var3`. If we define an `expr`-like tree structure that (a) doesn’t use higher-order things for `Abs` nodes, and (b) stores an `A` at every node, then I think we can write a pass of type `expr var1 * expr var2 -> A * tree-of-A` and then one of type `expr var1 * expr var2 * tree-of-A -> expr var3`, such that we incur only linear overhead.
- Estimated commitment if I were doing it: maybe a day for mit-plv/fiat-crypto#1604 and a couple of days to a week for mit-plv/fiat-crypto#1761, and then a couple of weeks to a month or two to make the performance charts, write up a clean functional-pearl implementation, and write a conference paper.
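A minimal sketch of the proposed first-order companion structure (the `expr` here is a generic PHOAS type standing in for fiat-crypto's; all names are illustrative):

```coq
(* A toy PHOAS expr and the proposed first-order companion tree. The
   tree mirrors expr's shape, stores an A at every node, and uses a
   plain constructor for Abs (no higher-order function), so it can be
   traversed alongside an expr without re-running the analysis. *)
Inductive expr (var : Type) : Type :=
| Var : var -> expr var
| App : expr var -> expr var -> expr var
| Abs : (var -> expr var) -> expr var.

Inductive treeA (A : Type) : Type :=
| TVar : A -> treeA A
| TApp : A -> treeA A -> treeA A -> treeA A
| TAbs : A -> treeA A -> treeA A.   (* first-order: just the body's tree *)

(* Pass 1: expr var1 * expr var2 -> A * treeA A
   Pass 2: expr var1 * expr var2 * treeA A -> expr var3
   Pass 2 reads the precomputed A at each node from the treeA instead
   of recomputing it on subterms, keeping the total work linear. *)
```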