`perga` is a basic proof assistant based on a dependently typed lambda calculus (the calculus of constructions). The implementation is based on the exposition in Nederpelt and Geuvers’ *Type Theory and Formal Proof*. Right now it is a perfectly capable higher-order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on. At the moment, `perga` is roughly comparable to Automath: slightly more powerful, and a touch less ergonomic.
The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as `megaparsec` considers them alphanumeric. `λ` and `Π` abstractions can be written in the usual ways, which should be clear from the examples below. Additionally, as usual, an arrow can be used as an abbreviation for a Π type whose parameter doesn’t appear in the body.
To be perfectly clear, `λ` abstractions can be written with either “λ” or “fun”, and are separated from their bodies by either “=>” or “⇒”. Binders with the same type can be grouped together, and multiple binders can occur between the “λ” and the “⇒”.
`Π` types can be written with either “Π”, “∀”, or “forall”, and are separated from their bodies with a “,”. Arrow types can be written “->” or “→”. Like with `λ` abstractions, binders with the same type can be grouped, and multiple binders can occur between the “Π” and the “,”.
(The distinction between `<type>` and `<term>` is for emphasis; they are the exact same syntactic category.) Here are a couple of definitions of the `const` function from above showing the syntactic options, plus a more complex example declaring functional extensionality as an axiom (assuming equality has been previously defined with type `eq : Π (A : *), A → A → *`). Duplicate definitions are not normally allowed and will result in an error.
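As a sketch of what these could look like (assuming top-level definitions are written `name : type := term`, and that `eq` has already been defined as above):

```
-- Two equivalent spellings of const:
const : Π (A B : *), A → B → A
  := λ (A B : *) (x : A) (y : B) ⇒ x

const2 : forall (A : *) (B : *), A -> B -> A
  := fun (A : *) (B : *) (x : A) (y : B) => x

-- Functional extensionality, declared as an axiom:
funext : Π (A B : *) (f g : A → B),
    (Π (x : A), eq B (f x) (g x)) → eq (A → B) f g
  := axiom
```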
Type ascriptions are optional. If included, `perga` will check that your definition matches the ascription and, if so, will remember the way you wrote the type when printing inferred types, which is particularly handy when using abbreviations for complex types. `perga` has no problem inferring the types of top-level definitions, as they are completely determined by the term, but I recommend including ascriptions most of the time: they serve as a nice piece of documentation, help guide the implementation process, and make sure you are implementing the type you think you are.
If the RHS of a definition is `axiom`, then `perga` will assume that the identifier is an inhabitant of the ascribed type (so when using `axiom`, a type ascription is required).
Line comments are `--` like in Haskell, and block comments are `[* *]` somewhat like ML (and nest properly). There is no significant whitespace, so you are free to format code as you wish.
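For example (a hypothetical snippet, using the same `name : type := term` definition shape sketched elsewhere in this README):

```
-- a line comment
[* a block comment [* nesting [* arbitrarily *] deep *] is fine *]
id : Π (A : *), A → A := λ (A : *) (x : A) ⇒ x
```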
There isn’t a proper module system (yet), but you can include other files in a dumb, C preprocessor way by using `@include <filepath>` (NOTE: this unfortunately messes up line numbers in error messages). Filepaths are relative to the current file.
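For instance (`lib/logic.pg` is a hypothetical path and extension):

```
@include lib/logic.pg
-- everything defined in lib/logic.pg is now in scope
```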
Running `perga` without any arguments drops you into a basic repl. From here, you can type in definitions which `perga` will typecheck. Previous definitions are accessible in future definitions. The usual readline keybindings are available, including navigating history, which is saved between sessions (in `~/.cache/perga/history`). In the repl, you can enter “:q”, press C-c, or press C-d to quit. Entering “:e” shows everything that has been defined along with their types. Entering “:t <ident>” prints the type of a particular identifier, while “:v <ident>” prints its value. Entering “:n <expr>” will fully normalize (including unfolding definitions) an expression, while “:w <expr>” will reduce it to weak head normal form. Finally “:l <filepath>” loads a file.
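A hypothetical session (the prompt and output formatting here are illustrative, not exact):

```
$ perga
> id : Π (A : *), A → A := λ (A : *) (x : A) ⇒ x
> :t id
Π (A : *), A → A
> :q
```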
You can also give `perga` a filename as an argument, in which case it will typecheck every definition in the file. If you give `perga` multiple filenames, it will process each one in turn, sharing an environment between them. Upon finishing, which should be nearly instantaneous, it will print out all the files it processed, followed by “success!” if everything typechecked, or the first error it encountered otherwise.
I decided to bake `let` expressions into the formal language, rather than making them a layer on top of the syntax. This means `let` expressions can be typed differently from functions. The only substantial difference between `let (x := e) in b` and the application `(λ (x : A) ⇒ b) e` is that the latter must have a proper function type, while the former doesn’t need one. So, for instance,
```
let (x := *) in ...
```
is possible, while you could not write a function whose argument has type `□`. We are justified in doing this because we always know what the argument is and can immediately substitute it into both the value and the type of the body.
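Concretely, something like this typechecks (a sketch):

```
-- Fine: the bound value is known and is substituted immediately,
-- even though * has type □ and no function can abstract over □:
let (A := *) in λ (x : A) ⇒ x
```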
Coq-style sections would be very handy, and probably *relatively* easy to implement (compared to everything else on this todo list). Upon parsing a definition inside a section, I will somehow need to look ahead to see which section variables are used in order to modify `binders` accordingly, or just make every definition take every section variable as an argument.
Full inference is not decidable, but I might be able to implement a basic unification algorithm, or switch to bidirectional type checking. This isn’t super necessary, though: I find leaving off the types of arguments to generally be a bad idea, but in some cases it can be handy, especially below the top level.
Much, much more useful than [inference](#orgb7d6c6f), implicit arguments would be amazing. It also seems a lot more complicated, but any system for dealing with implicit arguments is far better than none.
A proper module system would be wonderful. To me, ML-style modules with structures, signatures, and functors seem like the right way to handle algebraic structures in a relatively simple language, rather than records (or, worse, a bunch of `and`’s like I currently have, which is especially painful without [implicits](#orgc3b40df)) or type classes (probably much harder, but could be nicer), but any way of managing scope, importing files, etc. is a necessity. The [F-ing modules](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/B573FA00832D55D4878863DE1725D90B/S0956796814000264a.pdf/f-ing-modules.pdf) paper is probably a good reference.
This is definitely a stretch goal. It would be cool though, and would turn this proof checker into a much more competent programming language. It’s not necessary for the math, but inductive definitions let you leverage computation in proofs, which is amazing. They also make certain definitions way easier by avoiding the need to manually stipulate elimination rules (including induction principles), and let you keep more math constructive and understandable to the computer.
Right now, everything defaults to one line, which can be a problem with how large the proof terms get. Probably want to use [prettyprinter](https://hackage.haskell.org/package/prettyprinter) to be able to nicely handle indentation and line breaks.
The repl is decent, probably the most fully-featured repl I’ve ever made, but implementing something like [this](https://abhinavsarkar.net/posts/repling-with-haskeline/) would be awesome.
Error messages are decent, but a little buggy. Syntax error messages are pretty okay, but could have better labeling. The typechecking error messages are decent, but could do with better location information: right now, the location defaults to the end of the current definition, which is often good enough, but more detail can’t hurt. The errors are also fairly janky and hard to read; now that I’ve had quite a bit of practice reading them, I find they provide very useful information, but they could be made **a lot** more readable.
Low priority: I’m the only one working on this, I’m working on it very actively, and things will keep changing rapidly. But I’ll want to get around to it once things stabilize, before I forget how everything works.
### TODO Add versions to `perga.cabal` and/or nixify
Right now, if there’s a failure, everything just stops immediately. More incremental parsing/typechecking could pave the way for more interactivity, e.g. development with holes, an LSP server, etc., not to mention better error messages.
Right now, the parsing and typechecking kind of happens all at once on a single syntax representation. As I add fancier and fancier elaboration, it might be a good idea to have multiple syntax representations. So we’d have one level of syntax representing what is actually in the file (modulo some easy elaboration, like with function definitions), and a series of transformations would lower it into something like the current `Expr` type, with all the information needed for typechecking and all the complex language features removed.
I’ve had a bunch of ideas for a more mathematician-friendly syntax bouncing around my head for a while. Implementing one of them would be awesome, but probably quite tricky.
Infix/mixfix operators would be very nice and would make `perga` look more normal. It’s funny: at the moment it looks a lot like a Lisp, even though it’s totally not.
```
(eq_trans nat (plus n (suc m)) (suc (plus n m)) (plus (suc m) n)
  (plus_s_r n m)
  (eq_trans nat (suc (plus n m)) (suc (plus m n)) (plus (suc m) n)
    (eq_cong nat nat (plus n m) (plus m n) suc IH)
    (eq_sym nat (plus (suc m) n) (suc (plus m n)) (plus_s_l m n))))
```
There’s a [tree-sitter parser](https://forgejo.ballcloud.cc/wball/tree-sitter-perga) and [neovim plugin](https://forgejo.ballcloud.cc/wball/perga.nvim) available now, but no emacs-mode.
This is definitely a stretch goal, and I’m not sure how good of an idea it would be, but I’m imagining a TUI split into two panels. On the left you can see the term you are building, with holes in it. On the right you have the focused hole’s type as well as the types of everything in scope (like Coq and Lean show while you’re in the middle of a proof). Then you interact with the system by entering commands (e.g. `intros`, `apply`, etc.) which change the proof term on the left. You’d also be able to type directly in the left window and edit the proof term by hand. This way you’d get the benefits of working with tactics, making it way faster to construct proof terms, and the benefits of working with proof terms directly, namely transparency and simplicity. I’ll probably want to look into [brick](https://hackage.haskell.org/package/brick) if I want to make this happen.