HUGE update
This commit is contained in:
parent
c1ccd50644
commit
8c5311a2f6
16 changed files with 689 additions and 344 deletions
158
README.md
|
|
@ -1,88 +1,108 @@
|
|||
A basic implementation of a dependently typed lambda calculus (calculus of constructions) based on the exposition in Nederpelt and Geuvers *Type Theory and Formal Proof*. Right now it is *technically* a perfectly capable higher order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on.
|
||||
`perga` is a basic proof assistant based on a dependently typed lambda calculus (calculus of constructions). The implementation follows the exposition in Nederpelt and Geuvers’ *Type Theory and Formal Proof*. Right now it is a perfectly capable higher order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on. At the moment, `perga` is comparable to Automath in power and ease of use: slightly more powerful (though it lacks a *primitive notion* system for now), and a touch less ergonomic.
|
||||
|
||||
|
||||
# Syntax
|
||||
|
||||
The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as `megaparsec` calls them alphanumeric. Lambda and Pi abstractions can be written in many obvious ways that should be clear from the examples below. Additionally, arrows can be used as an abbreviation for a Π type where the parameter doesn’t appear in the body as usual.
|
||||
The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as `megaparsec` calls them alphanumeric. `λ` and `Π` abstractions can be written in many obvious ways that should be clear from the examples below. Additionally, arrows can be used as an abbreviation for a Π type where the parameter doesn’t appear in the body, as usual.
|
||||
|
||||
All of the following examples correctly parse, and should look familiar if you are used to standard lambda calculus notation or Coq syntax.
|
||||
All of the following example terms correctly parse, and should look familiar if you are used to standard lambda calculus notation or Coq syntax.
|
||||
|
||||
λ (α : *) . λ (β : *) . λ (x : α) . λ (y : β) . x
|
||||
fun (A B C : *) (g : B → C) (f : A → B) (x : A) ⇒ g (f x)
|
||||
fun (S : *) (P Q : S -> *) (H : Π (x : S) . P x -> Q x) (HP : forall (x : S), P x) => fun (x : S) => H x (HP x)
|
||||
|
||||
I mostly stick to Coq syntax throughout this file and the examples, as that is what I’m most used to and is easiest to type. I will probably make the syntax more strict in the future, as this level of flexibility is really not necessary.
|
||||
|
||||
# Basic Usage
|
||||
Definitions work similarly; their abstract syntax is shown below.
|
||||
|
||||
For the moment, running the program drops you into a repl where you can enter terms in the calculus of construction, which the system will then type check and report the type of the entered term, or any errors if present.
|
||||
<ident> (<ident> : <type>)* (: <type>)? := <term>;
|
||||
|
||||
(The distinction between `<type>` and `<term>` is for emphasis; they are the exact same syntactic category.) Here are a couple of definitions of the `const` function from above, showing the syntactic options.
|
||||
|
||||
const := λ (α : *) . λ (β : *) . λ (x : α) . λ (y : β) . x;
|
||||
const : forall (α β : *), α → β → α := fun (α β : *) (x : α) (y : β) ⇒ x;
|
||||
const (α β : *) (x : α) (y : β) : α := x;
|
||||
|
||||
Type ascriptions are optional. If included, `perga` will check to make sure your definition matches the ascription, and, if so, will remember the way you wrote the type when printing inferred types, which is particularly handy when using abbreviations for complex types. `perga` has no problem inferring the types of top-level definitions, as they are completely determined by the term, but I recommend including ascriptions most of the time, as they serve as a nice piece of documentation, help guide the implementation process, and make sure you are implementing the type you think you are.
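For instance, here is a small sketch of both options (the `proj` definitions are made up for illustration):

-- no ascription: `perga` infers the type from the term
proj1 (α β : *) (x : α) (y : β) := x;
-- with an ascription: `perga` checks it and keeps your spelling of the type when printing
proj2 (α β : *) (x : α) (y : β) : α := x;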
|
||||
|
||||
Line comments are `--` like in Haskell, and block comments are `(* *)` like ML (and nest properly). There is no significant whitespace, so you are free to format code as you wish.
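For example (the `flip` definition is made up, just to have something to comment):

-- a line comment
(* a block comment (* which nests *) just fine *)
flip (α β γ : *) (f : α → β → γ) (y : β) (x : α) : γ := f x y;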
|
||||
|
||||
|
||||
## Sample Session
|
||||
# Usage
|
||||
|
||||
Here’s a sample session. Suppose your goal is to prove that for every set $S$ and pair of propositions $P$ and $Q$ on $S$, if $\forall x \in S, P x$ holds, and $\forall x \in S, P x \Rightarrow Q x$, then $\forall x \in S, Q x$. A set $S$ corresponds to a type `S`, so our term will begin
|
||||
Running `perga` without any arguments drops you into a basic repl. From here, you can type in definitions which `perga` will typecheck. Previous definitions are accessible in future definitions. The usual readline keybindings are available, including navigating history, which is saved between sessions (in `~/.cache/perga/history`). In the repl, you can enter “:q”, press C-c, or press C-d to quit. Entering “:e” shows everything that has been defined along with their types. Entering “:t <ident>” prints the type of a particular identifier.
|
||||
|
||||
> fun (S : *)
|
||||
|
||||
Likewise, propositions on $S$ are functions `S → *`, so we can continue
|
||||
|
||||
> fun (S : *) (P Q : S → *)
|
||||
|
||||
Since $\forall$ corresponds with `Π`, the hypothesis $\forall x \in S, P x$ would correspond to the type `Π (x : S) . P x` and the hypothesis $\forall x \in S, P x \Rightarrow Q x$ would correspond with `Π (x : S) . P x → Q x`. Thus we can continue
|
||||
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x)
|
||||
|
||||
Since we are trying to prove a universally quantified proposition, we need to introduce an `x : S`, so
|
||||
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x) (x : S)
|
||||
|
||||
Finally, all we have left to do is find something of type `Q x`. We can get to `Q x` using `H x` if we can find something of type `P x`. Fortunately, `HP x` has type `P x`, so our final term is
|
||||
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x) (x : S) ⇒ H x (HP x)
|
||||
|
||||
Pressing enter sends this term to the system, which responds with
|
||||
|
||||
λ (S : *) (P Q : S -> *) (HP : ∏ (x : S) . P x) (H : ∏ (x : S) . P x -> Q x) (x : S) . H x (HP x) : ∏ (S : *) (P Q : S -> *) . (∏ (x : S) . P x) -> (∏ (x : S) . P x -> Q x) -> ∏ (x : S) . Q x
|
||||
|
||||
This output is a bit hard to read, but it is ultimately in the form `term : type`. The `term` is, up to minor syntactic differences, the term we entered, and the `type` is the type of the term inferred by the system. As a nice sanity check, we can verify that the `type` indeed corresponds to the theorem we wanted to prove.
|
||||
|
||||
More complex and interesting proofs and theorems are technically possible (in fact, *all* interesting theorems and proofs are possible, for a certain definition of *interesting*, *theorem*, and *proof*), though practically infeasible without definitions.
|
||||
You can also give `perga` a filename as an argument, in which case it will typecheck every definition in the file. Upon finishing, which should be nearly instantaneous, it will print out all the definitions it parsed along with their types (as if you had typed “:e” in the repl) so you can verify that it worked.
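To illustrate the repl commands described above, here is a hypothetical session (the `id` definition is made up; the printed types follow the output style shown in the examples below):

> id (A : *) (x : A) : A := x;
> :t id
∏ (A : *) . A -> A
> :e
id : ∏ (A : *) . A -> A
> :q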
|
||||
|
||||
|
||||
# Goals
|
||||
# Simple Example
|
||||
|
||||
A much larger, commented example is located in <./examples/example.pg>. Here is an example file defining Leibniz equality and proving that it is reflexive, symmetric, and transitive.
|
||||
|
||||
-- file: equality.pg
|
||||
|
||||
-- Defining Leibniz equality
|
||||
-- Note that we can leave the ascription off
|
||||
eq (A : *) (x y : A) := forall (P : A -> *), P x -> P y;
|
||||
|
||||
-- Equality is reflexive, which is easy to prove
|
||||
-- Here we give an ascription so that when `perga` reports the type,
|
||||
-- it references `eq` rather than inferring the type.
|
||||
eq_refl (A : *) (x : A) : eq A x x := fun (P : A -> *) (Hx : P x) => Hx;
|
||||
|
||||
-- Equality is symmetric. This one's a little harder to prove.
|
||||
eq_sym (A : *) (x y : A) (Hxy : eq A x y) : eq A y x :=
|
||||
fun (P : A -> *) (Hy : P y) =>
|
||||
Hxy (fun (z : A) => P z -> P x) (fun (Hx : P x) => Hx) Hy;
|
||||
|
||||
-- Equality is transitive.
|
||||
eq_trans (A : *) (x y z : A) (Hxy : eq A x y) (Hyz : eq A y z) : eq A x z :=
|
||||
fun (P : A -> *) (Hx : P x) => Hyz P (Hxy P Hx);
|
||||
|
||||
Running `perga equality.pg` yields the following output.
|
||||
|
||||
eq : ∏ (A : *) . A -> A -> *
|
||||
eq_refl : ∏ (A : *) (x : A) . eq A x x
|
||||
eq_sym : ∏ (A : *) (x y : A) . eq A x y -> eq A y x
|
||||
eq_trans : ∏ (A : *) (x y z : A) . eq A x y -> eq A y z -> eq A x z
|
||||
|
||||
This means our proofs were accepted. Furthermore, as a sanity check, we can see that the types correspond exactly to what we wanted to prove.
|
||||
|
||||
|
||||
# Future Goals
|
||||
|
||||
|
||||
## Substantive
|
||||
|
||||
|
||||
### TODO DEFINITIONS
|
||||
### TODO Let-expressions
|
||||
|
||||
Some kind of definition system would be a difficult and substantial addition, but man is it necessary to do anything. Likewise, I’d probably want a way to define primitive notions/axioms, but that should be an easy extension of the definition system. Further following *Type Theory and Formal Proof* would additionally yield a nice context system, which would be handy, though I disagree with the choice to differentiate between parameterized definitions and functions. That distinction doesn’t really make sense in full calculus of constructions.
|
||||
|
||||
Sidenote: With a definition system, I would greatly prefer Haskell-style type annotations to ML-style type annotations, though the latter are likely way easier to implement. A standalone annotation serves as a nice bit of documentation, de-clutters the function definition, and follows the familiar mathematical statement-proof structure.
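For concreteness, here is one possible reading of the distinction, sketched in made-up syntax (only the first, inline style exists today):

-- ML-style: the annotation is attached to the definition itself
const (α β : *) (x : α) (y : β) : α := x;
-- Haskell-style (hypothetical): state the type first, then give the definition separately
const : forall (α β : *), α → β → α;
const (α β : *) (x : α) (y : β) := x;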
|
||||
|
||||
1. TODO Add support for named free variables to `Expr`
|
||||
|
||||
I’m taking the locally nameless approach. Dealing with free variables as de Bruijn indices greater than the context sounds terrible. This requires adding support for δ reductions, as well as performing reductions and type checking operations in an environment. NOTE: I’ll probably want to use the [Reader monad](https://hackage.haskell.org/package/mtl-2.3.1/docs/Control-Monad-Reader.html).
|
||||
I vaguely remember from reading Coq’art that Coq does something special with `let` expressions, so I’ll maybe want to check that out. I tried implementing `let` as syntax sugar for an immediately called function, but that proved to be a massive mess with how I’m handling things. `let` expressions would definitely be handy for factoring out complex sub expressions.
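For reference, here is a sketch of that desugaring (the `let ... in ...` syntax is hypothetical; `nat`, `zero`, `suc`, and `add` are from the example file):

-- hypothetical surface syntax
two_let : nat := let one : nat := suc zero in add one one;
-- what it would desugar to: an immediately applied function
two_app : nat := (fun (one : nat) => add one one) (suc zero);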
|
||||
|
||||
|
||||
### TODO Inference
|
||||
|
||||
Obviously not fully decidable, but I might be able to implement some basic unification algorithm. This isn’t super necessary though, I find leaving off the types of arguments to generally be a bad idea, but in some cases it can be worth it.
|
||||
Obviously not fully decidable, but I might be able to implement some basic unification algorithm. This isn’t super necessary though; I find leaving off the types of arguments to generally be a bad idea, but in some cases it can be handy, especially when not at the top level.
|
||||
|
||||
|
||||
### TODO Implicits
|
||||
|
||||
Much, much more useful than [inference](#orgb32d6a9), implicit arguments would be amazing. It also seems a lot more complicated, but any system for dealing with implicit arguments is far better than none.
|
||||
Much, much more useful than [inference](#org79d490c), implicit arguments would be amazing. It also seems a lot more complicated, but any system for dealing with implicit arguments is far better than none. Getting rid of stuff like [lines 213-215 of the example file](./examples/example.pg) would be amazing.
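As a rough illustration (the implicit-argument call is hypothetical syntax; `or_intro_l` is from the example file):

-- today, every argument must be spelled out
or_intro_l A B a
-- with implicits, `A` and `B` could be inferred from `a` and the expected type
or_intro_l a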
|
||||
|
||||
|
||||
### TODO Module System
|
||||
|
||||
A proper module system would be wonderful. To me, ML-style modules with structures, signatures, and functors seem like the right way to handle algebraic structures for a relatively simple language, rather than records (or a bunch of `and`’s like I currently have) or type classes (probably much harder, but could be nicer), but any way of managing scope, importing files, etc. is a necessity.
|
||||
|
||||
|
||||
### TODO Universes?
|
||||
|
||||
Not really necessary without [inductive definitions](#orgc763d54), but could be fun.
|
||||
Not really all that necessary, especially without [inductive definitions](#org357916d), but could be fun.
|
||||
|
||||
|
||||
### TODO Inductive Definitions
|
||||
|
||||
This is definitely a stretch goal. It would be cool though, and would turn this proof checker into a much more competent programming language. It’s not necessary for the math, but inductive definitions let you leverage computation in proofs, which is amazing. They also make certain definitions easier, by avoiding needing to manually stipulate elimination rules, including induction principles.
|
||||
This is definitely a stretch goal. It would be cool though, and would turn this proof checker into a much more competent programming language. It’s not necessary for the math, but inductive definitions let you leverage computation in proofs, which is amazing. They also make certain definitions way easier by avoiding the need to manually stipulate elimination rules, including induction principles, and they let you keep more math constructive and understandable to the computer.
|
||||
|
||||
|
||||
## Cosmetic/usage/technical
|
||||
|
|
@ -93,36 +113,56 @@ This is definitely a stretch goal. It would be cool though, and would turn this
|
|||
Right now, everything defaults to one line, which can be a problem with how large the proof terms get. Probably want to use [prettyprinter](https://hackage.haskell.org/package/prettyprinter) to be able to nicely handle indentation and line breaks.
|
||||
|
||||
|
||||
### DONE Better usage
|
||||
### TODO Better repl
|
||||
|
||||
Read input from a file if a filename is given, otherwise drop into a repl. Perhaps use something like [haskeline](https://hackage.haskell.org/package/haskeline) to improve the repl (though `rlwrap` is fine, so not a huge priority).
|
||||
|
||||
For even better usage in the future, check out [optparse](https://hackage.haskell.org/package/optparse-applicative) for command line arguments and [this really cool blogpost](https://abhinavsarkar.net/posts/repling-with-haskeline/) for some haskeline magic.
|
||||
The repl is decent, but implementing something like [this](https://abhinavsarkar.net/posts/repling-with-haskeline/) would be awesome. I’d also at least like to add a new command `":l <filename>"` to load the definitions from a file.
|
||||
|
||||
|
||||
### TODO Improve error messages
|
||||
|
||||
The error messages currently aren’t terrible, but it would be nice to have, e.g. line numbers. `megaparsec` allows for semantic errors, so somehow hook into that like what I’m doing for unbound variables?
|
||||
Error messages are decent, but a little buggy. Syntax error messages are pretty ok, but could have better labeling. The type check error messages are decent, but could do with better location information. Right now, the location defaults to the end of the current definition, which is often good enough, but more detail can’t hurt.
|
||||
|
||||
|
||||
### DONE Improve β-equivalence check
|
||||
### TODO Better testing
|
||||
|
||||
The check for β-equivalence could certainly be a lot smarter. Right now it just fully normalizes both terms and checks if the normal forms are equal, which isn’t the best strategy. This is much less of an issue without [inductive definitions](#orgc763d54)/recursion, as far less computation gets done in general, let alone at the type level. That being said, it’s certainly not a bad idea.
|
||||
|
||||
Improved the β-equivalence check to first reduce both terms to WHNF then walk the syntax tree and recursively compare the terms. I figured this would be a necessary precondition to adding definitions, since fully normalizing everything with definitions (recursively unfolding all definitions) seems absolutely horrendous. Also implementing this wasn’t nearly as hard as I would have thought.
|
||||
I would like to avoid regressions as I keep working on this, and a suite of unit tests would make me feel much more comfortable. I made unit tests, then added a ton of stuff. Most of the unit tests are kind of pointless now. For now, I think running the code on the example file is pretty sufficient.
|
||||
|
||||
|
||||
### DONE Better testing
|
||||
### TODO Alternate syntax
|
||||
|
||||
Using some kind of testing framework, like [QuickCheck](https://hackage.haskell.org/package/QuickCheck) and/or [HUnit](https://hackage.haskell.org/package/HUnit) seems like a good idea. I would like to avoid regressions as I keep working on this, and a suite of unit tests would make me feel much more comfortable.
|
||||
I’ve had a bunch of ideas for a more mathematician-friendly syntax bouncing around my head for a while. Implementing one of them would be awesome, but probably quite tricky.
|
||||
|
||||
Something like
|
||||
|
||||
Theorem basic (S : *) (P Q : S → *) :
|
||||
(∀ (x : S), P x → Q x) → (∀ (x : S), P x) → ∀ (x : S), Q x.
|
||||
Proof
|
||||
1. Suppose ∀ (x : S), P x → Q x
|
||||
2. Suppose ∀ (x : S), P x
|
||||
3. Let x : S
|
||||
4. P x by [2 x]
|
||||
5. Q x by [1 x 4]
|
||||
Qed
|
||||
|
||||
I think could be reliably translated into
|
||||
|
||||
basic (S : *) (P Q : S → *) : (Π (x : S), P x → Q x) → (Π (x : S), P x) → Π (x : S), Q x :=
|
||||
fun (a1 : Π (x : S), P x → Q x) ⇒
|
||||
fun (a2 : Π (x : S), P x) ⇒
|
||||
fun (x : S) ⇒
|
||||
a1 x (a2 x);
|
||||
|
||||
and is more intuitively understandable to a mathematician not familiar with type theory, while the latter would be utter nonsense.
|
||||
|
||||
I’m imagining the parser could be chosen based on the file extension or something. Some way to mix the syntaxes could be nice too.
|
||||
|
||||
|
||||
### DONE Switch to `Text`
|
||||
### TODO treesitter parser and/or emacs mode
|
||||
|
||||
Currently I use `String` everywhere. Switching to `Text` should be straightforward, and would improve performance and Unicode support.
|
||||
Really not necessary, especially while the syntax is in a bit of flux, but would eventually be nice. The syntax is simple enough that a treesitter grammar shouldn’t be too hard to write. An emacs mode would especially be nice if I ever end up implementing an [alternate syntax](#orgf503794), to better handle indentation, automatically adjust line numbers, etc.
|
||||
|
||||
|
||||
### TODO TUI
|
||||
|
||||
This is definitely a stretch goal, but I’m imagining a TUI split into two panels. On the left you can see the term you are building with holes in it. On the right you have the focused hole’s type as well as the types of everything in scope (like Coq and Lean show while you’re in the middle of a proof). Then you can interact with the system by entering commands (e.g. `intros`, `apply`, etc.) which changes the proof term on the left. You’d also just be able to type in the left window as well, and edit the proof term directly. This way you’d get the benefits of working with tactics, making it way faster to construct proof terms, and the benefits of working with proof terms directly, namely transparency and simplicity. I’ll probably want to look into [brick](https://hackage.haskell.org/package/brick) in order to make this happen.
|
||||
This is definitely a stretch goal, and I’m not sure how good of an idea it would be, but I’m imagining a TUI split into two panels. On the left you can see the term you are building with holes in it. On the right you have the focused hole’s type as well as the types of everything in scope (like Coq and Lean show while you’re in the middle of a proof). Then you can interact with the system by entering commands (e.g. `intros`, `apply`, etc.) which changes the proof term on the left. You’d also just be able to type in the left window as well, and edit the proof term directly. This way you’d get the benefits of working with tactics, making it way faster to construct proof terms, and the benefits of working with proof terms directly, namely transparency and simplicity. I’ll probably want to look into [brick](https://hackage.haskell.org/package/brick) if I want to make this happen.
|
||||
|
||||
|
|
|
|||
154
README.org
|
|
@ -1,97 +1,135 @@
|
|||
#+title: Dependent Lambda
|
||||
#+title: Perga
|
||||
#+options: toc:nil
|
||||
|
||||
A basic implementation of a dependently typed lambda calculus (calculus of constructions) based on the exposition in Nederpelt and Geuvers /Type Theory and Formal Proof/. Right now it is /technically/ a perfectly capable higher order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on.
|
||||
=perga= is a basic proof assistant based on a dependently typed lambda calculus (calculus of constructions). The implementation follows the exposition in Nederpelt and Geuvers' /Type Theory and Formal Proof/. Right now it is a perfectly capable higher order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on. At the moment, =perga= is comparable to Automath in power and ease of use: slightly more powerful (though it lacks a /primitive notion/ system for now), and a touch less ergonomic.
|
||||
|
||||
* Syntax
|
||||
The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as =megaparsec= calls them alphanumeric. Lambda and Pi abstractions can be written in many obvious ways that should be clear from the examples below. Additionally, arrows can be used as an abbreviation for a Π type where the parameter doesn't appear in the body as usual.
|
||||
The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as =megaparsec= calls them alphanumeric. =λ= and =Π= abstractions can be written in many obvious ways that should be clear from the examples below. Additionally, arrows can be used as an abbreviation for a Π type where the parameter doesn't appear in the body, as usual.
|
||||
|
||||
All of the following examples correctly parse, and should look familiar if you are used to standard lambda calculus notation or Coq syntax.
|
||||
All of the following example terms correctly parse, and should look familiar if you are used to standard lambda calculus notation or Coq syntax.
|
||||
#+begin_src
|
||||
λ (α : *) . λ (β : *) . λ (x : α) . λ (y : β) . x
|
||||
fun (A B C : *) (g : B → C) (f : A → B) (x : A) ⇒ g (f x)
|
||||
fun (S : *) (P Q : S -> *) (H : Π (x : S) . P x -> Q x) (HP : forall (x : S), P x) => fun (x : S) => H x (HP x)
|
||||
#+end_src
|
||||
I mostly stick to Coq syntax throughout this file and the examples, as that is what I'm most used to and is easiest to type. I will probably make the syntax more strict in the future, as this level of flexibility is really not necessary.
|
||||
|
||||
* Basic Usage
|
||||
For the moment, running the program drops you into a repl where you can enter terms in the calculus of construction, which the system will then type check and report the type of the entered term, or any errors if present.
|
||||
Definitions work similarly; their abstract syntax is shown below.
|
||||
#+begin_src
|
||||
<ident> (<ident> : <type>)* (: <type>)? := <term>;
|
||||
#+end_src
|
||||
(The distinction between =<type>= and =<term>= is for emphasis; they are the exact same syntactic category.) Here are a couple of definitions of the =const= function from above, showing the syntactic options.
|
||||
#+begin_src
|
||||
const := λ (α : *) . λ (β : *) . λ (x : α) . λ (y : β) . x;
|
||||
const : forall (α β : *), α → β → α := fun (α β : *) (x : α) (y : β) ⇒ x;
|
||||
const (α β : *) (x : α) (y : β) : α := x;
|
||||
#+end_src
|
||||
Type ascriptions are optional. If included, =perga= will check to make sure your definition matches the ascription, and, if so, will remember the way you wrote the type when printing inferred types, which is particularly handy when using abbreviations for complex types. =perga= has no problem inferring the types of top-level definitions, as they are completely determined by the term, but I recommend including ascriptions most of the time, as they serve as a nice piece of documentation, help guide the implementation process, and make sure you are implementing the type you think you are.
|
||||
|
||||
** Sample Session
|
||||
Here's a sample session. Suppose your goal is to prove that for every set $S$ and pair of propositions $P$ and $Q$ on $S$, if $\forall x \in S, P x$ holds, and $\forall x \in S, P x \Rightarrow Q x$, then $\forall x \in S, Q x$. A set $S$ corresponds to a type =S=, so our term will begin
|
||||
#+begin_src
|
||||
> fun (S : *)
|
||||
#+end_src
|
||||
Likewise, propositions on $S$ are functions =S → *=, so we can continue
|
||||
#+begin_src
|
||||
> fun (S : *) (P Q : S → *)
|
||||
#+end_src
|
||||
Since $\forall$ corresponds with =Π=, the hypothesis $\forall x \in S, P x$ would correspond to the type =Π (x : S) . P x= and the hypothesis $\forall x \in S, P x \Rightarrow Q x$ would correspond with =Π (x : S) . P x → Q x=. Thus we can continue
|
||||
#+begin_src
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x)
|
||||
#+end_src
|
||||
Since we are trying to prove a universally quantified proposition, we need to introduce an =x : S=, so
|
||||
#+begin_src
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x) (x : S)
|
||||
#+end_src
|
||||
Finally, all we have left to do is find something of type =Q x=. We can get to =Q x= using =H x= if we can find something of type =P x=. Fortunately, =HP x= has type =P x=, so our final term is
|
||||
#+begin_src
|
||||
> fun (S : *) (P Q : S → *) (HP : forall (x : S), P x) (H : forall (x : S), P x → Q x) (x : S) ⇒ H x (HP x)
|
||||
#+end_src
|
||||
Pressing enter sends this term to the system, which responds with
|
||||
#+begin_src
|
||||
λ (S : *) (P Q : S -> *) (HP : ∏ (x : S) . P x) (H : ∏ (x : S) . P x -> Q x) (x : S) . H x (HP x) : ∏ (S : *) (P Q : S -> *) . (∏ (x : S) . P x) -> (∏ (x : S) . P x -> Q x) -> ∏ (x : S) . Q x
|
||||
#+end_src
|
||||
This output is a bit hard to read, but it is ultimately in the form =term : type=. The =term= is, up to minor syntactic differences, the term we entered, and the =type= is the type of the term inferred by the system. As a nice sanity check, we can verify that the =type= indeed corresponds to the theorem we wanted to prove.
|
||||
Line comments are =--= like in Haskell, and block comments are =(* *)= like ML (and nest properly). There is no significant whitespace, so you are free to format code as you wish.
|
||||
|
||||
More complex and interesting proofs and theorems are technically possible (in fact, /all/ interesting theorems and proofs are possible, for a certain definition of /interesting/, /theorem/, and /proof/), though practically infeasible without definitions.
|
||||
* Usage
|
||||
Running =perga= without any arguments drops you into a basic repl. From here, you can type in definitions which =perga= will typecheck. Previous definitions are accessible in future definitions. The usual readline keybindings are available, including navigating history, which is saved between sessions (in =~/.cache/perga/history=). In the repl, you can enter ":q", press C-c, or press C-d to quit. Entering ":e" shows everything that has been defined along with their types. Entering ":t <ident>" prints the type of a particular identifier.
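To illustrate, here is a hypothetical session (the =id= definition is made up; the printed types follow the output style shown in the examples below):
#+begin_src
> id (A : *) (x : A) : A := x;
> :t id
∏ (A : *) . A -> A
> :e
id : ∏ (A : *) . A -> A
> :q
#+end_src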
|
||||
|
||||
* Goals
|
||||
You can also give =perga= a filename as an argument, in which case it will typecheck every definition in the file. Upon finishing, which should be nearly instantaneous, it will print out all the definitions it parsed along with their types (as if you had typed ":e" in the repl) so you can verify that it worked.
|
||||
|
||||
* Simple Example
|
||||
A much larger, commented example is located in [[./examples/example.pg]]. Here is an example file defining Leibniz equality and proving that it is reflexive, symmetric, and transitive.
|
||||
#+begin_src
|
||||
-- file: equality.pg
|
||||
|
||||
-- Defining Leibniz equality
|
||||
-- Note that we can leave the ascription off
|
||||
eq (A : *) (x y : A) := forall (P : A -> *), P x -> P y;
|
||||
|
||||
-- Equality is reflexive, which is easy to prove
|
||||
-- Here we give an ascription so that when `perga` reports the type,
|
||||
-- it references `eq` rather than inferring the type.
|
||||
eq_refl (A : *) (x : A) : eq A x x := fun (P : A -> *) (Hx : P x) => Hx;
|
||||
|
||||
-- Equality is symmetric. This one's a little harder to prove.
|
||||
eq_sym (A : *) (x y : A) (Hxy : eq A x y) : eq A y x :=
|
||||
fun (P : A -> *) (Hy : P y) =>
|
||||
Hxy (fun (z : A) => P z -> P x) (fun (Hx : P x) => Hx) Hy;
|
||||
|
||||
-- Equality is transitive.
|
||||
eq_trans (A : *) (x y z : A) (Hxy : eq A x y) (Hyz : eq A y z) : eq A x z :=
|
||||
fun (P : A -> *) (Hx : P x) => Hyz P (Hxy P Hx);
|
||||
#+end_src
|
||||
Running =perga equality.pg= yields the following output.
|
||||
#+begin_src
|
||||
eq : ∏ (A : *) . A -> A -> *
|
||||
eq_refl : ∏ (A : *) (x : A) . eq A x x
|
||||
eq_sym : ∏ (A : *) (x y : A) . eq A x y -> eq A y x
|
||||
eq_trans : ∏ (A : *) (x y z : A) . eq A x y -> eq A y z -> eq A x z
|
||||
#+end_src
|
||||
This means our proofs were accepted. Furthermore, as a sanity check, we can see that the types correspond exactly to what we wanted to prove.
|
||||
|
||||
* Future Goals
|
||||
|
||||
** Substantive
|
||||
|
||||
*** TODO DEFINITIONS
|
||||
Some kind of definition system would be a difficult and substantial addition, but man is it necessary to do anything. Likewise, I'd probably want a way to define primitive notions/axioms, but that should be an easy extension of the definition system. Further following /Type Theory and Formal Proof/ would additionally yield a nice context system, which would be handy, though I disagree with the choice to differentiate between parameterized definitions and functions. That distinction doesn't really make sense in full calculus of constructions.
|
||||
|
||||
Sidenote: With a definition system, I would greatly prefer Haskell-style type annotations to ML-style type annotations, though the latter are likely way easier to implement. A standalone annotation serves as a nice bit of documentation, de-clutters the function definition, and follows the familiar mathematical statement-proof structure.
|
||||
|
||||
**** TODO Add support for named free variables to =Expr=
|
||||
I'm taking the locally nameless approach. Dealing with free variables as de Bruijn indices greater than the context sounds terrible. This requires adding support for \delta reductions, as well as performing reductions and type checking operations in an environment. NOTE: I'll probably want to use the [[https://hackage.haskell.org/package/mtl-2.3.1/docs/Control-Monad-Reader.html][Reader monad]].
|
||||
*** TODO Let-expressions
|
||||
I vaguely remember from reading Coq'art that Coq does something special with =let= expressions, so I'll maybe want to check that out. I tried implementing =let= as syntax sugar for an immediately called function, but that proved to be a massive mess with how I'm handling things. =let= expressions would definitely be handy for factoring out complex sub expressions.
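For reference, here is a sketch of that desugaring (the =let ... in ...= syntax is hypothetical; =nat=, =zero=, =suc=, and =add= are from the example file):
#+begin_src
-- hypothetical surface syntax
two_let : nat := let one : nat := suc zero in add one one;
-- what it would desugar to: an immediately applied function
two_app : nat := (fun (one : nat) => add one one) (suc zero);
#+end_src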
|
||||
|
||||
*** TODO Inference
|
||||
Obviously not fully decidable, but I might be able to implement some basic unification algorithm. This isn't super necessary though, I find leaving off the types of arguments to generally be a bad idea, but in some cases it can be worth it.
|
||||
Obviously not fully decidable, but I might be able to implement some basic unification algorithm. This isn't super necessary though; I find leaving off the types of arguments to generally be a bad idea, but in some cases it can be handy, especially when not at the top level.
|
||||
|
||||
*** TODO Implicits
|
||||
Much, much more useful than [[Inference][inference]], implicit arguments would be amazing. It also seems a lot more complicated, but any system for dealing with implicit arguments is far better than none.
|
||||
Much, much more useful than [[Inference][inference]], implicit arguments would be amazing. It also seems a lot more complicated, but any system for dealing with implicit arguments is far better than none. Getting rid of stuff like [[./examples/example.pg::213][lines 213-215 of the example file]] would be amazing.
|
||||
|
||||
*** TODO Module System
|
||||
A proper module system would be wonderful. To me, ML-style modules with structures, signatures, and functors seem like the right way to handle algebraic structures for a relatively simple language, rather than records (or a bunch of =and='s like I currently have) or type classes (probably much harder, but could be nicer), but any way of managing scope, importing files, etc. is a necessity.
|
||||
|
||||
*** TODO Universes?
|
||||
Not really necessary without [[Inductive Definitions][inductive definitions]], but could be fun.
|
||||
Not really all that necessary, especially without [[Inductive Definitions][inductive definitions]], but could be fun.
|
||||
|
||||
*** TODO Inductive Definitions
|
||||
This is definitely a stretch goal. It would be cool though, and would turn this proof checker into a much more competent programming language. It's not necessary for the math, but inductive definitions let you leverage computation in proofs, which is amazing. They also make certain definitions easier, by avoiding needing to manually stipulate elimination rules, including induction principles.
|
||||
This is definitely a stretch goal. It would be cool though, and would turn this proof checker into a much more competent programming language. It's not necessary for the math, but inductive definitions let you leverage computation in proofs, which is amazing. They also make certain definitions way easier by avoiding the need to manually stipulate elimination rules, including induction principles, and they let you keep more math constructive and understandable to the computer.
|
||||
|
||||
** Cosmetic/usage/technical
|
||||
|
||||
*** TODO Prettier pretty printing
|
||||
Right now, everything defaults to one line, which can be a problem with how large the proof terms get. Probably want to use [[https://hackage.haskell.org/package/prettyprinter][prettyprinter]] to be able to nicely handle indentation and line breaks.
|
||||
|
||||
*** DONE Better usage
|
||||
Read input from a file if a filename is given, otherwise drop into a repl. Perhaps use something like [[https://hackage.haskell.org/package/haskeline][haskeline]] to improve the repl (though =rlwrap= is fine, so not a huge priority).
|
||||
|
||||
For even better usage in the future, check out [[https://hackage.haskell.org/package/optparse-applicative][optparse]] for command line arguments and [[https://abhinavsarkar.net/posts/repling-with-haskeline/][this really cool blogpost]] for some haskeline magic.
|
||||
*** TODO Better repl
|
||||
The repl is decent, but implementing something like [[https://abhinavsarkar.net/posts/repling-with-haskeline/][this]] would be awesome. I'd also at least like to add a new command =":l <filename>"= to load the definitions from a file.
|
||||
|
||||
*** TODO Improve error messages
|
||||
The error messages currently aren't terrible, but it would be nice to have, e.g. line numbers. =megaparsec= allows for semantic errors, so somehow hook into that like what I'm doing for unbound variables?
|
||||
Error messages are decent, but a little buggy. Syntax error messages are pretty ok, but could have better labeling. The type check error messages are decent, but could do with better location information. Right now, the location defaults to the end of the current definition, which is often good enough, but more detail can't hurt.
|
||||
|
||||
*** DONE Improve β-equivalence check
|
||||
The check for β-equivalence could certainly be a lot smarter. Right now it just fully normalizes both terms and checks if the normal forms are equal, which isn't the best strategy. This is much less of an issue without [[Inductive Definitions][inductive definitions]]/recursion, as far less computation gets done in general, let alone at the type level. That being said, it's certainly not a bad idea.
|
||||
*** TODO Better testing
|
||||
I would like to avoid regressions as I keep working on this, and a suite of unit tests would make me feel much more comfortable. I made unit tests, then added a ton of stuff. Most of the unit tests are kind of pointless now. For now, I think running the code on the example file is pretty sufficient.
|
||||
|
||||
Improved the β-equivalence check to first reduce both terms to WHNF then walk the syntax tree and recursively compare the terms. I figured this would be a necessary precondition to adding definitions, since fully normalizing everything with definitions (recursively unfolding all definitions) seems absolutely horrendous. Also implementing this wasn't nearly as hard as I would have thought.
|
||||
*** TODO Alternate syntax
|
||||
I've had a bunch of ideas for a more mathematician-friendly syntax bouncing around my head for a while. Implementing one of them would be awesome, but probably quite tricky.
|
||||
|
||||
*** DONE Better testing
|
||||
Using some kind of testing framework, like [[https://hackage.haskell.org/package/QuickCheck][QuickCheck]] and/or [[https://hackage.haskell.org/package/HUnit][HUnit]] seems like a good idea. I would like to avoid regressions as I keep working on this, and a suite of unit tests would make me feel much more comfortable.
|
||||
Something like
|
||||
#+begin_src
|
||||
Theorem basic (S : *) (P Q : S → *) :
|
||||
(∀ (x : S), P x → Q x) → (∀ (x : S), P x) → ∀ (x : S), Q x.
|
||||
Proof
|
||||
1. Suppose ∀ (x : S), P x → Q x
|
||||
2. Suppose ∀ (x : S), P x
|
||||
3. Let x : S
|
||||
4. P x by [2 x]
|
||||
5. Q x by [1 x 4]
|
||||
Qed
|
||||
#+end_src
|
||||
I think could be reliably translated into
|
||||
#+begin_src
|
||||
basic (S : *) (P Q : S → *) : (Π (x : S), P x → Q x) → (Π (x : S), P x) → Π (x : S), Q x :=
|
||||
fun (a1 : Π (x : S), P x → Q x) ⇒
|
||||
fun (a2 : Π (x : S), P x) ⇒
|
||||
fun (x : S) ⇒
|
||||
a1 x (a2 x);
|
||||
#+end_src
|
||||
and is more intuitively understandable to a mathematician not familiar with type theory, while the latter would be utter nonsense.
|
||||
|
||||
*** DONE Switch to =Text=
|
||||
Currently I use =String= everywhere. Switching to =Text= should be straightforward, and would improve performance and Unicode support.
|
||||
I'm imagining the parser could be chosen based on the file extension or something. Some way to mix the syntaxes could be nice too.
|
||||
|
||||
*** TODO treesitter parser and/or emacs mode
|
||||
Really not necessary, especially while the syntax is in a bit of flux, but would eventually be nice. The syntax is simple enough that a treesitter grammar shouldn't be too hard to write. An emacs mode would especially be nice if I ever end up implementing an [[Alternate syntax][alternate syntax]], to better handle indentation, automatically adjust line numbers, etc.
|
||||
|
||||
*** TODO TUI
|
||||
This is definitely a stretch goal, but I'm imagining a TUI split into two panels. On the left you can see the term you are building with holes in it. On the right you have the focused hole's type as well as the types of everything in scope (like Coq and Lean show while you're in the middle of a proof). Then you can interact with the system by entering commands (e.g. =intros=, =apply=, etc.) which changes the proof term on the left. You'd also just be able to type in the left window as well, and edit the proof term directly. This way you'd get the benefits of working with tactics, making it way faster to construct proof terms, and the benefits of working with proof terms directly, namely transparency and simplicity. I'll probably want to look into [[https://hackage.haskell.org/package/brick][brick]] in order to make this happen.
|
||||
This is definitely a stretch goal, and I'm not sure how good of an idea it would be, but I'm imagining a TUI split into two panels. On the left you can see the term you are building with holes in it. On the right you have the focused hole's type as well as the types of everything in scope (like Coq and Lean show while you're in the middle of a proof). Then you can interact with the system by entering commands (e.g. =intros=, =apply=, etc.) which changes the proof term on the left. You'd also just be able to type in the left window as well, and edit the proof term directly. This way you'd get the benefits of working with tactics, making it way faster to construct proof terms, and the benefits of working with proof terms directly, namely transparency and simplicity. I'll probably want to look into [[https://hackage.haskell.org/package/brick][brick]] if I want to make this happen.
|
||||
|
|
|
|||
12
app/Main.hs
|
|
@ -1,6 +1,9 @@
|
|||
module Main where
|
||||
|
||||
import Control.Monad (void)
|
||||
import qualified Data.Map as M
|
||||
import qualified Data.Text.IO as T
|
||||
import Eval (Env)
|
||||
import Parser
|
||||
import Repl
|
||||
import System.Environment
|
||||
|
|
@ -10,15 +13,18 @@ main :: IO ()
|
|||
main = do
|
||||
args <- getArgs
|
||||
case args of
|
||||
[] -> repl
|
||||
[] -> void repl
|
||||
[file] -> handleFile file
|
||||
_ -> putStrLn "usage './perga' for repl and './perga <filename>' to get input from a file"
|
||||
|
||||
dumpEnv :: Env -> IO ()
|
||||
dumpEnv = void . M.traverseWithKey ((putStrLn .) . showEnvEntry)
|
||||
|
||||
handleFile :: String -> IO ()
|
||||
handleFile fileName =
|
||||
do
|
||||
fileH <- openFile fileName ReadMode
|
||||
input <- T.hGetContents fileH
|
||||
case pAll input of
|
||||
case parseProgram input of
|
||||
Left err -> putStrLn err
|
||||
Right () -> putStrLn "success!"
|
||||
Right env -> dumpEnv env
|
||||
|
|
|
|||
66
app/Repl.hs
|
|
@ -1,52 +1,62 @@
|
|||
module Repl (repl) where
|
||||
module Repl (repl, showEnvEntry) where
|
||||
|
||||
import Check
|
||||
import qualified Data.Text as T
|
||||
import Data.Functor (void)
|
||||
import Data.List (isPrefixOf)
|
||||
import qualified Data.Map as M
|
||||
import qualified Data.Text as T
|
||||
import Eval
|
||||
import Expr
|
||||
import Parser
|
||||
import System.Console.Haskeline
|
||||
import System.Directory (createDirectoryIfMissing, getHomeDirectory)
|
||||
import System.FilePath ((</>))
|
||||
|
||||
newtype ReplState = ReplState
|
||||
{ debugMode :: Bool
|
||||
}
|
||||
|
||||
data ReplCommand = Quit | ToggleDebug | Input String deriving (Show)
|
||||
data ReplCommand = Quit | DumpEnv | TypeQuery String | Input String deriving (Show)
|
||||
|
||||
parseCommand :: Maybe String -> Maybe ReplCommand
|
||||
parseCommand Nothing = Nothing
|
||||
parseCommand (Just ":q") = Just Quit
|
||||
parseCommand (Just ":d") = Just ToggleDebug
|
||||
parseCommand (Just input) = Just (Input input)
|
||||
parseCommand (Just ":e") = Just DumpEnv
|
||||
parseCommand (Just input)
|
||||
| ":t" `isPrefixOf` input = case words input of
|
||||
[":t", rest] -> Just $ TypeQuery rest
|
||||
_ -> Nothing
|
||||
| otherwise = Just $ Input input
|
||||
|
||||
handleInput :: ReplState -> String -> InputT IO ()
|
||||
handleInput state input = case pAll (T.pack input) of
|
||||
Left err -> outputStrLn err
|
||||
Right () -> pure ()
|
||||
showEnvEntry :: T.Text -> Expr -> String
|
||||
showEnvEntry k v = T.unpack k ++ " : " ++ prettyS v
|
||||
|
||||
printDebugInfo :: Expr -> Expr -> InputT IO ()
|
||||
printDebugInfo expr ty = do
|
||||
outputStrLn $ "expr\t" ++ show expr
|
||||
outputStrLn $ "type\t" ++ show ty
|
||||
outputStrLn $ "type\t" ++ prettyS ty
|
||||
dumpEnv :: Env -> InputT IO ()
|
||||
dumpEnv = void . M.traverseWithKey ((outputStrLn .) . showEnvEntry)
|
||||
|
||||
repl :: IO ()
|
||||
handleInput :: GlobalState -> String -> InputT IO GlobalState
|
||||
handleInput env input =
|
||||
let (res, env') = parseDefEmpty env (T.pack input)
|
||||
in case res of
|
||||
Left err -> outputStrLn err >> pure env'
|
||||
Right () -> pure env'
|
||||
|
||||
repl :: IO GlobalState
|
||||
repl = do
|
||||
home <- getHomeDirectory
|
||||
let basepath = home </> ".cache" </> "perga"
|
||||
let filepath = basepath </> "history"
|
||||
createDirectoryIfMissing True basepath
|
||||
runInputT (defaultSettings{historyFile = Just filepath}) $ loop (ReplState False)
|
||||
runInputT (defaultSettings{historyFile = Just filepath}) (loop GS{_defs = M.empty, _types = M.empty})
|
||||
where
|
||||
loop :: ReplState -> InputT IO ()
|
||||
loop state = do
|
||||
loop :: GlobalState -> InputT IO GlobalState
|
||||
loop env = do
|
||||
minput <- getInputLine "> "
|
||||
case parseCommand minput of
|
||||
Nothing -> return ()
|
||||
Just Quit -> return ()
|
||||
Just ToggleDebug -> loop state{debugMode = not (debugMode state)}
|
||||
Nothing -> pure env
|
||||
Just Quit -> pure env
|
||||
Just DumpEnv -> dumpEnv (_types env) >> loop env
|
||||
Just (TypeQuery input) ->
|
||||
( case M.lookup (T.pack input) (_types env) of
|
||||
Nothing -> outputStrLn (input ++ " unbound")
|
||||
Just expr -> outputStrLn $ prettyS expr
|
||||
)
|
||||
>> loop env
|
||||
Just (Input input) -> do
|
||||
handleInput state input
|
||||
loop state
|
||||
env' <- handleInput env input
|
||||
loop env'
|
||||
|
|
|
|||
250
examples/example.pg
Normal file
|
|
@ -0,0 +1,250 @@
|
|||
-- False
|
||||
|
||||
false : * := forall (A : *), A;
|
||||
|
||||
-- no introduction rule
|
||||
|
||||
-- elimination rule
|
||||
false_elim (A : *) (contra : false) : A := contra A;
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Negation
|
||||
|
||||
not (A : *) : * := A -> false;
|
||||
|
||||
-- introduction rule (kinda just the definition)
|
||||
not_intro (A : *) (h : A -> false) : not A := h;
|
||||
|
||||
-- elimination rule
|
||||
not_elim (A B : *) (a : A) (na : not A) : B := na a B;
|
||||
|
||||
-- can introduce double negation (can't eliminate it as that isn't constructive)
|
||||
double_neg_intro (A : *) (a : A) : not (not A) :=
|
||||
fun (notA : not A) => notA a;
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Conjunction
|
||||
|
||||
and (A B : *) : * := forall (C : *), (A -> B -> C) -> C;
|
||||
|
||||
-- introduction rule
|
||||
and_intro (A B : *) (a : A) (b : B) : and A B :=
|
||||
fun (C : *) (H : A -> B -> C) => H a b;
|
||||
|
||||
-- left elimination rule
|
||||
and_elim_l (A B : *) (ab : and A B) : A :=
|
||||
ab A (fun (a : A) (b : B) => a);
|
||||
|
||||
-- right elimination rule
|
||||
and_elim_r (A B : *) (ab : and A B) : B :=
|
||||
ab B (fun (a : A) (b : B) => b);
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Disjunction
|
||||
|
||||
-- 2nd order disjunction
|
||||
or (A B : *) : * := forall (C : *), (A -> C) -> (B -> C) -> C;
|
||||
|
||||
-- left introduction rule
|
||||
or_intro_l (A B : *) (a : A) : or A B :=
|
||||
fun (C : *) (ha : A -> C) (hb : B -> C) => ha a;
|
||||
|
||||
-- right introduction rule
|
||||
or_intro_r (A B : *) (b : B) : or A B :=
|
||||
fun (C : *) (ha : A -> C) (hb : B -> C) => hb b;
|
||||
|
||||
-- elimination rule (kinda just the definition)
|
||||
or_elim (A B C : *) (ab : or A B) (ha : A -> C) (hb : B -> C) : C :=
|
||||
ab C ha hb;
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Existential
|
||||
|
||||
-- 2nd order existential
|
||||
exists (A : *) (P : A -> *) : * := forall (C : *), (forall (x : A), P x -> C) -> C;
|
||||
|
||||
-- introduction rule
|
||||
exists_intro (A : *) (P : A -> *) (a : A) (h : P a) : exists A P :=
|
||||
fun (C : *) (g : forall (x : A), P x -> C) => g a h;
|
||||
|
||||
-- elimination rule (kinda just the definition)
|
||||
exists_elim (A B : *) (P : A -> *) (ex_a : exists A P) (h : forall (a : A), P a -> B) : B :=
|
||||
ex_a B h;
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Universal
|
||||
|
||||
-- 2nd order universal (just ∏, including it for completeness)
|
||||
all (A : *) (P : A -> *) : * := forall (a : A), P a;
|
||||
|
||||
-- introduction rule
|
||||
all_intro (A : *) (P : A -> *) (h : forall (a : A), P a) : all A P := h;
|
||||
|
||||
-- elimination rule
|
||||
all_elim (A : *) (P : A -> *) (h_all : all A P) (a : A) : P a := h_all a;
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Equality
|
||||
|
||||
-- 2nd order Leibniz equality
|
||||
eq (A : *) (x y : A) := forall (P : A -> *), P x -> P y;
|
||||
|
||||
-- equality is reflexive
|
||||
eq_refl (A : *) (x : A) : eq A x x := fun (P : A -> *) (Hx : P x) => Hx;
|
||||
|
||||
-- equality is symmetric
|
||||
eq_sym (A : *) (x y : A) (Hxy : eq A x y) : eq A y x := fun (P : A -> *) (Hy : P y) =>
|
||||
Hxy (fun (z : A) => P z -> P x) (fun (Hx : P x) => Hx) Hy;
|
||||
|
||||
-- equality is transitive
|
||||
eq_trans (A : *) (x y z : A) (Hxy : eq A x y) (Hyz : eq A y z) : eq A x z := fun (P : A -> *) (Hx : P x) =>
|
||||
Hyz P (Hxy P Hx);
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Some logic theorems
|
||||
|
||||
-- ~(A ∨ B) => ~A ∧ ~B
|
||||
de_morgan1 (A B : *) (h : not (or A B)) : and (not A) (not B) :=
|
||||
and_intro (not A) (not B)
|
||||
(fun (a : A) => h (or_intro_l A B a))
|
||||
(fun (b : B) => h (or_intro_r A B b));
|
||||
|
||||
-- ~A ∧ ~B => ~(A ∨ B)
|
||||
de_morgan2 (A B : *) (h : and (not A) (not B)) : not (or A B) :=
|
||||
fun (contra : or A B) =>
|
||||
or_elim A B false contra
|
||||
(and_elim_l (not A) (not B) h)
|
||||
(and_elim_r (not A) (not B) h);
|
||||
|
||||
-- ~A ∨ ~B => ~(A ∧ B)
|
||||
de_morgan3 (A B : *) (h : or (not A) (not B)) : not (and A B) :=
|
||||
fun (contra : and A B) =>
|
||||
or_elim (not A) (not B) false h
|
||||
(fun (na : not A) => na (and_elim_l A B contra))
|
||||
(fun (nb : not B) => nb (and_elim_r A B contra));
|
||||
|
||||
-- the last one (~(A ∧ B) => ~A ∨ ~B) is not possible constructively
|
||||
|
||||
-- A ∧ B => B ∧ A
|
||||
and_comm (A B : *) (h : and A B) : and B A :=
|
||||
and_intro B A
|
||||
(and_elim_r A B h)
|
||||
(and_elim_l A B h);
|
||||
|
||||
-- A ∨ B => B ∨ A
|
||||
or_comm (A B : *) (h : or A B) : or B A :=
|
||||
or_elim A B (or B A) h
|
||||
(fun (a : A) => or_intro_r B A a)
|
||||
(fun (b : B) => or_intro_l B A b);
|
||||
|
||||
-- A ∧ (B ∧ C) => (A ∧ B) ∧ C
|
||||
and_assoc_l (A B C : *) (h : and A (and B C)) : and (and A B) C :=
|
||||
and_intro (and A B) C
|
||||
(and_intro A B
|
||||
(and_elim_l A (and B C) h)
|
||||
(and_elim_l B C (and_elim_r A (and B C) h)))
|
||||
(and_elim_r B C (and_elim_r A (and B C) h));
|
||||
|
||||
-- (A ∧ B) ∧ C => A ∧ (B ∧ C)
|
||||
and_assoc_r (A B C : *) (h : and (and A B) C) : and A (and B C) :=
|
||||
and_intro A (and B C)
|
||||
(and_elim_l A B (and_elim_l (and A B) C h))
|
||||
(and_intro B C
|
||||
(and_elim_r A B (and_elim_l (and A B) C h))
|
||||
(and_elim_r (and A B) C h));
|
||||
|
||||
-- A ∨ (B ∨ C) => (A ∨ B) ∨ C
|
||||
or_assoc_l (A B C : *) (h : or A (or B C)) : or (or A B) C :=
|
||||
or_elim A (or B C) (or (or A B) C) h
|
||||
(fun (a : A) => or_intro_l (or A B) C (or_intro_l A B a))
|
||||
(fun (g : or B C) =>
|
||||
or_elim B C (or (or A B) C) g
|
||||
(fun (b : B) => or_intro_l (or A B) C (or_intro_r A B b))
|
||||
(fun (c : C) => or_intro_r (or A B) C c));
|
||||
|
||||
-- (A ∨ B) ∨ C => A ∨ (B ∨ C)
|
||||
or_assoc_r (A B C : *) (h : or (or A B) C) : or A (or B C) :=
|
||||
or_elim (or A B) C (or A (or B C)) h
|
||||
(fun (g : or A B) =>
|
||||
or_elim A B (or A (or B C)) g
|
||||
(fun (a : A) => or_intro_l A (or B C) a)
|
||||
(fun (b : B) => or_intro_r A (or B C) (or_intro_l B C b)))
|
||||
(fun (c : C) => or_intro_r A (or B C) (or_intro_r B C c));
|
||||
|
||||
-- A ∧ (B ∨ C) => A ∧ B ∨ A ∧ C
|
||||
and_distrib_l_or (A B C : *) (h : and A (or B C)) : or (and A B) (and A C) :=
|
||||
or_elim B C (or (and A B) (and A C)) (and_elim_r A (or B C) h)
|
||||
(fun (b : B) => or_intro_l (and A B) (and A C)
|
||||
(and_intro A B (and_elim_l A (or B C) h) b))
|
||||
(fun (c : C) => or_intro_r (and A B) (and A C)
|
||||
(and_intro A C (and_elim_l A (or B C) h) c));
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- very little very basic algebra
|
||||
|
||||
-- what it means for a set and operation to be a semigroup
|
||||
semigroup (M : *) (op : M -> M -> M) : * :=
|
||||
forall (a b c : M), eq M (op (op a b) c) (op a (op b c));
|
||||
|
||||
-- what it means for a set, operation, and identity element to be a monoid
|
||||
monoid (M : *) (op : M -> M -> M) (e : M) : * :=
|
||||
and (semigroup M op)
|
||||
(and (forall (a : M), eq M (op a e) a)
|
||||
(forall (a : M), eq M (op e a) a));
|
||||
|
||||
-- identity elements in monoids are unique
|
||||
monoid_identity_unique
|
||||
(M : *) (op : M -> M -> M) (e : M) (Hmonoid : monoid M op e)
|
||||
(a : M) (H : forall (b : M), eq M (op a b) b) : eq M a e :=
|
||||
eq_trans M a (op a e) e
|
||||
-- a = a * e
|
||||
(eq_sym M (op a e) a
|
||||
(and_elim_l
|
||||
(forall (a : M), eq M (op a e) a)
|
||||
(forall (a : M), eq M (op e a) a)
|
||||
(and_elim_r
|
||||
(semigroup M op)
|
||||
(and (forall (a : M), eq M (op a e) a) (forall (a : M), eq M (op e a) a))
|
||||
Hmonoid) a))
|
||||
|
||||
-- a * e = e
|
||||
(H e);
|
||||
|
||||
|
||||
-- what it means for a set, operation, identity element, and inverse map to be a group
|
||||
group (G : *) (op : G -> G -> G) (e : G) (inv : G -> G) : * :=
|
||||
and (monoid G op e)
|
||||
(and (forall (a : G), eq G (op a (inv a)) e)
|
||||
(forall (a : G), eq G (op (inv a) a) e));
|
||||
|
||||
-- --------------------------------------------------------------------------------------------------------------
|
||||
|
||||
-- Church numerals
|
||||
|
||||
nat : * := forall (A : *), (A -> A) -> A -> A;
|
||||
|
||||
-- zero is the constant function
|
||||
zero : nat := fun (A : *) (f : A -> A) (x : A) => x;
|
||||
|
||||
-- `suc n` composes one more `f` than `n`
|
||||
suc : nat -> nat := fun (n : nat) (A : *) (f : A -> A) (x : A) => f ((n A f) x);
|
||||
|
||||
-- addition as usual
|
||||
add : nat -> nat -> nat := fun (n m : nat) (A : *) (f : A -> A) (x : A) => (m A f) (n A f x);
|
||||
|
||||
-- multiplication as usual
|
||||
mul : nat -> nat -> nat := fun (n m : nat) (A : *) (f : A -> A) (x : A) => (m A (n A f)) x;
|
||||
|
||||
-- unfortunately, I believe it is impossible to prove induction on Church numerals,
|
||||
-- which makes it really hard to prove standard theorems, or do anything really.
|
||||
|
||||
-- with a primitive notion system, we could stipulate the existence of a type of
|
||||
-- natural numbers satisfying the Peano axioms, but I haven't implemented that yet.
|
||||
62
lib/Check.hs
@ -1,61 +1,51 @@
module Check (TypeCheckError (..), CheckResult, checkType) where
{-# LANGUAGE LambdaCase #-}

module Check (checkType) where

import Control.Monad (unless)
import Control.Monad.Except (MonadError (throwError))
import Control.Monad.Reader
import Data.List ((!?))
import qualified Data.Map as M
import Data.Text (Text)
import qualified Data.Text as T

import Eval
import Expr
import Errors
import Eval (Env, betaEquiv, envLookup, isSort, subst, whnf)
import Expr (Expr (..), incIndices, occursFree)

type Context = [Expr]

data TypeCheckError = SquareUntyped | UnboundVariable Text | NotASort Expr Expr | ExpectedPiType Expr Expr | NotEquivalent Expr Expr Expr deriving (Eq, Ord)
matchPi :: Expr -> Expr -> ReaderT Env Result (Expr, Expr)
matchPi x mt =
  whnf mt >>= \case
    (Pi _ a b) -> pure (a, b)
    t -> throwError $ ExpectedPiType x t

instance Show TypeCheckError where
  show SquareUntyped = "□ does not have a type"
  show (UnboundVariable x) = "Unbound variable: " ++ T.unpack x
  show (NotASort x t) = "Expected " ++ prettyS x ++ " to have type * or □, instead found " ++ prettyS t
  show (ExpectedPiType m a) = prettyS m ++ " : " ++ prettyS a ++ " is not a function"
  show (NotEquivalent a a' e) = "Cannot unify " ++ prettyS a ++ " with " ++ prettyS a' ++ " when evaluating " ++ prettyS e

type CheckResult = Either TypeCheckError

matchPi :: Expr -> Expr -> ReaderT Env CheckResult (Expr, Expr)
matchPi _ (Pi _ a b) = pure (a, b)
matchPi m e = throwError $ ExpectedPiType m e

findType :: Context -> Expr -> ReaderT Env CheckResult Expr
findType :: Context -> Expr -> ReaderT Env Result Expr
findType _ Star = pure Square
findType _ Square = throwError SquareUntyped
findType g (Var n x) = do
  t <- maybe (throwError $ UnboundVariable x) pure $ g !? fromInteger n
  s <- findType g t
  unless (isSort s) $ throwError $ NotASort t s
  (sSort, s) <- findType g t >>= isSort
  unless sSort $ throwError $ NotASort t s
  pure t
findType g (Free n) = asks (M.lookup n) >>= maybe (throwError $ UnboundVariable n) (findType g)
findType g (Free n) = envLookup n >>= findType g
findType g e@(App m n) = do
  (a, b) <- findType g m >>= matchPi m
  a' <- findType g n
  equiv <- asks $ runReader (betaEquiv a a')
  equiv <- betaEquiv a a'
  unless equiv $ throwError $ NotEquivalent a a' e
  pure $ subst 0 n b
findType g (Abs x a m) = do
  s1 <- findType g a
  unless (isSort s1) $ throwError $ NotASort a s1
  (s1Sort, s1) <- findType g a >>= isSort
  unless s1Sort $ throwError $ NotASort a s1
  b <- findType (incIndices a : map incIndices g) m
  s2 <- findType g (Pi x a b)
  unless (isSort s2) $ throwError $ NotASort (Pi x a b) s2
  (s2Sort, s2) <- findType g (Pi x a b) >>= isSort
  unless s2Sort $ throwError $ NotASort (Pi x a b) s2
  pure $ if occursFree 0 b then Pi x a b else Pi "" a b
findType g (Pi _ a b) = do
  s1 <- findType g a
  unless (isSort s1) $ throwError $ NotASort a s1
  s2 <- findType (incIndices a : map incIndices g) b
  unless (isSort s2) $ throwError $ NotASort b s2
  (s1Sort, s1) <- findType g a >>= isSort
  unless s1Sort $ throwError $ NotASort a s1
  (s2Sort, s2) <- findType (incIndices a : map incIndices g) b >>= isSort
  unless s2Sort $ throwError $ NotASort b s2
  pure s2

checkType :: Env -> Context -> Expr -> CheckResult Expr
checkType env g t = runReaderT (findType g t) env
checkType :: Env -> Expr -> Result Expr
checkType env t = runReaderT (findType [] t) env
22
lib/Errors.hs
Normal file
@ -0,0 +1,22 @@
module Errors where

import Data.Text (Text)
import qualified Data.Text as T
import Expr

data Error
  = SquareUntyped
  | UnboundVariable Text
  | NotASort Expr Expr
  | ExpectedPiType Expr Expr
  | NotEquivalent Expr Expr Expr
  deriving (Eq, Ord)

instance Show Error where
  show SquareUntyped = "□ does not have a type"
  show (UnboundVariable x) = "Unbound variable: " ++ T.unpack x
  show (NotASort x t) = "Expected " ++ prettyS x ++ " to have type * or □, instead found " ++ prettyS t
  show (ExpectedPiType x t) = prettyS x ++ " : " ++ prettyS t ++ " is not a function"
  show (NotEquivalent a a' e) = "Cannot unify " ++ prettyS a ++ " with " ++ prettyS a' ++ " when evaluating " ++ prettyS e

type Result = Either Error
43
lib/Eval.hs
@ -1,9 +1,12 @@
{-# LANGUAGE TupleSections #-}

module Eval where

import Control.Monad.Except (MonadError (..))
import Control.Monad.Reader
import qualified Data.Map as M
import Data.Maybe
import Data.Text (Text)
import Errors
import Expr

type Env = M.Map Text Expr

@ -21,19 +24,43 @@ subst k s (App m n) = App (subst k s m) (subst k s n)
subst k s (Abs x m n) = Abs x (subst k s m) (subst (k + 1) (incIndices s) n)
subst k s (Pi x m n) = Pi x (subst k s m) (subst (k + 1) (incIndices s) n)

whnf :: Expr -> Expr
whnf (App (Abs _ _ v) n) = whnf $ subst 0 n v
whnf e = e
envLookup :: Text -> ReaderT Env Result Expr
envLookup n = asks (M.lookup n) >>= maybe (throwError $ UnboundVariable n) pure

betaEquiv :: Expr -> Expr -> Reader Env Bool
-- reduce until β reductions are impossible in head position
whnf :: Expr -> ReaderT Env Result Expr
whnf (App (Abs _ _ v) n) = whnf $ subst 0 n v
whnf (App m n) = do
  m' <- whnf m
  if m == m'
    then pure $ App m n
    else whnf $ App m' n
whnf (Free n) = envLookup n
whnf e = pure e

betaEquiv :: Expr -> Expr -> ReaderT Env Result Bool
betaEquiv e1 e2
  | e1 == e2 = pure True
  | otherwise = case (whnf e1, whnf e2) of
  | otherwise = do
      e1' <- whnf e1
      e2' <- whnf e2
      case (e1', e2') of
        (Var k1 _, Var k2 _) -> pure $ k1 == k2
        (Free n, Free m) -> pure $ n == m
        (Free n, e) -> fromMaybe False <$> (asks (M.lookup n) >>= traverse (`betaEquiv` e))
        (e, Free n) -> fromMaybe False <$> (asks (M.lookup n) >>= traverse (`betaEquiv` e))
        (Free n, e) -> envLookup n >>= betaEquiv e
        (e, Free n) -> envLookup n >>= betaEquiv e
        (Star, Star) -> pure True
        (Abs _ t1 v1, Abs _ t2 v2) -> (&&) <$> betaEquiv t1 t2 <*> betaEquiv v1 v2 -- I want idiom brackets
        (Pi _ t1 v1, Pi _ t2 v2) -> (&&) <$> betaEquiv t1 t2 <*> betaEquiv v1 v2
        _ -> pure False -- remaining cases impossible or false

checkBeta :: Env -> Expr -> Expr -> Result Bool
checkBeta env e1 e2 = runReaderT (betaEquiv e1 e2) env

isSortPure :: Expr -> Bool
isSortPure Star = True
isSortPure Square = True
isSortPure _ = False

isSort :: Expr -> ReaderT Env Result (Bool, Expr)
isSort s = (,s) . isSortPure <$> whnf s
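The practical upshot of the new `whnf` is that definitions from the environment are now unfolded during conversion checking, not just literal β-redexes. A sketch of the kind of surface-level definition this enables, using a hypothetical `pair` encoding rather than anything in this commit: checking `mk_pair` requires `whnf` to unfold the application `pair A B` before `betaEquiv` can match it against the inferred Π type.

    pair (A B : *) : * := forall (C : *), (A -> B -> C) -> C;
    mk_pair (A B : *) (a : A) (b : B) : pair A B :=
      fun (C : *) (f : A -> B -> C) => f a b;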
15
lib/Expr.hs
@ -32,11 +32,6 @@ occursFree n (App a b) = on (||) (occursFree n) a b
occursFree n (Abs _ a b) = occursFree n a || occursFree (n + 1) b
occursFree n (Pi _ a b) = occursFree n a || occursFree (n + 1) b

isSort :: Expr -> Bool
isSort Star = True
isSort Square = True
isSort _ = False

shiftIndices :: Integer -> Integer -> Expr -> Expr
shiftIndices d c (Var k x)
  | k >= c = Var (k + d) x

@ -62,6 +57,14 @@ collectLambdas (Abs x ty body) = ((x, ty) : params, final)
    (params, final) = collectLambdas body
collectLambdas e = ([], e)

cleanNames :: Expr -> Expr
cleanNames (App m n) = App (cleanNames m) (cleanNames n)
cleanNames (Abs x ty body) = Abs x (cleanNames ty) (cleanNames body)
cleanNames (Pi x ty body)
  | occursFree 0 body = Pi x (cleanNames ty) (cleanNames body)
  | otherwise = Pi "" ty (cleanNames body)
cleanNames e = e

collectPis :: Expr -> ([(Text, Expr)], Expr)
collectPis p@(Pi "" _ _) = ([], p)
collectPis (Pi x ty body) = ((x, ty) : params, final)

@ -104,7 +107,7 @@ helper k e@(Abs{}) = if k >= 1 then parenthesize res else res
    res = "λ " <> T.unwords grouped <> " . " <> pretty body

pretty :: Expr -> Text
pretty = helper 0
pretty = helper 0 . cleanNames

prettyS :: Expr -> String
prettyS = T.unpack . pretty
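`cleanNames` blanks the binder of any Π whose body does not mention it, and `pretty` now runs it first, so non-dependent Πs should presumably be displayed with the arrow notation from the README. As a hypothetical illustration (not taken from the repository), the reported type of

    const_nat (x y : nat) : nat := x;

would then print as `nat -> nat -> nat` rather than as a Π over unused binders.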
115
lib/Parser.hs
@ -1,27 +1,37 @@
{-# LANGUAGE NamedFieldPuns #-}

module Parser (pAll) where
module Parser (parseProgram, parseDef, parseDefEmpty, GlobalState (..)) where

import Check
import Control.Monad
import Control.Monad.State.Strict
import Control.Monad.State.Strict (
  MonadState (get, put),
  State,
  evalState,
  modify,
  runState,
 )
import Data.Bifunctor (first)
import Data.Functor.Identity
import Data.List (elemIndex)
import Data.List.NonEmpty (NonEmpty ((:|)))
import qualified Data.List.NonEmpty as NE
import qualified Data.Map as M
import Data.Text (Text)
import qualified Data.Text as T
import Errors (Error (..))
import Eval
import Expr (Expr (..), incIndices)
import Text.Megaparsec hiding (State)
import Text.Megaparsec.Char
import qualified Text.Megaparsec.Char.Lexer as L

data InnerState = IS {_binds :: [Text], _defs :: Env}
data TypeDef = TD {_ident :: Text, _type :: Expr}
data DefinitionLine = DL {_td :: TypeDef, _body :: Expr}

newtype TypeError = TE TypeCheckError
data GlobalState = GS {_defs :: Env, _types :: Env}
data InnerState = IS {_binds :: [TypeDef], _gs :: GlobalState}

newtype TypeError = TE Error
  deriving (Eq, Ord, Show)

type Parser = ParsecT TypeError Text (State InnerState)

@ -29,11 +39,14 @@ type Parser = ParsecT TypeError Text (State InnerState)
instance ShowErrorComponent TypeError where
  showErrorComponent (TE e) = show e

bindsToIS :: ([Text] -> [Text]) -> InnerState -> InnerState
bindsToIS :: ([TypeDef] -> [TypeDef]) -> InnerState -> InnerState
bindsToIS f x@(IS{_binds}) = x{_binds = f _binds}

defsToIS :: (Env -> Env) -> InnerState -> InnerState
defsToIS f x@(IS{_defs}) = x{_defs = f _defs}
defsToIS f x@(IS{_gs = y@GS{_defs}}) = x{_gs = y{_defs = f _defs}}

typesToIS :: (Env -> Env) -> InnerState -> InnerState
typesToIS f x@(IS{_gs = y@GS{_types}}) = x{_gs = y{_types = f _types}}

skipSpace :: Parser ()
skipSpace =

@ -54,7 +67,7 @@ pIdentifier = label "identifier" $ lexeme $ do
pVar :: Parser Expr
pVar = label "variable" $ lexeme $ do
  var <- pIdentifier
  binders <- _binds <$> get
  binders <- map _ident . _binds <$> get
  pure $ case elemIndex var binders of
    Just i -> Var (fromIntegral i) var
    Nothing -> Free var

@ -65,18 +78,21 @@ defChoice options = lexeme $ label (T.unpack $ NE.head options) $ void $ choice
pParamGroup :: Parser [(Text, Expr)]
pParamGroup = lexeme $ label "parameter group" $ between (char '(') (char ')') $ do
  idents <- some pIdentifier
  _ <- defChoice $ ":" :| []
  _ <- defChoice $ pure ":"
  ty <- pExpr
  modify $ bindsToIS $ flip (foldl $ flip (:)) idents
  modify $ bindsToIS $ flip (foldl $ flip (:)) (map (\idt -> TD{_ident = idt, _type = ty}) idents)
  pure $ zip idents (iterate incIndices ty)

pParams :: Parser [(Text, Expr)]
pParams = concat <$> some pParamGroup
pSomeParams :: Parser [(Text, Expr)]
pSomeParams = lexeme $ concat <$> some pParamGroup

pManyParams :: Parser [(Text, Expr)]
pManyParams = lexeme $ concat <$> many pParamGroup

pLAbs :: Parser Expr
pLAbs = lexeme $ label "λ-abstraction" $ do
  _ <- defChoice $ "λ" :| ["lambda", "fun"]
  params <- pParams
  params <- pSomeParams
  _ <- defChoice $ "." :| ["=>", "⇒"]
  body <- pExpr
  modify $ bindsToIS $ drop $ length params

@ -85,7 +101,7 @@ pLAbs = lexeme $ label "λ-abstraction" $ do
pPAbs :: Parser Expr
pPAbs = lexeme $ label "Π-abstraction" $ do
  _ <- defChoice $ "∏" :| ["Pi", "forall", "∀"]
  params <- pParams
  params <- pSomeParams
  _ <- defChoice $ "." :| [","]
  body <- pExpr
  modify $ bindsToIS $ drop $ length params

@ -98,13 +114,45 @@ pArrow = lexeme $ label "->" $ do
  Pi "" a . incIndices <$> pExpr

pApp :: Parser Expr
pApp = foldl1 App <$> some pTerm
pApp = lexeme $ foldl1 App <$> some pTerm

pStar :: Parser Expr
pStar = Star <$ defChoice ("*" :| [])
pStar = lexeme $ Star <$ defChoice (pure "*")

pSquare :: Parser Expr
pSquare = Square <$ defChoice ("□" :| ["[]"])
pSquare = lexeme $ Square <$ defChoice ("□" :| ["[]"])

checkAscription :: Text -> Expr -> Maybe Expr -> Parser DefinitionLine
checkAscription ident value massert = do
  IS{_gs = GS{_defs, _types}} <- get
  case (checkType _defs value, massert) of
    (Left err, _) -> customFailure $ TE err
    (Right ty, Nothing) -> pure DL{_td = TD{_ident = ident, _type = ty}, _body = value}
    (Right ty, Just assert) -> case checkBeta _defs ty assert of
      Left err -> customFailure $ TE err
      Right equiv -> do
        unless equiv $ customFailure $ TE $ NotEquivalent ty assert value
        pure DL{_td = TD{_ident = ident, _type = assert}, _body = value}
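`checkAscription` is the piece that accepts or rejects the optional type ascription on a definition: the type inferred by `checkType` must be β-equivalent (via `checkBeta`) to the asserted one, otherwise the parser fails with `NotEquivalent`. A hypothetical pair of inputs, not taken from the repository:

    -- accepted: the inferred type is β-equivalent to the ascription
    id : forall (A : *), A -> A := fun (A : *) (x : A) => x;

    -- rejected with NotEquivalent: the inferred type is forall (A : *), A -> A,
    -- which is not β-equivalent to the asserted nat -> nat
    -- bad : nat -> nat := fun (A : *) (x : A) => x;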

updateStateDefinition :: DefinitionLine -> Parser ()
updateStateDefinition DL{_td, _body} = do
  modify $ defsToIS (M.insert (_ident _td) _body)
  modify $ typesToIS (M.insert (_ident _td) (_type _td))

pDef :: Parser DefinitionLine
pDef = lexeme $ label "definition" $ do
  skipSpace
  ident <- pIdentifier
  params <- pManyParams
  ascription <- fmap (flip (foldr (uncurry Pi)) params) <$> pAscription
  _ <- defChoice $ pure ":="
  value <- flip (foldr (uncurry Abs)) params <$> pExpr
  res <- checkAscription ident value ascription
  _ <- defChoice $ pure ";"
  pure res

pDefUpdate :: Parser ()
pDefUpdate = pDef >>= updateStateDefinition

pTerm :: Parser Expr
pTerm =

@ -112,9 +160,9 @@ pTerm =
  label "term" $
    choice
      [ between (char '(') (char ')') pExpr
      , pVar
      , pStar
      , pSquare
      , pVar
      ]

pAppTerm :: Parser Expr

@ -123,20 +171,21 @@ pAppTerm = lexeme $ pLAbs <|> pPAbs <|> pApp
pExpr :: Parser Expr
pExpr = lexeme $ try pArrow <|> pAppTerm

pDef :: Parser ()
pDef = lexeme $ label "definition" $ do
  ident <- pIdentifier
  _ <- defChoice $ ":=" :| []
  value <- pExpr
  _ <- defChoice $ ";" :| []
  foo <- get
  let ty = checkType (_defs foo) [] value
  case ty of
    Left err -> customFailure $ TE err
    Right _ -> modify $ defsToIS $ M.insert ident value
pAscription :: Parser (Maybe Expr)
pAscription = lexeme $ optional $ try $ defChoice (pure ":") >> label "type" pExpr

pProgram :: Parser ()
pProgram = void $ many pDef
pProgram :: Parser Env
pProgram = lexeme $ skipSpace >> many pDefUpdate >> _types . _gs <$> get

pAll :: Text -> Either String ()
pAll input = first errorBundlePretty $ fst $ runIdentity $ runStateT (runParserT pProgram "" input) $ IS{_binds = [], _defs = M.empty}
parseDef :: Text -> State GlobalState (Either String ())
parseDef input = do
  env <- get
  let (output, IS{_gs}) = runState (runParserT pDefUpdate "" input) (IS{_binds = [], _gs = env})
  put _gs
  pure $ first errorBundlePretty output

parseDefEmpty :: GlobalState -> Text -> (Either String (), GlobalState)
parseDefEmpty env input = runState (parseDef input) env

parseProgram :: Text -> Either String Env
parseProgram input = first errorBundlePretty $ evalState (runParserT pProgram "" input) IS{_binds = [], _gs = GS{_defs = M.empty, _types = M.empty}}
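`parseProgram` replaces the old `pAll`: each definition is type checked and added to the global state as it is parsed, and the final environment of checked types is returned. A small hypothetical program of the kind it consumes (the old `test.pg`, removed later in this diff, served a similar purpose):

    id (A : *) (x : A) : A := x;
    comp (A B C : *) (g : B -> C) (f : A -> B) (x : A) : C := g (f x);

Running `parseProgram` on this source should yield an `Env` mapping `id` and `comp` to their Π types.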
@ -1,5 +1,5 @@
cabal-version: 3.0
name: dependent-lambda
name: perga

-- PVP summary: +-+------- breaking API changes
-- | | +----- non-breaking API additions

@ -27,12 +27,13 @@ extra-doc-files: CHANGELOG.md
common warnings
  ghc-options: -Wall

library dependent-lambda-lib
library perga-lib
  import: warnings
  exposed-modules: Check
                   Parser
                   Expr
                   Eval
                   Errors

  hs-source-dirs: lib
  build-depends: base ^>=4.19.1.0

@ -45,18 +46,19 @@ library dependent-lambda-lib
  default-extensions: OverloadedStrings
                    , GADTs

executable dependent-lambda
executable perga
  import: warnings
  main-is: Main.hs
  other-modules: Repl

  build-depends: base ^>=4.19.1.0
               , dependent-lambda-lib
               , perga-lib
               , text
               , containers
               , haskeline
               , directory
               , filepath
               , mtl
  hs-source-dirs: app
  default-language: Haskell2010
  default-extensions: OverloadedStrings

@ -67,13 +69,12 @@ test-suite tests
  type: exitcode-stdio-1.0
  main-is: Tests.hs
  other-modules: ExprTests
               , ParserTests
               , CheckTests
  build-depends: base ^>=4.19.1.0
               , HUnit
               , text
               , containers
               , dependent-lambda-lib
               , perga-lib
  hs-source-dirs: tests
  default-language: Haskell2010
  default-extensions: OverloadedStrings
2
test.pg
@ -1,2 +0,0 @@
id := fun (A : *) (x : A) . x ;
foo := fun (A B : *) (f : A -> B) (x : A) . id (A -> B) f (id A x) ;

@ -6,20 +6,12 @@ import Expr (Expr (..))
import Test.HUnit

sort :: Test
sort = TestCase $ assertEqual "*" (Right Square) (checkType M.empty [] Star)

stlc :: Test
stlc =
  TestCase $
    assertEqual
      "fun (x : A) (y : B) . x"
      (Right $ Pi "" (Var 0 "A") (Pi "" (Var 2 "B") (Var 2 "A")))
      (checkType M.empty [Star, Star] $ Abs "x" (Var 0 "A") (Abs "y" (Var 2 "B") (Var 1 "x")))
sort = TestCase $ assertEqual "*" (Right Square) (checkType M.empty Star)

freeVar :: Test
freeVar =
  TestCase $
    assertEqual "{x = *} , [] |- x : □" (Right Square) (checkType (M.singleton "x" Star) [] (Free "x"))
    assertEqual "{x = *} , [] |- x : □" (Right Square) (checkType (M.singleton "x" Star) (Free "x"))

polyIdent :: Test
polyIdent =

@ -27,7 +19,7 @@ polyIdent =
    assertEqual
      "fun (A : *) (x : A) . x"
      (Right $ Pi "A" Star (Pi "" (Var 0 "A") (Var 1 "A")))
      (checkType M.empty [] (Abs "A" Star (Abs "x" (Var 0 "A") (Var 0 "x"))))
      (checkType M.empty (Abs "A" Star (Abs "x" (Var 0 "A") (Var 0 "x"))))

typeCons :: Test
typeCons =

@ -35,7 +27,7 @@ typeCons =
    assertEqual
      "fun (A B : *) . A -> B"
      (Right $ Pi "" Star (Pi "" Star Star))
      (checkType M.empty [] (Abs "A" Star (Abs "B" Star (Pi "" (Var 1 "A") (Var 1 "B")))))
      (checkType M.empty (Abs "A" Star (Abs "B" Star (Pi "" (Var 1 "A") (Var 1 "B")))))

useTypeCons :: Test
useTypeCons =

@ -43,7 +35,7 @@ useTypeCons =
    assertEqual
      "fun (C : * -> *) (A : *) (x : C A) . x"
      (Right $ Pi "C" (Pi "" Star Star) (Pi "A" Star (Pi "" (App (Var 1 "C") (Var 0 "A")) (App (Var 2 "C") (Var 1 "A")))))
      (checkType M.empty [] $ Abs "C" (Pi "" Star Star) (Abs "A" Star (Abs "x" (App (Var 1 "C") (Var 0 "A")) (Var 0 "x"))))
      (checkType M.empty $ Abs "C" (Pi "" Star Star) (Abs "A" Star (Abs "x" (App (Var 1 "C") (Var 0 "A")) (Var 0 "x"))))

dependent :: Test
dependent =

@ -51,7 +43,7 @@ dependent =
    assertEqual
      "fun (S : *) (x : S) . S -> S"
      (Right $ Pi "S" Star (Pi "" (Var 0 "S") Star))
      (checkType M.empty [] $ Abs "S" Star (Abs "x" (Var 0 "S") (Pi "" (Var 1 "S") (Var 2 "S"))))
      (checkType M.empty $ Abs "S" Star (Abs "x" (Var 0 "S") (Pi "" (Var 1 "S") (Var 2 "S"))))

useDependent :: Test
useDependent =

@ -59,7 +51,7 @@ useDependent =
    assertEqual
      "fun (S : *) (P : S -> *) (x : S) . P x"
      (Right $ Pi "S" Star (Pi "" (Pi "" (Var 0 "S") Star) (Pi "" (Var 1 "S") Star)))
      (checkType M.empty [] $ Abs "S" Star (Abs "P" (Pi "" (Var 0 "S") Star) (Abs "x" (Var 1 "S") (App (Var 1 "P") (Var 0 "x")))))
      (checkType M.empty $ Abs "S" Star (Abs "P" (Pi "" (Var 0 "S") Star) (Abs "x" (Var 1 "S") (App (Var 1 "P") (Var 0 "x")))))

big :: Test
big =

@ -67,13 +59,13 @@ big =
    assertEqual
      "fun (S : *) (P Q : S -> *) (H : forall (x : S), P x -> Q x) (G : forall (x : S), P x) (x : S) . H x (G x)"
      (Right $ Pi "S" Star (Pi "P" (Pi "" (Var 0 "S") Star) (Pi "Q" (Pi "" (Var 1 "S") Star) (Pi "" (Pi "x" (Var 2 "S") (Pi "" (App (Var 2 "P") (Var 0 "x")) (App (Var 2 "Q") (Var 1 "x")))) (Pi "" (Pi "x" (Var 3 "S") (App (Var 3 "P") (Var 0 "x"))) (Pi "x" (Var 4 "S") (App (Var 3 "Q") (Var 0 "x"))))))))
      (checkType M.empty [] $ Abs "S" Star (Abs "P" (Pi "" (Var 0 "S") Star) (Abs "Q" (Pi "" (Var 1 "S") Star) (Abs "H" (Pi "x" (Var 2 "S") (Pi "" (App (Var 2 "P") (Var 0 "x")) (App (Var 2 "Q") (Var 1 "x")))) (Abs "G" (Pi "x" (Var 3 "S") (App (Var 3 "P") (Var 0 "x"))) (Abs "x" (Var 4 "S") (App (App (Var 2 "H") (Var 0 "x")) (App (Var 1 "G") (Var 0 "x")))))))))
      (checkType M.empty $ Abs "S" Star (Abs "P" (Pi "" (Var 0 "S") Star) (Abs "Q" (Pi "" (Var 1 "S") Star) (Abs "H" (Pi "x" (Var 2 "S") (Pi "" (App (Var 2 "P") (Var 0 "x")) (App (Var 2 "Q") (Var 1 "x")))) (Abs "G" (Pi "x" (Var 3 "S") (App (Var 3 "P") (Var 0 "x"))) (Abs "x" (Var 4 "S") (App (App (Var 2 "H") (Var 0 "x")) (App (Var 1 "G") (Var 0 "x")))))))))

tests :: Test
tests =
  TestList
    [ TestLabel "sort" sort
    , TestLabel "λ→" $ TestList [stlc, freeVar]
    , TestLabel "λ→" $ TestList [freeVar]
    , TestLabel "λ2" polyIdent
    , TestLabel "λω" $ TestList [typeCons, useTypeCons]
    , TestLabel "λP2" $ TestList [dependent, useDependent]

@ -35,19 +35,10 @@ substE1 =
      after
      (subst 0 (Var 2 "B") inner)

whnfE1 :: Test
whnfE1 =
  TestCase $
    assertEqual
      "e1 B"
      after
      (whnf $ App e1 $ Var 2 "B")

tests :: Test
tests =
  TestList
    [ TestLabel "fFree" fFree
    , TestLabel "incE1" incE1
    , TestLabel "substE1" substE1
    , TestLabel "whnfE1" whnfE1
    ]

@ -1,70 +0,0 @@
module ParserTests (tests) where

import Expr (Expr (..))
import Parser (pAll)
import Test.HUnit

ident :: Expr
ident =
  Abs "A" Star $
    Abs "x" (Var 0 "A") $
      Var 0 "x"

ident1 :: Test
ident1 =
  TestCase $
    assertEqual
      "ident 1"
      (Right ident)
      (pAll "lambda (A : *) . lambda (x : A) . x")

ident2 :: Test
ident2 =
  TestCase $
    assertEqual
      "ident 2"
      (Right ident)
      (pAll "fun (A : *) (x : A) => x")

double :: Expr
double =
  Abs "A" Star $
    Abs "B" Star $
      Abs "f" (Pi "" (Var 1 "A") (Var 1 "B")) $
        Abs "g" (Pi "" (Var 2 "A") (Var 2 "B")) $
          Abs "x" (Var 3 "A") $
            App (Var 2 "f") (Var 0 "x")

doubleTest :: Test
doubleTest =
  TestCase $
    assertEqual
      "double"
      (Right double)
      (pAll "fun (A B : *) (f g : A -> B) (x : A) => f x")

theorem :: Expr
theorem =
  Abs "S" Star $
    Abs "P" (Pi "" (Var 0 "S") Star) $
      Abs "Q" (Pi "" (Var 1 "S") Star) $
        Abs "HP" (Pi "x" (Var 2 "S") (App (Var 2 "P") (Var 0 "x"))) $
          Abs "H" (Pi "x" (Var 3 "S") (Pi "" (App (Var 3 "P") (Var 0 "x")) (App (Var 3 "Q") (Var 1 "x")))) $
            Abs "x" (Var 4 "S") (App (App (Var 1 "H") (Var 0 "x")) (App (Var 2 "HP") (Var 0 "x")))

theoremTest :: Test
theoremTest =
  TestCase $
    assertEqual
      "theorem"
      (Right theorem)
      (pAll "fun (S : *) (P Q : S -> *) (HP : ∏ (x : S) . P x) (H : forall (x : S), P x -> Q x) (x : S) => H x (HP x)")

tests :: Test
tests =
  TestList
    [ TestLabel "ident1" ident1
    , TestLabel "ident2" ident2
    , TestLabel "double" doubleTest
    , TestLabel "theorem" theoremTest
    ]

@ -2,7 +2,6 @@ module Main where

import qualified CheckTests as C
import qualified ExprTests as E
import qualified ParserTests as P
import qualified System.Exit as Exit
import Test.HUnit

@ -10,7 +9,6 @@ tests :: Test
tests =
  TestList
    [ TestLabel "ExprTests" E.tests
    , TestLabel "ParserTests" P.tests
    , TestLabel "CheckTests" C.tests
    ]