Compare commits
5 commits
| Author | SHA1 | Date |
|---|---|---|
| | 82b2077d4a | |
| | 5994096bb1 | |
| | 6b965fda1d | |
| | 1544a33644 | |
| | 2c1f193d77 | |
12 changed files with 342 additions and 291 deletions
README.md (new file, 203 lines)
@@ -0,0 +1,203 @@
# Perga

`perga` is a basic proof assistant based on a dependently typed lambda calculus (the calculus of constructions), augmented with a simple universe hierarchy (the Extended Calculus of Constructions, but without Σ-types, though I intend to add them). This implementation is based on the exposition in Nederpelt and Geuvers' *Type Theory and Formal Proof*. Right now it is a perfectly capable higher-order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on. At the moment, `perga` is comparable to Automath in terms of power and ease of use, being slightly more powerful than Automath and a touch less ergonomic.

# Syntax

The syntax is fairly flexible and should work as you expect. Identifiers can be Unicode as long as `megaparsec` considers them alphanumeric. `λ` and `Π` abstractions can be written in the usual ways, which should be clear from the examples below. Additionally, as usual, an arrow can be used as an abbreviation for a Π type whose parameter does not appear in the body.
## Terms

All of the following example terms correctly parse, and should look familiar if you are used to standard lambda calculus notation or Coq syntax.
```perga
λ (α : *) ⇒ λ (β : *) ⇒ λ (x : α) ⇒ λ (y : β) ⇒ x
fun (A B C : *) (g : B → C) (f : A → B) (x : A) : C ⇒ g (f x)
fun (S : *) (P Q : S -> *) (H : Π (x : S), P x -> Q x) (HP : forall (x : S), P x) => fun (x : S) => H x (HP x)
```
To be perfectly clear, `λ` abstractions can be written with either `λ` or `fun`, and are separated from their bodies by either `=>` or `⇒`. Binders with the same type can be grouped together, and multiple binders can occur between the `λ` and the `⇒`. You can also optionally add the return type after the binders and before the `⇒`, though it can always be inferred and so isn't necessary.
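For instance, all of the following spellings denote the same function (a quick sketch of the options just described):

```perga
fun (α β : *) (x : α) (y : β) ⇒ x
fun (α β : *) (x : α) (y : β) : α ⇒ x
λ (α : *) ⇒ λ (β : *) ⇒ λ (x : α) => λ (y : β) => x
```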
`Π` types can be written with `Π`, `∀`, or `forall`, and are separated from their bodies with a `,`. Arrow types can be written `->` or `→`. As with `λ` abstractions, binders with the same type can be grouped, and multiple binders can occur between the `Π` and the `,`. Also as with `λ` abstractions, the "return" type can optionally be added after the binders and before the `,`, though this is even less useful, since it is nearly always `*`, the type of types.
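For instance, the following are all the same type, with the arrows in the last two abbreviating Π binders that are never used (a quick sketch):

```perga
Π (A : *), Π (B : *), Π (x : A), Π (y : B), A
∀ (A B : *), A → B → A
forall (A B : *), A -> B -> A
```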
The universe hierarchy is very similar to Coq, with `* : □ : □₁ : □₂ : ...`, where `*` is impredicative and the `□ᵢ` are predicative. There is no universe polymorphism, making this rather limited. A lack of inductive types (or even just built-in `Σ`-types and sum types) makes doing logic at any universe level other than `*` extremely limited. For ease of typing, `[]1`, `□1`, `[]₁`, and `□₁` are all the same.
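As a small illustration (a sketch, not taken from the repository; the name `TypeOp` is made up): since `*` itself lives in `□`, a type built out of `*` lands one level up, so something like the following should be accepted with the ascription `□`:

```perga
-- the type of unary type operators; * : □, so * → * lives in □ as well
def TypeOp : □ := * → *;
```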
`let` expressions have the syntax shown below.

```
let ( (<ident> (: <type>)? := <expr>) )+ in <expr> end
```

Below is a more concrete example.
```perga
let (a : A := (and_elim_l A (and B C) h))
    (bc := (and_elim_r A (and B C) h))
    (b := (and_elim_l B C bc))
    (c := (and_elim_r B C bc))
in
    and_intro (and A B) C (and_intro A B a b) c
end
```

You can also directly bind functions. Here's an example.

```perga
let (f (A : *) (x : A) : A := x) in
    f x
end
```

The syntax for binding functions is just like with definitions.
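Here is one more sketch (hypothetical, mirroring the `const` definition shown in the next section) of a let-bound function with grouped binders and an ascription:

```perga
let (const (α β : *) (x : α) (y : β) : α := x)
in
    const
end
```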
## Definitions and Axioms

Definitions and axioms have the abstract syntax shown below.

```
def <ident> (<ident> : <type>)* (: <type>)? := <term>;
axiom <ident> (<ident> : <type>)* : <type>;
```

(The distinction between `<type>` and `<term>` is purely for emphasis; they are the exact same syntactic category.) Here are a couple of definitions of the `const` function (the first example term above) showing the syntactic options, and a more complex example declaring functional extensionality as an axiom (assuming equality has previously been defined with the type `eq : Π (A : *), A → A → *`). Duplicate definitions are not normally allowed and will result in an error.
```perga
def const := λ (α : *) ⇒ λ (β : *) ⇒ λ (x : α) => λ (y : β) => x;
def const : ∀ (α β : *), α → β → α := fun (α β : *) (x : α) (y : β) ⇒ x;
def const (α β : *) (x : α) (y : β) : α := x;

axiom funext (A B : *) (f g : A → B) : (∀ (x : A), eq B (f x) (g x)) → eq (A → B) f g;
```
Type ascriptions are optional in both definitions and `let` bindings. If included, `perga` will check that your definition matches the ascription and, if so, will remember the way you wrote the type when printing inferred types, which is particularly handy when using abbreviations for complex types. `perga` has no problem inferring the types of top-level definitions, as they are completely determined by the term, but I recommend including ascriptions most of the time: they serve as a nice piece of documentation, help guide the implementation process, and make sure you are implementing the type you think you are.
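For instance (a sketch; the names `proj1` and `proj2` are made up for illustration), both of the following are accepted, and only the second is reported back with the `∀`-spelling it was ascribed:

```perga
-- no ascription: the type is inferred from the term
def proj1 (α β : *) (x : α) (y : β) := x;

-- ascription: checked against the term, and remembered when printing the type
def proj2 : ∀ (α β : *), α → β → α := fun (α β : *) (x : α) (y : β) ⇒ x;
```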
If the RHS of a definition is `axiom`, then `perga` will simply assume that the identifier is an inhabitant of the type ascribed to it (so when using axioms, a type ascription is required). This is how postulates are introduced.
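For example, a Peano-style development (in the spirit of <./examples/peano.pg>; a sketch, not a verbatim excerpt) starts from postulates like these, using the standalone `axiom` form:

```perga
axiom nat : *;
axiom zero : nat;
axiom suc : nat → nat;
```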
## Sections

There is a Coq-like section mechanism. You open a section with `section <identifier>` and close it with `end <identifier>`. The identifiers used must match; they don't serve any purpose beyond organization.

```perga
section Test

variable (A B : *);

def id (x : A) := x;
def const (x : A) (y : B) := id x;

end Test

def id_full (A : *) (x : A) := id A x;
def const_full (A B : *) (x : A) (y : B) := const A B x y;
```
In a section, you can add section variables to the context with either `variable` or `hypothesis`. The syntax is the same as for function parameters, so you can declare multiple variables on one line and group section variables of the same type. Any definition made inside the section is then automatically generalized over the section variables it uses, and any use of such a definition inside the section (like `id x` in the definition of `const` above) automatically has the necessary section variables applied.
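`hypothesis` can be used in place of `variable` wherever it reads better. A hypothetical sketch (the section and definition names are made up):

```perga
section Implication

variable (A B : *);
hypothesis (f : A → B);

-- on closing the section, modus is generalized over everything it used,
-- ending up with a type along the lines of: ∀ (A B : *), (A → B) → A → B
def modus (a : A) : B := f a;

end Implication
```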
Nested sections are supported, and work as you would expect.

## Comments and preprocessor directives

Line comments are `--` like in Haskell, and block comments are `[* *]` somewhat like ML (and nest properly). There is no significant whitespace, so you are free to format code as you wish.
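A quick sketch:

```perga
-- a line comment
[* a block comment [* which can nest *] *]
def id (A : *) (x : A) := x;
```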
There isn't a proper module system (yet), but you can include other files in a dumb, C-preprocessor way with `@include <filepath>` (NOTE: this unfortunately messes up line numbers in error messages). Filepaths are relative to the current file. Additionally, `@include` keeps track of what has already been included, so duplicate inclusions are skipped and no include guards are necessary.
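For example (hypothetical file name), a file that builds on the logic development might begin with:

```perga
-- the path is resolved relative to the file containing the directive
@include logic.pg

-- a second @include of the same file anywhere later would be skipped automatically
```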
# Usage

Running `perga` without any arguments drops you into a basic repl. From here, you can type in definitions, which `perga` will typecheck; previous definitions are accessible in later ones. The usual readline keybindings are available, including history navigation, and history is saved between sessions (in `~/.cache/perga/history`). In the repl, you can enter `:q`, press C-c, or press C-d to quit. Entering `:e` shows everything that has been defined along with its type. If you want to see the value of an identifier defined in the environment, enter `:v <ident>`. Entering `:t <expr>` prints the type of an expression. Entering `:n <expr>` fully normalizes an expression (including unfolding definitions), while `:w <expr>` reduces it to weak head normal form. Finally, `:l <filepath>` loads a file.
Here's an example session showing the capabilities of the repl.

```
> :l examples/computation.pg
loading: examples/computation.pg
> :e
eight : nat
eq : ∏ (A : *) , A -> A -> *
eq_cong : ∏ (A B : *) (x y : A) (f : A -> B) . eq A x y -> eq B (f x) (f y)
eq_refl : ∏ (A : *) (x : A) . eq A x x
eq_sym : ∏ (A : *) (x y : A) . eq A x y -> eq A y x
eq_trans : ∏ (A : *) (x y z : A) . eq A x y -> eq A y z -> eq A x z
five : nat
four : nat
nat : *
nine : nat
one : nat
one_plus_one_is_two : eq nat (plus one one) two
plus : nat -> nat -> nat
seven : nat
six : nat
suc : nat -> nat
ten : nat
three : nat
times : nat -> nat -> nat
two : nat
two_plus_two_is_four : eq nat (plus two two) four
two_times_five_is_ten : eq nat (times two five) ten
zero : nat
> :n plus one one
λ (A : *) (f : A -> A) (x : A) => f (f x)
> :n two
λ (A : *) (f : A -> A) (x : A) => f (f x)
> :w plus one one
λ (A : *) (f : A -> A) (x : A) => one A f (one A f x)
> :w two
λ (A : *) (f : A -> A) (x : A) => f (one A f x)
```
You can also give `perga` a filename as an argument, in which case it will typecheck every definition in the file. If you give `perga` multiple filenames, it will process each one in turn, sharing an environment between them. Upon finishing, which should be nearly instantaneous, it prints each file it processed, followed by "success!" if everything typechecked, or by the first error it encountered otherwise.
# Simple Example

There are many well-commented examples in the <./examples/> folder. These include

- <./examples/logic.pg>, which defines the standard logical operators and proves standard results about them,
- <./examples/classical.pg>, which asserts the law of the excluded middle as an axiom, and proves several results that require it,
- <./examples/computation.pg>, which demonstrates using `perga` for computational purposes,
- <./examples/algebra.pg>, which defines standard algebraic structures and proves results about them,
- <./examples/category.pg>, which formalizes some very basic category theory, and
- <./examples/peano.pg>, which proves standard arithmetic results from the Peano axioms.
I intend to extend these examples further.

Here is an example file defining Leibniz equality and proving that it is reflexive, symmetric, and transitive.
```perga
-- file: equality.pg

-- Defining Leibniz equality
-- Note that we can leave the ascription off
def eq (A : *) (x y : A) := forall (P : A -> *), P x -> P y;

-- Equality is reflexive, which is easy to prove
-- Here we give an ascription so that when `perga` reports the type,
-- it references `eq` rather than inferring the type.
def eq_refl (A : *) (x : A) : eq A x x := fun (P : A -> *) (Hx : P x) => Hx;

-- Equality is symmetric. This one's a little harder to prove.
def eq_sym (A : *) (x y : A) (Hxy : eq A x y) : eq A y x :=
    fun (P : A -> *) (Hy : P y) =>
        Hxy (fun (z : A) => P z -> P x) (fun (Hx : P x) => Hx) Hy;

-- Equality is transitive.
def eq_trans (A : *) (x y z : A) (Hxy : eq A x y) (Hyz : eq A y z) : eq A x z :=
    fun (P : A -> *) (Hx : P x) => Hyz P (Hxy P Hx);
```
Running `perga equality.pg` yields the following output.

```
loading: equality.pg
success!
```

This means our proofs were accepted.

If we had an error in the proofs, we would get a somewhat useful error message. For example, if we had defined `eq_trans` as shown below, it would be incorrect (missing the `P` after `Hxy`).
```perga
def eq_trans (A : *) (x y z : A) (Hxy : eq A x y) (Hyz : eq A y z) : eq A x z :=
    fun (P : A -> *) (Hx : P x) => Hyz P (Hxy Hx);
```
Then running `perga equality.pg` yields the following output.

```
loading: equality.pg
19:50:
   |
19 |     fun (P : A -> *) (Hx : P x) => Hyz P (Hxy Hx);
   |                                                  ^
Cannot unify 'A -> *' with 'P x' when evaluating 'Hxy Hx'
```
This indicates that, when evaluating `Hxy Hx`, `perga` expected something of type `A -> *` but instead found something of type `P x`. Since `P` has type `A -> *`, we can then realize that we forgot the `P`.
README.org (deleted, 232 lines)

@@ -1,232 +0,0 @@
#+title: Perga

#+options: toc:nil

=perga= is a basic proof assistant based on a dependently typed lambda calculus (calculus of constructions), but augmented with a simple universe hierarchy (Extended Calculus of Constructions but without Σ-types, though I intend to add them). This implementation is based on the exposition in Nederpelt and Geuvers' /Type Theory and Formal Proof/. Right now it is a perfectly capable higher order logic proof checker, though there is lots of room for improved ergonomics and usability, which I intend to work on. At the moment, =perga= is comparable to Automath in terms of power and ease of use, being slightly more powerful than Automath, and a touch less ergonomic.
* Installation

To install, you first need to install Haskell. I would recommend using [[https://www.haskell.org/ghcup/][ghcup]]. It is tested to work with version 9.8.2 of GHC. Then cloning the repo and running =cabal install= should suffice.

#+begin_src bash
$ git clone https://forgejo.ballcloud.cc/wball/perga
$ cd perga
$ cabal install
#+end_src

** Tooling

There isn't much in the way of tooling, though I would strongly recommend installing either the [[https://forgejo.ballcloud.cc/wball/perga.nvim][neovim]] or [[https://forgejo.ballcloud.cc/wball/perga-mode][emacs]] plugins/packages in order to get nice syntax highlighting.
* Syntax

*Warning*: this section is a little out of date. I've made several undocumented tweaks to the syntax since writing this. This will get you close enough. All of the examples in the [[./examples]] folder correctly parse and typecheck, so peruse those files if you run into any problems.
(The remainder of the deleted README.org, covering Terms, Definitions and Axioms, Sections, Comments and preprocessor directives, Usage, and the Simple Example, is the same material as the new README.md above, written in Org syntax and using =★= for the base sort.)
@@ -35,20 +35,17 @@ def double_neg_intro (A : ★) (a : A) : not (not A) :=

-- Conjunction

def ∧ (A B : ★) : ★ := forall (C : ★), (A → B → C) → C;
def ∧ (A B : ★) : ★ := {A × B};
infixl 10 ∧;

-- introduction rule
def and_intro (A B : ★) (a : A) (b : B) : A ∧ B :=
    fun (C : ★) (H : A → B → C) => H a b;
def and_intro (A B : ★) (a : A) (b : B) : A ∧ B := <a, b>;

-- left elimination rule
def and_elim_l (A B : ★) (ab : A ∧ B) : A :=
    ab A (fun (a : A) (b : B) => a);
def and_elim_l (A B : ★) (ab : A ∧ B) : A := π₁ ab;

-- right elimination rule
def and_elim_r (A B : ★) (ab : A ∧ B) : B :=
    ab B (fun (a : A) (b : B) => b);
def and_elim_r (A B : ★) (ab : A ∧ B) : B := π₂ ab;

-- --------------------------------------------------------------------------------------------------------------

@@ -149,31 +146,25 @@ section Theorems

-- ~(A ∨ B) => ~A ∧ ~B
def de_morgan1 (h : not (A ∨ B)) : not A ∧ not B :=
    and_intro (not A) (not B)
        ([a : A] h (or_intro_l A B a))
        ([b : B] h (or_intro_r A B b));
    <[a : A] h (or_intro_l A B a)
    ,[b : B] h (or_intro_r A B b)>;

-- ~A ∧ ~B => ~(A ∨ B)
def de_morgan2 (h : not A ∧ not B) : not (A ∨ B) :=
    fun (contra : A ∨ B) =>
        or_elim A B false contra
            (and_elim_l (not A) (not B) h)
            (and_elim_r (not A) (not B) h);
        or_elim A B false contra (π₁ h) (π₂ h);

-- ~A ∨ ~B => ~(A ∧ B)
def de_morgan3 (h : not A ∨ not B) : not (A ∧ B) :=
    fun (contra : A ∧ B) =>
        or_elim (not A) (not B) false h
            (fun (na : not A) => na (and_elim_l A B contra))
            (fun (nb : not B) => nb (and_elim_r A B contra));
            (fun (na : not A) => na (π₁ contra))
            (fun (nb : not B) => nb (π₂ contra));

-- the last one (~(A ∧ B) => ~A ∨ ~B) is not possible constructively

-- A ∧ B => B ∧ A
def and_comm (h : A ∧ B) : B ∧ A :=
    and_intro B A
        (and_elim_r A B h)
        (and_elim_l A B h);
def and_comm (h : A ∧ B) : B ∧ A := <π₂ h, π₁ h>;

-- A ∨ B => B ∨ A
def or_comm (h : A ∨ B) : B ∨ A :=

@@ -183,23 +174,11 @@ section Theorems

-- A ∧ (B ∧ C) => (A ∧ B) ∧ C
def and_assoc_l (h : A ∧ (B ∧ C)) : (A ∧ B) ∧ C :=
    let (a := (and_elim_l A (B ∧ C) h))
        (bc := (and_elim_r A (B ∧ C) h))
        (b := (and_elim_l B C bc))
        (c := (and_elim_r B C bc))
    in
        and_intro (A ∧ B) C (and_intro A B a b) c
    end;
    <<π₁ h, π₁ (π₂ h)>, π₂ (π₂ h)>;

-- (A ∧ B) ∧ C => A ∧ (B ∧ C)
def and_assoc_r (h : (A ∧ B) ∧ C) : A ∧ (B ∧ C) :=
    let (ab := and_elim_l (A ∧ B) C h)
        (a := and_elim_l A B ab)
        (b := and_elim_r A B ab)
        (c := and_elim_r (A ∧ B) C h)
    in
        and_intro A (B ∧ C) a (and_intro B C b c)
    end;
    <π₁ (π₁ h), <π₂ (π₁ h), π₂ h>>;

-- A ∨ (B ∨ C) => (A ∨ B) ∨ C
def or_assoc_l (h : A ∨ (B ∨ C)) : (A ∨ B) ∨ C :=

@@ -221,21 +200,15 @@ section Theorems

-- A ∧ (B ∨ C) => A ∧ B ∨ A ∧ C
def and_distrib_l_or (h : A ∧ (B ∨ C)) : A ∧ B ∨ A ∧ C :=
    or_elim B C (A ∧ B ∨ A ∧ C) (and_elim_r A (B ∨ C) h)
        (fun (b : B) => or_intro_l (A ∧ B) (A ∧ C)
            (and_intro A B (and_elim_l A (B ∨ C) h) b))
        (fun (c : C) => or_intro_r (A ∧ B) (A ∧ C)
            (and_intro A C (and_elim_l A (B ∨ C) h) c));
    or_elim B C (A ∧ B ∨ A ∧ C) (π₂ h)
        (fun (b : B) => or_intro_l (A ∧ B) (A ∧ C) <π₁ h, b>)
        (fun (c : C) => or_intro_r (A ∧ B) (A ∧ C) <π₁ h, c>);

-- A ∧ B ∨ A ∧ C => A ∧ (B ∨ C)
def and_factor_l_or (h : A ∧ B ∨ A ∧ C) : A ∧ (B ∨ C) :=
    or_elim (A ∧ B) (A ∧ C) (A ∧ (B ∨ C)) h
        (fun (ab : A ∧ B) => and_intro A (B ∨ C)
            (and_elim_l A B ab)
            (or_intro_l B C (and_elim_r A B ab)))
        (fun (ac : A ∧ C) => and_intro A (B ∨ C)
            (and_elim_l A C ac)
            (or_intro_r B C (and_elim_r A C ac)));
        (fun (ab : A ∧ B) => <π₁ ab, or_intro_l B C (π₂ ab)>)
        (fun (ac : A ∧ C) => <π₁ ac, or_intro_r B C (π₂ ac)>);

-- Thanks Quinn!
-- A ∨ B => ~B => A
@@ -12,7 +12,7 @@
-- powerful enough to construct the natural numbers (or at least to prove the
-- Peano axioms as theorems from a definition constructible in perga). However,
-- working with axioms is extremely common in math. As such, perga has a system
-- for doing just that, namely the *axiom* system.
-- for doing just that, namely the ★axiom★ system.
--
-- In a definition, rather than give a value for the term, we can give it the
-- value axiom, in which case a type ascription is mandatory. Perga will then

@@ -24,7 +24,7 @@ axiom nat : ★;

-- As you can imagine, this can be risky. For instance, there's nothing stopping
-- us from saying
-- axiom uh_oh : false;
-- uh_oh : false := axiom;
-- or stipulating more subtly contradictory axioms. As such, as in mathematics,
-- axioms should be used with care.
--

@@ -40,6 +40,7 @@ axiom nat : ★;
-- With these warnings in place, the Peano axioms are proven to be consistent,
-- so we should be fine. I'm formalizing the second order axioms given in the
-- wikipedia article on the Peano axioms
-- (https://en.wikipedia.org/wiki/Peano_axioms).

-- axiom 1: 0 is a natural number
axiom zero : nat;

@@ -118,8 +119,8 @@ def suc_or_zero : forall (n : nat), szc n :=
-- to be able to define addition in the first place. The normal way addition is
-- defined is by the following two equations:
--
-- 1. n + 0 = n
-- 2. n + (suc m) = suc (n + m)
-- 1) n + 0 = n
-- 2) n + (suc m) = suc (n + m)
--
-- It is clear that this definition is ok, since m gets smaller with every
-- application of equation 2, until m reaches zero, whereupon we can use

@@ -131,13 +132,13 @@ def suc_or_zero : forall (n : nat), szc n :=
-- Theorem (recursive definitions):
-- For every type A, element z : A, and function fS : nat → A → A, there
-- exists a unique function f : nat → A satisfying
-- 1. f 0 = z
-- 2. forall n : nat, f (suc n) = fS n (f n)
-- 1) f 0 = z
-- 2) forall n : nat, f (suc n) = fS n (f n)
--
-- Once we've proved this theorem, we can easily define addition such that,
-- i.e. that for any fixed n : nat, "n+" is the unique function satisfying
-- 1. n + 0 = n
-- 2. forall m : nat, n + (suc m) = suc (n + m)
-- 1) n + 0 = n
-- 2) forall m : nat, n + (suc m) = suc (n + m)
-- Notice that this is the exact form we need in order to apply the theorem.
--
-- However, proving this theorem is *extremely long and difficult*. I would

@@ -147,8 +148,8 @@ def suc_or_zero : forall (n : nat), szc n :=
--
-- Here's our game plan. We will define a relation R : nat → A → ★ such that
-- R x y iff for every relation Q : nat → A → ★ satisfying
-- 1. Q 0 z and
-- 2. forall (x : nat) (y : A), Q x y → Q (suc x) (fS x y),
-- 1) Q 0 z and
-- 2) forall (x : nat) (y : A), Q x y → Q (suc x) (fS x y),
-- Q x y.
-- In more mathy lingo, we take R to be the intersection of every relation
-- satisfying 1 and 2. From there we will, with much effort, prove that R is
lib/Check.hs (+17)

@@ -16,6 +16,12 @@ matchPi x mt =
    (Pi _ a b) -> pure (a, b)
    t -> throwError $ ExpectedPiType x t

matchSigma :: Expr -> Expr -> ReaderT Env Result (Expr, Expr)
matchSigma x mt =
  whnf mt >>= \case
    (Sigma _ a b) -> pure (a, b)
    t -> throwError $ ExpectedSigmaType x t

findLevel :: Context -> Expr -> ReaderT Env Result Integer
findLevel g a = do
  s <- findType g a

@@ -72,6 +78,17 @@ findType g e@(Let _ (Just t) v b) = do
  _ <- findType g t
  betaEquiv' e t res
  pure t
findType g (Sigma _ a b) = do
  aSort <- findType g a
  bSort <- findType g b
  liftEither $ compSort a b aSort bSort
findType g (Pair a b) = do
  aType <- findType g a
  bType <- findType g b
  validateType g $ Sigma "" aType bType
  pure $ Sigma "" aType bType
findType g (Pi1 x) = fst <$> (findType g x >>= matchSigma x)
findType g (Pi2 x) = snd <$> (findType g x >>= matchSigma x)

checkType :: Env -> Expr -> Result Expr
checkType env t = runReaderT (findType [] t) env
@@ -116,6 +116,10 @@ usedVars (I.Let name ascr value body) = saveState $ do
  ascr' <- traverse usedVars ascr
  removeName name
  S.union (ty' `S.union` (ascr' ?: S.empty)) <$> usedVars body
usedVars (I.Sigma m n) = S.union <$> usedVars m <*> usedVars n
usedVars (I.Pair m n) = S.union <$> usedVars m <*> usedVars n
usedVars (I.Pi1 x) = usedVars x
usedVars (I.Pi2 x) = usedVars x

-- traverse the body of a definition, adding the necessary section arguments to
-- any definitions made within the section

@@ -142,6 +146,10 @@ traverseBody (I.Let name ascr value body) = saveState $ do
  value' <- traverseBody value
  removeName name
  I.Let name ascr' value' <$> traverseBody body
traverseBody (I.Sigma m n) = I.Sigma <$> traverseBody m <*> traverseBody n
traverseBody (I.Pair m n) = I.Pair <$> traverseBody m <*> traverseBody n
traverseBody (I.Pi1 x) = I.Pi1 <$> traverseBody x
traverseBody (I.Pi2 x) = I.Pi2 <$> traverseBody x

mkPi :: (Text, IRExpr) -> IRExpr -> IRExpr
mkPi (param, typ) = I.Pi param typ

@@ -206,3 +214,7 @@ elaborate ir = evalState (elaborate' ir) []
      ty' <- elaborate' ty
      modify (name :)
      E.Let name (Just ty') val' <$> elaborate' body
    elaborate' (I.Sigma m n) = E.Sigma <$> elaborate' m <*> elaborate' n
    elaborate' (I.Pair m n) = E.Pair <$> elaborate' m <*> elaborate' n
    elaborate' (I.Pi1 x) = E.Pi1 <$> elaborate' x
    elaborate' (I.Pi2 x) = E.Pi2 <$> elaborate' x
@@ -9,6 +9,7 @@ data Error
  = UnboundVariable Text
  | NotASort Expr
  | ExpectedPiType Expr Expr
  | ExpectedSigmaType Expr Expr
  | NotEquivalent Expr Expr Expr
  | PNMissingType Text
  | DuplicateDefinition Text

@@ -18,6 +19,7 @@ instance Pretty Error where
  pretty (UnboundVariable x) = "Unbound variable: '" <> pretty x <> "'"
  pretty (NotASort x) = group $ hang 4 ("Term:" <> line <> pretty x) <> line <> "is not a type"
  pretty (ExpectedPiType x t) = group $ hang 4 ("Term:" <> line <> pretty x) <> line <> hang 4 ("is not a function, instead is type" <> line <> pretty t)
  pretty (ExpectedSigmaType x t) = group $ hang 4 ("Term:" <> line <> pretty x) <> line <> hang 4 ("is not a pair, instead is type" <> line <> pretty t)
  pretty (NotEquivalent a a' e) =
    group $
      hang 4 ("Cannot unify" <> line <> pretty a)
lib/Eval.hs (+26)

@@ -45,6 +45,10 @@ subst k s (App m n) = App (subst k s m) (subst k s n)
subst k s (Abs x m n) = Abs x (subst k s m) (subst (k + 1) (incIndices s) n)
subst k s (Pi x m n) = Pi x (subst k s m) (subst (k + 1) (incIndices s) n)
subst k s (Let x t v b) = Let x t (subst k s v) (subst (k + 1) (incIndices s) b)
subst k s (Sigma x m n) = Sigma x (subst k s m) (subst (k + 1) (incIndices s) n)
subst k s (Pair m n) = Pair (subst k s m) (subst k s n)
subst k s (Pi1 x) = Pi1 (subst k s x)
subst k s (Pi2 x) = Pi2 (subst k s x)

envLookupVal :: Text -> ReaderT Env Result Expr
envLookupVal n = asks ((_val <$>) . M.lookup n) >>= maybe (throwError $ UnboundVariable n) pure

@@ -59,6 +63,12 @@ reduce (Abs x t v) = Abs x <$> reduce t <*> reduce v
reduce (Pi x t v) = Pi x <$> reduce t <*> reduce v
reduce (Free n) = envLookupVal n
reduce (Let _ _ v b) = pure $ subst 0 v b
reduce (Sigma x a b) = Sigma x <$> reduce a <*> reduce b
reduce (Pair a b) = Pair <$> reduce a <*> reduce b
reduce (Pi1 (Pair a _)) = pure a
reduce (Pi2 (Pair _ b)) = pure b
reduce (Pi1 x) = Pi1 <$> reduce x
reduce (Pi2 x) = Pi2 <$> reduce x
reduce e = pure e

normalize :: Expr -> ReaderT Env Result Expr

@@ -78,6 +88,18 @@ whnf (App m n) = do
    else whnf $ App m' n
whnf (Free n) = envLookupVal n >>= whnf
whnf (Let _ _ v b) = whnf $ subst 0 v b
whnf (Pi1 (Pair a _)) = pure a
whnf (Pi2 (Pair _ b)) = pure b
whnf (Pi1 x) = do
  x' <- whnf x
  if x == x'
    then pure $ Pi1 x
    else whnf $ Pi1 x'
whnf (Pi2 x) = do
  x' <- whnf x
  if x == x'
    then pure $ Pi2 x
    else whnf $ Pi2 x'
whnf e = pure e

betaEquiv :: Expr -> Expr -> ReaderT Env Result Bool

@@ -99,6 +121,10 @@ betaEquiv e1 e2
      (Pi _ t1 v1, Pi _ t2 v2) -> (&&) <$> betaEquiv t1 t2 <*> betaEquiv v1 v2
      (Let _ _ v b, e) -> betaEquiv (subst 0 v b) e
      (e, Let _ _ v b) -> betaEquiv (subst 0 v b) e
      (Sigma _ a b, Sigma _ a' b') -> (&&) <$> betaEquiv a a' <*> betaEquiv b b'
      (Pair a b, Pair a' b') -> (&&) <$> betaEquiv a a' <*> betaEquiv b b'
      (Pi1 x, Pi1 x') -> betaEquiv x x'
      (Pi2 x, Pi2 x') -> betaEquiv x x'
      _ -> pure False -- remaining cases impossible, false, or redundant

betaEquiv' :: Expr -> Expr -> Expr -> ReaderT Env Result ()
lib/Expr.hs (+21)

@@ -15,6 +15,10 @@ data Expr where
  Abs :: Text -> Expr -> Expr -> Expr
  Pi :: Text -> Expr -> Expr -> Expr
  Let :: Text -> Maybe Expr -> Expr -> Expr -> Expr
  Sigma :: Text -> Expr -> Expr -> Expr
  Pair :: Expr -> Expr -> Expr
  Pi1 :: Expr -> Expr
  Pi2 :: Expr -> Expr
  deriving (Show, Ord)

instance Pretty Expr where

@@ -30,6 +34,10 @@ instance Eq Expr where
  (Abs _ t1 b1) == (Abs _ t2 b2) = t1 == t2 && b1 == b2
  (Pi _ t1 b1) == (Pi _ t2 b2) = t1 == t2 && b1 == b2
  (Let _ _ v1 b1) == (Let _ _ v2 b2) = v1 == v2 && b1 == b2
  (Sigma _ x1 y1) == (Sigma _ x2 y2) = x1 == x2 && y1 == y2
  (Pair x1 y1) == (Pair x2 y2) = x1 == x2 && y1 == y2
  (Pi1 x) == (Pi1 y) = x == y
  (Pi2 x) == (Pi2 y) = x == y
  _ == _ = False

occursFree :: Integer -> Expr -> Bool

@@ -42,6 +50,10 @@ occursFree n (App a b) = on (||) (occursFree n) a b
occursFree n (Abs _ a b) = occursFree n a || occursFree (n + 1) b
occursFree n (Pi _ a b) = occursFree n a || occursFree (n + 1) b
occursFree n (Let _ _ v b) = occursFree n v || occursFree (n + 1) b
occursFree n (Sigma _ x y) = occursFree n x || occursFree n y
occursFree n (Pair x y) = occursFree n x || occursFree n y
occursFree n (Pi1 x) = occursFree n x
occursFree n (Pi2 x) = occursFree n x

shiftIndices :: Integer -> Integer -> Expr -> Expr
shiftIndices d c (Var x k)

@@ -55,6 +67,10 @@ shiftIndices d c (App m n) = App (shiftIndices d c m) (shiftIndices d c n)
shiftIndices d c (Abs x m n) = Abs x (shiftIndices d c m) (shiftIndices d (c + 1) n)
shiftIndices d c (Pi x m n) = Pi x (shiftIndices d c m) (shiftIndices d (c + 1) n)
shiftIndices d c (Let x t v b) = Let x t (shiftIndices d c v) (shiftIndices d (c + 1) b)
shiftIndices d c (Sigma x m n) = Sigma x (shiftIndices d c m) (shiftIndices d c n)
shiftIndices d c (Pair m n) = Pair (shiftIndices d c m) (shiftIndices d c n)
shiftIndices d c (Pi1 x) = Pi1 (shiftIndices d c x)
shiftIndices d c (Pi2 x) = Pi2 (shiftIndices d c x)

incIndices :: Expr -> Expr
incIndices = shiftIndices 1 0

@@ -206,6 +222,11 @@ prettyExpr k expr = case expr of
    where
      (binds, body) = collectLets expr
      bindings = sep $ map pretty binds
  (Sigma "" x y) -> parens $ parens (pretty x) <+> "×" <+> parens (pretty y)
  (Sigma x t m) -> parens $ "Σ" <+> pretty x <+> ":" <+> pretty t <> "," <+> pretty m
  (Pair x y) -> parens $ pretty x <> "," <+> pretty y
  (Pi1 x) -> parens $ "π₁" <+> parens (pretty x)
  (Pi2 x) -> parens $ "π₂" <+> parens (pretty x)

prettyT :: Expr -> Text
prettyT = renderStrict . layoutSmart defaultLayoutOptions . pretty
lib/IR.hs (+10)

@@ -27,6 +27,16 @@ data IRExpr
      , letValue :: IRExpr
      , letBody :: IRExpr
      }
  | Sigma
      { prodLeft :: IRExpr
      , prodRight :: IRExpr
      }
  | Pair
      { pairLeft :: IRExpr
      , pairRight :: IRExpr
      }
  | Pi1 IRExpr
  | Pi2 IRExpr
  deriving (Show, Eq, Ord)

data IRSectionDef
@@ -145,7 +145,7 @@ pApp :: Parser IRExpr
pApp = lexeme $ foldl1 App <$> some pTerm

pStar :: Parser IRExpr
pStar = lexeme $ Star <$ (symbol "★" <|> symbol "⋆" <|> pKeyword "TYPE")
pStar = lexeme $ Star <$ symbol "★"

subscriptDigit :: Parser Integer
subscriptDigit =

@@ -174,8 +174,26 @@ pSort = lexeme $ pStar <|> pSquare
pOpSection :: Parser IRExpr
pOpSection = lexeme $ parens $ Var <$> pSymbol

pSigma :: Parser IRExpr
pSigma = lexeme $ between (char '{') (char '}') $ do
  left <- pIRExpr
  _ <- symbol "×"
  Sigma left <$> pIRExpr

pPair :: Parser IRExpr
pPair = lexeme $ between (char '<') (char '>') $ do
  left <- pIRExpr
  _ <- symbol ","
  Pair left <$> pIRExpr

pPi1 :: Parser IRExpr
pPi1 = lexeme $ symbol "π₁" >> Pi1 <$> pIRExpr

pPi2 :: Parser IRExpr
pPi2 = lexeme $ symbol "π₂" >> Pi2 <$> pIRExpr

pTerm :: Parser IRExpr
pTerm = lexeme $ label "term" $ choice [pSort, pPureVar, pVar, try pOpSection, parens pIRExpr]
pTerm = lexeme $ label "term" $ choice [pSort, pPi1, pPi2, pPureVar, pVar, pSigma, pPair, try pOpSection, parens pIRExpr]

pInfix :: Parser IRExpr
pInfix = parseWithPrec 0
@@ -18,7 +18,7 @@ category: Math
build-type: Simple

extra-doc-files: CHANGELOG.md
               , README.org
               , README.md

common warnings
  ghc-options: -Wall

@@ -36,7 +36,7 @@ library perga-lib
      Program

  hs-source-dirs: lib
  build-depends: base
  build-depends: base ^>=4.19.1.0
               , relude
               , filepath
               , megaparsec

@@ -57,7 +57,7 @@ executable perga
  main-is: Main.hs
  other-modules: Repl

  build-depends: base
  build-depends: base ^>=4.19.1.0
               , relude
               , directory
               , filepath