
Assume I have the following Idris source code:

module Source

import Data.Vect

--in order to avoid compiler confusion between Prelude.List.(++), Prelude.String.(++) and Data.Vect.(++)
infixl 0 +++
(+++) : Vect n a -> Vect m a -> Vect (n+m) a
v +++ w = v ++ w
--NB: further down in the question I'll assume this definition isn't needed because the compiler
--    will have enough context to disambiguate between these and figure out that Data.Vect.(++)
--    is the "correct" one to use.

lemma : reverse (n :: ns) +++ (n :: ns) = reverse ns +++ (n :: n :: ns)
lemma {ns = []}       = Refl
lemma {ns = n' :: ns} = ?lemma_rhs

As shown, the base case for lemma is trivially Refl. But I can't seem to find a way to prove the inductive case: the REPL "just" spits out the following

*source> :t lemma_rhs
  phTy : Type
  n1 : phTy
  len : Nat
  ns : Vect len phTy
  n : phTy
-----------------------------------------
lemma_rhs : Data.Vect.reverse, go phTy
                                  (S (S len))
                                  (n :: n1 :: ns)
                                  [n1, n]
                                  ns ++
            n :: n1 :: ns =
            Data.Vect.reverse, go phTy (S len) (n1 :: ns) [n1] ns ++
            n :: n :: n1 :: ns

I understand that phTy stands for "phantom type", the implicit type of the vectors I'm considering. I also understand that go is the name of the function defined in the where clause for the definition of the library function reverse.

Question

How can I continue the proof? Is my inductive strategy sound? Is there a better one?

Context

This has come up in one of my toy projects, where I try to define arbitrary tensors; specifically, this seems to be needed in order to define "full index contraction". I'll elaborate a little bit on that:

I define tensors in a way that's roughly equivalent to

data Tensor : (rank : Nat) -> (shape : Vect rank Nat) -> Type -> Type where
  Scalar : a -> Tensor Z [] a
  Vector : Vect n (Tensor rank shape a) -> Tensor (S rank) (n :: shape) a
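
For concreteness, under this definition a 2×3 matrix of Doubles would look like this (an illustrative value, not from the project):

-- A rank-2 tensor with shape [2, 3], i.e. a 2×3 matrix:
matrix : Tensor 2 [2, 3] Double
matrix = Vector [ Vector [Scalar 1, Scalar 2, Scalar 3]
                , Vector [Scalar 4, Scalar 5, Scalar 6] ]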

Glossing over the rest of the source code (it isn't relevant, and it's quite long and uninteresting as of now), I was able to define the following functions:

contractIndex : Num a =>
                Tensor (r1 + (2 + r2)) (s1 ++ (n :: n :: s2)) a ->
                Tensor (r1 + r2) (s1 ++ s2) a
tensorProduct : Num a =>
                Tensor r1 s1 a ->
                Tensor r2 s2 a ->
                Tensor (r1 + r2) (s1 ++ s2) a
contractProduct : Num a =>
                  Tensor (S r1) s1 a ->
                  Tensor (S r2) ((last s1) :: s2) a ->
                  Tensor (r1 + r2) ((take r1 s1) ++ s2) a
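
To make the shape arithmetic concrete (with hypothetical dimensions), contracting the two adjacent indices of dimension 5 in a rank-3 tensor of shape [2, 5, 5] should give a rank-1 tensor of shape [2]. Pinning the implicits by hand, that specialisation would read something like this (contractIndexExample is just a throwaway name):

-- Contract the two inner indices of dimension 5 (illustrative specialisation only):
contractIndexExample : Num a => Tensor 3 [2, 5, 5] a -> Tensor 1 [2] a
contractIndexExample = contractIndex {r1 = 1} {r2 = 0} {s1 = [2]} {s2 = []} {n = 5}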

and I'm working on this other one

fullIndexContraction : Num a =>
                       Tensor r (reverse ns) a ->
                       Tensor r ns a ->
                       Tensor 0 [] a
fullIndexContraction {r = Z}   {ns = []}      t s = t * s
fullIndexContraction {r = S r} {ns = n :: ns} t s = ?rhs

that should "iterate contractProduct as much as possible (that is, r times)"; equivalently, it could be possible to define it as tensorProduct composed with as many contractIndex as possible (again, that amount should be r).

I'm including all this because maybe it's easier to just solve this problem without proving the lemma above: if that were the case, I'd be fully satisfied as well. I just thought the "shorter" version above might be easier to deal with, since I'm pretty sure I'll be able to figure out the missing pieces myself.

The version of Idris I'm using is 1.3.2-git:PRE (that's what the REPL says when invoked from the command line).

Edit: xash's answer covers almost everything, and I was able to write the following functions

nreverse_id : (k : Nat) -> nreverse k = k
contractAllIndices : Num a =>
                     Tensor (nreverse k + k) (reverse ns ++ ns) a ->
                     Tensor Z [] a
contractAllProduct : Num a =>
                     Tensor (nreverse k) (reverse ns) a ->
                     Tensor k ns a ->
                     Tensor Z [] a
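
For completeness, here is one way the nreverse_id proof can go (a sketch, using plusCommutative from the prelude and the nreverse from the answer below):

nreverse_id : (k : Nat) -> nreverse k = k
nreverse_id Z     = Refl
nreverse_id (S k) =
  -- goal: nreverse k + 1 = S k
  rewrite nreverse_id k in  -- goal becomes: k + 1 = S k
  plusCommutative k 1       -- k + 1 = 1 + k, and 1 + k reduces to S k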

I also wrote a "fancy" version of reverse, call it fancy_reverse, that automatically rewrites nreverse k = k in its result type. So I tried to write a function that doesn't have nreverse in its signature, something like

fancy_reverse : Vect n a -> Vect n a
fancy_reverse {n} xs =
  rewrite sym $ nreverse_id n in
  reverse xs

contract : Num a =>
           {auto eql : fancy_reverse ns1 = ns2} ->
           Tensor k ns1 a ->
           Tensor k ns2 a ->
           Tensor Z [] a
contract {eql} {k} {ns1} {ns2} t s =
  flip contractAllProduct s $
  rewrite sym $ nreverse_id k in
  ?rhs

Now, the inferred type for rhs is Tensor (nreverse k) (reverse ns2), and I have in scope a rewrite rule for k = nreverse k, but I can't seem to wrap my head around how to rewrite the implicit eql proof to make this typecheck: am I doing something wrong?

1 Answer


The prelude's Data.Vect.reverse is hard to reason about, because AFAIK the go helper function won't be reduced by the typechecker. The usual approach is to define your own, easier reverse that doesn't need rewrite at the type level, like here for example:

%hide Data.Vect.reverse

nreverse : Nat -> Nat
nreverse Z = Z
nreverse (S n) = nreverse n + 1

reverse : Vect n a -> Vect (nreverse n) a
reverse [] = []
reverse (x :: xs) = reverse xs ++ [x]

lemma : {xs : Vect n a} -> reverse (x :: xs) = reverse xs ++ [x]
lemma = Refl

As you can see, this definition is straightforward enough that this equivalent lemma can be solved without further work. Thus, you can probably just match on the reverse ns in fullIndexContraction, like in this example:

data Foo : Vect n Nat -> Type where
    MkFoo : (x : Vect n Nat) -> Foo x

foo : Foo a -> Foo (reverse a) -> Nat
foo (MkFoo [])      (MkFoo []) = Z
foo (MkFoo $ x::xs) (MkFoo $ reverse xs ++ [x]) =
    x + foo (MkFoo xs) (MkFoo $ reverse xs)
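
A quick sanity check with concrete values (a hypothetical example; foo here just sums the entries while consuming the two vectors from opposite ends):

fooExample : Nat
fooExample = foo (MkFoo [1, 2, 3]) (MkFoo [3, 2, 1])  -- [3, 2, 1] is reverse [1, 2, 3]; evaluates to 6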

To your comment: first, len = nreverse len must sometimes be used, but if you had rewrite at the type level (through the usual n + 1 = 1 + n shenanigans) you would have the same problem (if anything with even more complicated proofs, but this is just a guess).
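
For illustration, that rewrite-based alternative is essentially the textbook length-preserving reverse (a sketch; reverse' is a hypothetical name chosen to avoid clashing with the reverse defined above):

reverse' : Vect n a -> Vect n a
reverse' [] = []
reverse' {n = S k} (x :: xs) =
  rewrite plusCommutative 1 k in  -- turns the goal Vect (S k) a into Vect (k + 1) a
  reverse' xs ++ [x]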

vectAppendAssociative is actually enough:

lemma2 : Main.reverse (n :: ns1) ++ ns2 = Main.reverse ns1 ++ (n :: ns2)
lemma2 {n} {ns1} {ns2} = sym $ vectAppendAssociative (reverse ns1) [n] ns2
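
Why this lines up (a shape-level reading; the statement of vectAppendAssociative is paraphrased from Data.Vect):

-- vectAppendAssociative xs ys zs : xs ++ (ys ++ zs) = (xs ++ ys) ++ zs   (roughly)
--
-- Main.reverse (n :: ns1) ++ ns2 reduces to (Main.reverse ns1 ++ [n]) ++ ns2,
-- and Main.reverse ns1 ++ (n :: ns2) is     Main.reverse ns1 ++ ([n] ++ ns2),
-- so sym applied to vectAppendAssociative (reverse ns1) [n] ns2 closes the goal.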

xash
  • I'm trying to implement your answer in my project, but some things bug me. The first one is that I'll have to prove (and rewrite over and over) that `nreverse k = k`; that's not difficult, but it will probably make the code even less readable. Or, I could change the type of reverse by rewriting such a proof in its definition: that'd probably save my day in that regard. – LorenzoPerticone Apr 21 '20 at 18:18
  • The second, and more important, is that it's not as easy as I'd expect to prove that `(reverse (n :: ns1)) ++ ns2 = (reverse ns1) ++ (n :: ns2)`: there's probably something I'm missing, and I'd be glad to hear your take on this one. (I tried using the fact that `++` is associative, but apparently that's not enough for the compiler.) – LorenzoPerticone Apr 21 '20 at 18:20
  • Updated my answer, as the vectAppendAssociative part didn't fit here in the comments. – xash Apr 21 '20 at 18:47
  • Thank you very much for your explanation! I do still have a small problem, and it won't fit in a comment. I'll edit the question to elaborate a bit on it, for the sake of completeness. – LorenzoPerticone Apr 21 '20 at 22:14
  • The `eql` problem is exactly "`rewrite` at the type level". Sometimes using explicit `replace` helps to at least see what's happening, but usually you want to stick to easy functions like `reverse`, or (even simpler, but more verbose) use a datatype proof `IsReversed ns1 ns2`. Btw, if you are just annoyed by typing `(nreverse k) (reverse xs)`, remember you can use auxiliary types, something like: `Flipped : (k : Nat) -> Vect k Nat -> Type -> Type; Flipped k xs a = Tensor (nreverse k) (reverse xs) a` – xash Apr 22 '20 at 11:32
  • The problem isn't being annoyed by having to type more than strictly necessary: I just want to avoid code duplication, and this seems to impose code duplication whenever I construct a value that's not explicitly `reverse`d. – LorenzoPerticone Apr 22 '20 at 17:02
  • Another point of view on this would be that I'm looking for a function whose signature looks like `{eql : fancy_reverse s1 = s2} -> Tensor k s2 a -> Tensor (nreverse k) (reverse s1) a`. I'm trying to work out something on my own, and I'm starting to think that this problem will require another question, as I'm clearly drifting away from the original points (that you've answered very clearly: once again, thanks for that!) – LorenzoPerticone Apr 22 '20 at 17:04