My fascination for types rivals my inability to define the concept.

Even though I don’t know what a type is, I can still recognize when a paper “is about types”: the paper usually contains many occurrences of formulas of the form “`t : T`”, where “`t`” is some piece of code, some program, and “`T`” is one of these mysterious types. The colon in the middle is of no technical significance, but it signals some cultural awareness on the part of authors.

Hypothesis: Types are a meme.

My experience is also that things are very very bad when “`t : T`” does not hold. Types are a way to tell right from wrong, for some domain-specific definitions of correctness.

Hypothesis: Types are specifications.

One idea that really narrows it down for me is that programming languages have types. You can assign types to everything in a language: static types to source code, dynamic types to run-time values.

Another way to look at this is to compare with other forms of specifications. How do you prove a specification? A priori, you could use any method, and all you care about is that it is somehow “sound”, but otherwise the proof method is a black box.

- Automated theorem proving, with an SMT solver as a black box.
- Interactive theorem proving, with PhD students as a black box.
- Testing, with the compiler/interpreter as a black box.^{1}

Approaches using “types” seem different. To prove a specification, a typing judgement “`t : T`

”, the way forward is to prove more typing judgements by following the rules of some “type system”. Types tell me both “how to specify things” and “how to verify things”—by specifying and verifying sub-things.

Hypothesis: Types are compositional specifications.

I originally posted this question on the recently created TYPES Zulip server, a spin-off of the TYPES mailing list. Those are nice hangouts for people who like types, whatever you believe they are (and also for getting spammed with Calls-For-Papers).

Whereas most formal methods are sound for proving the absence of certain bugs, testing is a sound method of finding bugs.↩︎

```
{-# LANGUAGE TypeFamilies, DataKinds, PolyKinds, RankNTypes,
             GADTs, TypeOperators, UndecidableInstances #-}
import Data.Kind (Type)
import Data.Proxy
```

Type families in Haskell offer a flavor of dependent types: a function `g` or a type family `G` may have a result whose type `F x` depends on the argument `x`:

```
type family F (x :: Type) :: Type
g :: forall x. Proxy x -> F x -- Proxy to avoid ambiguity
g = undefined -- dummy
type family G (x :: Type) :: F x
```

But it is not quite clear how well features of other “truly” dependently typed languages translate to Haskell. The challenge we’ll face in this post is to do type-level pattern-matching on GADTs indexed by type families.

Sorry if that was a bit of a mouthful. Let me illustrate the problem with a minimal non-working example. You run right into this issue when you try to defunctionalize a dependent function, such as `G`, which is useful to reimplement “at the type level” libraries that use type families, such as *recursion-schemes*.

First encode `G` as an expression, a symbol `SG`, denoting a value of type `F x`:

```
type Exp a = a -> Type
data SG (x :: Type) :: Exp (F x)
```

Declare an evaluation function, mapping expressions to values:

`type family Eval (e :: Exp a) :: a`

Define that function on `SG`:

`type instance Eval (SG x) = G x`

And GHC complains with the following error message (on GHC 8.10.2):

```
error:
    • Illegal type synonym family application ‘F x’ in instance:
        Eval @(F x) (SG x)
    • In the type instance declaration for ‘Eval’
```

The function `Eval :: forall a. Exp a -> a` has two arguments, the type `a`, which is implicit, and the expression `e` of type `Exp a`. In the clause for `Eval (SG x)`, that type argument `a` must be `F x`. Problem: it contains a type family `F`. To put it simply, the arguments in each `type instance` must be “patterns”, made of constructors and variables only, and `F x` is not a pattern.

As a minor remark, it is necessary for the constructor `SG` to involve a type family in its result. We would not run into this problem with simpler GADTs where result types contain only constructors.

```
-- Example of a "simpler" GADT
data MiniExp a where
  Or :: Bool -> Bool -> MiniExp Bool
  Add :: Int -> Int -> MiniExp Int
```

It’s a problem specific to this usage of type families. For comparison, a similar value-level encoding does compile, where `eval` is a function on a GADT:

```
data Exp1 (a :: Type) where
  SG1 :: forall x. Proxy x -> Exp1 (F x)
  -- Proxy is necessary to avoid ambiguity.

eval :: Exp1 a -> a
eval (SG1 x) = g x
```

You can also try to promote that example as a type family, only to run into the same error as earlier. The only difference is that `SG1` is a constructor of an actual GADT, whereas `SG` is a type constructor, using `Type` as a pseudo-GADT.

```
type family Eval1 (e :: Exp1 a) :: a
type instance Eval1 (SG1 (_ :: Proxy x)) = G x
```

```
error:
    • Illegal type synonym family application ‘F x’ in instance:
        Eval1 @(F x) ('SG1 @x _1)
    • In the type instance declaration for ‘Eval1’
```

Type families in Haskell may have implicit parameters, but they behave like regular parameters. To evaluate an applied type family, we look for a clause with matching patterns; the “matching” is done left-to-right, and it’s not possible to match against an arbitrary function application `F x`. In contrast, in functions, type parameters are implicit, and also *irrelevant*. To evaluate an applied function, we jump straight to look at its non-type arguments, so it’s fine if some clauses instantiate type arguments with type families.

In Agda, an actual dependently-typed language, *dot patterns* generalize that idea: they indicate parameters (not only type parameters) whose values are determined by pattern-matching on later parameters.

A different way to understand this is that the constructors of GADTs hold *type equalities* that constrain preceding type arguments. For example, the `SG1` constructor above really has the following type:

`SG1 :: forall x y. (F x ~ y) => Proxy x -> Exp1 y`

where the result type is the GADT `Exp1` applied to a type variable, and the equality `F x ~ y` turns into a field of the constructor containing that equality proof.

So those are other systems where our example does work, and type families are just weird for historical reasons. We can hope that Dependent Haskell will make them less weird.

In today’s Almost-Dependent Haskell, the above desugaring of GADTs suggests a workaround: type equality allows us to comply with the restriction that the left-hand side of a type family must consist of patterns.

Although there are no constraints in the promoted world to translate `(~)`, type equality can be encoded as a type:

```
data a :=: b where
  Refl :: a :=: a
```

A type equality `e :: a :=: b` gives us a *coercion*, a function `Rewrite e :: a -> b`. There is one case: if `e` is the constructor `Refl :: a :=: a`, then the coercion is the identity function:

```
type family Rewrite (e :: a :=: b) (x :: a) :: b
type instance Rewrite Refl x = x
```

Now we can define the defunctionalization symbol for `G`, using an equality to hide the actual result type behind a variable `y`:

```
data SG2_ (x :: Type) (e :: F x :=: y) :: Exp y
-- SG2_ :: forall y. forall x -> F x :=: y -> Exp y
```

We export a wrapper supplying the `Refl` proof, to expose the same type as the original `SG` above:

```
type SG2 x = SG2_ x Refl
-- SG2 :: forall x -> Exp (F x)
```

We can now define `Eval` on `SG2_` (and thus `SG2`) similarly to the function `eval` on `SG1`, with the main difference being that the coercion is applied explicitly:

`type instance Eval (SG2_ x e) = Rewrite e (G x)`
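Putting the pieces of the workaround together, here is a sketch of an end-to-end module. The concrete instances `F Int = Bool` and `G Int = 'True`, the `check` witness, and the `ReflE` name (renamed from `Refl` to avoid clashing with `Data.Type.Equality`) are my additions for illustration; the rest follows the definitions above.

```haskell
{-# LANGUAGE TypeFamilies, DataKinds, PolyKinds, GADTs,
             TypeOperators, UndecidableInstances #-}
import Data.Kind (Type)
import Data.Type.Equality ((:~:) (Refl))

type Exp a = a -> Type
type family Eval (e :: Exp a) :: a

-- Concrete F and G, for illustration only.
type family F (x :: Type) :: Type
type instance F Int = Bool
type family G (x :: Type) :: F x
type instance G Int = 'True

-- Type-level equality and its coercion.
data a :=: b where
  ReflE :: a :=: a
type family Rewrite (e :: a :=: b) (x :: a) :: b
type instance Rewrite 'ReflE x = x

-- The defunctionalization symbol, hiding F x behind the variable y.
data SG2_ (x :: Type) (e :: F x :=: y) :: Exp y
type SG2 x = SG2_ x 'ReflE
type instance Eval (SG2_ x e) = Rewrite e (G x)

-- This type-checks only if Eval (SG2 Int) reduces to G Int, i.e. 'True.
check :: Eval (SG2 Int) :~: 'True
check = Refl
```

The `check` witness is the whole test: if the coercion failed to reduce, `Refl` would be rejected.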

To summarize, type families have limitations which get in the way of pattern-matching on GADTs, and we can overcome them by making type equalities explicit.

Thanks to Denis Stoyanov for discussing this issue with me.

Recursion and iteration are two sides of the same coin. A common way to elaborate that idea is to express one in terms of the other. Iteration, recursively: to iterate an action is to do the action, and then iterate the action again. Conversely, a recursive definition can be approximated by unfolding it iteratively. To implement recursion on a sequential machine, we can use a stack to keep track of those unfoldings.

So there is a sense in which these are equivalent, but that already presumes that they are not exactly the same. We think about recursion differently than iteration. Hence it may be a little surprising when recursion and iteration both appear directly as two implementations of the same interface.

To summarize the main point without all the upcoming category theory jargon, there is one signature which describes an operator for iteration, recursion, or maybe a bit of both simultaneously, depending on how you read the symbols `==>` and `+`:

`iter :: (a ==> a + b) -> (a ==> b)`

The idea of “iteration” is encapsulated by the following function `iter`:

```
iter :: (a -> Either a b) -> (a -> b)
iter f a =
  case f a of
    Left a' -> iter f a'
    Right b -> b
```
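To get a concrete feel for `iter` as a “while” loop, here is a small usage sketch. The names `gcdStep` and `gcd'` are mine, not from the post: Euclid’s algorithm, where the loop body either continues with a smaller state or breaks with the answer.

```haskell
-- iter, as defined above
iter :: (a -> Either a b) -> (a -> b)
iter f a = case f a of
  Left a' -> iter f a'
  Right b -> b

-- Loop body for Euclid's algorithm: continue with a smaller pair,
-- or break with the answer once the second component reaches 0.
gcdStep :: (Int, Int) -> Either (Int, Int) Int
gcdStep (a, 0) = Right a
gcdStep (a, b) = Left (b, a `mod` b)

gcd' :: Int -> Int -> Int
gcd' a b = iter gcdStep (a, b)
```

For example, `gcd' 12 18` steps through the states `(12,18)`, `(18,12)`, `(12,6)`, `(6,0)` and breaks with `6`.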

`iter` can be thought of as a “while” loop. The body of the loop `f` takes some state `a`, and either says “continue” with a new state `a'` to keep the loop going, or “break” with a result `b`.

We can generalize `iter`. It transforms “loop bodies” into “loops”, and rather than functions, those could be entities in any category. An iteration operator on some category denoted `(==>)` is a function with the following signature:

`iter :: (a ==> a + b) -> (a ==> b)`

satisfying a bunch of laws, with the most obvious one being a fixed point equation:^{1}

`iter f = (f >>> either (iter f) id)`

where `(>>>)` and `id` are the two defining components of a category, and `either` is the eliminator for sums (`+`). The technical term for “a category with sums” is a cocartesian category.

```
class Category k => Cocartesian k where
  type a + b -- Not fully well-formed Haskell.
  either :: k a c -> k b c -> k (a + b) c
  left :: k a (a + b)
  right :: k b (a + b)

-- Replacing k with an infix (==>)
-- either :: (a ==> c) -> (b ==> c) -> (a + b ==> c)
```

Putting this all together, an *iterative category* is a cocartesian category plus an `iter` operation.

```
class Cocartesian k => Iterative k where
  iter :: k a (a + b) -> k a b
```

The fixed point equation provides a pretty general way to define `iter`. For the three examples in this post, it produces working functions in Haskell. In theory, properly sorting out issues of non-termination can get hairy.

```
iter :: (a -> Either a b) -> (a -> b)
iter f = f >>> either (iter f) id
-- NB: (>>>) = flip (.)
```

Recursion also provides an implementation for `iter`, but in the opposite category, `(<==)`. If you flip arrows back the right way, this defines a twin interface of “coiterative categories”. Doing so, sums `(+)` become products `(*)`.

```
class Cartesian k => Coiterative k where
  coiter :: k (a * b) a -> k b a
  -- with infix notation (==>) instead of k,
  -- coiter :: (a * b ==> a) -> (b ==> a)
```

We can wrap any instance of `Iterative` as an instance of `Coiterative` and vice versa, so `iter` and `coiter` can be thought of as the same interface in principle. For particular implementations, one or the other direction may seem more intuitive.

If we curry and flip the argument, the type of `coiter` becomes `(b -> a -> a) -> b -> a`, which is like the type of `fix :: (a -> a) -> a` but with the functor `(b -> _)` applied to both the domain `(a -> a)` and codomain `a`: `coiter` is `fmap fix`.

```
coiter' :: (b -> a -> a) -> b -> a
coiter' = fmap fix
```
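One way to see `coiter' = fmap fix` in action (`ones` is a made-up example name): since `fmap` at the function functor `((->) b)` is composition, `coiter' f b = fix (f b)`, so partially applying `coiter'` to `(:)` unfolds a value into its infinite repetition.

```haskell
import Data.Function (fix)

coiter' :: (b -> a -> a) -> b -> a
coiter' = fmap fix -- fmap at ((->) b) is composition: coiter' f = fix . f

-- Made-up example: coiter' (:) 1 = fix (1 :) = 1 : 1 : 1 : ...
ones :: [Int]
ones = coiter' (:) 1
```

`take 3 ones` yields `[1,1,1]`.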

The fixed point equation provides an equivalent definition. We need to flip `(>>>)` into `(<<<)` (which is `(.)`), and the dual of `either` does not have a name in the standard library, but it is `liftA2 (,)`.

```
coiter :: ((a, b) -> a) -> b -> a
coiter f = f . liftA2 (,) (coiter f) id
-- where --
liftA2 (,) :: (c -> a) -> (c -> b) -> (c -> (a, b))
```

That latter definition is mostly similar to the naive definition of `fix`, where `fix f` will be reevaluated with every unfolding.

```
fix :: (a -> a) -> a
fix f = f (fix f)
```

We have two implementations of `iter`, one by iteration, one by recursion. Iterative categories thus provide a framework generalizing both iteration and recursion under the same algebraic rules.

From those two examples, one might hypothesize that `iter` models iteration, while `coiter` models recursion. But here is another example which suggests the situation is not as simple as that.

We start with the category of functors `Type -> Type`, which is equipped with a sum:

`data (f :+: g) a = L (f a) | R (g a)`

But the real category of interest is the Kleisli category of the “monad of free monads”, *i.e.*, the mapping `Free` from functors `f` to the free monads they generate `Free f`. That mapping is itself a monad.

`data Free f a = Pure a | Lift (f (Free f a))`

An arrow `f ==> g` is now a natural transformation `f ~> Free g`, *i.e.*, `forall a. f a -> Free g a`:

```
-- Natural transformation from f to g
type f ~> g = forall a. f a -> g a
```

One intuition for that category is that functors `f` are *interfaces*, and the free monad `Free f` is inhabited by expressions, or *programs*, using operations from the interface `f`. Then a natural transformation `f ~> Free g` is an *implementation* of the interface `f` using interface `g`. Those operations compose naturally: given an implementation of `f` in terms of `g` (`f ~> Free g`), and an implementation of `g` in terms of `h` (`g ~> Free h`), we can obtain an implementation of `f` in terms of `h` (`f ~> Free h`). Thus arrows `_ ~> Free _` form a category—and that also mostly implies that `Free` is a monad.

We can define `iter` in that category. Like the previous examples, we can define it without thinking by using the fixed point equation of `iter`. We will call this variant of `iter` `rec`, because it actually behaves a lot like `fix`, whose name is already taken:

```
rec :: (f ~> Free (f :+: g)) -> (f ~> Free g)
rec f = f >>> either (rec f) id
-- where --
(>>>) :: (f ~> Free g) -> (g ~> Free h) -> (f ~> Free h)
id :: f ~> Free f
either :: (f ~> h) -> (g ~> h) -> (f :+: g ~> h)
```
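To see `rec` run, here is a self-contained sketch. The `Fact` interface, `NoOp`, and `factorial` are made-up examples, not from any library: a recursive implementation of factorial whose self-calls are tied off by `rec`, leaving a program with no operations left.

```haskell
{-# LANGUAGE RankNTypes, TypeOperators, DeriveFunctor, EmptyCase #-}

type f ~> g = forall a. f a -> g a

data Free f a = Pure a | Lift (f (Free f a))
data (f :+: g) a = L (f a) | R (g a) deriving Functor

-- Monadic bind for Free, written out to stay self-contained.
bindF :: Functor f => Free f a -> (a -> Free f b) -> Free f b
bindF (Pure a) k = k a
bindF (Lift m) k = Lift (fmap (\x -> bindF x k) m)

-- Interpret every operation of a free program through a handler.
interp :: (Functor f, Functor g) => (f ~> Free g) -> Free f a -> Free g a
interp _ (Pure a) = Pure a
interp h (Lift m) = h m `bindF` interp h

-- Eliminator for functor sums.
sumElim :: (f ~> h) -> (g ~> h) -> ((f :+: g) ~> h)
sumElim u _ (L x) = u x
sumElim _ v (R y) = v y

-- id in the Kleisli category of Free.
liftF :: Functor g => g ~> Free g
liftF = Lift . fmap Pure

-- rec, following the fixed point equation: rec f = f >>> either (rec f) id.
rec :: (Functor f, Functor g) => (f ~> Free (f :+: g)) -> (f ~> Free g)
rec f = interp (sumElim (rec f) liftF) . f

-- Made-up interface: `Fact n k` asks for the factorial of n, continuing with k.
data Fact a = Fact Int (Int -> a) deriving Functor
data NoOp a -- the empty interface: no external calls
instance Functor NoOp where fmap _ v = case v of {}

-- A recursive implementation of Fact in terms of Fact itself (plus nothing).
factImpl :: Fact ~> Free (Fact :+: NoOp)
factImpl (Fact 0 k) = Pure (k 1)
factImpl (Fact n k) = Lift (L (Fact (n - 1) (\r -> Pure (k (n * r)))))

-- A program with no operations left is just a value.
runPure :: Free NoOp a -> a
runPure (Pure a) = a
runPure (Lift v) = case v of {}

factorial :: Int -> Int
factorial n = runPure (rec factImpl (Fact n id))
```

`factorial 5` evaluates to `120`: the recursive `Fact` calls in `factImpl` are interpreted by `rec factImpl` itself, exactly the tying-off described below.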

We eventually do have to think about what `rec` means.

The argument `f ~> Free (f :+: g)` is a *recursive* implementation of an interface `f`: it uses an interface `f :+: g` which includes `f` itself. `rec f` composes `f` with `either (rec f) id`, which is basically some plumbing around `rec f`. Consequently, `rec` takes a recursive program `prog :: f ~> Free (f :+: g)`, and produces a non-recursive program `f ~> Free g`, using that same result to implement the `f` calls in `prog`, so only the other “external” calls in `g` remain.

That third version of `iter` (`rec`) has similarities to both of the previous versions (`iter` and `fix`).

Obviously, the whole explanation above is given from the perspective of recursion, or self-referentiality. While `fix` simply describes recursion as fixed points, `rec` provides a more elaborate model based on an explicit notion of syntax using `Free` monads.

There is also a connection to the eponymous interpretation of `iter` as iteration. Both `iter` and `rec` use a sum type (`Either` or `(:+:)`), representing a choice: to “continue” or “break” the loop, to “recurse” or “call” an external function.

That similarity may be more apparent when phrased in terms of low-level “assembly-like” languages, control-flow graphs. Here, programs consist of blocks of instructions, with “jump” instructions pointing to other blocks of instructions. Those programs form a category. The objects, *i.e.*, interfaces, are sets of “program labels” that one can jump to. A program `p : I ==> J` exposes a set of “entry points” `I` and a set of “exit points” `J`: execution enters the program `p` by jumping to a label in `I`, and exits it by jumping to a label in `J`. There may be other “internal jumps” within such a program, which are not visible in the interface `I ==> J`.

The operation `iter : (I ==> I + J) -> (I ==> J)` takes a program `p : I ==> I + J`, whose exit points are in the disjoint union of `I` and `J`; `iter p : I ==> J` is the result of linking the exit points in `I` to the corresponding entry points, turning them into internal jumps. With some extra conditional constructs, we can easily implement “while” loops (“`iter` on `_ -> _`”) with such an operation.

Simple jumps (“jump to this label”) are pretty limited in expressiveness. We can make them more interesting by adding return locations to jumps, which thus become “calls” (“push a frame on the stack and jump to this label”)—to be complemented with “return” instructions. That generalization allows us to (roughly) implement `rec`, suggesting that those various interpretations of `iter` are maybe not as different as they seem.

```
iter :: (a ==> a + b) -> (a ==> b)
-- specializes to --
iter :: (a -> Either a b) -> (a -> b)
coiter :: ((a, b) -> a) -> (b -> a)
rec :: (f ~> Free (f :+: g)) -> (f ~> Free g)
```

The notion of “iterative category” is not quite standard; here is my version in Coq, which condenses the little I could digest from the related literature (I mostly skip a lot and look for equations or commutative diagrams). Those and other relevant equations can be found in the book *Iteration Theories: The Equational Logic of Iterative Processes* by Bloom and Ésik (in Section 5.2, Definition 5.2.1 (fixed point equation), and Theorems 5.3.1, 5.3.3, 5.3.9). It’s a pretty difficult book to just jump into though. The nice thing about category theory is that such dense formulas can be replaced with pretty pictures, like in this paper (page 7). For an additional source of diagrams and literature, a related notion is that of *traced monoidal categories*—every iterative category is traced monoidal.↩︎

In the past few weeks I’ve made some improvements to *hs-to-coq*. In particular, I wanted to verify the `Data.Sequence` module from the *containers* library. I’ve managed to translate most of the module to Coq so I can start proving stuff.

In this post, I will present some of the changes made in *hs-to-coq* to be able to translate `Data.Sequence`.

*hs-to-coq* had already been used to verify `Data.Set` and `Data.IntSet`, and their map analogues, which are the most commonly used modules of the *containers* library.^{1} The main feature distinguishing `Data.Sequence` from those is polymorphic recursion. There were a couple of smaller issues to solve beyond that, and some usability improvements made in the process.

As its name implies, `Data.Sequence` offers a data structure to represent sequences. The type `Seq a` has a meaning similar to the type of lists `[a]`, but `Seq a` supports faster operations such as indexing and concatenation (logarithmic time instead of linear time). The implementation is actually in `Data.Sequence.Internal`, while `Data.Sequence` reexports from it.

The type `Seq` is a thin wrapper around the type `FingerTree`, which is where the fun happens. `FingerTree` is what one might call an *irregular recursive type*. In the type declaration of `FingerTree`, the recursive occurrence of the `FingerTree` type constructor is applied to an argument which is not the variable which appears on the left-hand side of the definition. The right-hand side of the type declaration mentions `FingerTree (Node a)`, rather than `FingerTree a` itself:

```
-- An irregular type. (Definitions of Digit and Node omitted.)
data FingerTree a
  = EmptyT
  | Single a
  | Deep Int (Digit a) (FingerTree (Node a)) (Digit a)

newtype Elem a = Elem a
newtype Seq a = Seq (FingerTree (Elem a))
```

*Regular recursive types*^{2} are much more common. For example, the type of lists, `List a` below, is indeed defined in terms of the same `List a` as it appears on the left-hand side:

```
-- A regular type
data List a = Nil | Cons a (List a)
```

*hs-to-coq* has no trouble translating irregular recursive types such as `FingerTree`; do the naive thing and it just works. Problems start once we look at functions involving them. For example, consider a naive recursive size function, `sizeFT`:

```
sizeFT :: FingerTree a -> Int
sizeFT EmptyT = 0
sizeFT (Single _) = 1
sizeFT (Deep _ l m r) = sizeDigit l + sizeFT m + sizeDigit r
-- This is wrong.
```

We want to count the number of `a` in a given `FingerTree a`, but the function above is wrong. In the recursive call, `m` has type `FingerTree (Node a)`, so we are counting the number of `Node a` in the subtree `m`, when we should actually count the number of `a` in every `Node a`, and sum them up. The function above actually counts the sum of all “digits” in a `FingerTree`, which isn’t a meaningful quantity when trees are viewed as sequences.

While it may seem roundabout, probably the most straightforward way to fix this function is to first define `foldMap`:^{3}

```
foldMapFT :: Monoid m => (a -> m) -> FingerTree a -> m
foldMapFT _ EmptyT = mempty
foldMapFT f (Single x) = f x
foldMapFT f (Deep _ l m r) = foldMap f l <> foldMapFT (foldMap f) m <> foldMap f r

sizeFT :: FingerTree a -> Int
sizeFT = getSum . foldMapFT (\_ -> Sum 1) -- Data.Monoid.Sum
```
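To see the polymorphic recursion at work, here is a runnable sketch with simplified stand-ins for `Digit` and `Node` (the real *containers* types have more constructors and cache their sizes; these cut-down versions are mine):

```haskell
{-# LANGUAGE DeriveFoldable #-}
import Data.Monoid (Sum (..))

-- Cut-down stand-ins for the omitted Digit and Node types.
data Digit a = One a | Two a a deriving Foldable
data Node a = Node2 a a | Node3 a a a deriving Foldable

data FingerTree a
  = EmptyT
  | Single a
  | Deep Int (Digit a) (FingerTree (Node a)) (Digit a)

foldMapFT :: Monoid m => (a -> m) -> FingerTree a -> m
foldMapFT _ EmptyT = mempty
foldMapFT f (Single x) = f x
foldMapFT f (Deep _ l m r) = foldMap f l <> foldMapFT (foldMap f) m <> foldMap f r

sizeFT :: FingerTree a -> Int
sizeFT = getSum . foldMapFT (\_ -> Sum 1)

-- Six elements: two in the left digit, three in one node, one on the right.
example :: FingerTree Char
example = Deep 0 (Two 'a' 'b') (Single (Node3 'c' 'd' 'e')) (One 'f')
```

`sizeFT example` is `6`, whereas the “wrong” version from earlier would report `4` for this tree (two elements per digit, plus `1` for the `Single` subtree).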

What makes `foldMapFT` unusual (and also `sizeFT`, even though its behavior is unexpected) is that its recursive occurrence has a different type than its signature. On the left-hand side, `foldMapFT` is applied to `f :: a -> m`; in its body on the right-hand side, it is applied to `foldMap f :: Node a -> m`. This is what it means for `foldMapFT` to be *polymorphic recursive*: its own definition relies on the polymorphism of `foldMapFT` in order to specialize it to a different type than its type parameter `a`.

In Haskell, type parameters are often implicit; a lot of details are inferred, so we don’t think about them. In Coq, type parameters are plain function parameters. Whenever we write a lambda, if it is supposed to be polymorphic, it will take one or more extra arguments. And now, because of polymorphic recursion, it matters where type parameters are introduced relative to the fixpoint operator.

```
(* A polymorphic recursive foldMapFT *)
fix foldMapFT (a : Type) (m : Type) (_ : Monoid m) (f : a -> m) (t : FingerTree a) : m :=
  ...
(* Here, foldMapFT : forall a m `(Monoid m), (a -> m) -> FingerTree a -> m *)

(* A non-polymorphic recursive foldMapFT, won't typecheck *)
fun (a : Type) (m : Type) (_ : Monoid m) =>
  fix foldMapFT (f : a -> m) (t : FingerTree a) : m :=
    ...
(* Here, foldMapFT : (a -> m) -> FingerTree a -> m *)
```

In the body of the first function, `foldMapFT` is polymorphic. In the body of the second function, `foldMapFT` is not polymorphic.

As you might have guessed, *hs-to-coq* picked the wrong version. I created an edit to make the other choice:

```
polyrec foldMapFT
# Make foldMapFT polymorphic recursive
```

The funny thing is that *hs-to-coq* internally goes out of its way to factor out the type parameters of recursive definitions, thus preventing polymorphic recursion. This new edit simply skips that step. One could consider just removing that code path, but I didn’t want that change to affect existing code. My gut feeling is that it might still be useful. It’s unlikely that there is one single rule that will work for translating all definitions to Coq, so “hey it works” is good enough for now, and things will improve as more counterexamples show up.

In Coq, functions are total. To define a recursive function, one must provide a *termination annotation* justifying that the function terminates. There are a couple of variants, but the general idea is that some quantity must “decrease” at every recursive call (and it cannot decrease indefinitely). The most basic annotation (`struct`) names one of the arguments as “the decreasing argument”.

*hs-to-coq* already allowed more advanced annotations to be specified as edits, but not this most basic variant—until I implemented it. It can be inferred in simple situations, but at some point it is still necessary to make it explicit.

When we write a recursive function, we refer to its decreasing argument by its name, but what really matters is its position in the list of arguments. For example, here is a recursive function `f` with two arguments `x` and `y`:

```
fix f x y {struct y} := ...
```

The annotation `{struct y}` indicates that `y`, the second argument of `f`, is the “decreasing argument”. The function is well-defined only if all occurrences of `f` in its body are applied to a second argument which is “smaller” than `y` in a certain sense. Otherwise the compiler throws an error.

That the argument is *named* is a problem when it comes to *hs-to-coq*: in Haskell, some arguments don’t have names because we immediately pattern-match on them. When translated to Coq, all arguments are given generated names, and they are renamed/decomposed in the body of every function.

```
-- A recursive function whose second argument is decreasing,
-- [] or (x : xs) depending on the branch, but there is no variable to refer to it.
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x : xs) = f x : map f xs
```

*hs-to-coq* now allows specifying the decreasing argument by its position in the Haskell definition, i.e., ignoring type parameters. To implement that feature, we have to be a little careful since type parameters in Coq are parameters like any other, so they shift the positions of arguments. That turned out to be a negligible concern because, in the code of *hs-to-coq*, type parameters are kept separate from “value” parameters until a very late phase.

```
termination f {struct 2}
# The second argument of f is decreasing
```

Another potential solution is to fix the name generation to be more predictable. The arguments of top-level functions are numbered sequentially `arg_1__`, `arg_2__`, etc., which may be fine, but local functions just keep counting from wherever that left off (going up to `arg_38__` in one case). Maybe they should also start counting from 1.

More complex termination annotations than `struct` involve arbitrary terms mentioning those variables. For those, there is currently no workaround; one must use those fragile names to refer to a function’s arguments.

I initially expected that some functions in `Data.Sequence` would have to be shown terminating based on the size of a tree as a decreasing measure, which involves more sophisticated techniques than justifications based on depth. In fact, only one function needs such sophistication (`thin`, an internal function used by `liftA2`). As mentioned earlier, the “size” of a `FingerTree` is actually a little tricky to formalize, and that makes it even harder to use as part of such a termination annotation. Surprisingly, the naive and “wrong” version of `sizeFT` shown earlier also works as a simpler decreasing measure for this function.

With the above two changes, *hs-to-coq* is now able to process quite a satisfactory fragment of `Data.Sequence.Internal`. A few parts are not handled yet; they require either whole new features or more invasive edits than I have experience with at the moment.

There remains another issue with the `thin` function we just mentioned: it is mutually recursive with another function. *hs-to-coq* currently does not support the combination of mutually recursive functions with termination annotations other than the basic one (`struct`).

At the very beginning, *hs-to-coq* simply refused to process `Data.Sequence` because *hs-to-coq* doesn’t handle pattern synonyms. Now it at least skips pattern synonyms with a warning instead of failing. One still has to manually add edits to ignore declarations that use pattern synonyms, since it’s not too easy to tell whether that’s the case without a more involved analysis than is currently done.

The remaining bits either are partial functions, internally use partial functions, or are defined by recursion on `Int`, and I haven’t looked into how to handle that yet.

Some changes that aren’t strictly necessary to get the job done, but made my life a little easier.

In Haskell, declarations can be written in any order (except when Template Haskell is involved) and they can refer to each other just fine.

In Coq, declarations must be ordered because of the restrictions on recursion. Type classes further complicate this story because of their implicitness: we cannot know whether an instance is used in an expression without type checking, and *hs-to-coq* currently stops at renaming.

For now, we have a “best guess” implementation using a “stable topological sort”, trying to preserve an *a priori* order as much as possible, putting instances before top-level values, and otherwise ordering value declarations as they appear in the Haskell source. Of course that doesn’t always work, so there are edits to create artificial dependencies between declarations.

It took me a while to notice something wrong with the implementation: independent definitions were sorted in reverse order, which is the opposite of what a “stable sort” should do. The sort algorithm itself was fine: the obvious dependencies were satisfied. And you expect to have things to fix by hand because of the underspecified nature of the problem at that point. So any single discrepancy was easily dismissed as “just what the algorithm does”. But after getting annoyed enough that nothing was where I expected it to be, I went to investigate. The culprit was GHC^{4}: renaming produces a list of declarations in reverse order! This is usually not a problem since the order of declarations should not matter in Haskell^{5}, but in our case we have to sort the declarations in source order before applying the stable topological sort. That ensures that the order in our Coq output is similar to the order in the Haskell input.

In edits files, identifiers must be fully qualified. This prevents ambiguities since edits don’t belong to any one module.

Module names can get quite long. It was tedious to repeat `Data.Sequence.Internal` over and over. There was already an edit to *rename* a module, but that changes the name of the file itself and affects other modules using that module. I added a new edit to *abbreviate* a module, without those side effects. In fact, that edit only affects the edits file it is in. The parser expands the abbreviation on the fly whenever it encounters an identifier, and after the parser is done, the abbreviation is completely forgotten.

```
module alias Seq Data.Sequence.Internal
# "Seq" is now an abbreviation of "Data.Sequence.Internal"
```

Ready, Set, Verify!, ICFP 2018.↩︎

I don’t know whether *irregular*/*regular* is conventional terminology, but my intuition to justify those names is that they generalize regular expressions. A regular recursive type defines a set of trees which can be recognized by a finite state machine (a *tree automaton*; Tree Automata, Techniques and Applications is a comprehensive book on the topic).↩︎

Link to source, which looks a bit different for performance reasons.↩︎

Tested with GHC 8.4↩︎

And the AST is annotated with source locations so we don’t get lost.↩︎

Can we prove `Monad` instances lawful using *inspection-testing*?

In this very simple experiment, I’ve tried to make it work for the common monads found in *base* and *transformers*.

Main takeaways:

- Associativity almost holds for all of the considered monads, with the main constraint being that the transformers must be applied to a concrete monad such as `Identity` rather than an abstract `m`.
- The identity laws were relaxed to hold “up to eta-expansion”.
- `[]` cheats using rewrite rules.
- This is a job for CPP.

The source code is available in this gist.

Let’s see how to use inspection testing through the first example of the associativity law. It works similarly for the other two laws.

Here’s the associativity law we are going to test. I prefer this formulation since it makes the connection with monoids and categories obvious:

`((f >=> g) >=> h) = (f >=> (g >=> h))`

To use inspection testing, turn the two sides of the equation into functions:

```
assoc1, assoc2 :: Monad m => (a -> m b) -> (b -> m c) -> (c -> m d) -> (a -> m d)
assoc1 f g h = (f >=> g) >=> h
assoc2 f g h = f >=> (g >=> h)
```

These two functions are not the same if we don’t know anything about `m` and `(>=>)`. So choose a concrete monad `m`. For example, `Identity`:

```
assoc1, assoc2 :: (a -> Identity b) -> (b -> Identity c) -> (c -> Identity d) -> (a -> Identity d)
assoc1 f g h = (f >=> g) >=> h
assoc2 f g h = f >=> (g >=> h)
```

GHC will be able to inline the definition of `(>=>)` for `Identity` and simplify both functions.

Using the *inspection-testing* library, we can now assert that the simplified functions in GHC Core are in fact equal:

```
{-# LANGUAGE TemplateHaskell #-}
inspect $ 'assoc1 ==- 'assoc2
```

This test is executed at compile time. The quoted identifiers `'assoc1` and `'assoc2` are the names of the functions as values (different things from the functions themselves), that the function `inspect` uses to look up their simplified definitions in GHC Core. The `(==-)` operator asserts that they must be the same, while ignoring coercions and type lambdas—constructs of the GHC Core language which will be erased in later compilation stages.

These tests can be tedious to adapt for each monad. The main change is the monad name; another concern is to use different function names for each test case. The result is a fair amount of code duplication:

```
assoc1Identity, assoc2Identity
:: (a -> Identity b) -> (b -> Identity c) -> (c -> Identity d) -> (a -> Identity d)
assoc1Identity f g h = (f >=> g) >=> h
assoc2Identity f g h = f >=> (g >=> h)
inspect $ 'assoc1Identity ==- 'assoc2Identity
assoc1IO, assoc2IO
:: (a -> IO b) -> (b -> IO c) -> (c -> IO d) -> (a -> IO d)
assoc1IO f g h = (f >=> g) >=> h
assoc2IO f g h = f >=> (g >=> h)
inspect $ 'assoc1IO ==- 'assoc2IO
```

The best way I found to handle the boilerplate is a CPP macro:

```
{-# LANGUAGE CPP #-}
#define TEST_ASSOC(NAME,M,FFF) \
assoc1'NAME, assoc2'NAME :: (a -> M b) -> (b -> M c) -> (c -> M d) -> a -> M d ; \
assoc1'NAME = assoc1 ; \
assoc2'NAME = assoc2 ; \
inspect $ 'assoc1'NAME FFF 'assoc2'NAME
```

It can be used as follows:

```
TEST_ASSOC(Identity,Identity,==-)
TEST_ASSOC(Maybe,Maybe,==-)
TEST_ASSOC(IO,IO,==-)
TEST_ASSOC(Select,Select,=/=)
```

Template Haskell is the other obvious candidate, but it is not as convenient:

- There’s no syntax to parameterize quotes by function names; at best, they can be wrapped in a pattern or expression quote, but type declarations require raw names; I object to explicitly constructing the AST.
- The
`inspect`

function must execute after the two given functions are defined; these two steps cannot be done in a single splice.

The inspection tests pass for almost all of the monads under test. Three tests fail. One (`Writer`) could be fixed with a little tweak. The other two (`Select` and `Product`) can probably be fixed too, but I’m not sure.

Nevertheless, thinking through why the other tests succeed can also be an instructive exercise.

The writer monad consists of pairs, where one component can be thought of as a “log” produced by the computation. All we really need is a way to concatenate logs, so logs can formally be elements of an arbitrary monoid:

```
newtype Writer log a = Writer (a, log)

instance Monoid log => Monad (Writer log) where
  return a = Writer (a, mempty)
  Writer (a, log) >>= k =
    let Writer (b, log') = k a in
    Writer (b, log <> log')
```

The writer monad does not pass any of the three inspection tests out-of-the-box (associativity, left identity, right identity), because after inlining, the order of composition using `(>=>)` is reflected in the order of composition using `(<>)`,^{1} which GHC cannot reassociate in general.

A simple fix is to instantiate the monoid `log` with a concrete one whose operations do get reassociated, such as `Endo e`. While that technically makes the test less general, it is such a localized change that we should still be able to derive from it a fair amount of confidence that the law holds in the general case.
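As a sketch of that fix (the test names here are hypothetical, but the shape follows the earlier `assoc` tests), instantiate the monoid to `Endo e`, whose `(<>)` is function composition and does get reassociated by inlining:

```haskell
import Control.Monad ((>=>))
import Data.Monoid (Endo(..))

-- Local copy of the writer monad from above (the real test targets
-- the Writer from transformers).
newtype Writer log a = Writer (a, log)

instance Functor (Writer log) where
  fmap f (Writer (a, w)) = Writer (f a, w)
instance Monoid log => Applicative (Writer log) where
  pure a = Writer (a, mempty)
  Writer (f, w) <*> Writer (a, w') = Writer (f a, w <> w')
instance Monoid log => Monad (Writer log) where
  Writer (a, w) >>= k = let Writer (b, w') = k a in Writer (b, w <> w')

-- Hypothetical test pair: same shape as assoc1/assoc2, with the
-- monoid specialized to Endo e.
assoc1W, assoc2W
  :: (a -> Writer (Endo e) b) -> (b -> Writer (Endo e) c)
  -> (c -> Writer (Endo e) d) -> (a -> Writer (Endo e) d)
assoc1W f g h = (f >=> g) >=> h
assoc2W f g h = f >=> (g >=> h)
-- inspect $ 'assoc1W ==- 'assoc2W  -- as with the other monads
```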

The fact that `Maybe` passes the test is a good illustration of one extremely useful simplification rule applied by GHC: the “case-of-case” transformation.

Expand both sides of the equation:

`((f >=> g) >=> h) = (f >=> (g >=> h))`

The left-hand side is a `case` expression whose scrutinee is another `case` expression:
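For instance, at `Maybe` (a reconstructed sketch, since the original figure is not shown here), the left-hand side inlines to:

```haskell
-- Reconstruction of "Figure A": ((f >=> g) >=> h) after inlining
-- (>>=) for Maybe. The outer case scrutinizes another case.
assocL :: (a -> Maybe b) -> (b -> Maybe c) -> (c -> Maybe d) -> a -> Maybe d
assocL f g h a =
  case (case f a of
          Nothing -> Nothing
          Just b  -> g b) of
    Nothing -> Nothing
    Just c  -> h c
```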

The right-hand side is a `case` expression containing a `case` expression in one of its branches:
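A reconstructed sketch of the right-hand side, again at `Maybe`:

```haskell
-- Reconstruction of "Figure B": (f >=> (g >=> h)) after inlining
-- (>>=) for Maybe. The inner case sits in a branch of the outer case.
assocR :: (a -> Maybe b) -> (b -> Maybe c) -> (c -> Maybe d) -> a -> Maybe d
assocR f g h a =
  case f a of
    Nothing -> Nothing
    Just b  ->
      case g b of
        Nothing -> Nothing
        Just c  -> h c
```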

The code in the latter Figure B tends to execute faster. One simple reason for that is that if `f a` evaluates to `Nothing`, the whole expression will then immediately reduce to `Nothing`, whereas Figure A will take one more step to reduce the inner `case` before the outer `case`. Computations nested in `case` scrutinees also tend to require additional bookkeeping when compiled naively.

The key rule, named “case-of-case”, stems from remarking that eventually, a case expression reduces to one of its branches. Therefore, when it is surrounded by some context—an outer `case` expression—we might as well apply the context to the branches directly. Figure A transforms into the following:
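Concretely, for the `Maybe` instance (a reconstructed sketch), duplicating the outer case into both branches of the inner one gives something like:

```haskell
-- Case-of-case applied to the left-hand side for Maybe: the outer
-- case is duplicated into each branch of the inner case. The first
-- branch is now a "case of constructor" and simplifies to Nothing.
assocL' :: (a -> Maybe b) -> (b -> Maybe c) -> (c -> Maybe d) -> a -> Maybe d
assocL' f g h a =
  case f a of
    Nothing ->
      case Nothing of
        Nothing -> Nothing
        Just c  -> h c
    Just b  ->
      case g b of
        Nothing -> Nothing
        Just c  -> h c
```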

And the first branch reduces to `Nothing`.

This transformation is not always a good idea to apply, because it duplicates the context, once for each branch of the inner `case`. The rule pays off when some of those branches are constructors and the context is a `case`, so the transformation turns them into “case of constructor”, which can be simplified away.

The representation of `IO` in GHC Core looks like a strict state monad:

`newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))`

However, the resemblance between `IO` and `State` is purely syntactic, viewing Haskell programs only as terms to be rewritten, rather than mathematical functions “from states to states”. The token that is being passed around as the “state” in `IO` has no meaning other than as a gadget to maintain the evaluation order required by the semantics of `IO`. It is merely an elegant coincidence that the implementation of `IO` matches perfectly the mechanics of the state monad.

Out of all the examples considered in this experiment, the continuation monad is the only example of a monad transformer applied to an abstract monad `m`. All the other transformers are specialized to the identity monad.

That is because the other monad transformers use the underlying monad’s `(>>=)` in their own definition of `(>>=)`, and that blocks simplification. `ContT` is special: its `Monad (ContT r m)` instance does not even use a `Monad m` instance. That allows it to compute where other monad transformers cannot.

This observation also suggests only using concrete monads as a strategy for optimizations to take place. The main downside is the lack of modularity. Some computations are common to many monads (e.g., traversals), and it also seems desirable to not have to redefine and recompile them from scratch for every new monad we come up with.

For the list monad, `(>>=)` is `flip concatMap`:

`concatMap :: (a -> [b]) -> [a] -> [b]`

`concatMap` is a recursive function, and GHC does not inline those. Given that, it may be surprising that the list monad passes the inspection test. This is thanks to bespoke rewrite rules in the standard library which implement list fusion.

You can confirm that by defining your own copy of the list monad and see that it fails the test.
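A minimal sketch of such a copy (the `List` newtype and its name are mine): behaviorally it is still the list monad, but without base’s rewrite rules attached to its operations, the recursive functions are not inlined, so the inspection test fails even though the law still holds at runtime.

```haskell
-- A private copy of the list monad, with none of base's fusion
-- rewrite rules attached to its operations.
newtype List a = List { unList :: [a] } deriving (Eq, Show)

instance Functor List where
  fmap f (List xs) = List (map f xs)
instance Applicative List where
  pure x = List [x]
  List fs <*> List xs = List [f x | f <- fs, x <- xs]
instance Monad List where
  List xs >>= k = List (concat (map (unList . k) xs))
```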

Another idea was to disable rewrite rules (`-fno-enable-rewrite-rules`), but that breaks even things unrelated to lists, for mysterious reasons.

`pure` to the right of `(>>=)` cancels out.

`(u >>= pure) = u`

The right-hand side is very easy to simplify: there is nothing to do.

The problem is that on the left-hand side, we need to do some work to combine `u` and `pure`, and almost always some of that work remains visible after simplification. Sadly, the main culprit is laziness.

For example, in the `Reader` monad, `u >>= pure` reduces to the following:

`Reader (\r -> runReader u r)`

If we ignore the coercions `Reader` and `runReader`, then we have:

`\r -> u r`

That is the eta-expanded^{2} form of `u`. In Haskell, where `u` might be undefined but a lambda is not undefined, `\r -> u r` is not equivalent to `u`. To me, the root of the issue is that we can use `seq` on everything, including functions, and that allows us to distinguish `undefined` (blows up) from `\x -> undefined x` (which is equivalent to `\x -> undefined`; does not blow up until it’s applied). A perhaps nicer alternative is to put `seq` in a type class which can only be implemented by data types, excluding various functions and computations. That would add extra constraints on functions that do use strictness on abstract types, such as `foldl'`. It’s unclear whether that would be a flaw or a feature.

So `u` and `\r -> u r` are not always the same, but only because of a single exception, when `u` is undefined. So they are still *kinda* the same. Eta-expansion can only make an undefined term somewhat less undefined, but arguably not in any meaningful way.

This suggests relaxing the equality relation to allow terms to be equal “up to eta-expansion”:

`f = g if (\x -> f x) = (\x -> g x)`

Furthermore, eta-expansion is an idempotent operation:

`\r -> (\r1 -> u r1) r = \r -> u r`

So to compare two functions, we can expand both sides, and if one side was already eta-expanded, it will reduce back to itself.

We can write the test case as follows:

```
lid1, lid2 :: Reader r a -> Reader r a
lid1 x = eta x
lid2 x = eta (x >>= pure)
eta :: Reader r a -> Reader r a
eta (Reader u) = Reader (\r -> u r)
inspect $ 'lid1 ==- 'lid2
```

The notion of “eta-expansion” can be generalized to other types than function types, notably for pairs:

```
eta :: (a, b) -> (a, b)
eta xy = (fst xy, snd xy)
```

The situation is similar to functions: `xy` may be undefined, but `eta xy` is never undefined.^{3}
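That claim can be checked directly; forcing the result of this pair `eta` (here named `etaPair` to keep the sketch self-contained) to weak head normal form never touches its argument:

```haskell
-- eta for pairs, via projections: the pair constructor is produced
-- immediately, so etaPair xy is in weak head normal form even when
-- xy itself is undefined.
etaPair :: (a, b) -> (a, b)
etaPair xy = (fst xy, snd xy)
```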

This suggests the definition of a type class for generalized eta-expansion:

```
class Eta a where
  -- Law: eta x = x modulo laziness
  eta :: a -> a

instance Eta (a, b) where
  eta ~(x, y) = (x, y) -- The lazy pattern is equivalent to using projections

instance Eta (Reader r a) where
  eta (Reader f) = Reader (\r -> f r)
```

The handling of type parameters here is somewhat arbitrary: one could also try to eta-expand the components of the pair for instance.

Two more interesting cases are `ContT` and `IO`.

For `ContT`, we not only expand `u` to `\k -> u k`, but we also expand the continuation to get `\k -> u (\x -> k x)`.

```
instance Eta (ContT r m a) where
  eta (ContT u) = ContT (\k -> u (\x -> k x))
```

It is also possible, and necessary, to eta-expand `IO`, whatever that means.

```
instance Eta (IO a) where
  eta u = IO (\s -> case u of IO f -> f s)
  -- Note: eta is lazier than id.
  -- eta (undefined :: IO a) /= (undefined :: IO a)
```

`pure` on the left of `(>>=)` cancels out.

`(pure x >>= k) = k x`

The left identity has the same issue with eta-expansion that we just described for the right identity. It also has another problem with sharing.

In the `Reader` monad, for example, `(pure x >>= k)` first expands to—ignoring the coercions for clarity:

`\r -> k x r`

However, GHC also decides to extrude the `k x` because it doesn’t depend on `r`:

`let u = k x in \r -> u r`

The details go a little over my head, but I found a cunning workaround in the magical function `GHC.Exts.inline`, used in the `Eta` instance for `Reader`:

```
instance Eta (ReaderT e m a) where
  eta u = ReaderT (\x -> runReaderT (inline u) x)
```

When these inspection tests pass, that is a proof that the monad laws hold.

If we reduce what the compiler does to inlining and simplification, then on the one hand, not all monads can be verified that way (e.g., lists that don’t cheat with rewrite rules); on the other hand, when the proof works, it proves a property stronger than “lawfulness”.

Let’s call it “definitional lawfulness”: we say that the laws hold “by definition”, with trivial simplification steps only. There is some subjectivity about what qualifies as a “trivial” simplification; it boils down to how dumb the compiler/proof-checker can be. Nevertheless, what makes definitional lawfulness interesting is that:

it is immediately inspection-testable and the test is actually a proof, unlike with random property testing (QuickCheck) for example;

if the compiler can recognize the monad laws by mere simplification, that very likely implies that it can simplify the overhead of more complex monadic expressions.

That implication is not obviously true; it’s actually false in practice without some manual help, but definitional lawfulness gets us some of the way there. A sufficient condition is for inlining and simplification to be confluent (“the order of simplification does not matter”), but inlining being limited by heuristics jeopardizes that property, because those heuristics depend on the order of simplifications.

Custom rewrite rules also make the story more complicated, which is why I just consider them cheating, and prefer structures that enable fusion by simplification, such as difference lists and other continuation-passing tricks.

`(<>)` is also called `mappend`, and at the level of Core there is an unfortunately visible difference, which is why the source code uses `mappend`.↩︎

Paradoxically, it is sometimes called “eta-reduction”, even if it makes the term look “bigger”, because it also makes terms look more “regular”.↩︎

There is in fact a deeper analogy: pairs can be seen as (dependent) functions with domain `Bool`. Pairs and functions can also be viewed in terms of a more general notion of “negative types”, or “codata”.↩︎

The Haskell library *generic-data* provides generic implementations of standard type classes. One of the goals of *generic-data* is to generate code which performs (at least) as well as something you would write manually. Even better, we can make sure that the compiled code is identical to what one would obtain from hand-written code, using inspection testing^{1}.

During the exercise of building some inspection tests for *generic-data*, the most notable discrepancy to resolve was with the `Traversable` class.

To improve the traversals generated by *generic-data*, a useful data structure is *applicative difference lists* (it’s also been called `Boggle` before). It is a type-indexed variant of difference lists which simplifies applicative operations at compile time. This data structure is available as a library on Hackage: *ap-normalize*.

The `Traversable` type class describes type constructors `t` which can be “mapped over” similarly to `Functor`, but using an effectful function `a -> f b`:

```
class (Functor t, Foldable t) => Traversable t where
  traverse :: forall f a b. Applicative f => (a -> f b) -> t a -> f (t b)
```

(We will not discuss the `Functor` and `Foldable` superclasses.)

Throughout this post, fixing an applicative functor `f`, *actions* are what we call values of type `f x`, to evoke the idea that they are first-class computations.

Intuitively, a traversal walks over a data structure `t a` which contains “leaves” of type `a`, and performs some action (given by the `a -> f b` function) to transform them into “leaves” of type `b`, producing a new data structure `t b`.

There is a straightforward recipe to define `traverse` for many data types. This is best illustrated by an example. We will call this the “naive” definition because it’s just the obvious thing to write if one were to write a traversal by hand. That is not meant to convey that it’s bad in any way.
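Since the figure is not reproduced here, here is a reconstruction of what such a naive definition looks like for the `Example` type used below (a sketch; `traverseExample` is my name for it):

```haskell
data Example a = Example Int a (Maybe a) (Example a)

-- Naive hand-written traversal: handle each field according to its
-- type, and chain the actions with (<$>) and (<*>), associated to
-- the left.
traverseExample :: Applicative f => (a -> f b) -> Example a -> f (Example b)
traverseExample update (Example a b c d) =
  Example a <$> update b <*> traverse update c <*> traverseExample update d
```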

Using applicative laws:

- `Example <$> pure a` can fuse into `pure (Example a)`, which in turn can fuse with the following `(<*>)` into `Example a <$> ...`.
- Going the other way, we can expand `Example <$>` into `pure Example <*>` for a more uniform look.
- `liftA2` can also be used to fuse the first `(<$>)` and `(<*>)`.

For the sake of completeness, the recipe is roughly as follows, for a data type with a type parameter `a`:

- traverse each field individually, in one of the following ways depending on its type:
  - if its type does not depend on `a` (e.g., `Int`), then the field is kept intact, and returned purely (using `pure`);
  - if its type is equal to `a`, then we can apply the function `a -> f b` to it;
  - if its type is of the form `t a`, where `t` is a traversable type, we `traverse` it recursively;
- combine the field traversals using the `Applicative` combinator `(<*>)`.

Noticing that the only case where we need another type to be traversable is to traverse fields whose types depend on `a`, we can define `traverse` for all types which don’t involve non-traversable primitive types such as `IO` or `(->)`.

This is quite formulaic, and it can be automated in many ways. The most practical solution is to use the GHC extension `DeriveTraversable` (which implies `DeriveFunctor` and `DeriveFoldable`):

```
{-# LANGUAGE DeriveTraversable #-}
data Example a = Example Int a (Maybe a) (Example a)
  deriving (Functor, Foldable, Traversable)
```

You may be wondering: if it’s built into GHC, why would I bother with generic deriving? There isn’t a substantial difference from a user’s perspective (so you should just use the extension). But from the point of view of implementing a language, deriving instances for a particular type class is a pretty ad hoc feature to include in a language specification. Generic deriving subsumes it, turning that feature into a library that regular people, other than compiler writers, can understand and improve independently from the language itself.

Well, that’s the theory. Generic metaprogramming in Haskell has a ways to go before it can fully replace an integrated solution like `DeriveTraversable`. The biggest issue currently is that GHC does not perform as much inlining as one might want. A coarse but effective answer to overcome this obstacle might be an annotation to explicitly indicate that some pieces of source code must be gone after compilation.

So the other way to derive `traverse` that I want to talk about is to use `GHC.Generics`. *generic-data* provides a function `gtraverse` which can be used as the definition of `traverse` for many types with a `Generic1` instance. Although it does not use the `deriving` keyword^{2}, it is still a form of “deriving”, since the syntax of the instance does not depend on the particular shape of `Example`.

```
{-# LANGUAGE DeriveGeneric #-}
data Example a = Example Int a (Maybe a) (Example a)
  deriving Generic1

instance Traversable Example where
  traverse = gtraverse
```

All three instances above *behave* the same (the naive one, the `DeriveTraversable` one, and the generic one). However, if we look not only at the *behavior* but at the generated *code* itself, the optimized GHC Core produced by the compiler is not the same in all cases. The definition of `gtraverse` until *generic-data* version 0.8.3.0 results in code which looks like the following (various details were simplified for clarity’s sake^{3}):
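The figure itself is not reproduced here; a rough reconstruction of the shape being described (using `K1` and `(:*:)` from `GHC.Generics`, with `gtraverseLike` a made-up name) is:

```haskell
{-# LANGUAGE TypeOperators #-}
import GHC.Generics (K1(..), (:*:)(..))

data Example a = Example Int a (Maybe a) (Example a)

-- Rough reconstruction: traverse each field, wrap it in K1, collect
-- the results in a pair of pairs, then repackage as an Example.
gtraverseLike :: Applicative f => (a -> f b) -> Example a -> f (Example b)
gtraverseLike update (Example a b c d) =
  (\((K1 a' :*: K1 b') :*: (K1 c' :*: K1 d')) -> Example a' b' c' d')
    <$> ((:*:) <$> ((:*:) <$> (K1 <$> pure a) <*> (K1 <$> update b))
               <*> ((:*:) <$> (K1 <$> traverse update c)
                          <*> (K1 <$> gtraverseLike update d)))
```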

That function traverses each field (using `pure`, `update`, `traverse update`), wraps each result in a newtype `K1`, collects them in a pair of pairs `(_ :*: _) :*: (_ :*: _)`, and then replaces those pairs with the `Example` constructor.

Clearly, this does not look the same as the naive version shown earlier. Let’s enumerate the differences:

- There are many uses of `(<$>)`, which can be fused together.
- It constructs and immediately destructs intermediate pairs `(:*:)`. It would be more direct to wrap the fields in `Example`.
- The actions are associated differently (`(a <*> b) <*> (c <*> d)`), whereas the previous two implementations associate actions to the left (`((a <*> b) <*> c) <*> d`).

This definition cannot actually be simplified, because the applicative functor `f` and its operations `(<$>)` and `(<*>)` are abstract: their definitions are not available for inlining. This definition (Figure B) is only equivalent to the naive one (Figure A) if we assume the laws of the `Applicative` class (notably associativity), but the compiler has no knowledge of those. And so the simplifier is stuck there.

To be fair, it’s actually not so clear that these differences lead to performance problems in practice. Here are some mitigating factors to consider:

- For many concrete applicative functors, inlining `(<$>)` and `(<*>)` does end up simplifying away all of the noise.
- Even if we didn’t build up pairs explicitly using `(:*:)`, `(<*>)` may allocate closures which are about as costly as pairs anyway.
- Tree-like (`Free`) monads are more performant when associating actions to the right (`a <*> (b <*> (c <*> d))`).

Nevertheless, it seems valuable to explore alternative implementations. To echo the three points just above:

- The new definition of `gtraverse` will simplify to the naive version even while the applicative functor is still abstract.
- Properly measuring the subtle difference between pairs and closures sounds like a pain. Knowing the code that eventually runs allows one to switch from one system (`DeriveTraversable`) to another (*generic-data*) without risk of regressions—modulo all the transient caveats that make this ideal story not true today.
- If actions must be associated another way, that is just another library function to be written.

The main idea in this solution is that the definition of `gtraverse` should explicitly reassociate and simplify the traversal.

An obvious approach is thus to represent the syntax of the traversal explicitly, as an algebraic data type where a constructor encodes the applicative combinator `(<*>)`, possibly in a normalized form. This is a free applicative functor:

```
data Free f a where
  Pure :: a -> Free f a
  Ap :: Free f (b -> a) -> f b -> Free f a
  -- This is actually a regular data type, just using GADTSyntax for clarity.
```

However, this is a recursive structure: that blocks compiler optimizations because GHC does not inline recursive functions (if it did, this could be a viable approach).

Notice that this free applicative functor is basically a list: a heterogeneous list of `f b` values where `b` varies from element to element. If recursion is the problem, maybe we should find another representation of lists which is not recursive. As it turns out, difference lists will be the answer.

Let us digress with a quick recap of difference lists, so we’re on the same page, and as a reference to explain by analogy the fancier version that’s to come.

Here’s a list.

`1 : 2 : 3 : []`

A difference list is a list with a hole `_`

in place of its end `[]`

. The word “difference” means that it is the result of “subtracting” the end from the list:

`1 : 2 : 3 : _`

In Haskell, a difference list is represented as a function, whose input is a list to fill the hole with, and whose output is the whole list after adding the difference around the hole. A function “from holes to wholes”.

```
type DList a = [a] -> [a]
example :: DList Int
example hole = 1 : 2 : 3 : hole
```

Difference lists are interesting because they are super easy to concatenate: just fill the hole in one difference list with the other list. In Haskell, this is function composition, `(.)`. For instance, the list above is the composition of the two lists below:

```
ex1, ex2 :: DList Int
ex1 hole = 1 : hole
ex2 hole = 2 : 3 : hole
example = ex1 . ex2
```

Difference lists are an alternative representation of lists with a performant concatenation operation, which doesn’t allocate any transient intermediate structure. The trade-off is that other list operations are more difficult to have, notably because it’s expensive to inspect the list to know whether it is empty or not.

The following functions complete the picture, with a constructor of difference lists and an eliminator into regular lists. Internally they involve the list constructors in very simple ways, which is actually key to the purpose of difference lists. `singleton` is literally the list constructor `(:)` (thus representing the singleton list `[x]` as a list with a hole, `x : _`), and `toList` applies a difference list to the empty list (filling the hole with `[]`).

```
singleton :: a -> DList a
singleton = (:)
toList :: DList a -> [a]
toList u = u []
```
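Putting the pieces together, a small usage sketch of the definitions above:

```haskell
type DList a = [a] -> [a]

singleton :: a -> DList a
singleton = (:)

toList :: DList a -> [a]
toList u = u []

-- Concatenation is function composition: no intermediate list is
-- allocated until toList fills the final hole.
example :: DList Int
example = singleton 1 . singleton 2 . singleton 3
```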

Why were we talking about lists? Applicative programming essentially consists in describing lists of actions (values of type `f x` for some `x`) separated by `(<*>)` and terminated (on the left end) by `pure _` (we’ll come back later to putting `(<$>)` back).

```
pure Example
<*> pure a
<*> update b
<*> traverse update c
<*> traverse update d
```

Once we have a notion of lists, a difference list is a list with a hole `_` in place of its end:

```
_
<*> pure a
<*> update b
<*> traverse update c
<*> traverse update d
```

So constructing a traversal as a difference list of actions would allow us to maintain this structure of left-associated `(<*>)`. In particular, this will guarantee that there are no extra `(<$>)` in the middle. Once we’ve completed the list, we top it off with a term `pure _`, where the remaining hole expects a pure function, without any of the applicative functor nonsense which was blocking compile-time evaluation.

Let’s see how it looks in Haskell. This is where things ramp up steeply. While the types may look daunting, I want to show that we can ignore 90% of it to see that, under the hood, this is the same as plain difference lists `[a] -> [a]`.

You’re not missing much from ignoring 90% of the code: it is entirely constrained by the types. That’s the magic of parametricity.

The first thing to note is that since we’ve replaced the list constructor `(:)` with `(<*>)`, actions `f x` represent both individual list elements `a` and whole lists `[a]`.

We will draw the analogy between simple difference lists and applicative difference lists by *erasure*. Erase the type parameter of `f`, and anything that has something to do with that parameter. What is left is basically the code of `DList`, carefully replacing `f` with `a` or `[a]` as appropriate.^{4}

A first example is that `(<*>)` thus corresponds to both `flip (:)` and `(++)` (`flip (:)` because, as we will soon see, we are really building snoc lists here).

```
-- (<*>) vs (:) and (++)
(<*>) :: f (x -> y) -> f x -> f y
:: f -> f -> f -- erased
flip (:) :: [a] -> a -> [a]
(++) :: [a] -> [a] -> [a]
```

For another warm-up example, `fmap`, which acts on the type parameter of `f`, erases to the identity function.

```
-- fmap vs id
fmap :: (x -> y) -> f x -> f y
:: f -> f -- erased
id :: [a] -> [a]
```

The applicative difference lists described above are given by the following type `ApDList`. Similar to simple difference lists, they are also functions “from holes to wholes”, where both “holes” and “wholes” (complete things without holes) are actions in this case, `f (x -> r)` and `f r`, if we ignore `Yoneda`. We will not show the definition of `Yoneda`, but for the purpose of extending the metaphor with `DList`, `Yoneda f` is the same as `f`:

```
newtype ApDList f x = ApDList (forall r. Yoneda f (x -> r) -> f r)
type ApDList f = ( f -> f ) -- erased
type DList a = ( [a] -> [a])
```

While simple difference lists define a monoid, applicative difference lists similarly define an applicative functor, with a product `(<*>)` and an identity `pure` which indeed erase to the concatenation of difference lists (function composition) and the empty difference list (the identity function).

An important fact about this instance is that it has no constraints whatsoever; the type constructor `f` can be anything. The lack of constraints restricts the possible constructs used in this instance. It’s all lambdas and applications: that’s how we can tell, without even looking at the definitions, that `pure` and `(<*>)` can only be some flavor of the identity function and function composition.

```
instance Applicative (ApDList f) where
  pure x = ApDList (\t -> lowerYoneda (fmap ($ x) t))
  ApDList uf <*> ApDList ux = ApDList (\t -> ux (Yoneda (\c -> uf (fmap (\d e -> c (d . e)) t))))
```

Empty difference lists: erasure of `pure`, which corresponds to `id`.

```
-- pure vs id, signature
pure :: x -> ApDList f x -- Empty ApDList
id :: DList a -- Empty DList
-- pure vs id, definition
pure x = ApDList (\t -> lowerYoneda (fmap ($ x) t))
id = (\t -> t )
```

Where `lowerYoneda` is also analogous to the identity function:

`lowerYoneda :: Yoneda f x -> f x`

Concatenation of difference lists: erasure of `(<*>)`, which corresponds to `(.)`.

```
-- (<*>) vs (.), signature
(<*>) :: ApDList f (x -> y) -> ApDList f x -> ApDList f y -- Concatenate ApDList
(.) :: DList a -> DList a -> DList a -- Concatenate DList
-- (<*>) vs (.), definition
ApDList uf <*> ApDList ux = ApDList (\t -> ux (Yoneda (\c -> uf (fmap (\d e -> c (d . e)) t))))
uf . ux = (\t -> ux ( uf t ))
```

Remark: this composition operator is actually flipped. The standard definition goes `uf . ux = (\t -> uf (ux t))`. This is fine here because applicative lists are actually snoc lists—the reverse of “cons”—where elements are added to the right of lists, so the “holes” of the corresponding difference lists are on the left:

```
uf = ((_ <*> a) <*> b) <*> c -- A snoc list, separated by (<*>)
ux = (_ <*> x) <*> y
uf <*> ux = ((((_ <*> a) <*> b) <*> c) <*> x) <*> y
```

To concatenate `uf` and `ux`, we put `uf` in the hole on the left end of `ux`, rather than the other way around; this is why, in the definition above, `uf` is inside and `ux` is outside.

We have defined *concatenation* to combine applicative difference lists. We also need ways to construct and eliminate them. We *lift* elements as singleton lists and *lower* lists into simple actions.

Lifting creates a new `ApDList`, with the invariant that it represents a left-associated list of actions separated by `(<*>)` (the left-associativity is why it needs to be a snoc list). That invariant is preserved by the concatenation operation we defined just earlier. One can easily check that `liftApDList` is the only function in this little `ApDList` library (4 functions) where `(<*>)` from the `Applicative f` instance is used.

```
-- Singleton ApDList
liftApDList :: Applicative f => f x -> ApDList f x
liftApDList u = ApDList (\t -> lowerYoneda t <*> u)
-- Singleton DList (snoc version)
snocSingleton :: a -> DList a
snocSingleton u = (\t -> t ++ [u])
```

Lowering consumes an `ApDList` by filling the hole with an action, producing a whole action. This is the only function about `ApDList` where `pure` from `Applicative f` is used. We use `pure` to terminate lists of actions in the same way `[]` terminates regular lists. (This is oversimplified from the real version.)

```
lowerApDList :: Applicative f => ApDList f x -> f x
lowerApDList (ApDList u) = u (Yoneda (\f -> pure f))
-- By analogy
toList :: DList a -> [a]
toList u = u []
-- lowerApDList vs toList
lowerApDList (ApDList u) = u (Yoneda (\f -> pure f))
u = u ( pure _) -- erased
toList u = u []
```

Having defined difference lists and their basic operations, we can pretend that they are really just lists. Similarly, we can pretend that applicative difference lists `ApDList f x` are just actions `f x`, thanks to there being an instance of the same `Applicative` interface. With that setup, fixing the generic `gtraverse` function is actually a very small change, which will be explained through an example. We started with this result:

After the patch, we get the following (Figure C), where the only textual difference is that we inserted `lowerApDList` at the root and `liftApDList` at the leaves of the traversal. That changes the types of things in the middle from `f` to `ApDList f`. In the source code behind `gtraverse`, this type change appears as a single substitution in one type signature.^{5}

Again leveraging parametricity, we don’t need to work out the simplification in detail. Just from looking at the `traverseExample` definition here and the `ApDList` library outlined above, we can tell the following:

- the resulting term will have three occurrences of `(<*>)` from `Applicative f`, since there are three uses of `liftApDList`;
- all `Applicative` combinators for `ApDList` (`pure`, `(<*>)` and `(<$>)`) maintain a list structure as an invariant, so that in the end these three `(<*>)` will be chained together in the shape of a list;
- finally, `lowerApDList` puts a `pure` at the end of that list.

Provided everything does get inlined, the resulting term is going to be of this form.

```
traverseExample :: Applicative f => (a -> f b) -> Example a -> f (Example b)
traverseExample update (Example a b c d) =
  pure _
    <*> update b
    <*> traverse update c
    <*> traverse update d
```

There is just one remaining unknown, which is the argument of `pure`.

- Whatever it is, it contains only stuff that was between `lowerApDList` and `liftApDList` and that was not an `ApDList` combinator (`pure`, `(<*>)`, `(<$>)`), which we expect to have been reduced. That leaves us with `a`, the constructors `(:*:)` and `K1`, and the lambda at the top.
- The only constructs allowed to combine those are function applications and lambdas, because the combinators where they could have come from, `pure`, `(<*>)`, and `(<$>)`, are pure lambda terms.

With the constraint that it must all be well-typed, that doesn’t leave a lot of room. I’d be hard-pressed to find another way to put these together:

```
\b0 c0 d0 ->
  (\((K1 a' :*: K1 b') :*: (K1 c' :*: K1 d')) -> Example a' b' c' d')
    ((K1 a :*: K1 b0) :*: (K1 c0 :*: K1 d0))
```

Which beta-reduces to:

```
\b0 c0 d0 -> Example a b0 c0 d0
-- equal to --
Example a
```

With all of that, we’ve managed to make `gtraverse` simplify to roughly the naive definition, using only simplification rules from the pure Core calculus, and no external laws as rewrite rules.

There are a few more details to discuss, which explain the remaining differences between the naive definition, this latest definition, and what’s actually implemented in the latest version of *generic-data* (0.9.0.0).

**`pure` on the right.** The naive version of `traverse` starts with `Example <$> pure a`. By an `Applicative` law, it is equivalent to `pure (Example a)`. The naive version (Figure A) is indeed too naive: for fields of constant types, which don’t depend on the type parameter of the applicative functor `f`, we can pass them directly as arguments to the constructor instead of wrapping them in `pure` and unwrapping them at run time.

The new version of `gtraverse` achieves that by *not* wrapping `pure a` under `liftApDList` (Figure C). So this `pure` does not come from `Applicative f`, but from `Applicative (ApDList f)`, where it is defined as the identity function (more or less). Notably, this “fusion” happens even if `pure` is used in the middle of the list, not only at the end.

**`pure` on the left.** While `pure` on the right of `(<*>)` got simplified, there is a remaining `pure` on the left, which could be turned into `(<$>)` (Figure D).

Applicative difference lists alone won’t allow us to do that, because all actions are immediately put into arguments of `(<*>)` by `liftApDList`. We cannot inspect applicative difference lists and change how the first element is handled, because it is an argument to an abstract `(<*>)`. We can make another construction on top of applicative difference lists to postpone wrapping the first element in a difference list, so we can then use `(<$>)` instead of `(<*>)` (and an extra `pure`) to its left. There is an extra constructor to represent an empty list of actions.

```
data ApCons f c where
  PureAC :: c -> ApCons f c
  LiftAC :: (a -> b -> c) -> f a -> ApDList f b -> ApCons f c

instance Applicative f => Applicative (ApCons f) where {- ... -}
```

In fact, we can extend that idea to use another applicative combinator, `liftA2`, which can do the job of one `(<$>)` and one `(<*>)` at the same time. We take off the first two elements of the list, using two extra constructors to represent empty and singleton lists.

```
data ApCons2 f c where
  PureAC2   :: c -> ApCons2 f c
  FmapAC2   :: (b -> c) -> f b -> ApCons2 f c
  LiftApAC2 :: (a -> b -> c -> d) -> f a -> f b -> ApDList f c -> ApCons2 f d
  -- Encodes liftA2 _ _ _ <*> _

instance Applicative f => Applicative (ApCons2 f) where {- ... -}
```

The complete implementation can be found in the library *ap-normalize*.

We’ve described a generic implementation of `traverse` which can be simplified to the same Core term as a semi-naively handwritten version, using only pure lambda-calculus transformations—a bit more than beta-reduction, but nothing as dangerous as custom rewrite rules.

The `Traversable` instances generated using the *generic-data* library are now identical to instances derived by GHC using the well-oiled `DeriveTraversable` extension—provided sufficient inlining. This is a small step towards turning anything that has to do with deriving into a generic affair.

The heavy lifting is performed by *applicative difference lists*, an adaptation of difference lists from lists (really, monoids) to applicative functors. This idea is far from new, depending on how you look at it:

- the adaptation can also be seen in terms of heterogeneous lists first, with applicative functors adding a small twist to managing type-level lists (entirely existentially quantified);
- this is an instance of Cayley’s generalized representation theorem in category theory. (More on this below.)

Difference lists are well-known as a technique for improving asymptotic (“big-O”) run time; it is less well known that they can often be optimized away entirely using only inlining and simplification. (I’ve written about this before too.) Inlining and simplification are arguably a lightweight compiler optimization (“peephole optimization” of lambda terms), as opposed to, for instance, custom rewrite rules, which are unsafe, and other transformations that rely on non-trivial program analyses.

While a sufficiently smart compiler could be hypothesized to optimize anything, surely it is more practical to invest in data structures that can be handled by even today’s really dumb compilers.

I like expressive static type systems. Here, the rich types allow us to give applicative difference lists the interface of an applicative functor, so that the original implementation of `gtraverse` barely has to change, mostly swapping one implementation of `(<*>)` for another. As I’ve tried to show here, while the types can appear a little crazy, we can rely on the fact that they must be erased during compilation, so the behavior of the program (what we care about) can still be understood in terms of more elementary structures: if we ignore coercions that have no run-time presence and a few minor details, an applicative difference list is just a difference list.

Parametric polymorphism constrains the rest of the implementation; typed holes are a pretty fun way to take advantage of that, but if a program is uniquely constrained, maybe it shouldn’t appear in source code in the first place.

This technique is a form of staged metaprogramming: we use difference lists as a compile-time structure that should be eliminated by the compiler. This is actually quite similar to Typed Template Haskell,^{6} where the main difference is that the separation between compile-time and run-time computation is explicit. The advantage of that separation is that it allows arbitrary recursion and IO at compile-time, and there is a strong guarantee that none of it is left at run-time.

However, there are limitations on the expressiveness of Typed Template Haskell: only expressions can be quoted (whereas (Untyped) Template Haskell also has representations for patterns, types, and declarations), and we can’t inspect quoted expressions without giving up types (using `unType`).

Aside from compile-time IO, which is extremely niche anyway, I believe the other advantages have more to do with `GHC.Generics` being held back by temporary implementation issues than some fundamental limitation of that kind of framework. In contrast, what makes `GHC.Generics` interesting is that it is minimalist: all it gives us is a `Generic` instance for each data type; the rest of the language stays the same. No need to quote things if it’s only to unquote them afterwards. Rather, a functional programming language can be its own metalanguage.

This is a **purely theoretical note** for curious readers. You do not need to understand this to understand the rest of the library. You do not need to know category theory to be an expert in Haskell.

`ApDList` is exactly `Curry (Yoneda f) f a`, where `Curry` and `Yoneda` are defined in the *kan-extensions* library as:

```
newtype Curry f g a = Curry (forall r. f (a -> r) -> g r)
newtype Yoneda f a = Yoneda (forall r. (a -> r) -> f r)
```

`Curry` is particularly relevant to understand the theoretical connection between applicative difference lists and plain difference lists `[a] -> [a]` via Cayley’s theorem:

A monoid `m` is a submonoid of `Endo m = (m -> m)`: there is an injective monoid morphism `liftEndo :: m -> Endo m`. (More precisely, `liftEndo = (<>)`.)

`Endo m` corresponds to difference lists if we take the list monoid `m = [a]`.
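In Haskell terms, the Cayley embedding for monoids can be sketched as follows; `lowerEndo` is a name I am introducing here for the inverse operation:

```haskell
import Data.Monoid (Endo(..))

-- Cayley embedding: a monoid element becomes the function that
-- prepends it.
liftEndo :: Semigroup m => m -> Endo m
liftEndo x = Endo (x <>)

-- Recover the element by applying to the identity element.
lowerEndo :: Monoid m => Endo m -> m
lowerEndo (Endo f) = f mempty

-- With m = [a], these are exactly the difference-list conversions:
-- lowerEndo (liftEndo [1,2] <> liftEndo [3]) == [1,2,3]
```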

This theorem can be generalized to other categories, by generalizing the notion of monoid to *monoid objects*, and by generalizing the definition of `Endo` to an *exponential object* `Endo m = Exp(m, m)`.

As it turns out, applicative functors are monoid objects, and the notion of exponential is given by `Curry` above.

An applicative functor `f` is a substructure of `Endo f = Curry f f`: there is an injective transformation `liftEndo :: f a -> Endo f a`. (“Sub-applicative-functor” is a mouthful.)

However, if we take that naive definition `Endo f = Curry f f`, it differs from `ApDList` (missing a `Yoneda`), and it is not what we want here. The instance `Applicative (Endo f)` inserts an undesirable application of `(<$>)` in its definition of `pure`.

The mismatch is due to the fact that the syntactic concerns discussed here (all the worrying about where `(<$>)` and `(<*>)` get inserted) are not visible at the level of that categorical formalization. Everything is semantic, with no difference between `pure f <*> u` and `f <$> u`, for instance.

Anyway, if one really wanted to reuse `Curry`, then `Curry (Yoneda f) (Yoneda f)` should work fine as an alternative definition of `ApDList f`.

Cayley’s theorem also applies to monads, so these three things are, in a sense, the same:

- Difference lists (the `Endo` monoid)
- Applicative difference lists (the `ApDList` applicative functor)
- Continuation transformers (the continuation/codensity monad)

For more details about that connection, read the paper *Notions of computation as monoids*, by Exequiel Rivas and Mauro Jaskelioff (JFP 2017).

The same structure is already used in the *lens* library by the `confusing` combinator to optimize traversals. For recursive traversals, recursive calls are made in continuation-passing style, which makes these traversals stricter than what you get with *generic-data*. Eric Mertens also wrote about this before, where `ApDList` was called `Boggle`.

- *A monad is just a submonad of the continuation monad, what’s the problem?*
- *Making Haskell run fast: the many faces of* `reverse`
- *Free applicative functors in Coq*

The *inspection-testing* plugin; see also the paper *A Promise checked is a promise kept: Inspection testing*, by Joachim Breitner (Haskell Symposium 2017).↩︎

We also can’t use `DerivingVia` for `Traversable`.↩︎

Some of those details: only used `(<$>)` and `(<*>)`, ignoring the existence of `liftA2`; dropped the `M1` constructor; kept `K1` (which is basically `Identity`) around because it makes things typecheck if we look at it closely enough; `K1` is actually used by only one of the fields in the derived `Generic1` instance.↩︎

This analogy would be tighter with an abstract monoid instead of lists.↩︎

Except for the fact that I first needed to copy over the `Traversable` instances from *base* in a fresh class before modifying them.↩︎

A subset of Template Haskell, using the `TExp` expression type instead of `Exp`. This is the approach used in the recent paper *Staged sums of products*, by Matthew Pickering, Andres Löh, and Nicolas Wu (Haskell Symposium 2020).↩︎

As a programming language enthusiast, I find lots of interesting news and discussions on a multitude of social media platforms. I made two sites to keep track of everything new online related to Coq and Haskell:

- **Planet Coq**: https://coq.pl-a.net
- **Haskell Planetarium**:^{1} https://haskell.pl-a.net

If you were familiar with Haskell News, and missed it since it closed down, Haskell Planetarium is a clone of it.

While the inspiration came from Haskell News, this particular project started with the creation of Planet Coq. Since the Coq community is much smaller, posts and announcements are rarer while also more likely to be relevant to any one member, so there is more value in merging communication channels.

I’m told “planets”, historically, were more specifically about aggregating blogs of community members. In light of the evolution of social media, it is hopefully not too far-fetched to generalize the word to encompass posts on the various discussion platforms now available to us. Haskell Planetarium includes the blog feed of Planet Haskell; Planet Coq is still missing a blog feed, but that should only be temporary.

Under the hood, the link aggregators consist of a special-purpose static site generator, written in OCaml (source code). The hope was maybe to write some module of it in Coq, but I didn’t find an obvious candidate with a property worth proving formally. Some of the required libraries, in particular those for parsing (gzip, HTML, JSON, mustache templates), are clearer targets to be rewritten and verified in Coq.

I love pun domains. This one certainly makes me look for new projects related to programming languages (PL) just so that I could host them under that name.

An obvious idea is to spin up new instances of the link aggregator for other programming languages. If someone wants to see that happen, the best way is to clone the source code and submit a merge request with a new configuration containing links relevant to your favorite programming language (guide).

Questions and suggestions about the project are welcome, feel free to open a new issue on Gitlab or send me an email.


Thus named to not conflict with the already existing Planet Haskell.↩︎

How can we turn the infamous `head` and `tail` partial functions into total functions? You may already be acquainted with two common solutions. Today, we will investigate a more exotic answer using dependent types.

The meat of this post will be written in Agda, but should look familiar enough to Haskellers to be an accessible illustration of dependent types.

The list functions `head` and `tail` are frowned upon because they are partial functions: if they are applied to the empty list, they will blow up and break your program.

```
head :: [a] -> a
head (x : _) = x
head [] = error "empty list"

tail :: [a] -> [a]
tail (_ : xs) = xs
tail [] = error "empty list"
```

Sometimes we know that a certain list is never empty. For example, if two lists have the same length, then after pattern-matching on one, we also know the constructor at the head of the other. Or the list is hard-coded in the source for some reason, so we can see right there that it’s not empty. In those cases, isn’t it safe to use `head` and `tail`?

Rather than argue that unsafe functions are safe to use in a particular situation (and sometimes getting it wrong), it is easier to side-step the question altogether and replace `head` and `tail` with safer idioms.

To start, directly pattern-matching on the list is certainly a fine alternative.

Just short of that, one variant of `head` and `tail` wraps the result in `Maybe` so we can return `Nothing` in error cases, to be unpacked with whatever error-handling mechanism is available at call sites.

```
headMaybe :: [a] -> Maybe a
tailMaybe :: [a] -> Maybe [a]
```
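A straightforward implementation, for completeness (my sketch; the post only shows the signatures):

```haskell
headMaybe :: [a] -> Maybe a
headMaybe (x : _) = Just x
headMaybe [] = Nothing

tailMaybe :: [a] -> Maybe [a]
tailMaybe (_ : xs) = Just xs
tailMaybe [] = Nothing
```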

Another variant changes the argument type to be the type of non-empty lists, thus requiring callers to give static evidence that a list is not empty.

```
-- Data.List.NonEmpty
data NonEmpty a = a :| [a]
headNonEmpty :: NonEmpty a -> a
tailNonEmpty :: NonEmpty a -> [a]
```
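Again, the corresponding definitions are immediate (a sketch; `Data.List.NonEmpty` also exports its own `head` and `tail` with these types):

```haskell
import Data.List.NonEmpty (NonEmpty(..))

-- A NonEmpty list always has a first element, so no error case remains.
headNonEmpty :: NonEmpty a -> a
headNonEmpty (x :| _) = x

tailNonEmpty :: NonEmpty a -> [a]
tailNonEmpty (_ :| xs) = xs
```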

In this post, I’d like to talk about one more total version of `head` and `tail`.

**`headTotal` and `tailTotal`**

From now on, let us surreptitiously switch languages to Agda (syntactically speaking, the most disruptive change is swapping the roles of `:` and `::`). The functions `headTotal` and `tailTotal` are funny because they make the following examples well-typed:

```
headTotal (1 ∷ 2 ∷ 3 ∷ []) : Nat
tailTotal (1 ∷ 2 ∷ 3 ∷ []) : List Nat
```

- Unlike `headMaybe`, the result has type `Nat`, not `Maybe Nat`.
- Unlike `headNonEmpty`, the input list `1 ∷ 2 ∷ 3 ∷ []` has type `List Nat`, a plain list, not `NonEmpty`—or `List⁺` as it is cutely named in Agda.

`headTotal` and `tailTotal` will be defined in Agda, so they are most definitely total. And yet they appear to be as convenient to use as the partial `head` and `tail`, in that they can just be applied to a non-empty list to access its head and tail.

As you might have noticed, this post is an advertisement for dependent types, which are the key ingredients in the making of `headTotal` and `tailTotal`.

Naturally, this example only demonstrates the good points of these functions; we’ll get to the less good ones in time.

Let’s find the type and the body of `headTotal`. We put question marks as placeholders, to be filled in incrementally.

```
headTotal : ?
headTotal = ?
```

Obviously the type is going to depend on the input list. To define that dependent type, we will declare one more function to be refined simultaneously.

`headTotal` is a function parameterized by a type `a` and a list `xs : List a`, with return type `headTotalType xs`, which is another function of `xs`. That tells us to add some quantifiers and arrows to the type annotations.

```
headTotalType : ∀ {a : Set} (xs : List a) → ?
headTotalType = ?

headTotal : ∀ {a : Set} (xs : List a) → headTotalType xs
headTotal = ?
```

(Note: `Set` is the “type of types” in Agda, called `Type` in Haskell.)

`headTotalType` must return a type, i.e., a `Set`. Put that to the right of `headTotalType`’s arrow. A function producing a type is also called a *type family*: a family of types indexed by lists `xs : List a`.

```
headTotalType : ∀ {a : Set} (xs : List a) → Set
headTotalType = ?

headTotal : ∀ {a : Set} (xs : List a) → headTotalType xs
headTotal = ?
```

Pattern-match on the list `xs`, splitting both functions into two cases.

```
headTotalType : ∀ {a : Set} (xs : List a) → Set
headTotalType (x ∷ xs) = ?
headTotalType [] = ?

headTotal : ∀ {a : Set} (xs : List a) → headTotalType xs
headTotal (x ∷ xs) = ?
headTotal [] = ?
```

In the non-empty case (`x ∷ xs`), we know the head of the list is `x`, of type `a`. Therefore that case is solved.

```
headTotalType : ∀ {a : Set} (xs : List a) → Set
headTotalType (_ ∷ _) = a
headTotalType [] = ?

headTotal : ∀ {a : Set} (xs : List a) → headTotalType xs
headTotal (x ∷ _) = x
headTotal [] = ?
```

What about the empty case? We are looking for two values `headTotalType []` and `headTotal []` such that the former is the type of the latter:

`headTotal [] : headTotalType []`

That tells us that the type `headTotalType []` is inhabited.

What else can we say about those unknowns?

…

After much thought, there doesn’t appear to be any requirement besides the inhabitation of `headTotalType []`. Then, a noncommittal solution is to instantiate it with the unit type, avoiding any arbitrariness in subsequently choosing its inhabitant, since there is only one. The unit type and its unique inhabitant are denoted `tt : ⊤` in Agda.

```
headTotalType : ∀ {a : Set} (xs : List a) → Set
headTotalType (_ ∷ _) = a
headTotalType [] = ⊤ -- unit type

headTotal : ∀ {a : Set} (xs : List a) → headTotalType xs
headTotal (x ∷ _) = x
headTotal [] = tt -- unit value
```

To recapitulate that last case: when the list is empty, there is no head to take, but we must still produce *something*. Having no more requirements, we produce a boring thing, which is `tt`.

The definition of `headTotal` is now complete.

Following similar steps, we can also define `tailTotal`.

```
tailTotalType : ∀ {a : Set} (xs : List a) → Set
tailTotalType (_ ∷ _) = List a
tailTotalType [] = ⊤

tailTotal : ∀ {a : Set} (xs : List a) → tailTotalType xs
tailTotal (_ ∷ xs) = xs
tailTotal [] = tt
```

And with that, we can finally build the examples above!

```
some_number : Nat
some_number = headTotal (1 ∷ 2 ∷ 3 ∷ [])

some_list : List Nat
some_list = tailTotal (1 ∷ 2 ∷ 3 ∷ [])
```

We’re pretty much done, but we can still refactor a little to make this nicer to look at.

First, notice that the two type families `headTotalType` and `tailTotalType` are extremely similar, differing only on whether the `∷` case equals `a` or `List a`. Let’s merge them into a single function: we define a type `` b `ifNotEmpty` xs ``, equal to `b` if `xs` is not empty, otherwise equal to `⊤`.

```
_`ifNotEmpty`_ : ∀ {a : Set} (b : Set) (xs : List a) → Set
b `ifNotEmpty` (_ ∷ _) = b
_ `ifNotEmpty` [] = ⊤

headTotal : ∀ {a : Set} (xs : List a) → a `ifNotEmpty` xs
tailTotal : ∀ {a : Set} (xs : List a) → List a `ifNotEmpty` xs
```

The infix notation reflects the intuition that `headTotal` has a meaning close to a function `List a → a`, and similarly for `tailTotal`.

Finally, one last improvement is to reconsider the intention behind the unit type `⊤` in this definition. If `headTotal` or `tailTotal` are applied to an empty list, we probably messed up somewhere. Such mistakes are made easier to spot by replacing `⊤` with an isomorphic but more appropriately named type. If an empty list causes an error, we will either see a `Failure` to unify, or some `ERROR` screaming at us.

```
data Failure : Set where
  ERROR : Failure

_`ifNotEmpty`_ : ∀ {a : Set} (b : Set) (xs : List a) → Set
b `ifNotEmpty` (_ ∷ _) = b
b `ifNotEmpty` [] = Failure

headTotal : ∀ {a} (xs : List a) → a `ifNotEmpty` xs
headTotal (x ∷ _) = x
headTotal [] = ERROR

tailTotal : ∀ {a} (xs : List a) → List a `ifNotEmpty` xs
tailTotal (_ ∷ xs) = xs
tailTotal [] = ERROR
```

We’ve now come full circle. The bodies of `headTotal` and `tailTotal` closely resemble those of the partial `head` and `tail` functions at the beginning of this post. The difference is that dependent types keep track of the erroneous cases.

A working Agda module with these functions can be found in the source repository of this blog. There is also a version in Coq.

(This was my first time programming in Agda. This language is super smooth.)

One might question how useful `headTotal` and `tailTotal` really are. They may be not so different from `headNonEmpty` and `tailNonEmpty`, because they’re all only meaningful with non-empty lists: the burden of proof is the same. Even if we added `ERROR` values to cover the `[]` case, the point is really to never run into that case.

Moreover, to actually get the head out, `headTotal` requires its argument to be *definitionally* non-empty, otherwise the ergonomics are not much better than `headMaybe`. In other words, for `headTotal e` to have type `a` rather than `` a `ifNotEmpty` e ``, the argument `e` must actually be an expression which reduces to a non-empty list `e1 ∷ e2`, but that literally gives us an expression `e1` for the head of the list. Why not use it directly?

The catch is that the expression for the head might be significantly more complex than the expression for the list itself, so we’d still rather write `headTotal e` than whatever that reduces to.

For example, I’ve used a variation of this technique in a type-safe implementation of `printf`.^{1} The function `printf` takes a *format string* as its first argument, basically a string with holes. For instance, `"%s du %s"` is a format string with two placeholders for strings. Then, `printf` expects more arguments to fill in the holes. Once supplied, the result is a string with the holes correspondingly filled. Importantly, format strings may vary in number and types of holes.

```
printf "%s du %s" "omelette" "fromage" ≡ "omelette du fromage"
printf "%d * %d = %d" 6 9 42 ≡ "6 * 9 = 42"
```

Intuitively, that means the type of `printf` depends on the format string:

```
printf : ∀ (fmt : string) → printfType fmt
printfType : string → Set
```

However, not all strings are valid format strings. If a special character is misused, for example, `printf` may evaluate to `ERROR`.^{2}

```
printf "%m" = ERROR -- "%m" makes no sense
printfType "%m" = Failure
```

In all “correct” programs, `printf` is meant to be used with valid and statically known format strings, so the `ERROR` case doesn’t happen. Nevertheless, `printf "%d * %d = %d"` is a simpler expression to write than whatever it evaluates to, which would be some lambda that serializes its three arguments according to that format string.

I don’t have more examples right now, but this *dependently typed validation* technique seems well-suited to more general kinds of compile-time configurations, where it would not be practical to define a type encoding the necessary invariants.

Another hypothetical use case would be to extract the output of some parameterized search algorithm. Let’s imagine that it may not find a solution in general, so its return type should be a possibly empty `List a`. If you know that it does output something for some hard-coded parameters, then `headTotal` allows you to access it with little ceremony.

On a related note, `ifNotEmpty` seems generalizable to a dependently typed variant of the `Maybe` monad, keeping track of all the conditions for it to not be a `Failure` at the type level.

- `head` is partial, undefined at `[]`.
- `headTotal` maps into two types, `Failure` and `a`, depending on the value of the input.
- `headMaybe` maps into `Maybe a`, a bigger type than `a`, and cutting `a` out of it would take a bit of work.
- `headNonEmpty` has the cleanest-looking diagram, from putting the problem of `[]` out of scope.

What other variations are there?

*coq-printf*. This trick is no longer used since version 2.0.0 though, a better alternative having been found in Coq’s new system for string notations.↩︎

It would also be reasonable to ignore the “error” and accept all strings as valid.↩︎

I have just released two libraries to enhance QuickCheck for testing higher-order properties: *quickcheck-higherorder* and *test-fun*.

This is a summary of their purpose and main features. For more details, refer to the README and the implementations of the respective packages.

This project started from experiments to design laws for the *mtl* library. What makes a good law? I still don’t know the answer, but there is at least one sure sign of a bad law: find a counterexample! That’s precisely what property-based testing is useful for. As a byproduct, if you can’t find a counterexample after looking for it, that is some empirical evidence that the property is valid, especially if you expect counterexamples to be easy to find.

Ideally we would write down a property, and get some feedback from running it. Of course, complex applications will require extra effort for worthwhile results. But I believe that, once we have our property, the cost of entry to just start running test cases can be reduced to zero, and that many applications may benefit from it.

QuickCheck already offers a smooth user experience for testing simple “first-order properties”. *quickcheck-higherorder* extends that experience to *higher-order properties*.

A *higher-order property* is a property quantifying over functions. For example:

```
prop_bool :: (Bool -> Bool) -> Bool -> Property
prop_bool f x = f (f (f x)) === f x
```

Vanilla QuickCheck is sufficient to test such properties, provided you know where to find the necessary utilities. Indeed, simply passing the above property to the `quickCheck` runner results in a type error:

```
main :: IO ()
main = quickCheck prop_bool -- Type error!
```

`quickCheck` tries to convert `prop_bool` to a `Property`, but that requires `Bool -> Bool` to be an instance of `Show`, which is of course absurd.^{1}

Instead, functions must be wrapped in the `Fun` type:

```
prop_bool' :: Fun Bool Bool -> Bool -> Property
prop_bool' (Fn f) x = f (f (f x)) === f x
main :: IO ()
main = quickCheck prop_bool' -- OK!
```

Compounded over many properties, this `Fun`/`Fn` boilerplate is repetitive. It becomes especially cumbersome when the functions are contained inside other data types.

*quickcheck-higherorder* moves that cruft out of sight. The `quickCheck'` runner replaces the original `quickCheck`, and infers that `(->)` should be replaced with `Fun`.

```
-- The first version
prop_bool :: (Bool -> Bool) -> Bool -> Property
prop_bool f x = f (f (f x)) === f x
main :: IO ()
main = quickCheck' prop_bool -- OK!
```

The general idea behind this is to distinguish the *data* that your application manipulates, from its *representation* that QuickCheck manipulates. The data can take any form, whatever is most convenient for the application, but its representation must be concrete enough so QuickCheck can randomly generate it, shrink it, and print it in the case of failure.

Vanilla QuickCheck handles the simplest case, where the data is identical to its representation, and gives up as soon as the representation has a different type, requiring us to manually modify the property to make the representation of its input data explicit. This is certainly not a problem that can generally be automated away, but the UX here still has room for improvement. *quickcheck-higherorder* provides a new way to associate data to its representation, via a type class `Constructible`, which `quickCheck'` uses implicitly.

```
class (Arbitrary (Repr a), Show (Repr a)) => Constructible a where
  type Repr a :: Type
  fromRepr :: Repr a -> a
```
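For intuition, an instance for first-order functions could plausibly look like this — a hypothetical sketch, not necessarily the library’s actual code; `Fun`, `applyFun`, `Function`, and `CoArbitrary` are from `Test.QuickCheck`:

```haskell
-- A function is represented by a showable, shrinkable Fun,
-- and recovered with applyFun.
instance ( Function a, CoArbitrary a, Arbitrary b, Show a, Show b
         ) => Constructible (a -> b) where
  type Repr (a -> b) = Fun a b
  fromRepr = applyFun
```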

Notably, we no longer require `a` itself to be an instance of `Arbitrary` and `Show`. Instead, we put those constraints on an associated type `Repr a`, which is thus inferred implicitly whenever values of type `a` are quantified over.

Aiming to make properties higher-level and more declarative, the `prop_bool` property above can also be written like this:

```
prop_bool :: (Bool -> Bool) -> Equation (Bool -> Bool)
prop_bool f = (f . f . f) :=: f
```

where `(:=:)` is a simple constructor. That defers the choice of how to interpret the equation to the caller of `prop_bool`, leaving the above specification free of such operational details.

Behind the scenes, this exercises a new type class for testable equality,^{2} `TestEq`, turning equality into a first-class concept even for higher-order data (the main examples being functions and infinite lists).

```
class TestEq a where
  (=?) :: a -> a -> Property
```

For more details, see the README of *quickcheck-higherorder*.

QuickCheck offers a `Fun` type to express properties of arbitrary functions.^{3} However, `Fun` is limited to first-order functions. An example of a type that cannot be represented is `Cont`.

The library *test-fun* implements a generalization of `Fun` which can represent higher-order functions. Any order!

It’s a very simple idea at its core, but it took quite a few iterations to get the design right. The end result is a lot of fun. The implementation exhibits the following characteristics, which are not obvious a priori:

- Like in QuickCheck’s version, the type of those *testable functions* is a single GADT, i.e., a closed type, whereas an open design might seem more natural to account for user-defined types of inputs;
- the core functions to apply, shrink, and print testable functions impose no constraints on their domains;
- *test-fun* doesn’t explicitly make use of randomness; in fact, it doesn’t even depend on QuickCheck! The library is parameterized by a functor `gen`, and almost all of the code only depends on it being an `Applicative` functor. There is (basically) just one function (`cogenFun`) with a `Monad` constraint and with a random generator as an argument.

As a consequence, *test-fun* can be reused entirely to work with Hedgehog. However, unlike with QuickCheck, some significant plumbing is required, which is work in progress. *test-fun* cannot just be specialized to Hedgehog’s `Gen` monad; it will only work with QuickCheck’s `Gen`,^{4} so we currently have to break into Hedgehog’s internals to build a compatible version of the “right” `Gen`.

*test-fun* implements core functionality for the internals of libraries like *quickcheck-higherorder*. Users are thus expected to only depend directly on *quickcheck-higherorder* (or the WIP *hedgehog-higherorder* linked above).

*test-fun* only requires an `Applicative` constraint in most cases, because intuitively a testable function has a fixed “shape”: we represent a function by a big table mapping every input to an output. To generate a random function, we can generate one output independently for each input, collect them together using `(<*>)`, and build a table purely using `(<$>)`.
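To make the “functions as tables” intuition concrete, here is a toy version for the domain `Bool` (the names `TableFun`, `applyTable`, and `genTable` are mine for illustration, not *test-fun*’s): one output is generated independently per input, and the table is assembled using only `(<$>)` and `(<*>)`.

```haskell
-- A function Bool -> b, represented as a table with one row per input.
data TableFun b = TableFun { onTrue :: b, onFalse :: b }
  deriving Show

-- Looking up an input in the table.
applyTable :: TableFun b -> Bool -> b
applyTable t True  = onTrue t
applyTable t False = onFalse t

-- Generate a random table from a generator of outputs: one independent
-- output per input, combined applicatively. No Monad needed.
genTable :: Applicative gen => gen b -> gen (TableFun b)
genTable g = TableFun <$> g <*> g
```

With the list “generator” `[0, 1]`, `genTable` enumerates all four tables, i.e., all functions `Bool -> Int` over those two outputs.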

However, this view of “functions as tables” does not extend to higher-order functions, which may only make finite observations of their infinite inputs. A more general approach is to represent functions as decision trees over their inputs. “Functions as tables” is the special case where those trees are *maximal*, such that there is a one-to-one correspondence between leaves and inputs. However, maximal trees don’t always exist. Then a random generator must preemptively terminate trees, and that requires stronger constraints such as `Monad` (intermediate ones like `Alternative` or `Selective` might be worth considering too).

For more details, see the README of *test-fun*.

These libraries are already used extensively in my project *checkers-mtl*, which is where most of the code originated from.

One future direction on my mind is to port this to Coq, as part of the QuickChick project. I’m curious about the challenges involved in making the implementation provably total, and in formalizing the correctness of testing higher-order properties.

I’m always looking for opportunities to make testing as easy as possible. I’d love to hear use cases for these libraries you can come up with!

You could hack something in this case because `Bool` is a small type, but that does not scale to arbitrary types.↩︎

*Shrinking and showing functions (functional pearl)*, by Koen Claessen, in Haskell Symposium 2012.↩︎

It must be lazy, in the right way. A random monad built on top of lazy `State` is no good either. As of now, QuickCheck’s `Gen` is the only monad I know which is useful for *test-fun*.↩︎

The previous post showed off the flexibility of the continuation monad to represent various effects. As it turns out, it has a deeper relationship with monads in general.

Disclaimer: this is not a monad tutorial. It will not be enlightening if you’re not already familiar with monads. Or even if you are, probably. That’s the joke.

The starting point is the remark that `lift` for the `ContT` monad transformer is `(>>=)`, and `ContT` is really `Cont`.^{1} To make that identity most obvious, we define `Cont` as a type synonym here.

```
type Cont r a = (a -> r) -> r
lift :: Monad m => m a -> Cont (m r) a
-- Monad m => m a -> (a -> m r) -> m r
lift = (>>=)
```
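As a quick sanity check (using `Maybe` as an arbitrary example monad), applying `lift` to a computation and a continuation really does just run `(>>=)`:

```haskell
type Cont r a = (a -> r) -> r

-- lift for Cont (m r) is literally (>>=).
lift :: Monad m => m a -> Cont (m r) a
lift = (>>=)

example :: Maybe Int
example = lift (Just 20) (\x -> Just (x + 1))
-- same as: Just 20 >>= \x -> Just (x + 1)
```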

As a monad transformer, it is certainly an odd one. On the one hand, `Cont (m r)` is a monad which doesn’t really care whether `m` is a monad, or anything at all. On the other hand, `lift` is `(>>=)`: it directly depends on the full power of a `Monad`. That contrasts with `StateT` for example, whose `Monad` instance uses the transformed monad’s `Monad` instance, while `lift` only needs a `Functor`.

If `lift` is `(>>=)`, we can also say that `(>>=)` is `lift`, suggesting an alternative definition of monads as types that can be “lifted” into `Cont`, and “unlifted” back, by passing `pure` as a continuation.

```
class Monad m where
  lift :: m a -> Cont (m r) a
  pure :: a -> m a

unlift :: Monad m => Cont (m a) a -> m a
unlift u = u pure
```

We simply renamed `(>>=)` in the `Monad` class; nothing changed otherwise.^{2} The new monad laws below are also simple reformulations of the usual monad laws, primarily in terms of `lift` and `unlift`. There’s a bit of work to fix the third law, but no serious difficulties in the process.^{3}

Nevertheless, such renaming opens the door to another point of view, where monads are merely “subsets” of the `Cont` monad, and we can reframe the monad laws accordingly. They are the same, and yet, they look completely different.

```
-- Laws for the lift-pure definition of Monad
unlift . lift = id
(lift . unlift) (pureCont x) = pureCont x
(lift . unlift) (lift u >>=? \x -> lift (k x))
  = (lift u >>=? \x -> lift (k x))
```

where the `pure` and `(>>=)` of `Cont` are called `pureCont` and `(>>=?)`, clarifying that they are defined once and for all, independently of the `Monad` class. That is the key to resolving the apparent circularity in the title.

```
pureCont :: a -> Cont r a
pureCont a = \k -> k a

(>>=?) :: Cont r a -> (a -> Cont r b) -> Cont r b
c >>=? d = \k -> c (\a -> d a k)
```
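A quick sanity check: chaining two continuation computations with these operations and running the result with `id` as the final continuation behaves as expected.

```haskell
type Cont r a = (a -> r) -> r

pureCont :: a -> Cont r a
pureCont a = \k -> k a

(>>=?) :: Cont r a -> (a -> Cont r b) -> Cont r b
c >>=? d = \k -> c (\a -> d a k)

-- Chain two steps, then run with the identity continuation.
example :: Int
example = (pureCont 1 >>=? \x -> pureCont (x + 1)) id
```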

The second and third laws have a common structure. An equation `(lift . unlift) y = y` expresses the fact that `y` is in the image of `lift`. If we also assume the first law, `unlift . lift = id`, that says nothing more.

Another interpretation of the monad laws is now apparent: they say that a monad `m` is defined by an injection `lift` into a subset of `Cont (m r)` closed under `pureCont` and `(>>=?)`. That’s why we can say that, by definition, `m` is a “submonad” of `Cont (m r)`.^{4}

But with that fact alone, it wouldn’t matter that the codomain of `lift` is `Cont (m r)`; any monad `n` would do, as we could `unlift` the `(>>=)` of `n` down to a `(>>=)` for `m`. The special thing about `Cont` here is that `(>>=)` for `m` is literally `lift`.

To push that idea further, one might propose a more symmetric redefinition of `Monad` as a pair `(lift, unlift)`:

```
class Monad m where
  lift :: m a -> Cont (m r) a
  unlift :: Cont (m a) a -> m a
```

The remaining asymmetry in the first type parameter of `Cont` can also be removed by using the `CodensityT` monad transformer:

```
type CodensityT m a = forall r. Cont (m r) a

class Monad m where
  lift :: m a -> CodensityT m a
  unlift :: CodensityT m a -> m a
```

That’s certainly fine. I just prefer the simplicity of `Cont` over `CodensityT` where we can get away with it.^{5}

In any case, we can then define `pure` by “unlifting” `pureCont`:

```
pure :: Monad m => a -> m a
pure = unlift . pureCont
```
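Unfolding the definitions confirms that this agrees with the `pure` we started from:

```haskell
-- With pureCont a = \k -> k a  and  unlift u = u pure:
--
--   (unlift . pureCont) a
-- = unlift (\k -> k a)        -- definition of pureCont
-- = (\k -> k a) pure          -- definition of unlift
-- = pure a                    -- function application
```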

A small wrinkle with taking `unlift` as a primitive is that the new laws don’t quite match up with the old laws anymore. For example, for these two laws to be equivalent (remember that `lift` is `(>>=)`)…

```
unlift . lift = id
-- Corresponding classical monad law
u >>= pure = u
```

… we really want an extra law to “unfold” `unlift`, which is its definition in the previous version of `Monad`.

```
unlift u = u pure
-- or, without pure
unlift u = u (unlift . pureCont)
```

It’s also the only sensible implementation: `unlift` has to apply its argument `u`, which is a function, to some continuation. The only good choice is `pure`, and we have to write it into law to prevent other not-so-good choices.^{6} `pure` is arguably still a simpler primitive than `unlift` in practice, because one has to implement `pure` explicitly anyway.

To sum up, the `(lift, unlift)` presentation of `Monad` comes with an extra fourth law to keep `unlift` in check.

```
unlift . lift = id
(lift . unlift) (pureCont x) = pureCont x
(lift . unlift) (lift u >>=? \x -> lift (k x))
  = (lift u >>=? \x -> lift (k x))
unlift u = u (unlift . pureCont)
```

The title seems to be making a circular claim, defining monads in terms of monads. But it can really be read backwards in a well-founded manner.

The “continuation monad” is a concrete thing, consisting of a function on types `(_ -> m r) -> m r`, and two operations `pureCont` and `(>>=?)` (which turn out to be essentially function application and function composition, respectively).

A “submonad of the continuation monad” is a subset^{7} of the continuation monad closed under `pureCont` and `(>>=?)`.

Although “monad” appears in those terms, we are defining them as individual concepts independently of the general notion of “monad”, which can in turn be defined in those terms. Although confusing, the naming is meant to make sense a posteriori, after everything is defined.

That is an example of a representation theorem, where some general structure is reduced to another seemingly more specific one.

Cayley’s theorem says that every group on a carrier `a` is a subgroup of the group of permutations (bijective functions) `a -> a`, and the associated injection `a -> (a -> a)` is exactly the binary operation of the group on `a`.
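In Haskell terms, the monoid version of that embedding is a one-liner using `Endo` from `Data.Monoid` (the function names `toEndo` and `fromEndo` are mine): an element is represented by “multiply on the left”, and recovered by applying that endomorphism to the identity element.

```haskell
import Data.Monoid (Endo (..))

-- Cayley embedding: represent a monoid element by left multiplication.
toEndo :: Monoid a => a -> Endo a
toEndo x = Endo (x <>)

-- The embedding is injective: applying to mempty recovers the element.
fromEndo :: Monoid a => Endo a -> a
fromEndo (Endo f) = f mempty
```

Since `Endo`’s `(<>)` is function composition, `fromEndo (toEndo "ab" <> toEndo "cd")` gives back `"abcd"`: composing representations corresponds to multiplying elements.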

The Yoneda lemma says that `fmap` is an isomorphism between `m a` and `forall r. (a -> r) -> m r` for any functor `m` (into Set).

Here we said that `(>>=)` is a (split mono) morphism from `m a` to `forall r. (a -> m r) -> m r` for any monad `m`.

As was pointed out to me on reddit, this is indeed an application of the generalized Cayley representation theorem. This connection is studied in detail in the paper *Notions of Computations as Monoids*, by Exequiel Rivas and Mauro Jaskelioff, JFP 2017. (PDF, extended version)

The paper shows how to view applicative functors, arrows and monads as monoids in different categories, and how useful constructions arise from common abstract concepts such as exponentials, Cayley’s theorem, free monoids. Below is the shortest summary I could make of Cayley’s theorem applied to monads.

Cayley’s theorem generalizes straightforwardly from groups to monoids (omitted), and then from monoids (in the category of sets) to monoids in any category with a tensor `×` (i.e., a monoidal category) and with exponentials.^{8}

A (generalized) *monoid* `m` consists of a pair of morphisms `mult : m × m -> m` and `one : 1 -> m`, satisfying some conditions. Cayley’s theorem constructs an injection from `m` into the exponential object `(m -> m)`, by currying the morphism `mult` as `m -> (m -> m)`. Said informally, a monoid `m` is a submonoid of the monoid of endomorphisms `m -> m`.

Then consider that statement in the category of endofunctors on *Set*, where the tensor `×` is functor composition. In this category,

- a monoid is a monad, i.e., a pair `join : m × m -> m` and `pure : 1 -> m` (where `1` is the identity functor);
- the exponential object `(m -> m)` is the *codensity monad* on `m` (which we’ve been deliberately confusing with `Cont` throughout the post): `CodensityT m a` is the set of natural transformations^{9} between the functor `a -> m _` and `m`.

`type CodensityT m a = forall r. (a -> m r) -> m r`

Now, Cayley’s theorem translates directly to: a monad is a submonad of the codensity monad.

As before, there will be quite some blur on the distinction between `Cont`, `ContT`, and `CodensityT`.↩︎

Assuming we’ve already ditched `return` for `pure`.↩︎

A proof in Coq that these new laws imply the old ones, just to be sure.↩︎

You may be more familiar with notions of “substructure” being refinements of the notion of “subset”, and strictly speaking, `m` is not a subset of `Cont (m r)`. But it is convenient to generalize “substructure” directly to “anything that injects into a structure”, especially for working in category theory or formalizing those ideas in proof assistants based on type theory, where the set-theoretic notion of “subset” is awkward to express literally.↩︎

By defining `CodensityT` as a type synonym instead of a newtype, we would also run into minor problems with impredicativity and type inference.↩︎

I’m not actually sure whether the other laws entail this one.↩︎

“Subset” is not defined but I hope you get the idea.↩︎

Or rather, it is sufficient for `m` alone to be an exponent, so `(m -> m)` is defined as an object.↩︎

Which is not always a set, but we care when it is.↩︎