Tuesday, September 27, 2016

The fixpoint combinator

Consider the following recursive definition of the factorial function. \[ FAC = \lambda n.\;IF \left(=\;n\;0\right)\;1\;\left(*\;n\;\left(FAC\;\left(-\;n\;1\right)\right)\right) \nonumber \] The definition relies on the ability to name a $\lambda$-abstraction and then to refer to that name inside the $\lambda$-abstraction itself. No such facility is provided by the $\lambda$-calculus. $\beta$-abstraction is the application of $\beta$-reduction backwards to introduce new $\lambda$-abstractions, thus $+\;4\;1\leftarrow \left(\lambda x.\;+\;x\;1\right)\; 4$.

By $\beta$-abstraction on $FAC$, its definition can be written \[ FAC = \left(\lambda fac.\;\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(fac\;\left(-\;n\;1\right)\right)\right)\right)\right) FAC \nonumber \] This definition has taken the form $FAC = g\;FAC$ where $g = \left(\lambda fac.\;\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(fac\;\left(-\;n\;1\right)\right)\right)\right)\right)$ contains no recursion. We see also that $FAC$ is a fixed point ("fixpoint") of $g$. It is clear this fixed point can depend only on $g$, so, supposing there were a function $Y$ which takes a function and delivers a fixpoint of that function as its result, we'd have $FAC = Y\;g = g\;(Y\;g)$.

Under the assumption that such a function exists, and to build confidence that this definition of $FAC$ works, we will try to compute $FAC\;1$. Recall \[ \begin{eqnarray} &FAC& = Y\;g \nonumber \\ &g& = \lambda fac.\;\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(fac\;\left(-\;n\;1\right)\right)\right)\right) \nonumber \end{eqnarray} \] So, \[ \begin{eqnarray} FAC\;1 &\rightarrow& (Y\;g)\; 1 \nonumber \\ &\rightarrow& (g\;(Y\;g))\;1 \nonumber \\ &\rightarrow& (\left(\lambda fac.\;\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(fac\;\left(-\;n\;1\right)\right)\right)\right)\right) (Y\;g))\; 1 \nonumber \\ &\rightarrow& \left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(\left(Y\;g\right)\;\left(-\;n\;1\right)\right)\right)\right)\; 1 \nonumber \\ &\rightarrow& *\;1\;\left(\left(Y\;g\right)\;0\right) \nonumber \\ &\rightarrow& *\;1\;\left(\left(g\;\left(Y\;g\right)\right)\;0\right) \nonumber \\ &\rightarrow& *\;1\;\left(\left(\left(\lambda fac.\;\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(fac\;\left(-\;n\;1\right)\right)\right)\right)\right)\;\left(Y\;g\right)\right)\;0\right) \nonumber \\ &\rightarrow& *\;1\;\left(\left(\lambda n.\;IF\left(=\;n\;0\right)\;1\;\left(*\;n\;\left(\left(Y\;g\right)\;\left(-\;n\;1\right)\right)\right)\right)\;0\right) \nonumber \\ &\rightarrow& *\;1\;1 \nonumber \\ &=& 1 \nonumber \end{eqnarray} \]

The $Y$ combinator of the $\lambda$-calculus is defined as the $\lambda$-term $Y = \lambda f.\;\left(\lambda x.\;f\;\left(x\;x\right)\right)\left(\lambda x.\;f\;\left(x\;x\right)\right)$. $\beta$-reduction of this term applied to an arbitrary function $g$ proceeds like this: \[ \begin{eqnarray} Y\;g &\rightarrow& \left(\lambda f.\;\left(\lambda x.\;f\;\left(x\;x\right)\right) \left(\lambda x.\;f\;\left(x\;x\right)\right)\right)\;g \nonumber \\ &\rightarrow& \left(\lambda x.\;g\;\left(x\;x\right)\right) \left(\lambda x.\;g\;\left(x\;x\right)\right) \nonumber \\ &\rightarrow& g\;\left(\left(\lambda x.\;g\;\left(x\;x\right)\right)\;\left(\lambda x.\;g\;\left(x\;x\right)\right)\right) \nonumber \\ &=& g\;\left(Y\;g\right) \nonumber \end{eqnarray} \] Applying this term to $g$ has produced a fixpoint of $g$. That is, we are satisfied that this term will serve as a definition of $Y$ with the property we need, and we call it the "fixpoint combinator".

In the untyped $\lambda$-calculus, $Y$ can be defined, and that is sufficient for expressing all computable functions without having to add a special construction for recursion. In a typed $\lambda$-calculus, $Y$ cannot be defined since the term $\lambda x.\;f\;(x\;x)$ does not have a finite type. Thus, when implementing recursion in a functional programming language it is usual to provide $Y$ as a built-in function with the reduction rule $Y\;g \rightarrow g\;(Y\;g)$ or, in a strict language, $(Y\; g)\;x \rightarrow (g\;(Y\;g))\;x$ so as to avoid unfolding $Y\;g$ indefinitely.
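In OCaml, for instance, the strict rule can be made concrete with the following minimal sketch (mine, not a language built-in; fix itself is of course written with let rec, the point is only to illustrate the eta-expanded reduction rule).

let rec fix g = fun x -> g (fix g) x

(* The factorial of the introduction, where [fac] no longer refers to
   its own name but to the argument supplied by [fix]. *)
let fac = fix (fun fac n -> if n = 0 then 1 else n * fac (n - 1))

With this in place, fac 5 evaluates to 120: each unfolding of fix g is delayed until an argument arrives, exactly as in the rule $(Y\;g)\;x \rightarrow (g\;(Y\;g))\;x$.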

For an OCaml-like language, the idea then is to introduce a built-in constant $\mathbf{Y}$ and to denote the function defined by $\mathbf{let\;rec}\;f\;x = e$ as $\mathbf{Y}(\mathbf{fun}\;f\;x \rightarrow e)$. Intuitively, $\mathbf{Y}$ is a fixpoint operator that associates a functional $F$ of type $\left(\alpha \rightarrow \beta\right) \rightarrow \alpha \rightarrow \beta$ with a fixpoint of type $\alpha \rightarrow \beta$, that is, a value having the property $\mathbf{Y}\;F = F\;\left(\mathbf{Y}\;F\right)$. The relevant deduction rules involving this constant are: \[ \begin{equation} \frac{\vdash f\;(Y\;f)\;x \Rightarrow v} {\vdash (Y\;f)\;x \Rightarrow v} \tag{App-rec} \end{equation} \] \[ \begin{equation} \frac{\vdash e_{2}\left[Y(\mathbf{fun}\;f\;x \rightarrow e_{1})/f\right] \Rightarrow v} {\vdash \mathbf{let\;rec}\;f\;x=e_{1}\;\mathbf{in}\;e_{2} \Rightarrow v} \nonumber \tag{Let-rec} \end{equation} \]
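As a concrete reading of the (Let-rec) rule, here is a small example of mine, using the fix sketched above in place of the built-in $\mathbf{Y}$; the substitution $e_{2}\left[\mathbf{Y}(\mathbf{fun}\;f\;x \rightarrow e_{1})/f\right]$ is rendered, up to a $\beta$-step, as an application.

(* let rec fac n = if n = 0 then 1 else n * fac (n - 1) in fac 5 *)
let _ =
  (fun fac -> fac 5)
    (fix (fun fac n -> if n = 0 then 1 else n * fac (n - 1)))
(* evaluates to 120 *)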


References:
[1] The Implementation of Functional Programming Languages, Simon Peyton Jones, 1987.
[2] The Functional Approach to Programming, Guy Cousineau, Michel Mauny, 1998.

Tuesday, September 20, 2016

Custom operators in OCaml

If, like me, you've always been a little hazy on the rules for defining OCaml operators, then this little post might help!

It is possible to "inject" user-defined operator syntax into OCaml programs. Here's how it works. First we define a set of characters called "symbol characters".

Symbol character (definition)

A character that is one of

! $ % & * + - . / : < = > ? @ ^ |

Prefix operators

The ! ("bang") prefix operator, has a predefined semantic as the operation of "de-referencing" a reference cell. A custom prefix operator can made by from a ! followed by one or more symbol characters.

So, to give some examples, one can define prefix operators like !!, !~ or even something as exotic as !::>. For example, one might write something like

let ( !+ ) (x : int ref) : unit = incr x
as syntactic sugar equivalent to fun x -> incr x

Additionally, prefix operators can begin with one of ~ and ? and, as in the case of !, must be followed by one or more symbol characters. So, in summary, a prefix operator begins with one of

! ~ ?
and is followed by one or more symbol characters.

For example let ( ~! ) x = incr x defines an alternative syntax equivalent to the !+ operator presented earlier.

Prefix operators have the highest possible precedence.

Infix operators

It is in fact possible to define operators in 5 different categories. What distinguishes these categories from each other is their associativity and precedence properties.

Level 0

Level 0 operators are left associative with the same precedence as =. A level 0 operator starts with one of

= < > | & $
and is followed by zero or more symbol characters. For example, >>= is an operator much beloved by monadic programmers and |> (the pipe operator) is a built-in equivalent to let ( |> ) x f = f x.
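To illustrate, here is a level 0 operator of one's own, a minimal sketch in which the bind-over-option semantics is my choice for the example rather than anything predefined.

(* Bind over the option type *)
let ( >>= ) o f = match o with
  | None -> None
  | Some x -> f x

let half x = if x mod 2 = 0 then Some (x / 2) else None

Being a level 0 operator, >>= associates to the left, so Some 8 >>= half >>= half parses as (Some 8 >>= half) >>= half and evaluates to Some 2.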

Level 1

Level 1 operators are right associative, have a precedence just above = and start with one of

@ ^
and are followed by zero or more symbol characters. That is, these operators suggest operations that join things together. @@ (the application operator) of course has a predefined semantic as function application, that is, it is equivalent to the definition let ( @@ ) f x = f x.
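Right associativity is what makes @@ useful for dropping parentheses; the following two lines (my example) parse identically because the level 2 operator + binds tighter than @@ and the two applications of @@ group to the right.

let _ = print_endline (string_of_int (1 + 2))
let _ = print_endline @@ string_of_int @@ 1 + 2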

Level 2

Level 2 operators are left associative, have a precedence level shared with + and - and, indeed, are defined with a leading (one of)

+ -
and, as usual, are followed by a sequence of symbol characters. These operators suit operations that generalize addition-like or difference-like operations. Some potential operators of this kind are +~, ++ and so on.

Level 3

Level 3 operators are also left associative and have a precedence level shared with * and /. Operators of this kind start with one of

* / %
followed by zero or more symbol characters and are evocative of operations akin to multiplication and division. For example, *~ might make a good companion for +~ of the previous section.

Level 4

Level 4 operators are right associative and have a precedence above *. The level 4 operators begin with

**
and are followed by zero or more symbol characters. The operation associated with ** is exponentiation (binds tight and associates to the right). The syntax **~ would fit nicely into the +~, *~ set of the earlier sections.
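Putting levels 2, 3 and 4 together, here is a small, purely illustrative vocabulary of "clamped" arithmetic operators (the names and the clamping semantics are invented for this example); each operator inherits the precedence and associativity of the built-in whose leading characters it shares.

(* Clamp results into the range [0, 100] *)
let clamp n = max 0 (min 100 n)
let ( +~ ) a b = clamp (a + b)
let ( *~ ) a b = clamp (a * b)
let ( **~ ) a b = clamp (int_of_float (float_of_int a ** float_of_int b))

So 2 +~ 3 *~ 4 parses as 2 +~ (3 *~ 4) and gives 14, while 2 **~ 3 **~ 2 parses as 2 **~ (3 **~ 2) = 2 **~ 9 and gives clamp 512 = 100.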

Saturday, August 27, 2016

Perfectly balanced binary search trees

The type of "association tables" (binary search trees).

type (α, β) t =
| Empty
| Node of (α , β) t * α * β * (α, β) t * int
There are two cases: a tree that is empty or a node consisting of a left sub-tree, a key, the value associated with that key, a right sub-tree and an integer representing the "height" of the tree (the number of nodes to traverse before reaching the most distant leaf).

The binary search tree invariant will be maintained: for any non-empty tree $n$, every key in the left sub-tree of $n$ is ordered less than the key of $n$ and every key in the right sub-tree of $n$ is ordered greater than the key of $n$ (in this program, ordering of keys is performed using the Pervasives.compare function).

This function, height, given a tree, extracts its height.

let height : (α, β) t -> int = function
  | Empty -> 0
  | Node (_, _, _, _, h) -> h

The value empty, is a constant, the empty tree.

let empty : (α, β) t = Empty

create l x d r creates a new non-empty tree with left sub-tree l, right sub-tree r and the binding of key x to the data d. The height of the tree created is computed from the heights of the two sub-trees.

let create (l : (α, β) t) (x : α) (d : β) (r : (α, β) t) : (α, β) t =
  let hl = height l and hr = height r in
  Node (l, x, d, r, (max hl hr) + 1)

This next function, balance is where all the action is at. Like the preceding function create, it is a factory function for interior nodes and so takes the same argument list as create. It has an additional duty though in that the tree that it produces takes balancing into consideration.

let balance (l : (α, β) t) (x : α) (d : β) (r : (α, β) t) : (α, β) t =
  let hl = height l and hr = height r in
  if hl > hr + 1 then
    match l with
In this branch of the program, we have determined that production of a node with the given left and right sub-trees (denoted $l$ and $r$ respectively) would be unbalanced because $h(l) > h(r) + 1$ (where $h$ denotes the height function).

There are two possible reasons to account for this. They are considered in turn.

    (*Case 1*)
    | Node (ll, lv, ld, lr, _) when height ll >= height lr ->
      create ll lv ld (create lr x d r)
So here, we find that $h(l) > h(r) + 1$, because of the height of the left sub-tree of $l$.
    (*Case 2*)
    | Node (ll, lv, ld, Node (lrl, lrv, lrd, lrr, _), _) ->
      create (create ll lv ld lrl) lrv lrd (create lrr x d r)
In this case, $h(l) > h(r) + 1$ because of the height of the right sub-tree of $l$.
    | _ -> assert false
We assert false for all other patterns since, by construction, we aim to admit no further possibilities.

We now consider the case $h(r) > h(l) + 1$, that is the right sub-tree being "too long".

  else if hr > hl + 1 then
    match r with

There are two possible reasons.

    (*Case 3*)
    | Node (rl, rv, rd, rr, _) when height rr >= height rl ->
      create (create l x d rl) rv rd rr
Here $h(r) > h(l) + 1$ because of the right sub-tree of $r$.
    (*Case 4*)
    | Node (Node (rll, rlv, rld, rlr, _), rv, rd, rr, _) ->
      create (create l x d rll) rlv rld (create rlr rv rd rr)
Lastly, $h(r) > h(l) + 1$ because of the left sub-tree of $r$.
    | _ -> assert false
Again, all other patterns are impossible (provided we have written the program correctly according to our intentions) and so we assert false; there are no further possibilities.

In the last case, neither $h(l) > h(r) + 1$ nor $h(r) > h(l) + 1$ holds, so no rotation is required.

  else
    create l x d r

add x data t computes a new tree from t containing a binding of x to data. It resembles standard insertion into a binary search tree except that it propagates rotations through the tree to maintain balance after the insertion.

let rec add (x : α) (data : β) : (α, β) t -> (α, β) t = function
    | Empty -> Node (Empty, x, data, Empty, 1)
    | Node (l, v, d, r, h) ->
      let c = compare x v in
      if c = 0 then
        Node (l, x, data, r, h)
      else if c < 0 then
        balance (add x data l) v d r
      else 
        balance l v d (add x data r)

To implement removal of nodes from a tree, we'll find ourselves needing a function to "merge" two binary search trees, $l$ and $r$ say, where we can assume that all the elements of $l$ are ordered before the elements of $r$.

let rec merge (l : (α, β) t) (r : (α, β) t) : (α, β) t = 
  match (l, r) with
  | Empty, t -> t
  | t, Empty -> t
  | Node (l1, v1, d1, r1, h1), Node (l2, v2, d2, r2, h2) ->
    balance l1 v1 d1 (balance (merge r1 l2) v2 d2 r2)
Again, rotations are propagated through the tree to ensure that the merge results in a balanced tree.

With merge available, implementing remove becomes tractable.

let remove (id : α) (t : (α, β) t) : (α, β) t = 
  let rec remove_rec = function
    | Empty -> Empty
    | Node (l, k, d, r, _) ->
      let c = compare id k in
      if c = 0 then merge l r else
        if c < 0 then balance (remove_rec l) k d r
        else balance l k d (remove_rec r) in
  remove_rec t

The remaining algorithms below are "stock" algorithms for binary search trees with no particular consideration of balancing necessary and so we won't dwell on them here.

let rec find (x : α) : (α, β) t -> β = function
  | Empty ->  raise Not_found
  | Node (l, v, d, r, _) ->
    let c = compare x v in
    if c = 0 then d
    else find x (if c < 0 then l else r)

let rec mem (x : α) : (α, β) t -> bool = function
  | Empty -> false
  | Node (l, v, d, r, _) ->
    let c = compare x v in
    c = 0 || mem x (if c < 0 then l else r)
    
let rec iter (f : α -> β -> unit) : (α, β) t -> unit = function
  | Empty -> ()
  | Node (l, v, d, r, _) ->
    iter f l; f v d; iter f r

let rec map (f : α -> β -> γ) : (α, β) t -> (α, γ) t = function
  | Empty -> Empty
  | Node (l, k, d, r, h) -> 
    Node (map f l, k, f k d, map f r, h)

let rec fold (f : α -> β -> γ -> γ) (m : (α, β) t) (acc : γ) : γ =
  match m with
  | Empty -> acc
  | Node (l, k, d, r, _) -> fold f r (f k d (fold f l acc))

open Format

let print 
    (print_key : formatter -> α -> unit)
    (print_data : formatter -> β -> unit)
    (ppf : formatter)
    (tbl : (α, β) t) : unit =
  let print_tbl ppf tbl =
    iter (fun k d -> 
           fprintf ppf "@[<2>%a ->@ %a;@]@ " print_key k print_data d)
      tbl in
  fprintf ppf "@[[[%a]]@]" print_tbl tbl

The source code for this post can be found in the file 'ocaml/misc/tbl.ml' in the OCaml source distribution. More information on balanced binary search trees, including similar but different implementation techniques and complexity analyses, can be found in Cornell lecture notes on the subject.

Friday, August 19, 2016

Even Sillier C++

The C++ try...catch construct provides a facility for discrimination of exceptions based on their types. This is a primitive "match" construct. It turns out, this is enough to encode sum types.

The program to follow uses the above idea to implement an interpreter for the language of additive expressions using exception handling for case discrimination.

#include <iostream>
#include <cassert>
#include <exception>
#include <memory>

struct expr {
  virtual ~expr() {}

  virtual void throw_ () const = 0;
};

using expr_ptr = std::shared_ptr<expr const>;

class expr is an abstract base class; class int_ and class add are derived classes corresponding to the two cases of expressions. Sub-expressions are represented as std::shared_ptr<expr const> (that is, expr_ptr) instances.

struct int_ : expr { 
  int val; 
  int_ (int val) : val{val}
  {}

  void throw_ () const { throw *this; } 
};

struct add : expr { 
  expr_ptr left; 
  expr_ptr right; 

  template <class U, class V>
  add (U const& left, V const& right) 
    : left {expr_ptr{new U{left}}}
    , right {expr_ptr{new V{right}}}
  {}

  void throw_ () const { throw *this; } 
};

With the above machinery in place, here then is the "interpreter". It is implemented as a pair of mutually recursive functions.


int eval_rec ();

int eval (expr const& xpr) {
  try {
    xpr.throw_ ();
  }
  catch (...) {
    return eval_rec ();
  }
}

int eval_rec () {
  assert (std::current_exception());

  try {
    throw;
  }
  catch (int_ const& i) {
    return i.val;
  }
  catch (add const& op) {
    return eval (*op.left) +  eval (*op.right);
  }
}

This little program exercises the interpreter on the expression $(1 + 2) + 3$.

int main () {

  try{
    // (1 + 2) + 3
    std::cout << eval (add{add{int_{1}, int_{2}}, int_{3}}) << std::endl;
  }
  catch (...){
    std::cerr << "Unhandled exception\n";
  }
    
  return 0;
}

Credit to Mathias Gaunard, who pointed out that using a virtual function for the throwing of an expression removes the need for the explicit dynamic_cast operations present in an earlier version of this program.

Saturday, July 23, 2016

Simple category theory constructions

This post is about some very simple category theory constructions and how one can model them in C++.

First a type for binary products.

template <class A, class B>
using product = std::pair<A, B>;

//`fst (x, y)` is the projection `x`
template <class A, class B>
inline auto fst (product<A, B> const& p) {
  return p.first;
}

//`snd (x, y)` is the projection `y`
template <class A, class B>
inline auto snd (product<A, B> const& p) {
  return p.second;
}

//`mk_product (a, b)` computes the product of `a` and `b`
template <class A, class B>
inline product<A, B> mk_product (A&& a, B&& b) {
  return  std::make_pair (std::forward<A> (a), std::forward<B> (b));
}

Now $dup : X \rightarrow X \times X$ defined by $x \mapsto (x,\; x)$.

//`dup a` computes the product `(a, a)`
template <class A>
inline product<A, A> dup (A const& x) { return mk_product (x, x); }

Next up, $twist : X \times Y \rightarrow Y \times X$ defined by $(x,\; y) \mapsto (y,\; x)$.

//`twist (x, y)` is the product `(y, x)`
template <class A, class B>
inline product<B, A> twist (product<A, B> const& p) {
  return mk_product (snd (p), fst (p));
}

If $f : U \rightarrow R$ and $g : V \rightarrow S$ then we have $ravel : U \times V \rightarrow R \times S$ defined by $(x,\; y) \mapsto (f (x),\; g (y))$.

//`ravel f g (x, y)` is the product `(f x, g y)` (credit to Max
//Skaller on the name)
auto ravel = [](auto f) {
  return [=](auto g) {
    return [=](auto const& x) { 
      return mk_product (f (fst (x)), g (snd (x))); 
    };
  };
};

If $X \times Y$ is a (binary) product with projections $\pi_{x}$ and $\pi_{y}$, $Z$ an object, morphisms $f : Z \rightarrow X$, $g : Z \rightarrow Y$ then $\left\langle f, g \right\rangle : Z \rightarrow X \times Y$ is the mediating arrow $z \mapsto (f (z),\;g (z))$.

//The product of morphisms <`f`, `g`> (see
//https://en.wikipedia.org/wiki/Product_(category_theory))
auto prod = [](auto f, auto g) {
  return [=](auto const& z) { return mk_product (f (z), g (z)); };
};

We can use the Pretty good sum type library to define a type suitable for modeling (binary) coproducts.

template <class A>
struct Left { 
  A a; 
  template <class U> explicit Left (U&& u) : a {std::forward<U> (u)} {}
  A const& value () const { return a; }
};

template <class B>
struct Right { 
  B b; 
  template <class U> explicit Right (U&& u) : b {std::forward<U> (u)} {}
  B const& value () const { return b; }
};

template <class A, class B>
using sum = pgs::sum_type<Left<A>, Right<B>>;
template <class> struct sum_fst_type;
template <class A, class B>
  struct sum_fst_type<sum<A, B>> { using type = A; };
template <class S> struct sum_snd_type;
template <class A, class B>
  struct sum_snd_type<sum<A, B>> { using type = B; };

template <class S>
using sum_fst_t = typename sum_fst_type<S>::type;
template <class S>
using sum_snd_t = typename sum_snd_type<S>::type;

If $X + Y$ is a (binary) sum with injections $i_{x}$ and $i_{y}$, $Z$ an object, morphisms $f : X \rightarrow Z$, $g : Y \rightarrow Z$ then $\left[ f, g \right] : X + Y \rightarrow Z $ is the mediating arrow $\begin{align*} e \mapsto \begin{cases} f (e) & \text{if $e \in X$} \\ g (e) & \text{if $e \in Y$} \end{cases} \end{align*}$.

//The coproduct of morphisms [`f, `g`] (see
//https://en.wikipedia.org/wiki/Coproduct)
template <class S>
auto co_product = [](auto f) {
  return [=](auto g) {
    return [=](S const& s) {
      using A = sum_fst_t<S>;
      using B = sum_snd_t<S>;
      using lres_t = decltype (f (std::declval<A>()));
      using rres_t = decltype (g (std::declval<B>()));
      static_assert (
        std::is_same<lres_t, rres_t>::value
       , "co_product : result types differ");
      using res_t = lres_t;
      return s.match<lres_t>(
        [=](Left<A> const& l) { return f (l.value ()); },
        [=](Right<B> const& r) { return g (r.value ()); }
       );
    };
  };
};

That's it. Here's some example usage of all this stuff.

  auto succ=[](int i) { return i + 1; };
  auto pred=[](int i) { return i - 1; };
  auto twice=[](int i) { return 2 * i; };
  auto square=[](int i) { return i * i; };
  auto add=[](product<int, int> const& s) { return (fst (s) + snd (s)); };

  //--

  //Products

  //`dup`
  auto p = dup (1);
  std::cout << p.first << ", " << p.second << std::endl; //Prints '1, 1'

  //`prod`
  p = prod (succ, pred) (4);
  std::cout << p.first << ", " << p.second << std::endl; //Prints '5, 3'

  //`twist`
  p = twist (p);
  std::cout << p.first << ", " << p.second << std::endl; //Prints '3, 5'
  
  //`ravel`
  p = ravel (succ) (pred) (p);
  std::cout << p.first << ", " << p.second << std::endl; //Prints '4, 4'

  //--

  //Sums

  sum<int, float> l{pgs::constructor<Left<int>>{}, 1};
  sum<int, float> r{pgs::constructor<Right<float>>{}, 1.0f};
  std::cout << 
    co_product<sum<int, float>> 
      ([=](int i) { return std::to_string(i); })
      ([=](float f) { return std::to_string(f); })
      (l)
    << std::endl;
  ;//Prints '1'
  std::cout << 
    co_product<sum<int, float>> 
      ([=](int i) { return std::to_string(i); })
      ([=](float f) { return std::to_string(f); })
      (r)
    << std::endl;
  ;//Prints '1.000000'

  //--

Friday, June 17, 2016

Generic mappings over pairs

Browsing around on Oleg Kiselyov's excellent site, I came across a very interesting paper about "Advanced Polymorphism in Simpler-Typed Languages". One of the neat examples I'm about to present is concerned with expressing mappings over pairs that are generic not only in the datatypes involved but also over the number of arguments. The idea is to produce a family of functions $pair\_map_{i}$ such that

pair_map_1 f g (x, y) → (f x, g y) 
pair_map_2 f g (x, y) (x', y') → (f x x', g y y') 
pair_map_3 f g (x, y) (x', y') (x'', y'') → (f x x' x'', g y y' y'')
       .
       .
       .
The technique used to achieve this brings a whole bunch of functional programming ideas together: higher order functions, combinators and continuation passing style (and also leads into topics like the "value restriction" typing rules in the Hindley-Milner system).
let ( ** ) app k = fun x y -> k (app x y)
let pc k a b = k (a, b)
let papp (f1, f2) (x1, x2) = (f1 x1, f2 x2)
let pu x = x
With the above definitions, $pair\_map_{i}$ is generated like so.
(*The argument [f] in the below is for the sake of value restriction*)
let pair_map_1 f = pc (papp ** pu) (f : α -> β)
let pair_map_2 f = pc (papp ** papp ** pu) (f : α -> β -> γ)
let pair_map_3 f = pc (papp ** papp ** papp ** pu) (f : α -> β -> γ -> δ)
For example,
# pair_map_2 ( + ) ( - ) (1, 2) (3, 4) ;;
- : int * int = (4, -2)

Reverse engineering how this works requires a bit of algebra.

Let's tackle $pair\_map_{1}$. First

pc (papp ** pu) = (λk f g. k (f, g)) (papp ** pu) = λf g. (papp ** pu) (f, g)
and,
papp ** pu = λx y. pu (papp x y) = λx y. papp x y
so,
λf g. (papp ** pu) (f, g) =
    λf g. (λ(a, b) (x, y). (a x, b y)) (f, g) =
    λf g (x, y). (f x, g y)
that is, pair_map_1 = pc (papp ** pu) = λf g (x, y). (f x, g y) and, we can read the type off from that last equation as (α → β) → (γ → δ) → α * γ → β * δ.
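A quick check in the toplevel (a session of my own, not from the paper) confirms both the behavior and the type we just read off.

# pair_map_1 string_of_int not (1, true) ;;
- : string * bool = ("1", false)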

Now for $pair\_map_{2}$. We have

pc (papp ** papp ** pu) =
    (λk f g. k (f, g)) (papp ** papp ** pu) =
    λf g. (papp ** papp ** pu) (f, g)
where,
papp ** papp ** pu = papp ** (papp ** pu) =
    papp ** (λa' b'. pu (papp a' b')) =
    papp ** (λa' b'. papp a' b') = 
    λa b. (λa' b'. papp a' b') (papp a b)
which means,
pc (papp ** papp ** pu) = 
    λf g. (papp ** papp ** pu) (f, g) =
    λf g. (λa b.(λa' b'. papp a' b') (papp a b)) (f, g) =
    λf g. (λb. (λa' b'. papp a' b') (papp (f, g) b)) =
    λf g. λ(x, y). (λa' b'. papp a' b') (papp (f, g) (x, y)) =
    λf g. λ(x, y). (λa' b'. papp a' b') (f x, g y) =
    λf g. λ(x, y). λb'. papp (f x, g y) b' =
    λf g. λ(x, y). λ(x', y'). papp (f x, g y) (x', y') =
    λf g (x, y) (x', y'). (f x x', g y y')
that is, a function in two binary functions and two pairs as we expect. Phew! The type in this instance is (α → β → γ) → (δ → ε → ζ) → α * δ → β * ε → γ * ζ.
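Pushing one step further, $pair\_map_{3}$ behaves analogously (again, a toplevel session of my own):

# pair_map_3 (fun a b c -> a + b + c) (fun a b c -> a * b * c) (1, 2) (3, 4) (5, 6) ;;
- : int * int = (9, 48)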

To finish off, here's the program transliterated into C++(14).

#include <utility>
#include <iostream>

//let pu x = x
auto pu = [](auto x) { return x; };

//let ( ** ) app k  = fun x y -> k (app x y)
template <class F, class K>
auto operator ^ (F app, K k) {
  return [=](auto x) {
    return [=] (auto y) {
      return k ((app (x)) (y));
    };
  };
}

//let pc k a b = k (a, b)
auto pc = [](auto k) {
  return [=](auto a) {
    return [=](auto b) { 
      return k (std::make_pair (a, b)); };
  };
};

//let papp (f, g) (x, y) = (f x, g y)
auto papp = [](auto f) { 
  return [=](auto x) { 
    return std::make_pair (f.first (x.first), f.second (x.second)); };
};

int main () {

  auto pair = &std::make_pair<int, int>;

  {
  auto succ= [](int x){ return x + 1; };
  auto pred= [](int x){ return x - 1; };
  auto p  = (pc (papp ^ pu)) (succ) (pred) (pair (1, 2));
  std::cout << p.first << ", " << p.second << std::endl;
  }

  {
  auto add = [](int x) { return [=](int y) { return x + y; }; };
  auto sub = [](int x) { return [=](int y) { return x - y; }; };
  auto p = pc (papp ^ papp ^ pu) (add) (sub) (pair(1, 2)) (pair (3, 4));
  std::cout << p.first << ", " << p.second << std::endl;
  }

  return 0;
}

Thursday, April 21, 2016

Oh! Pascal!

I can't help but want to share my joy at coming across this pearl of a program from the "Pascal User Manual and Report" - Jensen and Wirth (circa 1974). In my edition, it's program 4.7 (graph1.pas).

This is it, rephrased in OCaml.

(* Graph of f x = exp (-x) * sin (2 * pi * x)

  Program 4.7, Pascal User Manual and Report, Jensen & Wirth
*)

let round (x : float) : int =
  let f, i = 
    let t = modf x in 
    (fst t, int_of_float@@ snd t) in
  if f = 0.0 then i
  else if i >= 0 then
    if f >= 0.5 then i + 1 else i
  else if -.f >= 0.5 then i - 1 else i

let graph (oc : out_channel) : unit =
  (*The x-axis runs vertically...*)
  let s = 32. in (*32 char widths for [y, y + 1]*)
  let h = 34 in (*char position of x-axis*)
  let d = 0.0625 in (*1/16, 16 lines for [x, x + 1]*)
  let c = 6.28318 in (* 2pi *)
  let lim = 32 in
  for i = 0 to lim do
    let x = d *. (float_of_int i) in
    let y = exp (-.x) *. sin (c *. x) in
    let n = round (s *. y) + h in
    for _ = n downto 0 do output_char oc ' '; done;
    output_string oc "*\n"
  done

let () = print_newline (); graph stdout; print_newline ()

The output from the above is wonderful :)

                                   *
                                               *
                                                       *
                                                            *
                                                            *
                                                         *
                                                   *
                                           *
                                   *
                            *
                       *
                    *
                    *
                      *
                          *
                              *
                                   *
                                       *
                                          *
                                            *
                                            *
                                           *
                                         *
                                      *
                                   *
                                *
                               *
                              *
                             *
                              *
                                *
                                 *
                                   *