Thursday, July 17, 2014

reactive-banana mystery leak

So I accidentally left my reactive-banana toy app open all night, and in the morning the computer was horribly unresponsive until I killed the app. After starting a new instance, I noticed that its CPU usage climbs from ~5% to 100%, while its memory usage grows uncontrollably. What is causing this?

The first thing I try is to lower the refresh rate at which gloss recomputes its frames, in case the problem is that the computer can't keep up with the work it has to do. The resource usage still increases, but much more slowly. Next, I try to isolate the commit in which the problem was introduced. Did the problem exist when I was using gloss, but not reactive-banana? No. Git-bisecting, I find that the problem was introduced by commit 66f4e7, "accumulating clicks":

@@ -27,13 +28,31 @@ reactiveMain :: forall t. Frameworks t
              => Event t Float
              -> Event t InputEvent
              -> Moment t (Behavior t Picture)
-reactiveMain floats _ = return pictures
+reactiveMain floats events = return pictures
   where
-    partialSums :: Behavior t Float
-    partialSums = accumB 0 (fmap (+) floats)
+    buttonClicks :: [Event t ()]
+    buttonClicks = map (flip buttonClick events) buttons
+    
+    countA,countB,countC :: Behavior t Int
+    [countA,countB,countC] = map countEventsB buttonClicks
+    
+    clickCounts :: Behavior t (Int,Int,Int)
+    clickCounts = liftA3 (,,) countA countB countC
     
     pictures :: Behavior t Picture
-    pictures = fmap renderFloat partialSums
+    pictures = fmap render clickCounts
+
+countEventsB :: Num a => Event t b -> Behavior t a
+countEventsB = accumB 0 . fmap (+) . fmap (const 1)
+
+buttons :: [Extent]
+buttons = [extentA, extentB, extentC]
+
+buttonClick :: Extent -> Event t InputEvent -> Event t ()
+buttonClick ex = fmap (const ()) . filterE isInside
+  where
+    isInside (EventKey (MouseButton LeftButton) Down _ p) = pointInExtent ex p
+    isInside _                                            = False
 
 renderFloat :: Float -> Picture
 renderFloat = uscale 0.2 . text . show

This is the commit in which I switched from displaying the accumulated time to displaying the number of accumulated clicks on each of three buttons.

After a little more experimentation, I notice that the CPU and memory get released whenever I click on one of the buttons. Weird! What is special about clicking one of the buttons? Well, one thing is that the graphics change. What if I force the graphics to change all the time by displaying the running elapsed time? The problem disappears. That's good, but I don't want to display the running elapsed time. Can I instead compute the running time, but display a fixed string? Embarrassingly, the first time I tested this I forgot to force the computed value, so the problem reappeared. When I do force the value, the problem remains hidden.
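
Roughly, that last experiment looked like this (my reconstruction, not an actual commit): the running total is computed and forced with seq, but the displayed picture still only depends on the click counts.

-- Reconstruction of the experiment: elapsed is computed and forced,
-- but the rendered picture only depends on clickCounts.
elapsed :: Behavior t Float
elapsed = accumB 0 (fmap (+) floats)

pictures :: Behavior t Picture
pictures = liftA2 (\t pic -> t `seq` pic) elapsed (fmap render clickCounts)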

Ok, so I don't understand why the CPU usage would go up, but the memory increase seems to be due to a laziness issue.

I wish I could use ghci to step through the code, but the app hangs when I do. Seems to be an issue with gloss, not reactive-banana. Maybe if I disable the timed updates by setting the refresh rate to zero? Still hangs, but the leak disappears. Which is not a surprise, really: the fact that the leak grew more slowly when the refresh rate was lower clearly indicates that the leak is caused by the Float events.

Or is it that clear? With the refresh rate set to zero, there are no Float events, and at first the leak doesn't seem to be present. But if I click outside the buttons or move the mouse a lot, this generates a bunch of ignored events, which causes the memory usage to rise. Since this is not how I had previously been testing for the leak, I re-perform the git-bisect with a refresh rate of zero, and discover that the problem was actually introduced by commit 1386bf, "accumulating time":

@@ -18,7 +21,19 @@ main :: IO ()
 main = playBanana (InWindow "Nice Window" (200, 200) (800, 200))
                   white
                   30
-                  (\floats _ -> return $ fmap renderFloat $ stepper 0 floats)
+                  reactiveMain
+
+reactiveMain :: forall t. Frameworks t
+             => Event t Float
+             -> Event t InputEvent
+             -> Moment t (Behavior t Picture)
+reactiveMain floats _ = return pictures
+  where
+    partialSums :: Behavior t Float
+    partialSums = accumB 0 (fmap (+) floats)
+    
+    pictures :: Behavior t Picture
+    pictures = fmap renderFloat partialSums

This is the commit in which I switched from displaying the Float values to displaying the sum of all Float values so far. Interestingly, I am not even using the mouse events in my FRP computation, so the problem can't possibly be that my code is doing something it shouldn't. Maybe there is something it should do which it isn't doing? Perhaps I should be actively consuming and ignoring the events, instead of simply not using them?

reactiveMain floats events = return pictures
  where
    pictures :: Behavior t Picture
    pictures = fmap renderFloat partialSums <* stepper undefined events

It works! In the most recent commit, which handles mouse events but filters out most time events, it is the floats which must be ignored. And in general, just to be safe, both kinds of events should be actively ignored:

pictures :: Behavior t Picture
pictures = fmap render buttonTexts
        <* stepper undefined floats
        <* stepper undefined events

Or, better yet, let's hide this inside gloss-banana, so that nobody has to think about this again.
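
For reference, here is roughly what that fix could look like (a sketch; ignoreInputs is my name for it, and the actual gloss-banana change may differ): wrap the user-supplied network so that both input streams are always actively consumed.

-- Hypothetical wrapper: actively consume both input streams
-- on behalf of the user's network.
ignoreInputs :: Frameworks t
             => (Event t Float -> Event t InputEvent -> Moment t (Behavior t Picture))
             -> (Event t Float -> Event t InputEvent -> Moment t (Behavior t Picture))
ignoreInputs mkNetwork floats events = do
  pictures <- mkNetwork floats events
  return (pictures <* stepper undefined floats <* stepper undefined events)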

Wednesday, July 16, 2014

reactive-banana anti-tutorial

This is not a tutorial. I have never used reactive-banana nor any other FRP library. And that's the point: in this post, once again[1], I will reveal my thought process as I learn this library.

Motivation


A recent tutorial called The introduction to Reactive Programming you've been missing made me realize that FRP is no longer a niche technique used by a subset of the functional programming community. Apparently, there's now a manifesto, a javascript library involving Microsoft, a conference... FRP, or at least RP, is clearly a big deal now.

As a member of the functional programming community, I feel like I ought to at least have an opinion on FRP. And of course, if I'm going to learn a reactive library, I will learn one of the Haskell variants, not a Javascript knockoff.

Why not <name-of-alternate-frp-library>?


Because I began by writing the graphics in gloss, whose play interface is based on pure functions. When I looked at sodium and elerea, I realized that I couldn't use them because they expect to be run from IO. When I googled for "gloss frp", I found a package named gloss-banana[2] which allows the simple update function (InputEvent → world → world) to be replaced by an unfamiliar-looking reactive-banana type. Just what I need to get started!

forall t. Frameworks t ⇒ Event t Float
                       → Event t InputEvent
                       → Moment t (Behavior t Picture)

I suspect that it's still IO under the hood, and that a similar sodium or elerea wrapper could easily be written, but I'm not yet in a position to write such a wrapper. So reactive-banana it is.

Versions


I have been told that the APIs change a lot from version to version, so for completeness, here are the versions I am using:

ghc-7.8.2
gloss-1.8.2.1
gloss-banana-0.1.0.0 (modified to accept base-4.7.0.0)
reactive-banana-0.7.1.3

Goal


I would like to duplicate the functionality implemented in the javascript tutorial: three labels and four buttons, one of which updates all the labels, while the remaining three update one label each. For simplicity, I'll put the labels on their corresponding buttons.

Since the code is not running under IO, I won't try to make those updates fetch usernames from github, even though I suspect that interfacing with external event sources such as network connections must be a big part of learning FRP. For now, I want to learn the basics, so each update will simply pull the next available entry from an infinite list of integers.

commit 39647e:
I couldn't find any way to center text in gloss :(

First steps: Why so polymorphic?


I need to produce a value of type forall t. Frameworks t ⇒ something something. That's weird, why would they need me to write polymorphic code? Is there more than one "framework" involved?

Looking at the documentation for Frameworks, I learn that the constraint "indicates that we can add input and output to an event network". But then, why are there no typeclass methods for adding those things? Maybe they are just hidden? Looking at the source, I see that no, it really is a typeclass with no methods. Weird.

Still, using polymorphism kind of makes intuitive sense for allowing extra stuff to be added. Consider the Arrow typeclass, for example: it has a method for adding extra (untouched) input and output along your existing transformation of type f a b (if you are unfamiliar with Arrow, think of it as a → b).

first :: f a b → f (a, s) (b, s)

If I were asked to construct an f Int String, and the code asking me for it had to embed my f Int String inside a bigger computation, it might very well need something like first to adapt my computation into theirs. If f were not an Arrow, then I might have to produce a polymorphic value of type forall s. f (Int, s) (String, s) instead. I assume that gloss-banana requests a polymorphic function for similar reasons.
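
To make the analogy concrete, the caller's adaptation might look like this (Context is a hypothetical type standing in for whatever extra data the caller threads through):

import Control.Arrow (Arrow, first)

data Context = Context  -- hypothetical extra data owned by the caller

adapt :: Arrow f => f Int String -> f (Int, Context) (String, Context)
adapt = first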

Compile, Run, and Quit


The next part of the type, Event t Float → Event t InputEvent → something, is straightforward: to assist me in constructing the something, I have access to the reactive-banana representation of typical gloss events: a Float for the passage of time, or an InputEvent representing a keypress or a mouse action. For each of those, gloss typically expects me to modify a "representation of the world" of my choice, and I also need a way to represent this world as a gloss Picture. This time, with gloss-banana, I don't get to choose an intermediate representation: I need to produce a something-something Picture directly (a Moment t (Behavior t Picture), to be exact).

At the moment, I just want something which compiles, so let's see if I can ignore the two Event inputs and produce a constant Picture, using something like return or pure. Is Moment or Behavior a Monad or Applicative?

Looking at the documentation, yes, Moment is a Monad and Behavior is an Applicative. Piece of cake, then...

main :: IO ()
main = playBanana (InWindow "Nice Window" (200, 200) (800, 200))
                  white
                  30
                  (\_ _ -> return $ pure $ circle 10)

commit f4c25a:
At least circles are centered!

Well, it worked, but unlike the play from gloss, playBanana doesn't quit when I press ESC. Does reactive-banana have a command to terminate a Moment or a Behavior? Usually a computation which consists of a single return ends immediately, but clearly reactive-banana's monadic composition does not simply consist of executing one computation after another.

Since I can't find anything with a quit-sounding name in the documentation, my hypothesis is that reactive-banana computations are able to provide values forever, and that it is the user of that computation who decides when to stop asking it for more values. So let's look at said user, the source for playBanana:

playBanana display colour frequency mPicture = do
  ...
  playIO display colour frequency ()
    (\      _ → readIORef pictureref)
    (\ ev   _ → () <$ event ev)
    (\ time _ → () <$ tick time)

Okay, so it looks like there was another version of Gloss.play which I did not know about, playIO. I bet play delegates the bulk of the work to playIO, and hardcodes the logic of quitting, probably using exitSuccess. Let's look at the code for play and playIO...

play ... worldHandleEvent ...
 = playWithBackendIO defaultBackendState 
        display backColor simResolution
        worldStart 
        (return . worldToPicture)
        (\event world -> return $ worldHandleEvent event world)
        (\time  world -> return $ worldAdvance     time  world)

playIO = playWithBackendIO defaultBackendState

Now that's weird. play doesn't seem to hardcode ESC to quit, so it must be playWithBackendIO which hardcodes it. Both versions delegate to playWithBackendIO, so both versions should quit on ESC. But the FRP version doesn't. Why? playWithBackendIO is not documented, so there is no "source" button on which I can click to view its source, but its source is available regardless. Like play and playIO before it, playWithBackendIO delegates its work to yet another internal function, createWindow.

I feel like I went deep enough into this ESC rabbit hole already, and that I should focus on the FRP part. I'll simply write my own version of playBanana[3], one which abruptly quits on ESC:

playBanana display colour frequency mPicture = do
  ...
  playIO display colour frequency ()
    (\      _ → readIORef pictureref)
    (\ ev   _ → quitOnEsc ev >> () <$ event ev)
    (\ time _ → () <$ tick time)
  where
    quitOnEsc (G.EventKey (G.SpecialKey G.KeyEsc)
                          G.Down _ _) = exitSuccess
    quitOnEsc _                       = return ()

Event to Moment


On the FRP side, I now have a simple computation which ignores all events and always produces the same Picture, a small circle. One of my inputs is an Event t Float; can I display that float? I can easily convert a Float to a Picture...

uscale :: Float -> Picture -> Picture
uscale v = scale v v

renderFloat :: Float -> Picture
renderFloat = uscale 0.2 . text . show

...but how do I lift this Float → Picture function to an Event t Float → Moment t (Behavior t Picture)? From a talk on sodium I saw a while ago, I remember that some FRP systems have two kinds of streams: event streams, and some other kind. Event streams only have values when events occur, while the other kind of stream has a value at every point in time. Scrubbing through that video, I quickly find a slide which lists "Event" and "Behaviour", reminding me that the other kind of stream is called a "behaviour". I also remember that there is a primitive for converting an event stream into a behaviour: when an event occurs, the behaviour takes on its value, and when none does, the behaviour holds the value of the last event. Clearly this implies that we also need an initial value, to be held before the first event occurs. I hoogle for a -> Event t a -> Behavior t a (it took me a few attempts to remember to add the t), and I obtain (two copies of?) a function called stepper with that exact signature.

main :: IO ()
main = playBanana (InWindow "Nice Window" (200, 200) (800, 200))
                  white
                  30
                  -- Couldn't match type ‘Float’ with ‘Picture’
                  (\floats _ -> return $ stepper 0 floats)

Oh, right, I forgot to call renderFloat. I don't actually have a Float on which to call it, I only have a something something Float. Which is fine, as I can probably fmap through the somethings to reach the Float. I can either fmap over the Behavior I just constructed or over the original Event, I don't think it matters.

main :: IO ()
main = playBanana (InWindow "Nice Window" (200, 200) (800, 200))
                  white
                  30
                  (\floats _ -> return $ fmap renderFloat
                                       $ stepper 0 floats)

Youhoo!

commit ee5dd7:
The float is supposed to be the number of seconds since the last frame,
which should be 1/30 since I asked to be updated 30 times per second.
I think it's a bug in gloss.

Sum


Sooner or later, I will need to keep some state between frames. As a simple exercise, can I display the sum of the floats instead of the latest one?

Let's see, I start with a value, then each time I receive an event, I update this value. Does hoogle know of any function with the type s -> (a -> s -> s) -> Event t a -> Behavior t s? Nope. Maybe I can stay inside the world of events, and only convert to a behaviour at the end. Does hoogle know of any function of type s -> (a -> s -> s) -> Event t a -> Event t s? Neither. I can't stay inside the world of behaviours: since they have values at all points in time, it wouldn't be clear when to apply the update function.

Let's see, what else could it be? Oh, I know! Instead of an event broadcasting dynamic values to be accumulated with a fixed update function, maybe it could be the functions which are dynamically broadcasted? As each one would arrive, they would be accumulated on top of a fixed initial value. Does hoogle know of any function of type s -> Event t (s -> s) -> Behavior t s? Yes it does! It's called accumB, and I can use it to compute a running sum.

main = playBanana (InWindow "Nice Window" (200, 200) (800, 200))
                  white
                  30
                  reactiveMain

reactiveMain :: forall t. Frameworks t
             => Event t Float
             -> Event t InputEvent
             -> Moment t (Behavior t Picture)
reactiveMain floats _ = return pictures
  where
    partialSums :: Behavior t Float
    partialSums = accumB 0 (fmap (+) floats)
    
    pictures :: Behavior t Picture
    pictures = fmap renderFloat partialSums

commit 1386bf:
It works! And as a bonus, the timing bug is now mysteriously fixed.

Buttons


So far so good! In the same vein, I should be able to accumulate button press events to increment digits on each of the buttons. But wait; each button is only concerned with a subset of the mouse click events. How do I filter an event stream? Another easy hoogle search says filterE should do the trick.

reactiveMain floats events = return pictures
  where
    buttonClicks :: [Event t ()]
    buttonClicks = map (flip buttonClick events) buttons
    
    countA,countB,countC :: Behavior t Int
    [countA,countB,countC] = map countEventsB buttonClicks
    
    clickCounts :: Behavior t (Int,Int,Int)
    clickCounts = liftA3 (,,) countA countB countC
    
    pictures :: Behavior t Picture
    pictures = fmap render clickCounts

countEventsB :: Num a => Event t b -> Behavior t a
countEventsB = accumB 0 . fmap (+) . fmap (const 1)

buttons :: [Extent]
buttons = [extentA, extentB, extentC]

buttonClick :: Extent -> Event t InputEvent -> Event t ()
buttonClick ex = fmap (const ()) . filterE isInside
  where
    isInside (EventKey (MouseButton LeftButton)
                       Down _ p) = pointInExtent ex p
    isInside _                   = False

To combine the three different behaviours into one in clickCounts, I guessed that Behavior was probably an Applicative. Events probably aren't; because of filtering, you wouldn't be able to match individual events from one stream with individual events from the other.

commit 66f4e7:
Click-click-click-click-click! Click-click! Click-click-click!

A shared source of numbers


Currently, the buttons all start at 0 and each click increments that individual button's number by one. Instead, I would like the three buttons to start at 1, 2 and 3, and for each click to set the clicked button to the next number in that sequence. The obvious imperative implementation, however, doesn't seem very FRP-ish. In particular, I don't think the event stream from one button is allowed to have any effect on the event stream of the other buttons: a stream can only be affected by its "upstream" streams, that is, the streams from which it is computed.

So I think I need something upstream which would handle the shared state, and each of the three buttons should derive their own event streams from that. The most obvious representation for an upstream thing from which the three kinds of events could be derived would be... a stream containing all three kinds of events.

I already have a list of click streams, obtained by filtering the raw mouse events from three separate regions. To obtain a single stream containing the events from all three regions, I could filter the raw events in a different way, but can I instead merge the three event streams I already have? Hoogle doesn't have a function of type Event t a -> Event t b -> Event t (Either a b), and I can't think of any other form such a combinator could take. Looking at the documentation for Event, I quickly find union, which needs both sides to hold events of the same type. And actually, it looks even more convenient that way!

labelledClicks :: [Event t Char]
labelledClicks = zipWith (fmap . const) ['a'..] buttonClicks

clickLabels :: Event t Char
clickLabels = foldr union never labelledClicks

Now I need each of those events to increment a shared state. The accumB we used earlier generated a Behavior; from the name, I guess that there is also a variant named accumE which generates an Event? Bingo.

clickEvents :: Event t (Char, Int)
clickEvents = accumE (undefined, 0) (fmap mkEvent clickLabels)

mkEvent :: Char -> (Char, Int) -> (Char, Int)
mkEvent label (_, n) = (label, n+1)

Since I want to keep the Char from the original event stream, the accumulated value must contain a Char, which means I must also provide a Char for the initial state. I never use this part of the initial state, so I can use undefined.

Now that I have a single stream containing the annotated events from all three buttons, I can split them back into three separate streams via filters which happen to be mutually-exclusive. Previously, we used filterE, while this time... we must still use filterE, because strangely enough, there is no filterB.

countA,countB,countC :: Behavior t Int
[countA,countB,countC] = map countN "abc"

countN :: Char -> Behavior t Int
countN label = stepper 0
             $ fmap snd
             $ filterE ((== label) . fst) clickEvents

commit 76b052:
The buttons start at (0,0,0) instead of (1,2,3),
but otherwise increment with a shared state, as desired.

The refresh button


When the refresh button is clicked, the three other buttons should behave as if they were clicked one after the other. Faking those clicks sounds easy enough: since the real stream of clicks is just a stream of () values, I should be able to add a few extra () values to represent the fake clicks. In particular, I can simply union the stream of real clicks with the stream of refresh clicks.

refreshClicks :: Event t ()
refreshClicks = buttonClick extentR events

fakeButtonClicks :: [Event t ()]
fakeButtonClicks = map (union refreshClicks) buttonClicks

labelledClicks :: [Event t Char]
labelledClicks = zipWith (fmap . const) ['a'..] fakeButtonClicks

commit d98691:
The buttons are clicked one after the other, like I wanted.
But how did reactive-banana know to click them from top to bottom?

It worked! But... why did it? The fake clicks from the refresh button are sent to the other three buttons at the same time. How come the "callbacks" were called from top to bottom? What if I wanted the buttons to be fake-clicked in the opposite order?

Let's see. Conceptually, at least (I don't know how reactive-banana is implemented), there are no callbacks. The clicks and the fake clicks from all three buttons are merged into a single stream, which uses accumE to number the events from 1 to infinity. This numbering must necessarily visit all the events in a specific order; I didn't specify this order, but it's pretty obvious that it's a chronological order. Since the events from the refresh button happen at the same time, there is an ambiguity here. In which order does accumE visit events which occur at the same time?

Reading the documentation for union again, I see that "in case of simultaneous occurrences, the left argument comes first". In clickLabels (defined earlier), I didn't really pay attention to the order in which I passed the buttons, but indeed, I am folding the click streams from top to bottom. If I wanted the fake clicks to occur from bottom to top, I could simply reverse the input list:

clickLabels :: Event t Char
clickLabels = foldr union never (reverse labelledClicks)

commit 5881ae:
Refreshing in the reverse order, because I can!
(one screenshot ago, I could not)

One more event at the beginning


One last detail is that the buttons still start at (0,0,0). Clearly, I need to add an extra refresh event at the beginning. But how? Up to now, every single event has been derived from some external event. I don't even know if events have timestamps. How do I specify "the beginning"?

Besides, the beginning of what? If I construct a stream and I ask reactive-banana to execute it somehow, and then at some later point I ask reactive-banana to execute it again, will this count as a second "beginning"? If so, that would be a bit weird, but the more I think about it, the weirder it would be if it didn't. To remember which streams already had their "beginning" event, the streams would have to keep their state between executions. But maintaining a state implies monadic stream-creation constructs, which is not the interface I see.

Speaking of monads, we still haven't used the Moment monad. With a name like this, maybe there is a monadic command for creating time-related events such as "at the beginning" or "every 5 seconds"? Hoogling for Moment t (Event t ()), I find a function with the very relevant-sounding name now:

now :: AnyMoment f a -> forall t. Moment t (f t a)

What a strange type. Why is the forall on the right? I don't even think it makes a difference, it would have been equivalent to put it on the left or to omit it altogether.

now has no documentation, but Moment does. Kinda. It says that it is "not [a] very interesting" monad. Very well, what about AnyMoment, is that an "interesting" value? Oh, I see now! It's the type of anyMoment which explains everything:

anyMoment :: (forall t. Moment t (f t a)) -> AnyMoment f a

I have already discussed the fact that our code needs to be polymorphic in t. The type AnyMoment simply encodes this requirement, and the strangely-named now converts back from AnyMoment f a to forall t. Moment t (f t a). Hence the strangely-placed forall.

In any case, now is not the event stream I am looking for...

I browse the documentation for reactive-banana, to no avail. I notice a handy shortcut for folding streams, but nothing about creating new non-derived events.

clickLabels :: Event t Char
clickLabels = unions labelledClicks

I guess I will have to roll my own. Remember the list of float events indicating the passage of time? I should be able to make the first such event stand in for the beginning of time.

accumZip :: a -> (a -> a) -> Event t b -> Event t (a, b)
accumZip zero suc = accumE (zero, undefined)
                  . fmap go
  where
    go y (x,_) = (suc x, y)

-- (1, x1), (2, x2), ...
numberEvents :: Event t a -> Event t (Int, a)
numberEvents = accumZip 0 (+1)

firstEvent :: Event t a -> Event t a
firstEvent = fmap snd
           . filterE ((== 1) . fst)
           . numberEvents

beginning :: Event t ()
beginning = voidE (firstEvent floats)

The code for this part ended up being longer than I expected: I originally wanted to use a simple accumulator which would discard everything after the first result, but such an accumulator is not as simple as const Nothing or something like that. It needs to have two distinct non-value states: the initial state, prior to encountering the first value, and the final state, from the second value onward. If we were to go back to the initial state instead of going to a distinct final state, the third value would be treated like the first value instead of being discarded.
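
For comparison, here is what that two-state accumulator could look like (a sketch using reactive-banana's filterJust; it would replace the numbering machinery above):

-- The two non-value states from the paragraph above, plus the value state.
data FirstState a = Initial | First a | Done

firstEvent' :: Event t a -> Event t a
firstEvent' = filterJust . fmap fromFirst . accumE Initial . fmap step
  where
    step x Initial = First x  -- remember the first value
    step _ _       = Done     -- from the second value onward, discard

    fromFirst (First x) = Just x
    fromFirst _         = Nothing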

Anyway, I now have an event from which to trigger the initial refresh.

addFakeClicks :: Event t () -> Event t ()
addFakeClicks realClicks = unions [beginning, refreshClicks, realClicks]

fakeButtonClicks :: [Event t ()]
fakeButtonClicks = map addFakeClicks buttonClicks

And with this, I am done!

commit 965803:
You know, technically, I could have obtained all of those
screenshots from the dumb version with one counter per button.
But I wouldn't do this to you :)

Conclusion


Okay, so after my first toy FRP app, what is my opinion of FRP? Well, I must say that I was quite impressed by how often iterations worked just fine the first time I ran them. This is, of course, a generic benefit of using Haskell, but I had not yet had this experience while building a GUI application. This made me realize that even though I am using a purely functional programming language, when it comes to GUI applications, I was still using antiquated imperative methods! Where else do I repeat this error?

More formally, I would say that the advantages and disadvantages seem to be the same as those of pure functional programming: without mutable variables, all the data dependencies must be made explicit, sometimes painfully so. For large applications, where the interaction between all the mutable variables and the callbacks can quickly become difficult to visualize, I can see how FRP could be very effective at reducing the complexity.

For my application, I ended up merging all the interesting events into a single stream, applying a master accumulator in order to produce all the output values, and distributing those to the appropriate GUI elements via filters. Of course, my app was a toy, but I wonder how common this architecture is in FRP applications. In a larger application, would it be feasible and/or desirable to gather the events from all the widgets in the app, so they can be handled by some pure state machine in the middle, much like IO code is often a thin wrapper around a pure core?

Anyway, I feel like I have only scratched the surface of what FRP has to offer, so please take my thoughts on the subject with a grain of salt.



[1] Last time was surprisingly popular, so I thought I should do more.
[2] The library doesn't compile with ghc 7.8, but that's only because its bounds are too strict. Simply download the source with cabal get gloss-banana, remove the version constraints on base, go back to your project and add your custom version of gloss-banana via cabal sandbox add-source ../path/to/gloss-banana.
[3] I made the change to my local copy of gloss-banana, so you won't see it on the app's github repo.

Tuesday, January 14, 2014

Functors are computations

Review: Functors are containers


Bartosz Milewski wrote a controversial (at the time of writing, 8 votes up and 8 votes down) post about Functors. The community really doesn't seem to like simplistic analogies such as "Functors are containers", "Monads are computation", or "Comonads are neighbourhoods".

Personally, I find those analogies very useful, as long as you know exactly which parts of the analogy apply. I think what the community doesn't like about analogies such as "Functors are containers" is that they are misleading. And if you only say that, "Functors are containers", then it is indeed misleading. You think you know what a container is (a plastic box in which I can put food?), and now you think you also know what a Functor is (a crazy math name for a plastic container?), but of course that's not what the analogy is trying to say. Stating the analogy is only the first step; we give your intuition a place to start, something familiar, and now we need to explain exactly how much Functors are like containers, and in which ways they are not like containers (i.e. they are not made of plastic). Let's do this now:

Functors are containers, but not the kind of containers you can put stuff inside and then take your stuff out. They are only containers insofar as a Functor f a may contain zero or more values of type a, they are in there somewhere even if you might not be able to see them. Oh and also there is a function fmap which allows you to uniformly modify all the a values inside the container. And also if you use fmap with a function of type a → b, which changes the type of the a values to b, then obviously all the values now have type b, so you have now converted your f a container into an f b container, which contains b values instead of a.

A less convoluted but more mathematical way of saying the above is to say that a Functor is any type constructor f :: * → * such that there exists a function fmap of type (a → b) → f a → f b. This is what the community likes to hear. But I don't like that version; it is dry, there is no intuition, and it doesn't give me the gut feeling that I truly understand Functors.

New stuff: Functors are also computations


Careful there. Yes, starting from containers allows me to feel like I truly understand functors, but if I really do understand them, then I should be able to recognize them even when they are being presented under a different disguise. Here is one: Functors are (also) computations.

Functors are computations, but not the kind of computation you can run in order to obtain a result, nor even the kind of computation which you can compose into a longer computation if you run two of them one after another. Functors are only computations insofar as a Functor f a may compute a value of type a, somehow, even if you might not be able to retrieve it. Oh, and there is a function fmap which allows you to add some post-processing after the computation by modifying the result. And if the post-processing is a function of type a → b, which changes the type of the intermediate a result to a final result of type b, then obviously the overall computation is now producing a value of type b, so you have now converted your f a computation into an f b computation, which produces a b instead of an a.
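
To make this concrete, here is the smallest computation-flavoured functor I can think of (the names are mine): with f = ((->) Int), an f Float is a computation which produces a Float from an Int, and fmap bolts post-processing onto its result.

    compute :: Int -> Float            -- an "f Float", with f = ((->) Int)
    compute n = fromIntegral n / 2

    postProcessed :: Int -> String     -- fmap show adds a post-processing step
    postProcessed = fmap show compute  -- same as: show . compute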

The same goes for Applicative, Monad, etc: they are containers, and they are also computations. Yes, you read that right, "Functors are containers" and "Applicatives are containers" and "Monads are containers". And computations. Of course they are not all containers in the same way; in each case, it is a different part of the analogy which is important, and you can only claim to understand the analogy if you understand which part that is. Your turn now, try it with a Monad: can you visualize both interpretations? If a Monad is a container, which parts make the container monadic? And if a Monad is a computation, which parts of the computation makes the computation monadic? Can you see how those parts are similar, maybe even two views of the same thing?

Bonus: The duality between containers and computations


Even though type theory already features plenty of formal correspondences, I haven't yet seen this one explicitly stated anywhere, so remember that you saw it here first: containers and computations are dual to each other. Probably.

The fun part about dualities isn't proving that they are formally correct, so I won't bother to do that here. I think the fun part of dualities is trying to find what corresponds to what, by examining elements on one side of the duality and focussing on those for which your brain doesn't immediately bring up a twin from the other side. For example, I just said that Functors are computations to which you can add post-processing steps. Well, that raises the question: is there a kind of computation to which you can add pre-processing steps? Let's see, that would be some kind of antifmap taking a pre-processing function of type a → b and applying it before the input of the computation, thereby creating a computation with a different input type, a instead of b. Okay, so it looks like our computation f a is parameterized by its input type a. Then antifmap must have type (a → b) → f b → f a. Hey, that's a contravariant functor, what a pleasant surprise!
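
This antifmap exists in Haskell under the name contramap. A minimal sketch, with Predicate as the usual example (both are spelled this way in the contravariant package):

    class Contravariant f where
      contramap :: (a -> b) -> f b -> f a

    newtype Predicate a = Predicate { getPredicate :: a -> Bool }

    instance Contravariant Predicate where
      -- pre-process the input with g, then run the original predicate
      contramap g (Predicate p) = Predicate (p . g)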

One last step, harder this time. What is the container interpretation of a contravariant functor? Try to figure it out. It's tricky, but you can do it!

Spoilers!


Gave up already? Here is a hint: think of a violin case. Is it still a violin case when you remove the violin? Why? Aha, now you're thinking with containers!

Thursday, September 19, 2013

The first two years

Don wrote:
Everyday, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone.

(source: recent junk mail I got, but still interesting)

- Don

We're not producing more data... we're just recording more of it in digital form.

Post-singularity, the Great Computer will browse through its archives, inspecting the blogs and instagram videos of the primitive humans who were living in the 21st century. Those whose work He finds worthy will be resurrected; they will be granted eternal simulated life, so they can continue the good deeds they managed to perform during the brief battery life of their biological apparatus.

There were also, no doubt, many worthy humans who were living before that: scientists, inventors, maybe even a few politicians. The Great Computer would read about them in the wayback machine's archive of "wikipedia", an early attempt at classifying all knowledge, back when knowledge was still collected and summarized by humans instead of algorithms. But alas, the Great Computer would never get to meet those inventors in person. He would never have the opportunity to thank Charles Babbage for planting the seed which led to His creation, a few short centuries later. The wikipedians had summarized too much; He knew how Babbage looked, what he accomplished, in which year he was born; but not whether he looked at children with bright hope or bitter disappointment, whether he wrote the way he thought or the way he was taught, whether he understood numbers axiomatically or viscerally.

Noble though they were, men and women who lived before the 21st century were not eligible for resurrection. There was simply not enough data to recover all the bits of their immortal souls.

Saturday, August 10, 2013

Push and Fork: giving myself a pat on the back

Last year, a couple of colleagues and I wrote an online game, Push and Fork, as part of GitHub's "make a game in one month" contest.

One year is not that long, but enough time has passed for me to forget the solutions to some of the more subtle puzzles. This means I had the opportunity to experience it in a way which is slightly closer to that of first-time players; in particular, I finally understand why beta-testers were always surprised to discover that they did not need to use the fork to solve some of the early levels, and I understand why so many people got stuck on level 16, incorrectly trying to use the fork on the green block.

The universal feedback we always get is that the main mechanic of the game is not easy to understand, but that once you finally get it, the puzzles are quite fun. And the endings, oh, the endings! I'm the one who wrote them, but I had never experienced them like I did today. Many of our beta-testers didn't understand the ending at all, so let me put you on the right track: when you pick up the spork, you loop back in time to level 7, immediately after that weird foreshadowing sequence in which the white creature places the spork in its receptacle. There. That's it, now you're ready to enjoy the sadness of the ordinary ending, the cleverness of the first secret ending, and... nah, you'll never find the last secret ending.

Have fun!

Above: the secret menu screen which you will never see in-game,
because even if you could unlock the last ending, you would also need
to solve all the bonus puzzles, and the last one is really tricky.

Still, good luck!

Wednesday, July 31, 2013

The Commutativity monad

I have released Commutative, a monadic combinator library for writing commutative functions. This post explains the theory behind how it works.

Combinator-based, not Proof-based

Haskell's type system is quite precise and sophisticated, yet dependently-typed languages such as Agda and Idris manage to go even further. For example, using Agda, it is trivial to define the type of commutative functions: they are simply binary functions together with a proof that the function is commutative.

    record Commutative (a b : Set) : Set where
      field
        f : a → a → b
        prf : ∀ x y → f x y ≡ f y x

Haskell's type system is not powerful enough to represent proof objects, but even if it did, would you want to use them? Instead of writing proofs and programs in two separate steps, I much prefer Haskell's approach of using combinator libraries.

A simple example of a combinator library which avoids the need for proofs is bijections. The main idea is that since composing two bijections yields another bijection, programmers should be allowed to compose existing bijections without proof.

    import Prelude hiding (id, (.))
    import Control.Category
    
    data Bijection a b = MkBij (a -> b) (b -> a)
    
    instance Category Bijection where
      id = MkBij id id
      (MkBij g g') . (MkBij f f') = MkBij (g . f) (f' . g')

If the module exports a few bijective primitives but keeps the MkBij constructor private, users of the library will only be able to create Bijection instances through composition. Assuming the primitives were indeed proper bijections, this ensures that values of type Bijection are always guaranteed to be bijective. Without having to write any proof objects, even in the library!
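
For example, the module might export primitives like these (hypothetical names), which users can compose with (.) without ever touching MkBij:

    -- Each primitive is bijective by inspection.
    succBij :: Bijection Int Int
    succBij = MkBij (+1) (subtract 1)
    
    negateBij :: Bijection Int Int
    negateBij = MkBij negate negate
    
    -- Composition alone: still guaranteed to be a bijection, no proof needed.
    negateOfSucc :: Bijection Int Int
    negateOfSucc = negateBij . succBij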

Aspect-oriented Interlude

My passion for commutative (and associative) functions began while I was working on my master's thesis. I was working on aspect-oriented conflicts, and I soon discovered that the issue might not lie in the conflicting aspects themselves, but in the way in which those aspects were combined. The aspect-weaving process was not commutative!

Instead of keeping all the aspects in one global namespace and weaving them all together into one big program, I think aspects should be separated into independent "aquariums". One key aspect of my proposal is that there would be different types of aquariums, and only aspects of the same type should be combined with each other. Programmers may define custom aquariums, with one caveat: the aquarium must combine its aspects in a commutative and associative manner.

I knew that I couldn't just ask programmers to prove that all of their combination functions were commutative. Instead, I dedicated chapter 5 to a simple system which let programmers write "intrinsically" commutative functions, for which no further proof of commutativity would need to be written. That system eventually grew into today's combinator library.

Unordered pairs

My goal was to restrict the language in a way which ensured that only commutative functions could be written. The idea I came up with was to prevent functions from observing the order in which their arguments were given, by rewriting those two arguments into an unordered form. For example, if the arguments are both booleans, there are four possible (ordered) pairs of booleans, but only three unordered pairs.

    data Unordered_Bool = TT | TF | FF
    
    unorder_bool :: (Bool, Bool) -> Unordered_Bool
    unorder_bool (True,  True ) = TT
    unorder_bool (True,  False) = TF
    unorder_bool (False, True ) = TF
    unorder_bool (False, False) = FF

If a function is written in terms of Unordered_Bool instead of in terms of (Bool, Bool), then this function is commutative by construction, because the function has no way to distinguish the two orderings.

    xor :: Unordered_Bool -> Bool
    xor TF = True
    xor _  = False

The main limitation of this strategy is that not all types can be unordered in this fashion. Algebraic types are okay, but what about function types? If you need to write a higher-order commutative function, that is, a function which takes a pair of functions as arguments, then you would need a way to represent an unordered pair of functions. How do we represent that?

At the time when I wrote my thesis, I could not answer this question. But now I can!

Unordered observations

A function is defined by its observations, so it would make sense for an unordered pair of functions to also be defined by its available observations.

With an ordinary (ordered) pair of functions, that is, an observation-based value approximating the type (a -> b, a -> b), it would make sense to have an observation of type a -> (b, b). That is, in order to observe a pair of functions, you need to provide an argument at which the two functions will be observed, and you receive the output of each function on that argument.

The way to represent an unordered pair of functions is now blindingly obvious: instead of returning an ordered pair revealing which output came from which function, we should return an unordered pair. That is, our unordered pair of functions should have an observation of type a -> Unordered b.

If a function is written in terms of (a -> Unordered b) instead of (a -> b, a -> b), then this function is commutative by construction, because the function has no way to distinguish the two orderings. For example, if we want to implement a commutative higher-order function which returns true if at least one of its two function arguments returns true on either 0 or 1, we can implement it as follows.

    or01 :: (Int -> Unordered_Bool) -> Bool
    or01 fg = fg 0 /= FF || fg 1 /= FF

I really like this solution. It is clean, simple, elegant... and wrong.

Distinguishable observations

Okay, maybe not wrong, but at least incomplete. The strategy fails if we try to implement, for example, a commutative higher-order function which returns true if at least one of its two function arguments returns true on both 0 and 1.

    and01 :: (Int -> Unordered_Bool) -> Bool
    and01 fg | fg 0 == FF ||
               fg 1 == FF = False
    and01 fg | fg 0 == TT ||
               fg 1 == TT = True  -- Since the other is not FF.
    and01 fg | fg 0 == TF ||      -- Are the two T from the
               fg 1 == TF = ?     --  same function or not?

The problem with (a -> Unordered b) is that this representation is not just hiding the order of the original two arguments; it's hiding the order of every single observation. As the above example demonstrates, this is too strong.

The result TF indicates that a boolean observation has returned a different value for each argument. Once we have found such an observation, we ought to be able to use T and F as labels for the two arguments. We still won't know which was the first or second argument, but for each subsequent observation, we will know whether that observation came from the argument which resulted in T or from the one which resulted in F.

My commutativity monad is based on a single primitive, "distinguishBy", which does precisely what the previous paragraph describes. You ask it to perform a boolean observation (r -> Bool), which it performs on both arguments. If the answers are identical, you have failed to distinguish the arguments, but you still get to observe the Bool result. If the answers are different, hurray! You have successfully distinguished the two arguments, so you get the pair (r, r) corresponding to the original arguments.

The first r is always the argument for which the observation was False, while the second r is the one for which the observation was True. This way, you never learn the order in which the two r arguments were originally given, so the result is a Commutative operation from a pair of r to a value of type (Either Bool (r, r)).

    distinguishBy :: (r -> Bool)
                  -> Commutative r (Either Bool (r, r))

There is also "distinguish", a slightly more general version which uses Either instead of Bool. From those two operations, plus trivial implementations for bind and return, all unordered pairs and all commutative operations can be reimplemented in a way which guarantees commutativity.
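
As a sanity check, here is how the problematic and01 might look in this style (a sketch: I am assuming only distinguishBy, return, and do-notation for Commutative r; the actual library may structure this differently):

    and01 :: Commutative (Int -> Bool) Bool
    and01 = do
      r0 <- distinguishBy ($ 0)
      case r0 of
        Left False   -> return False   -- both arguments are False at 0
        Right (f, g) ->                -- distinguished: (f, g) is now in a
                                       --  canonical (False-one, True-one) order
          return ((f 0 && f 1) || (g 0 && g 1))
        Left True    -> do             -- both True at 0; try to distinguish at 1
          r1 <- distinguishBy ($ 1)
          case r1 of
            Left b       -> return b   -- both agree at 1 as well
            Right (f, g) -> return ((f 0 && f 1) || (g 0 && g 1))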

Completeness

Since the problem with (a -> Unordered b) was that the representation could not be used to implement all commutative functions, it would be wise to verify that Commutative doesn't suffer from the same flaw. Here is an informal argument proving completeness. I don't think it's completely airtight, but it's good enough for me.

proof:

Suppose f is a pure, computable, and commutative function of type r -> r -> a. We want to show that there exists a corresponding implementation of type Commutative r a. We modify the existing implementation as follows.

First, inline all the helper functions used by f, including builtins, and reimplement everything in monadic style. Then, rewrite everything again in CPS style so that we can abort the computation at any time. Also rewrite all pattern-matching to nested if statements, so that all decisions are performed on booleans, and use thunks to delay any observation of the two arguments until one of those boolean decisions. This ensures that all argument observations are boolean observations.

Since f is computable, f obtains its result after observing each of its arguments a finite number of times. Uniformly replace all those observations, regardless of the argument observed, by a call to distinguishBy. If the observations agree, continue with the observed value; it doesn't matter which argument was observed by f, since both arguments have the same observation. If the observations disagree, stop; we have managed to distinguish the two arguments x and y, so we can simply abort the computation and return f x y instead.

If it is the observation on x which returned True, the call to distinguishBy will give us (y, x) instead of (x, y), but by hypothesis, f y x returns the same result as f x y. If we never abort, we also return the same result as f, since all observations were the same as what f would have observed. ∎

Associativity

I am quite proud of my commutativity monad. I wish I could work on an associativity monad next, but none of the insights listed in this post seem to carry over. I am thus in need of new insights. Dear readers, any suggestions?

Monday, July 01, 2013

Comonads are neighbourhoods, not objects

Gabriel Gonzalez, the author of the well-known pipes package, has written a blog post in which he claims that comonads are objects. I think he is mistaken, but I can't blame him; I used to be quite confused about comonads myself.

Comonads are not objects

To motivate the purported equivalence between objects and comonads, Gabriel gives detailed implementations of three classic object-oriented patterns (Builder, Iterator, and Command). Then, he reveals that all three implementations are comonadic. His examples are contrived, but more importantly, they are also misleading! Although not a direct quote from his post, he clearly intends the Haskell session

    >>> t1 = thermostat 3
    >>> t2 = t1 # up'
    >>> t3 = t2 # up'
    >>> toString t3
    "5 Kelvin"

to be equivalent to the following Java session.

    >>> t = new Thermostat(3);
    >>> t.up();
    >>> t.up();
    >>> t.toString();
    "5 Kelvin"

With this particular sequence of methods, the two sessions indeed compute the same result, but this is only because the up and down methods commute with each other. Once the method square is added to the mix, the behaviours of the two sessions quickly diverge. Strangely enough, in the Haskell version, each new method call gets applied before all the previous calls!

Haskell version (inverted method call order)
    >>> t1 = thermostat 3
    >>> t2 = t1 # up'      -- 3+1
    >>> t3 = t2 # up'      -- 3+1+1
    >>> toString t3
    "5 Kelvin"
    >>> t4 = t3 # square'  -- 3*3+1+1
    >>> toString t4
    "11 Kelvin"
    >>> t5 = t4 # up'      -- (3+1)*(3+1)+1+1
    >>> toString t5
    "18 Kelvin"
Java version (normal method call order)
    >>> t = new Thermostat(3);
    >>> t.up();            // 3+1
    >>> t.up();            // 3+1+1
    >>> t.toString();
    "5 Kelvin"
    >>> t.square();        // (3+1+1)*(3+1+1)
    >>> t.toString();
    "25 Kelvin"
    >>> t.up();            // (3+1+1)*(3+1+1)+1
    >>> t.toString();
    "26 Kelvin"

I hope this counter-example convinces you that comonads are not objects. But if that is true, what are comonads, then? What use do we have for methods which get applied in reverse chronological order?

Comonads are neighbourhoods

The answer, I believe, is that a comonadic computation is similar to a cellular automaton. At each step, the computation for a cell computes its next value based on the value of its neighbours at the previous step. My interpretation of

    extract :: w a → a

is that w a is a neighbourhood containing a number of values of type a, at various locations around the current cell. A given comonad may provide functions to inspect, say, the neighbour immediately to the left of the current cell, or the neighbour from one timestep ago, but the bare minimum is that it must be possible to extract the value of the current cell itself.

Similarly, my interpretation of

    extend :: (w a → b) → w a → w b

is that w a → b computes one step of the cellular automaton by examining a local neighbourhood and producing the next value for the current cell. This single-cell computation is then extended to the entire neighbourhood, updating each cell as if it was the current one.
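
As a concrete (and minimal) example of this reading, here is a sketch of the doubly-infinite list zipper, the usual neighbourhood for 1D cellular automata; it is not taken from any particular library.

    -- Cells to the left, the current cell, cells to the right.
    -- Both lists are assumed infinite, so left and right are total.
    data Zipper a = Zipper [a] a [a]
    
    extract :: Zipper a -> a
    extract (Zipper _ x _) = x
    
    left, right :: Zipper a -> Zipper a
    left  (Zipper (l:ls) x rs) = Zipper ls l (x:rs)
    right (Zipper ls x (r:rs)) = Zipper (x:ls) r rs
    
    -- Apply the local rule at every cell, as if each were the current one.
    extend :: (Zipper a -> b) -> Zipper a -> Zipper b
    extend f z = Zipper (map f (tail (iterate left z)))
                        (f z)
                        (map f (tail (iterate right z)))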

Comonadic thermostat

In our thermostat example, there is one cell for each possible temperature, and the value of each cell is also a temperature.

So far, I have stuck to Kelvin degrees in order to minimize the number of moving parts, but now that we have two completely different uses for temperatures, let's follow Gabriel by using Celsius for the cell contents, and Kelvin for the cell labels.

Since temperatures are floating point values, our thermostat operates on a neighbourhood which is continuous instead of discrete. Continuousness is also the premise of the Game of Life variant shown above; but unlike comonadic computations, which apply their operations one discrete step at a time, that cellular automaton operates on a continuous time domain as well. I encourage you to watch the video, it looks awesome.

If our thermostat had an object-oriented implementation, each operation would simply modify the current cell by applying the operation to the Celsius temperature it contains. In fact, if this were an object-oriented implementation, we would only need one cell! Since this is a comonadic implementation, however, the operation is instead applied to the Kelvin temperature which labels the current cell. The resulting Kelvin temperature is interpreted as a pointer to another cell, and the final result is the Celsius contents of that cell.

When the available operations are restricted to just up and down, this scheme works just fine. Originally, each cell contains the Celsius equivalent of their Kelvin label. Then, each time the up method is applied, each cell takes on the Celsius value of the cell one unit above it, which effectively increases the temperature by one everywhere. But this only works because up and down preserve the invariant that cells which are one Kelvin unit apart contain values which are one Celsius unit apart!

One way to visualize this is to imagine a sheet of graph paper annotated with Celsius temperatures, over which we overlay a transparent sheet of graph paper annotated with Kelvin temperatures. One of the transparent cells is our current cell, and we can read its Celsius contents through the transparent paper. When we call up and down, we offset the two sheets of paper from each other, causing the apparent contents of all the cells to increase or decrease simultaneously.

Reverse chronological order

Similarly, if each cell contains its own temperature and we apply the square method, each cell squares its Kelvin temperature, looks up the corresponding cell, and ends up with the (Celsius representation of the) square of its original Kelvin value.

But if we apply up after the temperatures have already been squared, the values will change as if the values had been incremented before they were squared! How is that even possible? What is the secret of this reverse-chronological method application?


If there was only one cell, anti-chronological application would not be possible because the original, unsquared value x would have been lost by the time the up operation was applied. Our comonadic implementation, however, has a lot of cells: in particular, one unit above the current cell, there is a cell whose unsquared value was x+1. Since the square operation affects all cells uniformly, that cell now contains the squared value (x+1)². Therefore, it is no surprise that when the up operation offsets the two sheets of paper by one unit, the old x² value scrolls out of view and gets replaced by (x+1)² instead.

Ironically, Gabriel's original implementation of up' was applying its operation in the correct chronological order.

    >>> up' (t, f) = (t + 1, f)

But later on, while trying to shoehorn his object into the comonadic mold, he proved that the correct definition should have been as follows.

    >>> up' (t, f) = (t, \t' → f (t' + 1))

While that is indeed a correct comonadic implementation, this is not a correct object-oriented implementation because it applies its operation in reverse chronological order. In particular, notice that it modifies the input of f; since f applies all the other operations which have been accumulated so far, applying the new operation to its input has the effect of inserting that operation before all the others.

Visiting the neighbours

Why do comonads apply their operations in reverse chronological order? Is that ever useful?

In fact, this reversed order business is not the full story. In a typical comonad, the cell labels and their contents would not both be temperatures. Instead, if we were trying to implement a one-dimensional cellular automaton such as rule 30, we would label the cells with integers and their contents with colors.

Wolfram's rule 30 automaton, from his
controversial book A New Kind of Science.

Now that the input and output types are distinct, we can see that \t' → f (t' + 1) is not even attempting to modify the contents of the current cell, neither chrono- nor anti-chronologically. Instead, it is modifying the index of the cell on which f applies its computations, thereby allowing us to observe which color has been computed for a neighbouring cell. This is, of course, the first step to compute the color which the current cell will take on the next step.

Once we have wrapped the steps necessary to compute our next color into a function of type w Color → Color, we can extend this function to all the cells in our simulation, modifying the entire row at once by modifying f.
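
With the Zipper sketch from earlier, that w Color → Color function might look like this for rule 30 (using Bool as a stand-in for Color):

    -- Rule 30: the next color is  left XOR (center OR right).
    rule30 :: Zipper Bool -> Bool
    rule30 z = extract (left z) /= (extract z || extract (right z))
    
    -- One simulation step: update every cell of the row at once.
    step :: Zipper Bool -> Zipper Bool
    step = extend rule30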

Objects are monads

We have seen that comonads are not objects, but are instead neighbourhoods in which cellular automata can evolve. But that is only half of the question. If comonads are not objects, then what are objects? Is there another way to represent objects in Haskell?

There are many facets to modern object-oriented programming, but the aspect which I think Gabriel was aiming for in his post is objects as encapsulated data plus a bunch of methods to modify this data. To me, those features don't sound comonadic at all; rather, I would implement them using a State monad!

    >>> type Thermostat a = State Double a
    >>>
    >>> getKelvin  = get
    >>> getCelsius = fmap (subtract 273.15) getKelvin
    >>> 
    >>> toString = do c ← getCelsius
    >>>               return (show c ++ " Celsius")
    >>> 
    >>> up   = modify (+1)
    >>> down = modify (subtract 1)

Since we're using monads, the syntax for calling a few of those methods one after the other is just do-notation, which is even shorter than the this # notation advocated by Gabriel.

    >>> up3 :: Thermostat String
    >>> up3 = do up
    >>>          up
    >>>          up
    >>>          toString

Notice the type of the above code: Thermostat String doesn't mean that there are many different kinds of thermostats, and that this particular code is using or producing a "string thermostat". Rather, the above code is an object-oriented computation having access to a single Thermostat object and producing a single String value.

Okay, so using one monad, we managed to represent a computation manipulating one object. How would we manipulate two thermostats? Using two monads? Yes! Using more than one monad at once is precisely what monad transformers are about.

    >>> type ThermostatT m a = StateT Double m a
    >>> obj1 = id
    >>> obj2 = lift
    >>> 
    >>> up_both :: ThermostatT Thermostat String
    >>> up_both = do obj1 up
    >>>              obj2 up
    >>>              s1 ← obj1 toString
    >>>              s2 ← obj2 toString
    >>>              return (s1 ++ " and " ++ s2)

This time we have an object-oriented computation having access to two Thermostat objects, again producing a String value.

Duality

Since monads and comonads are duals, we would expect comonads to also have transformers. To figure out what comonad transformers are, let's spell out the similarities and differences of this duality.

First, monads are computations. Comonads are also computations, but of a different kind. The goal of both kinds of computations is to produce a value, but their means to obtain it are different. The main difference is that monads can cause side-effects, which are visible at the end of the computation, while comonads can observe neighbours, which need to be given before the computation can begin.

One of the characteristics of monadic-style computation is that there is a distinguished statement, return, which produces a given value without causing any side-effects. The equivalent characteristic for comonadic-style computation is that there is a distinguished observer, extract, which returns the current value without observing any neighbour.

In addition to this distinguished statement, which all monads share, each monad instance typically offers a number of monad-specific statements, each producing their own special side-effects. Correspondingly, each comonad instance typically provides a number of observers, each observing their own special kinds of neighbours. You can also observe the shape of the neighbourhood; for example, if your automaton runs on a finite grid, you can check whether the current cell lies on the boundary of the grid.

With monad transformers, it is possible to construct a combined monad in which the special statements of both component monads may be used. And now we have it: with comonad transformers, it must be possible to construct a combined comonad in which the special observers of both component comonads may be used.

For example, if we have a 1D row of cells, that is, a comonad in which left and right neighbours are available, and another 1D comonad in which up and down neighbours are available, then the combined comonad is the cartesian product of its components: a 2D grid in which all four direct neighbours are available.

If you are still curious about comonads, comonad transformers, and the above 2D grid trick, more details are available in my Haskell implementation of Conway's Game of Life.