The factory and conveyor belt metaphor
In this tutorial, we shall present arrows from the perspective of stream processors, using a factory metaphor as support. Let's get our hands dirty right away.
You are a factory owner, and as before you own a set of processing machines. Processing machines are just a metaphor for functions; they accept some input and produce some output. Your goal is to combine these processing machines so that they can perform richer, and more complicated tasks. Monads allow you to combine these machines in a pipeline. Arrows allow you to combine them in more interesting ways. The result of this is that you can perform certain tasks in a less complicated and more efficient manner.
In a monadic factory, we took the approach of wrapping the outputs of our machines in containers. The arrow factory takes a completely different route: rather than wrapping the outputs in containers, we wrap the machines themselves. More specifically, in an arrow factory, we attach a pair of conveyor belts to each machine, one for the input and one for the output.
So given a function of type b -> c, we can construct an equivalent a arrow by attaching a b and c conveyor belt to the machine. The equivalent arrow is of type a b c, which we can pronounce as an arrow a from b to c.
Plethora of robots
We mentioned earlier that arrows give you more ways to combine machines together than monads did. Indeed, the arrow type class provides six distinct robots compared to the two you get with monads (>> and >>=).
The simplest robot is arr, with the type signature arr :: (b -> c) -> a b c. In other words, the arr robot takes a processing machine of type b -> c and adds conveyor belts to form an a arrow from b to c.
The next, and probably the most important, robot is (>>>). This is basically the arrow equivalent to the monadic bind robot (>>=). The arrow version of bind, (>>>), puts two arrows into a sequence. That is, it connects the output conveyor belt of the first arrow to the input conveyor belt of the second one.
What we get out of this is a new arrow. One consideration to make, though, is what input and output types our arrows may take. Since we're connecting the output conveyor belt of the first arrow to the input conveyor belt of the second, the second arrow must accept the same kind of input as what the first arrow outputs. If the first arrow is of type a b c, the second arrow must be of type a c d. Here is the same diagram as above, but with things on the conveyor belts to help you see the issue with types.
|What is the type of the combined arrow?|
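If you want to try these robots yourself, the standard Control.Arrow module provides them. Here is a minimal sketch using ordinary functions as the arrow type; the name pipeline and the sample input are our own:

```haskell
import Control.Arrow (arr, (>>>))

-- Two machines of type Int -> Int, lifted with arr and chained with (>>>).
-- With the function arrow, b -> c plays the role of a b c.
pipeline :: Int -> Int
pipeline = arr (+ 1) >>> arr (* 2)

main :: IO ()
main = print (pipeline 5)  -- first add one, then double: 12
```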
Up to now, our arrows can only do the same things that monads can. Here is where things get interesting! The arrows type class provides functions which allow arrows to work with pairs of input. As we will see later on, this leads us to be able to express parallel computation in a very succinct manner. The first of these functions, naturally enough, is first. If you are skimming this tutorial, it is probably a good idea to slow down at least in this section, because the first robot is one of the things that makes arrows truly useful.
Given an arrow f, the first robot attaches some conveyor belts and extra machinery to form a new, more complicated arrow. The machines that bookend the input arrow split the input pairs into their component parts, and put them back together. The idea behind this is that the first part of every pair is fed into f, whilst the second part is passed through on an empty conveyor belt. When everything is put back together, we have the same pairs that we fed in, except that the first part of every pair has been replaced by an equivalent output from f.
Now the question we need to ask ourselves is that of types. Say that the input tuples are of type (b,d) and the input arrow is of type a b c (that is, it is an arrow from b to c). What is the type of the output? Well, the arrow converts all bs into cs, so when everything is put back together, the type of the output must be (c,d).
|What is the type of the first robot?|
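A minimal sketch of first with the function arrow, using our own sample pair:

```haskell
import Control.Arrow (first)

-- first lifts (+ 1) to act on the first component of a pair,
-- leaving the second component on its empty conveyor belt.
main :: IO ()
main = print (first (+ 1) (5 :: Int, "untouched"))  -- (6,"untouched")
```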
If you understand the first robot, the second robot is a piece of cake. It does exactly the same thing, except that it feeds the second part of every input pair into the given arrow f instead of the first part.
What makes the second robot interesting is that it can be derived from the previous robots! Strictly speaking, the only robots you need for arrows are arr, (>>>) and first. The rest can be had "for free".
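As a sketch of how that derivation goes, second can be rebuilt from arr and first by swapping the pair before and after; second' and swap are our own names, chosen to avoid clashing with the library versions:

```haskell
import Control.Arrow (Arrow, arr, first, (>>>))

-- Swap the pair, run f on the (now first) component, then swap back.
second' :: Arrow a => a b c -> a (d, b) (d, c)
second' f = arr swap >>> first f >>> arr swap
  where
    swap (x, y) = (y, x)

main :: IO ()
main = print (second' (+ 1) ("untouched", 5 :: Int))  -- ("untouched",6)
```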
One of the selling points of arrows is that you can use them to express parallel computation. The (***) robot is just the right tool for the job. Given two arrows, f and g, (***) combines them into a new arrow using the same bookend-machines we saw in the previous two robots. Conceptually, this isn't very much different from the robots first and second. As before, our new arrow accepts pairs of inputs. It splits them up, sends them on to separate conveyor belts, and puts them back together. The only difference here is that, rather than having one arrow and one empty conveyor belt, we have two distinct arrows. But why not?
The final robot in the Arrow class is (&&&). It is very similar to the (***) robot, except that the resulting arrow accepts a single input and not a pair. Yet, the rest of the machine is exactly the same. How can we work with two arrows, when we only have one input to give them?
The answer is simple: we clone the input and feed a copy into each machine!
    addA f g = f &&& g >>> arr (\ (y, z) -> y + z)
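A quick sketch with function arrows shows the difference between the two robots: (***) wants a pair, while (&&&) clones a single input. The definition of addA above then works unchanged (we only pin down its type for the demonstration):

```haskell
import Control.Arrow (Arrow, arr, (&&&), (***), (>>>))

-- addA as defined above: run f and g on the same input, then add the results.
addA :: (Arrow a, Num c) => a b c -> a b c -> a b c
addA f g = f &&& g >>> arr (\(y, z) -> y + z)

main :: IO ()
main = do
  print (((+ 1) *** (* 2)) ((3, 4) :: (Int, Int)))  -- (***) takes a pair: (4,8)
  print (((+ 1) &&& (* 2)) (3 :: Int))              -- (&&&) clones one input: (4,6)
  print (addA (+ 1) (* 2) (3 :: Int))               -- 4 + 6 = 10
```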
Functions are arrows
Now that we have presented the 6 arrow robots, we would like to make sure that you have a more solid grasp of them by walking through a simple implementation of the Arrow class. As in the monadic world, there are many different types of arrows. What is the simplest one you can think of? Functions.
Put concretely, the type constructor for functions, (->), is an instance of Arrow:
    instance Arrow (->) where
        arr f = f
        f >>> g = g . f
        first f = \(x,y) -> (f x, y)
Now let's examine this in detail:
- arr: Converting a function into an arrow is trivial. In fact, the function already is an arrow.
- (>>>): we want to feed the output of the first function into the input of the second function. This is nothing more than function composition.
- first: this is a little more elaborate. Given a function f, we return a function which accepts a pair of inputs (x,y), runs f on x, and leaves y untouched, producing the pair (f x, y).
And that, strictly speaking, is all we need to have a complete arrow, but the arrow typeclass also allows you to make up your own definition of the other three robots, so let's have a go at that:
    first  f = \(x,y) -> (f x,   y) -- for comparison's sake
    second f = \(x,y) -> (  x, f y) -- like first
    f *** g  = \(x,y) -> (f x, g y) -- takes two arrows, and not just one
    f &&& g  = \x     -> (f x, g x) -- feed the same input into both functions
And that's it! Nothing could be simpler.
Note that this is not the official instance of functions as arrows. You should take a look at the Control.Arrow module in the standard libraries if you want the real deal.
The arrow notation
In the companion Arrow tutorial, we introduced the proc and -< notation. How does this tie in with all the arrow robots we just presented? Sadly, it's a little bit less straightforward than monad do-notation, but let's have a look.
- proc (arrow abstraction) is a kind of lambda, except that it constructs an arrow instead of a function.
- -< (arrow application) feeds the value of an expression into an arrow.
    -- using the arrow notation:
    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = proc x -> do
        y <- f -< x
        z <- g -< x
        returnA -< y + z

    -- the same arrow, desugared into the robots:
    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = arr (\ x -> (x, x)) >>>
               first f >>> arr (\ (y, x) -> (x, y)) >>>
               first g >>> arr (\ (z, y) -> y + z)

    -- or, more compactly, with (&&&):
    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = f &&& g >>> arr (\ (y, z) -> y + z)
It turns out that any monad can be made into an arrow. We'll go into that later on, but for now, FIXME: transition
At this point in the tutorial, you should have a strong enough grasp of the arrow machinery that we can start to meaningfully tackle the question of what arrows are good for.
Arrows were originally motivated by an efficient parser design found by Swierstra & Duponcheel.
To describe the benefits of their design, let's examine exactly how monadic parsers work.
If you want to parse a single word, you end up with several monadic parsers stacked end to end. Taking Parsec as an example, the parser string "word" can also be viewed as
    word = do char 'w'
              char 'o'
              char 'r'
              char 'd'
              return "word"
Each character is tried in order: if "worg" is the input, then the first three parsers will succeed, and the last one will fail, making the entire word parser fail.
If you want to parse one of two options, you create a new parser for each and they are tried in order. The first one must fail and then the next will be tried with the same input.
    ab = char 'a' <|> char 'b' <|> char 'c'
To parse "c" successfully, both 'a' and 'b' must have been tried.
    one = do char 'o'
             char 'n'
             char 'e'
             return "one"

    two = do char 't'
             char 'w'
             char 'o'
             return "two"

    three = do char 't'
               char 'h'
               char 'r'
               char 'e'
               char 'e'
               return "three"

    nums = one <|> two <|> three
With these three parsers, you can't know that the string "four" will fail the parser nums until the last parser has failed.
If one of the options can consume much of the input but will ultimately fail, you still must descend down the chain of parsers until the final parser fails. All of the input that can possibly be consumed by later parsers must be retained in memory in case one of them does consume it. That can lead to much more space usage than you would naively expect; this is often called a space leak.
The general pattern of monadic parsers is that each option must fail or one option must succeed.
So what's better?
Swierstra & Duponcheel (1996) noticed that a smarter parser could immediately fail upon seeing the very first character. For example, in the nums parser above, the choice of first letter parsers was limited to either the letter 'o' for "one" or the letter 't' for both "two" and "three". This smarter parser would also be able to garbage collect input sooner because it could look ahead to see if any other parsers might be able to consume the input, and drop input that could not be consumed. This new parser is a lot like the monadic parsers with the major difference that it exports static information. It's like a monad, but it also tells you what it can parse.
There's one major problem: this doesn't fit into the monadic interface. Monads are based around functions of type a -> m b; there's no way to attach static information. You have only one choice: throw in some input, and see if it passes or fails.
The monadic interface has been touted as a general purpose tool in the functional programming community, so finding that there was some particularly useful code that just couldn't fit into that interface was something of a setback. This is where arrows come in. John Hughes's Generalising monads to arrows proposed the arrows abstraction as a new, more flexible tool.
Static and dynamic parsers
Let us examine Swierstra & Duponcheel's parser in greater detail, from the perspective of arrows. The parser has two components: a fast, static parser which tells us if the input is worth trying to parse; and a slow, dynamic parser which does the actual parsing work.
    data Parser s a b = P (StaticParser s) (DynamicParser s a b)
    data StaticParser s = SP Bool [s]
    newtype DynamicParser s a b = DP ((a,[s]) -> (b,[s]))
The static parser consists of a flag, which tells us if the parser can accept the empty input, and a list of possible starting characters. For example, the static parser for a single character would be as follows:
    spCharA :: Char -> StaticParser Char
    spCharA c = SP False [c]
It does not accept the empty string (False), and the list of possible starting characters consists only of c.
The dynamic parser needs a little more dissecting: what we see is a function that goes from (a,[s]) to (b,[s]). It is useful to think in terms of sequencing two parsers: each parser consumes the result of the previous parser (a), along with the remaining bits of input stream ([s]); it does something with a to produce its own result b, consumes a bit of string and returns that. Ooof. So, as an example of this in action, consider a dynamic parser (Int,String) -> (Int,String), where the Int represents a count of the characters parsed so far. The table below shows what would happen if we sequence a few of them together and set them loose on the string "cake":
|parser stage||Int||remaining input|
|after first parser||1||ake|
|after second parser||2||ke|
|after third parser||3||e|
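The table rows can be reproduced with a few lines of code; step is our own name for one such counting dynamic parser, written as a bare function:

```haskell
-- One dynamic-parser step of type (Int,String) -> (Int,String):
-- it increments the count and consumes one character.
-- (Partial: the sketch assumes there is input left to consume.)
step :: (Int, String) -> (Int, String)
step (n, _:rest) = (n + 1, rest)

main :: IO ()
main = print (step (step (step (0, "cake"))))  -- (3,"e"), as in the table
```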
So the point here is that a dynamic parser has two jobs: it does something to the output of the previous parser (informally, a -> b), and it consumes a bit of the input string (informally, [s] -> [s]), hence the type DP ((a,[s]) -> (b,[s])). Now, in the case of a dynamic parser for a single character, the first job is trivial. We ignore the output of the previous parser. We return the character we have parsed. And we consume one character off the stream:
    dpCharA :: Char -> DynamicParser Char Char Char
    dpCharA c = DP (\(_,x:xs) -> (x,xs))
This might lead you to ask a few questions. For instance, what's the point of accepting the output of the previous parser if we're just going to ignore it? The best answer we can give right now is "wait and see". If you're comfortable with monads, consider the bind operator (>>=). While bind is immensely useful by itself, sometimes, when sequencing two monadic computations together, we like to ignore the output of the first computation by using the anonymous bind (>>). This is the same situation here. We've got an interesting little bit of power on our hands, but we're not going to use it quite yet.
The next question, then, is this: shouldn't the dynamic parser be making sure that the current character off the stream matches the character to be parsed? Shouldn't x == c be checked for? No. And in fact, this is part of the point: the work is not necessary because the check would already have been performed by the static parser.
Anyway, let us put this together. Here is our S+D style parser for a single character:
    charA :: Char -> Parser Char Char Char
    charA c = P (SP False [c]) (DP (\(_,x:xs) -> (x,xs)))
Arrow combinators (robots)
Up to this point, we have explored two somewhat independent trains of thought. On the one hand, we've taken a look at some arrow machinery, the combinators/robots from above, although we don't exactly know what it's for. On the other hand, we have introduced a type of parser using the Arrow class. We know that the goal is to avoid space leaks and that it somehow involves separating a fast static parser from its slow dynamic part, but we don't really understand how that ties in to all this arrow machinery. In this section, we will attempt to address both of these gaps in our knowledge and merge our twin trains of thought into one. We're going to implement the Arrow class for Parser s, and by doing so, give you a glimpse of what makes arrows useful. So let's get started:
    instance Arrow (Parser s) where
One of the simplest things we can do is to convert an arbitrary function into a parsing arrow. We're going to use "parse" in the loose sense of the term: our resulting arrow accepts the empty string, and only the empty string (its set of first characters is []). Its sole job is to take the output of the previous parsing arrow and do something with it. Otherwise, it does not consume any input.
    arr f = P (SP True []) (DP (\(b,s) -> (f b,s)))
The first combinator is relatively straightforward. Recall the conveyor belts from above. Given a parser, we want to produce a new parser that accepts a pair of inputs (b,d). The first part of the input, b, is what we actually want to parse. The second part is passed through completely untouched:
    first (P sp (DP p)) = P sp (DP (\((b,d),s) ->
        let (c, s') = p (b,s)
        in  ((c,d),s')))
On the other hand, the implementation of (>>>) requires a little more thought. We want to take two parsers, and return a combined parser incorporating the static and dynamic parsers of both arguments:
    (P (SP empty1 start1) (DP p1)) >>> (P (SP empty2 start2) (DP p2)) =
        P (SP (empty1 && empty2)
              (if not empty1 then start1 else start1 `union` start2))
          (DP (p2.p1))
Combining the dynamic parsers is easy enough; we just do function composition. Putting the static parsers together requires a little bit of thought. First of all, the combined parser can only accept the empty string if both parsers do. Fair enough; now how about the starting symbols? Well, the parsers are supposed to be in a sequence, so the starting symbols of the second parser shouldn't really matter. If life were simple, the starting symbols of the combined parser would only be start1. Alas, life is NOT simple, because parsers could very well accept the empty input. If the first parser accepts the empty input, then we have to account for this possibility by accepting the starting symbols from both the first and the second parsers.
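Putting the pieces together, here is a self-contained sketch: compP mirrors the (>>>) definition above (written as a plain function so the snippet stands alone), and runParser is a hypothetical runner we add only to demonstrate the static parser rejecting bad input before any dynamic work happens:

```haskell
import Data.List (union)

data Parser s a b = P (StaticParser s) (DynamicParser s a b)
data StaticParser s = SP Bool [s]
newtype DynamicParser s a b = DP ((a, [s]) -> (b, [s]))

charA :: Char -> Parser Char a Char
charA c = P (SP False [c]) (DP (\(_, x:xs) -> (x, xs)))

-- Sequencing, exactly as in the (>>>) definition above.
compP :: Eq s => Parser s a b -> Parser s b c -> Parser s a c
compP (P (SP empty1 start1) (DP p1)) (P (SP empty2 start2) (DP p2)) =
  P (SP (empty1 && empty2)
        (if not empty1 then start1 else start1 `union` start2))
    (DP (p2 . p1))

-- A hypothetical runner: consult the static parser first,
-- then run the dynamic parser only if the input can possibly match.
runParser :: Eq s => Parser s () b -> [s] -> Maybe (b, [s])
runParser (P (SP emptyOk starts) (DP p)) xs = case xs of
  []                      -> if emptyOk then Just (p ((), xs)) else Nothing
  (x : _) | x `elem` starts -> Just (p ((), xs))
          | otherwise       -> Nothing

main :: IO ()
main = do
  let ca = charA 'c' `compP` charA 'a'
  print (fmap fst (runParser ca "cake"))  -- Just 'a'
  print (fmap fst (runParser ca "babe"))  -- rejected statically: Nothing
```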
So what do arrows buy us in all this?
If you look back at our Parser type and blank out the static parser section, you might notice that this looks a lot like the arrow instance for functions.
    arr f = \(b, s) -> (f b, s)
    first p = \((b, d), s) -> let (c, s') = p (b, s) in ((c, d), s')
    p1 >>> p2 = p2 . p1
There's the odd s variable along for the ride, which makes the definitions look a little strange, but you can roughly see the outline of the conveyor belts and computing machines. Actually, what you see here is roughly the arrow instance for the State monad (let f :: b -> c, p :: b -> State s c, and let . actually be <=<).
That's fine, but we could have done that with bind in classic monadic style, and first would have just been an odd helper function that you could have easily pattern matched. But remember, our Parser type is not just the dynamic parser; it also contains the static parser.
    arr f = SP True []
    first sp = sp
    (SP empty1 start1) >>> (SP empty2 start2) =
        SP (empty1 && empty2)
           (if not empty1 then start1 else start1 `union` start2)
This is not at all a function; it's just pushing around some data types. But the arrow metaphor works for it too, and we wrap it up with the same names. And when we combine the two types, we get a two-for-one deal: the static parser data structure goes along for the ride with the dynamic parser. The Arrow interface lets us compose and manipulate the two parsers simultaneously and transparently, as a unit, which we can then run as a traditional, unified function.
Monads can be arrows too
- The real flexibility with arrows comes with the ones that aren't monads, otherwise it's just a clunkier syntax -- Philippa Cowderoy
It turns out that all monads can be made into arrows. Here's a central quote from the original arrows paper:
Just as we think of a monadic type m a as representing a 'computation delivering an a '; so we think of an arrow type a b c, (that is, the application of the parameterised type a to the two parameters b and c) as representing 'a computation with input of type b delivering a c'; arrows make the dependence on input explicit.
One way to look at arrows is the way the English language allows you to noun a verb, for example, "I had a chat with them" versus "I chatted with them." Arrows are much like that: they turn a function from a to b into a value, a first-class transformation from a to b.
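Concretely, Control.Arrow provides the Kleisli newtype for exactly this: it wraps a function of type a -> m b, making any monad's computations composable as arrows. A small sketch with the Maybe monad (safeRecip and halve are our own names):

```haskell
import Control.Arrow (Kleisli (..), (>>>))

-- Kleisli wraps a -> m b, turning any monad into an arrow.
safeRecip :: Kleisli Maybe Double Double
safeRecip = Kleisli (\x -> if x == 0 then Nothing else Just (1 / x))

halve :: Kleisli Maybe Double Double
halve = Kleisli (\x -> Just (x / 2))

main :: IO ()
main = do
  print (runKleisli (safeRecip >>> halve) 4)  -- Just 0.125
  print (runKleisli (safeRecip >>> halve) 0)  -- Nothing
```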
Arrows in practice
Arrows are a relatively new abstraction, but they have already found a number of uses in the Haskell world:
- Hughes' arrow-style parsers were first described in his 2000 paper, but a usable implementation wasn't available until May 2005. Einar Karttunen wrote an implementation called PArrows that approaches the features of the standard Haskell parser combinator library, Parsec.
- The Fudgets library for building graphical interfaces FIXME: complete this paragraph
- Yampa - FIXME: talk briefly about Yampa
- The Haskell XML Toolbox (HXT) uses arrows for processing XML. There is a Wiki page in the Haskell Wiki with a somewhat Gentle Introduction to HXT.
- Netwire (page) - an FRP library for game development. It has an arrow interface as well as an applicative one.
- Swierstra, Duponcheel. Deterministic, error correcting parser combinators. 
This module uses text from An Introduction to Arrows by Shae Erisson, originally written for The Monad.Reader 4