## Haskell Basics

Getting set up
Variables and functions
Truth values
Type basics
Lists and tuples
Type basics II
Next steps
Building vocabulary
Simple input and output

## Elementary Haskell

Recursion
List processing
Type declarations
Pattern matching
Control structures
More on functions
Higher order functions
Using GHCi effectively

## Intermediate Haskell

Modules
Standalone programs
Indentation
More on datatypes
Other data structures
Classes and types
The Functor class

## Monads

Maybe
List
do Notation
IO
State

## Advanced Haskell

Monoids
Applicative Functors
Arrow tutorial
Understanding arrows
Continuation passing style (CPS)
Zippers
Mutable objects
Concurrency

## Fun with Types

Polymorphism basics
Existentially quantified types
Phantom types
Datatype algebra
Type constructors & Kinds

## Wider Theory

Denotational semantics
Equational reasoning
Program derivation
Category theory
The Curry-Howard isomorphism
fix and recursion

## Haskell Performance

Introduction
Step by Step Examples
Graph reduction
Laziness
Time and space profiling
Strictness
Algorithm complexity
Data structures
Parallelism

## Libraries Reference

The Hierarchical Libraries
Lists
Arrays
Maybe
Maps
IO
Random Numbers

## General Practices

Debugging
Testing
Using the Foreign Function Interface (FFI)
Generic Programming : Scrap your boilerplate

## Specialised Tasks

Graphical user interfaces (GUI)
Databases
Web programming
Working with XML
Using Regular Expressions
Parsing Mathematical Expressions
Writing a Basic Type Checker

# Getting set up

This chapter will explore how to install the programs you'll need to start coding in Haskell.

Haskell is a programming language, i.e. a language in which humans can express how computers should behave. It's like writing a cooking recipe: you write the recipe and the computer executes it.

To write Haskell programs, you need a program called a Haskell compiler. A compiler is a program that takes code written in Haskell and translates it into machine code, a more primitive language that the computer understands. Using the above analogy, the compiler is the oven that bakes your batter (code) into a cookie (executable file), and it's difficult to get the recipe from an executable once it's compiled.

If you're just trying out Haskell, or are averse to downloading and installing the full compiler, you might like to try Hugs, the lightweight Haskell interpreter (it also happens to be portable). You might also like to play around with TryHaskell, an interpreter hosted online. Note that all instructions will be for GHC.

Note

UNIX users:

If you are a person who prefers to compile from source: This might be a bad idea with GHC, especially if it's the first time you install it. GHC is itself mostly written in Haskell, so trying to bootstrap it by hand from source is very tricky. Besides, the build takes a very long time and consumes a lot of disk space. If you are sure that you want to build GHC from source, see Building and Porting GHC at the GHC homepage.

## Very first steps

To take our very first steps, we will use the program called GHCi (the 'i' stands for 'interactive'). Depending on your operating system, perform the following steps:

• On Windows: Click Start, then Run, then type 'cmd' and hit Enter, then type ghci and hit Enter once more.
• On MacOS: Open the application "Terminal" found in the "Applications/Utilities" folder, type the letters ghci into the window that appears and hit the Enter key.
• On Linux: Open a terminal and run the ghci program.

You should get output that looks something like the following:

   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.6, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude>


The first bit is GHCi's logo. It then informs you it's loading the base package, so you'll have access to most of the built-in functions and modules that come with GHC. Finally, the Prelude> bit is known as the prompt. This is where you enter commands, and GHCi will respond with their results.

Prelude> 2 + 2
4
Prelude> 5 + 4 * 3
17
Prelude> 2 ^ 5
32


The operators are similar to what they are in other languages: + is addition, * is multiplication, and ^ is exponentiation (raising to a power, as in $a^b$). Note from the second example that Haskell follows the standard order of operations.

Now you know how to use Haskell as a calculator. A key idea of the Haskell language is that it will always be like a calculator, except that it will become really powerful when we calculate not only with numbers, but also with other objects like characters, lists, functions, trees and even other programs (if you aren't familiar with these yet, don't worry).

GHCi is a very powerful development environment. As we progress, we'll learn how we can load source files into GHCi, and evaluate different bits of them.

The next chapter will introduce some of the basic concepts of Haskell. Let's dive into that and have a look at our first Haskell functions.

# Variables and functions


(All the examples in this chapter can be typed into a Haskell source file and evaluated by loading that file into GHC or Hugs. Remember not to type the prompts at the beginning of input: if an example shows a prompt, it is meant to be entered directly in an environment like GHCi; if not, you should put it in a file and run it.)

## Variables

We've already seen how to use the GHCi program as a calculator. Of course, this is only practical for very short calculations. For longer calculations and for writing Haskell programs, we need to keep track of intermediate results.

Intermediate results can be stored in variables, to which we refer by their name. A variable contains a value, which is substituted for the variable name when you use a variable. For instance, consider the following calculation

ghci> 3.1416 * 5^2
78.53999999999999


This is the area of a circle with radius 5, according to the formula $A = \pi r^2$. It is very cumbersome to type in the digits of $\pi \approx 3.1416$, or even to remember them at all. In fact, an important aspect of programming, if not the whole point, is to delegate mindless repetition and rote memorization to a machine. In this case, Haskell has already defined a variable named pi that stores over a dozen digits of $\pi$ for us.

ghci> pi
3.141592653589793
ghci> pi * 5^2
78.53981633974483


Notice that the variable pi and its value, 3.141592653589793, can be used interchangeably in calculations.

Now, we want to define our own variables to help us in our calculations. This is done in a Haskell source file, which contains Haskell code.

Create a new file called Varfun.hs in your favourite text editor [1] (the file extension .hs stands for "Haskell") and type/paste in the following definition:

r = 5.0


Make sure that there are no spaces at the beginning of the line, because Haskell is a whitespace-sensitive language (more about indentation later).

Now, open GHCi, move to the directory (folder) where you saved your file with the :cd YourDirectoryHere command, and use the :load YourFileHere (or :l YourFileHere) command:

Prelude> :cd c:\myDirectory
Prelude> :load Varfun.hs
Compiling Main             ( Varfun.hs, interpreted )
Ok, modules loaded: Main.
*Main>


Loading a Haskell source file makes all of its definitions available at the GHCi prompt. Source files are generally organized into modules (named units of code), which help structure larger programs and indicate where execution should start (the Main module) when many files are used. Because your file did not declare a module, GHCi created a Main module for you.

If GHCi gives an error like Could not find module 'Varfun.hs', you probably are in the wrong directory.

Now you can use the newly defined variable r in your calculations.

*Main> r
5.0
*Main> pi * r^2
78.53981633974483


So, to calculate the area of a circle of radius 5, we simply define r = 5.0 and then type in the well-known formula $\pi r^2$ for the area of a circle. There is no need to write the numbers out every time; that's very convenient!

Since this was so much fun, let's add another definition: Change the contents of the source file to

r = 5.0
area = pi * r ^ 2


Save the file and type the :reload (or :r) command in GHCi to load the new contents (note that this is a continuation of the last session):

*Main> :reload
Compiling Main             ( Varfun.hs, interpreted )
*Main>


Now we have two variables r and area available

*Main> area
78.53981633974483
*Main> area / r
15.707963267948966


Note

It is also possible to define variables directly at the GHCi prompt, without a source file. Skipping the details, the syntax for doing so uses the let keyword (a word with a special meaning) and looks like:

Prelude> let area = pi * 5 ^ 2


Although we will occasionally do such definitions for expediency in the examples, it will quickly become obvious, as we move into slightly more complex tasks, that this practice is not really convenient. That is why we are emphasizing the use of source files from the very start.

Note

To experienced programmers: GHC can also be used as a compiler (that is, you could use GHC to convert your Haskell files into a stand-alone program that can be run without running the interpreter). How to do so will be explained much later.

Before we continue, it is good to understand that it is possible to include text in a program without having it treated as code. This is achieved by use of comments. In Haskell, a comment can be started with -- and continues until the end of the line:

x = 5     -- The variable x is 5.
y = 6     -- The variable y is 6.
-- z = 7


In this case, x and y are defined, but z is not. Comments can also go anywhere using the alternative syntax {- ... -}:

x = {- Do this just because you can. -} 5


Comments are generally used for explaining parts of a program that may be somewhat confusing to readers otherwise.

## Variables in imperative languages

If you are already familiar with imperative programming languages like C, you will notice that variables in Haskell are quite different from variables as you know them. We now explain why and how.

If you have no programming experience, you might like to skip this section and continue reading with Functions.

Unlike in imperative languages, variables in Haskell do not vary. Once defined, their value never changes; they are immutable. For instance, the following code does not work:

r = 5
r = 2


Variables in functional programming languages are more closely related to variables in mathematics than to changeable locations in a computer's memory. Just as in Haskell, you would never see a variable reassigned this way in a math classroom (at least not within the same problem). The compiler will give an error due to "multiple declarations of r". People more familiar with imperative programming, which involves explicitly telling the computer what to do, may be accustomed to reading this as first setting r = 5 and then changing it to r = 2, but in a functional programming language the implementation, not the programmer, is in charge of figuring out what to do with memory.

Here's another example of a major difference from imperative languages

r = r + 1


Instead of "incrementing the variable r", this is actually a recursive definition of r in terms of itself (we will explain recursion in detail later on; just remember that this is radically different from imperative languages).

Because you don't need to worry about changing values, variables can be defined in any order. For example, the following fragments of code do exactly the same thing:

y = x * 2
x = 3

and

x = 3
y = x * 2

We can write things in any order that we want; there is no notion of "x being declared before y" or the other way around. This is also why you can't declare something more than once; it would be ambiguous otherwise. Of course, using y will still require a value for x, but this is unimportant until you need a specific numeric value.

By now, you might be wondering how you can actually do anything at all in Haskell if variables don't change. But trust us; as we hope to show you in the rest of this book, you can write every program under the sun without ever changing a single variable! In fact, variables that don't change make life much easier because they make programs much more predictable. This is a key feature of purely functional programming, which calls for a very different approach and mindset than imperative programming.

## Functions

Now, suppose that we have multiple circles with different radii whose areas we want to calculate. For instance, to calculate the area of another circle with radius 3, we would have to include new variables r2 and area2[2] in our source file:

r  = 5
area  = pi*r^2
r2 = 3
area2 = pi*r2^2


Clearly, this is unsatisfactory because we are repeating the formula for the area of a circle verbatim. To eliminate this mindless repetition, we would prefer to write it down only once and then apply it to different radii. That's exactly what functions allow us to do.

A function takes an argument value (or parameter) and gives a result value that takes its place, like a variable does. (If you are already familiar with mathematical functions, Haskell functions are essentially the same.) Defining functions in Haskell is simple: it is like defining a variable, except that we also take note of the function argument on the left hand side. For instance, the following is the definition of a function area which depends on an argument which we name r:

area r = pi * r^2


The syntax here is important to note: the function name comes first (in our example, that's "area"), followed by a space and then the argument ("r" in the example). Following the = sign, the function definition is a formula that uses the argument in context with other already defined terms.

Now, we can plug in different values for the argument in a call to the function. Load this definition into GHCi and try the following calls:

*Main> area 5
78.53981633974483
*Main> area 3
28.274333882308138
*Main> area 17
907.9202768874502


Thus, we can call this function with different radii to calculate the area of the corresponding circles.

You likely know functions from mathematics already. Our function here is defined mathematically as

$A(r) = \pi \cdot r^2$

In mathematics, the parameter is enclosed between parentheses, as in $A(5) = 78.54$ or $A(3) = 28.27$. While the Haskell code will still work with parentheses, they are normally omitted. As Haskell is a functional language, we will call a lot of functions, and whenever possible we want to minimize extra symbols.

Parentheses are still used for grouping expressions (any code that gives a value) to be evaluated together. Note how the following expressions are parsed differently:

area (5 + 3)    -- area (5 + 3)
area 5 + 3      -- (area 5) + 3


This shows that function calls take precedence over operators like +, in the same way that multiplication is done before addition in mathematics.
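To see the difference concretely, here is a small sketch (the names grouped and ungrouped are ours, chosen for illustration) that binds each parse to a name, using the area function defined earlier:

```haskell
-- area as defined earlier in the chapter
area r = pi * r^2

grouped   = area (5 + 3)   -- area applied to 8
ungrouped = area 5 + 3     -- (area 5) + 3, because function application binds tighter
```

Evaluating grouped gives the area of a circle with radius 8, while ungrouped gives the area of a circle with radius 5, plus 3.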

### Evaluation

Let us try to understand what exactly happens when you enter an expression into GHCi. After you press the enter key, GHCi will evaluate the expression you have given. This means that it will replace each function with its definition and calculate the results until a single value is left. For example, the evaluation of area 5 proceeds as follows:

   area 5
=>    { replace the left-hand side  area r = ...  by the right-hand side  ... = pi * r^2 }
   pi * 5^2
=>    { replace  pi  by its numerical value }
   3.141592653589793 * 5^2
=>    { apply exponentiation (^) }
   3.141592653589793 * 25
=>    { apply multiplication (*) }
   78.53981633974483


As this shows, to apply or call a function means to replace the left-hand side of its definition by its right-hand side. As a last step, GHCi prints the final result on the screen.

Here are some more functions:

double x    = 2 * x
quadruple x = double (double x)
square x    = x * x
half x      = x / 2

Exercises
• Explain how GHCi evaluates quadruple 5.
• Define a function that subtracts 12 from half its argument.

### Multiple parameters

Of course, functions can also have more than one argument. For example, here is a function for calculating the area of a rectangle given its length and its width:

areaRect l w = l * w

*Main> areaRect 5 10
50


Another example that calculates the area of a right triangle $\left(A = \frac{bh}{2}\right)$:

areaRightTriangle b h = (b * h) / 2

*Main> areaRightTriangle 3 9
13.5


As you can see, multiple arguments are separated by spaces. That's also why you sometimes have to use parentheses to group expressions. For instance, to quadruple a value x, you can't write

quadruple x = double double x


because that would mean to apply a function named double to the two arguments double and x: functions can be arguments to other functions (and you will see why later). Instead, you have to put parentheses around the argument:

quadruple x = double (double x)


Arguments are always passed in the order given. For example:

subtract x y = x - y

*Main> subtract 10 5
5
*Main> subtract 5 10
-5


Here, subtract 10 5 evaluates to 10 - 5, but subtract 5 10 evaluates to 5 - 10 because the order changes.

Exercises
• Write a function to calculate the volume of a box.
• Approximately how many stones are the famous pyramids at Giza made up of? Use GHCi for your calculations.

### Remark on combining functions

It goes without saying that you can use functions that you have already defined to define new functions, just like you can use the predefined functions like addition (+) or multiplication (*) (operators are defined as functions in Haskell). For example, to calculate the area of a square, we can reuse our function that calculates the area of a rectangle

areaRect l w = l * w
areaSquare s = areaRect s s

*Main> areaSquare 5
25


After all, a square is just a rectangle with equal sides.

This principle may seem innocent enough, but it is really powerful, in particular when we start to calculate with other objects instead of numbers.

Exercises
• Write a function to calculate the volume of a cylinder. The volume of a cylinder is the area of the base, which is a circle (you already programmed this function in this chapter, so reuse it) multiplied by the height.

## Local definitions

### where clauses

When defining a function, it is not uncommon to define intermediate results that are local to the function. For instance, consider Heron's formula $A = \sqrt{s(s-a)(s-b)(s-c)}$ for calculating the area of a triangle with sides a, b, and c:

heron a b c = sqrt (s*(s-a)*(s-b)*(s-c))
    where
    s = (a+b+c) / 2


The variable s is half the perimeter of the triangle and it would be tedious to write it out four times in the argument of the square root function sqrt.

It would be wrong to just write the definitions in sequence

heron a b c = sqrt (s*(s-a)*(s-b)*(s-c))  -- s is not defined here
s = (a+b+c) / 2                           -- a, b, and c are not defined here


because the variables a, b, c are only available in the right-hand side of the function heron, but the definition of s as written here is not part of the right-hand side of heron. To make it part of the right-hand side, we have to use the where keyword.

Note that both the where and the local definitions are indented by 4 spaces, to distinguish them from subsequent definitions. Here is another example that shows a mix of local and top-level definitions:

areaTriangleTrig  a b c = c * height / 2   -- use trigonometry
    where
    cosa   = (b^2 + c^2 - a^2) / (2*b*c)
    sina   = sqrt (1 - cosa^2)
    height = b*sina
areaTriangleHeron a b c = result           -- use Heron's formula
    where
    result = sqrt (s*(s-a)*(s-b)*(s-c))
    s      = (a+b+c)/2


### Scope

If you look closely at the previous example, you'll notice that we have used the variable names a, b, c twice, once for each of the area functions. How does that work?

Fortunately, the following fragment of code does not contain any unpleasant surprises:

Prelude> let r = 0
Prelude> let area r = pi * r ^ 2
Prelude> area 5
78.53981633974483


An "unpleasant surprise" here would have been getting 0 for the area because of the let r = 0 definition getting in the way. That does not happen because when you defined r the second time you are talking about a different r. This is something that happens in real life as well. How many people do you know that have the name John? What's interesting about people named John is that most of the time, you can talk about "John" to your friends, and depending on the context, your friends will know which John you are referring to. Programming has something similar to context, called scope.

We won't explain the technicalities behind scope (at least not now), just know that the value of a parameter is strictly what you pass in when you call the function, regardless of what the variable was called in the function's definition.
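As a small sketch of this idea in a source file (the variable names here are ours, purely for illustration):

```haskell
r = 0                 -- a top-level variable named r

area r = pi * r ^ 2   -- this r is area's parameter, a different variable

-- Inside area, r always refers to whatever argument was passed in;
-- the top-level r = 0 is simply out of the picture there.
```

Calling area 5 still yields 78.53981633974483; the top-level r = 0 does not interfere.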

## Summary

1. Variables store values. In fact, they store any arbitrary Haskell expressions.
2. Variables do not change.
3. Functions help you write reusable code.
4. Functions can accept more than one parameter.

We also learned that comments allow non-code text to be stored in a source file.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but after that, any string consisting of letters, numbers, underscores (_) or ticks (') is allowed.


# Truth values


## Equality and other comparisons

So far we have seen how to use the equals sign to define variables and functions in Haskell. Writing

r = 5


in a source file will cause occurrences of r to be replaced by 5 in all places where it makes sense to do so according to the scope of the definition. Similarly,

f x = x + 3


causes occurrences of f followed by a number (which is taken as f's argument) to be replaced by that number plus three.

In Mathematics, however, the equals sign is also used in a subtly different and equally important way. For instance, consider this simple problem:

Example: Solve the following equation:

$x+3=5$

When we look at a problem like this one, our immediate concern is not the ability to represent the value $5$ as $x+3$, or vice-versa. Instead, we read the $x+3=5$ equation as a proposition, which says that some number $x$ gives 5 as result when added to 3. Solving the equation means finding which, if any, values of $x$ make that proposition true. In this case, using elementary algebra we can convert the equation into $x=5-3$ and finally to $x=2$, which is the solution we were looking for. The fact that it makes the equation true can be verified by replacing $x$ with 2 in the original equation, leading us to $2+3=5$, which is of course true.

The ability to compare values to see if they are equal turns out to be extremely useful in programming. Haskell allows us to write such tests in a very natural way that looks just like an equation. The main difference is that, since the equals sign is already used for defining things, we use a double equals sign, ==. To see it at work, you can start GHCi and enter the proposition we wrote above like this:

Prelude> 2 + 3 == 5
True


GHCi returns "True", lending further confirmation that 2 + 3 is equal to 5. As 2 is the only value that satisfies the equation, we would expect to obtain different results with other numbers.

Prelude> 7 + 3 == 5
False


Nice and coherent. Another thing to point out is that nothing stops us from using our own functions in these tests. Let us try it with the function f we mentioned at the start of the module:

Prelude> let f x = x + 3
Prelude> f 2 == 5
True


Just as expected, since f 2 is just 2 + 3.

In addition to tests for equality, we can just as easily compare two numerical values to see which one is larger. Haskell provides a number of tests including: < (less than), > (greater than), <= (less than or equal to) and >= (greater than or equal to), which work just like == (equal to). For a simple application, we could use < alongside the area function from the previous module to see whether a circle of a certain radius would have an area smaller than some value.

Prelude> let area r = pi * r^2
Prelude> area 5 < 50
False


## Boolean values

At this point, GHCi might look like some kind of oracle (or not) which can tell you if propositions are true or false. That's all fine and dandy, but how could that help us to write programs? And what is actually going on when GHCi "answers" such "questions"?

To understand that, we will start from a different but related question. If we enter an arithmetical expression in GHCi the expression gets evaluated, and the resulting numerical value is displayed on the screen:

Prelude> 2 + 2
4


If we replace the arithmetical expression with an equality comparison, something similar seems to happen:

Prelude> 2 == 2
True


But what is that "True" that gets displayed? It certainly does not look like a number. We can think of it as something that tells us about the veracity of the proposition 2 == 2. From that point of view, it makes sense to regard it as a value – except that instead of representing some kind of count, quantity, etc. it stands for the truth of a proposition. Such values are called truth values, or boolean values[1]. Naturally, there are only two possible boolean values – True and False.

### An introduction to types

When we say True and False are values, we are not just making an analogy. Boolean values have the same status as numerical values in Haskell, and indeed you can manipulate them just as well. One trivial example would be equality tests on truth values:

Prelude> True == True
True
Prelude> True == False
False


True is indeed equal to True, and True is not equal to False. Now, quickly: can you answer whether 2 is equal to True?

Prelude> 2 == True

<interactive>:1:0:
    No instance for (Num Bool)
      arising from the literal `2' at <interactive>:1:0
    Possible fix: add an instance declaration for (Num Bool)
    In the first argument of `(==)', namely `2'
    In the expression: 2 == True
    In the definition of `it': it = 2 == True


The correct answer is you can't, because the question just does not make sense. It is impossible to compare a number with something that is not a number, or a boolean with something that is not a boolean. Haskell incorporates that notion, and the ugly error message we got is, in essence, stating exactly that. Ignoring all of the obfuscating clutter (which we will get to understand eventually) what the message tells us is that, since there was a number (Num) on the left side of the ==, some kind of number was expected on the right side. But a boolean value (Bool) is not a number, and so the equality test exploded into flames.

The general concept, therefore, is that values have types, and these types define what we can or cannot do with the values. In this case, for instance, True is a value of type Bool, just like False (as for the 2, while there is a well-defined concept of number in Haskell the situation is slightly more complicated, so we will defer the explanation for a little while). Types are a very powerful tool because they provide a way to regulate the behaviour of values with rules which make sense, making it easier to write programs that work correctly. We will come back to the topic of types many times as they are very important to Haskell, starting with the very next module of this book.

## Infix operators

What we have seen so far leads us to the conclusion that an equality test like 2 == 2 is an expression just like 2 + 2, and that it also evaluates to a value in pretty much the same way. That fact is actually given a passing mention on the ugly error message we got on the previous example:

In the expression: 2 == True


Therefore, when we type 2 == 2 in the prompt and GHCi "answers" True it is just evaluating an expression. But there is a deeper truth involved in this process. A hint is provided by the very same error message:

In the first argument of `(==)', namely `2'


GHCi called 2 the first argument of (==). In the previous module we used the term argument to describe the values we feed a function with so that it evaluates to a result. It turns out that == is just a function, which takes two arguments, namely the left side and the right side of the equality test. The only special thing about it is the syntax: Haskell allows two-argument functions with names composed only of non-alphanumeric characters to be used as infix operators, that is, placed between their arguments. The only caveat is that if you wish to use such a function in the "standard" way (writing the function name before the arguments, as a prefix operator) the function name must be enclosed in parentheses. So the following expressions are completely equivalent:

Prelude> 4 + 9 == 13
True
Prelude> (==) (4 + 9) 13
True


Writing the expression in this alternative style further drives the point that (==) is a function with two arguments just like areaRect in the previous module was. What's more, the same considerations apply to the other relational operators we mentioned (<, >, <=, >=) and to the arithmetical operators (+, *, etc.) – all of them are just functions. This generality is an illustration of one of the strengths of Haskell – there are few "special cases", and that helps to keep things simple. In general, we could say that all tangible things in Haskell are either values, variables or functions.[2]
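For instance, the familiar arithmetical operators can be used as prefix functions in exactly the same way. A small sketch, with names of our own choosing:

```haskell
sumPrefix  = (+) 4 9          -- same as 4 + 9
prodPrefix = (*) 3 7          -- same as 3 * 7
eqPrefix   = (==) (4 + 9) 13  -- same as 4 + 9 == 13
```

Here sumPrefix is 13, prodPrefix is 21 and eqPrefix is True, just as with the infix spellings.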

## Boolean operations

One nice and useful way of seeing both truth values and infix operators in action is through the boolean operations, which allow us to manipulate truth values as in logic propositions. Haskell provides three basic functions for that purpose:

• (&&) performs the and operation. Given two boolean values, it evaluates to True if both the first and the second are True, and to False otherwise.
Prelude> (3 < 8) && (False == False)
True
Prelude> (&&) (6 <= 5) (1 == 1)
False

• (||) performs the or operation. Given two boolean values, it evaluates to True if either the first or the second is True (or if both are), and to False otherwise.
Prelude> (2 + 2 == 5) || (2 > 0)
True
Prelude> (||) (18 == 17) (9 >= 11)
False

• not performs the negation of a boolean value; that is, it converts True to False and vice-versa.
Prelude> not (5 * 2 == 10)
False


One relational operator we haven't mentioned so far in our discussions about comparison of values is the not equal to operator. It is also provided by Haskell as the (/=) function, but if we had to implement it ourselves, a very natural way of doing so would be:

x /= y = not (x == y)


Note that it is perfectly legal syntax to write the operators infix, even when defining them. Another detail to note is that completely new operators can be created out of ASCII symbols (basically, those that are found on the keyboard).
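As a sketch of that last point, here is a completely made-up operator, <+>, implementing exclusive or in terms of the boolean functions we have just seen:

```haskell
-- A hypothetical "exclusive or" operator: True when exactly one side is True.
(<+>) :: Bool -> Bool -> Bool
x <+> y = (x || y) && not (x && y)
```

With this definition in scope, True <+> False evaluates to True, while True <+> True evaluates to False.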

## Guards

Earlier on in this module we posed two questions about the operations involving truth values: what is actually going on when we use them, and how they can help us in the task of writing programs. While we now have a sound initial answer to the first question, the second one could well look a bit nebulous at this point, as we have done little more than test one-line expressions. We will tackle this issue by introducing a feature that relies on boolean values and operations and allows us to write more interesting and useful functions: guards.

To show how guards work, we are going to implement the absolute value function. The absolute value of a number is the number with its sign discarded[3]; so if the number is negative (that is, smaller than zero) the sign is inverted; otherwise it remains unchanged. We could write the definition as:

$|x| = \begin{cases} x, & \mbox{if } x \ge 0 \\ -x, & \mbox{if } x < 0. \end{cases}$

The key feature of the definition is that the actual expression to be used for calculating $|x|$ depends on a set of propositions made about $x$. If $x \ge 0$ we use the first expression, but if $x < 0$ we use the second one instead. If we are going to implement the absolute value function in Haskell we need a way to express this decision process. That is exactly what guards help us to do. Using them, the implementation could look like this:[4]

Example: The abs function.

abs x
    | x < 0     = 0 - x
    | otherwise = x


Remarkably, the above code is almost as readable as the corresponding mathematical definition. In order to see how the guard syntax fits with the rest of the Haskell constructs, let us dissect the components of the definition:

• We start just like in a normal function definition, providing a name for the function, abs, and saying it will take a single parameter, which we will name x.
• Instead of just following with the = and the right-hand side of the definition, we entered a line break, and, following it, the two alternatives, placed in separate lines.[5] These alternatives are the guards proper. An important observation is that the whitespace is not there just for aesthetic reasons, but it is necessary for the code to be parsed correctly.
• Each of the guards begins with a pipe character, |. After the pipe, we put an expression which evaluates to a boolean (also called a boolean condition or a predicate), which is followed by the rest of the definition – the equals sign and the right-hand side which should be used if the predicate evaluates to True.
• The otherwise deserves some additional explanation. If none of the preceding predicates evaluate to True, the otherwise guard will be deployed by default. In this case, if x is not smaller than zero, it must be greater than or equal to zero, so the final predicate could have just as easily been x >= 0; otherwise is used here for the sake of convenience and readability.
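Guards are not limited to two alternatives. As a sketch (the function name and result strings here are our own invention, and the type signature can be ignored for now — signatures are covered in the next chapter), here is a function with three guards:

```haskell
-- signOf is a hypothetical example with several guards.
-- The predicates are tried from top to bottom;
-- otherwise catches whatever is left.
signOf :: Int -> String
signOf x
    | x < 0     = "negative"
    | x == 0    = "zero"
    | otherwise = "positive"
```

For instance, signOf (-3) evaluates to "negative" and signOf 0 to "zero".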

Note

There is no syntactical magic behind otherwise. It is defined alongside the default variables and functions of Haskell as simply

otherwise = True


This definition makes for a catch-all guard, since evaluation of the guard predicates is sequential and so the always-true otherwise predicate will only be reached if none of the others evaluates to True (that is, assuming you place it as the last guard!). In general it is a good idea to always provide an otherwise guard because, if none of the predicates is true for some input, a rather ugly runtime error will be produced.

Note

You might be wondering why we wrote 0 - x and not simply -x to denote the sign inversion. Truth is, we could have written the first guard as

    | x < 0    = -x


and it would have worked just as well. The only issue is that this way of expressing sign inversion is actually one of the few "special cases" in Haskell, in that this - is not a function that takes one argument and evaluates to 0 - x, but just a syntactical abbreviation. While very handy, this shortcut occasionally conflicts with the usage of (-) as an actual function (the subtraction operator), which is a potential source of annoyance (for one of several possible issues, try writing three minus minus four without using any parentheses for grouping). In any case, the only reason we wrote 0 - x explicitly on the example was so that we could have an opportunity to make this point clear in this brief digression.

### where and Guards

where clauses are particularly handy when used with guards. For instance, consider this function, which computes the number of (real) solutions for a quadratic equation, $ax^2 + bx + c = 0$:

numOfSolutions a b c
    | disc > 0  = 2
    | disc == 0 = 1
    | otherwise = 0
    where
    disc = b^2 - 4*a*c


The where definition is within the scope of all of the guards, sparing us from repeating the expression for disc.
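For reference, here is the same function as a self-contained sketch with a type signature added (the Double/Int signature is our assumption — type signatures are the subject of the next chapters):

```haskell
-- Number of real solutions of a*x^2 + b*x + c = 0,
-- decided by the sign of the discriminant b^2 - 4*a*c.
numOfSolutions :: Double -> Double -> Double -> Int
numOfSolutions a b c
    | disc > 0  = 2
    | disc == 0 = 1
    | otherwise = 0
    where
    disc = b^2 - 4*a*c
```

For instance, numOfSolutions 1 0 (-1) evaluates to 2, since the discriminant is 4.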

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.

# Type basics


Types in programming are a way of grouping similar values into categories. In Haskell, the type system is a powerful way of ensuring there are fewer mistakes in your code.

## Introduction

Programming deals with different sorts of entities. For example, consider adding two numbers together:

$2 + 3$

What are 2 and 3? We can quite simply describe them as numbers. And what about the plus sign in the middle? That's certainly not a number, but it stands for an operation which we can do with two numbers – namely, addition.

Similarly, consider a program that asks you for your name and then greets you with a "Hello" message. Neither your name nor the word Hello are numbers. What are they then? We might refer to all words and sentences and so forth as text. It's normal in programming to use a slightly more esoteric word: String, which is short for "string of characters".

In Haskell, the rule is that all type names have to begin with a capital letter. We shall adhere to this convention henceforth.

Databases illustrate clearly the concept of types. For example, say we had a table in a database to store details about a person's contacts; a kind of personal telephone book. The contents might look like this:

| First Name | Last Name | Telephone number | Address |
|------------|-----------|------------------|---------|
| Sherlock | Holmes | 743756 | 221B Baker Street London |
| Bob | Jones | 655523 | 99 Long Road Street Villestown |

The fields in each entry contain values. Sherlock is a value as is 99 Long Road Street Villestown as well as 655523. As we've said, types are a way of categorizing data, so let us see how we could classify the values in this example. The first three fields seem straightforward enough. "First Name" and "Last Name" contain text, so we say that the values are of type String, while "Telephone Number" is clearly a number.

At first glance one may be tempted to classify address as a String. However, the semantics behind an innocent address are quite complex. There are a whole lot of human conventions that dictate how we interpret it. For example, if the beginning of the address text contains a number it is likely the number of the house. If not, then it's probably the name of the house – except if it starts with "PO Box", in which case it's just a postal box address and doesn't indicate where the person lives at all.

Clearly, there's more going on here than just text, as each part of the address has its own meaning. In principle there is nothing wrong with saying addresses are Strings, but when we describe something as a String, all that we are saying is that it is a sequence of letters, numbers, etc. Recognizing something as a specialized type, say, Address, is far more meaningful. If we know something is an Address, we instantly know much more about the piece of data – for instance, that we can interpret it using the "human conventions" that give meaning to addresses.

In retrospect, we might also apply this rationale to the telephone numbers. It could be a good idea to speak in terms of a TelephoneNumber type. Then, if we were to come across some arbitrary sequence of digits which happened to be of type TelephoneNumber we would have access to a lot more information than if it were just a Number – for instance, we could start looking for things such as area and country codes on the initial digits.

Another reason not to consider the telephone numbers as just Numbers is that doing arithmetics with them makes no sense. What is the meaning and expected effect of, say, adding 1 to a TelephoneNumber? It would not allow calling anyone by phone. That's a good reason for using a more specialized type than Number. Also, each digit comprising a telephone number is important; it's not acceptable to lose some of them by rounding it or even by omitting leading zeroes.

### Why types are useful

So far, it seems that all we have done is describe and categorize things, and it may not be obvious why any of this matters for writing actual programs. Starting with this module, we will explore how Haskell uses types to the programmer's benefit, allowing us to incorporate the semantics behind, say, an address or a telephone number seamlessly in the code.

## Using the interactive :type command

The best way to explore how types work in Haskell is from GHCi. The type of any expression can be checked with the immensely useful :type (or :t) command. Let us test it on the boolean values from the previous module:

Example: Exploring the types of boolean values in GHCi

Prelude> :type True
True :: Bool
Prelude> :type False
False :: Bool
Prelude> :t (3 < 5)
(3 < 5) :: Bool


Usage of :type is straightforward: enter the command into the prompt followed by whatever you want to find the type of. In the third example we use the shorthand :t, which we will favour from now on. GHCi will then print the type of the expression. The symbol ::, which will appear in a couple of other places, can be read as simply "is of type", and indicates a type signature.

:type reveals that truth values in Haskell are of type Bool, as illustrated above for the two possible values, True and False, as well as for a sample expression that will evaluate to one of them. It is worth noting at this point that boolean values are not just for value comparisons. Bool captures in a very simple way the semantics of a yes/no answer, and so it can be useful to represent any information of that kind – say, whether a name was found in a spreadsheet, or whether a user has toggled an on/off option.

### Characters and strings

Now let us try :t on something new. Literal characters are entered by enclosing them with single quotation marks. For instance, this is the single letter H:

Example: Using the :type command in GHCi on a literal character

Prelude> :t 'H'
'H' :: Char


Literal character values, then, have type Char (short for "character"). Single quotation marks, however, only work for individual characters. If we need to enter actual text – that is, a string of characters – we use double quotation marks instead:

Example: Using the :t command in GHCi on a literal string

Prelude> :t "Hello World"
"Hello World" :: [Char]


Why did we get Char again? The difference is in the square brackets. [Char] means a number of characters chained together, forming a list. That is what text strings are in Haskell – lists of characters.[6]

Exercises
1. Try using :type on the literal value "H" (notice the double quotes). What happens? Why?
2. Try using :type on the literal value 'Hello World' (notice the single quotes). What happens? Why?

A nice thing to be aware of is that Haskell allows for type synonyms, which work pretty much like synonyms in human languages (words that mean the same thing – say, 'fast' and 'quick'). In Haskell, type synonyms are alternative names for types. For instance, String is defined as a synonym of [Char], and so we can freely substitute one with the other. Therefore, to say:

"Hello World" :: String


is also perfectly valid, and in many cases a lot more readable. From here on we'll mostly refer to text values as String, rather than [Char].
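We can also introduce synonyms of our own with a type declaration (type declarations are covered properly later in the book). The Name synonym below is hypothetical, made up purely for illustration, but it mirrors exactly how String relates to [Char]:

```haskell
-- A hypothetical synonym: Name is just another name for String.
type Name = String

-- greet accepts a Name -- that is, any String.
greet :: Name -> String
greet name = "Hello, " ++ name
```

Since Name and String are interchangeable, greet "World" is perfectly valid and evaluates to "Hello, World".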

## Functional types

So far, we have seen how values (strings, booleans, characters, etc.) have types and how these types help us to categorize and describe them. Now, the big twist, and what makes Haskell's type system truly powerful: not only values but also functions have types[7]. Let's look at some examples to see how that works.

### Example: not

Consider not, that negates boolean values (changing True to False and vice-versa). To figure out the type of a function we consider two things: the type of values it takes as its input and the type of value it returns. In this example, things are easy. not takes a Bool (the Bool to be negated), and returns a Bool (the negated Bool). The notation for writing that down is:

Example: Type signature for not

not :: Bool -> Bool


You can read this as "not is a function from things of type Bool to things of type Bool".

Using :t on a function will work just as expected:

Prelude> :t not
not :: Bool -> Bool


A function's type describes the types of the argument(s) it takes and the type of the result it gives back.

### Example: chr and ord

Text presents a problem to computers. Once everything is reduced to its lowest level, all a computer knows how to deal with are 1s and 0s: computers work in binary. As working with binary numbers isn't at all convenient, humans have come up with ways of making computers store text. Every character is first converted to a number, then that number is converted to binary and stored. Hence, a piece of text, which is just a sequence of characters, can be encoded into binary. Normally, we're only interested in how to encode characters into their numerical representations, because the computer generally takes care of the conversion to binary numbers without our intervention.

The easiest way of converting characters to numbers is simply to write all the possible characters down, then number them. For example, we might decide that 'a' corresponds to 1, then 'b' to 2, and so on. This is exactly what a thing called the ASCII standard is: 128 of the most commonly-used characters, numbered. Of course, it would be a bore to sit down and look up a character in a big lookup table every time we wanted to encode it, so we've got two functions that can do it for us, chr (pronounced 'char') and ord[8]:

Example: Type signatures for chr and ord

chr :: Int  -> Char
ord :: Char -> Int


We already know what Char means. The new type on the signatures above, Int, amounts to integer numbers, and is one of quite a few different types of numbers.[9] The type signature of chr tells us that it takes an argument of type Int, an integer number, and evaluates to a result of type Char. The converse is the case with ord: It takes things of type Char and returns things of type Int. With the info from the type signatures, it becomes immediately clear which of the functions encodes a character into a numeric code (ord) and which does the decoding back to a character (chr).

To make things more concrete, here are a few examples of function calls to chr and ord. Notice that the two functions aren't available by default; so before trying them in GHCi you need to use the :module Data.Char (or :m Data.Char) command to load the Data.Char module, where they are defined.

Example: Function calls to chr and ord

Prelude> :m Data.Char
Prelude Data.Char> chr 97
'a'
Prelude Data.Char> chr 98
'b'
Prelude Data.Char> ord 'c'
99
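As a small sketch of what these conversions enable (the function below is our own, not from any library), we can shift a character forward in the ASCII table by going through its numeric code:

```haskell
import Data.Char (chr, ord)

-- shiftChar is a hypothetical helper: convert a character
-- to its code, add n, and convert back.
-- No wrap-around is attempted past 'z'.
shiftChar :: Int -> Char -> Char
shiftChar n c = chr (ord c + n)
```

For example, shiftChar 1 'a' evaluates to 'b', since the code of 'a' is 97 and chr 98 is 'b'.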


### Functions with more than one argument

The style of type signatures we have been using works fine enough for functions of one argument. But what would be the type of a function like this one?

Example: A function with more than one argument

xor p q = (p || q) && not (p && q)


(xor is the exclusive-or function, which evaluates to True if either one or the other argument is True, but not both; and False otherwise.)

The general technique for forming the type of a function that accepts more than one argument is simply to write down all the types of the arguments in a row, in order (so in this case p first then q), then link them all with ->. Finally, add the type of the result to the end of the row and stick a final -> in just before it.[10] In this example, we have:

1. Write down the types of the arguments. In this case, the use of (||) and (&&) gives away that p and q have to be of type Bool:
Bool                   Bool
^^ p is a Bool         ^^ q is a Bool as well

2. Fill in the gaps with ->:
Bool -> Bool

3. Add in the result type and a final ->. In our case, we're just doing some basic boolean operations so the result remains a Bool.
Bool -> Bool -> Bool
^^ We're returning a Bool
^^ This is the extra -> that got added in


The final signature, then, is:

Example: The signature of xor

xor :: Bool -> Bool -> Bool


### Real-World Example: openWindow

A library is a collection of common code used by many programs.

As you'll learn in the Haskell in Practice section of the course, one popular group of Haskell libraries are the GUI (Graphical User Interface) ones. These provide functions for dealing with all the parts of Windows, Linux, or Mac OS you're familiar with: opening and closing application windows, moving the mouse around, etc. One of the functions from one of these libraries is called openWindow, and you can use it to open a new window in your application. For example, say you're writing a word processor, and the user has clicked on the 'Options' button. You need to open a new window which contains all the options that they can change. Let's look at the type signature for this function[11]:

Example: openWindow

openWindow :: WindowTitle -> WindowSize -> Window


Don't panic! Here are a few more types you haven't come across yet. But don't worry, they're quite simple. All three of the types there, WindowTitle, WindowSize and Window are defined by the GUI library that provides openWindow. As we saw when constructing the types above, because there are two arrows, the first two types are the types of the parameters, and the last is the type of the result. WindowTitle holds the title of the window (which typically appears in a title bar at the very top of the window), and WindowSize specifies how big the window should be. The function then returns a value of type Window which represents the actual window.

One key point illustrated by this example, as well as by the chr/ord one, is that even if you have never seen a function or don't know how it actually works, a type signature can immediately give you a good general idea of what the function is supposed to do. For that reason, a very useful habit to acquire is testing every new function you meet with :t. If you start doing so right now you'll not only learn about the standard library Haskell functions quite a bit quicker but also develop a useful kind of intuition. Curiosity pays off. :)

Exercises

Finding types for functions is a basic Haskell skill that you should become very familiar with. What are the types of the following functions?

1. The negate function, which takes an Int and returns that Int with its sign swapped. For example, negate 4 = -4, and negate (-2) = 2
2. The (||) function, pronounced 'or', that takes two Bools and returns a third Bool which is True if either of the arguments were, and False otherwise.
3. A monthLength function which takes a Bool which is True if we are considering a leap year and False otherwise, and an Int which is the number of a month; and returns another Int which is the number of days in that month.

For any functions hereafter involving numbers, you can just pretend the numbers are Ints.

1. f x y = not x && y
2. g x = (2*x - 1)^2

## Type signatures in code

Now we've explored the basic theory behind types as well as how they apply to Haskell. The key way in which type signatures are used is for annotating functions in source files. Let us see what that looks like for the xor function from an earlier example:

Example: A function with its signature

xor :: Bool -> Bool -> Bool
xor p q = (p || q) && not (p && q)


That is all we have to do, really. Signatures are placed just before the corresponding functions, for maximum clarity.

The signatures we add in this way serve a dual role. They inform the type of the functions both to human readers of the code and to the compiler/interpreter.

### Type inference

We just said that type signatures tell the interpreter (or compiler) what the function type is. However, up to now you have written perfectly good Haskell code without any signatures, and it was accepted by GHC/GHCi. That shows that in general it is not mandatory to add type signatures. But that doesn't mean that if you don't add them Haskell simply forgets about types altogether! Instead, when you don't tell Haskell the types of your functions and variables it figures them out through a process called type inference. In essence, the compiler performs inference by starting with the types of things it knows and then working out the types of the rest of the values. Let's see how that works with a general example.

Example: Simple type inference

-- We're deliberately not providing a type signature for this function
isL c = c == 'l'


isL is a function that takes an argument c and returns the result of evaluating c == 'l'. If we don't provide a type signature the compiler, in principle, does not know the type of c, nor the type of the result. In the expression c == 'l', however, it does know that 'l' is a Char. Since c and 'l' are being compared with equality with (==) and both arguments of (==) must have the same type[12], it follows that c must be a Char. Finally, since isL c is the result of (==) it must be a Bool. And thus we have a signature for the function:

Example: isL with a type

isL :: Char -> Bool
isL c = c == 'l'


And, indeed, if you leave out the type signature the Haskell compiler will discover it through this process. You can verify that by using :t on isL with or without a signature.

So, if type signatures are optional in most cases[13] why should we care so much about them? Here are a few reasons:

• Documentation: type signatures make your code easier to read. With most functions, the name of the function along with the type of the function is sufficient to guess what the function does. Of course, you should always comment your code properly too, but having the types clearly stated helps a lot, too.
• Debugging: if you annotate a function with a type signature and then make a typo in the body of the function which changes the type of a variable the compiler will tell you, at compile-time, that your function is wrong. Leaving off the type signature could have the effect of allowing your function to compile, and the compiler would assign it an erroneous type. You wouldn't know until you ran your program that it was wrong.

To understand better how signatures can help documentation, let us have a glance at a somewhat more realistic example. The piece of code quoted below is a tiny module (modules are the typical way of preparing a library), and this way of organizing code is not too different from what you might find, say, when reading source code for the libraries bundled with GHC.

Note

Do not go crazy trying to understand how the functions here actually work; that is beside the point as we still have not covered any of the features being used. Just keep reading the text and play along.

Example: Module with type signatures

module StringManip where

import Data.Char

uppercase, lowercase :: String -> String
uppercase = map toUpper
lowercase = map toLower

capitalize :: String -> String
capitalize x =
    let capWord []     = []
        capWord (x:xs) = toUpper x : xs
    in unwords (map capWord (words x))


This tiny library provides three string manipulation functions. uppercase converts a string to upper case, lowercase to lower case, and capitalize capitalizes the first letter of every word. Each of these functions takes a String as argument and evaluates to another String. What is relevant to our discussion here is that, even if we do not understand how these functions work, looking at the type signatures allows us to immediately know the types of the arguments and return values. That information, when paired with sensible function names, can make it a lot easier to figure out how we can use the functions.

Note that when functions have the same type we have the option of writing just one signature for all of them, by separating their names with commas, as it was done with uppercase and lowercase.

### Types prevent errors

The role of types in preventing errors is central to typed languages. When passing expressions around you have to make sure the types match up like they did here. If they don't, you'll get type errors when you try to compile; your program won't pass the typecheck. This is really how types help you to keep your programs bug-free. To take a very trivial example:

Example: A non-typechecking program

"hello" + " world"


Having that line as part of your program will make it fail to compile, because you can't add two strings together! In all likelihood the intention was to use the similar-looking string concatenation operator, which joins two strings together into a single one:

Example: Our erroneous program, fixed

"hello" ++ " world"


An easy typo to make, but because you use Haskell, it was caught when you tried to compile. You didn't have to wait until you ran the program for the bug to become apparent.

This was only a simple example. However, the idea of types being a system to catch mistakes works on a much larger scale too. In general, when you make a change to your program, you'll change the type of one of the elements. If this change isn't something that you intended, or has unforeseen consequences, then it will show up immediately. A lot of Haskell programmers remark that once they have fixed all the type errors in their programs, and their programs compile, that they tend to "just work": run the first time with only minor problems. Run-time errors, where your program goes wrong when you run it rather than when you compile it, are much rarer in Haskell than in other languages. This is a huge advantage of having a strong type system like Haskell does.



# Lists and tuples

Lists and tuples are the two most fundamental ways of manipulating several values together, by grouping them into a single value.


## Lists

Functions are one of the two major building blocks of any Haskell program. The other is the list. So let's switch over to the interpreter and build lists:

Prelude> let numbers = [1,2,3,4]
Prelude> let truths  = [True, False, False]
Prelude> let strings = ["here", "are", "some", "strings"]


The square brackets delimit the list, and individual elements are separated by commas. The only important restriction is that all elements in a list must be of the same type. Trying to define a list with mixed-type elements results in a typical type error:

Prelude> let mixed = [True, "bonjour"]

<interactive>:1:19:
    Couldn't match `Bool' against `[Char]'
        Expected type: Bool
        Inferred type: [Char]
    In the list element: "bonjour"
    In the definition of `mixed': mixed = [True, "bonjour"]


### Building lists

In addition to specifying the whole list at once using square brackets and commas, you can build them up piece by piece using the (:) operator. This process is often referred to as consing[14].

Example: Consing something on to a list

Prelude> let numbers = [1,2,3,4]
Prelude> numbers
[1,2,3,4]
Prelude> 0:numbers
[0,1,2,3,4]


When you cons something on to a list (something:someList), what you get back is another list. Therefore you can keep on consing for as long as you wish. It is important to note that the cons operator is right-associative, i.e. it evaluates from right to left. Another (more general) way to think of it is that it takes the first value to its left and the whole expression to its right.

Example: Consing lots of things to a list

Prelude> 1:0:numbers
[1,0,1,2,3,4]
Prelude> 2:1:0:numbers
[2,1,0,1,2,3,4]
Prelude> 5:4:3:2:1:0:numbers
[5,4,3,2,1,0,1,2,3,4]


In fact, this is how lists are actually built: by consing all elements to the empty list, []. The commas-and-brackets notation is just syntactic sugar, a more pleasant way to write code. So [1,2,3,4,5] is exactly equivalent to 1:2:3:4:5:[].
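We can check this equivalence directly (the names below are ours, chosen for illustration):

```haskell
-- The bracket notation and the fully desugared cons chain
-- build exactly the same list.
sugared :: [Int]
sugared = [1,2,3,4,5]

desugared :: [Int]
desugared = 1:2:3:4:5:[]
```

Comparing the two with sugared == desugared evaluates to True.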

You will, however, want to watch out for a potential pitfall in list construction. Whereas something like True:False:[] is perfectly good Haskell, True:False is not:

Example: Whoops!

Prelude> True:False

<interactive>:1:5:
    Couldn't match `[Bool]' against `Bool'
        Expected type: [Bool]
        Inferred type: Bool
    In the second argument of `(:)', namely `False'
    In the definition of `it': it = True : False


True:False produces a familiar-looking type error message, which tells us that the cons operator (:) (which is really just a function) expected a list as its second argument but we gave it another Bool instead. (:) only knows how to stick things onto lists.[15]

So, when using cons, remember:

• The elements of the list must have the same type.
• You can only cons (:) something onto a list, not the other way around (you cannot cons a list onto an element). So, the final item on the right must be a list, and the items on the left must be independent elements, not lists.
Exercises
1. Would the following piece of Haskell work: 3:[True,False]? Why or why not?
2. Write a function cons8 that takes a list and conses 8 (at the beginning) on to it. Test it out on the following lists by doing:
1. cons8 []
2. cons8 [1,2,3]
3. cons8 [True,False]
4. let foo = cons8 [1,2,3]
5. cons8 foo
3. Adapt the above function in a way that 8 is at the end of the list
4. Write a function that takes two arguments, a list and a thing, and conses the thing onto the list. You should start out with:
let myCons list thing =

### Strings are just lists

As we briefly mentioned in the Type Basics module, strings in Haskell are just lists of characters. That means values of type String can be manipulated just like any other list. For instance, instead of entering strings directly as a sequence of characters enclosed in double quotation marks, they may also be constructed through a sequence of Char values, either linked with (:) and terminated by an empty list or using the commas-and-brackets notation.

Prelude> "hey" == ['h','e','y']
True
Prelude> "hey" == 'h':'e':'y':[]
True


Using double-quoted strings is just more syntactic sugar.

### Lists within lists

Lists can contain anything, just as long as they are all of the same type. Because lists are things too, lists can contain other lists! Try the following in the interpreter:

Example: Lists can contain lists

Prelude> let listOfLists = [[1,2],[3,4],[5,6]]
Prelude> listOfLists
[[1,2],[3,4],[5,6]]


Lists of lists can be pretty tricky sometimes, because a list of things does not have the same type as a thing all by itself; the type Int, for example, is different from [Int]. Let's sort through these implications with a few exercises:

Exercises
1. Which of these are valid Haskell and which are not? Rewrite in cons notation.
1. [1,2,3,[]]
2. [1,[2,3],4]
3. [[1,2,3],[]]
2. Which of these are valid Haskell, and which are not? Rewrite in comma and bracket notation.
1. []:[[1,2,3],[4,5,6]]
2. []:[]
3. []:[]:[]
4. [1]:[]:[]
5. ["hi"]:[1]:[]
3. Can Haskell have lists of lists of lists? Why or why not?
4. Why is the following list invalid in Haskell?
1. [[1,2],3,[4,5]]

Lists of lists can be useful because they allow one to express some kinds of complicated, structured data (two-dimensional matrices, for example). They are also one of the places where the Haskell type system truly shines. Human programmers, or at least this wikibook co-author, get confused all the time when working with lists of lists, and having restrictions of types often helps in wading through the potential mess.
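To make the type distinctions concrete, here is a small sketch (the names are ours, for illustration). Int, [Int] and [[Int]] are three different types, and only matching ones can be consed together:

```haskell
-- Three distinct types: a number, a list of numbers,
-- and a list of lists of numbers.
element :: Int
element = 1

row :: [Int]
row = [1, 2, 3]

matrix :: [[Int]]
matrix = [[1, 2, 3], [4, 5, 6]]
```

Consing row onto matrix (row : matrix) is fine, since [Int] is the element type of [[Int]]; trying element : matrix, however, would be a type error.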

## Tuples

### A different notion of many

Tuples are another way of storing multiple values in a single value. There are two key differences between tuples and lists:

• They have a fixed number of elements (immutable), and so you can't cons to a tuple. Therefore, it makes sense to use tuples when you know in advance how many values are to be stored. For example, we might want a type for storing 2D coordinates of a point. We know exactly how many values we need for each point (two – the x and y coordinates), so tuples are applicable.
• The elements of a tuple do not need to be all of the same type. For instance, in a phonebook application we might want to handle the entries by crunching three values into one: the name, phone number, and the address of each person. In such a case, the three values won't have the same type, so lists wouldn't help, but tuples would.

Tuples are created within parentheses with elements delimited by commas. Let's look at some sample tuples:

Example: Some tuples

(True, 1)
("Hello world", False)
(4, 5, "Six", True, 'b')


The first example is a tuple containing two elements: the first one is True, and the second is 1. The next example again has two elements: the first is "Hello world", and the second is False. The third example is a tuple consisting of five elements: the first is 4 (a number), the second is 5 (another number), the third is "Six" (a string), the fourth is True (a boolean value), and the fifth is 'b' (a character).

A quick note on nomenclature: In general you use n-tuple to denote a tuple of size n. 2-tuples (that is, tuples with 2 elements) are normally called pairs and 3-tuples triples. Tuples of greater sizes aren't actually all that common, but if you were to logically extend the naming system, you'd have quadruples, quintuples and so on, hence the general term tuple.

Exercises
1. Write down the 3-tuple whose first element is 4, second element is "hello" and third element is True.
2. Which of the following are valid tuples?
1. (4, 4)
2. (4, "hello")
3. (True, "Blah", "foo")
3. Lists can be built by consing new elements onto them: you cons a number onto a list of numbers, and get back a list of numbers. It turns out that there is no such way to build up tuples.
1. Why do you think that is?
2. Say for the sake of argument, that there was such a function. What would you get if you "consed" something on a tuple?

Tuples are also handy when you want to return more than one value from a function. In many languages, returning two or more things at once often requires wrapping them up in a single-purpose data structure, maybe one that only gets used in that function. In Haskell, you have the very convenient alternative of just returning them as a tuple.
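Prelude itself uses this idiom: divMod, for instance, returns the quotient and remainder of an integer division together as a single pair. A function of our own might look like the following sketch (smallAndLarge is a name we made up for the example):

```haskell
-- Return both the smaller and the larger of two numbers at once,
-- packed together as a pair.
smallAndLarge :: Int -> Int -> (Int, Int)
smallAndLarge x y
  | x <= y    = (x, y)
  | otherwise = (y, x)
```

Without tuples, getting both results out of one call would require some dedicated data structure; with them, the pair is all we need.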

### Tuples within tuples (and other combinations)

The reasoning we applied to lists within lists carries over to tuples. Tuples are things too, so you can store tuples within tuples (up to any arbitrary level of complexity). Likewise, you could also have lists of tuples, tuples of lists, and all sorts of other combinations along the same lines.

Example: Nesting tuples and lists

((2,3), True)
((2,3), [2,3])
[(1,2), (3,4), (5,6)]


There is one bit of trickiness to watch out for, however. The type of a tuple is defined not only by its size, but by the types of objects it contains. For example, the tuples ("Hello",32) and (47,"World") are fundamentally different. One is of type (String,Int), whereas the other is (Int,String). This has implications for building up lists of tuples. We could very well have lists like [("a",1),("b",9),("c",9)], but having a list like [("a",1),(2,"b"),(9,"c")] is right out. Can you spot the difference?

Exercises
1. Which of these are valid Haskell, and why?
• 1:(2,3)
• (2,4):(2,3)
• (2,4):[]
• [(2,4),(5,5),('a','b')]
• ([2,4],[2,2])

## Retrieving values

Up to now we have seen how to put values into lists and tuples. If they are to be of any use, though, there must be a way of getting back the stored values! In this section, we will explore the surface of this issue.

Let us begin with pairs (that is, 2-tuples). A very common use for them is to store the (x, y) coordinates of a point: imagine you have a chess board, and want to specify a specific square. You could do this by labelling all the rows from 1 to 8, and similarly with the columns. Then, a pair (2, 5) could represent the square in row 2 and column 5. Say we want to define a function for finding all the pieces in a given row. One way of doing this would be to find the coordinates of all the pieces, then look at the row part and see whether it's equal to whatever row we're being asked to examine. This function would need, once it had the coordinate pair (x, y) of a piece, to extract the x (the row coordinate). To do that there are two functions, fst and snd, that retrieve[16] the first and second elements out of a pair, respectively. Let's see some examples:

Example: Using fst and snd

Prelude> fst (2, 5)
2
Prelude> fst (True, "boo")
True
Prelude> snd (5, "Hello")
"Hello"


Note that these functions only work on pairs. Why? Yet again, it has to do with types. Pairs and triples (and quadruples, etc.) have necessarily different types, and fst and snd only accept pairs as arguments.
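Returning to the chess board example, fst is exactly what we need to pull the row out of a coordinate pair. A minimal sketch of the row check (isInRow is our own made-up name) could be:

```haskell
-- Does a piece at the given (row, column) coordinates sit in the row
-- we are interested in? fst extracts the row part of the pair.
isInRow :: Int -> (Int, Int) -> Bool
isInRow row coord = fst coord == row
```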

As for lists, the functions head and tail are roughly analogous to fst and snd, in that they disassemble a list by taking apart what (:) joined: head evaluates to the first element of the list, while tail gives the rest of the list.

Example: Using head and tail

Prelude> 2:[7,5,0]
[2,7,5,0]
Prelude> head [2,7,5,0]
2
Prelude> tail [2,7,5,0]
[7,5,0]


Note

An important caveat: if we apply head, or tail, to an empty list...

Prelude> head []


... it blows up, as an empty list has no first element, nor any other elements at all.

### Pending questions

The four functions introduced here do not appear to be enough to fully solve the problem we started this section with. While fst and snd provide a satisfactory solution for pairs, what about tuples with three or more elements? And with lists, it would be natural to wonder if we can't do any better than just breaking them after the first element - head and tail just don't seem to cut it. For the moment, we will have to leave these questions pending, but don't worry - the issues are perfectly solvable. Once we do some necessary groundwork we will return to this subject, considering it in detail and, in particular, dedicating a number of chapters to list manipulation. There, we will understand how separating head and tail allows us to do anything we want with lists.

Exercises
1. Use a combination of fst and snd to extract the 4 from the tuple (("Hello", 4), True).
2. Normal chess notation is somewhat different to ours: it numbers the rows from 1-8 and the columns a-h; and the column label is customarily given first. Could we label a specific point with a character and a number, like ('a', 4)? What important difference with lists does this illustrate?
3. Write a function which returns the head and the tail of a list as the first and second elements of a tuple.
4. Use head and tail to write a function which gives the fifth element of a list. Then, make a critique of it, pointing out any annoyances and pitfalls you notice.

## Polymorphic types

As you may have found out already by playing with :t, the type of a list depends on the types of its elements, and is denoted by enclosing it in square brackets:

Prelude> :t [True, False]
[True, False] :: [Bool]
Prelude> :t ["hey", "my"]
["hey", "my"] :: [[Char]]


Therefore, lists of Bool have different types than lists of [Char] (that is, strings), of Int and so on. Since functions only accept arguments of the types specified in the type of the function, this leads to a practical complication. For example, imagine a function that finds the length of a list. Since [Int], [Bool] and [String] are different types, it seems we would need separate functions for each case – a lengthInts :: [Int] -> Int, as well as a lengthBools :: [Bool] -> Int, as well as a lengthStrings :: [String] -> Int, as well as a...

That would be horribly annoying, and frustrating too, because intuitively it would seem that counting how many things there are in a list should be independent of what the things actually are. Fortunately, it does not work like that: there is a single function length, which works on all lists. But how can that possibly work? As usual, checking the type of length provides a good hint that there is something different going on...

Example: Our first polymorphic type

Prelude> :t length
length :: [a] -> Int


The a in the square brackets is not a type – remember that type names always start with uppercase letters. Instead, it is a type variable. When Haskell sees a type variable, it allows any type to take its place. This is exactly what we want. In type theory (a branch of mathematics), this is called polymorphism: functions or values with only a single type are called monomorphic, and things that use type variables to admit more than one type are polymorphic.

It is important to note that, in a single type signature, all cases of a certain type variable must be of the same type. For example,

f :: a -> a


means that f takes an argument of any type and gives something of the same type as the argument, as opposed to

f :: a -> b


which means that f takes an argument of any type and gives a result of some type b, which may or may not be the same as a.
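To make the difference concrete, here is a minimal illustration (the names keep and pick are ours). With a single type variable, the result is forced to have the type of the argument; with two independent variables, the types are allowed to differ, even if they don't have to:

```haskell
-- A function of type a -> a must return something of the same type
-- it received; the simplest such function just returns its argument.
keep :: a -> a
keep x = x

-- With two independent type variables, the argument types may differ:
-- pick always returns its first argument, whatever type the second has.
pick :: a -> b -> a
pick x y = x
```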

### Example: fst and snd

As we saw, you can use the fst and snd functions to extract parts of pairs. By this time you should already be developing the habit of wondering "what type is that function?" about every function you come across. Let's consider the cases of fst and snd . These two functions take a pair as their argument and return one element of this pair. First of all, the type of a pair depends on the type of its elements, just as with lists, so the functions need to be polymorphic. Also it is important to keep in mind that pairs, and tuples in general, don't have to be homogeneous with respect to types; their different parts can be different types. So if we were to say:

fst :: (a, a) -> a


That would mean fst would only work if the first and second part of the pair given as input had the same type. So what is the correct type? Simply:

Example: The types of fst and snd

fst :: (a, b) -> a
snd :: (a, b) -> b


b is a second type variable, which stands for a type which may or may not be equal to the one which replaces a.

Note that if you knew nothing about fst and snd other than their type signatures, you might guess that they return the first and second parts of a pair, respectively. That guess happens to be correct, but the signatures alone do not guarantee it: all they say is that the results must have the same types as the first and second components of the pair.

Exercises

Give type signatures for the following functions:

1. The solution to the third exercise of the previous section ("... a function which returns the head and the tail of a list as the first and second elements of a tuple").
2. The solution to the fourth exercise of the previous section ("... a function which gives the fifth element of a list").
3. h x y z = chr (x - 2) (remember we discussed chr in the previous chapter).

## Summary

We have introduced two new notions in this chapter, lists and tuples. Let us sum up the key similarities and differences between them:

1. Lists are defined by square brackets and commas : [1,2,3].
• They can contain anything as long as all the candidate elements of the list are of the same type
• They can also be built by the cons operator, (:), but you can only cons things onto lists
2. Tuples are defined by parentheses and commas : ("Bob",32)
• They can contain anything, even things of different types
• Their length is encoded in their type. That is, two tuples with different lengths will have different types.
3. Lists and tuples can be combined in any number of ways: lists within lists, tuples with lists, etc, but their criteria must still be fulfilled for the combinations to be valid.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.


# Type basics II


Up to now we have shrewdly avoided number types in our examples. In one exercise, we even went as far as asking you to "pretend" the arguments to (+) had to be of type Int. So, from what are we hiding?

The main theme of this module will be how numerical types are handled in Haskell. While doing so, we will introduce some important features of the type system. Before diving into the text, though, pause for a moment and consider the following question: what should be the type of the function (+)[17]?

## The Num class

As far as everyday Mathematics is concerned, there are very few restrictions on which kind of numbers we can add together. $2 + 3$ (two natural numbers), $(-7) + 5.12$ (a negative integer and a rational number), $\frac{1}{7} + \pi$ (a rational and an irrational)... all of these are valid – indeed, any two real numbers can be added together. In order to capture such generality in the simplest way possible we would like to have a very general Number type in Haskell, so that the signature of (+) would be simply

(+) :: Number -> Number -> Number


That design, however, does not fit well with the way computers perform arithmetic. While integer numbers in programs can be quite straightforwardly handled as sequences of binary digits in memory, that approach does not work for non-integer real numbers[18], making a more involved encoding necessary to support them: floating point numbers. While floating point provides a reasonable way to deal with real numbers in general, it has some inconveniences (most notably, loss of precision) which make using the simpler encoding worthwhile for integer values. We are thus left with at least two different ways of storing numbers, one for integers and another one for general real numbers, which should correspond to different Haskell types. Furthermore, computers are only able to perform operations like (+) on a pair of numbers if they are in the same format. That should put an end to our hopes of using a universal Number type – or even having (+) working with both integers and floating-point numbers...

It is easy, however, to see reality is not that bad. We can use (+) with both integers and floating point numbers:

Prelude> 3 + 4
7
Prelude> 4.34 + 3.12
7.46


When discussing lists and tuples, we saw that functions can accept arguments of different types if they are made polymorphic. In that spirit, one possible type signature for (+) that would account for the facts above would be:

(+) :: a -> a -> a


(+) would then take two arguments of the same type a (which could be integers or floating-point numbers) and evaluate to a result of type a. There is a problem with that solution, however. As we saw before, the type variable a can stand for any type at all. If (+) really had that type signature we would be able to add up two Bool, or two Char, which would make no sense – and is indeed impossible. Rather, the actual type signature of (+) takes advantage of a language feature that allows us to express the semantic restriction that a can be any type as long as it is a number type:

(+) :: (Num a) => a -> a -> a


Num is a typeclass - a group of types which includes all types which are regarded as numbers[19]. The (Num a) => part of the signature restricts a to number types – or, more accurately, instances of Num.

## Numeric types

But what are the actual number types – the instances of Num that a stands for in the signature? The most important numeric types are Int, Integer and Double:

• Int corresponds to the vanilla integer type found in most languages. It has fixed precision, and thus maximum and minimum values (on 32-bit machines the range goes from -2147483648 to 2147483647).
• Integer also is used for integer numbers, but unlike Int it supports arbitrarily large values – at the cost of some efficiency.
• Double is the double-precision floating point type, and what you will want to use for real numbers in the overwhelming majority of cases (there is also Float, the single-precision counterpart of Double, which in general is not an attractive option due to more loss of precision).

These types are available by default in Haskell, and are the ones you will generally deal with in everyday tasks.
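Since Int is an instance of the Bounded class, its limits can be inspected with maxBound and minBound (the exact values depend on your machine). The contrast between Int and Integer can be seen with factorials, as in this small sketch of ours:

```haskell
-- Int has a fixed range, delimited by minBound and maxBound.
-- Integer has no such limits and grows as large as memory allows.
smallFactorial :: Int
smallFactorial = product [1..10]   -- 10! fits comfortably in an Int

bigFactorial :: Integer
bigFactorial = product [1..50]     -- 50! is far too large for an Int
```

Trying to compute 50! at type Int would silently overflow; at type Integer it is computed exactly.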

### Polymorphic guesswork

There is one thing we haven't explained yet, though. If you tried the examples of addition we mentioned at the beginning you know that something like this is perfectly valid:

Prelude> (-7) + 5.12
-1.88


Here, it seems we are adding two numbers of different types – an integer and a non-integer. Shouldn't the type of (+) make that impossible?

To answer that question we have to see what the types of the numbers we entered actually are:

Prelude> :t (-7)
(-7) :: (Num a) => a


And, lo and behold, (-7) is neither Int nor Integer! Rather, it is a polymorphic constant, which can "morph" into any number type if need be. The reason for that becomes clearer when we look at the other number...

Prelude> :t 5.12
5.12 :: (Fractional t) => t


5.12 is also a polymorphic constant, but one of the Fractional class, which is more restrictive than Num – every Fractional is a Num, but not every Num is a Fractional (for instance, Ints and Integers are not).

When a Haskell program evaluates (-7) + 5.12, it must settle on an actual type for the numbers. It does so by performing type inference while accounting for the class specifications. (-7) can be any Num, but there are extra restrictions for 5.12, so its type will decide what (-7) will become. Since there are no other clues to what the types should be, 5.12 will assume the default Fractional type, which is Double; and, consequently, (-7) will become a Double as well, allowing the addition to proceed normally and return a Double[20].

There is a nice quick test you can do to get a better feel of that process. In a source file, define

x = 2


then load the file in GHCi and check the type of x. Then, change the file to add a y variable,

x = 2
y = x + 3


reload it and check the types of x and y. Finally, modify y to

x = 2
y = x + 3.1


and see what happens with the types of both variables.

### Monomorphic trouble

The sophistication of the numerical types and classes occasionally leads to some complications. Consider, for instance, the common division operator (/). It has the following type signature:

(/) :: (Fractional a) => a -> a -> a


Restricting a to fractional types is a must because the division of two integer numbers in general will not result in an integer. Nevertheless, we can still write something like

Prelude> 4 / 3
1.3333333333333333


because the literals 4 and 3 are polymorphic constants and therefore assume the type Double at the behest of (/). Suppose, however, we want to divide a number by the length of a list[21]. The obvious thing to do would be using the length function:

Prelude> 4 / length [1,2,3]


Unfortunately, that blows up:

<interactive>:1:0:
    No instance for (Fractional Int)
      arising from a use of `/' at <interactive>:1:0-17
    Possible fix: add an instance declaration for (Fractional Int)
    In the expression: 4 / length [1, 2, 3]
    In the definition of `it': it = 4 / length [1, 2, 3]


As usual, the problem can be understood by looking at the type signature of length:

length :: [a] -> Int


The result of length is not a polymorphic constant, but an Int; and since an Int is not a Fractional it can't fit the signature of (/).

There is a handy function which provides a way of escaping from this problem. Before following on with the text, try to guess what it does only from the name and signature:

fromIntegral :: (Integral a, Num b) => a -> b


fromIntegral takes an argument of some Integral type (like Int or Integer) and makes it a polymorphic constant. By combining it with length we can make the length of the list fit into the signature of (/):

Prelude> 4 / fromIntegral (length [1,2,3])
1.3333333333333333


While this complication may look spurious at first, this way of doing things makes it easier to be rigorous when manipulating numbers. If you define a function that takes an Int argument you can be entirely sure that it will never be converted to an Integer or a Double unless you explicitly tell the program to do so (for instance, by using fromIntegral). As a direct consequence of the refinement of the type system, there is a surprising diversity of classes and functions for dealing with numbers in Haskell.
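A typical place where this combination shows up is computing the average of a list, where the sum (a Fractional) must be divided by the length (an Int). A sketch of such a function (average is our own name for it):

```haskell
-- The average of a list of Doubles. length returns an Int, which
-- must be converted with fromIntegral before it can be used with (/).
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)
```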

## Classes beyond numbers

There are many other use cases for typeclasses beyond arithmetic. For example, the type signature of (==) is:

(==) :: (Eq a) => a -> a -> Bool


Like (+) or (/), (==) is a polymorphic function. It compares two values of the same type, which must belong to the class Eq, and returns a Bool. Eq is simply the class of types of values which can be compared for equality, and includes all of the basic non-functional types[22].

Typeclasses are a very general language feature which adds a lot to the power of the type system. Later in the book we will return to this topic to see how to use them in custom ways, which will allow you to appreciate their usefulness in its fullest extent.
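As a small taste of Eq in a signature of our own devising (allThreeEqual is a made-up name), note how a single definition works for numbers, characters, strings, or any other Eq instance:

```haskell
-- Works for any type whose values can be compared for equality.
allThreeEqual :: Eq a => a -> a -> a -> Bool
allThreeEqual x y z = x == y && y == z
```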


# Building vocabulary


This chapter will be somewhat different from the surrounding ones. Think of it as an interlude, where the main goal is not to introduce new features, but to present important advice for studying (and using!) Haskell. Here, we will discuss the importance of acquiring a vocabulary of functions and how this book, along with other resources, can help you with that. First, however, we need to make a few quick points about function composition.

## Function composition

Function composition is a really simple concept. It just means applying one function to a value and then applying another function to the result. Consider these two very simple functions:

Example: Simple functions

f x = x + 3
square x = x^2


We can compose them in two different ways, depending on which one we apply first:

Prelude> square (f 1)
16
Prelude> square (f 2)
25
Prelude> f (square 1)
4
Prelude> f (square 2)
7


The parentheses around the inner function are necessary; otherwise, the interpreter would think that you were trying to get the value of square f, or f square, neither of which means anything.

The composition of two functions is a function in its own right. If applying f and then square, or vice-versa, to a number were a frequent, meaningful or otherwise important operation in a program, a very natural next step would be defining:

Example: Composed functions

squareOfF x = square (f x)

fOfSquare x = f (square x)


There is a second, nifty way of writing composed functions. It uses (.), the function composition operator, and is as simple as putting a period between the two functions:

Example: Composing functions with (.)

squareOfF x = (square . f) x

fOfSquare x = (f . square) x


Note that functions are still applied from right to left, so that g(f(x)) == (g . f) x [23].
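Since (square . f) is itself a function, the x on both sides of these definitions can even be dropped, giving so-called point-free definitions. This is just a terser spelling of the same thing; we add Int signatures here only to keep the sketch concrete:

```haskell
f :: Int -> Int          -- the example functions from above
f x = x + 3

square :: Int -> Int
square x = x ^ 2

-- The same composed functions, written without naming the argument.
squareOfF :: Int -> Int
squareOfF = square . f

fOfSquare :: Int -> Int
fOfSquare = f . square
```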

## The need for a vocabulary

Function composition allows us to define complicated functions using simpler ones as building blocks. One of the key qualities of Haskell is how simple it is to write composed functions, no matter whether the base functions are written by ourselves or by someone else[24], and the extent to which that helps us in writing simple, elegant and expressive code.

In order to use function composition, though, we first need to have functions to compose. While naturally the functions we ourselves write will always be available, every installation of GHC comes with a vast assortment of libraries (that is, packaged code), which provide functions for various common tasks. For that reason, it is vital for effective Haskell programming to develop some familiarity with the essential libraries or, at least, to know how to find useful functions in them.

We can look at this issue from a different perspective. We have gone through a substantial portion of the Haskell syntax already; and, by the time we are done with the Recursion chapter (which is just around the corner), we could, in principle, write pretty much any list manipulation program we want; and, lists being such an important data structure, that corresponds to a very wide set of useful programs. Doing so with our current set of tools, however, would be terribly inefficient; not only due to the lack of some more advanced language features but, crucially, because we would end up rewriting large parts of the standard libraries. Rather, it is far preferable to have the libraries dealing as much as possible with trivial, or well-known and already solved, problems while we dedicate our brain cells to writing programs that solve the problems we are truly interested in. And, even in the latter case, many features of the libraries can help immensely with developing our own algorithms[25].

## Prelude and the hierarchical libraries

It is time to mention a few basic facts about the standard libraries. First and foremost, there is Prelude, which is the core library loaded by default in every Haskell program. Alongside with the basic types and other essential functionality, it provides a set of ubiquitous and extremely useful functions. We will refer to Prelude and its functions all the time throughout these introductory chapters.

Alongside with Prelude, there are the hierarchical libraries, which provide a much wider range of functionality. Although they are provided by default with GHC, they are not loaded automatically like Prelude. Rather, they are distributed as modules, which must be imported into your program. Later on we will actually explain how that works; but for now all you need to know is that, if we mention that, for instance, the function permutations is in the module Data.List, you just have to add a line import Data.List to the top of your source file, and permutations, alongside with the rest of Data.List, will be available.

Example: Importing a module in a source file

import Data.List

testPermutations = permutations "Prelude"


For quick GHCi tests, just enter :m +Data.List at the command line to load that module.

Prelude> :m +Data.List
Prelude Data.List> :t permutations
permutations :: [a] -> [[a]]


## One exhibit

Before continuing, let us see one (slightly histrionic, we admit) example of what familiarity with a few basic functions from Prelude can bring us[26]. Suppose we need a function which takes a string composed of words separated by spaces and returns that string with the order of the words reversed, so that "Mary had a little lamb" becomes "lamb little a had Mary". Now, we can solve that problem using exclusively what we have seen so far about Haskell, plus a few insights that can be acquired by studying the Recursion chapter. Here is what it might look like:

Example: There be dragons

monsterRevWords :: String -> String
monsterRevWords input = rejoinUnreversed (divideReversed input)
  where
    divideReversed s = go1 [] s
      where
        go1 divided [] = divided
        go1 [] (c:cs)
          | testSpace c = go1 [] cs
          | otherwise   = go1 [[]] (c:cs)
        go1 (w:ws) [c]
          | testSpace c = (w:ws)
          | otherwise   = ((c:w):ws)
        go1 (w:ws) (c:c':cs)
          | testSpace c =
              if testSpace c'
                  then go1 (w:ws) (c':cs)
                  else go1 ([c']:w:ws) cs
          | otherwise =
              if testSpace c'
                  then go1 ((c:w):ws) (c':cs)
                  else go1 ((c:w):ws) (c':cs)
    testSpace c = c == ' '
    rejoinUnreversed [] = []
    rejoinUnreversed [w] = reverseList w
    rejoinUnreversed strings = go2 (' ' : reverseList newFirstWord) otherWords
      where
        revStrings   = reverseList strings
        newFirstWord = head revStrings
        otherWords   = tail revStrings
        go2 rejoined ([]:[]) = rejoined
        go2 rejoined ([]:(w':ws')) = go2 rejoined ((' ':w'):ws')
        go2 rejoined ((c:cs):ws) = go2 (c:rejoined) (cs:ws)
    reverseList [] = []
    reverseList w = go3 [] w
      where
        go3 rev [] = rev
        go3 rev (c:cs) = go3 (c:rev) cs


There are too many problems with this thing; so let us consider just three of them:

• If we claimed that monsterRevWords does what is expected, you could either take our word for it, test it exhaustively on all sorts of possible inputs or attempt to understand it and get an awful headache (please don't).
• Furthermore, if we write a function this ugly and have to fix a bug or slightly modify it later on[27], we are set for an awful time.
• Finally, there is at least one easy to spot potential problem: if you have another glance at the definition, about halfway down there is a testSpace helper function which checks if a character is a space or not. The test, however, only includes the common space character (that is, ' '), and not other whitespace characters (tabs, newlines, etc.)[28].

If, however, we are armed merely with knowledge of the following Prelude functions:

• words, which reliably breaks down a string in whitespace delimited words, returning a list of strings;
• reverse, which reverses a list (incidentally, that is exactly what the reverseList above does); and
• unwords, which does the opposite of words;

then function composition means our problem is instantly solved.

Example: revWords done the Haskell way

revWords :: String -> String
revWords input = (unwords . reverse . words) input


Short, simple, readable and, since Prelude is reliable, bug-free[29]. The point here is: any time some program you are writing begins to look like monsterRevWords, look around and reach for your toolbox - the libraries.

## Acquiring vocabulary

After the stern warnings above, you might have predicted we will continue with the book by diving deep into the standard libraries. That is not the route we will follow, however - at least not in the first part of the book. The main reason for that is the Beginner's Track is meant to cover most of the Haskell language functionality in a readable and reasonably compact account, and a linear, systematic study of the libraries would in all likelihood have us sacrificing either one attribute or the other.

In any case, even if we will not stop with the programme to investigate them, the libraries will remain close at hand as we advance in the course (that also means there is no need for you to pause in your way through the book just to study the libraries on your own!). In this final section, we will give some advice on how you can use this book to learn them and which other resources are available.

### With this book

• For starters, once we enter Elementary Haskell, you will notice several of the exercises - mainly among those about list processing - involve writing equivalent definitions for Prelude functions. For each of these exercises you do, one more function will be added to your repertoire.
• Furthermore, every now and then we will introduce a "new" library function; maybe within an example, or just with a mention in passing. Whenever we do so, take a minute to test the function and do some experiments. The idea is to extend the curiosity about types we mentioned in Type basics to the functions themselves.
• While the first few chapters are quite tightly-knit, later parts of the book are more independent of each other. That is especially true of the third track, Haskell in Practice. There, among other things, you can find chapters on the Hierarchical libraries; most of their content can be understood soon after having completed Elementary Haskell.
• As we reach the later parts of the Beginner's track, and especially when we get to monads, the concepts we discuss will naturally lead to exploration of important parts of the hierarchical libraries.

### Other resources

• First and foremost, there is the documentation. While it is probably too dry to be really useful right now, soon enough it will prove invaluable. You can read not only the Prelude specification on-line but also the GHC hierarchical libraries documentation, with nice navigation and source code just one click away.
• A fun way of searching through the documentation is provided by Hoogle, a Haskell search engine which covers the libraries distributed with GHC. (There is also Hayoo!, which includes a wide range of libraries that are not installed by default.)
• Finally, we will, when appropriate, give pointers to other useful learning resources, especially when we move towards intermediate and advanced topics.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.

# Next steps


This chapter has two main goals. We will dedicate the bulk of the text to introducing pattern matching, an absolutely fundamental feature of Haskell. We will also mention two new pieces of syntax: if expressions and let bindings, which will prove handy in the upcoming Simple input and output chapter. In our context, though, they can be safely thought of as minor points, for even though if and let are sometimes convenient, there is little they do that can't be achieved with the syntax we already know. There is plenty of time for you to master them, and there will be more examples a little further ahead in the book.

## if / then / else

Haskell syntax supports garden-variety conditional expressions of the form if... then... (else ...). For instance, consider a function that returns (-1) if its argument is less than 0; 0 if its argument is 0; and 1 if its argument is greater than 0. There is a predefined function which does that job already (it is called signum); for the sake of illustration, though, let's define a version of our own:

Example: The signum function.

mySignum x =
    if x < 0
        then -1
        else if x > 0
            then 1
            else 0


You can experiment with this as follows:

*Main> mySignum 5
1
*Main> mySignum 0
0
*Main> mySignum (5-10)
-1
*Main> mySignum (-1)
-1


The parentheses around "-1" in the last example are required; if missing, the system will think you are trying to subtract 1 from mySignum, which is ill-typed.

In an if/then/else construct, first the condition (in this case x < 0) is evaluated. If it evaluates to True, the whole construct evaluates to the then expression; otherwise (if the condition is False), it evaluates to the else expression. All of that is pretty intuitive. If you have programmed in an imperative language before, however, it might seem surprising that we always need both a then and an else clause. That must be so because the construct has to result in a value in either case - and a value of the same type in both cases, by the way.
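The same-type requirement is enforced by the type checker. A minimal sketch (the function name describe is ours, purely for illustration):

```haskell
-- Both branches produce a String, so the whole if expression is a String.
describe :: Int -> String
describe x = if x < 0 then "negative" else "non-negative"

-- By contrast, something like
--     bad x = if x < 0 then "negative" else 0
-- is rejected, because the two branches would have different types.
```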

if / then / else function definitions like the one above can be easily rewritten with the guards syntax presented in past modules:

Example: From if to guards

mySignum x
    | x < 0     = -1
    | x > 0     = 1
    | otherwise = 0


And conversely, the absolute value function in Truth values can be rendered as:

Example: From guards to if

abs x =
    if x < 0
        then -x
        else x


## Introducing pattern matching

Suppose we are writing a program which tracks statistics from a racing competition in which racers receive points based on their classification in each race, the scoring rules being:

• 10 points for the winner of the race;
• 6 for the second-placed;
• 4 for the third-placed;
• 3 for the fourth-placed;
• 2 for the fifth-placed;
• 1 for the sixth-placed; and
• no points for other racers.

We can write a simple function which takes a classification (represented by an integer number: 1 for first place, etc.[30]) and returns how many points were earned. One possible solution uses if/then/else:

Example: Computing points with if/then/else

pts :: Int -> Int
pts x =
    if x == 1
        then 10
        else if x == 2
            then 6
            else if x == 3
                then 4
                else if x == 4
                    then 3
                    else if x == 5
                        then 2
                        else if x == 6
                            then 1
                            else 0


Yuck! Admittedly, it wouldn't look this hideous had we used guards instead of if/then/else, but it still would be tedious to write (and read!) all those equality tests. We can do better, though:

Example: Computing points with a piece-wise definition

pts :: Int -> Int
pts 1 = 10
pts 2 = 6
pts 3 = 4
pts 4 = 3
pts 5 = 2
pts 6 = 1
pts _ = 0


Much better. However, even though defining pts in this style (which we will arbitrarily call piece-wise definition from now on) shows to a reader of the code what the function does in an awesomely clear way, the syntax looks odd given what we have seen of Haskell so far. Why are there seven equations for pts? What are those numbers doing in their left-hand sides? Where has the x gone?

The example above is our first encounter with a key feature of Haskell called pattern matching. When we call the second version of pts, the argument is matched against the numbers on the left side of each of the equations, which in turn are the patterns. This matching is done in the order we wrote the equations; so first of all the argument is matched against the 1 in the first equation. If the argument is indeed 1, we have a match and the first equation is used; and so pts 1 evaluates to 10 as expected. Otherwise, the other equations are tried in order following the same procedure. The final one, though, is rather different: the _ is a special pattern that might be read as "whatever": it matches with anything; and therefore if the argument doesn't match any of the previous patterns pts will return zero.

As for why there is no x or any other variable standing for the argument, it is simply because we don't need that to write the definitions. All possible return values are constants; and since the point of specifying a variable name on the left side is using it to write the right side, the x is unnecessary in our function.

There is, however, an obvious way to make pts even more concise. The points given to a racer decrease regularly from third place to sixth place, at a rate of one point per position. After noticing that, we can eliminate three of the seven equations as follows:

Example: Mixing styles

pts :: Int -> Int
pts 1 = 10
pts 2 = 6
pts x
    | x <= 6    = 7 - x
    | otherwise = 0


The first thing to point out here is that we can mix both styles of definitions. In fact, when we write pts x on the left side of an equation we are using pattern matching too! As a pattern, the x (or any other variable name) matches anything just like _; the only difference is that it also gives us a name to use on the right side (which, in this case, is necessary to write 7 - x).

Exercises
We cheated a little when moving from the second version of pts to the third one: they do not do exactly the same thing. Can you spot what the difference is?

Please do the exercise above before continuing: it is quick to do and really important.

Beyond integers, pattern matching works with values of various other types. One handy example is booleans. For instance, the (||) logical-or operator we met in Truth values could be defined as:

Example: (||)

(||) :: Bool -> Bool -> Bool
False || False = False
_     || _     = True


Or:

Example: (||), done another way

(||) :: Bool -> Bool -> Bool
True  || _ = True
False || y = y


When matching two or more arguments at once, the equation will only be used if all of them match.

To conclude this section, let us discuss a few things that might go wrong when using pattern matching:

• If we put a pattern which matches anything (such as the final patterns in each of the pts examples) before the more specific ones, the latter will be ignored. GHC(i) will typically warn us that "Pattern match(es) are overlapped" in such cases.
• If no patterns match, an error will be triggered. Generally, it is a good idea to ensure the patterns cover all cases, in the same way that the otherwise guard is not mandatory but highly recommended.
• Finally, while you can play around with various ways of (re)defining (&&)[31], here is one version that will not work:
(&&) :: Bool -> Bool -> Bool
x && x = x -- oops!
_ && _ = False

The program won't test whether the arguments are equal just because we happened to use the same name for both. As far as the matching goes, we could just as well have written _ && _ in the first case. Even worse: since we gave the same name to both arguments, GHC(i) will refuse the function due to "Conflicting definitions for `x'".
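A working version has to match on the constructors instead of reusing a name. A sketch (called myAnd here so it does not clash with the Prelude's (&&)):

```haskell
-- (&&) defined by pattern matching: only True && True yields True.
myAnd :: Bool -> Bool -> Bool
myAnd True True = True
myAnd _    _    = False
```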

## Tuple and list patterns

While the examples above show that pattern matching helps in writing more elegant code, that does not explain why it is so important. We will begin to grasp why it matters so much by considering the problem of writing a definition for fst, the function which extracts the first element of a pair. At this point, that appears to be an impossible task, as the only way of accessing the first value of the pair is by using fst itself... The following function, however, does the same thing as fst (confirm it in GHCi):

Example: A definition for fst

fst' :: (a, b) -> a
fst' (x, _) = x


It's magic! Instead of using a regular variable on the left side of the equation, we specified the argument with the pattern of the 2-tuple - that is, (,) - filled with a variable and the _ pattern. The variable was then automatically associated with the first component of the tuple, and we used it to write the right side of the equation. The definition of snd is, of course, analogous.
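For the sake of completeness, here is that analogous definition (called snd' so it does not clash with the Prelude's snd):

```haskell
-- The variable now sits in the second position of the tuple pattern.
snd' :: (a, b) -> b
snd' (_, y) = y
```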

Furthermore, the trick demonstrated above can be done with lists as well. Here are the actual definitions of head and tail:

Example: head, tail and patterns

head             :: [a] -> a
head (x:_)       =  x
head []          =  error "Prelude.head: empty list"

tail             :: [a] -> [a]
tail (_:xs)      =  xs
tail []          =  error "Prelude.tail: empty list"


The only essential change in relation to the previous example was replacing (,) with the pattern of the cons operator (:). These functions also have an equation using the pattern of the empty list, []; however, since empty lists have no head or tail there is nothing to do other than use error to print a prettier error message.

Summarizing, the real power of pattern matching comes from how it can be used to access the parts of a complex value. Pattern matching on lists, in particular, will be extensively deployed in Recursion and the chapters that follow it. Later on, we will also dedicate a whole chapter to explore what is behind such a seemingly magical feature.
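Patterns can also nest, digging several levels into a value at once. As a small illustration (this helper is our own, not a Prelude function):

```haskell
-- (_:x:_) matches any list with at least two elements and names the second.
secondElem :: [a] -> a
secondElem (_:x:_) = x
secondElem _       = error "secondElem: list has fewer than two elements"
```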

## let bindings

To conclude this chapter, a brief word about let bindings, which are an alternative to where clauses for making local declarations. For instance, take the problem of finding the roots of a polynomial of the form $ax^2+bx+c$ (or, in other words, the solutions to a second degree equation - think back to your middle school math courses). They are given by:

$x = \frac {-b \pm \sqrt{b^2-4ac}} {2a}$.

We could write the following function to compute the two values of $x$:

roots a b c =
    ((-b + sqrt (b*b - 4*a*c)) / (2*a),
     (-b - sqrt (b*b - 4*a*c)) / (2*a))


Writing the sqrt (b*b - 4*a*c) term twice is annoying, though; we can use a local binding instead, using either where or, as demonstrated below, a let declaration:

roots a b c =
    let sdisc = sqrt (b*b - 4*a*c)
    in  ((-b + sdisc) / (2*a),
         (-b - sdisc) / (2*a))


We put the let keyword before the declaration, and then use in to signal we are returning to the "main" body of the function. It is possible to put multiple declarations inside a single let...in block - just make sure they are indented the same amount, otherwise there will be syntax errors:

roots a b c =
    let sdisc   = sqrt (b*b - 4*a*c)
        twice_a = 2*a
    in  ((-b + sdisc) / twice_a,
         (-b - sdisc) / twice_a)


Note

Still on indentation, the Indentation chapter has a full account of indentation rules; so if doubts about that arise as you code you might want to give it a glance.
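As a quick sanity check of the let version (the explicit Double signature is our addition; the book's definition leaves the type to be inferred), the roots of $x^2 - 3x + 2$ come out as 2 and 1:

```haskell
roots :: Double -> Double -> Double -> (Double, Double)
roots a b c =
    let sdisc   = sqrt (b*b - 4*a*c)   -- square root of the discriminant
        twice_a = 2*a
    in  ((-b + sdisc) / twice_a,
         (-b - sdisc) / twice_a)

-- roots 1 (-3) 2 evaluates to (2.0, 1.0)
```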


# Simple input and output


So far this tutorial has discussed functions that return values, which is well and good. But how do we write "Hello world"? To give you a first taste of it, here is a small variant of the "Hello world" program:

Example: Hello! What is your name?

main = do
    putStrLn "Please enter your name: "
    name <- getLine
    putStrLn ("Hello, " ++ name ++ ", how are you?")


At the very least, what should be clear is that dealing with input and output (IO) in Haskell is not a lost cause! Pure functional languages have always had a problem with input and output because IO requires side effects. Pure functions always have to return the same results for the same arguments. But how can such a function "getLine" return the same value every time it is called?

Before we give the solution, let's take a step back and think about the difficulties inherent in such a task.

Any IO library should provide a host of functions, containing (at a minimum) operations like:

• print a string to the screen
• read a string from a keyboard
• write data to a file
• read data from a file

There are two issues here. Let's first consider the initial two examples and think about what their types should be. Certainly the first procedure should take a String argument and produce something, but what should it produce? It could produce a unit (), since there is essentially no return value from printing a string. The second operation, similarly, should return a String, but it doesn't seem to require an argument.

We want both of these operations to be pure functions, but they are, by definition, not pure. The item that reads a string from the keyboard cannot be a function, as it will not return the same String every time. If the first function simply returns () every time, then referential transparency tells us we should have no problem replacing it with a function f _ = (), but clearly this does not have the desired result because the function has a side effect: it prints the argument.

## Actions

The breakthrough for solving this problem came when Philip Wadler realized that monads would be a good way to think about IO computations. In fact, monads are able to express much more than just the simple operations described above; we can use them to express a variety of constructions like concurrency, exceptions, IO, non-determinism and much more. Moreover, there is nothing special about them; they can be defined within Haskell with no special handling from the compiler (though compilers often choose to optimize monadic operations). Monads also have a somewhat undeserved reputation of being difficult to understand. So we're going to leave things at that, knowing simply that IO somehow makes use of monads without necessarily understanding the gory details behind them (they really aren't so gory). For now, we can forget that monads even exist.

As pointed out before, we cannot think of things like "print a string to the screen" or "read data from a file" as functions, since they are not (in the pure mathematical sense). Therefore, we give them another name: actions. Not only do we give them a special name; we give them a special type to complement it. One particularly useful action is putStrLn, which prints a string to the screen. This action has type:

putStrLn :: String -> IO ()


As expected, putStrLn takes a string argument. What it returns is of type IO (). This means that putStrLn is actually an action (that is what the IO means). Furthermore, when this action is evaluated (or "run"), the result will have type ().

Note

Actually, this type means that putStrLn is an action "within the IO monad", but we will gloss over this for now.

You can probably already guess the type of getLine:

getLine :: IO String


This means that getLine is an IO action that, when run, will have type String.

The question immediately arises: "how do you 'run' an action?". This is something that is left up to the compiler. You cannot actually run an action yourself; instead, a program is, itself, a single action that is run when the compiled program is executed. Thus, the compiler requires that the main function have type IO (), which means that it is an IO action that returns nothing. The compiled code then executes this action.

However, while you are not allowed to run actions yourself, you are allowed to combine actions. There are two ways to go about this. The one we will focus on in this chapter is the do notation, which provides a convenient means of putting actions together, and allows us to get useful things done in Haskell without having to understand what really happens. Lurking behind the do notation is the more explicit approach using the (>>=) operator, but we will not be ready to cover this until the chapter Understanding monads.

Note

Do notation is just syntactic sugar for (>>=). If you have experience with higher order functions, it might be worth starting with the latter approach and coming back here to see how do notation gets used.

Let's consider the following name program:

main = do
    putStrLn "Please enter your name: "
    name <- getLine
    putStrLn ("Hello, " ++ name ++ ", how are you?")


We can consider the do notation as a way to combine a sequence of actions. Moreover, the <- notation is a way to get the value out of an action. So, in this program, we're sequencing three actions: a putStrLn, a getLine and another putStrLn. The putStrLn action has type String -> IO (), so we provide it a String, and the fully applied action has type IO (). This is something that we are allowed to run as a program.

Exercises

Write a program which asks the user for the base and height of a right-angled triangle, calculates its area, and prints it to the screen. The interaction should look something like:

The base?
3.3
The height?
5.4
The area of that triangle is 8.91

Hint: you can use the function read to convert user strings like "3.3" into numbers like 3.3, and the function show to convert a number into a string.

### Left arrow clarifications

#### <- is optional

While we are allowed to get a value out of certain actions like getLine, we certainly are not obliged to do so. For example, we could very well have written something like this:

Example: executing getLine directly

main = do
    putStrLn "Please enter your name: "
    getLine
    putStrLn ("Hello, how are you?")


Clearly, that isn't very useful: the whole point of prompting the user for his or her name was so that we could do something with the result. That being said, it is conceivable that one might wish to read a line and completely ignore the result. Omitting the <- will allow for that; the action will happen, but the data won't be stored anywhere.

In order to get the value out of the action, we write name <- getLine, which basically means "run getLine, and put the results in the variable called name."

#### <- can be used with any action but the last

On the flip side, there are also very few restrictions on which actions can have values obtained from them. Consider the following example, where we put the results of each action into a variable (except the last... more on that later):

Example: putting all results into a variable

main = do
    x <- putStrLn "Please enter your name: "
    name <- getLine
    putStrLn ("Hello, " ++ name ++ ", how are you?")


The variable x gets the value out of its action, but that isn't very interesting because the action returns the unit value (). So while we could technically get the value out of any action, it isn't always worth it. But wait, what about that last action? Why can't we get a value out of that? Let's see what happens when we try:

Example: getting the value out of the last action

main = do
    name <- getLine
    y <- putStrLn ("Hello, " ++ name ++ ", how are you?")


Whoops!

YourName.hs:5:2:
The last statement in a 'do' construct must be an expression


This is a much more interesting example, but it requires a somewhat deeper understanding of Haskell than we currently have. Suffice it to say, whenever you use <- to get the value of an action, Haskell is always expecting another action to follow it. So the very last action better not have any <-s.

### Controlling actions

Normal Haskell constructions like if/then/else can be used within the do notation, but you need to be somewhat careful. For instance, in a simple "guess the number" program, we have:

doGuessing num = do
    guess <- getLine
    if (read guess) < num
      then do putStrLn "Too low!"
              doGuessing num
      else if (read guess) > num
             then do putStrLn "Too high!"
                     doGuessing num
             else do putStrLn "You Win!"


If we think about how the if/then/else construction works, it essentially takes three arguments: the condition, the "then" branch, and the "else" branch. The condition needs to have type Bool, and the two branches can have any type, provided that they have the same type. The type of the entire if/then/else construction is then the type of the two branches.

In the outermost comparison, we have (read guess) < num as the condition. This clearly has the correct type. Let's just consider the "then" branch. The code here is:

do putStrLn "Too low!"
   doGuessing num


Here, we are sequencing two actions: putStrLn and doGuessing. The first has type IO (), which is fine. The second also has type IO (), which is fine. The type result of the entire computation is precisely the type of the final computation. Thus, the type of the "then" branch is also IO (). A similar argument shows that the type of the "else" branch is also IO (). This means the type of the entire if/then/else construction is IO (), which is just what we want.

Note

In this code, the last line is else do putStrLn "You Win!". This is somewhat overly verbose. In fact, else putStrLn "You Win!" would have been sufficient, since do is only necessary to sequence actions. Since we have only one action here, it is superfluous.

It is incorrect to think to yourself "Well, I already started a do block; I don't need another one," and hence write something like:

    do if (read guess) < num
         then putStrLn "Too low!"
              doGuessing num
         else ...


Here, since we didn't repeat the do, the compiler doesn't know that the putStrLn and doGuessing calls are supposed to be sequenced, and the compiler will think you're trying to call putStrLn with three arguments: the string, the function doGuessing and the integer num. It will certainly complain (though the error may be somewhat difficult to comprehend at this point).

Note

When using if within a do block, keep in mind that the else branch is mandatory; otherwise you will get compile errors!

Exercises

Write a program that asks the user for his or her name. If the name is one of Simon, John or Phil, tell the user that you think Haskell is a great programming language. If the name is Koen, tell them that you think debugging Haskell is fun (Koen Claessen is one of the people who works on Haskell debugging); otherwise, tell the user that you don't know who he or she is.

(As far as syntax goes there are a few different ways to do it; write at least a version using if / then / else.)

## Actions under the microscope

Actions may look easy up to now, but they are actually a common stumbling block for new Haskellers. If you have run into trouble working with actions, you might consider looking to see if one of your problems or questions matches the cases below. It might be worth skimming this section now, and coming back to it when you actually experience trouble.

One temptation might be to simplify our program for getting a name and printing it back out. Here is one unsuccessful attempt:

Example: Why doesn't this work?

main =
    do putStrLn "What is your name? "
       putStrLn ("Hello " ++ getLine)


Ouch!

YourName.hs:3:26:
    Couldn't match expected type `[Char]'
           against inferred type `IO String'


Let us boil the example above down to its simplest form. Would you expect this program to compile?

Example: This still does not work

main =
    do putStrLn getLine


For the most part, this is the same (attempted) program, except that we've stripped off the superfluous "What is your name" prompt as well as the polite "Hello". One trick to understanding this is to reason about it in terms of types. Let us compare:

putStrLn :: String -> IO ()
getLine  :: IO String


We can use the same mental machinery we learned in Type basics to figure how everything went wrong. Simply put, putStrLn is expecting a String as input. We do not have a String, but something tantalisingly close, an IO String. This represents an action that will give us a String when it's run. To obtain the String that putStrLn wants, we need to run the action, and we do that with the ever-handy left arrow, <-.

Example: This time it works

main =
    do name <- getLine
       putStrLn name


Working our way back up to the fancy example:

main =
    do putStrLn "What is your name? "
       name <- getLine
       putStrLn ("Hello " ++ name)


Now the name is the String we are looking for and everything is rolling again.

### Mind your expression types too

Fine, so we've made a big deal out of the idea that you can't use actions in situations that don't call for them. The converse of this is that you can't use non-actions in situations that DO expect actions. Say we want to greet the user, but this time we're so excited to meet them, we just have to SHOUT their name out:

Example: Exciting but incorrect. Why?

import Data.Char (toUpper)

main =
    do name <- getLine
       loudName <- makeLoud name
       putStrLn ("Hello " ++ loudName ++ "!")
       putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

makeLoud :: String -> String
makeLoud s = map toUpper s


This goes wrong...

Couldn't match expected type `IO' against inferred type `[]'
    Expected type: IO t
    Inferred type: String
In a 'do' expression: loudName <- makeLoud name


This is quite similar to the problem we ran into above: we've got a mismatch between something that is expecting an IO type, and something which does not produce one. This time, the cause is our use of the left arrow <-; we're trying to left arrow a value of makeLoud name, which really isn't left arrow material. It's basically the same mismatch we saw in the previous section, except now we're trying to use regular old String (the loud name) as an IO String, when those clearly are not the same thing. The latter is an action, something to be run, whereas the former is just an expression minding its own business. Note that we cannot simply use loudName = makeLoud name because a do sequences actions, and loudName = makeLoud name is not an action.

So how do we extricate ourselves from this mess? We have a number of options:

• We could find a way to turn makeLoud into an action, to make it return IO String. But this is not desirable, because the whole point of functional programming is to cleanly separate our side-effecting stuff (actions) from the pure and simple stuff. For example, what if we wanted to use makeLoud from some other, non-IO, function? An IO makeLoud is certainly possible (how?), but missing the point entirely.
• We could use return to promote the loud name into an action, writing something like loudName <- return (makeLoud name). This is slightly better, in that we are at least leaving the makeLoud function itself nice and IO-free, whilst using it in an IO-compatible fashion. But it's still moderately clunky, because by virtue of the left arrow, we're implying that there's action to be had -- how exciting! -- only to let our reader down with a somewhat anticlimactic return.
• Or we could use a let binding...

It turns out that Haskell has a special extra-convenient syntax for let bindings in actions. It looks a little like this:

Example: let bindings in do blocks.

main =
    do name <- getLine
       let loudName = makeLoud name
       putStrLn ("Hello " ++ loudName ++ "!")
       putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)


If you're paying attention, you might notice that the let binding above is missing an in. This is because let bindings in do blocks do not require the in keyword. You could very well use it, but then you'd have to make a mess of your do blocks. For what it's worth, the following two blocks of code are equivalent.

sweet

 do name <- getLine
    let loudName = makeLoud name
    putStrLn ("Hello " ++ loudName ++ "!")
    putStrLn (
        "Oh boy! Am I excited to meet you, "
            ++ loudName)

unsweet

 do name <- getLine
    let loudName = makeLoud name
    in  do putStrLn ("Hello " ++ loudName ++ "!")
           putStrLn (
               "Oh boy! Am I excited to meet you, "
                   ++ loudName)
Exercises
1. Why does the unsweet version of the let binding require an extra do keyword?
2. Do you always need the extra do?
3. (extra credit) Curiously, let without in is exactly how we wrote things when we were playing with the interpreter in the beginning of this book. Why can you omit the in keyword in the interpreter, when you'd have to put it in when typing up a source file?

At this point, you should have the fundamentals needed to do some fancier input/output. Here are some IO-related topics you may want to check in parallel with the main track of the course.

• You could continue the sequential track, by learning more about types and eventually monads.
• Alternately: you could start learning about building graphical user interfaces in the GUI chapter
• For more IO-related functionality, you could also consider learning more about the System.IO library

# Recursion


Recursion plays a central role in Haskell (and computer science and mathematics in general): recursion is the idea of defining a function in terms of itself. A function defined in this way is said to be recursive.

Recursion is merely a form of repetition, but sometimes it is taught in a confusing or obscure way. It is simple enough to understand as long as you separate the meaning of a recursive function from its behaviour.

Recursion, like other forms of repetition, requires a stopping or termination condition. Like an infinite loop, a badly designed recursive function might lead to infinite regress; if done properly, it won't.

Generally speaking, a recursive definition has two parts. First, there are one or more base cases which express termination conditions, and the value of the function when the termination condition holds; it is essential that the base case does not call the function being defined. The recursive case is more general, and defines the function in terms of a 'simpler' call to itself. If we think of a function as being the solution to a computational problem, the recursive case expresses the relationship between the solution to a given problem and the solution of a somewhat smaller or simpler problem of the same kind. To make this idea more concrete, let's look at a few examples.

## Numeric recursion

### The factorial function

In mathematics, especially combinatorics, there is a function used fairly frequently called the factorial function[32]. It takes a single non-negative integer n as an argument, finds all the positive integers less than or equal to n, and multiplies them all together. For example, the factorial of 6 (denoted as $6!$) is $1 \times 2 \times 3 \times 4 \times 5 \times 6 = 720$. This is an interesting function for us, because it is a good candidate to be written in a recursive style.

The idea is to look at the factorials of adjacent numbers:

Example: Factorials of consecutive numbers

Factorial of 6 = 6 × 5 × 4 × 3 × 2 × 1
Factorial of 5 =     5 × 4 × 3 × 2 × 1


Notice how we've lined things up. You can see here that the $6!$ involves the $5!$. In fact, $6!$ is just $6 \times 5!$. Let's look at another example:

Example: Factorials of consecutive numbers

Factorial of 4 = 4 × 3 × 2 × 1
Factorial of 3 =     3 × 2 × 1
Factorial of 2 =         2 × 1
Factorial of 1 =             1


Indeed, we can see that the factorial of any number is just that number multiplied by the factorial of the number one less than it. There's one exception to this: if we ask for the factorial of 0, we don't want to multiply 0 by the factorial of -1. In fact, we just say the factorial of 0 is 1 (we define it to be so; it just is, okay[33]?). So, 0 is the base case for the recursion: when we get to 0 we can immediately say that the answer is 1, without using recursion. We can summarize the definition of the factorial function as follows:

• The factorial of 0 is 1.
• The factorial of any other number is that number multiplied by the factorial of the number one less than it.

We can translate this directly into Haskell:

Example: Factorial function

factorial 0 = 1
factorial n = n * factorial (n-1)


This defines a new function called factorial. The first line says that the factorial of 0 is 1, and the second one says that the factorial of any other number n is equal to n times the factorial of n-1. Note the parentheses around the n-1: without them this would have been parsed as (factorial n) - 1; function application (applying a function to a value) will happen before anything else does (we say that function application binds more tightly than anything else).
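The precedence point can be seen concretely in a small sketch (offByOne is a name made up for this example, used only to contrast the two parses):

```haskell
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

-- Without parentheses, `factorial n - 1` parses as `(factorial n) - 1`,
-- because function application binds more tightly than subtraction.
-- offByOne is a hypothetical name for illustration only.
offByOne :: Integer -> Integer
offByOne n = factorial n - 1
```

So `factorial 3` gives 6, while `offByOne 3` gives 5 — the subtraction happens after the whole recursive call finishes.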

If this seems confusing to you, try to separate the meaning of the definition from the behaviour of the computer while executing a recursive function. The examples above demonstrate a very simple relationship between factorial of a number, n, and the factorial of a slightly smaller number, n-1. This relationship only needs to be understood at a single abstract level.

But understanding the relationship is only one side of the issue. We do need to understand how recursive functions behave. Think of a function call as delegation. The instructions for a recursive function delegate a sub-task. It just so happens that the delegate function uses the same instructions as the delegator. It's just the input data that changes. The only really confusing thing about the behaviour of a recursive function is the fact that each function call uses the same parameter names, and it can be tricky for a person to keep track of where they are.

Let's look at what happens when you execute factorial 3:

• 3 isn't 0, so we calculate the factorial of 2
• 2 isn't 0, so we calculate the factorial of 1
• 1 isn't 0, so we calculate the factorial of 0
• 0 is 0, so we return 1.
• To complete the calculation for factorial 1, we multiply the current number, 1, by the factorial of 0, which is 1, obtaining 1 (1 × 1).
• To complete the calculation for factorial 2, we multiply the current number, 2, by the factorial of 1, which is 1, obtaining 2 (2 × 1 × 1).
• To complete the calculation for factorial 3, we multiply the current number, 3, by the factorial of 2, which is 2, obtaining 6 (3 × 2 × 1 × 1).

We can see how the result of the recursive call is calculated first, then combined using multiplication. Once you see how it can work, you rarely need to "unwind" the recursion like this when reading or composing recursive functions. Compilers have to implement the behaviour, but programmers can work at the abstract level, e.g., the relationship between factorial n and factorial (n-1).

(Note that we end up with the one appearing twice, since the base case is 0 rather than 1; but that's okay since multiplying by one has no effect. We could have designed factorial to stop at 1 if we had wanted to, but it's conventional, and often useful, to have the factorial of 0 defined.)

One more thing to note about the recursive definition of factorial: the order of the two declarations (one for factorial 0 and one for factorial n) is important. Haskell decides which function definition to use by starting at the top and picking the first one that matches. In this case, if we had the general case (factorial n) before the 'base case' (factorial 0), then the general n would match anything passed into it – including 0. So factorial 0 would match the general n case, and the compiler would conclude that factorial 0 equals 0 * factorial (-1), and so on to negative infinity. Definitely not what we want. The lesson here is that one should always list multiple function definitions starting with the most specific and proceeding to the most general.

Exercises
1. What is factorial 5?
2. What about factorial 1000? If you have a scientific calculator (that isn't your computer), try it there first. Does Haskell give you what you expected?
3. What about factorial (-1)? Why does this happen?
4. The double factorial of a number n is the product of every other number from 1 (or 2) up to n. For example, the double factorial of 8 is 8 × 6 × 4 × 2 = 384, and the double factorial of 7 is 7 × 5 × 3 × 1 = 105. Define a doublefactorial function in Haskell.

### A quick aside

This section is aimed at people who are used to more imperative-style languages like e.g. C.

Loops are the bread and butter of imperative languages. For example, the idiomatic way of writing a factorial function in an imperative language would be to use a for loop, like the following (in C):

Example: The factorial function in an imperative language

int factorial(int n) {
    int res;
    for (res = 1; n > 1; n--)
        res *= n;
    return res;
}


This isn't directly possible in Haskell, since changing the value of the variables res and n (a destructive update) would not be allowed. However, you can always translate a loop into an equivalent recursive form. The idea is to make each loop variable in need of updating into a parameter of a recursive function. For example, here is a direct 'translation' of the above loop into Haskell:

Example: Using recursion to simulate a loop

factorial n = factorialWorker n 1 where
    factorialWorker n res | n > 1     = factorialWorker (n - 1) (res * n)
                          | otherwise = res


Obviously this is not the shortest or most elegant way to implement factorial in Haskell (translating directly from an imperative paradigm into Haskell like this rarely is), but it can be nice to know that this sort of translation is always possible.

Another thing to note is that you shouldn't worry that recursion implies poor performance in Haskell. In general, functional programming compilers include a lot of optimisation for recursion, including an important one called tail-call optimisation; remember too that Haskell is lazy – if a calculation isn't needed, it won't be done. We'll learn more about these in later chapters.

### Other recursive functions

As it turns out, there is nothing particularly special about the factorial function; a great many numeric functions can be defined recursively in a natural way. For example, let's think about multiplication. When you were first introduced to multiplication (remember that moment? :)), it may have been through a process of 'repeated addition'. That is, 5 × 4 is the same as summing four copies of the number 5. Of course, summing four copies of 5 is the same as summing three copies, and then adding one more – that is, 5 × 4 = 5 × 3 + 5. This leads us to a natural recursive definition of multiplication:

Example: Multiplication defined recursively

mult n 0 = 0                      -- anything times 0 is zero
mult n 1 = n                      -- anything times 1 is itself
mult n m = (mult n (m - 1)) + n   -- recurse: multiply by one less, and add an extra copy


Stepping back a bit, we can see how numeric recursion fits into the general recursive pattern. The base case for numeric recursion usually consists of one or more specific numbers (often 0 or 1) for which the answer can be immediately given. The recursive case computes the result by recursively calling the function with a smaller argument and using the result in some manner to produce the final answer. The 'smaller argument' used is often one less than the current argument, leading to recursion which 'walks down the number line' (like the examples of factorial and mult above), but it doesn't have to be; the smaller argument could be produced in some other way as well.

Exercises
1. Expand out the multiplication 5 × 4 similarly to the expansion we used above for factorial 3.
2. Define a recursive function power such that power x y raises x to the y power.
3. You are given a function plusOne x = x + 1. Without using any other (+)s, define a recursive function addition such that addition x y adds x and y together.
4. (Harder) Implement the function log2, which computes the integer log (base 2) of its argument. That is, log2 computes the exponent of the largest power of 2 which is less than or equal to its argument. For example, log2 16 = 4, log2 11 = 3, and log2 1 = 0. (Small hint: read the last phrase of the paragraph immediately preceding these exercises.)

## List-based recursion

A lot of functions in Haskell turn out to be recursive, especially those concerning lists[34]. Let us begin by considering the length function, that finds the length of a list:

Example: The recursive definition of length

length :: [a] -> Int
length []     = 0
length (x:xs) = 1 + length xs


Let us explain the algorithm in English to clarify how it works. The type signature of length tells us that it takes any sort of list and produces an Int. The next line says that the length of an empty list is 0; and that, naturally, is the base case. The final line is the recursive case: if a list consists of a first element, x, and xs, the rest of the list, the length of the list is one plus the length of xs (as in the head/tail example in Haskell/Next steps, x and xs are set when the argument list matches the (:) pattern).

How about the concatenation function (++), which joins two lists together? (Some examples of usage are also given, as we haven't come across this function so far in this chapter.)

Example: The recursive (++)

Prelude> [1,2,3] ++ [4,5,6]
[1,2,3,4,5,6]
Prelude> "Hello " ++ "world" -- Strings are lists of Chars
"Hello world"

(++) :: [a] -> [a] -> [a]
[] ++ ys     = ys
(x:xs) ++ ys = x : xs ++ ys


This is a little more complicated than length but not too difficult once you break it down. The type says that (++) takes two lists and produces another. The base case says that concatenating the empty list with a list ys is the same as ys itself. Finally, the recursive case breaks the first list into its head (x) and tail (xs) and says that to concatenate the two lists, concatenate the tail of the first list with the second list, and then tack the head x on the front.

There's a pattern here: with list-based functions, the base case usually involves an empty list, and the recursive case involves passing the tail of the list to our function again, so that the list becomes progressively smaller.

Exercises

Give recursive definitions for the following list-based functions. In each case, think what the base case would be, then think what the general case would look like, in terms of everything smaller than it. (Note that all of these functions are available in Prelude, so you will want to give them different names when testing your definitions in GHCi.)

1. replicate :: Int -> a -> [a], which takes a count and an element and returns the list which is that element repeated that many times. E.g. replicate 3 'a' = "aaa". (Hint: think about what replicate of anything with a count of 0 should be; a count of 0 is your 'base case'.)
2. (!!) :: [a] -> Int -> a, which returns the element at the given 'index'. The first element is at index 0, the second at index 1, and so on. Note that with this function, you're recursing both numerically and down a list[35].
3. (A bit harder.) zip :: [a] -> [b] -> [(a, b)], which takes two lists and 'zips' them together, so that the first pair in the resulting list is the first two elements of the two lists, and so on. E.g. zip [1,2,3] "abc" = [(1, 'a'), (2, 'b'), (3, 'c')]. If either of the lists is shorter than the other, you can stop once either list runs out. E.g. zip [1,2] "abc" = [(1, 'a'), (2, 'b')].

Recursion is used to define nearly all functions to do with lists and numbers. The next time you need a list-based algorithm, start with a case for the empty list and a case for the non-empty list and see if your algorithm is recursive.
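As a quick illustration of that recipe, here is a sketch of a membership test (elem' is a made-up name, chosen to avoid clashing with Prelude's elem):

```haskell
-- Is a value an element of a list? The base case handles the empty list.
elem' :: Eq a => a -> [a] -> Bool
elem' _ []     = False                  -- nothing is in an empty list
elem' y (x:xs) = y == x || elem' y xs   -- check the head, then recurse on the tail
```

The base case is the empty list, and the recursive case passes the tail back to the function — exactly the pattern described above.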

## Don't get TOO excited about recursion...

Although it's very important to have a solid understanding of recursion when programming in Haskell, one rarely has to write functions that are explicitly recursive. Instead, there are all sorts of standard library functions which perform recursion for you in various ways, and one usually ends up using those instead. For example, a much simpler way to implement the factorial function is as follows:

Example: Implementing factorial with a standard library function

factorial n = product [1..n]


Almost seems like cheating, doesn't it? :) This is the version of factorial that most experienced Haskell programmers would write, rather than the explicitly recursive version we started out with. Of course, the product function is using some list recursion behind the scenes[36], but writing factorial in this way means you, the programmer, don't have to worry about it.

## Summary

Recursion is the practice of defining a function in terms of the function itself. It nearly always comes in two parts: a base case and a recursive case. Recursion is especially useful for dealing with list- and number-based functions.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.



By now we have already met the basic tools for working with lists. We can build lists up from the cons operator (:) and the empty list [], and we can take them apart by using a combination of recursion and pattern matching. In this chapter and the next we will consider in more depth techniques for list processing and, while doing so, discover a bit of new notation and have a first taste of characteristic Haskell features such as infinite lists, list comprehensions and higher-order functions.

Note

Throughout this chapter you will read and write functions which sum, subtract and multiply elements of lists. For simplicity's sake we will pretend the list elements have to be of type Integer. However, by recalling the discussions in the Type basics II chapter, you will realize that assumption is not necessary at all! Therefore, we suggest that, as an exercise of sorts, you figure out what the type signatures of such functions would look like if we made them polymorphic, allowing the list elements to have any type in the class Num. To check your signatures, just omit them temporarily, load the functions into GHCi, use :t and let type inference guide you.

## Rebuilding lists

We'll start by writing and analysing a function to double every element of a list of integers. For our current purposes, it is enough to say that it takes a list of Integers and evaluates to another list of Integers:

doubleList :: [Integer] -> [Integer]


Then we must write the function definition. As usual in such cases, we will go for a recursive definition:

doubleList [] = []
doubleList (n:ns) = (2 * n) : doubleList ns


Here, the base case is for an empty list; and it just evaluates to an empty list. As for the general case, doubleList builds up a new list by using (:). The first element of this new list is twice the head of the argument, and the rest of the result is obtained by a recursive call – the application of doubleList to the tail. If the tail happens to be an empty list, the base case will be invoked and recursion stops[37].

By understanding the recursive definition we can picture what actually happens when we evaluate an expression such as

doubleList [1,2,3,4]


We can work it out longhand by substituting the argument into the function definition, just like schoolbook algebra:

doubleList (1:[2,3,4]) = (1*2) : doubleList (2 : [3,4])
                       = (1*2) : (2*2) : doubleList (3 : [4])
                       = (1*2) : (2*2) : (3*2) : doubleList (4 : [])
                       = (1*2) : (2*2) : (3*2) : (4*2) : doubleList []
                       = (1*2) : (2*2) : (3*2) : (4*2) : []
                       = 2 : 4 : 6 : 8 : []
                       = [2, 4, 6, 8]


The bottom line is that, in effect, we rebuilt the original list replacing every element by its double.

One important thing to notice is that in this longhand evaluation exercise the moment at which we choose to evaluate the multiplications does not affect the result[38]. Had we done them immediately after each recursive call of doubleList it would have made no difference. This reflects an important property of Haskell: it is a pure functional programming language. Because evaluation order can never change the result, it is mostly left to the compiler to decide when to actually evaluate things. Since Haskell is also a lazy evaluation language, evaluation is usually deferred until the value is really needed, but the compiler is free to evaluate things sooner if this will improve efficiency. From the programmer's point of view evaluation order rarely matters [39].

### Generalizing

Suppose that we needed, while solving a given problem, not only a function to double a list but also one that tripled it. In principle, we could follow the same strategy and define:

tripleList :: [Integer] -> [Integer]
tripleList [] = []
tripleList (n:ns) = (3 * n) : tripleList ns


Both doubleList and tripleList have very limited applicability. Every time we needed to multiply the elements of a list by 4, 8, 17, etc., we would have to write a new list-multiplying function, and all of them would do nearly the same thing. An obvious improvement would be generalizing our function to allow multiplication by any number. Doing so requires a function that takes an Integer multiplicand as well as a list of Integers. Here is a way to define it:

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList _ [] = []
multiplyList m (n:ns) = (m*n) : multiplyList m ns


This example deploys _ as a "don't care" pattern. The multiplicand is not used for the base case, so instead of giving it a name (like m, n or ns) it is explicitly ignored.

multiplyList solves our current problem for any integer number:

Prelude> multiplyList 17 [1,2,3,4]
[17,34,51,68]


In particular, it is trivial to rewrite our earlier functions in terms of multiplyList:

doubleList :: [Integer] -> [Integer]
doubleList xs = multiplyList 2 xs

tripleList :: [Integer] -> [Integer]
tripleList xs = multiplyList 3 xs

Exercises

Write the following functions and test them out. Don't forget the type signatures.

1. takeInt returns the first n items in a list. So takeInt 4 [11,21,31,41,51,61] returns [11,21,31,41].
2. dropInt drops the first n items in a list and returns the rest. so dropInt 3 [11,21,31,41,51] returns [41,51].
3. sumInt returns the sum of the items in a list.
4. scanSum adds the items in a list and returns a list of the running totals. So scanSum [2,3,4,5] returns [2,5,9,14].
5. diffs returns a list of the differences between adjacent items. So diffs [3,5,6,8] returns [2,1,2]. (Hints: one solution involves writing an auxiliary function which takes two lists and calculates the difference between corresponding elements. Alternatively, you might explore the fact that lists with at least two elements can be matched to a (x:y:ys) pattern.)
The first three functions are in Prelude, under the names take, drop, and sum.

## Generalizing even further

We just wrote a function which can multiply the elements of a list by any Integer. Not bad, but can we do any better?

Let us begin by rewriting multiplyList in a rather artificial manner:

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList _ [] = []
multiplyList m (n:ns) = (multiplyByM n) : multiplyList m ns
    where
    multiplyByM x = m * x


The point of this rewrite is making it clear that while multiplyList is much more useful than doubleList it still resembles its predecessor in one aspect. Early on, the problem was being constrained to multiplying the elements by 2 or some other hard-coded number; now, we could justifiably complain about being constrained to apply multiplyByM to the list elements. What if we wanted to do sumWithM instead; or, for that matter, something entirely different (say, computing the square of each element)? We would be back to square one, having to rewrite the recursive function.

A key functionality of Haskell will save the day. Since the solution can be quite surprising, we will approach it in a somewhat roundabout way. Consider the type signature of multiplyList:

multiplyList :: Integer -> [Integer] -> [Integer]


The first novelty is that the -> arrow in type signatures is right associative. That means we can read this signature as:

multiplyList :: Integer -> ( [Integer] -> [Integer] )


How should we understand that? It tells us multiplyList is a function that takes one Integer argument and evaluates to another function, which in turn takes a list of Integers and returns another list of Integers. In practice, that means we can rewrite the following definition...

evens = multiplyList 2 [1,2,3,4] -- [2,4,6,8]


...like this:

evens = (multiplyList 2) [1,2,3,4]


The expression within parentheses, (multiplyList 2), is just an [Integer] -> [Integer] function which is then applied to [1,2,3,4]. Now, (multiplyList 2) is of course the same as doubleList; and to drive the point home we can redefine that as:

doubleList :: [Integer] -> [Integer]
doubleList = multiplyList 2


The expression on the right side is perfectly well-formed and stands on its own, so writing the second argument of multiplyList, and thus the argument of doubleList, is strictly unnecessary[40].

The trickery above illustrates that functions in Haskell behave much like any other value - we can have them returned from other functions, and even define them without mentioning their arguments, as if they were a normal constant. What if we could have functions as arguments as well? That way, we could, based on the recursive skeleton of multiplyList, write a function that took, instead of the argument m, a function to replace multiplyByM. A function like this one:

applyToIntegers :: (Integer -> Integer) -> [Integer] -> [Integer]
applyToIntegers _ [] = []
applyToIntegers f (n:ns) = (f n) : applyToIntegers f ns


This function does indeed work, and so we can apply any Integer -> Integer function to the elements of a list of Integers. Now, since the multiplication operator (*) is just a function with two arguments, the same tricks apply to it; in fact, had we defined the following function it would do exactly what its name suggests:

multiplyByM m = ((*) m)


And, finally, we can rewrite multiplyList in terms of applyToIntegers:

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList m = applyToIntegers ((*) m)


Or, equivalently:

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList m list = applyToIntegers ((*) m) list


## The map function

While applyToIntegers has type (Integer -> Integer) -> [Integer] -> [Integer], there is nothing specific to integers in its algorithm. Therefore, we could define versions such as applyToChars, applyToStrings and applyToLists just by changing the type signature. That would be horribly wasteful, though: we did not climb all the way up to this point just to need a different function for each type! Furthermore, nothing prevents us from changing the signature to, for instance, (Integer -> String) -> [Integer] -> [String]; thus giving a function that takes a function Integer -> String and returns a function [Integer] -> [String] which applies the function originally passed as argument to each element of an Integer list.

The final step of generalization, then, is to make a fully polymorphic version of applyToIntegers, with signature (a -> b) -> [a] -> [b]. Such a function already exists in Prelude: it is called map and defined as:

map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = (f x) : map f xs


Using it, we can effortlessly implement functions as different as...

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList m = map ((*) m)


... and...

heads :: [[a]] -> [a]
heads = map head

Prelude> heads [[1,2,3,4],[4,3,2,1],[5,10,15]]
[1,4,5]


map solves the problem of applying a function to each element of a list[41] in a completely general way by allowing us to choose which function to use. Functions like map which take other functions as arguments are called higher-order functions, and they are extremely useful. In particular, we will meet some other important higher-order functions used for list processing in the next chapter.
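Since the signature is (a -> b) -> [a] -> [b], the element type may change along the way. For instance, mapping show over integers yields strings (labels is a made-up name for this sketch):

```haskell
-- map can change the element type: here Integer becomes String.
labels :: [Integer] -> [String]
labels = map show
```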

Exercises
1. Use map to build functions that, given a list xs of Ints, return:
• A list that is the element-wise negation of xs.
• A list of lists of Ints xss that, for each element of xs, contains the divisors of that element. You can use the following function to get the divisors:
divisors p = [ f | f <- [1..p], p `mod` f == 0 ]

• The element-wise negation of xss.
2. Implement a Run Length Encoding (RLE) encoder and decoder.
• The idea of RLE is simple; given some input:
"aaaabbaaa"

compress it by taking the length of each run of characters: (4,'a'), (2,'b'), (3,'a')
• The concat and group functions might be helpful. In order to use group, you will need to import the Data.List module. You can access this by typing :m Data.List at the ghci prompt, or by adding import Data.List to your Haskell source code file.
• What is the type of your encode and decode functions?
• How would you convert the list of tuples (e.g. [(4,'a'), (6,'b')]) into a string (e.g. "4a6b")?
• (bonus) Assuming numeric characters are forbidden in the original string, how would you parse that string back into a list of tuples?

## Tips and Tricks

Before we carry on with more list processing, let us make a few miscellaneous useful observations about lists in Haskell.

### Dot Dot Notation

Haskell has a convenient shorthand for writing ordered lists of regularly-spaced integers. Some examples to illustrate it:

Code             Result
----             ------
[1..10]          [1,2,3,4,5,6,7,8,9,10]
[2,4..10]        [2,4,6,8,10]
[5,4..1]         [5,4,3,2,1]
[1,3..10]        [1,3,5,7,9]


The same notation can be used with characters as well, and even with floating point numbers - though the latter is not necessarily a good idea due to rounding errors. Try this:

[0,0.1 .. 1]


Additionally, the notation only works with sequences with fixed differences between consecutive elements. For instance, you should not write...

[0,1,1,2,3,5,8..100]


... and expect to magically get back the rest of the Fibonacci series[42].
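A couple of further sketches of the notation in action, with characters and with a non-unit step (the names letters and countdown are made up here):

```haskell
letters :: String
letters = ['a'..'e']      -- "abcde"

countdown :: [Int]
countdown = [10,8..1]     -- steps down by 2 and stops before passing 1
```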

### Infinite Lists

One of the most mind-bending things about Haskell lists is that they are allowed to be infinite. For example, the following generates the infinite list of integers starting with 1:

[1..]


(If you try this in GHCi, remember you can stop an evaluation with Ctrl-c).

The same effect could be achieved with a recursive function:

intsFrom n = n : intsFrom (n+1) -- note there is no base case!
positiveInts = intsFrom 1


This works because Haskell uses lazy evaluation: it never actually evaluates more than it needs at any given moment. In most cases an infinite list can be treated just like an ordinary one. The program will only go into an infinite loop when evaluation would actually require all the values in the list. Examples of this include sorting or printing the entire list. However:

evens = doubleList [1..]


will define "evens" to be the infinite list [2,4,6,8..]. And you can pass "evens" into other functions, and unless they actually need to evaluate the end of the list it will all just work.

Infinite lists are quite useful in Haskell. Often it's more convenient to define an infinite list and then take the first few items than to create a finite list. Functions that process two lists in parallel generally stop with the shortest, so making the second one infinite avoids having to find the length of the first. An infinite list is often a handy alternative to the traditional endless loop at the top level of an interactive program.
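A typical idiom is taking a prefix of an infinite list (firstEvens is a name chosen for this sketch):

```haskell
-- Thanks to laziness, only the first five elements are ever evaluated,
-- even though [1..] never ends.
firstEvens :: [Integer]
firstEvens = take 5 (map (2*) [1..])
```

Here `firstEvens` evaluates to [2,4,6,8,10] without the program ever trying to build the whole infinite list.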

### A note about head and tail

Given the choice of using either the ( : ) pattern or head/tail to split lists, pattern matching is almost always preferable. While it may be tempting to use head and tail due to simplicity and terseness, it is all too easy to forget they fail on empty lists - and runtime crashes are never a good thing. While the Prelude function null :: [a] -> Bool provides a sane way of checking for empty lists without pattern matching (it returns True for empty lists and False otherwise), matching an empty list tends to be cleaner and clearer than the corresponding if-then-else expression.
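To sketch the pattern-matching alternative, here is a head-like function that handles the empty list explicitly instead of crashing (firstOrDefault is a made-up name):

```haskell
-- Returns the head of the list, or a default value for the empty list.
-- Unlike head, this never fails at runtime.
firstOrDefault :: a -> [a] -> a
firstOrDefault def []    = def
firstOrDefault _   (x:_) = x
```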

Exercises
1. With respect to your solutions to the first set of exercises in this chapter, is there any difference between scanSum (takeInt 10 [1..]) and takeInt 10 (scanSum [1..])?
2. In the final block of exercises of the previous chapter you implemented the (!!) operator. Redo it, but this time using head and tail. (If by any chance you did it with head and tail back then, redo it with pattern matching.)
3. Write functions that, when applied to lists, give the last element of the list and the list with the last element dropped. That can be done either with pattern matching or by using head and tail only. We suggest you try, and compare, both approaches.
This functionality is provided by Prelude through the last and init functions.



# List processing



## Folds

A fold applies a function to a list in a way similar to map, but accumulates a single result instead of a list.

Take, for example, a function like sum, which might be implemented as follows:

Example: sum

sum :: [Integer] -> Integer
sum []     = 0
sum (x:xs) = x + sum xs


or product:

Example: product

 product :: [Integer] -> Integer
product []     = 1
product (x:xs) = x * product xs


or concat, which takes a list of lists and joins (concatenates) them into one:

Example: concat

 concat :: [[a]] -> [a]
concat []     = []
concat (x:xs) = x ++ concat xs


There is a certain pattern of recursion common to all of these examples. This pattern is known as a fold, possibly from the idea that a list is being "folded up" into a single value, or that a function is being "folded between" the elements of the list.

The Standard Prelude defines four fold functions: foldr, foldl, foldr1 and foldl1.

### foldr

The right-associative foldr folds up a list from the right, that is, it walks from the last to the first element of the list and applies the given function to each of the elements and the accumulator, the initial value of which has to be set:

foldr            :: (a -> b -> b) -> b -> [a] -> b
foldr f acc []     = acc
foldr f acc (x:xs) = f x (foldr f acc xs)


The first argument is a function with two arguments, the second is a "zero" value for the accumulator, and the third is the list to be folded.

For example, in sum, f is (+), and acc is 0, and in concat, f is (++) and acc is []. In many cases, like all of our examples so far, the function passed to a fold will have both its arguments be of the same type, but this is not necessarily the case in general.
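Plugging those arguments in directly gives compact fold-based definitions. (The primed names below are ours, just to avoid clashing with the Prelude versions.)

```haskell
-- sum and concat, rewritten as right folds
sum' :: [Integer] -> Integer
sum' = foldr (+) 0

concat' :: [[a]] -> [a]
concat' = foldr (++) []
```

These behave exactly like the explicitly recursive definitions above: sum' [1,2,3] is 6, and concat' [[1,2],[3]] is [1,2,3].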

What foldr f acc xs does is to replace each cons (:) in the list xs with the function f, and the empty list at the end with acc. That is,

a : b : c : []


becomes

f a (f b (f c acc))


This is perhaps most elegantly seen by picturing the list data structure as a tree:

  :                         f
 / \                       / \
a   :       foldr f acc   a   f
   / \    ------------->     / \
  b   :                     b   f
     / \                       / \
    c  []                     c   acc


It is fairly easy to see with this picture that foldr (:) [] is just the identity function on lists (that is, the function which returns its argument unmodified).
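As a quick sanity check of that claim (the name idList is ours, for illustration only):

```haskell
-- Replacing every (:) with (:) and the final [] with []
-- leaves the list unchanged.
idList :: [a] -> [a]
idList = foldr (:) []
```

idList [1,2,3] evaluates to [1,2,3], as expected.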

### foldl

The left-associative foldl processes the list in the opposite direction, starting at the left side with the first element, and proceeding to the last one step by step:

foldl            :: (a -> b -> a) -> a -> [b] -> a
foldl f acc []     =  acc
foldl f acc (x:xs) =  foldl f (f acc x) xs


So brackets in the resulting expression accumulate on the left. Our list above, after being transformed by foldl f acc, becomes:

f (f (f acc a) b) c


The corresponding trees look like:

  :                            f
 / \                          / \
a   :       foldl f acc      f   c
   / \    ------------->    / \
  b   :                    f   b
     / \                  / \
    c  []                acc a


Note

Technical Note: The left associative fold is tail-recursive, that is, it recurses immediately, calling itself. For this reason the compiler will optimise it to a simple loop, making it much more efficient than foldr. However, Haskell is a lazy language, and so the calls to f will by default be left unevaluated, building up an expression in memory whose size is linear in the length of the list, which is exactly what we hoped to avoid in the first place. To get back this efficiency, there is a version of foldl which is strict, that is, it forces the evaluation of f immediately, called foldl'. Note the single quote character: this is pronounced "fold-ell-tick". A tick is a valid character in Haskell identifiers. foldl' can be found in the library Data.List (which can be imported by adding import Data.List to the beginning of a source file).

As a rule of thumb you should use foldr on lists that might be infinite or where the fold is building up a data structure, and foldl' if the list is known to be finite and comes down to a single value. foldl (without the tick) should rarely be used at all, unless the list is not too long, or memory usage isn't a problem.
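The difference can be seen with a short test program (the million-element list is just an arbitrary size for demonstration):

```haskell
import Data.List (foldl')

main :: IO ()
main = do
  -- foldl' forces the accumulator at every step, so this sum runs in
  -- constant space:
  print (foldl' (+) 0 [1 .. 1000000 :: Integer])  -- prints 500000500000
  -- The same call with plain foldl would first build the nested thunk
  -- (...((0 + 1) + 2)...) + 1000000 in memory before evaluating it.
```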

### foldr1 and foldl1

As previously noted, the type signature of foldr makes it quite possible for the list elements and result to be of different types. For example, "read" is a function that takes a string and converts it into some type (the type system is smart enough to figure out which one). In this case we convert it into a Float.

Example: The list elements and results can have different types

addStr :: String -> Float -> Float
addStr str x = read str + x

sumStr :: [String] -> Float
sumStr = foldr addStr 0.0

If you substitute the types Float and String for the type variables a and b in the type of foldr you will see that this is type correct.

There is also a variant called foldr1 ("fold - arr - one") which dispenses with an explicit zero by taking the last element of the list instead:

foldr1           :: (a -> a -> a) -> [a] -> a
foldr1 f [x]     =  x
foldr1 f (x:xs)  =  f x (foldr1 f xs)
foldr1 _ []      =  error "Prelude.foldr1: empty list"


And foldl1 as well:

foldl1           :: (a -> a -> a) -> [a] -> a
foldl1 f (x:xs)  =  foldl f x xs
foldl1 _ []      =  error "Prelude.foldl1: empty list"


Note: There is additionally a strict version of foldl1 called foldl1' in the Data.List library.

Notice that in this case all the types have to be the same, and that an empty list is an error. These variants are occasionally useful, especially when there is no obvious candidate for z, but you need to be sure that the list is not going to be empty. If in doubt, use foldr or foldl'.

### folds and laziness

One good reason that right-associative folds are more natural to use in Haskell than left-associative ones is that right folds can operate on infinite lists, which are not so uncommon in Haskell programming. If the input function f only needs its first parameter to produce the first part of the output, then everything works just fine. However, a left fold must call itself recursively until it reaches the end of the input list; this is inevitable, because the recursive call is not made in an argument to f. With an infinite input list it never reaches that end, and so the program will spin endlessly in an infinite loop.

As a toy example of how this can work, consider a function echoes taking a list of integers and producing a list where if the number n occurs in the input list, then n replicated n times will occur in the output list. We will make use of the prelude function replicate: replicate n x is a list of length n with x the value of every element.

We can write echoes as a foldr quite handily:

echoes = foldr (\x xs -> (replicate x x) ++ xs) []


(Note: This is very compact thanks to the  \x xs ->  syntax. Instead of defining a function somewhere else and passing it to foldr we provided the definition in situ; x and xs being the arguments and the right-hand side of the definition being what is after the ->)

or, equally handily, as a foldl:

echoes = foldl (\xs x -> xs ++ (replicate x x)) []


but only the first definition works on an infinite list like [1..]. Try it! (If you try this in GHCi, remember you can stop an evaluation with Ctrl-c, but you have to be quick and keep an eye on the system monitor or your memory will be consumed in no time and your system will hang.)
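A less hazardous way to see the foldr version cope with an infinite list is to request only a finite prefix of the result with take (restating the definition with an explicit signature so the snippet is self-contained):

```haskell
echoes :: [Int] -> [Int]
echoes = foldr (\x xs -> replicate x x ++ xs) []

main :: IO ()
main = print (take 6 (echoes [1..]))  -- [1,2,2,3,3,3]
```

Since foldr produces its output lazily, take stops the computation after six elements, and the infinite tail of [1..] is never touched.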

As a final example, another thing that you might notice is that map itself can be implemented as a fold:

map f = foldr (\x xs -> f x : xs) []


Folding takes a little time to get used to, but it is a fundamental pattern in functional programming and eventually becomes very natural. Any time you want to traverse a list and build up a result from its members, you likely want a fold.

Exercises
1. Define the following functions recursively (like the definitions for sum, product and concat above), then turn them into a fold:
• and :: [Bool] -> Bool, which returns True if a list of Bools are all True, and False otherwise.
• or :: [Bool] -> Bool, which returns True if any of a list of Bools are True, and False otherwise.
2. Define the following functions using foldl1 or foldr1:
• maximum :: Ord a => [a] -> a, which returns the maximum element of a list (hint: max :: Ord a => a -> a -> a returns the maximum of two values).
• minimum :: Ord a => [a] -> a, which returns the minimum element of a list (hint: min :: Ord a => a -> a -> a returns the minimum of two values).
3. Use a fold (which one?) to define reverse :: [a] -> [a], which returns a list with the elements in reverse order.
Note that all of these are Prelude functions, so they will be always close at hand when you need them. (Also, that means you will want to use slightly different names for testing your answers in GHCi.)

## Scans

A "scan" is much like a cross between a map and a fold. Folding a list accumulates a single return value, whereas mapping puts each item through a function with no accumulation. A scan does both: it accumulates a value like a fold, but instead of returning a final value it returns a list of all the intermediate values.

The Standard Prelude contains four scan functions:

scanl   :: (a -> b -> a) -> a -> [b] -> [a]


This accumulates the list from the left, and the second argument becomes the first item in the resulting list. So scanl (+) 0 [1,2,3] = [0,1,3,6].

scanl1  :: (a -> a -> a) -> [a] -> [a]


This is the same as scanl, but uses the first item of the list as a zero parameter. It is what you would typically use if the input and output items are the same type. Notice the difference in the type signatures. scanl1 (+) [1,2,3] = [1,3,6].

scanr   :: (a -> b -> b) -> b -> [a] -> [b]
scanr1  :: (a -> a -> a) -> [a] -> [a]


These two functions are the exact counterparts of scanl and scanl1. They accumulate the totals from the right. So:

scanr (+) 0 [1,2,3] = [6,5,3,0]
scanr1 (+) [1,2,3] = [6,5,3]

Exercises
1. Write your own definition of scanr, first using recursion, and then using foldr. Do the same for scanl first using recursion then foldl.
2. Define the following functions:
• factList :: Integer -> [Integer], which returns a list of factorials from 1 up to its argument. For example, factList 4 = [1,2,6,24].

## filter

A very common operation performed on lists is filtering, which is generating a new list composed only of elements of the first list that meet a certain condition. One simple example of that would be taking a list of integers and making from it a list which only retains its even numbers.

retainEven :: [Int] -> [Int]
retainEven [] = []
retainEven (n:ns) =
  -- mod n 2 computes the remainder for the integer division of n by 2,
  -- so if it is zero the number is even
  if mod n 2 == 0
    then n : retainEven ns
    else retainEven ns


That works fine, but it is a slightly verbose solution. It would be nice to have a more concise way to write the filter function. Also, it would certainly be very useful to be able to generalize the filtering operation – that is, make it capable of filtering a list using any boolean condition we'd like. To help us with both of these issues Prelude provides a filter function. filter has the following type signature:

filter :: (a -> Bool) -> [a] -> [a]


That means it evaluates to a list when given two arguments, namely an (a -> Bool) function which carries the actual test of the condition for the elements of the list and the list to be filtered. In order to write retainEven using filter, we need to state the condition as an auxiliary (a -> Bool) function, like this one:

isEven :: Int -> Bool
isEven n = ((mod n 2) == 0)


And then retainEven becomes simply:

retainEven ns = filter isEven ns


It can be made even more terse by writing it in point-free style:

retainEven = filter isEven


This is just like what we demonstrated before for map and the folds. Like filter, those take another function as argument; and using them point-free emphasizes this "functions-of-functions" aspect.
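The test doesn't even have to be a named function; an anonymous function works just as well. For instance, a hypothetical retainPositives could be written as:

```haskell
retainPositives :: [Int] -> [Int]
retainPositives = filter (\n -> n > 0)
-- equivalently, with an operator section: filter (> 0)
```

retainPositives [-2, 3, 0, 7] evaluates to [3,7].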

## List comprehensions

An additional tool for list processing is the list comprehension, a powerful, concise and expressive syntactic construct. One simple way we can use list comprehensions is as syntactic sugar for filtering. So, instead of using the Prelude filter, we could write retainEven like this:

retainEven es = [ n | n <- es , isEven n ]


This compact syntax may look a bit intimidating at first, but it is simple to break down. One possible way to read it would be:

• (Starting from the middle) Take the list es and draw (the "<-") each of its elements as a value n.
• (After the comma) For each drawn n test the boolean condition isEven n.
• (Before the vertical bar) If (and only if) the boolean condition is satisfied, prepend n to the new list being created (note the square brackets around the whole expression).

Thus if es is equal to [1,2,3,4], then we would get back the list [2,4]. 1 and 3 were not drawn because isEven n evaluates to False for them.

The real power of list comprehensions, though, comes from the fact they are easily extensible. Firstly, we can use as many tests as we wish (even zero!). Multiple conditions are written as a comma-separated list of expressions (which should evaluate to a Boolean, of course). For a simple example, suppose we want to modify retainEven so that only numbers larger than 100 are retained:

retainLargeEvens :: [Int] -> [Int]
retainLargeEvens es = [ n | n <- es , isEven n, n > 100 ]


Furthermore, we are not limited to using n as the element to be prepended when generating a new list. Instead, we could place any expression before the vertical bar (if it is compatible with the type of the list, of course). For instance, if we wanted to subtract one from every even number, all it would take is:

evensMinusOne es = [ n - 1 | n <- es , isEven n ]


In effect, that means the list comprehension syntax incorporates the functionality of map as well as of filter. Now that is conciseness! (And conciseness that does not sacrifice readability, at that.)
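For instance, running evensMinusOne over [1..10] filters down to the even numbers and subtracts one from each in a single construct (isEven repeated here from the previous section so the snippet stands alone):

```haskell
isEven :: Int -> Bool
isEven n = mod n 2 == 0

evensMinusOne :: [Int] -> [Int]
evensMinusOne es = [ n - 1 | n <- es, isEven n ]

main :: IO ()
main = print (evensMinusOne [1..10])  -- [1,3,5,7,9]
```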

To further sweeten things, the left arrow notation in list comprehensions can be combined with pattern matching. For example, suppose we had a list of (Int, Int) tuples and we would like to construct a list with the first element of every tuple whose second element is even. Using list comprehensions, we might write it as follows:

firstForEvenSeconds :: [(Int, Int)] -> [Int]
firstForEvenSeconds ps = [ fst p | p <- ps, isEven (snd p) ] -- here, p is for pairs.


Patterns can make what the function is doing more obvious:

firstForEvenSeconds ps = [ x | (x,y) <- ps, isEven y ]


As in other cases, arbitrary expressions may be used before the |. If we wanted a list with the double of those first elements:

doubleOfFirstForEvenSeconds :: [(Int, Int)] -> [Int]
doubleOfFirstForEvenSeconds ps = [ 2 * x | (x,y) <- ps, isEven y ]


Note that the function code is actually shorter than its descriptive name.

There are even more possible tricks:

allPairs :: [(Int, Int)]
allPairs = [ (x, y) | x <- [1..4], y <- [5..8] ]


This comprehension draws from two lists, and generates all 4 * 4 = 16 possible (x, y) pairs with the first element in [1..4] and the second in [5..8]. In the final list of pairs, the first elements will be those generated with the first element of the first list (here, 1), then those with the second element of the first list, and so on. In this example, the full list is (linebreaks added for clarity):

Prelude> [(x,y)|x<-[1..4],y<-[5..8]]
[(1,5),(1,6),(1,7),(1,8),
(2,5),(2,6),(2,7),(2,8),
(3,5),(3,6),(3,7),(3,8),
(4,5),(4,6),(4,7),(4,8)]


Note that we didn't do any filtering here; but we could easily add a condition to restrict the combinations that go into the final list:

somePairs = [ (x, y) | x <- [1..4], y <- [5..8], x + y > 8 ]


This list only has the pairs with the sum of elements larger than 8, starting with (1,8), then (2,7) and so forth.

Exercises
1. Write a returnDivisible :: Int -> [Int] -> [Int] function which filters a list of integers retaining only the numbers divisible by the integer passed as first argument. For integers x and n, x is divisible by n if (mod x n) == 0 (note that the test for evenness is a specific case of that).
• Write - using list comprehension syntax, a single function definition and no if, case and similar constructs - a [[Int]] -> [[Int]] which, from a list of lists of Int, returns a list of the tails of those lists using, as filtering condition, that the head of each [Int] must be larger than 5. Also, your function must not trigger an error when it meets an empty [Int], so you'll need to add an additional test to detect emptiness.
• Does order matter when listing the conditions for list comprehension? (You can find it out by playing with the function you wrote for the first part of the exercise.)
2. Over this section we've seen how list comprehensions are essentially syntactic sugar for filter and map. Now work in the opposite direction and define alternative versions of the filter and map using the list comprehension syntax.
3. Rewrite doubleOfFirstForEvenSeconds using filter and map instead of list comprehension.


# Type declarations



You're not restricted to working with just the types provided by default with the language. Haskell allows you to define new types. Reasons for doing so include that:

• It allows for code to be written in terms of the problem being solved, making programs easier to design, write and understand.
• It allows for handling related pieces of data together in ways more convenient and meaningful than say, simply putting and getting values from lists or tuples.
• It makes it possible to use two powerful features of Haskell, pattern matching and the type system, to their fullest extent by making them work with your custom types.

How these things are achieved and why they are so important will gradually become clear as you progress on this course.

Haskell has three basic ways to declare a new type:

• The data declaration, which defines new data types.
• The type declaration for type synonyms.
• The newtype declaration, which is a cross between the other two.

In this chapter, we will discuss the most essential way, data. We'll also mention type, which is a convenience feature. You'll find out about newtype later on, but don't worry too much about it; it's there mainly for optimisation.

## data and constructor functions

data is used to define new data types using existing ones as building blocks. Here's a data structure for elements in a simple list of anniversaries:

data Anniversary = Birthday String Int Int Int       -- name, year, month, day
                 | Wedding String String Int Int Int -- spouse name 1, spouse name 2, year, month, day


A data declaration like this has two effects:

• First, it declares a new data type Anniversary, which can be either a Birthday or a Wedding. A Birthday contains one string and three integers and a Wedding contains two strings and three integers. The definitions of the two possibilities are separated by the vertical bar. The text after each "--" is just a comment, explaining to readers of the code what the fields actually mean.
• Moreover, it defines two constructor functions for Anniversary, called, appropriately enough, Birthday and Wedding. These functions provide a way to build a new Anniversary, be it a "Birthday" or a "Wedding".

Types defined by data declarations are often referred to as algebraic data types. We will eventually return to this topic to address the theory behind such a name and the practical implications of said theory.

As usual with Haskell the case of the first letter is important: type names and constructor functions must always start with capital letters. Other than this syntactic detail, constructor functions work pretty much like the "conventional" functions we met so far. In fact, if you use :t in ghci to query the type of, say, Birthday, you'll get:

*Main> :t Birthday
Birthday :: String -> Int -> Int -> Int -> Anniversary


Meaning it's just a function which takes one String and three Int as arguments and evaluates to an Anniversary. This anniversary will contain the four arguments we passed as specified by the Birthday constructor.

Calling the constructor functions is no different from calling other functions. For example, suppose we have John Smith born on 3rd July 1968:

johnSmith :: Anniversary
johnSmith = Birthday "John Smith" 1968 7 3


He married Jane Smith on 4th March 1987:

smithWedding :: Anniversary
smithWedding = Wedding "John Smith" "Jane Smith" 1987 3 4


These two anniversaries can, for instance, be put in a list:

anniversariesOfJohnSmith :: [Anniversary]
anniversariesOfJohnSmith = [johnSmith, smithWedding]


Or you could just as easily have called the constructors straight away when building the list (although the resulting code looks a bit cluttered).

anniversariesOfJohnSmith = [Birthday "John Smith" 1968 7 3, Wedding "John Smith" "Jane Smith" 1987 3 4]


## Deconstructing types

So far we've seen how to define a new type and create values of that type. If the new data types are to be of any use, however, we must have a way to access its contents, that is, the values we used at construction. For instance, one very basic operation with the anniversaries defined above would be extracting the names and dates they contain as a String. Let us see how a function which does that, showAnniversary, could be written and analyse what's going on in it (you'll see that, for the sake of code clarity, we used an auxiliary showDate function but let's ignore it for a moment and focus on showAnniversary).

showDate :: Int -> Int -> Int -> String
showDate y m d = show y ++ "-" ++ show m ++ "-" ++ show d

showAnniversary :: Anniversary -> String

showAnniversary (Birthday name year month day) =
   name ++ " born " ++ showDate year month day

showAnniversary (Wedding name1 name2 year month day) =
   name1 ++ " married " ++ name2 ++ " on " ++ showDate year month day


This example shows what is truly special about constructor functions: they can also be used to deconstruct the values they build. showAnniversary takes a single argument of type Anniversary. Instead of just providing a name for the argument on the left side of the definition, however, we specify one of the constructor functions and give names to each argument of the constructor – which correspond to the contents of the Anniversary. A more formal way of describing this "giving names" process is to say we are binding variables – "binding" is being used in the sense of assigning a variable to each of the values so that we can refer to them on the right side of the function definition.

For such an approach to be able to handle both "Birthday" and "Wedding" Anniversaries, we needed to provide two function definitions, one for each constructor. When showAnniversary is called, if the argument is a Birthday Anniversary the first version is used and the variables name, year, month and day are bound to its contents. If the argument is a Wedding Anniversary then the second version is used and the variables are bound in the same way. This process of using a different version of the function depending on the type of constructor used to build the Anniversary is pretty much like what happens when we use a case statement or define a function piece-wise.

An important observation on syntax is that the parentheses around the constructor name and the bound variables are mandatory; otherwise the compiler or interpreter would not take them as a single argument. Also, it is important to have it absolutely clear that the expression inside the parentheses is not a call to the constructor function, even though it may look just like one.

Exercises

Note: The solution of this exercise is given near the end of the chapter, so we recommend that you attempt it before getting there.
Reread the function definitions above, now having a closer look at the showDate helper function. In spite of us saying it was provided "for the sake of code clarity", there is a certain clumsiness in the way it is used. You have to pass three separate Int arguments to it, but these arguments are always linked to each other in that they are part of a single date. It would make no sense to do things like passing the year, month and day values of the Anniversary in a different order, or to pass the month value twice and omit the day.

• Could we use what we've seen in this chapter so far to reduce this clumsiness?
• Declare a Date type which is composed of three Int, corresponding to year, month and date. Then, rewrite showDate so that it uses the new data type, and picture the changes needed in showAnniversary and the Anniversary for them to make use of it as well.

## type for making type synonyms

As mentioned in the introduction of this module, code clarity is one of the motivations for using custom types. In that spirit, it could be nice to make it clear that the Strings in the Anniversary type are being used as names while still being able to manipulate them like ordinary Strings. The type declaration allows us to do that.

type Name = String


The code above says that a Name is now a synonym for a String. Any function that takes a String will now take a Name as well (and vice-versa: functions that take Name will accept any String). The right hand side of a type declaration can be a more complex type as well. For example, String itself is defined in the standard libraries as

type String = [Char]


We can do something similar for the list of anniversaries we made use of:

type AnniversaryBook = [Anniversary]


Since type synonyms are mostly a convenience feature for making the roles of types clearer or providing an alias to, for instance, a complicated list or tuple type, it is largely a matter of personal discretion to decide how they should be deployed. Abuse of synonyms might even, on occasion, make code confusing (for instance, picture a long program using multiple names for common types like Int or String simultaneously, all of course having the same functionality).

Incorporating the suggested type synonyms and the Date type we proposed in the exercise(*) of the previous section the code we've written so far looks like this:

((*) last chance to try that exercise without looking at the spoilers.)

type Name = String

data Anniversary = Birthday Name Date
                 | Wedding Name Name Date

data Date = Date Int Int Int   -- Year, Month, Day

johnSmith :: Anniversary
johnSmith = Birthday "John Smith" (Date 1968 7 3)

smithWedding :: Anniversary
smithWedding = Wedding "John Smith" "Jane Smith" (Date 1987 3 4)

type AnniversaryBook = [Anniversary]

anniversariesOfJohnSmith :: AnniversaryBook
anniversariesOfJohnSmith = [johnSmith, smithWedding]

showDate :: Date -> String
showDate (Date y m d) = show y ++ "-" ++ show m ++ "-" ++ show d

showAnniversary :: Anniversary -> String
showAnniversary (Birthday name date) =
   name ++ " born " ++ showDate date
showAnniversary (Wedding name1 name2 date) =
   name1 ++ " married " ++ name2 ++ " on " ++ showDate date


Even in a simple example like this one, there is a very noticeable gain in terms of simplicity and clarity when compared to what would be needed to do the same task using only Ints, Strings, and corresponding lists.

A final observation on syntax: note that the Date type has a constructor function which is called Date as well. That is perfectly valid, and indeed giving the constructor the same name as the type when there is just one constructor is good practice, as a simple way of making the role of the function obvious.
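As a final check that all the pieces fit together, here is a small test program that renders every entry of the book (the declarations are repeated from the main text so the block is self-contained; mapM_ simply runs putStrLn on each rendered String):

```haskell
type Name = String

data Anniversary = Birthday Name Date
                 | Wedding Name Name Date

data Date = Date Int Int Int   -- Year, Month, Day

showDate :: Date -> String
showDate (Date y m d) = show y ++ "-" ++ show m ++ "-" ++ show d

showAnniversary :: Anniversary -> String
showAnniversary (Birthday name date) =
    name ++ " born " ++ showDate date
showAnniversary (Wedding name1 name2 date) =
    name1 ++ " married " ++ name2 ++ " on " ++ showDate date

main :: IO ()
main = mapM_ (putStrLn . showAnniversary)
             [ Birthday "John Smith" (Date 1968 7 3)
             , Wedding "John Smith" "Jane Smith" (Date 1987 3 4) ]
```

Running it prints "John Smith born 1968-7-3" followed by "John Smith married Jane Smith on 1987-3-4".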

Note

After these initial examples, the mechanics of using constructor functions may look a bit unwieldy, particularly if you're familiar with analogous features in other languages. There are syntactical constructs that make dealing with constructors more convenient. These will be dealt with later on, when we return to the topic of constructors and data types to explore them in detail.


# Pattern matching


In the previous modules of this book we introduced and then made occasional reference to pattern matching, pointing out common uses. Now that we have developed some familiarity with the language, it is time to take a proper, deeper look at pattern matching. We will kick-start the discussion with a condensed description, which we will expand upon throughout the chapter:

In pattern matching, we attempt to match values against patterns and, if so desired, bind variables to successful matches.

Note

Pattern matching on what?

Some languages like Perl and Python use the term pattern matching for matching regular expressions against strings. The pattern matching we are referring to in this chapter is something completely different. In fact, you're probably best off forgetting what you know about pattern matching for now.[43] Here, pattern matching is used in the same way as in other ML-like languages: to deconstruct values according to their type specification.

## Analysing pattern matching

Pattern matching is virtually everywhere. To pick but one example, consider a definition like that of map:

map _ []     = []
map f (x:xs) = f x : map f xs


Here, at surface level, there are four different patterns involved, two per equation. Let's explore each one in turn, beginning with the second equation:

• f is a pattern which matches anything at all, and binds the f variable to whatever is matched.
• (x:xs) is a pattern that matches a non-empty list which is formed by something (which gets bound to the x variable) which was cons'd (by the (:) function) onto something else (which gets bound to xs).
• [] is a pattern that matches the empty list. It doesn't bind any variables.
• _ is the pattern which matches anything at all, but doesn't do any binding.

In the (x:xs) pattern, x and xs can be seen as sub-patterns used to match the parts of the list. Just like f, they match anything - though it is evident that if there is a successful match and x has type a, xs will have type [a]. Finally, these considerations imply that xs will also match an empty list, and so a one-element list matches (x:xs).

From the above dissection, we can say pattern matching gives us a way to:

• recognize values. For instance, when map is called and the second argument matches [] the first equation for map is used instead of the second one.
• bind variables to the recognized values. In this case, the variables f, x, and xs are assigned to the values passed as arguments to map when the second equation is used, and so we can use these values through the variables in the right-hand side of =. As _ and [] show, binding is not an essential part of pattern matching, but just a side effect of using variable names as patterns.
• break down values into parts, as the (x:xs) pattern does by binding two variables to parts (head and tail) of a matched argument (the non-empty list).

## The connection with constructors

Regardless of the detailed analysis above, the process of using (:) to break down a list, as if we were undoing the effects of the (:) operator, may look a little too magical. It will not, however, work with any arbitrary operator. For example, one might think, by analogy to how (:) is used to break down a list, of defining a function which uses (++) to chop off the first three elements of a list:

dropThree ([x,y,z] ++ xs) = xs


However, that will not work. The function (++) is not allowed in patterns, and in fact most other functions that act on lists are not allowed in patterns either. Which functions, then, are allowed?

In one word, constructors – the functions used to build values of algebraic data types. Let us consider a random example:

data Foo = Bar | Baz Int


Here Bar and Baz are constructors for the type Foo. And so you can use them for pattern matching Foo values, and bind variables to the Int value contained in a Foo constructed with Baz:

f :: Foo -> Int
f Bar     = 1
f (Baz x) = x - 1


That was exactly what was going on back when we defined showAnniversary and showDate in the Type declarations module. For instance:

data Date = Date Int Int Int   -- Year, Month, Day
showDate :: Date -> String
showDate (Date y m d) = show y ++ "-" ++ show m ++ "-" ++ show d


The (Date y m d) pattern in the left-hand side of the showDate definition matches a Date (built with the Date constructor) and binds the variables y, m and d to the contents of the Date value.

### Why does it work with lists?

As for lists, they are not different from data-defined algebraic data types as far as pattern matching is concerned. It works as if lists were defined with this data declaration (note that the following isn't actually valid syntax: lists are in reality deeply ingrained into Haskell):

data [a] = [] | a : [a]


So the empty list, [] and the (:) function are in reality constructors of the list datatype, and so you can pattern match with them. [] takes no arguments, and therefore no variables can be bound when it is used for pattern matching. (:) takes two arguments, the list head and tail, which may then have variables bound to them when the pattern is recognized.

Prelude> :t []
[] :: [a]
Prelude> :t (:)
(:) :: a -> [a] -> [a]


Furthermore, since [x, y, z] is just syntactic sugar for x:y:z:[], we can solve the dropThree problem using pattern matching alone:

dropThree :: [a] -> [a]
dropThree (_:_:_:xs) = xs
dropThree _          = []


The first pattern will match any list with at least three elements. The catch-all second definition provides a reasonable default[44] when lists fail to match the main pattern, and thus prevents runtime crashes due to pattern match failure.

Note

From the fact that we could write a dropThree function with bare pattern matching it doesn't follow that we should do so! Even though the solution is simple, it is still a waste of effort to code something this specific when we could just use the Prelude and settle it with drop 3 xs instead. Mirroring what was said before about writing bare recursive functions, we might say: don't get too excited about pattern matching either...

### Tuple constructors

Analogous considerations are valid for tuples. Our access to their components via pattern matching...

fstPlusSnd :: (Num a) => (a, a) -> a
fstPlusSnd (x, y) = x + y

norm3D :: (Floating a) => (a, a, a) -> a
norm3D (x, y, z) = sqrt (x^2 + y^2 + z^2)


... is granted by the existence of tuple constructors. For pairs, the constructor is the comma operator, (,); for larger tuples there are (,,), (,,,) and so on. These operators are slightly unusual in that we can't use them infix in the regular way; so 5 , 3 is not a valid way to write (5, 3). All of them, however, can be used prefix, which is occasionally useful.

Prelude> (,) 5 3
(5,3)
Prelude> (,,,) "George" "John" "Paul" "Ringo"
("George","John","Paul","Ringo")


## Matching literal values

As discussed earlier in the book, a simple piece-wise function definition like this one

f :: Int -> Int
f 0 = 1
f 1 = 5
f 2 = 2
f _ = -1


is performing pattern matching as well, matching the argument of f with the Int literals 0, 1 and 2, and finally with _ . In general, numeric and character literals can be used in pattern matching[45]. They can also be used together with constructor patterns. For instance, this function

g :: [Int] -> Bool
g (0:[]) = False
g (0:xs) = True
g _ = False


will evaluate to False for the [0] list, to True if the list has 0 as first element and a non-empty tail and to False in all other cases. Also, lists with literal elements like [1,2,3], or even "abc" (which is equivalent to ['a','b','c']) can be used for pattern matching as well, since these forms are only syntactic sugar for the (:) constructor.
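
Since string literals are just list patterns in disguise, they can be mixed freely with other list patterns. A small made-up example (the function name and messages are purely illustrative):

```haskell
-- "hello" is matched before the more general patterns below it.
reply :: String -> String
reply "hello" = "hi there!"
reply ""      = "you said nothing"
reply (c:_)   = "you started with " ++ [c]
```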

It is worth emphasizing that the above considerations are only valid for literal values, so the following will not work:

k = 1
--again, this won't work as expected
h :: Int -> Bool
h k = True
h _ = False

Exercises
1. Test the flawed h function above in GHCi, with arguments equal to and different from 1. Then, explain what goes wrong.
2. In this section about pattern matching with literal values we made no mention of the boolean values True and False, even though we can do pattern matching with them, as demonstrated in the Next steps chapter. Can you guess why we omitted them? (Hint: is there anything distinctive about the way we write boolean values?)

## Syntax tricks

### As-patterns

Sometimes, when matching a pattern with a value, it may be useful to bind a name to the whole value being matched as well. As-patterns allow exactly this: they are of the form var@pattern and have the additional effect of binding the name var to the whole value being matched by pattern. For instance, here is a toy variation on the map theme:

contrivedMap :: ([a] -> a -> b) -> [a] -> [b]
contrivedMap f [] = []
contrivedMap f list@(x:xs) = f list x : contrivedMap f xs


contrivedMap passes to the parameter function f not only x but also the undivided list used as argument of each recursive call. Writing it without as-patterns would have been more cumbersome because we would have to needlessly reconstruct the original value of list, i.e. actually evaluate x:xs on the right side:

contrivedMap :: ([a] -> a -> b) -> [a] -> [b]
contrivedMap f [] = []
contrivedMap f (x:xs) = f (x:xs) x : contrivedMap f xs

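
A smaller classic use of as-patterns (dupHead is an invented name for this sketch) also avoids rebuilding the matched value:

```haskell
-- Duplicate the head of a list. list@(x:_) binds the whole list
-- and its head in one pattern, so no reconstruction is needed.
dupHead :: [a] -> [a]
dupHead list@(x:_) = x : list
dupHead []         = []
```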

### Introduction to records

For constructors with many elements, records provide useful syntactical help. Briefly, they are a way of naming values in a datatype, using the following syntax:

data Foo2 = Bar2 | Baz2 {barNumber::Int, barName::String}


Using records allows doing matching and binding only for the variables relevant to the function we're writing, making code much clearer:

h :: Foo2 -> Int
h Baz2 {barName=name} = length name
h Bar2 {} = 0


Also, the {} pattern can be used for matching a constructor regardless of the datatype elements even if you don't use records in the data declaration:

data Foo = Bar | Baz Int
g :: Foo -> Bool
g Bar {} = True
g Baz {} = False


The function g does not have to be changed if we modify the number or the type of elements of the constructors Bar or Baz.

There are even more potential advantages in using record syntax. We will cover records in more detail further down the road; in case you want to start using them right now it may be useful to have a look at the Named fields section of the More on datatypes chapter.

## Where we can use it

The short answer is that wherever you can bind variables, you can pattern match. Let us have a glance at such places we have seen before; a few more will be introduced in the following chapters.

### Equations

The most obvious use case is the left-hand side of function definition equations, which were the subject of our examples so far.

map _ []     = []
map f (x:xs) = f x : map f xs


In the map definition we're doing pattern matching on the left hand side of both equations, and also binding variables on the second one.

### let expressions and where clauses

Both let and where are ways of doing local variable bindings. As such, you can also use pattern matching in them. A simple example:

y =
    let
        (x:_) = map ((*) 2) [1,2,3]
    in x + 5


Or, equivalently,

y = x + 5
  where
    (x:_) = map ((*) 2) [1,2,3]


Here, x will be bound to the first element of map ((*) 2) [1,2,3]. y, therefore, will evaluate to $2 + 5 = 7$.
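
Pattern matching in where clauses works with tuples too. A hypothetical example:

```haskell
-- Bind the coordinates of two points with tuple patterns in where.
distance :: (Double, Double) -> (Double, Double) -> Double
distance p q = sqrt ((x2 - x1)^2 + (y2 - y1)^2)
  where
    (x1, y1) = p
    (x2, y2) = q
```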

### List comprehensions

After the | in list comprehensions you can pattern match. This is actually extremely useful, and adds a lot to the expressiveness of comprehensions. Let's see how that works with a slightly more sophisticated example. Prelude provides a Maybe type which has the following constructors:

data Maybe a = Nothing | Just a


It is typically used to hold values resulting from an operation which may or may not succeed; if the operation succeeds, the Just constructor is used and the value is passed to it; otherwise Nothing is used[46]. The utility function catMaybes (which is available from the Data.Maybe library module) takes a list of Maybes (which may contain both "Just" and "Nothing" Maybes), and retrieves the contained values by filtering out the Nothing values and getting rid of the Just wrappers. Writing it with a list comprehension is very straightforward:

catMaybes :: [Maybe a] -> [a]
catMaybes ms = [ x | Just x <- ms ]


Another nice thing about using a list comprehension for this task is that if the pattern match fails (that is, it meets a Nothing) it just moves on to the next element in ms, thus avoiding the need of explicitly handling constructors we are not interested in with alternate function definitions[47].
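
The same filtering behaviour works with any constructor, not just Just. Reusing the Foo type from earlier in this chapter, we might extract the Baz payloads like this:

```haskell
data Foo = Bar | Baz Int

-- Bar elements fail the (Baz n) pattern and are silently skipped.
bazValues :: [Foo] -> [Int]
bazValues fs = [ n | Baz n <- fs ]
```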

### do blocks

Within a do block like the ones we used in the Simple input and output chapter, we can pattern match with the left-hand side of the left arrow variable bindings:

putFirstChar = do
    (c:_) <- getLine
    putStrLn [c]


Furthermore, the let bindings in do blocks are, as far as pattern matching is concerned, just the same as the "real" let expressions.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.


# Control structures



Haskell offers several ways of expressing a choice between different values. We explored some of them through the Haskell Basics chapter. This section will bring together what we have seen thus far, discuss some finer points and introduce a new control structure.

## if and guards revisited

We have already met these constructs. The syntax for if expressions is:

if <condition> then <true-value> else <false-value>


<condition> is an expression which evaluates to a boolean. If the <condition> is True then the <true-value> is returned, otherwise the <false-value> is returned. Note that in Haskell if is an expression (which is converted to a value) – and not a statement (which is executed) like in many imperative languages[48]. As a consequence, in Haskell the else is mandatory. Since if is an expression, it must evaluate to a result whether the condition is true or false, and the else ensures this. Furthermore, <true-value> and <false-value> must evaluate to the same type, which will be the type of the whole if expression.
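
Because if is an expression, it can appear directly on the right-hand side of a binding; a minimal sketch:

```haskell
-- Both branches produce an Int, so the whole if expression has type Int.
absValue :: Int -> Int
absValue x = if x < 0 then negate x else x
```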

When if expressions are split across multiple lines, they are usually indented by aligning elses with thens, rather than with ifs. A common style looks like this:

describeLetter :: Char -> String
describeLetter c =
    if c >= 'a' && c <= 'z'
        then "Lower case"
        else if c >= 'A' && c <= 'Z'
            then "Upper case"
            else "Not an ASCII letter"


Guards and top-level if expressions are mostly interchangeable. The example above, for instance, does look neater with guards:

describeLetter :: Char -> String
describeLetter c
    | c >= 'a' && c <= 'z' = "Lower case"
    | c >= 'A' && c <= 'Z' = "Upper case"
    | otherwise            = "Not an ASCII letter"


Remember that otherwise is just an alias for True, and thus the last guard is a catch-all, playing the role of the final else of the if expression.

Guards are evaluated in the order they appear. Consider a set up similar to the following:

f (pattern1) | predicate1 = w
             | predicate2 = x

f (pattern2) | predicate3 = y
             | predicate4 = z


Here, the argument of f will be pattern-matched against pattern1. If it succeeds, then we proceed to the first set of guards: if predicate1 evaluates to True, then w is returned. If not, then predicate2 is evaluated; and if it is true x is returned. Again, if not, then we proceed to the next case and try to match the argument against pattern2, repeating the guards procedure with predicate3 and predicate4. (And of course if neither pattern matches or neither predicate is true for the matching pattern there will be a runtime error. Regardless of the chosen control structure, it is important to ensure all cases are covered.)
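
A concrete (and deliberately simple) instance of this scheme:

```haskell
-- Patterns are tried top to bottom; within a matching equation,
-- guards are tried in order, with otherwise covering the rest.
describeNumber :: Int -> String
describeNumber 0 = "zero"
describeNumber n | n > 0     = "positive"
                 | otherwise = "negative"
```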

### Embedding if expressions

A handy consequence of if constructs being expressions is that they can be placed anywhere a Haskell expression could be, allowing us to write code like this:

g x y = (if x == 0 then 1 else sin x / x) * y


Note that we wrote the if expression without line breaks for maximum terseness. Unlike if expressions, guard blocks are not expressions; and so a let or a where definition is the closest we can get to this style when using them. Needless to say, one-liner if expressions more complicated than the one in this example would be annoying to read, making let and where attractive options in such cases.

## case expressions

One control structure we haven't talked about yet is the case expression. Case expressions are to piece-wise function definitions what if expressions are to guards. Take this simple piece-wise definition:

f 0 = 18
f 1 = 15
f 2 = 12
f x = 12 - x


It is equivalent to - and, indeed, syntactic sugar for:

f x =
    case x of
        0 -> 18
        1 -> 15
        2 -> 12
        _ -> 12 - x


Whatever definition we pick, the same happens when f is called: The argument x is matched against all of the patterns in order; and on the first match the expression on the right-hand side of the corresponding equal sign (in the piece-wise version) or arrow (in the case version) is evaluated. Note that in this case expression there is no need to write x in the pattern; the wildcard pattern _ gives the same effect[49].

Indentation is important when using case. The cases must be indented further to the right than the of keyword, and all cases must have the same indentation. For the sake of illustration, here are two other valid layouts for a case expression:

f x = case x of
    0 -> 18
    1 -> 15
    2 -> 12
    _ -> 12 - x


f x = case x of 0 -> 18
                1 -> 15
                2 -> 12
                _ -> 12 - x


Since the left hand side of any case branch is just a pattern, it can also be used for binding, exactly like in piece-wise function definitions[50]:

describeString :: String -> String
describeString str =
    case str of
        (x:xs) -> "The first character of the string is: " ++ [x] ++ "; and " ++
                  "there are " ++ show (length xs) ++ " more characters in it."
        []     -> "This is an empty string."


This function describes some properties of str using a human-readable string. Of course, you could do that with an if expression (with a condition of null str to pick the empty string case), but using a case binds variables to the head and tail of our list, which is convenient for what we are doing.

Finally, just like if expressions (and unlike piece-wise definitions), case expressions can be embedded anywhere another expression would fit:

data Colour = Black | White | RGB Int Int Int

describeBlackOrWhite :: Colour -> String
describeBlackOrWhite c =
    "This colour is"
    ++ case c of
           Black           -> " black"
           White           -> " white"
           RGB 0 0 0       -> " black"
           RGB 255 255 255 -> " white"
           _               -> "... uh... something else"
    ++ ", yeah?"


The case block above fits in as any string would. Writing describeBlackOrWhite this way makes let/where unnecessary, though the resulting definition is not very readable.

Exercises
Use a case statement to implement a fakeIf function which could be used as a replacement to the familiar if expressions.

## Controlling actions, revisited

In the final part of this chapter we will take advantage of having just introduced case expressions to introduce a few extra points about control structures while revisiting the discussions in the "Simple input and output" chapter. There, in the Controlling actions section, we used this function to show how to execute actions conditionally within a do block using if expressions:

doGuessing num = do
    guess <- getLine
    if (read guess) < num
        then do putStrLn "Too low!"
                doGuessing num
        else if (read guess) > num
            then do putStrLn "Too high!"
                    doGuessing num
            else do putStrLn "You Win!"


We can write the same doGuessing function using a case statement. To do this, we first introduce the Prelude function compare, which takes two values of the same type (in the Ord class) and returns a value of type Ordering, namely one of GT, LT, EQ, depending on whether the first is greater than, less than or equal to the second.

doGuessing num = do
    guess <- getLine
    case compare (read guess) num of
        LT -> do putStrLn "Too low!"
                 doGuessing num
        GT -> do putStrLn "Too high!"
                 doGuessing num
        EQ -> putStrLn "You Win!"


The dos after the ->s are necessary on the first two options, because we are sequencing actions within each case.

### A note about return

Now, we are going to dispel a possible source of confusion. In a typical imperative language - say, C - an implementation of doGuessing might look like this (if you don't know C, don't worry about the details and just follow the if-else chain):

void doGuessing(int num) {
    int guess = readGuess();  /* assume some helper reads the player's guess */

    if (guess == num) {
        printf("You win!\n");
        return;
    }

    // we won't get here if guess == num
    if (guess < num) {
        printf("Too low!\n");
        doGuessing(num);
    } else {
        printf("Too high!\n");
        doGuessing(num);
    }
}


This doGuessing first tests the equality case, which does not lead to a new call of doGuessing, with an if that has no accompanying else. If the guess was right, a return statement is used to exit the function at once, skipping the other cases. Now, going back to Haskell, action sequencing in do blocks looks a lot like imperative code, and furthermore there actually is a return in Prelude. Then, knowing that case statements (unlike if statements) do not force us to cover all cases, one might be tempted to write a literal translation of the C code above (try running it if you are curious)...

doGuessing num = do
    guess <- getLine
    case compare (read guess) num of
        EQ -> do putStrLn "You win!"
                 return ()

    -- we don't expect to get here if guess == num
    if (read guess) < num
        then do print "Too low!";
                doGuessing num
        else do print "Too high!";
                doGuessing num


... but it won't work! If you guess correctly, the function will first print "You win!," but it will not exit at the return (); it proceeds to the if expression instead and checks whether guess is less than num. Of course it is not, so the else branch is taken, and it will print "Too high!" and then ask you to guess again. Things aren't any better with an incorrect guess: it will try to evaluate the case statement and get either LT or GT as the result of compare. In either case, it won't have a pattern that matches, and the program will fail immediately with an exception (as usual, the incomplete case alone should be enough to raise suspicion).

The problem here is that return is not at all equivalent to the C (or Java etc.) statement with the same name. For our immediate purposes, we can say that return is a function[51], and that return () evaluates to an action which does nothing - return does not affect the control flow at all. In the correct guess case, the case statement evaluates to return (), an action of type IO (), and execution just follows along normally.

The bottom line is that while actions and do blocks resemble imperative code, they must be dealt with on their own terms - Haskell terms.

Exercises
1. Redo the "Haskell greeting" exercise in Simple input and output/Controlling actions, this time using a case statement.
2. What does the following program print out? And why?
main =
    do x <- getX
       putStrLn x

getX =
    do return "My Shangri-La"
       return "beneath"
       return "the summer moon"
       return "I will"
       return "return"
       return "again"




# More on functions



As functions are absolutely essential to functional programming, there are several nice features that make using functions easier.

## let and where revisited

First, a few extra words about let and where, which are useful to make local function definitions. A function like addStr from the List processing chapter...

addStr :: Float -> String -> Float
addStr x str = x + read str

sumStr :: [String] -> Float
sumStr = foldl addStr 0.0


... can be rewritten using local bindings in order to reduce clutter on the top level of the program (which makes a lot of sense assuming addStr is only used as part of sumStr). We can do that either with a let binding...

sumStr =
    let addStr x str = x + read str
    in foldl addStr 0.0


... or with a where clause...

sumStr = foldl addStr 0.0
    where addStr x str = x + read str


... and the difference appears to be only one of style - bindings coming either before or after the rest of the definition. The relation between let and where, however, is similar to the one between if and guards, in that a let...in construct is an expression while a where clause isn't. That means we can embed a let binding, but not a where clause, in a complex expression in this way:

f x =
    if x > 0
        then (let lsq = (log x) ^ 2 in tan lsq) * sin x
        else 0


The expression within the outer parentheses is self-contained, and evaluates to the tangent of the square of the logarithm of x. Note that the scope of lsq does not extend beyond the parentheses; and therefore changing the then-branch to

        then (let lsq = (log x) ^ 2 in tan lsq) * (sin x + lsq)


wouldn't work - we would have to drop the parentheses around the let.
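
Dropping the parentheses as suggested puts the whole branch inside the let, so lsq stays in scope for both factors; a sketch of the adjusted function (we pick Double for the signature, which the original left out):

```haskell
f :: Double -> Double
f x =
    if x > 0
        then let lsq = (log x) ^ 2 in tan lsq * (sin x + lsq)
        else 0
```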

Still, where clauses can be incorporated into case expressions:

describeColour c =
    "This colour "
    ++ case c of
           Black -> "is black"
           White -> "is white"
           RGB red green blue -> "has an average of the components of " ++ show av
               where av = (red + green + blue) `div` 3
    ++ ", yeah?"


In this example, the indentation of the where clause sets the scope of the av variable so that it only exists as far as the RGB red green blue case is concerned. Placing it at the same indentation as the cases would make it available to all cases. Here is an example with guards:

doStuff :: Int -> String
doStuff x
    | x < 3     = report "less than three"
    | otherwise = report "normal"
    where
        report y = "the input is " ++ y


Note that since there is one equals sign for each guard there is no place we could put a let expression which would be in scope of all guards, as the where clause is. Here we have a situation in which where is particularly convenient.

## Anonymous Functions - lambdas

An alternative to creating a private named function like addStr is to create an anonymous function, also known as a lambda function. For example, sumStr could have been defined like this:

sumStr = foldl (\x str -> x + read str) 0.0


The expression in the parentheses is a lambda function. The backslash is used as the nearest ASCII equivalent to the Greek letter lambda (λ). This example is a lambda function with two arguments, x and str, which evaluates to "x + read str". So, the sumStr presented just above is precisely the same as the one that used addStr in a let binding.

Lambdas are handy for writing one-off functions to be used with maps, folds and their siblings, especially where the function in question is simple - as cramming complicated expressions in a lambda can hurt readability.
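
For instance, squaring every element of a list needs no separately named helper:

```haskell
-- A one-off lambda passed straight to map.
squares :: [Int] -> [Int]
squares = map (\x -> x * x)
```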

Since variables are being bound in a lambda expression (to the arguments, just like in a regular function definition), pattern matching can be used in them as well. A trivial example would be redefining tail with a lambda:

tail' = (\(_:xs) -> xs)


## Operators and sections

As noted on a number of occasions, operators such as the arithmetical ones can be surrounded with parentheses and used prefix, like other functions:

2 + 4
(+) 2 4


Generalizing that point, we can now define the term "operator" clearly: as far as Haskell is concerned it's a function with two arguments and a name consisting entirely of non-alphanumeric characters. Unlike other functions, operators can be used infix straight away. We can define new operators in the usual way; just don't use any alphanumeric characters. For example, here's the set-difference definition from Data.List:

(\\) :: (Eq a) => [a] -> [a] -> [a]
xs \\ ys = foldl (\zs y -> delete y zs) xs ys


Note that, aside from just using operators infix, you can define them infix as well. This is a point that most newcomers to Haskell miss. I.e., although one could have written:

(\\) xs ys = foldl (\zs y -> delete y zs) xs ys


It's more common to define operators infix. However, do note that in type declarations, you have to write them with the parentheses.

Sections are a nifty piece of syntactical sugar that can be used with operators. An operator within parentheses and flanked by one of its arguments...

(2+) 4
(+4) 2


... is a new function in its own right. (2+), for instance, has the type (Num a) => a -> a. We can pass sections to other functions, e.g. map (+2) [1..4] == [3..6]. For another example, we can add an extra flourish to the multiplyList function we wrote back in More about lists:

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList m = map (m*)


If you have a "normal", prefix function, and want to use it as an operator, simply surround it by backticks:

1 `elem` [1..4]


This is called making the function infix. It's normally done for readability purposes: 1 `elem` [1..4] reads better than elem 1 [1..4]. You can also define functions infix:

elem :: (Eq a) => a -> [a] -> Bool
x `elem` xs = any (==x) xs


But once again notice that in the type signature you have to use the prefix style.

Sections even work with infix functions:

(1 `elem`) [1..4]
(`elem` [1..4]) 1


You can only make binary functions (that is, those that take two arguments) infix. Think about the functions you use, and see which ones would read better if you used them infix.

Exercises
• Lambdas are a nice way to avoid defining unnecessary separate functions. Convert the following let- or where-bindings to lambdas:
• map f xs where f x = x * 2 + 3
• let f x y = read x + y in foldr f 1 xs
• Sections are just syntactic sugar for lambda operations. I.e. (+2) is equivalent to \x -> x + 2. What would the following sections 'desugar' to? What would be their types?
• (4+)
• (1 `elem`)
• (`notElem` "abc")

# Higher order functions and Currying



Higher-order functions are functions that take other functions as arguments. We have already met some of them, such as map, so there isn't anything really frightening or unfamiliar about them. They offer a form of abstraction that is unique to the functional programming style. In functional programming languages like Haskell, functions are just like any other value, so it doesn't get any harder to deal with higher-order functions.

Higher order functions have a separate section in this book, not because they are particularly difficult – we've already worked with them, after all – but because they are powerful enough to draw special attention to them. We will see in this section how much we can do if we can pass around functions as values. Generally speaking, it is a good idea to abstract over a functionality whenever we can. Besides, Haskell without higher order functions wouldn't be quite as much fun.

## The Quickest Sorting Algorithm In Town

Don't get too excited, but quickSort is certainly one of the quickest. Have you heard of it? If so, you can skip the following subsection and go straight to the next one.

### The Idea Behind quickSort

The idea is very simple. For a big list, we pick an element, and divide the whole list into three parts.

The first part has all elements that should go before that element, the second part consists of all of the elements that are equal to the picked element, the third has the elements that ought to go after that element. And then, of course, we are supposed to concatenate these. What we get is somewhat better, right?

The trick is to note that only the first and the third are yet to be sorted, and for the second, sorting doesn't really make sense (they are all equal!). How to go about sorting the yet-to-be-sorted sub-lists? Why... apply the same algorithm on them again! By the time the whole process is finished, you get a completely sorted list.

### So Let's Get Down To It!

-- if the list is empty, we do nothing
-- note that this is the base case for the recursion
quickSort [] = []

-- if there's only one element, no need to sort it
-- actually, the third case takes care of this one pretty well
-- I just wanted you to take it step by step
quickSort [x] = [x]

-- this is the gist of the process
-- we pick the first element as our "pivot", the rest is to be sorted
-- don't forget to include the pivot in the middle part!
quickSort (x : xs) = (quickSort less) ++ (x : equal) ++ (quickSort more)
    where less  = filter (< x) xs
          equal = filter (== x) xs
          more  = filter (> x) xs


And we are done! Even if you had met quickSort before, you may have thought of recursion as a neat trick that is difficult to implement; as the code above shows, it can be quite natural.

## Now, How Do We Use It?

With quickSort at our disposal, sorting any list is a piece of cake. Suppose we have a list of String, maybe from a dictionary, and we want to sort them; we just apply quickSort to the list. For the rest of this chapter, we will use a pseudo-dictionary of words (but a 25,000 word dictionary should do the trick as well):

dictionary = ["I", "have", "a", "thing", "for", "Linux"]


We get, for quickSort dictionary,

["I", "Linux", "a", "for", "have", "thing"]


But, what if we wanted to sort them in the descending order? Easy, just reverse the sorted list: reverse (quickSort dictionary) gives us what we want.

But wait! We didn't really sort in the descending order, we sorted (in the ascending order) and reversed it. They may have the same effect, but they are not the same thing!

Besides, you might object that the list you got isn't what you wanted. "a" should certainly be placed before "I". "Linux" should be placed between "have" and "thing". What's the problem here?

The problem is that Strings are typically represented as lists of Unicode characters, and Unicode (like almost all other character encodings) gives capital letters smaller character codes than lower-case letters. Bummer. So "Z" is less than "a". We should do something about it. Looks like we need a case insensitive quickSort as well. It might come in handy some day.

But, there's no way you can blend that into quickSort as it stands. We have work to do.

## Tweaking What We Already Have

What we need to do is to factor out the comparisons quickSort makes. We need to provide quickSort with a function that compares two elements, and gives an Ordering, and as you can imagine, an Ordering is any of LT, EQ, GT.

To sort in the descending order, we supply quickSort with a function that returns the opposite of the usual Ordering. For the case-insensitive sort, we may need to define the function ourselves. By all means, we want to make quickSort applicable to all such functions so that we don't end up writing it over and over again, each time with only minor changes.

## quickSort, Take Two

Our quickSort will take two things this time: first, the comparison function, and second, the list to sort.

A comparison function will be a function that takes two things, say, x and y, and compares them. If x is less than y (according to the criteria we want to implement by this function), then the value will be LT. If they are equal (well, equal with respect to the comparison, we want "Linux" and "linux" to be equal when we are dealing with the insensitive case), we will have EQ. The remaining case gives us GT (pronounced: greater than, for obvious reasons).

-- no matter how we compare two things
-- the first two equations should not change
-- they need to accept the comparison function though
-- but the result doesn't need it
quickSort _ [] = []
quickSort _ [x] = [x]

-- we are in a more general setting now
-- but the changes are worth it!
-- c is the comparison function
quickSort c (x : xs) = (quickSort c less) ++ (x : equal) ++ (quickSort c more)
    where less  = filter (\y -> y `c` x == LT) xs
          equal = filter (\y -> y `c` x == EQ) xs
          more  = filter (\y -> y `c` x == GT) xs


Cool!

Note

Almost all the basic data types in Haskell are members of the Ord class. This class defines an ordering, the "natural" one. The functions (or, operators, in this case) (<), (==) or (>) provide shortcuts to the compare function each type defines. When we want to use the natural ordering as defined by the types themselves, the above code can be written using those operators, as we did last time. In fact, that makes for much clearer style; however, we wrote it the long way just to make the relationship between sorting and comparing more evident.

## But What Did We Gain?

Reuse. We can reuse quickSort to serve different purposes.

-- the usual ordering
-- uses the compare function from the Ord class
usual = compare

-- the descending ordering, note we flip the order of the arguments to compare
descending x y = compare y x

-- the case-insensitive version is left as an exercise!
insensitive = ...
-- can you think of anything without making a very big list of all possible cases?


And we are done!

quickSort usual dictionary


should, then, give

["I", "Linux", "a", "for", "have", "thing"]


The comparison is just compare from the Ord class. This was our quickSort, before the tweaking.

quickSort descending dictionary


now gives

["thing", "have", "for", "a", "Linux", "I"]


And finally,

quickSort insensitive dictionary


gives

["a", "for", "have", "I", "Linux", "thing"]


Exactly what we wanted!

Exercises
Write insensitive, such that quickSort insensitive dictionary gives ["a", "for", "have", "I", "Linux", "thing"].

## Higher-Order Functions and Types

Our quickSort has type (a -> a -> Ordering) -> [a] -> [a].

Most of the time, the type of a higher-order function provides a good guideline about how to use it. A straightforward way of reading the type signature would be, "quickSort takes a function that gives an ordering of as, and a list of as, to give a list of as". It is then natural to guess that the function sorts the list respecting the given ordering function.

Note that the parentheses surrounding a -> a -> Ordering are mandatory. They say that a -> a -> Ordering altogether forms a single argument, an argument that happens to be a function. What happens if we omit the parentheses? We would get a function of type a -> a -> Ordering -> [a] -> [a], which accepts four arguments (a, a, Ordering, and [a]) instead of the desired two (a -> a -> Ordering and [a]). Furthermore, none of those four arguments is itself a function, so omitting the parentheses would give us something that isn't a higher-order function.

Furthermore, it's worth noting that the -> operator is right-associative, which means that a -> a -> Ordering -> [a] -> [a] means the same thing as a -> (a -> (Ordering -> ([a] -> [a]))). We really must insist that the a -> a -> Ordering be clumped together by writing those parentheses... but wait... if -> is right-associative, wouldn't that mean that the correct signature (a -> a -> Ordering) -> [a] -> [a] actually means... (a -> a -> Ordering) -> ([a] -> [a])?

Is that really what we want?

If you think about it, we're trying to build a function that takes two arguments, a function and a list, returning a list. Instead, what this type signature is telling us is that our function takes one argument (a function) and returns another function. That is profoundly odd... but if you're lucky, it might also strike you as being profoundly beautiful. Functions in multiple arguments are fundamentally the same thing as functions that take one argument and give another function back. It's OK if you're not entirely convinced. We'll go into a little bit more detail below and then show how something like this can be turned to our advantage.

Exercises

The following exercise combines what you have learned about higher order functions, recursion and IO. We are going to recreate what programmers from more popular languages call a "for loop". Implement a function

for :: a -> (a->Bool) -> (a->a) -> (a-> IO ()) -> IO ()
for i p f job = -- ???


An example of how this function would be used might be

for 1 (<10) (+1) (print)


which prints the numbers 1 to 9 on the screen.

Starting from an initial value i, for executes job i. It then modifies this value (giving f i) and checks to see if the modified value satisfies some condition. If it doesn't, it stops; otherwise, the for loop continues, using the modified f i in place of i.

1. The paragraph above gives an imperative description of the for loop. What would a more functional description be?
2. Implement the for loop in Haskell.
3. Why does Haskell not have a for loop as part of the language, or in the standard library?

Some more challenging exercises you could try

1. What would be a more Haskell-like way of performing a task like 'print the list of numbers from 1 to 10'? Are there any problems with your solution?
2. Implement a function sequenceIO :: [IO a] -> IO [a]. Given a list of actions, this function runs each of the actions in order and returns all their results as a list.
3. Implement a function mapIO :: (a -> IO b) -> [a] -> IO [b] which given a function of type a -> IO b and a list of type [a], runs that action on each item in the list, and returns the results.
This exercise was inspired by a blog post by osfameron. No peeking!

## Currying

I hope you followed the reasoning of the preceding section closely. If you haven't, you should, so give it another try.

Currying is a technique[52] that lets you partially apply a multi-parameter function: supply some of the arguments, and you get back a function that "remembers" those values and waits for the remaining parameters.

Our quickSort takes two parameters: the comparison function and the list. We can, by currying, construct variations of quickSort with a given comparison function. The variation just "remembers" the specific comparison, so you can apply it to the list, and it will sort the list using that comparison function.

descendingSort = quickSort descending


What is the type of descendingSort? quickSort was (a -> a -> Ordering) -> [a] -> [a], and the comparison function descending was a -> a -> Ordering. Applying quickSort to descending (that is, applying it partially, we haven't specified the list in the definition) we get a function (our descendingSort) for which the first parameter is already given, so you can scratch that type out from the type of quickSort, and we are left with a simple [a] -> [a]. So we can apply this one to a list right away!

Note: Due to the monomorphism restriction, this will not work in newer compilers without either applying a pragma or adding a type signature: descendingSort :: Ord a => [a] -> [a]

descendingSort dictionary


gives us

["thing", "have", "for", "a", "Linux", "I"]


It's rather neat. But is it useful? You bet it is. Sections, you might notice, are one particularly handy instance of it. Currying often makes Haskell programs very concise. More than that, it often makes your intentions a lot clearer. Remember

less = filter (< x) xs


from the first version of quickSort? You can read it aloud like "keep those in xs that are less than x". Although (< x) is just a shorthand for (\y -> y < x), try reading that aloud!
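To make the partially applied descendingSort concrete, here is a self-contained sketch, repeating quickSort and descending from earlier in the chapter so the file can be loaded on its own:

```haskell
-- the parameterised quickSort, repeated from earlier in the chapter
quickSort :: (a -> a -> Ordering) -> [a] -> [a]
quickSort _ [] = []
quickSort _ [x] = [x]
quickSort c (x : xs) = quickSort c less ++ (x : equal) ++ quickSort c more
  where less  = filter (\y -> c y x == LT) xs
        equal = filter (\y -> c y x == EQ) xs
        more  = filter (\y -> c y x == GT) xs

descending :: Ord a => a -> a -> Ordering
descending x y = compare y x

-- partial application: only the comparison is supplied; the explicit
-- signature sidesteps the monomorphism restriction mentioned above
descendingSort :: Ord a => [a] -> [a]
descendingSort = quickSort descending
```

Note that descendingSort works on any list whose elements are in Ord: numbers, strings, whatever.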

## Function manipulation

We will close the chapter discussing a few examples of very common and very useful general-purpose higher-order functions, beginning with flip. flip is a handy little Prelude function that has the following type:

flip :: (a -> b -> c) -> b -> a -> c


It takes a function of two arguments and returns a version of the same function with the arguments swapped, so that:

Prelude> (flip (/)) 3 1
0.3333333333333333
Prelude> (flip map) [1,2,3] (*2)
[2,4,6]


We could have used flip to write the descending comparing function from the quickSort example point-free, as simply:

descending = flip compare


flip is particularly useful when we want to pass a function with two arguments of different types to another function and the arguments are in the wrong order with respect to the signature of the higher-order function.
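For instance, lookup takes a key first and an association table second; if the table is fixed and the key varies, flip puts the arguments in the convenient order. A small sketch (phonebook and numberOf are hypothetical names for illustration):

```haskell
-- lookup :: Eq a => a -> [(a, b)] -> Maybe b
phonebook :: [(String, Int)]
phonebook = [("alice", 2025), ("bob", 7070)]

-- flip lookup :: Eq a => [(a, b)] -> a -> Maybe b,
-- so we can partially apply it to the table
numberOf :: String -> Maybe Int
numberOf = flip lookup phonebook
```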

The (.) composition operator is, as should be clear by now, just another higher-order function. It has the signature:

(.) :: (b -> c) -> (a -> b) -> a -> c


(.) takes two functions as arguments and returns a new function, which applies the second function first and then the first.

Composition and higher-order functions provide a range of powerful tricks. For a very small sample of that, first consider the inits function, defined in the module Data.List. Quoting the documentation, it "returns all initial segments of the argument, shortest first", so that:

Prelude Data.List> inits [1,2,3]
[[],[1],[1,2],[1,2,3]]


With the higher order functions flip, scanl, (.) and map we can, using only functions in Prelude, provide the following one line implementation for inits (written point-free for extra dramatic effect):

myInits :: [a] -> [[a]]
myInits = map reverse . scanl (flip (:)) []


Swallowing a definition so condensed may look daunting at first; so slowly analyse it bit by bit, recalling what each function does and using the type signatures as a guide.
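One way to do that analysis is to evaluate the pipeline one stage at a time; the intermediate results in the comments below were worked out by hand:

```haskell
-- repeated from above
myInits :: [a] -> [[a]]
myInits = map reverse . scanl (flip (:)) []

-- Stage by stage, for the input [1,2,3]:
--   flip (:) takes the accumulator first, so  flip (:) [2,1] 3 == [3,2,1]
--   scanl (flip (:)) [] [1,2,3] == [[],[1],[2,1],[3,2,1]]
--   map reverse then turns each reversed prefix the right way around:
--   myInits [1,2,3] == [[],[1],[1,2],[1,2,3]]
```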

Note that the definition of myInits is not only very concise but also clean, in particular with parentheses usage kept to a bare minimum. Naturally, if one goes overboard with composition by writing mile-long (.) chains things are bound to get confusing, but when deployed reasonably this is a very attractive style. Furthermore, the implementation is quite "high level": we do not deal explicitly at all with details like pattern matching or recursion; the functions we deployed - both the higher-order ones and their functional arguments - take care of such plumbing.

Finally, we will present a very curious higher-order operator, ($). Its type is:

($) :: (a -> b) -> a -> b


It takes a function as its first argument, and all it does is to apply the function to the second argument, so that, for instance, (head $ "abc") == (head "abc"). By now, it would be entirely justified if you thought that ($) was completely useless! However, there are two interesting points about it. First, ($) has very low precedence[53], unlike regular function application which has very high precedence. In effect, that means we can write a non-point-free version of myInits without having to add any parentheses in spite of the intricate expression at the left of the argument:

myInits :: [a] -> [[a]]
myInits xs = map reverse . scanl (flip (:)) [] $ xs


Furthermore, as ($) is just a function which happens to apply functions, and functions are just values, we can write intriguing expressions such as:

map ($ 2) [(2*), (4*), (8*)]


(Yes, that is a list of functions, and it is perfectly legal.)
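Evaluating it confirms the idea: ($ 2) is a section of ($) that applies its function argument to 2 (applyAllToTwo is just an illustrative name):

```haskell
-- each function in the list is applied to 2
applyAllToTwo :: [Integer]
applyAllToTwo = map ($ 2) [(2*), (4*), (8*)]
-- [4, 8, 16]
```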

Function manipulation through higher-order gives us a great deal of power. As we continue throughout this book we will see many examples of such power being harnessed.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letter, numbers, underscore (_) or tick (') is allowed.


# Using GHCi effectively



GHCi offers several ways to make you work faster. This section will describe them and explain what they are for.

## User interface

### Tab completion

Whenever you are typing anything into GHCi, you can hit Tab and you will be presented with a list of all possible names that start with what you've written so far. And if there is only one way to continue, the string will be auto-completed. For example, fol followed by Tab will append the letter d; another Tab will then list the four matching functions: foldl, foldl1, foldr and foldr1. Only functions and other defined objects (e.g. data types) from modules that are currently imported are offered.

Tab completion works also when you are loading a file with your program into GHCi. After typing :l fi<Tab>, you will be presented with all files that start with "fi" and are present in the current directory (the one you were in when you launched GHCi).

The same also applies when you are importing modules, after typing :m +Da<Tab>, you will be presented with all modules that start with "Da" present in installed packages.

### ": commands"

On GHCi command line, commands for the interpreter start with the character ":" (colon).

•  :help -- prints a list of all available commands.
•  :reload -- reloads the file that has been loaded last time, so changes are visible from GHCi.
•  :type <Haskell expression> -- prints the type of a given expression, e.g. a function that has been defined previously.
•  :module +Data.List -- loads a module (in this example, Data.List). You can also unload a module with :module -Data.List.
•  :browse Data.List -- gives the type signatures for all functions available from a module.

Many of these commands can be abbreviated: :l for :load, :r for :reload, :h for :help, :t for :type and :m for :module.

Here again, you can use Tab to see the list of commands: type : and then hit Tab to see all possible commands.

### Timing Functions in GHCi

GHCi provides a basic way to measure how much time a function takes to run, which can be useful for people who quickly want to find out which version of a function runs faster.

1. Type :set +s into the GHCi command line.
2. Run the function(s) you are testing. The time the function took to run will be displayed after GHCi outputs the results of the function.

### Multi-line Input

If you are trying to define a function that takes up multiple lines, or if you want to type a do block into ghci, there is an easy way to do this:

1. Begin a new line with :{
2. Type in your code. Press enter when you need a new line.
3. Type :} to end the multiline input.

For example:

*Main> :{
*Main| let askname = do
*Main|               putStrLn "What is your name?"
*Main|               name <- getLine
*Main|               putStrLn $ "Hello " ++ name
*Main| :}
*Main>


The same can be accomplished by using the :set +m command (allow multiline commands). An empty line will end the block. In addition, line breaks in ghci commands can be separated by ;, like this:

*Main> let askname1 = do ; putStrLn "what is your name?" ; name <- getLine ; putStrLn $ "Hello " ++ name


# Modules



Modules are the primary means of organizing Haskell code. We have already met them in passing, when using import statements to put library functions into scope. In this chapter, we will have a closer look at how modules work. Beyond allowing us to make better use of libraries, such knowledge will help us to shape our own programs and, in particular, to create standalone programs which can be executed independently of GHCi (incidentally, that is the topic of the very next chapter, Standalone programs).

## Modules

Haskell modules[54] are a useful way to group a set of related functionalities into a single package and manage a set of different functions that have the same name. The module definition is the first thing that goes in your Haskell file.

Here is what a basic module definition looks like:

module YourModule where


Note that

1. the name of the module begins with a capital letter;
2. each file contains only one module.

The name of the file must be that of the module but with a .hs file extension. Any dots '.' in the module name correspond to directory separators.[55] So the module YourModule would be in the file YourModule.hs while a module Foo.Bar would be in the file Foo/Bar.hs or Foo\Bar.hs. Since the module name must begin with a capital letter, the file name must also start with a capital letter.

## Importing

One thing your module can do is import functions from other modules. That is, in between the module declaration and the rest of your code, you may include some import declarations such as

-- import only the functions toLower and toUpper from Data.Char
import Data.Char (toLower, toUpper)

-- import everything exported from Data.List
import Data.List

-- import everything exported from MyModule
import MyModule


Imported datatypes are specified by their name, followed by a list of imported constructors in parentheses. For example:

-- import only the Tree data type, and its Node constructor from Data.Tree
import Data.Tree (Tree(Node))


Now what to do if you import some modules, but some of them have overlapping definitions? Or if you import a module, but want to overwrite a function yourself? There are three ways to handle these cases: Qualified imports, hiding definitions and renaming imports.

### Qualified imports

Say MyModule and MyOtherModule both have a definition for remove_e, which removes all instances of e from a string. However, MyModule only removes lower-case e's, and MyOtherModule removes both upper and lower case. In this case the following code is ambiguous:

-- import everything exported from MyModule
import MyModule

-- import everything exported from MyOtherModule
import MyOtherModule

-- someFunction puts a c in front of the text, and removes all e's from the rest
someFunction :: String -> String
someFunction text = 'c' : remove_e text


It isn't clear which remove_e is meant! To avoid this, use the qualified keyword:

import qualified MyModule
import qualified MyOtherModule

someFunction text = 'c' : MyModule.remove_e text -- Will work, removes lower case e's
someOtherFunction text = 'c' : MyOtherModule.remove_e text -- Will work, removes all e's
someIllegalFunction text = 'c' : remove_e text -- Won't work, remove_e isn't defined.


See the difference? In the latter code snippet, the function remove_e isn't even defined. Instead, we call the functions from the imported modules by prefixing them with the module's name. Note that MyModule.remove_e also works if the qualified keyword isn't included. The difference lies in the fact that remove_e is ambiguously defined in the first case, and undefined in the second case. If we have a remove_e defined in the current module, then using remove_e without any prefix will call this function.

Note

There is an ambiguity between a qualified name like MyModule.remove_e and the function composition operator (.). Writing reverse.MyModule.remove_e is bound to confuse your Haskell compiler. One solution is stylistic: to always use spaces for function composition, for example, reverse . remove_e or Just . remove_e or even Just . MyModule.remove_e
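Since MyModule and MyOtherModule are made-up, here is the same qualified-import idea as a sketch using a real library module, which you can actually compile (shout is our own illustrative name):

```haskell
import qualified Data.Char

-- Data.Char exports toUpper; with a qualified import we reach it
-- through the module-name prefix
shout :: String -> String
shout = map Data.Char.toUpper
```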

### Hiding definitions

Now suppose we want to import both MyModule and MyOtherModule, but we know for sure we want to remove all e's, not just the lower cased ones. It will become really tedious to add MyOtherModule before every call to remove_e. Can't we just not import remove_e from MyModule? The answer is: yes we can.

-- Note that I didn't use qualified this time.
import MyModule hiding (remove_e)
import MyOtherModule

someFunction text = 'c' : remove_e text


This works. Why? Because of the word hiding on the import line. It is followed by a list of functions that should not be imported. Hiding more than one function works like this:

import MyModule hiding (remove_e, remove_f)


Note that algebraic datatypes and type synonyms cannot be hidden. These are always imported. If you have a datatype defined in multiple imported modules, you must use qualified names.

### Renaming imports

This is not really a technique to allow for overwriting, but it is often used along with the qualified flag. Imagine:

import qualified MyModuleWithAVeryLongModuleName

someFunction text = 'c' : MyModuleWithAVeryLongModuleName.remove_e $ text


Especially when using qualified, this gets irritating. We can improve things by using the as keyword:

import qualified MyModuleWithAVeryLongModuleName as Shorty

someFunction text = 'c' : Shorty.remove_e $ text


This allows us to use Shorty instead of MyModuleWithAVeryLongModuleName as prefix for the imported functions. As long as there are no ambiguous definitions, the following is also possible:

import MyModule as My
import MyCompletelyDifferentModule as My


In this case, both the functions in MyModule and the functions in MyCompletelyDifferentModule can be prefixed with My.

### Combining renaming with limited import

Sometimes it is convenient to use the import directive twice for the same module. A typical scenario is as follows:

import qualified Data.Set as Set
import Data.Set (Set, empty, insert)


This gives access to all of the Data.Set module via the alias "Set", and also lets you access a few selected names (the Set type, empty, and insert) without the "Set" prefix.
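Here is that idiom in action, as a sketch assuming the containers package (which ships with GHC) for Data.Set; smallSet and smallSetSize are our own names:

```haskell
import qualified Data.Set as Set
import Data.Set (Set, empty, insert)

-- the selected names need no prefix...
smallSet :: Set Int
smallSet = insert 2 (insert 1 empty)

-- ...while anything else from the module is reached through the alias
smallSetSize :: Int
smallSetSize = Set.size smallSet
```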

## Exporting

In the examples at the start of this chapter, the words "import everything exported from MyModule" were used.[56] This raises a question. How can we decide which functions are exported and which stay "internal"? Here's how:

module MyModule (remove_e, add_two) where

add_one blah = blah + 1

remove_e text = filter (/= 'e') text

add_two blah = add_one . add_one $ blah


In this case, only remove_e and add_two are exported. While add_two is allowed to make use of add_one, functions in modules that import MyModule cannot use add_one, as it isn't exported.

Datatype export specifications are written quite similarly to import. You name the type, and follow with the list of constructors in parentheses:

module MyModule2 (Tree(Branch, Leaf)) where

data Tree a = Branch {left, right :: Tree a}
            | Leaf a


In this case, the module declaration could be rewritten "MyModule2 (Tree(..))", declaring that all constructors are exported.

Maintaining an export list is good practice not only because it reduces namespace pollution, but also because it enables certain compile-time optimizations which are unavailable otherwise.

# Indentation

Haskell relies on indentation to reduce the verbosity of your code, but working with the indentation rules can be a bit confusing. The rules may seem many and arbitrary, but the reality of things is that there are only one or two layout rules, and all the seeming complexity and arbitrariness comes from how these rules interact with your code.
So to take the frustration out of indentation and layout, the simplest solution is to get a grip on these rules.[57]

## The golden rule of indentation

While the rest of this chapter will discuss in detail Haskell's indentation system, you will do fairly well if you just remember a single rule: code which is part of some expression should be indented further in than the beginning of that expression (even if the expression is not the leftmost element of the line).

What does that mean? The easiest example is a 'let' binding group. The equations binding the variables are part of the 'let' expression, and so should be indented further in than the beginning of the binding group: the 'let' keyword. When you start the expression on a separate line, you only need to indent by one space (although more than one space is also acceptable).

let
 x = a
 y = b


Although the separation above makes things very clear, it is common to place the first line alongside the 'let' and indent the rest to line up:

wrong:

let x = a
 y = b

wrong:

let x = a
      y = b

right:

let x = a
    y = b


This tends to trip up a lot of beginners: all grouped expressions must be exactly aligned. (On the first line, Haskell counts everything to the left of the expression as indent, even though it is not whitespace.)

Here are some more examples:

do foo
   bar
   baz

do
  foo
  bar
  baz

where x = a
      y = b

case x of
  p  -> foo
  p' -> baz


Note that with 'case' it's less common to place the first subsidiary expression on the same line as the 'case' keyword, unlike common practice with 'do' and 'where' expressions. Hence the subsidiary expressions in a case expression tend to be indented only one step further than the 'case' line. Also note we lined up the arrows here: this is purely aesthetic and isn't counted as different layout; only indentation, whitespace beginning on the far-left edge, makes a difference to layout.

Things get more complicated when the beginning of the expression doesn't start at the left-hand edge.
In this case, it's safe to just indent further than the line containing the expression's beginning. So,

myFunction firstArgument secondArgument = do -- the 'do' doesn't start at the left-hand edge
    foo -- so indent these commands more than the beginning of the line containing the 'do'.
    bar
    baz


Here are some alternative layouts which work:

myFunction firstArgument secondArgument =
    do foo
       bar
       baz

myFunction firstArgument secondArgument = do foo
                                             bar
                                             baz

myFunction firstArgument secondArgument =
    do
      foo
      bar
      baz


## A mechanical translation

Indentation is actually optional if you instead separate things using semicolons and curly braces, as in "one-dimensional" languages like C. It may be occasionally useful to write code in this style, and understanding how to convert from one style to the other can help understand the indentation rules. To do so, you need to understand two things: where we need semicolons/braces, and how to get there from layout. The entire layout process can be summed up in three translation rules (plus a fourth one that doesn't come up very often):

1. If you see one of the layout keywords (let, where, of, do), insert an open curly brace (right before the stuff that follows it)
2. If you see something indented to the SAME level, insert a semicolon
3. If you see something indented LESS, insert a closing curly brace
4. If you see something unexpected in a list, like where, insert a closing brace before instead of a semicolon.

Exercises
Answer in one word: what happens if you see something indented MORE?

Exercises
Translate the following layout into curly braces and semicolons.
Note: to underscore the mechanical nature of this process, we deliberately chose something which is probably not valid Haskell:

of a
    b
       c
    d
where
a
b
c
do
   you
   like
      the
      way
   i
   let
      myself
         abuse
   these
layout
rules


## Layout in action

wrong:

do first thing
second thing
third thing

wrong:

do first thing
 second thing
 third thing

right:

do first thing
   second thing
   third thing

right:

do
  first thing
  second thing
  third thing


### Indent to the first

Remember that, due to the "golden rule of indentation" described above, although the keyword do tells Haskell to insert a curly brace, where the curly brace goes depends not on the do, but on the thing that immediately follows it. For example, this weird-looking block of code is totally acceptable:

                                    do
first thing
second thing
third thing


As a result, you could also write a combined if/do block like this:

Wrong:

if foo
   then do first thing
        second thing
        third thing
   else do something else

Right:

if foo
   then do first thing
           second thing
           third thing
   else do something else


This is also the reason why you can write things like this

main = do
  first thing
  second thing

or

main = do first thing
          second thing

instead of

main =
  do first thing
     second thing

Either way is acceptable.

### if within do

This is a combination which trips up many Haskell programmers. Why does the following block of code not work?

-- why is this bad?
do first thing
   if condition
   then foo
   else bar
   third thing


Just to reiterate, the if then else block is not at fault for this problem. Instead, the issue is that the do block notices that the then part is indented to the same column as the if part, so it is not very happy, because from its point of view, it just found a new statement of the block. It is as if you had written the unsugared version shown below.

sweet (layout):

-- why is this bad?
do first thing
   if condition
   then foo
   else bar
   third thing

unsweet:

-- still bad, just explicitly so
do { first thing
   ; if condition
   ; then foo
   ; else bar
   ; third thing }


Naturally, the Haskell compiler is confused because it thinks that you never finished writing your if expression before writing a new statement. The compiler sees that you have written something like if condition;, which is clearly bad, because it is unfinished. So, in order to fix this, we need to indent the bottom parts of this if block a little bit inwards:

sweet (layout):

-- whew, fixed it!
do first thing
   if condition
      then foo
      else bar
   third thing

unsweet:

-- the fixed version without sugar
do { first thing
   ; if condition
        then foo
        else bar
   ; third thing }


This little bit of indentation prevents the do block from misinterpreting your then as a brand new expression. Of course, you might as well prefer to always add indentation before then and else, even when it is not really necessary. That wouldn't hurt legibility, and would avoid bad surprises like this one.

Exercises
The if-within-do issue has tripped up so many Haskellers that one programmer has posted a proposal to the Haskell prime initiative to add optional semicolons between if then else. How would that help?
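To wrap up the if-within-do discussion, here is a small runnable sketch using the fixed layout; classify and message are our own illustrative names, with message holding the pure part of the decision so it can be tested separately:

```haskell
-- the pure decision, mirrored by the conditional below
message :: Int -> String
message n = if n > 0 then "positive" else "not positive"

-- 'then' and 'else' are indented past the 'if', so the do block
-- sees the whole conditional as one single statement
classify :: Int -> IO ()
classify n = do
    putStrLn "checking..."
    if n > 0
        then putStrLn "positive"
        else putStrLn "not positive"
    putStrLn "done"

main :: IO ()
main = classify 3
```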
# More on datatypes

## Enumerations

One special case of the data declaration is the enumeration. This is simply a data type where none of the constructor functions have any arguments:

data Month = January | February | March | April
           | May | June | July | August
           | September | October | November | December


You can mix constructors that do and do not have arguments, but it's only an enumeration if none of the constructors have arguments. For instance:

data Colour = Black | Red | Green | Blue | Cyan
            | Yellow | Magenta | White
            | RGB Int Int Int


The last constructor takes three arguments, so Colour is not an enumeration. As you will see further on when we discuss classes and derivation, this distinction is not only conceptual.

Incidentally, the definition of the Bool datatype is:

data Bool = False | True
    deriving (Eq, Ord, Enum, Read, Show, Bounded)


## Named Fields (Record Syntax)

Consider a datatype whose purpose is to hold configuration settings. Usually when you extract members from this type, you really only care about one or possibly two of the many settings. Moreover, if many of the settings have the same type, you might often find yourself wondering "wait, was this the fourth or fifth element?" One thing you could do would be to write accessor functions. Consider the following made-up configuration type for a terminal program:

data Configuration = Configuration
    String   -- user name
    String   -- local host
    String   -- remote host
    Bool     -- is guest?
    Bool     -- is super user?
String -- current directory String -- home directory Integer -- time connected deriving (Eq, Show)  You could then write accessor functions, like (I've only listed a few): getUserName (Configuration un _ _ _ _ _ _ _) = un getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig ...  You could also write update functions to update a single element. Of course, now if you add an element to the configuration, or remove one, all of these functions now have to take a different number of arguments. This is highly annoying and is an easy place for bugs to slip in. However, there's a solution. We simply give names to the fields in the datatype declaration, as follows: data Configuration = Configuration { username :: String, localhost :: String, remotehost :: String, isguest :: Bool, issuperuser :: Bool, currentdir :: String, homedir :: String, timeconnected :: Integer }  This will automatically generate the following accessor functions for us: username :: Configuration -> String localhost :: Configuration -> String ...  Moreover, it gives us a convenient update method. Here is a short example for a "post working directory" and "change directory" like functions that work on Configurations: changeDir :: Configuration -> String -> Configuration changeDir cfg newDir = -- make sure the directory exists if directoryExists newDir then -- change our current directory cfg{currentdir = newDir} else error "directory does not exist" postWorkingDir :: Configuration -> String -- retrieve our current directory postWorkingDir cfg = currentdir cfg  So, in general, to update the field x in a datatype y to z, you write y{x=z}. You can change more than one; each should be separated by commas, for instance, y{x=z, a=b, c=d}. 
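As a quick check of the update syntax, here is a tiny self-contained sketch (the Point type and its field names are made up for illustration) showing that y{x=z} builds a new value rather than mutating the old one:

```haskell
-- A small record type, invented for this demonstration.
data Point = Point { px :: Int, py :: Int, tag :: String }
    deriving (Eq, Show)

origin :: Point
origin = Point { px = 0, py = 0, tag = "origin" }

-- Update two fields at once; origin itself is left untouched.
shifted :: Point
shifted = origin { px = 3, py = 4 }

main :: IO ()
main = do
    print shifted   -- the updated copy
    print origin    -- unchanged
```

Running main prints the shifted copy followed by the untouched original, which is exactly the immutability point made in the Note below.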
Note
Those of you familiar with object-oriented languages might be tempted, after all this talk about "accessor functions" and "update methods", to think of the y{x=z} construct as a setter method which modifies the value of x in a pre-existing y. It is not like that – remember that in Haskell variables are immutable. Therefore, if, using the example above, you write

```haskell
conf2 = changeDir conf1 "/opt/foo/bar"
```

conf2 will be defined as a Configuration which is just like conf1 except for having "/opt/foo/bar" as its currentdir, but conf1 will remain unchanged.

### It's only sugar

You can of course continue to pattern match against Configurations as you did before. The named fields are simply syntactic sugar; you can still write something like:

```haskell
getUserName (Configuration un _ _ _ _ _ _ _) = un
```

But there is little reason to. Finally, you can pattern match against named fields as in:

```haskell
getHostData (Configuration { localhost = lh, remotehost = rh }) = (lh, rh)
```

This matches the variable lh against the localhost field of the Configuration and the variable rh against the remotehost field. These matches of course succeed. You could also constrain the matches by putting values instead of variable names in these positions, as you would for standard datatypes. You can create values of Configuration in the old way, as shown in the first definition below, or with named-field syntax, as shown in the second definition below:

```haskell
initCFG = Configuration "nobody" "nowhere" "nowhere" False False "/" "/" 0

initCFG' = Configuration
    { username      = "nobody"
    , localhost     = "nowhere"
    , remotehost    = "nowhere"
    , isguest       = False
    , issuperuser   = False
    , currentdir    = "/"
    , homedir       = "/"
    , timeconnected = 0
    }
```

The first way is much shorter, although the second is much clearer.

## Parameterized Types

Parameterized types are similar to "generic" or "template" types in other languages. A parameterized type takes one or more type parameters.
For example, the Standard Prelude type Maybe is defined as follows:

```haskell
data Maybe a = Nothing | Just a
```

This says that the type Maybe takes a type parameter a. You can use this to declare, for example:

```haskell
lookupBirthday :: [Anniversary] -> String -> Maybe Anniversary
```

The lookupBirthday function takes a list of birthday records and a string and returns a Maybe Anniversary. Typically, our interpretation is that if it finds the name then it will return Just the corresponding record, and otherwise it will return Nothing. You can parameterize type and newtype declarations in exactly the same way. Furthermore, you can combine parameterized types in arbitrary ways to construct new types.

### More than one type parameter

We can also have more than one type parameter. An example of this is the Either type:

```haskell
data Either a b = Left a | Right b
```

For example:

```haskell
pairOff :: Int -> Either Int String
pairOff people
    | people < 0  = Right "Can't pair off negative number of people."
    | people > 30 = Right "Too many people for this activity."
    | even people = Left (people `div` 2)
    | otherwise   = Right "Can't pair off an odd number of people."

groupPeople :: Int -> String
groupPeople people =
    case pairOff people of
      Left groups   -> "We have " ++ show groups ++ " group(s)."
      Right problem -> "Problem! " ++ problem
```

In this example pairOff indicates how many groups you would have if you paired off a certain number of people for your activity. It can also let you know if you have too many people for your activity or if somebody will be left out. So pairOff will return either an Int representing the number of groups you will have, or a String describing the reason why you can't create your groups.

### Kind Errors

The flexibility of Haskell parameterized types can lead to errors in type declarations that are somewhat like type errors, except that they occur in the type declarations rather than in the program proper. Errors in these "types of types" are known as "kind" errors.
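As an illustration (the Forest type below is made up for this sketch), applying a type constructor to the wrong number of type arguments is the classic way to provoke a kind error:

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- Tree has kind * -> *: it must be applied to one type argument
-- (as in Tree Int) before it denotes an actual type. A declaration
-- like the commented-out one below is rejected with a kind error,
-- because Tree is used on its own where a plain type is expected:
--
--   data Forest = Forest [Tree]   -- kind error!
--
-- Supplying the type argument fixes the kind mismatch:
data Forest a = Forest [Tree a]
```

The broken line is left as a comment because, by its nature, it will not compile.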
You don't program with kinds: the compiler infers them for itself. But if you get parameterized types wrong then the compiler will report a kind error.

# Other data structures

## Trees

Now let's look at one of the most important data structures: trees. A tree is an example of a recursive datatype. While there are several different kinds of trees, for this discussion we will adopt the following definition, which captures the essential features of a tree that will concern us here:

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
```

As you can see, it's parametrised, so we can have trees of Ints, trees of Strings, trees of Maybe Ints, even trees of (Int, String) pairs, if you really want. What makes it special is that Tree appears in the definition of itself. We will see how this works by using an already known example: the list.

### Lists as Trees

As we have seen in More about lists and List Processing, we break lists down into two cases: an empty list (denoted by []), and an element of the specified type together with another list (denoted by (x:xs)). This gives us valuable insight about the definition of lists:

```haskell
data [a] = [] | (a:[a])  -- Pseudo-Haskell, will not work properly.
```

which is sometimes written as (for Lisp-inclined people):

```haskell
data List a = Nil | Cons a (List a)
```

As you can see this is also recursive, like the tree we had. Here, the constructor functions are [] and (:). They represent what we have called Leaf and Branch, and we can use them in pattern matching just as we did with the empty list and the (x:xs) pattern.

### Maps and Folds

We already know about maps and folds for lists. With our realisation that a list is some sort of tree, we can try to write map and fold functions for our own type Tree.
To recap:

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
    deriving (Show)

data [a] = [] | (:) a [a]
-- (:) a [a] is the same as (a:[a]) with prefix instead of infix notation.
```

We consider map first, then folds.

Note
Deriving is explained later on in the section Class Declarations. For now, understand it as telling Haskell (and by extension your interpreter) how to display a Tree instance.

#### Map

Let's take a look at the definition of map for lists:

```haskell
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
```

First, if we were to write treeMap, what would its type be? Defining the function is easier if you have an idea of what its type should be. We want it to work on a Tree of some type, and it should return another Tree of some type. What treeMap does is apply a function to each element of the tree, so we also need a function. In short:

```haskell
treeMap :: (a -> b) -> Tree a -> Tree b
```

See how this is similar to the list example? Next, we should start with the easiest case. When talking about a Tree, this is obviously the case of a Leaf. A Leaf only contains a single value, so all we have to do is apply the function to that value and then return a Leaf with the altered value:

```haskell
treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Leaf x) = Leaf (f x)
```

This also looks a lot like the empty-list case of map. Now if we have a Branch, it will include two subtrees; what do we do with them? When looking at the list map, you can see it uses a call to itself on the tail of the list. We shall do the same with the two subtrees. The complete definition of treeMap is as follows:

```haskell
treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Leaf x)            = Leaf (f x)
treeMap f (Branch left right) = Branch (treeMap f left) (treeMap f right)
```

We can make this a bit more readable by noting that treeMap f is itself a function with type Tree a -> Tree b.
This gives us the following revised definition:

```haskell
treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f = g
  where
    g (Leaf x)            = Leaf (f x)
    g (Branch left right) = Branch (g left) (g right)
```

If you don't understand it just now, re-read it. Especially the use of pattern matching may seem weird at first, but it is essential to the use of datatypes. The most important thing to remember is that pattern matching happens on constructor functions. If you understand it, read on for folds.

#### Fold

Now that we've written treeMap, let's try to write a treeFold. Again, let's take a look at the definition of foldr for lists, as it is easier to understand:

```haskell
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```

Recall that lists have two constructors:

```haskell
(:) :: a -> [a] -> [a]  -- two arguments
[]  :: [a]              -- zero arguments
```

Thus foldr takes two arguments corresponding to the two constructors:

```haskell
f :: a -> b -> b  -- a two-argument function
z :: b            -- like a zero-argument function
```

We'll use the same strategy to find a definition for treeFold as we did for treeMap. First, the type. We want treeFold to transform a tree of some type into a value of some other type; so in place of [a] -> b we will have Tree a -> b. How do we specify the transformation? First note that Tree a has two constructors:

```haskell
Branch :: Tree a -> Tree a -> Tree a
Leaf   :: a -> Tree a
```

So treeFold will have two arguments corresponding to the two constructors:

```haskell
fbranch :: b -> b -> b
fleaf   :: a -> b
```

Putting it all together we get the following type definition:

```haskell
treeFold :: (b -> b -> b) -> (a -> b) -> Tree a -> b
```

That is, the first argument, of type (b -> b -> b), is a function specifying how to combine subtrees; the second argument, of type a -> b, is a function specifying what to do with leaves; and the third argument, of type Tree a, is the tree we want to fold.
As with treeMap, we'll avoid repeating the arguments fbranch and fleaf by introducing a local function g:

```haskell
treeFold :: (b -> b -> b) -> (a -> b) -> Tree a -> b
treeFold fbranch fleaf = g
  where
    -- definition of g goes here
```

The argument fleaf tells us what to do with Leaf subtrees:

```haskell
g (Leaf x) = fleaf x
```

The argument fbranch tells us how to combine the results of "folding" two subtrees:

```haskell
g (Branch left right) = fbranch (g left) (g right)
```

Our full definition becomes:

```haskell
treeFold :: (b -> b -> b) -> (a -> b) -> Tree a -> b
treeFold fbranch fleaf = g
  where
    g (Leaf x)            = fleaf x
    g (Branch left right) = fbranch (g left) (g right)
```

For examples of how these work, copy the Tree data definition and the treeMap and treeFold functions to a Haskell file, along with the following:

```haskell
tree1 :: Tree Integer
tree1 =
    Branch
      (Branch
        (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)))
        (Branch (Leaf 4) (Branch (Leaf 5) (Leaf 6))))
      (Branch (Branch (Leaf 7) (Leaf 8)) (Leaf 9))

doubleTree = treeMap (*2)           -- doubles each value in tree
sumTree    = treeFold (+) id        -- sum of the leaf values in tree
fringeTree = treeFold (++) (: [])   -- list of the leaves of tree
```

Then load it into your favourite Haskell interpreter, and evaluate:

```haskell
doubleTree tree1
sumTree tree1
fringeTree tree1
```

## Other datatypes

Map and fold functions can be defined for any kind of data type. In order to generalize the strategy applied for lists and trees, in this final section we will work out a map and a fold for a very strange, intentionally contrived, datatype:

```haskell
data Weird a b = First a
               | Second b
               | Third [(a, b)]
               | Fourth (Weird a b)
```

It can be a useful exercise to write the functions as you follow the examples, trying to keep the coding one step ahead of your reading.

### General Map

Again, we will begin with weirdMap. The first important difference in working with this Weird type is that it has two type parameters.
For that reason, we will want the map function to take two functions as arguments, one to be applied to the elements of type a and another to the elements of type b. With that accounted for, we can write the type signature of weirdMap:

```haskell
weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
```

The next step is writing the definitions for weirdMap. The key point is that maps preserve the structure of a datatype, so the function must evaluate to a Weird which uses the same constructor as the one used for the original Weird. For that reason, we need one definition to handle each constructor, and these constructors are used as patterns for writing them. As before, to avoid repeating the weirdMap argument list over and over again a where clause comes in handy. A sketch of the function is below:

```haskell
weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
weirdMap fa fb = g
  where
    g (First x)  = --More to follow
    g (Second y) = --More to follow
    g (Third z)  = --More to follow
    g (Fourth w) = --More to follow
```

The first two cases are fairly straightforward, as there is just a single element of type a or b inside the Weird:

```haskell
weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
weirdMap fa fb = g
  where
    g (First x)  = First (fa x)
    g (Second y) = Second (fb y)
    g (Third z)  = --More to follow
    g (Fourth w) = --More to follow
```

Third is trickier because it contains another data structure (a list) whose elements are themselves data structures (the tuples). So we need to navigate the nested data structures, apply fa and fb to all elements of type a and b inside them, and eventually (as a map must preserve structure) produce a list of tuples – [(c,d)] – to be used with the constructor. The simplest approach might seem to be just breaking down the list inside the Weird and playing with the patterns:

```haskell
g (Third [])         = Third []
g (Third ((x,y):zs)) = Third ((fa x, fb y) : ((\(Third z) -> z) (g (Third zs))))
```

This appears to be written as a typical recursive function for lists.
We start by applying the functions of interest to the first element in order to obtain the head of the new list, (fa x, fb y). But what will we cons it to? As g requires a Weird argument, we need to wrap the list tail in a Weird in order to make the recursive call. But then g will give back a Weird and not a list, so we have to retrieve the modified list from that – that's the role of the lambda function. And finally, there is also the empty-list base case to be defined. After all of that, we are left with a messy function. Every recursive call of g requires wrapping zs into a Weird, while what we really wanted to do was to build a list with (fa x, fb y) and the modified zs. The problem with this solution is that g can (thanks to pattern matching) act directly on the list head but (due to its type signature) can't be called directly on the list tail. For that reason, it would be better to apply fa and fb without breaking down the list with pattern matching (as far as g is directly concerned, at least). But there was a way to directly modify a list element by element...

```haskell
g (Third z) = Third (map (\(x, y) -> (fa x, fb y)) z)
```

...our good old map function, which modifies all tuples in the list z using a lambda function. In fact, the first attempt at writing the definition looked just like an application of the list map except for the spurious Weird packing and unpacking. We got rid of these by having the pattern splitting of z done by map, which works directly with regular lists. You might find it useful to expand the map definition inside g to see that difference more clearly.
Finally, you may prefer to write this new version in an alternative, very clean way using list comprehension syntax:

```haskell
g (Third z) = Third [ (fa x, fb y) | (x, y) <- z ]
```

Having added the Third case, we only have the Fourth one left to do:

```haskell
weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
weirdMap fa fb = g
  where
    g (First x)  = First (fa x)
    g (Second y) = Second (fb y)
    g (Third z)  = Third (map (\(x, y) -> (fa x, fb y)) z)
    g (Fourth w) = --More to follow
```

Dealing with the recursive Fourth constructor is actually really easy: just apply g recursively!

```haskell
weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
weirdMap fa fb = g
  where
    g (First x)  = First (fa x)
    g (Second y) = Second (fb y)
    g (Third z)  = Third (map (\(x, y) -> (fa x, fb y)) z)
    g (Fourth w) = Fourth (g w)
```

### General Fold

While we were able to define a map by specifying as arguments a function for every separate type, this isn't enough for a fold. For a fold, we'll need a function for every constructor function. This is also the case with lists! Remember the constructors of a list are [] and (:). The z argument in the foldr function corresponds to the [] constructor, and the f argument corresponds to the (:) constructor. The Weird datatype has four constructors, so we need four functions – one for handling the internal structure of the datatype specified by each constructor. Next, we have an argument of the Weird a b type, and finally we want the whole fold function to evaluate to a value of some other, arbitrary, type. Additionally, each individual function we pass to weirdFold must evaluate to the same type weirdFold does.
That allows us to make a mock type signature and sketch the definition:

```haskell
weirdFold :: (something1 -> c) -> (something2 -> c) -> (something3 -> c) -> (something4 -> c)
          -> Weird a b -> c
weirdFold f1 f2 f3 f4 = g
  where
    g (First x)  = --Something of type c here
    g (Second y) = --Something of type c here
    g (Third z)  = --Something of type c here
    g (Fourth w) = --Something of type c here
```

Now we need to figure out which types something1, something2, something3 and something4 correspond to. That is done by analysing the constructors, since the functions must take as arguments the elements of the datatype (whose types are specified by the constructor type signature). Again, the types and definitions of the first two functions are easy to find. The third one isn't difficult either: for the purposes of folding, the list of (a,b) tuples is no different from a simple type – unlike in the map example, its internal structure does not concern us now. The fourth constructor, however, is recursive, and we have to watch out. As in the case of weirdMap, we also need to recursively call the g function. This brings us to the following, final, definition:

```haskell
weirdFold :: (a -> c) -> (b -> c) -> ([(a, b)] -> c) -> (c -> c) -> Weird a b -> c
weirdFold f1 f2 f3 f4 = g
  where
    g (First x)  = f1 x
    g (Second y) = f2 y
    g (Third z)  = f3 z
    g (Fourth w) = f4 (g w)
```

Note
If you were expecting very complex expressions in the weirdFold above and are surprised by the immediacy of the solution, it might be helpful to have a look at what the common foldr would look like if we wrote it in this style and didn't have the special square-bracket syntax of lists to distract us:

```haskell
-- List a is [a], Cons is (:) and Nil is []
data List a = Cons a (List a) | Nil

listFoldr :: (a -> b -> b) -> (b) -> List a -> b
listFoldr fCons fNil = g
  where
    g (Cons x xs) = fCons x (g xs)
    g Nil         = fNil
```

Now it is easier to see the parallels.
The extra complications are that Cons (that is, (:)) takes two arguments (and, for that reason, so does fCons) and is recursive, requiring a call to g. Also, fNil is of course not really a function, as it takes no arguments.

#### Folds on recursive datatypes

As far as folds are concerned, Weird was a fairly nice datatype to deal with: just one recursive constructor, which isn't even nested inside other structures. What would happen if we added a truly complicated fifth constructor?

```haskell
Fifth [Weird a b] a (Weird a a, Maybe (Weird a b))
```

A valid, and tricky, question. In general, the following rules apply:

• A function to be supplied to a fold has the same number of arguments as the corresponding constructor.
• The types of the arguments of such a function match the types of the constructor arguments, except if the constructor is recursive (that is, takes an argument of its own type).
• If a constructor is recursive, any recursive argument of the constructor will correspond to an argument of the type the fold evaluates to.[58]
• If a constructor is recursive, the complete fold function should be (recursively) applied to the recursive constructor arguments.
• If a recursive element appears inside another data structure, the appropriate map function for that data structure should be used to apply the fold function to it.

So f5 would have the type:

```haskell
f5 :: [c] -> a -> (Weird a a, Maybe c) -> c
```

as the type of Fifth is:

```haskell
Fifth :: [Weird a b] -> a -> (Weird a a, Maybe (Weird a b)) -> Weird a b
```

The definition of g for the Fifth constructor will be:

```haskell
g (Fifth list x (waa, mc)) = f5 (map g list) x (waa, maybeMap g mc)
  where
    maybeMap f Nothing  = Nothing
    maybeMap f (Just w) = Just (f w)
```

Now note that nothing strange happens with the Weird a a part. No g gets called. What's up? This is a recursion, right? Well... not really. Weird a a and Weird a b are different types, so it isn't a real recursion.
It isn't guaranteed that, for example, f2 will work with something of type a where it expects a type b. It can be true for some cases, but not for everything. Also look at the definition of maybeMap. Verify that it is indeed a map function:

• It preserves structure.
• Only types are changed.

# Classes and types

Back in Type basics II we had a brief encounter with type classes, which were presented as the mechanism used by Haskell to deal with number types. As we hinted back then, however, classes have many other uses. Starting with this chapter, we will see how to define and implement type classes, and how to use them to our advantage. Broadly speaking, the point of type classes is to ensure that certain operations will be available for values of chosen types. For example, if we know a type belongs to (or, to use the jargon, instantiates) the class Fractional, then we are guaranteed to, among other things, be able to perform real division with its values[59].

## Classes and instances

Up to now we have seen how existing type classes appear in signatures, like that of (==):

```haskell
(==) :: (Eq a) => a -> a -> Bool
```

Now it is time to switch perspectives.
First, we quote the definition of the Eq class from Prelude:

```haskell
class Eq a where
    (==), (/=) :: a -> a -> Bool

    -- Minimal complete definition:
    --     (==) or (/=)
    x /= y = not (x == y)
    x == y = not (x /= y)
```

The definition states that if a type a is to be made an instance of the class Eq it must support the functions (==) and (/=) – the class methods – both of them having type a -> a -> Bool. Additionally, the class provides default definitions for (==) and (/=) in terms of each other. As a consequence, there is no need for a type in Eq to provide both definitions – given one of them, the other will be generated automatically. With a class defined, we proceed to make existing types instances of it. Here is an arbitrary example of an algebraic data type made into an instance of Eq by an instance declaration:

```haskell
data Foo = Foo { x :: Integer, str :: String }

instance Eq Foo where
    (Foo x1 str1) == (Foo x2 str2) = (x1 == x2) && (str1 == str2)
```

And now we can apply (==) and (/=) to Foo values in the usual way:

```haskell
*Main> Foo 3 "orange" == Foo 6 "apple"
False
*Main> Foo 3 "orange" /= Foo 6 "apple"
True
```

A few important remarks:

• The class Eq is defined in the Standard Prelude. This code sample defines the type Foo and then declares it to be an instance of Eq. The three definitions (class, data type and instance) are completely separate and there is no rule about how they are grouped. That works both ways – you could just as easily create a new class Bar and then declare the type Integer to be an instance of it.
• Classes are not types, but categories of types; and so the instances of a class are not values, but types[60].
• The definition of (==) for Foo relies on the fact that Integer and String are also members of Eq, as we are using (==) with the values of the fields. In fact almost all types in Haskell (the most notable exception being functions) are members of Eq.
• Type synonyms defined with the type keyword cannot be made instances of a class.
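To see the default definitions at work, here is a small sketch (the Suit type is made up for illustration) that implements only (/=) and gets (==) for free from the class's default method:

```haskell
data Suit = Hearts | Spades

-- Defining only (/=) is a minimal complete definition for Eq;
-- the default  x == y = not (x /= y)  supplies (==) automatically.
instance Eq Suit where
    Hearts /= Hearts = False
    Spades /= Spades = False
    _      /= _      = True
```

After loading this, Hearts == Hearts evaluates to True even though we never wrote a (==) equation ourselves.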
## Deriving

Since equality tests between values are commonplace, in all likelihood most of the data types you create in any real program should be members of Eq, and for that matter a lot of them will also be members of other Prelude classes such as Ord and Show. This would require large amounts of boilerplate for every new type, so Haskell has a convenient way to declare the "obvious" instance definitions using the keyword deriving. Using it, Foo would be written as:

```haskell
data Foo = Foo { x :: Integer, str :: String }
    deriving (Eq, Ord, Show)
```

This makes Foo an instance of Eq with an automatically generated definition of (==) exactly equivalent to the one we just wrote, and also makes it an instance of Ord and Show for good measure. You can only use deriving with a limited set of built-in classes, which are described very briefly below:

• Eq – Equality operators == and /=.
• Ord – Comparison operators < <= > >=; min, max and compare.
• Enum – For enumerations only. Allows the use of list syntax such as [Blue .. Green].
• Bounded – Also for enumerations, but can also be used on types that have only one constructor. Provides minBound and maxBound, the lowest and highest values that the type can take.
• Show – Defines the function show, which converts a value into a string, and other related functions.
• Read – Defines the function read, which parses a string into a value of the type, and other related functions.

The precise rules for deriving the relevant functions are given in the language report. However, they can generally be relied upon to be the "right thing" for most cases. The types of the elements inside the data type must also be instances of the class you are deriving. This provision of special "magic" function synthesis for a limited set of predefined classes goes against the general Haskell philosophy that "built-in things are not special". However, it does save a lot of typing.
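As a quick demonstration of what deriving buys you, consider this made-up enumeration and a few of the functions the derived instances provide:

```haskell
data Colour = Red | Green | Blue
    deriving (Eq, Ord, Enum, Bounded, Show)

main :: IO ()
main = do
    print (Red < Blue)                         -- Ord: order follows declaration order
    print ([minBound .. maxBound] :: [Colour]) -- Bounded + Enum: every value of the type
    putStrLn (show Green)                      -- Show: conversion to String
```

Derived Ord always orders constructors in the order they are declared, which is why Red < Blue holds here.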
Experimental work with Template Haskell is looking at how this magic (or something like it) can be extended to all classes.

## Class inheritance

Classes can inherit from other classes. For example, here is the main part of the definition of Ord in Prelude:

```haskell
class (Eq a) => Ord a where
    compare              :: a -> a -> Ordering
    (<), (<=), (>=), (>) :: a -> a -> Bool
    max, min             :: a -> a -> a
```

The actual definition is rather longer and includes default implementations for most of the functions. The point here is that Ord inherits from Eq. This is indicated by the => notation in the first line, which mirrors the way classes appear in type signatures. Here, it means that for a type to be an instance of Ord it must also be an instance of Eq, and hence needs to implement the == and /= operations[61]. A class can inherit from several other classes: just put all the ancestor classes in the parentheses before the =>. Let us illustrate that with yet another Prelude quote:

```haskell
class (Num a, Ord a) => Real a where
    -- | the rational equivalent of its real argument with full precision
    toRational :: a -> Rational
```

## Standard classes

This diagram, copied from the Haskell Report, shows the relationships between the classes and types in the Standard Prelude. The names in bold are the classes, while the non-bold text stands for the types that are instances of each class ((->) refers to functions and [] to lists). The arrows linking classes indicate the inheritance relationships, pointing to the inheriting class.

## Type constraints

With all the pieces in place, we can go full circle by returning to the very first example involving classes in this book:

```haskell
(+) :: (Num a) => a -> a -> a
```

(Num a) => is a type constraint, which restricts the type a to instances of the class Num. In fact, (+) is a method of Num, along with quite a few other functions (notably, (*) and (-); but not (/)).
You can put several limits into a type signature like this:

```haskell
foo :: (Num a, Show a, Show b) => a -> a -> b -> String
foo x y t =
    show x ++ " plus " ++ show y ++ " is " ++ show (x + y) ++ ". " ++ show t
```

Here, the arguments x and y must be of the same type, and that type must be an instance of both Num and Show. Furthermore, the final argument t must be of some (possibly different) type that is also an instance of Show. This example also displays clearly how constraints propagate from the functions used in a definition (in this case, (+) and show) to the function being defined.

### Other uses

Beyond simple type signatures, type constraints can be introduced in a number of other places:

• instance declarations (typical with parametrized types);
• class declarations (constraints can be introduced in the method signatures in the usual way for any type variable other than the one defining the class[62]);
• data declarations[63], where they act as constraints for the constructor signatures.

Note
Type constraints in data declarations are less useful than they might seem at first. Consider:

```haskell
data (Num a) => Foo a = F1 a | F2 a String
```

Here, Foo is a type with two constructors, both taking an argument of a type a which must be in Num. However, the (Num a) => constraint is only effective for the F1 and F2 constructors, and not for other functions involving Foo. Therefore, in the following example...

```haskell
fooSquared :: (Num a) => Foo a -> Foo a
fooSquared (F1 x)   = F1 (x * x)
fooSquared (F2 x s) = F2 (x * x) s
```

...even though the constructors ensure a will be some type in Num we can't avoid duplicating the constraint in the signature of fooSquared[64].

## A concerted example

To provide a better view of the interplay between types, classes and constraints, we will present a very simple and somewhat contrived example. In it, we define a Located class, a Movable class which inherits from it, and a function with a Movable constraint implemented using the methods of the parent class, i.e.
Located. -- Location, in two dimensions. class Located a where getLocation :: a -> (Int, Int) class (Located a) => Movable a where setLocation :: (Int, Int) -> a -> a -- An example type, with accompanying instances. data NamedPoint = NamedPoint { pointName :: String , pointX :: Int , pointY :: Int } deriving (Show) instance Located NamedPoint where getLocation p = (pointX p, pointY p) instance Movable NamedPoint where setLocation (x, y) p = p { pointX = x, pointY = y } -- Moves a value of a Movable type by the specified displacement. -- This works for any movable, including NamedPoint. move :: (Movable a) => (Int, Int) -> a -> a move (dx, dy) p = setLocation (x + dx, y + dy) p where (x, y) = getLocation p  ### A word of advice Do not read too much into the Movable example just above; it is merely a demonstration of class-related language features. It would be a mistake to think that every single functionality which might be conceivably generalized, such as setLocation, needs a type class of its own. In particular, if all your Located instances should be able to be moved as well then Movable is unnecessary - and if there is just one instance there is no need for type classes at all! Classes are best used when there are several types instantiating it (or if you expect others to write additional instances) and you do not want users to know or care about the differences between the types. An extreme example would be Show: general-purpose functionality implemented by an immense number of types, about which you do not need to know a thing about before calling show. In the following chapters we will explore a number of important type classes in the libraries; they provide good examples of the sort of functionality which fits comfortably into a class. ## Notes 1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). 
Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letters, numbers, underscore (_) or tick (') is allowed.

# The Functor class

This chapter serves a dual role. In it, we will not only introduce the very important Functor class but also use it as a simple example of how type classes can be useful tools for solving problems in a more general way.

## Motivation

In Other data structures, we have seen how apply-to-all-elements operations (of which map is the prime example) are not specific to lists. One of the examples we worked through was that of the following tree datatype:

data Tree a = Leaf a | Branch (Tree a) (Tree a)
    deriving (Show)

The map function we wrote for it was:

treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Leaf x) = Leaf (f x)
treeMap f (Branch left right) = Branch (treeMap f left) (treeMap f right)

Later in that chapter we also defined "maps" for Weird and (in a passage of the final example) Maybe; and could conceivably do the same for any arbitrary data structure. The list-based map makes the apply-to-all-elements strategy general with regards to the function being applied[65]. Now, we can see that a further generalization is possible: mapping over types other than lists. Type classes provide a way of expressing this generality.

## Introducing Functor

Functor is a Prelude class for types which can be mapped over. It has a single method, called fmap. The class is defined as follows:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

The usage of the type variable f can look a little strange at first.
Here, f is a parametrized data type; in the signature of fmap, it takes a as a type parameter in one of its appearances and b in the other. It becomes easier to see what is going on by considering an instance of Functor - say, Maybe. By replacing f with Maybe we get the following signature for fmap...

fmap :: (a -> b) -> Maybe a -> Maybe b

... which fits the natural definition:

instance Functor Maybe where
    fmap f Nothing  = Nothing
    fmap f (Just x) = Just (f x)

(Incidentally, this definition is in Prelude; and thus we didn't really need to have implemented maybeMap for that example in "Other data structures".)

Unsurprisingly, the Functor instance for lists (also in Prelude) is just...

instance Functor [] where
    fmap = map

... and replacing f with [] in the fmap signature gives us the familiar type of map. Summing it all up, we can say that fmap is a generalization of map for any parametrized data type[66].

Naturally, we can provide Functor instances for our own data types. In particular, treeMap can be promptly relocated to an instance:

instance Functor Tree where
    fmap f (Leaf x) = Leaf (f x)
    fmap f (Branch left right) = Branch (fmap f left) (fmap f right)

To close this first stretch, a quick demo of fmap in action with the instances above (to reproduce it, you only need to load the data and instance declarations for Tree; the others are already in Prelude):

*Main> fmap (2*) [1,2,3,4]
[2,4,6,8]
*Main> fmap (2*) (Just 1)
Just 2
*Main> fmap (fmap (2*)) [Just 1, Just 2, Just 3, Nothing]
[Just 2,Just 4,Just 6,Nothing]
*Main> fmap (2*) (Branch (Branch (Leaf 1) (Leaf 2)) (Branch (Leaf 3) (Leaf 4)))
Branch (Branch (Leaf 2) (Leaf 4)) (Branch (Leaf 6) (Leaf 8))

Note
Beyond the [] and Maybe examples mentioned here, Prelude provides a number of other handy Functor instances. The full list can be found in GHC's documentation for the Control.Monad module.

### The functor laws

When providing a new instance of Functor, you should ensure it satisfies the two functor laws.
There is nothing mysterious about these laws; their role is to guarantee fmap behaves sanely and actually performs a mapping operation (as opposed to some other nonsense). Indeed, a type whose Functor instance does not obey the functor laws cannot properly be called a functor[67]. The first law is:

fmap id == id

id is the identity function, which returns its argument unaltered. The first law states that mapping id over a functor must return the functor unchanged. Next, the second law:

fmap (f . g) == fmap f . fmap g

It states that it should not matter whether we map a composed function or first map one function and then the other (assuming the application order remains the same in both cases).

## What did we gain?

At this point, we can ask what benefit we get from the extra layer of generalization brought by the Functor class. There are two significant advantages:

• Starting from the more superficial one, the availability of the fmap method relieves us from having to recall, read and write a plethora of differently named mapping methods (maybeMap, treeMap, weirdMap, ad infinitum). As a consequence, code becomes both cleaner and easier to understand - on spotting a use of fmap we instantly have a general idea of what is going on[68].
• More significant, however, is the ability, granted by the type class system, to write fmap-based algorithms which work out of the box with any functor - be it [], Maybe, Tree or whichever you need.

Indeed, a number of useful classes in the hierarchical libraries inherit from Functor[69]. Type classes make it possible to create general solutions to whole categories of problems. While it is possible that, depending on what you will use Haskell for, you will not need to define new classes often, you will certainly be using type classes all the time. Many of the most powerful features and sophisticated capabilities of Haskell rely on type classes, residing either in the standard libraries or elsewhere.
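The functor laws themselves can only be established for all values by equational reasoning, but it is easy to spot-check an instance at a few sample points. A minimal sketch for the Maybe instance (the two helper names here are made up for the demonstration):

```haskell
-- Spot-checks of the two functor laws for Maybe, at a few sample
-- values only: a sanity check, not a proof.
lawIdHolds :: Bool
lawIdHolds =
       fmap id (Just 3)               == id (Just (3 :: Int))
    && fmap id (Nothing :: Maybe Int) == id Nothing

lawComposeHolds :: Bool
lawComposeHolds =
       fmap ((+ 1) . (* 2)) (Just 3) == (fmap (+ 1) . fmap (* 2)) (Just (3 :: Int))
    && fmap ((+ 1) . (* 2)) Nothing  == (fmap (+ 1) . fmap (* 2)) (Nothing :: Maybe Int)
```

Both expressions evaluate to True; a thorough check would quantify over arbitrary values and functions, which is the sort of thing testing libraries such as QuickCheck automate.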
And of course, classes will be a permanent presence in this book from this point on - beginning with the all-important Monad.

# Monads

# Understanding monads

Monads are very useful in Haskell, but the concept is usually difficult for newcomers to grasp. Since they have so many applications, people often explain them from a particular point of view, which can confuse your understanding of monads in their full glory. Historically, monads were introduced into Haskell to perform input/output. After all, lazy evaluation means that the order of evaluation is rather unpredictable, whereas a predetermined execution order is crucial for things like reading and writing files. Hence, a method for specifying a sequence of operations was needed, and monads were exactly the right abstraction for that. But monads are by no means limited to input/output; they can model any imperative language. The choice of monad determines the semantics of this language, i.e., whether it supports exceptions, state, non-determinism, continuations, coroutines and so on[70]. The present chapter introduces the basic notions with the example of the Maybe monad, the simplest monad for handling exceptions.
Beginning Haskell programmers will probably also want to understand the IO monad and then broaden their scope to the many other monads and the effects they represent; this chapter provides the corresponding pointers.

## Definition

Let us dive in with both feet. A monad is defined by two things:

• a function return[71]; and
• an operator (>>=), which is pronounced "bind".

The function and operator are methods of the Monad type class and have the types

return :: a -> M a
(>>=)  :: M a -> ( a -> M b ) -> M b

and are required to obey three laws that will be explained later on.

For a concrete example, take the Maybe monad. The type constructor is M = Maybe, so that return and (>>=) have the types

return :: a -> Maybe a
(>>=)  :: Maybe a -> (a -> Maybe b) -> Maybe b

They are implemented as

return x = Just x

(>>=) m g = case m of
    Nothing -> Nothing
    Just x  -> g x

and our task is to explain how and why this definition is useful.

### Motivation: Maybe

To see the usefulness of (>>=) and the Maybe monad, consider the following example: imagine a family database that provides two functions

father :: Person -> Maybe Person
mother :: Person -> Maybe Person

that look up the name of someone's father or mother, returning Nothing if they are not stored in the database. With them, we can query various grandparents. For instance, the following function looks up the maternal grandfather:

maternalGrandfather :: Person -> Maybe Person
maternalGrandfather p = case mother p of
    Nothing -> Nothing
    Just m  -> father m  -- mother's father

Or consider a function that checks whether both grandfathers are in the database:

bothGrandfathers :: Person -> Maybe (Person, Person)
bothGrandfathers p =
    case father p of
        Nothing -> Nothing
        Just f  ->
            case father f of
                Nothing -> Nothing
                Just gf ->                  -- found first grandfather
                    case mother p of
                        Nothing -> Nothing
                        Just m  ->
                            case father m of
                                Nothing -> Nothing
                                Just gm ->  -- found second one
                                    Just (gf, gm)

What a mouthful!
Every single query might fail by returning Nothing, and the whole function must fail with Nothing if that happens. Clearly there has to be a better way to write that than repeating the case of Nothing again and again! Indeed, there is, and that's what the Maybe monad sets out to do. For instance, the function retrieving the maternal grandfather has exactly the same structure as the (>>=) operator, and we can rewrite it as

maternalGrandfather p = mother p >>= father

With the help of lambda expressions and return, we can rewrite the mouthful of two grandfathers as well:

bothGrandfathers p =
    father p >>= (\f ->
        father f >>= (\gf ->
            mother p >>= (\m ->
                father m >>= (\gm ->
                    return (gf, gm) ))))

While these nested lambda expressions may look confusing to you, the thing to take away here is that (>>=) eliminates any mention of Nothing, shifting the focus back to the interesting part of the code.

### Type class

In Haskell, the type class Monad is used to implement monads. It is provided by the Control.Monad module and included in the Prelude. The class has the following methods:

class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b
    (>>)   :: m a -> m b -> m b
    fail   :: String -> m a

Aside from return and bind, it defines two additional functions, (>>) and fail. The operator (>>), called "then", is a mere convenience, commonly implemented as

m >> n = m >>= \_ -> n

It is used for sequencing two monadic actions when the second does not care about the result of the first, which is common for monads like IO.

printSomethingTwice :: String -> IO ()
printSomethingTwice str = putStrLn str >> putStrLn str

The function fail handles pattern match failures in do notation. It's an unfortunate technical necessity and doesn't really have anything to do with monads. You are advised not to call fail directly in your code.
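The father and mother functions were left abstract above. To experiment in GHCi, one can fake the database with a simple list of pairs; every name and the table layout below are invented purely for illustration:

```haskell
-- A toy stand-in for the family database; all names are made up.
type Person = String

-- (child, (father, mother)) pairs.
familyDB :: [(Person, (Person, Person))]
familyDB =
    [ ("Ada",  ("Bert", "Cora"))
    , ("Bert", ("Dan",  "Eve"))
    , ("Cora", ("Fred", "Gina"))
    ]

father, mother :: Person -> Maybe Person
father p = fmap fst (lookup p familyDB)
mother p = fmap snd (lookup p familyDB)

-- The (>>=)-based definitions from the text, unchanged.
maternalGrandfather :: Person -> Maybe Person
maternalGrandfather p = mother p >>= father

bothGrandfathers :: Person -> Maybe (Person, Person)
bothGrandfathers p =
    father p >>= (\f ->
        father f >>= (\gf ->
            mother p >>= (\m ->
                father m >>= (\gm ->
                    return (gf, gm) ))))
```

With this table, bothGrandfathers "Ada" is Just ("Dan","Fred"), while bothGrandfathers "Bert" is Nothing, because Dan's own parents are missing from the table.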
## Notions of Computation

While you probably agree now that (>>=) and return are very handy for removing boilerplate code that crops up when using Maybe, there is still the question of why this works and what this is all about. To answer this, we shall write the example with the two grandfathers in a very suggestive style:

bothGrandfathers p = do
    f  <- father p   -- assign p's father to f
    gf <- father f   -- assign f's father (p's paternal grandfather) to gf
    m  <- mother p   -- assign p's mother to m
    gm <- father m   -- assign m's father (p's maternal grandfather) to gm
    return (gf, gm)  -- return result pair

If this looks like a code snippet of an imperative programming language to you, that's because it is. In particular, this imperative language supports exceptions: father and mother are functions that might fail to produce results, i.e. raise an exception, and when that happens, the whole do-block will fail, i.e. terminate with an exception.

In other words, the expression father p, which has type Maybe Person, is interpreted as a statement of an imperative language that returns a Person as result. This is true for all monads: a value of type M a is interpreted as a statement of an imperative language that returns a value of type a as result; and the semantics of this language are determined by the monad M.

Now, the bind operator (>>=) is simply a function version of the semicolon. Just like a let expression can be written as a function application,

let x = foo in bar    corresponds to    (\x -> bar) foo

an assignment and semicolon can be written as the bind operator:

x <- foo; bar    corresponds to    foo >>= (\x -> bar)

The return function lifts a value of type a to a full-fledged statement of type M a in the imperative language.

Different semantics of the imperative language correspond to different monads. The following table shows the classic selection that every Haskell programmer should know.
If the idea behind monads is still unclear to you, studying each of the examples in the following subchapters will not only give you a well-rounded toolbox but also help you understand the common abstraction behind them.

| Semantics | Monad | Wikibook chapter |
|---|---|---|
| Exception (anonymous) | Maybe | Haskell/Understanding monads/Maybe |
| Exception (with error description) | Error | Haskell/Understanding monads/Error |
| Global state | State | Haskell/Understanding monads/State |
| Input/Output | IO | Haskell/Understanding monads/IO |
| Nondeterminism | [] (lists) | Haskell/Understanding monads/List |
| Environment | Reader | Haskell/Understanding monads/Reader |
| Logger | Writer | Haskell/Understanding monads/Writer |

Furthermore, the semantics do not only occur in isolation but can also be mixed and matched. This gives rise to monad transformers. Some monads, like monadic parser combinators, have loosened their correspondence to an imperative language.

## Monad Laws

We can't just allow any junky implementation of (>>=) and return if we want to interpret them as the primitive building blocks of an imperative language. For that, an implementation has to obey the following three laws:

m >>= return     =  m                        -- right unit
return x >>= f   =  f x                      -- left unit
(m >>= f) >>= g  =  m >>= (\x -> f x >>= g)  -- associativity

In Haskell, every instance of the Monad type class is expected to obey them.

### Return as neutral element

The behavior of return is specified by the left and right unit laws. They state that return doesn't perform any computation, it just collects values. For instance,

maternalGrandfather p = do
    m  <- mother p
    gm <- father m
    return gm

is exactly the same as

maternalGrandfather p = do
    m <- mother p
    father m

by virtue of the right unit law.

### Associativity of bind

The law of associativity makes sure that – just like the semicolon – the bind operator (>>=) only cares about the order of computations, not about their nesting; e.g.
we could have written bothGrandfathers like this (compare with our earliest version without do):

bothGrandfathers p =
    (father p >>= father) >>= (\gf ->
        (mother p >>= father) >>= (\gm ->
            return (gf, gm) ))

The associativity of the then operator (>>) is a special case:

(m >> n) >> o  =  m >> (n >> o)

It is easier to picture the associativity of bind by recasting the law as

(f >=> g) >=> h  =  f >=> (g >=> h)

where (>=>) is the monad composition operator, a close analogue of the (flipped) function composition operator (.), defined as

(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g

## Monads and Category Theory

Monads originally come from a branch of mathematics called Category Theory. Fortunately, it is entirely unnecessary to understand category theory in order to understand and use monads in Haskell. However, the definition of monads in Category Theory uses a slightly different presentation. Translated into Haskell, this presentation gives an alternative yet equivalent definition of a monad which can give us some additional insight[72].

So far, we have defined monads in terms of >>= and return. The alternative definition, instead, starts with monads as functors with two additional combinators:

fmap   :: (a -> b) -> M a -> M b  -- functor
return :: a -> M a
join   :: M (M a) -> M a

(Reminder: as discussed in the chapter on the functor class, a functor M can be thought of as a container, so that M a "contains" values of type a, with a corresponding mapping function, i.e. fmap, that allows functions to be applied to values inside it.)

Under this interpretation, the functions behave as follows:

• fmap applies a given function to every element in a container;
• return packages an element into a container;
• join takes a container of containers and flattens it into a single container.
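Trying join on concrete containers makes the "flattening" description tangible. A small sketch using the list and Maybe instances (join lives in Control.Monad; the value names are made up):

```haskell
import Control.Monad (join)

-- join removes one layer of structure: a list of lists becomes a
-- flat list, and a Just wrapping another Maybe collapses to that Maybe.
flatList :: [Int]
flatList = join [[1, 2], [3], []]  -- [1,2,3]

flatJust :: Maybe Int
flatJust = join (Just (Just 5))    -- Just 5

flatNothing :: Maybe Int
flatNothing = join (Just Nothing)  -- Nothing
```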
With these functions, the bind combinator can be defined as follows:

m >>= g = join (fmap g m)

Likewise, we could give a definition of fmap and join in terms of (>>=) and return:

fmap f x = x >>= (return . f)
join x   = x >>= id

### Is my Monad a Functor?

At this point we might, with good reason, deduce that all monads are by definition functors as well. While according to category theory that is indeed the case, GHC does it differently, and the Monad and Functor classes are unrelated. That will likely change in future versions of Haskell, so that every Monad instance will be guaranteed to have a matching Functor instance and the corresponding fmap. Meanwhile, Control.Monad defines liftM, a function with a strangely familiar type signature...

liftM :: (Monad m) => (a1 -> r) -> m a1 -> m r

As you might suspect, liftM is merely fmap implemented with (>>=) and return, just as we have done above. For a properly implemented monad with a matching Functor (that is, pretty much any sensible monad) liftM and fmap are interchangeable.

# The Maybe monad

We introduced monads using Maybe as an example.
The Maybe monad represents computations which might "go wrong", in the sense of not returning a value; its definitions of return and (>>=) amount to:[73]

return :: a -> Maybe a
return x = Just x

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
(>>=) m g = case m of
    Nothing -> Nothing
    Just x  -> g x

Now, we will present two additional examples, similar in spirit to the grandparents one in the previous chapter, and then conclude with some general points.

## Safe functions

The Maybe datatype provides a way to make a safety wrapper around functions which can fail to work for a range of arguments. head and tail, which only work with non-empty lists, would be a good example of that. Another typical case, which we will explore in this section, is that of mathematical functions like sqrt and log, which (as far as real numbers are concerned) are only defined for non-negative arguments. For instance, our implementation of a "safe" square root function could be:

safeSqrt :: (Floating a, Ord a) => a -> Maybe a
safeSqrt x
    | x >= 0    = Just (sqrt x)
    | otherwise = Nothing

We could now decide to write similar "safe functions" for all functions with limited domains, such as division, logarithm and inverse trigonometric functions (safeDiv, safeLog, safeArcSin, etc. - safeLog, say, giving Just (log x) for positive x and Nothing otherwise). However, when we try to combine them we run into a problem: they output a Maybe a, but take an a as an input. As a result, before each application, we have to check whether the previous operation was successful:

safeLogSqrt :: (Floating a, Ord a) => a -> Maybe a
safeLogSqrt x = case safeSqrt x of
    Just root -> safeLog root
    Nothing   -> Nothing

You can see the problem: the code looks ugly already with a single concatenation, and with multiple concatenations it would get even worse. This is in stark contrast to the easy concatenation that we get with "unsafe" functions:

unsafeLogSqrt = log . sqrt

This, however, is precisely the sort of issue monads can tackle: through (>>=), they allow us to concatenate computations easily, so that the result of one is fed to the next. Maybe being a monad, we can achieve the desired effect very simply:

> return 1000 >>= safeSqrt >>= safeLog
Just 3.4538776394910684
> return (-1000) >>= safeSqrt >>= safeLog
Nothing

safeLogSqrt :: (Floating a, Ord a) => a -> Maybe a
safeLogSqrt x = return x >>= safeSqrt >>= safeLog

Every additional operation is just another >>= safeOperation term - no checks required, since (>>=) takes care of that. To make things even easier, we can write the combination in a style which mirrors composition of "unsafe" functions with (.) by using the (<=<) operator from the Control.Monad module:

import Control.Monad ((<=<))

safeLogSqrt' = safeLog <=< safeSqrt

## Lookup tables

A lookup table is a table which relates keys to values. You look up a value by knowing its key and using the lookup table. For example, you might have a lookup table of contact names as keys to their phone numbers as the values in a phone book application. An elementary way of implementing lookup tables in Haskell is to use a list of pairs: [(a, b)]. Here a is the type of the keys, and b the type of the values[74]. Here's how the phone book lookup table might look:

phonebook :: [(String, String)]
phonebook = [ ("Bob",   "01788 665242")
            , ("Fred",  "01624 556442")
            , ("Alice", "01889 985333")
            , ("Jane",  "01732 187565") ]

The most common thing you might do with a lookup table is look up values! However, this computation might fail. Everything is fine if we try to look up one of "Bob", "Fred", "Alice" or "Jane" in our phone book, but what if we were to look up "Zoe"? Zoe isn't in our phone book, so the lookup would fail.
Hence, the Haskell function to look up a value from the table is a Maybe computation (it is available from Prelude):

lookup :: Eq a => a  -- a key
       -> [(a, b)]   -- the lookup table to use
       -> Maybe b    -- the result of the lookup

Let us explore some of the results from lookup:

Prelude> lookup "Bob" phonebook
Just "01788 665242"
Prelude> lookup "Jane" phonebook
Just "01732 187565"
Prelude> lookup "Zoe" phonebook
Nothing

Now let's expand this into using the full power of the monadic interface. Say we're now working for the government, and once we have a phone number from our contact, we want to look up this phone number in a big, government-sized lookup table to find out the registration number of their car. This, of course, will be another Maybe-computation. But if they're not in our phone book, we certainly won't be able to look up their registration number in the governmental database! So what we need is a function that will take the result from the first computation and feed it into the second lookup, but only if we didn't get Nothing the first time around. If we did indeed get Nothing from the first computation, or if we get Nothing from the second computation, our final result should be Nothing.
comb :: Maybe a -> (a -> Maybe b) -> Maybe b
comb Nothing  _ = Nothing
comb (Just x) f = f x

As you might have guessed by now, comb is just (>>=); and so we can chain our computations together:

getRegistrationNumber :: String       -- their name
                      -> Maybe String -- their registration number
getRegistrationNumber name =
    lookup name phonebook >>=
        (\number -> lookup number governmentalDatabase)

If we then wanted to use the result from the governmental database lookup in a third lookup (say we want to look up their registration number to see if they owe any car tax), then we could extend our getRegistrationNumber function:

getTaxOwed :: String       -- their name
           -> Maybe Double -- the amount of tax they owe
getTaxOwed name =
    lookup name phonebook >>=
        (\number -> lookup number governmentalDatabase) >>=
            (\registration -> lookup registration taxDatabase)

Or, using the do-block style:

getTaxOwed name = do
    number       <- lookup name phonebook
    registration <- lookup number governmentalDatabase
    lookup registration taxDatabase

Let's just pause here and think about what would happen if we got a Nothing anywhere. Trying to use >>= to combine a Nothing from one computation with another function will result in the Nothing being carried on and the second function ignored (refer to our definition of comb above if you're not sure). That is, a Nothing at any stage in the large computation will result in a Nothing overall, regardless of the other functions! Thus we say that the structure of the Maybe monad propagates failures.

## Summary

The key features of the Maybe monad are that:

1. It represents computations that could fail.
2. It propagates failure.

Another trait of the Maybe monad is that it is "open": if we have a Just value, we can extract its associated value by pattern matching (which is what we did in the first implementation of safeLogSqrt). That is not true for all monads; often, they will be designed so as to help you by hiding unnecessary details.
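Returning to the lookup example: to actually run getTaxOwed, the two government tables need contents. The entries below are invented for illustration; only phonebook appears in the text:

```haskell
phonebook :: [(String, String)]
phonebook = [ ("Bob",   "01788 665242")
            , ("Fred",  "01624 556442")
            , ("Alice", "01889 985333")
            , ("Jane",  "01732 187565") ]

-- Hypothetical government tables; every registration number and tax
-- amount is made up so the chain has something to find.
governmentalDatabase :: [(String, String)]
governmentalDatabase = [ ("01788 665242", "B1784") ]

taxDatabase :: [(String, Double)]
taxDatabase = [ ("B1784", 145.0) ]

getTaxOwed :: String -> Maybe Double
getTaxOwed name = do
    number       <- lookup name phonebook
    registration <- lookup number governmentalDatabase
    lookup registration taxDatabase
```

Here getTaxOwed "Bob" succeeds with Just 145.0, while both getTaxOwed "Jane" (no registration on file) and getTaxOwed "Zoe" (not in the phone book) short-circuit to Nothing.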
It is perfectly possible to make a "no-exit" monad, from which it is never possible to extract "pure" values, the obvious example being the IO monad: in this way it is impossible not to notice that an operation involves input/output (and thereby side effects), because any I/O operation will carry around the IO monad, which functions as a warning sign.

# The List monad

Lists are commonplace in Haskell, and you surely have used them extensively before getting to this chapter. What is novel here is that the list type, [] a, is a monad too. Taken as monads, lists are used to model nondeterministic computations which may return an arbitrary number of results. There is a certain parallel with how Maybe represented computations which could return zero or one value; the difference being that through lists zero, one or many values can be returned (the number of values being reflected in the length of the list).

## Instantiation as monad

The implementation of the return function, that injects a generic value into a list, is quite simple:

return x = [x]

In other words, injecting a value into a list means making a list containing that value only.
The type of the list return is return :: a -> [a], or, equivalently, return :: a -> [] a. The latter style of writing it makes it more obvious that we are replacing the generic type constructor in the signature of return (which we had called M in Understanding monads) by the list type constructor [] (do not confuse the latter with the empty list!). The binding operator is a little less trivial. We will begin by considering its type, which for the case of lists should be: [a] -> (a -> [b]) -> [b]  That is, from a list of a-type values and a function producing lists of b-type values from each a-type one, we get a list of b's. Now, mapping the function over the list of a's would give [[b]], a list of lists of b's. By using concat to concatenate the elements of this list of lists we get a list of b's - and a definition of (>>=): xs >>= f = concat (map f xs)  The bind operator is always key to understanding how a monad does its job, for it implements the chaining strategy which is responsible for making the monad useful. In the case of the list monad, the type of the function to the right of (>>=) brings non-determinism into play, as evaluating to a list for each value corresponds in effect to giving back a variable number of results for each value passed to the function. map in turn applies f to all values from the xs computation, leaving us with potentially many results for each element of xs. Finally, concat combines all of these results in a simple list of type [b], thus ensuring that the type of the final computation matches the signature of (>>=) and therefore that we can proceed with chaining it to other list computations. ## Bunny invasion To begin with, a simple example showing that it is easy to incorporate the familiar list processing functions in monadic code. A rabbit couple can spawn six rabbits every month. 
Considering that half of them will be female, we can model the number of female rabbits after a certain number of generations using a function like:

Prelude> let generation = replicate 3
Prelude> ["bunny"] >>= generation
["bunny","bunny","bunny"]
Prelude> ["bunny"] >>= generation >>= generation
["bunny","bunny","bunny","bunny","bunny","bunny","bunny","bunny","bunny"]

In this trivial example all elements are equal, but one could for example model radioactive decay or chemical reactions, or any phenomenon that produces a series of elements starting from a single one.

## Noughts and crosses

Suppose we are modelling the game of noughts and crosses (known as tic-tac-toe in some parts of the world). An interesting (if somewhat contrived) problem might be to find all the possible ways the game could progress: find the possible states of the board 3 turns later, given a certain board configuration (i.e. a game in progress). The problem can be boiled down to the following steps:

1. Find the list of possible board configurations for the next turn.
2. Repeat the computation for each of these configurations: replace each configuration, call it C, with the list of possible configurations of the turn after C.
3. We will now have a list of lists (each sublist representing the turns after a previous configuration), so in order to be able to repeat this process, we need to collapse this list of lists into a single list.

This structure should look similar to the monad instance for list described above.
Here's how it might look, without using the list monad (concatMap is a shortcut for when you need to concat the results of a map: concatMap f xs = concat (map f xs)):

nextConfigs :: Board -> [Board]
nextConfigs = undefined -- details not important

tick :: [Board] -> [Board]
tick bds = concatMap nextConfigs bds

thirdConfigs :: Board -> [Board]
thirdConfigs bd = tick $ tick $ tick [bd]

concatMap, however, is just the bind operator for lists, only with the arguments in reversed order; and so we can write a literal translation of thirdConfigs using the list monad:

thirdConfigs :: Board -> [Board]
thirdConfigs bd = return bd >>= nextConfigs >>= nextConfigs >>= nextConfigs

Alternatively, we can write the above as a do block - compare the translation with what we did for the grandparents example in Understanding monads:

thirdConfigs :: Board -> [Board]
thirdConfigs bd = do
    bd0 <- return bd       -- Initial configuration
    bd1 <- nextConfigs bd0 -- After one turn
    bd2 <- nextConfigs bd1 -- After two turns
    nextConfigs bd2        -- After three turns

Note how the left arrow in the list monad, in effect, means "do whatever follows with all elements of the list on the right of the arrow". That effect is due to the mapping performed by the bind operator.

We can simplify the monadic code a bit further by noting that using return bd to get a list with a single element and then immediately extracting that single element with the left arrow is redundant:

thirdConfigs :: Board -> [Board]
thirdConfigs bd = do
    bd1 <- nextConfigs bd
    bd2 <- nextConfigs bd1
    nextConfigs bd2

Short and sweet, with the plumbing formerly done by the tick function now wholly implicit.

## List comprehensions

One thing that can be helpful in visualizing how the list monad works is its uncanny similarity to list comprehensions. If we slightly modify the do block we just wrote for thirdConfigs so that it ends with a return...

thirdConfigs bd = do
    bd1 <- nextConfigs bd
    bd2 <- nextConfigs bd1
    bd3 <- nextConfigs bd2
    return bd3

...
it mirrors exactly the following list comprehension: thirdConfigs bd = [ bd3 | bd1 <- nextConfigs bd, bd2 <- nextConfigs bd1, bd3 <- nextConfigs bd2 ]  (In a list comprehension, it is perfectly legal to use the elements drawn from one list to define the following ones, like we did here.) The resemblance is no coincidence: list comprehensions are, behind the scenes, also defined in terms of concatMap. For the correspondence to be complete, however, there should be a way to reproduce the filtering that list comprehensions are capable of. We will explain how that can be achieved a little later, in the Additive monads chapter.  Print version Monads Understanding monads (May 04, 2010) >> Maybe >> List >> do Notation >> IO >> State >> Additive monads (MonadPlus) >> Monad transformers >> Practical monads edit this chapter Haskell edit book structure # do Notation  Print version Monads edit this chapter Among the initial examples of monads, there were some which used an alternative syntax with do blocks for chaining computations. Those examples, however, were not the first time we have seen do: back in Simple input and output we had seen how code for doing input-output was written in an identical way. That is no coincidence: what we have been calling IO actions are just computations in a monad - namely, the IO monad. We will revisit IO soon; for now, though, let us consider exactly how the do notation translates into regular monadic code. Since the following examples all involve IO, we will refer to the computations/monadic values as actions, like in the earlier parts of the book. Still do works with any monad; there is nothing specific about IO in how it works. ## Translating the then operator The (>>) (then) operator is easy to translate between do notation and plain code, so we will see it first. For example, suppose we have a chain of actions like the following one: putStr "Hello" >> putStr " " >> putStr "world!" 
>> putStr "\n"  We can rewrite it in do notation as follows: do putStr "Hello" putStr " " putStr "world!" putStr "\n"  This sequence of instructions is very similar to what you would see in any imperative language such as C. The actions being chained could be anything, as long as all of them are in the same monad. In the context of the IO monad, for instance, an action might be writing to a file, opening a network connection or asking the user for input. The general way we translate these actions from the do notation to standard Haskell code is: do action1 action2 action3  which becomes action1 >> do action2 action3  and so on until the do block is empty. ## Translating the bind operator The (>>=) is a bit more difficult to translate from and to do notation, essentially because it involves passing a value, namely the result of an action, downstream in the binding sequence. These values can be stored using the <- notation, and used downstream in the do block. do x1 <- action1 x2 <- action2 action3 x1 x2  x1 and x2 are the results of action1 and action2 (for instance, if action1 is an IO Integer then x1 will be bound to an Integer). They are passed as arguments to action3, whose return type is a third action. The do block is broadly equivalent to the following vanilla Haskell snippet: action1 >>= \x1 -> action2 >>= \x2 -> action3 x1 x2  The second argument of (>>=) is a function specifying what to do with the result of the action passed as first argument; and so by chaining lambdas in this way we can pass results downstream. Remember that without extra parentheses a lambda extends all the way to the end of the expression, so x1 is still in scope at the point we call action3. We can rewrite the chain of lambdas more legibly as: action1 >>= \x1 -> action2 >>= \x2 -> action3 x1 x2  ### The fail method Above we said the snippet with lambdas was "broadly equivalent" to the do block. 
It is not an exact translation because the do notation adds special handling of pattern match failures. x1 and x2, when placed at the left of either <- or ->, are patterns being matched. Therefore, if action1 returned a Maybe Integer we could write a do block like this...

do Just x1 <- action1
   x2 <- action2
   action3 x1 x2

...and x1 will be bound to an Integer. In such a case, however, what happens if action1 returns Nothing? Ordinarily, the program would crash with a non-exhaustive patterns error, just like the one we get when calling head on an empty list. With do notation, however, failures will be handled with the fail method of the relevant monad. The translation of the first statement done behind the scenes is equivalent to:

action1 >>= f
where f (Just x1) = do x2 <- action2
                       action3 x1 x2
      f _ = fail "..." -- A compiler-generated message.

What fail actually does is up to the monad instance. While it will often just rethrow the pattern matching error, monads which incorporate some sort of error handling may deal with the failure in their own specific ways. For instance, Maybe has fail _ = Nothing; analogously, for the list monad fail _ = [] [75]. All things considered, the fail method is an artefact of do notation. It is better not to call it directly from your code, and to rely on automatic handling of pattern match failures only when you are sure that fail will do something sensible for the monad you are using.

## Example: user-interactive program

Note
We are going to interact with the user, so we will use putStr and getLine alternately. To avoid unexpected results in the output, remember to disable output buffering by importing System.IO and putting hSetBuffering stdout NoBuffering at the top of your code. Alternatively, you can explicitly flush the output buffer before each interaction with the user (namely a getLine) using hFlush stdout. If you are testing this code with GHCi, you don't have such problems.
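Before moving on to the interactive example, the fail behaviour described above can be seen without involving IO at all. The sketch below is a made-up illustration using the Maybe monad: a failed pattern match inside a do block produces Nothing instead of crashing the program.

```haskell
-- A failed pattern match in a Maybe do block invokes fail, which for
-- Maybe yields Nothing rather than a runtime crash.
firstChars :: String -> String -> Maybe (Char, Char)
firstChars xs ys = do
    (x:_) <- Just xs   -- gives Nothing if xs is empty
    (y:_) <- Just ys   -- gives Nothing if ys is empty
    return (x, y)
```

Here firstChars "ab" "cd" evaluates to Just ('a','c'), while firstChars "" "cd" evaluates to Nothing, because the pattern (x:_) cannot match the empty string.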
Consider this simple program that asks the user for his or her first and last names:

nameDo :: IO ()
nameDo = do putStr "What is your first name? "
            first <- getLine
            putStr "And your last name? "
            last <- getLine
            let full = first ++ " " ++ last
            putStrLn ("Pleased to meet you, " ++ full ++ "!")

The code in do notation is readable and easy to follow. The <- notation makes it possible to treat first and last names as if they were pure variables, though they never can be in reality: the function getLine is not pure, because it can give a different result every time it is run. A possible translation into vanilla monadic code would be:

nameLambda :: IO ()
nameLambda = putStr "What is your first name? " >>
             getLine >>= \first ->
             putStr "And your last name? " >>
             getLine >>= \last ->
             let full = first ++ " " ++ last
             in putStrLn ("Pleased to meet you, " ++ full ++ "!")

In cases like this, in which we just want to chain several actions, the imperative style of do notation feels natural and can be pretty convenient. In comparison, monadic code with explicit binds and lambdas is something of an acquired taste.

The example includes a let statement in the do block. It is translated to a regular let expression, with the in part being the translation of whatever follows it in the do block (in the example, that means the final putStrLn).

## Returning values

The last statement in do notation is the result of the do block. In the previous example, the result was of type IO (), that is, an empty value in the IO monad. Suppose that we want to rewrite the example, but returning an IO String with the acquired name. All we need to do is add a return:

nameReturn :: IO String
nameReturn = do putStr "What is your first name? "
                first <- getLine
                putStr "And your last name? "
                last <- getLine
                let full = first ++ " " ++ last
                putStrLn ("Pleased to meet you, " ++ full ++ "!")
                return full

This example will "return" the full name as a string inside the IO monad, which can then be utilized downstream elsewhere:

greetAndSeeYou :: IO ()
greetAndSeeYou = do name <- nameReturn
                    putStrLn ("See you, " ++ name ++ "!")

Here, the name will be obtained from user input and the greeting will be printed as side effects of nameReturn. Its return value will then be used to prepare the goodbye message. This kind of code is why it is so easy to misunderstand the nature of return: it does not only share a name with C's keyword, it seems to have the same function here. A small variation on the example, however, will dispel that impression:

nameReturnAndCarryOn = do putStr "What is your first name? "
                          first <- getLine
                          putStr "And your last name? "
                          last <- getLine
                          let full = first ++ " " ++ last
                          putStrLn ("Pleased to meet you, " ++ full ++ "!")
                          return full
                          putStrLn "I am not finished yet!"

The string in the extra line will be printed out, as return is not a final statement interrupting the flow, like it is in C and other languages. Indeed, the type of nameReturnAndCarryOn is IO (), the type of the final putStrLn action; after the function is called, the IO String created by the return full will disappear without a trace.

## It is just sugar

The do notation is just a syntactical convenience; it does not add anything essential. Keeping that in mind, we can raise a few points about style. First of all, do is never necessary for a single action; and so the Haskell "Hello world" is simply...

main = putStrLn "Hello world!"

...without any do in sight. Additionally, snippets like this one are always redundant:

fooRedundant = do x <- bar
                  return x

Thanks to the monad laws, we can (and should!) write it as:

foo = bar

A subtler but crucial point is related to function composition.
As we already know, the greetAndSeeYou action in the section just above could be rewritten as:

greetAndSeeYou :: IO ()
greetAndSeeYou = nameReturn >>= \name -> putStrLn ("See you, " ++ name ++ "!")

While you might find the lambda a little unsightly, suppose we had a printSeeYou function defined elsewhere:

printSeeYou :: String -> IO ()
printSeeYou name = putStrLn ("See you, " ++ name ++ "!")

Things suddenly look much nicer, and arguably even nicer than in the do version:

greetAndSeeYou :: IO ()
greetAndSeeYou = nameReturn >>= printSeeYou

Or, if we had a non-monadic seeYou function:

seeYou :: String -> String
seeYou name = "See you, " ++ name ++ "!"

-- Reminder: liftM f m == m >>= return . f == fmap f m
greetAndSeeYou :: IO ()
greetAndSeeYou = liftM seeYou nameReturn >>= putStrLn

Keep in mind this last example with liftM; we will soon return to the theme of using non-monadic functions in monadic code, and why it can be useful.

# The IO monad

As you should have picked up by now, Haskell is a functional and lazy language. This has some dire consequences for something apparently simple like input/output, but we can solve this with the IO monad.
## The Problem: Input/Output and Purity

Haskell functions are in general pure functions: when given the same arguments, they return the same results. The reason for this paradigm is that pure functions are much easier to debug and to prove correct. Test cases can also be set up much more easily, since we can be sure that nothing other than the arguments will influence a function's result. We also require pure functions not to have side effects other than returning a value: a pure function must be self-contained, and cannot open a network connection, write a file or do anything other than producing its result. This allows the Haskell compiler to optimise the code very aggressively.

However, there are very useful functions that cannot be pure: an input function, say getLine, will return different results every time it is called; indeed, that's the point of an input function, since an input function that always returned the same result would be pointless. Output operations have side effects, such as creating files or printing strings on the terminal: this is also a violation of purity, because the function is no longer self-contained.

Unwilling to drop the purity of standard functions, but unable to do without impure ones, Haskell places the latter in the IO monad. In other words, what we have called "IO actions" up to now are just values in the IO monad. The IO monad is built in such a way as to be "closed"; that is, it is not possible to make a String out of an IO String. Such an extraction would be possible for other monads, such as Maybe String or [String], but not for IO String. In this way, any operation involving an impure function will be "tainted" with the IO monad, which then functions as a signal: if the IO monad is present in a function's signature, we know that that function may have side effects or may produce different results with the same inputs.
Another advantage of using monads is that, by concatenating I/O operations with the (>>=) or (>>) operators, we provide an order in which these I/O operations will be executed. This is important because Haskell is a lazy language, and the compiler can decide to evaluate functions whenever it deems appropriate: however, this can work only for pure functions! If an operation with side effects (say, writing a log file) were to be evaluated lazily, its entries could end up in just about any order: clearly not the result we desire. Locking I/O operations inside a monad allows us to define a clear operating sequence.

## Combining Pure Functions and Input/Output

If all useful operations entail input/output, why do we bother with pure functions? The reason is that, thanks to monad properties, we can still have pure functions doing the heavy work, and benefit from the ease with which they can be debugged, tested, proved correct and optimised, while we use the IO monad to get our data and deliver our results. Let's try a simple example: suppose we have a function converting a string to upper case:

> let shout = map Data.Char.toUpper

The type of this function is clearly pure:

> :t shout
shout :: [Char] -> [Char]

Suppose you apply this function to a string with many repeated characters, for example:

> shout "aaaaaaaaaaaaagh!"
"AAAAAAAAAAAAAGH!"

The Haskell compiler needs to apply the function only four times: for 'a', 'g', 'h' and '!'. A C compiler or a Perl interpreter, knowing nothing about purity or self-containment, would have to call the function for all 16 characters, since they cannot be sure that the function will not change its output somehow.
The shout function is really trivial, but remember that this line of reasoning is valid for any pure function, and this optimisation capability will be extremely valuable for more complex operations: suppose, for instance, that you had a function to render a character in a particular font, which is a much more expensive operation.

To combine shout with I/O, we ask the user to insert a string (side effect: we are writing to screen), we read it (impure: the result can be different every time), apply the (pure) function with liftM and, finally, write the resulting string (again, a side effect).

> putStr "Write your string: " >> liftM shout getLine >>= putStrLn
Write your string: This is my string!
THIS IS MY STRING!

## IO facts

• The do notation is especially popular with the IO monad, since it is not possible to extract values from it, and at the same time it is very convenient to use statements with <- to store input values that have to be used multiple times after having been acquired. The above example could be written in do notation as:

do putStr "Write your string: "
   string <- getLine
   putStrLn (shout string)

• Every Haskell program starts from the main function, which has type IO (). Therefore, in reality, every line of Haskell code you have ever run has run in conjunction with the IO monad!

• A way of viewing the IO monad is thinking of an IO a value as a computation which gives a value of type a while changing the state of the world by doing input and output. This state of the world is hidden from you by the IO monad, and obviously you cannot set it. Seen this way, IO is roughly analogous to the State monad, which we will meet shortly. With State, however, the state being changed is made of normal Haskell values, and so we can manipulate it directly with pure functions.

• Actually, there is a "back door" out of the IO monad, System.IO.Unsafe.unsafePerformIO, which will transform, say, an IO String into a String.
The naming of the function should point out clearly enough that it is usually not a good idea to use it.

• Naturally, the standard library has many useful functions for performing I/O, all of them involving the IO monad. Several important ones are presented in the IO chapter in Haskell in Practice.

## Monadic control structures

Given that monads allow us to express sequential execution of actions in a wholly general way, we would hope to use them to implement common iterative patterns, such as loops, in an elegant manner. In this section, we will present a few of the functions from the standard libraries which allow us to do precisely that. While the examples are presented amidst a discussion on IO for more immediate appeal, keep in mind that what we demonstrate here applies to every monad.

We begin by emphasizing that there is nothing magical about monadic values; we can manipulate them just like any other values in Haskell. Knowing that, we might try to, for instance, use the following to get five lines of user input:

fiveGetLines = replicate 5 getLine

That won't do, however (try it in GHCi!). The problem is that replicate produces, in this case, a list of actions, while we want an action which returns a list (that is, IO [String] rather than [IO String]). What we need is a fold which runs down the list of actions, executing them and combining the results into a single list. As it happens, there is a Prelude function which does that: sequence.

sequence :: (Monad m) => [m a] -> m [a]

And so, we get the desired action.

fiveGetLines = sequence $ replicate 5 getLine
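Since sequence works in any monad, we can also check its behaviour in a pure setting. As a small illustrative sketch (the lists are made up for the example), in the list monad sequence combines its "actions" in cartesian-product fashion:

```haskell
-- In the list monad, sequence takes a list of multi-valued "actions"
-- and returns every way of picking one result from each of them.
pairs :: [[Int]]
pairs = sequence [[1, 2], [3, 4]]
-- pairs evaluates to [[1,3],[1,4],[2,3],[2,4]]
```

Each inner list of the result contains one element drawn from each input list, exactly as the bind-based definition of the list monad would suggest.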


replicate and sequence form an appealing combination; and so Control.Monad offers a replicateM function for repeating an action an arbitrary number of times. Control.Monad provides a number of other convenience functions in the same spirit - monadic zips, folds, and so forth.

fiveGetLinesAlt = replicateM 5 getLine
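Again, replicateM is not specific to IO. As a hedged pure sketch (the alphabet and the helper name are invented for the example), in the list monad replicateM enumerates all sequences of a given length:

```haskell
import Control.Monad (replicateM)

-- replicateM n act runs act n times and collects the results; in the
-- list monad this enumerates every length-n combination of the elements.
binaryStrings :: Int -> [String]
binaryStrings n = replicateM n "01"
-- binaryStrings 2 evaluates to ["00","01","10","11"]
```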


A particularly important combination is that of map and sequence; it allows us to make actions from a list of values, run them sequentially and collect the results. mapM, a Prelude function, captures this pattern:

mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
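To see the pattern outside of IO, here is a small made-up sketch in the list monad: mapM makes one multi-valued "action" per element and then sequences them, so each element independently branches into its alternatives.

```haskell
-- mapM f xs == sequence (map f xs); here every number branches into
-- its positive and negative variants, giving all sign combinations.
signChoices :: [[Int]]
signChoices = mapM (\x -> [x, -x]) [1, 2]
-- signChoices evaluates to [[1,2],[1,-2],[-1,2],[-1,-2]]
```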


Also common are the variants of the above functions with a trailing underscore in the name, such as sequence_, mapM_ and replicateM_. They discard the return values, and so are appropriate when you are only interested in the side effects (they are to their underscore-less counterparts what (>>) is to (>>=)). mapM_ for instance has the following type:

mapM_ :: (Monad m) => (a -> m b) -> [a] -> m ()


Finally, it is worth mentioning that Control.Monad also provides forM and forM_, which are flipped versions of mapM and mapM_. forM_ happens to be the idiomatic Haskell counterpart to the imperative for-each loop; and the type signature suggests that neatly:

forM_ :: (Monad m) => [a] -> (a -> m b) -> m ()


Exercises
1. Using the monadic functions we have just introduced, write a function which prints an arbitrary list of values.
2. Generalize the bunny invasion example in the list monad chapter for an arbitrary number of generations.
3. What is the expected behaviour of sequence for the Maybe monad?

# The State monad

If you have programmed in any language before, chances are you wrote some functions that "kept state". In case you have not encountered the concept before, a state is one or more variables that are required to perform some computation but are not among the arguments of the relevant function. Object-oriented languages like C++ make extensive use of state variables in objects, in the form of member variables. Procedural languages like C use variables declared outside the current scope to keep track of state. In Haskell, however, such techniques cannot be applied in a straightforward way, as they require mutable variables and thus clash with the default expectation of functional purity. We can very often keep track of state by passing parameters from function to function or by pattern matching of various sorts, but in some cases it is appropriate to find a more general or convenient solution. We will consider the common example of generation of pseudo-random numbers with pure functions, and find out how the State monad can make such a task easier.
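To make the problem concrete before introducing any machinery, here is a hedged sketch of state kept purely by explicit parameter passing (the counter and the helper names are invented for the example). Every function that touches the state must accept the current value and hand the updated one back:

```haskell
-- Number items by hand-threading a counter: each call receives the
-- current counter and returns the updated counter alongside its result.
label :: Int -> String -> ((Int, String), Int)
label n item = ((n, item), n + 1)

labelTwo :: (String, String) -> (((Int, String), (Int, String)), Int)
labelTwo (a, b) = ((a', b'), n2)
  where
    (a', n1) = label 0 a    -- state goes in...
    (b', n2) = label n1 b   -- ...and must be passed along by hand
```

Here labelTwo ("x", "y") evaluates to (((0,"x"), (1,"y")), 2). The bookkeeping is manageable for two items but quickly becomes tedious and error-prone; automating exactly this threading is what the State monad is for.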

## Pseudo-Random Numbers

Generating actual random numbers is a very complicated subject; we will consider pseudo-random numbers. They are called "pseudo" because they are not really random, they only look like it. Starting from an initial state (commonly called the seed), pseudo-random number generators produce a sequence of numbers that have the appearance of being random.

Every time a pseudo-random number is requested, a global state is updated: that's the part we have problems with in Haskell, since it is a side effect from the point of view of the function requesting the number. Sequences of pseudo-random numbers can be replicated exactly if the initial seed and the algorithm are known.

Producing a pseudo-random number in most programming languages is very simple: there is usually a function, such as C or C++'s rand(), that provides a pseudo-random value (or a random one, depending on the implementation). Haskell has a similar one in the System.Random module:

> :m System.Random
> :t randomIO
randomIO :: Random a => IO a
> randomIO
-1557093684


Obviously, barring eerie coincidences, the value you obtain will be different. A disadvantage of randomIO is that it requires us to use the IO monad, which breaks purity requirements. Usage of the IO monad is dictated by the process of updating the global generator state, so that the next time we call randomIO the value will be different.

### Implementation with Functional Purity

In general, we do not want to use the IO monad if we can help it, because of the loss of guarantees on purity and the absence of side effects. Indeed, we can build a local generator (as opposed to the global generator, invisible to us, that randomIO uses) using mkStdGen, and use it as a seed for the random function, which in turn returns a tuple with the pseudo-random number that we want and the generator to use the next time:

> :m System.Random
> let generator = mkStdGen 0           -- "0" is our seed
> generator
1 1
> random generator :: (Int, StdGen)
(2092838931,1601120196 1655838864)


While we have now regained functional purity, there are new problems to bother us. First and foremost, if we want to use generator to get random numbers, the obvious definition...

> let randInt = fst . random $ generator :: Int
> randInt
2092838931

...is useless, as it will always give back the same value, 2092838931, no matter how many times random is called, for the same generator is always used. Of course, we can take the second member of the tuple (i.e. the new generator) and feed it to a new call to random:

> let (randInt, generator') = random generator :: (Int, StdGen)
> randInt -- Same value
2092838931
> random generator' :: (Int, StdGen) -- Using new generator' returned from "random generator"
(-2143208520,439883729 1872071452)

That, however, keeps us from having a function which simply gives back a new value, without the fuss of having to pass the generator around. What we really need is a way to automate the extraction of the second member of the tuple (i.e. the new generator) and feed it to a new call to random; and that is where the State monad comes into the picture.

## Definition of the State Monad

Note
In this chapter we will use the state monad provided by the module Control.Monad.Trans.State of the transformers package. Reading Haskell code in the wild, you will soon meet Control.Monad.State, a module of the closely related mtl package. The differences between these two modules need not concern us at the moment; everything we discuss here also applies to the mtl variant.

The Haskell type State is in essence a function that consumes state and produces both a result and the state after the result has been extracted. The function is wrapped by a data type definition, which comes along with a runState accessor so that pattern matching becomes unnecessary. For our current purposes, the definition is equivalent to:

newtype State s a = State { runState :: s -> (a, s) }

Here, s is the type of the state, and a the type of the produced result. The name State for the type is arguably a bit of a misnomer, as the wrapped value is not the state itself but a state processor.

### Before You Ask...

What in the world did we mean by "for our current purposes" two paragraphs above? The subtlety is that the transformers package implements the State type in a somewhat different way. The differences do not affect how we use or understand State; as a consequence of them, however, Control.Monad.Trans.State does not export a State constructor. Rather, there is a state function,

state :: (s -> (a, s)) -> State s a

which does the same job. As for why the implementation is not the obvious one we presented above, we will get back to that a few chapters down the road.

### newtype

You will also have noticed that we defined the data type with the newtype keyword, rather than the usual data. newtype can be used only for types with just one constructor and just one field; it ensures that the trivial wrapping and unwrapping of the single field is eliminated by the compiler. For that reason, simple wrapper types such as State are usually defined with newtype. One might ask whether defining a synonym with type would be enough in such cases. type, however, does not allow us to define instances for the new data type, which is what we are about to do...

### Instantiating the Monad

Note also that, in contrast to the monads we have met thus far, State has two type parameters. This means that, when we instantiate the monad, we are actually leaving the state type as a parameter:

instance Monad (State s) where

This means that the "real" monad could be State String, State Int, or State SomeLargeDataStructure, but not State on its own. The return function is implemented as:

return :: a -> State s a
return x = state ( \st -> (x, st) )

In words, giving a value to return produces a function, wrapped in the State constructor: this function takes a state value, and returns it unchanged as the second member of a tuple, together with the specified result value. Binding is a bit intricate:

(>>=) :: State s a -> (a -> State s b) -> State s b
processor >>= processorGenerator = state $ \st ->
    let (x, st') = runState processor st
    in runState (processorGenerator x) st'


The idea is that, given a state processor and a function that can generate another processor given the result of the first one, these two processors are combined to obtain a function that takes the initial state, and returns the second result and state (i.e. after the second function has processed them).

Loose schematic representation of how bind creates a new state processor (pAB) from the given state processor (pA) and the given generator (f). s1, s2 and s3 are actual states. v2 and v3 are values. pA, pB and pAB are state processors. The diagram ignores wrapping and unwrapping of the functions in the State wrapper.

The diagram shows this schematically, for a slightly different, but equivalent form of the ">>=" (bind) function, given below (where wpA and wpAB are wrapped versions of pA and pAB).

-- pAB = s1 --> pA --> (v2,s2) --> pB --> (v3,s3)
wpA >>= f = wpAB
    where wpAB = state $ \s1 ->
              let pA       = runState wpA
                  (v2, s2) = pA s1
                  pB       = runState $ f v2
                  (v3, s3) = pB s2
              in  (v3, s3)
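Because the definitions above are spread across several snippets, it may help to see the state processor as one self-contained, runnable sketch. Note that modern GHC requires Functor and Applicative instances alongside Monad, so those are filled in with the standard boilerplate here; the postIncrement counter at the end is a made-up example, not part of the chapter's dice code.

```haskell
import Control.Monad (ap, liftM)

-- The state processor from the text: a function from a state to a
-- result paired with the updated state.
newtype State s a = State { runState :: s -> (a, s) }

state :: (s -> (a, s)) -> State s a
state = State

instance Functor (State s) where
    fmap = liftM

instance Applicative (State s) where
    pure x = state (\st -> (x, st))  -- leaves the state untouched
    (<*>)  = ap

instance Monad (State s) where
    processor >>= processorGenerator = state $ \st ->
        let (x, st') = runState processor st
        in  runState (processorGenerator x) st'

-- A tiny example processor: report the current counter and increment it.
postIncrement :: State Int Int
postIncrement = state (\n -> (n, n + 1))
```

With these definitions, runState (postIncrement >>= \a -> postIncrement >>= \b -> pure (a, b)) 0 evaluates to ((0, 1), 2): bind threads the counter from the first processor into the second exactly as the diagram describes.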


### Setting and Accessing the State

The monad instantiation allows us to manipulate various state processors, but you may at this point wonder where exactly the original state comes from in the first place. State s is also an instance of the MonadState class, which provides two additional functions:

put newState = state $ \_ -> ((), newState)

This function will generate a state processor given a state. The processor's input will be disregarded, and the output will be a tuple carrying the state we provided. Since we do not care about the result (we are discarding the input, after all), the first element of the tuple will be, so to speak, "null".[76]

The opposite operation is to read the state. This is accomplished by get:

get = state $ \st -> (st, st)


The resulting state processor is going to produce the input st in both positions of the output tuple - that is, both as a result and as a state, so that it may be bound to other processors.
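As a quick, hedged check of how put and get behave in practice, the following made-up example uses the Control.Monad.Trans.State module mentioned in the Note above (the numbers are arbitrary):

```haskell
import Control.Monad.Trans.State

-- put replaces the state (producing a discarded ()); get then reports
-- the current state, which we use to compute the final result.
demo :: State Int Int
demo = do
    put 5            -- state becomes 5, regardless of the initial state
    n <- get         -- n is bound to the current state, 5
    return (n + 1)
```

Running it with runState demo 1 gives (6, 5): the initial state 1 is overwritten by put, get observes 5, and the result is 5 + 1.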

### Getting Values and State

From the definition of State, we know that runState is an accessor to apply to a State s a value to get the state-processing function; this function, given an initial state, will return the extracted value and the new state. Other similar, useful functions are evalState and execState, which work in a very similar fashion.

Function evalState, given a State s a and an initial state, will return the extracted value only, whereas execState will return only the new state; it is possibly easiest to remember them as if defined like this:

evalState :: State s a -> s -> a
evalState processor st = fst ( runState processor st )

execState :: State s a -> s -> s
execState processor st = snd ( runState processor st )
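For instance, with the small made-up processor below (again using Control.Monad.Trans.State), the three accessors pick out different parts of the same run:

```haskell
import Control.Monad.Trans.State

-- A processor that doubles the state and returns the old value.
double :: State Int Int
double = do
    n <- get
    put (n * 2)
    return n
```

runState double 3 evaluates to (3, 6), evalState double 3 to just the result 3, and execState double 3 to just the final state 6.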


## Example: Rolling Dice

randomRIO (1,6)

Suppose we are coding a game in which at some point we need an element of chance. In real-life games that is often obtained by means of dice, which we will now try to simulate with Haskell code. For starters, we will consider the result of throwing two dice: to do that, we resort to the function randomR, which allows us to specify an interval from which the pseudo-random values will be taken; in the case of a die, it is randomR (1,6).

In case we are willing to use the IO monad, the implementation is quite simple, using the IO version of randomR:

import Control.Monad
import System.Random

rollDiceIO :: IO (Int, Int)
rollDiceIO = liftM2 (,) (randomRIO (1,6)) (randomRIO (1,6))


Here, liftM2 is used to make the non-monadic two-argument function (,) work within a monad, so that the two numbers will be returned (in IO) as a tuple.
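liftM2 behaves the same way in any monad, so we can sketch its effect in a pure setting too. In this hedged illustration with Maybe (the values are invented for the example), the pairing succeeds only if both arguments are present:

```haskell
import Control.Monad (liftM2)

-- liftM2 lifts the pure two-argument function (,) into a monad; with
-- Maybe, a Nothing on either side makes the whole result Nothing.
bothValues :: Maybe (Int, Int)
bothValues = liftM2 (,) (Just 30) (Just 25)
-- bothValues evaluates to Just (30, 25)
```

Had either argument been Nothing, bothValues would have been Nothing, just as in the IO version both randomRIO actions must run for the tuple to be produced.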

Exercises
1. Implement a function rollNDiceIO :: Int -> IO [Int] that, given an integer, returns a list with that number of pseudo-random integers between 1 and 6.

### Getting Rid of the IO Monad

Now, suppose that for any reason we do not want to use the IO monad: we might want the function to stay pure, or need a sequence of numbers that is the same in every run, for repeatability.

To do that, we can produce a generator using the mkStdGen function in the System.Random library:

> mkStdGen 0
1 1


The argument to mkStdGen is an Int that functions as a seed. With that, we can generate a pseudo-random integer number in the interval between 1 and 6 with:

> randomR (1,6) (mkStdGen 0)
(6,40014 40692)


We obtained a tuple with the result of the dice throw (6) and the new generator (40014 40692). A simple implementation that produces a tuple of two pseudo-random integers is then:

clumsyRollDice :: (Int, Int)
clumsyRollDice = (n, m)
  where
    (n, g) = randomR (1,6) (mkStdGen 0)
    (m, _) = randomR (1,6) g


When we run the function, we get:

> clumsyRollDice
(6, 6)


The implementation of clumsyRollDice works, but we have to manually write the passing of generator g from one where clause to the other. This is pretty easy now, but will become increasingly cumbersome if we want to produce large sets of pseudo-random numbers. It is also error-prone: what if we pass one of the middle generators to the wrong line in the where clause?

Exercises
1. Implement a function rollDice :: StdGen -> ((Int, Int), StdGen) that, given a generator, returns a tuple with our random numbers as its first element and the final generator as its second.

### Introducing State

We will now try to address the clumsiness of the previous approach by introducing the State StdGen monad. For convenience, we give it a name with a type synonym:

import Control.Monad.Trans.State
import System.Random

type GeneratorState = State StdGen


Remember, however, that a GeneratorState Int is in essence a StdGen -> (Int, StdGen) function, so it is not really the generator state, but a processor of the generator state. The generator state itself is produced by the mkStdGen function. Note that GeneratorState does not specify what type of values we are going to extract, only the type of the state.

We can now produce a function that, given a StdGen generator, outputs a number between 1 and 6:

rollDie :: GeneratorState Int
rollDie = do generator <- get
             let (value, newGenerator) = randomR (1,6) generator
             put newGenerator
             return value


The do notation is in this case much more readable; let's go through each of the steps:

1. First, we take out the pseudo-random generator with <- in conjunction with get. get overwrites the monadic value (The 'a' in 'm a') with the state, and thus generator is bound to the state. (If in doubt, recall the definition of get and >>= above).
2. Then, we use the randomR function to produce an integer between 1 and 6 using the generator we took; we also store the new generator graciously returned by randomR.
3. We then set the state to be the newGenerator using the put function, so that the next call will use a different pseudo-random generator;
4. Finally, we inject the result into the GeneratorState monad using return.

We can finally use our monadic die:

> evalState rollDie (mkStdGen 0)
6


At this point, a legitimate question is why we have involved monads and built such an intricate framework only to do exactly what fst $ randomR (1,6) (mkStdGen 0) does. The answer is illustrated by the following function:

rollDice :: GeneratorState (Int, Int)
rollDice = liftM2 (,) rollDie rollDie


We obtain a function producing two pseudo-random numbers in a tuple. Note that these are in general different:

> evalState rollDice (mkStdGen 666)
(6,1)


That is because, under the hood, the monads are passing state to each other. This used to be very clunky using randomR (1,6), because we had to pass state manually; now, the monad takes care of that for us. Assuming we know how to use the lifting functions, constructing intricate combinations of pseudo-random numbers (tuples, lists, whatever) has suddenly become much easier.

Exercises
1. Similarly to what was done for rollNDiceIO, implement a function rollNDice :: Int -> GeneratorState [Int] that, given an integer, returns a list with that number of pseudo-random integers between 1 and 6.

## Pseudo-random values of different types

Until now, absorbed in the die example, we have considered only Int as the type of the produced pseudo-random number. However, already when we defined the GeneratorState monad, we noticed that it did not specify anything about the type of the returned value. In fact, there is one implicit assumption about it: that we can produce values of such a type with a call to random. Values that can be produced by random and similar functions are of types that are instances of the Random class (capitalised). There are default implementations for Int, Char, Integer, Bool, Double and Float, so you can immediately generate any of those.
Since we noticed already that GeneratorState is "agnostic" in regard to the type of the pseudo-random value it produces, we can write down a similarly "agnostic" function, analogous to rollDie, that provides a pseudo-random value of unspecified type (as long as it is an instance of Random):

getRandom :: Random a => GeneratorState a
getRandom = do generator <- get
               let (value, newGenerator) = random generator
               put newGenerator
               return value


Compared to rollDie, this function does not specify the Int type in its signature and uses random instead of randomR; otherwise, it is just the same. What is notable is that getRandom can be used for any instance of Random:

> evalState getRandom (mkStdGen 0) :: Bool
True
> evalState getRandom (mkStdGen 0) :: Char
'\64685'
> evalState getRandom (mkStdGen 0) :: Double
0.9872770354820595
> evalState getRandom (mkStdGen 0) :: Integer
2092838931


Indeed, it becomes quite easy to conjure all these at once:

allTypes :: GeneratorState (Int, Float, Char, Integer, Double, Bool, Int)
allTypes = liftM (,,,,,,) getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom


Here we are forced to use the ap function, defined in Control.Monad, since there exists no liftM7. As you can see, its effect is to fit multiple computations into an application of the (lifted) 7-element-tuple constructor, (,,,,,,). To understand what ap does, look at its signature:

> :type ap
ap :: (Monad m) => m (a -> b) -> m a -> m b


Remember then that a type variable in Haskell can stand for a function as well as a value, and compare to:

> :type liftM (,,,,,,) getRandom
liftM (,,,,,,) getRandom :: (Random a1) =>
    State StdGen (b -> c -> d -> e -> f -> g -> (a1, b, c, d, e, f, g))


The monad m is obviously State StdGen (which we "nicknamed" GeneratorState), while ap's first argument is a function b -> c -> d -> e -> f -> g -> (a1, b, c, d, e, f, g).
Applying ap over and over (in this case 6 times), we finally get to the point where b is an actual value (in our case, a 7-element tuple), not another function. To sum it up, ap applies a function-in-a-monad to a monadic value (compare with liftM, which applies a function not in a monad to a monadic value).

So much for understanding the implementation. Function allTypes provides pseudo-random values for all default instances of Random; an additional Int is inserted at the end to prove that the generator is not the same, as the two Ints will be different.

> evalState allTypes (mkStdGen 0)
(2092838931,9.953678e-4,'\825586',-868192881,0.4188001483955421,False,316817438)


Exercises
1. If you are not convinced that State is worth using, try to implement a function equivalent to evalState allTypes without making use of monads, i.e. with an approach similar to clumsyRollDice above.

## Notes

1. The Wikipedia article on text editors is a reasonable place to start if you need suggestions. Popular ones include Vim, Emacs and, on Windows, Notepad++ (not plain old Notepad). Proper programming text editors will provide syntax highlighting, a feature which colourises code in relevant ways so as to make it easier to read.
2. As you can see, the names of variables may also contain numbers. Variables must begin with a lowercase letter, but for the rest, any string consisting of letters, numbers, underscore (_) or tick (') is allowed.

Print version Solutions to exercises Monads Understanding monads (May 04, 2010) >> Maybe >> List >> do Notation >> IO >> State >> Additive monads (MonadPlus) >> Monad transformers >> Practical monads

# Additive monads (MonadPlus)

You may have noticed, whilst studying monads, that the Maybe and list monads are quite similar, in that they both represent the number of results a computation can have.
That is, you use Maybe when you want to indicate that a computation can fail somehow (i.e. it can have 0 or 1 result), and you use the list monad when you want to indicate a computation could have many valid answers (i.e. it could have 0 results -- a failure -- or many results).

Given two computations in one of these monads, it might be interesting to amalgamate all valid solutions into a single result. Taking the list monad as an example, given two lists of valid solutions we can simply concatenate the lists together to get all valid solutions. In such a context, it is also useful, especially when working with folds, to have a value corresponding to "zero results" (i.e. failure). For lists, the empty list represents zero results. MonadPlus is a type class which captures these features in a general way.

## Definition

MonadPlus defines two methods: mzero is the monadic value standing for zero results, while mplus is a binary function which combines two computations.

class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a


Here are the two instance declarations for Maybe and the list monad:

instance MonadPlus [] where
  mzero = []
  mplus = (++)

instance MonadPlus Maybe where
  mzero                   = Nothing
  Nothing `mplus` Nothing = Nothing -- 0 solutions + 0 solutions = 0 solutions
  Just x  `mplus` Nothing = Just x  -- 1 solution  + 0 solutions = 1 solution
  Nothing `mplus` Just x  = Just x  -- 0 solutions + 1 solution  = 1 solution
  Just x  `mplus` Just y  = Just x  -- 1 solution  + 1 solution  = 2 solutions,
                                    -- but as Maybe can only have up to one
                                    -- solution, we disregard the second one.


Also, if you import Control.Monad.Error, then (Either e) becomes an instance:

instance (Error e) => MonadPlus (Either e) where
  mzero             = Left noMsg
  Left _  `mplus` n = n
  Right x `mplus` _ = Right x


Remember that (Either e) is similar to Maybe in that it represents computations that can fail, but it allows the failing computations to include an error "message" (often, but not necessarily, a string).
Typically, Left s means a failed computation carrying an error message s, and Right x means a successful computation with result x.

## Example: parallel parsing

A traditional way of parsing an input is to write functions which consume it, one character at a time. That is, they take an input string, then chop off ('consume') some characters from the front if they satisfy certain criteria (for example, you could write a function which consumes one uppercase character). If the characters at the front of the string don't satisfy these criteria, however, the parser has failed, which makes it a valid candidate for Maybe.

Here we use mplus to run two parsers in parallel. That is, we use the result of the first one if it succeeds, but if not, we use the result of the second. If that too fails, then our whole parser returns Nothing.

-- | Consume a digit in the input, and return the digit that was parsed. We use
--   a do-block so that if the pattern match fails at any point, fail of
--   the Maybe monad (i.e. Nothing) is returned.
digit :: Int -> String -> Maybe Int
digit i s | i > 9 || i < 0 = Nothing
          | otherwise      = do
  let (c:_) = s
  if read [c] == i then Just i else Nothing

-- | Consume a binary character in the input (i.e. either a 0 or a 1)
binChar :: String -> Maybe Int
binChar s = digit 0 s `mplus` digit 1 s


Parser libraries often make use of MonadPlus in this way. If you are curious, check the (+++) operator in Text.ParserCombinators.ReadP, or (<|>) in Text.ParserCombinators.Parsec.Prim.

## The MonadPlus laws

Instances of MonadPlus are required to fulfill several rules, just as instances of Monad are required to fulfill the three monad laws. Unfortunately, these laws aren't set in stone anywhere and aren't fully agreed on. The most common (but not universal) requirement is that mzero and mplus form a monoid.
By that, we mean:

-- mzero is a neutral element
mzero `mplus` m  =  m
m `mplus` mzero  =  m

-- mplus is associative
-- (not all instances obey this law, because it makes some infinite structures impossible)
m `mplus` (n `mplus` o)  =  (m `mplus` n) `mplus` o


There is nothing fancy about "forming a monoid": in the above, "neutral element" and "associative" are meant in exactly the same sense in which addition of integer numbers is said to be associative and to have zero as its neutral element. In fact, this analogy gives mzero and mplus their names.

The Haddock documentation for Control.Monad quotes additional laws:

mzero >>= f  =  mzero
m >> mzero   =  mzero


And the HaskellWiki page cites another (with controversy):

(m `mplus` n) >>= k  =  (m >>= k) `mplus` (n >>= k)


There are even more sets of laws available, and therefore you'll sometimes see monads like IO being used as a MonadPlus. Consult All About Monads and the Haskell Wiki page on MonadPlus for more information about such issues.

## Useful functions

Beyond the basic mplus and mzero themselves, there are two other important general-purpose functions involving MonadPlus:

### msum

A very common task when working with instances of MonadPlus is to take a list of monadic values, e.g. [Maybe a] or [[a]], and fold it down with mplus. msum fulfills this role:

msum :: MonadPlus m => [m a] -> m a
msum = foldr mplus mzero


A nice way of thinking about this is that it generalises the list-specific concat operation. Indeed, for lists, the two are equivalent. For Maybe, it finds the first Just x in the list, or returns Nothing if there aren't any.

### guard

When discussing the list monad we noted how similar it was to list comprehensions, but we left hanging the question of how to mirror list comprehension filtering in the list monad. The guard function allows us to do exactly that.
Our example for this section will be the following comprehension, which retrieves all pythagorean triples (that is, trios of integer numbers which taken together are the lengths of the sides of some right triangle) in the obvious, brute-force way. It uses a boolean condition for filtering; namely, Pythagoras' theorem:

pythags = [ (x, y, z) | z <- [1..], x <- [1..z], y <- [x..z], x^2 + y^2 == z^2 ]


The translation of the comprehension above to the list monad is:

pythags = do
  z <- [1..]
  x <- [1..z]
  y <- [x..z]
  guard (x^2 + y^2 == z^2)
  return (x, y, z)


guard looks like this:

guard :: MonadPlus m => Bool -> m ()
guard True  = return ()
guard False = mzero


Concretely, guard will reduce a do-block to mzero if its predicate is False. By the very first law stated in the 'MonadPlus laws' section above, an mzero on the left-hand side of an >>= operation will produce mzero again. As do-blocks are decomposed into lots of expressions joined up by (>>=), an mzero at any point will cause the entire do-block to become mzero.

To further illustrate that, we will examine guard in the special case of the list monad, extending on the pythags function above. First, here is guard defined for the list monad:

guard :: Bool -> [()]
guard True  = [()]
guard False = []


guard blocks off a route. For example, in pythags, we want to block off all the routes (or combinations of x, y and z) where x^2 + y^2 == z^2 is False. Let's look at the expansion of the above do-block to see how it works:

pythags =
  [1..] >>= \z ->
  [1..z] >>= \x ->
  [x..z] >>= \y ->
  guard (x^2 + y^2 == z^2) >>= \_ ->
  return (x, y, z)


Replacing >>= and return with their definitions for the list monad (and using some let-bindings to keep it readable), we obtain:

pythags = let ret x y z = [(x, y, z)]
              gd  z x y = concatMap (\_ -> ret x y z) (guard $ x^2 + y^2 == z^2)
              doY z x   = concatMap (gd  z x) [x..z]
              doX z     = concatMap (doY z  ) [1..z]
              doZ       = concatMap (doX    ) [1..]
          in doZ


Remember that guard returns the empty list when its argument is False. Mapping across the empty list produces the empty list, no matter what function you pass in. So when the predicate fails, the empty list produced by the call to guard in the binding of gd makes gd return the empty list, and ret is never reached for that route.

To understand why this matters, think about list-computations as a tree. With our Pythagorean triple algorithm, we need a branch starting from the top for every choice of z, then a branch from each of these branches for every value of x, then from each of these, a branch for every value of y. So the tree looks like this:

   start
     |___________________________________ ...
     |       |                |
z    1       2                3
     |    ___|___       ______|______
     |    |     |       |     |     |
x    1    1     2       1     2     3
     |   _|_    |     __|__  _|_    |
     |   | |    |     | | |  | |    |
y    1   1 2    2     1 2 3  2 3    3


Each combination of x, y and z represents a route through the tree. Once all the functions have been applied, each branch is concatenated together, starting from the bottom. Any route where our predicate doesn't hold evaluates to an empty list, and so has no impact on this concat operation.
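The same pruning can be watched in a smaller, made-up example (not part of the pythags development): guard discards every route where the predicate fails, and the surviving routes are concatenated.

```haskell
import Control.Monad (guard)

-- Keep only the pairs whose sum is even; guard prunes all other routes
evenPairs :: [(Int, Int)]
evenPairs = do x <- [1..3]
               y <- [1..3]
               guard (even (x + y))
               return (x, y)

-- evenPairs == [(1,1),(1,3),(2,2),(3,1),(3,3)]
```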

## Exercises

2. We could augment our above parser to involve a parser for any character:

-- | Consume a given character in the input, and return the character we
--   just consumed, paired with the rest of the string. We use a do-block so
--   that if the pattern match fails at any point, fail of the Maybe monad
--   (i.e. Nothing) is returned.
char :: Char -> String -> Maybe (Char, String)
char c s = do
  let (c':s') = s
  if c == c' then Just (c, s') else Nothing

It would then be possible to write a hexChar function which parses any valid hexadecimal character (0-9 or a-f). Try writing this function (hint: map digit [0..9] :: [String -> Maybe Int]).
3. More to come...

## Relationship with monoids

When discussing the MonadPlus laws, we alluded to the mathematical concept of monoids. It turns out that there is a Monoid class in Haskell, defined in Data.Monoid.[77] A minimal definition of Monoid implements two methods; namely, a neutral element (or 'zero') and an associative binary operation (or 'plus').

class Monoid m where
  mempty  :: m
  mappend :: m -> m -> m


For example, lists form a simple monoid:

instance Monoid [a] where
  mempty  = []
  mappend = (++)


Sounds familiar, doesn't it? In spite of the uncanny resemblance to MonadPlus, there is a subtle yet key difference: note the usage of [a], not [], in the instance declaration. Monoids are not necessarily "containers" of anything, or parametrically polymorphic. The integer numbers, for instance, form a monoid under addition, with 0 as neutral element.
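As a sketch of that integer example: a newtype is needed because the integers also form a monoid under multiplication, so a direct instance for Integer would be ambiguous. (Recent GHC versions split the binary operation into a Semigroup superclass, which the two-method class shown above predates; the standard library provides this very monoid as Sum in Data.Monoid.)

```haskell
newtype Add = Add Integer deriving (Show, Eq)

-- On recent GHC, the associative operation lives in a Semigroup instance...
instance Semigroup Add where
  Add x <> Add y = Add (x + y)

-- ...and Monoid supplies the neutral element on top of it.
instance Monoid Add where
  mempty = Add 0    -- 0 is the neutral element of addition
```

With these instances, mconcat [Add 1, Add 2, Add 3] evaluates to Add 6.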

In any case, MonadPlus instances look very similar to monoids, as both feature concepts of zero and plus. Indeed, we could even make MonadPlus a subclass of Monoid if there was a real need to take the trouble:

instance MonadPlus m => Monoid (m a) where
  mempty  = mzero
  mappend = mplus


Note

Due to the "free" type variable a in the instance definition, the snippet above is not valid Haskell 98. If you want to test it, you will have to enable the GHC language extension FlexibleInstances:

• If you are testing with GHCi, start it with the command line option -XFlexibleInstances.
• Alternatively, if you are running a compiled program, add {-# LANGUAGE FlexibleInstances #-} to the top of your source file.

However, Monoid and MonadPlus work at different levels. As noted, there is no requirement for monoids to be parameterized in relation to a "contained" or otherwise related type. More formally, monoids have kind *, but instances of MonadPlus, as they are monads, have kind * -> *.



# Practical monads

Monads provide a clean means of embedding a domain specific parsing language directly into Haskell without the need for external tools or code generators.

 To do: write the page! In the meantime, here is a paper to read and a practical chapter on Parsing Monads in Haskell from this wikibook to get you started.
# Monad transformers

By this point you should have grasped the concept of a monad and what different monads are used for: IO for impure functions, Maybe for values that may or may not be there, and so on. With monads providing such useful general-purpose functionality, it is very natural that sometimes we would like to use the capabilities of several monads at once - for instance, a function which uses both I/O and Maybe exception handling. Sure, we can use a type like IO (Maybe a), but that forces us to do pattern matching within the do-blocks to extract the values we want: the point of monads was also to get rid of that.
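To make the inconvenience concrete, here is a hypothetical sketch (the validation rule and all names are invented for illustration):

```haskell
-- Hypothetical validation rule: accept passphrases of 8 or more characters
validate :: String -> Maybe String
validate s = if length s >= 8 then Just s else Nothing

-- Ask for a passphrase; the Maybe lives inside IO
askPassphrase :: IO (Maybe String)
askPassphrase = fmap validate getLine

main :: IO ()
main = do result <- askPassphrase
          case result of                 -- manual unwrapping, in every caller
            Just s  -> putStrLn ("Accepted: " ++ s)
            Nothing -> putStrLn "Passphrase too short."
```

Every function that consumes askPassphrase has to repeat that case analysis; monad transformers, introduced next, let the two monads be combined so the unwrapping happens once, behind the scenes.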