# How this book came to be: instead of an introduction

Feel free to skip this somewhat philosophical intro and go right to Chapter 1.

Lately, I’ve been spending quite a bit of time with Type Theory and the lambda cube (e.g., implementing a System F-omega-ish type library in CoffeeScript for the fun of finding out how things work, as Dr. Feynman used to say), toy functional languages (Haskell is great, but what would a great functional language look like if it were designed today, with all that we’ve learned from state-of-the-art Haskell development in the last 20 years?), parsers, GUI library design, and functional reactive programming.

Two thoughts persisted:

• “I need to write another monad tutorial” — haha, got ya — “how do I teach functional programming to my kids?” — so that it’s light, comprehensible, fun, and conveys the beauty of the concepts involved — and:
• “we’ve been doing it all wrong”

Ok, the latter may be a gross exaggeration, but I need to drive the point home. Like many unfortunate souls before me, I have gone down the painful but oh-so-typical road of coming from an imperative OO background and trying to build bridges from the patterns learned there to a completely, absolutely different functional world.

The problem with this is that your brain works against you
(do read the classic “Thinking, Fast and Slow” by Dr. Kahneman if you haven’t yet — at the very least, it will help you detect and fight your own biases).

Our brain’s “System 1” works in patterns. It is efficient and extremely fast — much faster than the conscious “System 2”, which we use when studying Category Theory. System 1, in contrast, is subconscious: it deconstructs and shreds all the elephants you see in your life into basic shapes, curves, and colors, which makes recognizing elephants a very easy task — until one day you glimpse a cloud like this —

— and your brain goes: “Oh, look, it’s an elephant!”

### Well, no, it’s a cloud.

And this is exactly the trap laid out for programmers trying to escape to the wonderful world of Haskell from the less elegant Java, C++, or that terrible-which-shall-not-be-named-but-ends-in-…avaScript backgrounds. The thought goes: C++ is a programming language, as is Haskell, so the concepts should be pretty similar, right? Isn’t it like Indian and African elephants: sure, one has bigger ears but aren’t they essentially the same?

No, they are not: one is an elephant and the other one is a cloud. You cannot reason about clouds using your elephant knowledge. You can ride an elephant, but attempting to ride a cloud would end really badly. Your brain constantly tries to find analogies between your previous experience and new knowledge - that’s how we learn, and it is a good thing - but in this case the unfortunate fact that both Haskell and JavaScript are called “programming languages” does us a disservice, as trying to find similar patterns in both leads to consequences only slightly less disastrous than trying to ride a cloud thinking it’s an elephant.

To master the power of Haskell, you need to set aside everything you know about imperative programming and build upon a totally different foundation: one of pure abstract math, starting from category theory, set theory, type theory, Henri Poincaré; you have to build the Natural numbers with pen and paper starting from an empty set, construct a Ring on them, abstract away and discover Monoids, start appreciating the beauty of the internal structure of types and categories manifested in polymorphic functions and the laws on them, and be amazed at how one abstraction can elegantly describe seemingly unrelated entities…

### But who in the world has time for this?!

Well, exactly. This is one of the primary reasons there’s still a certain bias against Haskell among practitioners — a big part of the Haskell community gives the impression of very smart monad-stacking-arrow-composing scientist-magicians sitting in their ivory towers with too much time on their hands, working out abstract problems, while the rest of us need to code stuff that works even if it’s ugly.

Take heart, there’s hope! But before I outline a proposed (yet another) way out, let me illustrate the elephant-vs-cloud conundrum with a pretty typical “where is my for loop?!” situation.

Let’s say you are building a GUI library on top of some low-level API and need to render lines of text on the screen. You construct an array of textures, where each texture corresponds to a line of text, and you need to render them one under another, incrementing a y coordinate as you go.

Something you’ve done a thousand times in different situations. You switch to Haskell, where you have a list of textures, textures :: [Texture], and your System 1 starts screaming “where is my for loop?!” and then “how do I pass the state when iterating?” Should I use mutable variables: IORef, MVar, TVar? I know I am supposed to map over lists, so should I use a State monad to increment y while mapping?

And only when you tell your System 1 to stop getting ideas from your imperative experience, start considering abstractions, and realize that in the functional world it is best to fold and traverse structures, do you come to something like:
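
A minimal sketch of what such a fold might look like (Texture, texHeight, and renderTexture are hypothetical stand-ins for a real low-level graphics API):

```haskell
import Control.Monad (foldM_)

-- hypothetical stand-ins for a real low-level graphics API
newtype Texture = Texture { texHeight :: Int }

renderTexture :: Int -> Texture -> IO ()
renderTexture y _ = putStrLn ("rendering a line of text at y = " ++ show y)

-- fold over the list, threading the y coordinate through the rendering computation
renderLines :: [Texture] -> IO ()
renderLines = foldM_ step 0
  where
    step y t = do
      renderTexture y t
      pure (y + texHeight t)   -- the next texture starts below this one
```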

… which gives you goosebumps after all the braces and semicolons of C-land, but the patterns have nothing in common: you iterate over arrays with mutable variables in one, and you fold structures with computations in the other. Elephants and clouds.

### So, how do we teach Haskell to kids, or help adults master its power faster and more efficiently?

First, stop trying to build bridges from the imperative world - it messes with people’s brains. In fact, explaining design patterns for program architecture on real-world problems and contrasting approaches with the imperative world is very much needed and largely missing now - but this should be done only after a student has learned the functional approach and its small patterns from scratch. Starting to teach Haskell in the bottom-up imperative way, with “Hello World” and a calculator in ghci, is in my strong opinion absolutely counterproductive - it leads right into the elephants-vs-clouds trap, and it takes much longer to start appreciating the power and beauty of Haskell.

We need to start from Type Theory, make people understand and fall in love with Types: their structure, their daily lives, their relations with each other. A student has to become fluent in writing and understanding type-level functions before she even thinks about writing “real” functions. It is actually a pretty well-traveled road as well — but only if you have a reasonably strong math background.

Catch up on your Category Theory reading (Bartosz Milewski’s “Category Theory for Programmers” is out and it’s excellent), study the Typeclassopedia, use Stephen Diehl’s amazing “What I Wish I Knew When Learning Haskell” as a reference, and get a feeling for how to structure real programs using monad transformer stacks — but whatever you do, under any circumstances, DO NOT read monad tutorials!

Monads are very simple: just another beautiful and powerful math abstraction, one among many, and you will confuse and break your brain with false analogies if you read the tutorials. Study them in the context of the Functor-Applicative-Monad hierarchy — so, again, the Typeclassopedia.

So, what was the point of all this? I am a very visual person when studying, and I’ve been thinking for some time now that the math concepts underlying modern functional programming and type theory can be explained in a more accessible way using visuals, magic, and various cute animal-related analogies. Existing literature on the concepts mentioned above is often too technical even for practicing programmers, let alone kids, even in their teens, and it focuses a lot on formal proofs.

However, if we ease off trying to rigorously prove stuff and simply illustrate and explain those same concepts, from lambda calculus in System Fw (even the name is a mouthful, but the machinery is again extremely simple!) to types and their relations to category theory and arrows, in a top-down visual approach, this may help people build new patterns, completely unrelated to imperative programming, and then gradually move to a more technical, more in-depth study while already becoming productive in writing elegant Haskell code for practical tasks. Something in the spirit of the excellent “Learn You a Haskell”, but building upon a math foundation of types and the typeclass hierarchy instead of “hello world” and the ghci REPL.

Hence, the idea of “Magical Haskell” was born. In fact, it should go well hand in hand with “Learn You a Haskell” - with the latter providing a lot of the basic language foundation in much more detail, and “Magical Haskell” focusing on creating a system of typed functional patterns and a typeclass hierarchy, and on explaining abstract technical concepts in an accessible, comprehensible way.

# Short Introduction: How To Read This Book

In this book, we attempt to introduce the very advanced and very technical concepts that underlie the Haskell language in an accessible way, using everyday analogies, pictures, and real-world examples where possible. We have a strong conviction that a firm grasp of Types and the bigger picture makes any practical Haskell programmer much more efficient and productive, while also letting her fully appreciate the beauty of the mathematical concepts involved. We start with these foundations and then move on to the advanced topic of how to structure big real-world applications (something a lot of Haskell students struggle with) using a variety of abstractions, libraries, and design patterns.

We start Chapter 1 with basic types and functions, introducing the functional approach to solving problems along the way. Chapter 2 continues with an introduction to algebraic data types, parametric types (“type functions”), recursion, and pattern matching. We also look at a concise program that deals a deck of cards and uses many of these concepts. At the end of that chapter, we “invent” a Functor. Chapter 3 focuses on a semi-formal introduction to (Martin-Löf) Type Theory and very lightly touches Category Theory. Haskell uses a weaker type system called System FC, but again: understanding the bigger picture helps both to grasp Haskell abstractions and to transition to languages such as Idris or Coq with less difficulty.

Chapter 4 focuses on basic Typeclasses using ubiquitous algebraic examples (such as Monoid) to explain the concept. Chapter 5 follows up and focuses on how to introduce Functor and Bifunctor naturally.

Further chapters are still work in progress, but you can get a feeling for the direction in which we are taking this work from the chapter names.

# Chapter 1: Wizards, Types, and Functions

In this chapter: * functional programming is magic * solving problems the functional way * putting something on the screen * introducing types * introducing type functions

## Functional Programming Is Magic

You wake up in a cheap motel room in Brooklyn. You hear police sirens outside, the ceiling is grey and dirty, and a big cockroach is slowly crawling across. You don’t remember much, except that you need to be in a fancy restaurant in Manhattan at 7 pm — oh my, it’s in 15 minutes! — and that you are a wizard. The problem is, you only remember three spells:

• “Moz’bah’da!”, which turns a cockroach into a horse
• “K’hee-bah!”, which turns a cockroach into a pig
• “Gimmeraree!”, which turns a pig into a Ferrari

You are obviously in a hurry, so, looking attentively at the cockroach, you wave your hands and go: “Moz’bah’da!” — and puff! — there is a beautiful white horse right beside the bed. A moment later, you realize that riding a horse through New York might not get you there on time, plus you are not sure you even can ride a horse, so you wave your hands again and shout: “Gimmeraree!”

The horse disappears in a puff of smoke, and all you get is a pile of stinking goo on the floor. Uh-oh. You forgot that “Gimmeraree” works only on pigs! Poor horse. You catch another cockroach, go outside, and this time say your spells in the correct order — “K’hee-bah!”, “Gimmeraree!” — and there you go: a red Ferrari Spider in mint condition, standing right beside the garbage bins. Hastily, you jump into the car and drive away to a meeting that will hopefully clear up your past, present, and future destiny.

Typed functional programming is just like magic. Your spells are functions. They turn something into something else (like pigs to Ferraris or strings to numbers) or produce some effect (like putting Ogres to sleep or drawing a circle on the screen), and you need to combine different spells in the right order to get to the desired result.

What happened in the story above was dynamically typed magic: you used a spell that works only on pigs on a horse instead, and your “program” crashed, killing the horse in the process (that’s what often happens with JavaScript programs). With the statically typed magic that we will use from now on, a powerful being called the Typechecker would see your mistake, stop your hand, and tell you that “Gimmeraree” expects a pig as its input — which would indeed make you a much more powerful and successful mage than if you went around killing other people’s horses.
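
In Haskell terms, the three spells could be sketched as functions between made-up types; note that the typechecker simply refuses to apply Gimmeraree to a Horse:

```haskell
-- made-up types for the story, each with a single value for simplicity
data Cockroach = Cockroach
data Pig       = Pig
data Horse     = Horse
data Ferrari   = Ferrari deriving (Eq, Show)

mozbahda :: Cockroach -> Horse      -- “Moz’bah’da!”
mozbahda Cockroach = Horse

kheebah :: Cockroach -> Pig         -- “K’hee-bah!”
kheebah Cockroach = Pig

gimmeraree :: Pig -> Ferrari        -- “Gimmeraree!” works only on pigs
gimmeraree Pig = Ferrari

ride :: Ferrari
ride = gimmeraree (kheebah Cockroach)   -- typechecks: a Pig goes in

-- crash = gimmeraree (mozbahda Cockroach)
--   ^ rejected at compile time: a Horse is not a Pig
```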

## Solving Problems As Wizards Do

So how do we approach solving problems the functional way? The mage above needed a fast mode of transportation, and all he had were cockroaches. So he had to think about how to get from a cockroach to a Ferrari using the spells he knew, by combining them in the correct order. He started with the end result — the Ferrari — and worked backward from there, getting to the correct sequence by composing spells that work on the right subjects.

Now, let’s get real and say you need to calculate the average of a list of numbers and show it on the screen. Please resist the imperative urge to start telling the computer what to do and how to do it, and let’s focus on the problem as a wizard would instead.

You have a human, and you need to turn her into a line on the screen, hopefully without killing anyone. Let’s work backward from our desired result — “a line on the screen with the average printed out”.

To get to this result, we need something that takes a number and shows it on the screen. Let’s call this something “print”:

Now our problem is a bit easier - we need to get from a human to a number. “Print”, whatever it is, needs a number to operate on. Since our goal is to calculate and show an average value, the previous step needs to take a list of numbers and, uhm, calculate their average:
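
In Haskell notation (which we will meet properly later in the chapter), this step might look like the following sketch:

```haskell
-- sum the numbers and divide by how many there are
calcAverage :: [Double] -> Double
calcAverage xs = sum xs / fromIntegral (length xs)
```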

Our problem became easier still - now we just need to extract the list of numbers from our human and the solution is ready!

What did we just do? We have broken down, or decomposed, our problem into smaller pieces, solutions to which we already know or can come up with easily enough, and combined them in the right order to get to the desired result. Just like with the mage above, we cannot give a string or a horse to our “calculate average” step - it expects a list of numbers, so it will not compose with functions that produce anything other than a list of numbers! Thus, we came up not only with the “spell”-functions we need to solve our problem, but also with the types our functions should work with: a list of numbers, a number, an effect of putting something on screen. From the start, just by thinking about how to solve a problem, we “invented” the two most important concepts in programming, computer science, and mathematics itself: Functions and Types.

We’ll start looking at them more closely later on in this chapter. For now, here’s a bit of real magic: as it turns out, Haskell code maps virtually 1:1 to this approach! Look at the first line of the code below (don’t mind the rest for now) and try it live in your browser:
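
A sketch of what such a program might look like; the helper definitions after where are simple stand-in implementations (the prompt text and parsing details are assumptions):

```haskell
main :: IO ()
main = ask >>= printResult . calcAverage . toList
  where
    -- prompt the human and read a line from the keyboard
    ask = putStrLn "Enter some numbers, separated by spaces:" >> getLine
    -- convert the entered string into an actual list of numbers
    toList s = map read (words s) :: [Double]
    calcAverage xs = sum xs / fromIntegral (length xs)
    printResult r = putStrLn ("The average is: " ++ show r)
```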

The first line is our sequence of “spells”, or, really, functions, that starts with a human sitting at the keyboard and gets us to the line on the screen with the calculated average shown. First, we ask the human to enter a list of numbers with ask and pass whatever was entered to the next function - toList - to convert the string she enters into an actual List. The resulting list is given to the calcAverage function, which in turn passes its result to printResult. Exactly what we came up with when reasoning about the problem in the passage above.

We started with the desired end result, worked backwards by decomposing the big problem into smaller pieces, and connected, or composed, them so that the output type of one function fits the input type of the next. This is the essence of every functional program. The lines of code after where further detail how the “spells” we are chaining should behave - don’t concern yourself with them yet. What’s important is to get a feel for the process:

• What is our desired result?
• What is our input?
• Work backwards from the result to the input, decomposing the problem into smaller pieces that we know how to solve, by thinking about how to convert between types along the way
• Compose smaller functions, each of which solves a smaller problem piece, together into a bigger function, making sure the types fit

Practice this approach on some other problems you would like to solve. We will use it on quite a few real-world problems further along.

## Let’s Fall In Love With Types!

To become a really powerful Haskell mage, you need to get to know and understand Types intimately: their birth, their daily lives, and their relations with each other. Types come before functions, and functions are meaningless without types. So, what are types? Intuitively, a type is just a bunch of objects (or subjects) sharing some common characteristic(s), thrown into a pot together. You could say that the type of your Ferrari Spider is Car, as is the type of that Toyota Prius. The type of both a pig and a horse is Animal. The type of 1, 114, and 27428 is Natural Number. The type of the letters ‘a’, ‘C’, ‘e’, ‘z’ is Char (that’s an actual type for characters in Haskell). The last two are ubiquitous in programming, so let’s look at them more closely.

Let’s not forget that we are creative wizards, so we will be building a whole new world from scratch. We will call this world Category ‘Hask’ (for reasons that will become apparent much later), or simply Hask from now on, and it will be inhabited by Types, their members, or values, Functions, and some other interesting characters. As we create this world, we will gradually become more and more powerful, and eventually will be able to write efficient, elegant, and robust Haskell programs for any practical task.

### A Char and an Int walk into a bar and make a Function

Natural numbers are the most intuitive math concept, one everyone is familiar with - they are what you use when counting the times you’ve read confusing monad tutorial articles. Natural numbers start with nothing, zero, and go on forever. Once you subtract the number of times you read confusing monad tutorials (at least 37) from the number of times they made sense (0), you realize the need for negative numbers, which together with the naturals give you another infinite type - the Integers. The various “ints” you might be used to from C-land, as well as the type Int in Haskell, are a teeny-tiny subset of the real Integers, limited by 2^31 - 1, or 2^63 - 1 on 64-bit systems. What is 9,223,372,036,854,775,807 compared to infinity? Same as nothing, but, fortunately, it is enough for most practical purposes, including counting all the money in the world, should you want to write a new Automated Banking System at some point.
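
You can ask ghci about this bound directly; on a 64-bit system it should answer something like:

```haskell
ghci> maxBound :: Int
9223372036854775807
```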

Char is our second foundational type - a pot of all the characters we use when writing stuff in different languages, including mathematics or Egyptian pictography if we want to get fancy. It is also a finite type, and quite a bit smaller than Int at that.

By themselves, these types are quite boring, and we can’t do much with them - it’s just lonely numbers and characters hanging out all by themselves. They are sort of like prisoners in solitary confinement with no way to interact with others. Let’s break them out and set them free to roam the world and talk to each other! To do this, we are going to need functions.

Let’s line up the characters from the Char type as they appear in the English alphabet. Then we can ask each letter - “which number are you?” - like in the picture on the right.

We have just constructed our first function! What it does is match each letter with a number. Now, this is very important: it matches all letters, each with exactly one number. We could match only some letters to numbers - then our function would be called a partial function, since it would not be defined for all members of our type. A function can also map several members of a type to one member of another type; e.g., we could define a function that matches ‘a’ and ‘b’ to 1, ‘c’ and ‘d’ to 2, etc. - it would be a perfectly fine, even if a bit weird, function. However, what we cannot do is define a function that matches one member of a type to several members of another type:
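
A sketch of such a perfectly fine many-to-one function (the name and the groupings are made up for illustration):

```haskell
-- total and many-to-one: every Char gets exactly one Int
whichGroup :: Char -> Int
whichGroup 'a' = 1
whichGroup 'b' = 1
whichGroup 'c' = 2
whichGroup 'd' = 2
whichGroup _   = 0   -- everything else maps to 0, keeping the function total

-- a one-to-many “function” cannot even be written:
-- whichGroup 'a' = 1
-- whichGroup 'a' = 2  -- this second equation would simply never be reached
```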

On the other hand, an imperative procedure (a JavaScript “function”, for instance) can take an ‘a’ and return 1 on one day, the string ‘32’ on another, or crash if the temperature in Alaska drops below zero. Our function, like any real mathematical function, always gives you the same result for the same argument, no matter what.

You can trust the word of a function. You cannot trust anything from a procedure.

Our function (let’s call it whichNumberIsChar) takes a Char and gives an Int as a result. In Haskell, we write: whichNumberIsChar :: Char -> Int. You can try it live in your browser here. Just type (cheating a bit since we are using another, much more powerful polymorphic Haskell function to define ours): let whichNumberIsChar = fromEnum :: Char -> Int. Then you can type whichNumberIsChar 'Z' (or some other character) and see what happens.

Our world is still very boring: we have two basic types living in it and one function between them, but we can’t even add two numbers together! That’s not enough to solve a simple “calculate an average of some numbers” problem, let alone do something more interesting. However, it turns out that Int, Char, and some imagination can take us much further than it may appear. Let’s make some more magic!

### How FunTypical!

First, we need strings. What is a string? It’s basically just a bunch of characters put together in order, like “abc”, “xyz”, or “Hello World”. This hints at how we can introduce the type String: it’s the set of all possible combinations of characters from Char. Easy, right? We can also introduce another obviously useful function right away, between String and Int, that gives us the length of a string: length :: String -> Int. Try it out in our repl and you should see something like:
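
For example:

```haskell
ghci> length "Hello World"
11
```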

How about a “some numbers” type? We can use the same logic as above and introduce the type ListInt - the set of all possible ordered combinations of numbers from Int - such as [1, 2, 3] or [10, 500, 42, -18]. What if we treat each string as a sentence and want to define a type Book that contains a bunch of sentences together? Again, we can define it as the set of all possible ordered combinations of strings from String.

Do you notice a pattern? We take a type, apply some manipulation we describe as “a set of all possible ordered combinations” to members of this type, and get a new type as a result. Doesn’t it look just like a function?!

Let’s call it List. Just as we defined the functions whichNumberIsChar :: Char -> Int and length :: String -> Int, List is indeed also a function - one that takes a Type and creates a different Type! In Haskell, we write List :: * -> * (‘*’ is a synonym for a concrete Type in Haskell). Such functions are sometimes more formally called type-level functions, and they are at the core of the beauty, elegance, and efficiency of Haskell. We will be doing a lot of type creation down the line!

### Curry and Recurse!

We saw that a function takes one argument of one type and maps it to exactly one value of another or the same type. A couple of questions should arise at this point:

• what do we do with functions of several arguments, which exist even in math - like f(x,y) = x + y - not just in C-land programming languages, and
• how do we actually define functions without resorting to writing explicit tables that map arguments to results - something like “1 + 1 = 2, 0 + 1 = 1, 1 + 0 = 1…” for all numbers. You could certainly do that, but it’s a bit impractical, where by “impractical” I mean totally nuts.

There are a couple of ways to tackle the first one. Let’s say we want a function that takes a Char and an Int, converts the character to a number with the help of our whichNumberIsChar function, and calculates the sum of the two - something like anotherWeirdFunction c i = (whichNumberIsChar c) + i - but we can’t do this, since the functions we defined take only one argument! One way to work around it would be to somehow combine our Char and Int into a new type of pairs, (Char, Int) - similar to what we did with List above - and write:
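
A sketch of this pair-based version (defining whichNumberIsChar via fromEnum, the same cheat as before):

```haskell
whichNumberIsChar :: Char -> Int
whichNumberIsChar = fromEnum

-- one argument, which happens to be a pair
anotherWeirdFunction :: (Char, Int) -> Int
anotherWeirdFunction (c, i) = whichNumberIsChar c + i
```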

Now we are ok - our function takes just one argument. This is a perfectly valid way to do it, and we will see how to construct pairs from other types in the next chapter. However, it turns out it is much more powerful to use a different approach. First, one very important point:

Functions are first-class citizens in functional programming (that’s kind of why it’s called functional programming) - which means you can pass a function to another function as an argument and return a different function as a result.

Let’s say you want to take the ListInt we defined above and multiply all its elements by some number n. In the imperative world, you would write a function that takes a list and a number n, goes through some loop, and multiplies the elements. An OK approach - but what if later on you need to add a number n to all list elements? You’d need to write a different loop that now adds numbers. Wouldn’t it be great if we could just pass in whatever function we want applied to all list elements? In Haskell, you can, by simply writing:
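
For example, in the repl:

```haskell
ghci> map (+2) [1, 2, 3]
[3,4,5]
ghci> map (*2) [1, 2, 3]
[2,4,6]
```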

We pass a function that adds 2 - (+2) - or multiplies by 2 - (*2) - and a ListInt to a function called map (we will define it in the next chapter), and get another list as a result. This is much more concise and powerful than writing a lot of boilerplate imperative loops, and it also hints at how we can define functions of multiple arguments. Look at some further examples:

• (+) is really a function of one argument: it takes an Int and returns a new function that takes an Int and returns an Int. So, (+2) is a function that adds 2 to any number, (+18) adds 18, etc. - we create these functions on the fly by passing only one argument to (+) first. So, the type signature of (+) is: (+) :: Int -> Int -> Int, which you can read as (+) :: Int -> (Int -> Int), i.e., exactly what we just wrote - it takes an Int and returns a function that converts an Int to an Int!
• Our anotherWeirdFunction is defined similarly: it’s a function of one argument that takes a Char and returns a function of one argument that takes an Int and returns an Int:
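
The curried version can be sketched as follows (again using fromEnum in place of whichNumberIsChar; addCodeOfA is a made-up name):

```haskell
anotherWeirdFunction :: Char -> Int -> Int
anotherWeirdFunction c i = fromEnum c + i

-- partial application creates new functions on the fly:
addCodeOfA :: Int -> Int
addCodeOfA = anotherWeirdFunction 'a'   -- a function that adds 97 to its argument
```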

So, in general, a function of many arguments can be defined as a function that takes one argument and returns a function that takes one argument and returns a function… etc. This technique is called currying, because that’s how you make chicken curry. No, not really. It is named after Haskell Curry, after whom the Haskell language itself is also named.

Currying is extremely useful because it makes the math much simpler and allows you to create different functions on the fly and pass them around as you see fit. If you want to dig into the details, there’s a special optional chapter on Lambda Calculus - it’s quite educational if you want to look into the internal machinery of pretty much any modern functional language.

Now, let’s turn to the second question - how to construct functions efficiently.

It is in fact possible to start with an empty set and define all of Haskell: first, you construct the natural numbers starting with an empty set as we showed in an aside above, then you define the operation of addition on these numbers, invent the integers once you realize you need negative numbers, then define multiplication via addition, etc. It is a long and winding road.

We won’t be doing that (although it’s a pretty cool exercise to do on your own after you read the Lambda Calculus chapter), since existing computer architectures already provide a bunch of operations, such as addition, multiplication, and memory manipulation, much more efficiently than if we were to implement them from scratch by translating pure functional lambda calculus logic. Haskell additionally defines a bunch of so-called primops, or primitive operations, on top of those, and then a bunch of useful functions in its standard library, called Prelude - so you can obviously use all of them when writing your functions and do things like:
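
For instance, straight in the repl:

```haskell
ghci> 2 + 3 * 4
14
ghci> max 10 42
42
```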

Having said that, a very important concept in functional programming that we need to understand is recursion. It allows you to write concise code for some pretty complex concepts. Let’s start with the “hello world” of functional programming: the factorial function. The factorial of a number n is defined simply as the product of all numbers from 1 to n, and in Haskell you can in fact simply write product [1..n] - but that’s too easy, so let’s define our factorial function recursively:
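
A recursive definition might look like this:

```haskell
fact :: Integer -> Integer
fact 0 = 1                 -- boundary case: the recursion stops here
fact n = n * fact (n - 1)  -- the function calls itself on a smaller argument
```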

What happens in recursion is that a function calls itself under certain conditions - in the case of fact, when n > 0. Try executing the fact function on a piece of paper with different arguments. E.g., if we call fact 1, here’s what’s going to happen:
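
The expansion goes roughly like this:

```
fact 1
  = 1 * fact 0    -- second equation, with n = 1
  = 1 * 1         -- boundary case: fact 0 is just 1
  = 1
```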

Phew. Now try expanding fact 10 on paper when you have half an hour to waste. There’s nothing complex about recursion: you (or a computer) simply substitute the function call with the function’s body as usual, and the main thing to remember is that your recursion needs to terminate at some point. That’s why you need boundary cases, such as fact 0 = 1 in the case of factorial - though the terminating conditions can be completely different in other functions.

That’s, in essence, all there is to it.

## Conclusion

In a very short time, starting with barely anything but an empty set, we have come up with the concepts of:

• Types, inhabited by their members, also called values, and introduced two very simple types: Int and Char
• Functions, which map values of one type to values of another (or the same) type
• Type Functions, which create new types from other types, and introduced a couple of useful complex types: String and ListInt
• Composition, the essence of a functional program (and, incidentally, of Category Theory, but we won’t be going there for quite some time yet)
• Ways to construct functions via currying and recursion

This is pretty much everything we need to gradually build our magical Hask world!

What is currently missing are the details of how exactly we can define and use functions such as length, which gives us the length of a String (or, as we have seen above, of any other type constructed with the help of our List type function!), and List itself, which creates new types. We can’t just tell a computer “give me a set of all possible combinations” - we need to be slightly more specific. In the next chapter, we will continue following the adventures of our mage and will focus on card magic, while constructing different types and intimately understanding their internal structure - an absolute requirement for designing efficient functions.

Let’s roll up our sleeves and get to it!

## Mage’s Toolbox 1: Simple Types, Simple Functions, Recursion, If-Else

Download and install Haskell from http://haskell.org or via your OS package manager. Run ghci to work with the repl, or use an online repl.

Type names and Type Constructor names always start with a capital letter.
Functions are defined as:
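
A minimal sketch (addOne is a made-up example name):

```haskell
-- name arguments = expression
addOne :: Int -> Int
addOne x = x + 1
```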

Definition via pattern matching:
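
For instance (made-up examples); equations are tried top to bottom, and the first matching pattern wins:

```haskell
isZero :: Int -> Bool
isZero 0 = True    -- matches only the value 0
isZero _ = False   -- '_' matches anything else

-- the same function written with if-then-else instead
isZero' :: Int -> Bool
isZero' n = if n == 0 then True else False
```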

Functions of many arguments defined via currying - as a function of one argument that returns a function of one argument that returns a function…:
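
A sketch (add and addFive are made-up names):

```haskell
add :: Int -> Int -> Int     -- read as: Int -> (Int -> Int)
add x y = x + y

addFive :: Int -> Int
addFive = add 5              -- partial application creates a new function
```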

# Chapter 2: Type Construction

In this chapter:

• house of cards
• algebraic data types
• Maybe yes, maybe nothing
• lists and recursive types
• function definition

You got to the “Nobu” right on time, barely past seven. As you were approaching the hostess, a hurricane of a girl ran into you - “Thank goodness you got here on time, Curry!” - Curry, that must be my name? But why, I don’t even like curry?! Must be a different reason… - grabbed your hand and dragged you past the busy tables somewhere deep in the restaurant, through the kitchen, out the back door, down the stairs, and into a dimly lit room. In the middle of the room stood a card table - green cloth, some barely visible inscription that spelled “Monoid” in golden letters on it, - surrounded by six massive chairs that looked really old. The girl abruptly stopped, turned around, looked you right in the eye - hers were green - and asked almost irritably, but with a pinch of hope in her voice:

– Ahem… Ready for what?! Can you explain what’s going on?
– Did you hit your head or something? And where were you last week? I called like 27 times! And why do you smell like manure - did you suddenly decide to visit a farm?! Ah, no time for this! - she dragged you to the table and pushed you into one of the chairs - she was strangely strong despite being so miniature - so that you had no choice but to sit. BOOM! Suddenly, a small firework exploded before your eyes, and you felt more magic coming back to you.

Out of thin air, you create a new deck of cards with intricate engravings on their backs, nothing short of a work of art. You feel them - they are cold and pleasant to touch, - but then rest your hands on the table, close your eyes, and.. it’s as if the cards became alive and burst in the air above the table, shuffling, flying around, dancing. You open your eyes, enjoying your newly mastered power over cards, barely nod your head, and the deck, obeying your wishes, forms into a fragile house of cards right beside the Monoid sign.

You turn to look at your mysterious companion. She is smiling and there are twinkles in her eyes.

## House of cards with a little help from Algebra

Looks like our mage is making friends and remembering new spells, this time without risking any animal’s wellbeing. Let’s follow in his footsteps and learn some card magic!

First, try dealing some cards in our repl. Just follow the link and click “run”. You’ll see some beautiful (well, not as beautiful as the ones Curry created, but still..) cards being dealt: … 9♥ 10♥ J♥ Q♥ K♥ A♥ - with barely more than 20 lines of code. Let’s understand what’s going on here step by step.

To start doing card magic - or card programming for that matter - we also need to create a deck of cards out of thin air. Here’s one way of doing it in Haskell:
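A minimal sketch of such definitions (the full listing lives in the repl; the exact deriving clauses there may differ):

```haskell
-- Two enumeration (sum) types for suit and face value,
-- and a product type combining them into a card.
data CardSuite = Clubs | Spades | Diamonds | Hearts
  deriving (Eq, Enum)

data CardValue = Two | Three | Four | Five | Six | Seven | Eight
               | Nine | Ten | Jack | Queen | King | Ace
  deriving (Eq, Enum)

data Card = Card CardValue CardSuite
  deriving (Eq)
```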

We have created three new types above. CardSuite is inhabited by 4 values: Clubs, Spades, Diamonds and Hearts - just like Int is inhabited by $$2^{64}$$ values - that correspond to playing card suites. CardValue is inhabited by 13 values that represent card face values. All of it is pretty self-explanatory. The type Card represents a specific card and is a little bit more interesting - here we are combining our two existing types into one new type. Don’t mind the deriving statements for now - we’ll explain them down the road; suffice it to say that here Haskell automatically makes sure we can compare values of these types (Eq) and convert them to numbers (Enum).

Open the repl linked above again, run it (if you don’t run it first you’ll get an error as the types won’t be loaded into the interpreter), and then play with these types on the right:

Here we bound val to CardValue Jack, suite to CardSuite Spades and created 2 cards - Ace of Hearts directly from values and Jack of Spades from our bound variables - both approaches are perfectly valid. Create some other cards in the repl to practice.

Looking at the type definitions above, the syntax for type creation should become apparent: to create a new type in Haskell, we write data <Name of Type> = <Name of Constructor1> <Type of value 11> ... <Type of value 1n> | <Name of Constructor2> <Type of value 21> ... <Type of value 2m> | ... . You can read the symbol | as “or” - it is used for combining different types together into a new Sum Type, whose values can be taken from a bunch of different types we are combining, but every value is taken from only one of them. Types like data Card = Card CardValue CardSuite are called Product Types, and every value of such a type contains values of all of the types we are combining.

### Algebraic Data Types

Sum and Product types together are called Algebraic Data Types, or ADTs for short (not to be confused with abstract data types, which are quite a different matter), and they form the foundation of any modern type system.

Understanding ADTs is extremely important in Haskell programming, so let’s spend more time discussing them and looking at different ways to define them. Here are several more examples.

Type Person, containing their name and age: data Person = Person String Int. We are combining a String to represent a name and an Int to represent an age, and then we can create people like so: Person "Curry" 37 or Person "Jane" 21. This is a product type.

What if we wanted to represent built-in JavaScript types in Haskell - String, Number, Date, Object, etc.? Here is one way of doing it (it does not allow us to represent everything; we will look at the full JavaScript value type definition once we learn recursion):
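A sketch of such a type, following the description below (the constructor names JString, JDate, JNumber and Undefined come from the text; representing a date as a Unix timestamp is an assumption):

```haskell
-- A simplified model of JavaScript's built-in value types (a sum type).
data JSValues = JString String   -- a string value
              | JDate Int        -- a date, as a Unix timestamp
              | JNumber Double   -- a numeric value
              | Undefined        -- the "undefined" value
  deriving (Eq, Show)
```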

Now, this is interesting - we have combined three types that “store” a String, an Int and a Double inside a constructor, and added a simple value Undefined as the fourth possible value. It means that values like JString "hello world", JDate 12345225 (which is quite a long time ago), JNumber 3.141 and Undefined are all valid values of the type JSValues. This is a sum type.

By now, you should have a pretty good feel for what sum and product types are. When you create a product type, you combine values of several different types in one value, represented as a tuple $$(x_1:t_1, x_2:t_2, ... )$$. When you create a sum type, each value of your type can come from one of several different types, but only one at a time.

### Records

What if you wanted to extend the type Person to include more information about a person, which would be much more useful in a real-world situation? E.g., place of birth, parents, first and last names, education, address etc. If you had to write something like

it would be extremely confusing to use - you’d need to remember the order of fields, you’d confuse fields with the same types etc. Thankfully, Haskell supports records - basically, a Product Type with named fields. We can define our Person type properly like this:
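A sketch of such a record (the field names here are illustrative assumptions, not a fixed schema):

```haskell
-- A Product Type with named fields: Haskell generates an accessor
-- function for each field (e.g. age :: Person -> Int).
data Person = Person
  { firstName    :: String
  , lastName     :: String
  , age          :: Int
  , placeOfBirth :: String
  } deriving (Eq, Show)

-- Fields can be given by name, in any order:
examplePerson :: Person
examplePerson = Person { firstName = "Curry", lastName = "Jones"
                       , age = 37, placeOfBirth = "Unknown" }
```

The record syntax removes the need to remember field order: age examplePerson simply returns 37.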

Before we move on to the other ways of constructing new types, here is yet another example, which illustrates how Product and Sum types work together. Let’s say we want to model different 2-dimensional geometric shapes. We can define the following Product types:

Now, let’s say we want a function that calculates the area of our shapes. We can start like this:
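A sketch of the shape types and the first area function (the concrete representations - center-plus-radius for a circle, two opposite corners for a rectangle - are assumptions for the sketch):

```haskell
data Point     = Point Double Double     -- x and y coordinates
data Circle    = Circle Point Double     -- center and radius
data Rectangle = Rectangle Point Point   -- two opposite corners

-- So far, area works only for rectangles:
area :: Rectangle -> Double
area (Rectangle (Point x1 y1) (Point x2 y2)) = abs (x2 - x1) * abs (y2 - y1)
```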

This works fine. But we need to calculate the area of Circles as well, and potentially other shapes. How do we do it? If you try to define another function area :: Circle -> Double the typechecker will complain because you already have a function named area that has a different type. So what do we do? Create a differently named function for every type? Now, that would be ugly!

There are several approaches to solving this. (This is actually one of the problems with Haskell - you can solve one task in many different ways; there’s just so much power and flexibility that it sometimes gets overwhelming, and not just for beginners. We will see many examples of this down the road.) One of them is using Typeclasses, another is using Type Families (probably overkill in this case) - we’ll get there soon. For now we will simply use a Sum Type. We can combine our three shapes (product types) into one sum type. There are actually two ways to do it! The first applies if we want to keep Rectangle, Circle, and Point as separate concrete types in our system. Then we will use the definitions above and combine them into one type like so:

Then, our area function can be defined as follows with the help of pattern matching:
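A sketch of this first approach (the wrapper constructor names PointShape, CircleShape and RectangleShape are assumptions; the concrete shape representations repeat the earlier sketch so this block stands alone):

```haskell
data Point     = Point Double Double
data Circle    = Circle Point Double     -- center and radius
data Rectangle = Rectangle Point Point   -- two opposite corners

-- One Sum Type combining the three concrete shape types:
data Shapes = PointShape Point
            | CircleShape Circle
            | RectangleShape Rectangle

-- Pattern matching picks the right formula per constructor:
area :: Shapes -> Double
area (PointShape _)                 = 0
area (CircleShape (Circle _ r))     = pi * r * r
area (RectangleShape (Rectangle (Point x1 y1) (Point x2 y2))) =
  abs (x2 - x1) * abs (y2 - y1)
```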

Now everything will work like a charm. When you write somewhere in your code area myShape it will check which exact shape is myShape - Point, Circle or Rectangle - and will choose the respective formula.

An alternative way to achieve the same result is to define Shapes type directly as a Sum Type like so:
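A sketch of the direct Sum Type (here Point, Circle and Rectangle are just constructors of Shapes, not types; the field layout is an assumption):

```haskell
data Shapes = Point Double Double                    -- x, y
            | Circle Double Double Double            -- center x, center y, radius
            | Rectangle Double Double Double Double  -- two opposite corners
  deriving (Eq, Show)
```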

I leave the area function definition for this case as an exercise. The difference from the previous approach is that in this case you will not have the types Point, Circle and Rectangle in your type system - they are simply constructor functions now. So trying to define a function from Rectangle to Double will be a typechecker error.

Using Sum and Product types you can express a lot of real world modeling already, but why stop there? We want to be even more efficient and polymorphic! Read on.

## Type Functions Are Functions, Maybe?

Let’s spend some time reviewing a statement we made in the last chapter: type and data constructors are functions. This is extremely important to understand and internalize, so that you do not become confused when writing real world programs that use monad transformer stacks and similar concepts that may look overwhelming, but are in fact quite simple. You need to be able to easily read type signatures and understand their structure at first glance.

Let’s look at a couple of types, renaming type names now to distinguish between types and constructors: data Cards = Card CardValue CardSuite and data People = Person String Int. The part on the left is the name of the type, the part on the right is the name of the data constructor and types of the values that this constructor contains. So, in reality Card is a function that should be read in Pascal-ish pseudocode as function Card (x:CardValue, y:CardSuite) : Cards - i.e., it’s a (constructor) function from 2 variables, one of type CardValue, another of type CardSuite that returns a value of type Cards! Same with Person, which turns into function Person (x:String, y:Int) : People.

In fact, using so called GADT (generalized algebraic data types) Haskell notation, this becomes even more apparent - below is the solution to the task of extending Card type with the Joker card written first with the familiar syntax and then using GADT:
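A sketch of both variants (the GADT form needs the GADTs language extension; the familiar-syntax version is shown as a comment so the block compiles):

```haskell
{-# LANGUAGE GADTs #-}

data CardValue = Two | Three | Four | Five | Six | Seven | Eight
               | Nine | Ten | Jack | Queen | King | Ace deriving (Eq)
data CardSuite = Clubs | Spades | Diamonds | Hearts deriving (Eq)

-- Familiar syntax would be:
--   data Cards = Card CardValue CardSuite | Joker
-- GADT syntax spells out each constructor's function type:
data Cards where
  Card  :: CardValue -> CardSuite -> Cards
  Joker :: Cards

isJoker :: Cards -> Bool
isJoker Joker = True
isJoker _     = False
```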

Looking at the 2nd variant, the point about Card and Joker being functions becomes much clearer. In effect, it tells us that we are defining the type Cards, which contains 2 constructors: Card, which takes 2 arguments of types CardValue and CardSuite and returns a value of type Cards, and Joker, which takes no arguments and is itself a value of type Cards.

We can take it one step further and push the analogy of Product and Sum types to its fullest - we can treat the type Cards as a type function that is a sum of 2 product types: typeFunction Cards = Card (x : CardValue, y : CardSuite) + Joker()! Of course, + here has nothing in common with the arithmetic sum; it has the same meaning as | in real Haskell syntax.

So, Card is a constructor that creates a value of type Cards from 2 other values, and Cards, even though we just called it a type function, is in fact a “type constant”. It is a concrete type - you cannot apply it to another type to construct a new one. Unlike the type function List that we hinted at in the last chapter, which actually creates new types from other types - and as such is much more interesting and important for us. We’ll get to it in the next section, and we’ll need to understand it to follow our card dealing program further.

For now, let’s start with the simplest real (non-constant) type function, one that constructs new types from other types: Maybe, which is used ubiquitously in Haskell.
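Maybe’s actual definition is short enough to quote; this sketch hides the Prelude’s own version so it stands alone:

```haskell
import Prelude hiding (Maybe(..))

-- The standard definition: a sum of a one-field constructor and a constant.
data Maybe a = Nothing | Just a
  deriving (Eq, Show)
```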

This notation is very often a source of confusion for beginners and even advanced Haskell students - we are using the variable a on both sides of the type definition, but it means very different things on the left and on the right! This confusion should clear up if you read this sort of definition in the spirit of what we’ve described above: typeFunction Maybe (a : Type) = Just (x : a) + Nothing (). Please dwell on it until you fully understand the issue - it will save you lots of hours of frustration with more complex types down the road. We are in effect saying here: Maybe is a type function that takes a type variable a and creates a new type (Maybe a) from it with the help of 2 constructor functions - Just, which takes a “regular” variable of type a and creates a value of type Maybe a, and Nothing, a function of zero arguments that is itself a value of type Maybe a. So, on the left, a is a variable of type “Type” - it can take a value of any type we have defined - and on the right it is used in the type signature of the constructor function Just.

Take a breath and let it sink in.

Why is the type (which really is a type function) Maybe a useful? It’s the easiest way to handle failure in your functions. E.g., you could define a function divide like so:
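A sketch of such a divide, following the behavior described below:

```haskell
-- Division that handles the zero case instead of crashing.
divide :: Double -> Double -> Maybe Double
divide _ 0 = Nothing        -- dividing by zero fails gracefully
divide x y = Just (x / y)   -- otherwise, wrap the result in Just
```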

This function returns Nothing if you try to divide by zero, instead of crashing or throwing an exception, and Just <result of division> if you divide by non-zero. This is a trivial example, but Maybe is of course very useful in thousands of other, more complex situations - basically, anywhere you need to handle the possibility of failure, think about Maybe (there are other ways to handle failure and errors in Haskell, as we shall see further on).

Note also the function definition syntax - a very powerful and concise pattern matching approach. Under the hood, this definition translates into something like:
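In other words, something like this (a sketch of the desugared form):

```haskell
-- The multi-clause definition rewritten as a single case expression.
divide :: Double -> Double -> Maybe Double
divide x y = case y of
  0 -> Nothing
  _ -> Just (x / y)
```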

.. but in Haskell you can avoid writing complex nested case statements and simply repeat your function definition with different variable patterns, which makes the code much more readable and concise.

So, what if we also have a function square x = x * x that works on Doubles, and we need to apply it to the result of our safe divide function? We would need to write something like:

To do the “somehow extract” part in Haskell, we again use pattern matching, so the function above would be written as:
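A sketch of that function, reusing divide from above so the block stands alone (the name divideSquare is an assumption):

```haskell
square :: Double -> Double
square x = x * x

divide :: Double -> Double -> Maybe Double
divide _ 0 = Nothing
divide x y = Just (x / y)

-- Pattern match on the result of divide: pass the failure through,
-- or apply square inside the Just.
divideSquare :: Double -> Double -> Maybe Double
divideSquare x y = case divide x y of
  Nothing -> Nothing
  Just r  -> Just (square r)
```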

We pattern match on the result of divide and return Nothing if it’s Nothing, or Just the square of the result if it’s Just something. This makes sense, and we just found a way to incorporate functions that work between Doubles into functions that work with Maybe Doubles. But come on, is it worth such complicated, verbose syntax?! Isn’t it better to just catch an exception once somewhere in the code and allow division by zero?

Yes, it is worth it, and no, it’s not better to use exceptions, because you can rewrite the above as simply divideSquare x y = square <$> divide x y, or, looking at all the function definitions together:

Wow. Isn’t it great - we are using practically the same syntax for both unsafe functions between Doubles and safe functions between Maybe Doubles, and what’s more, we can use the same approach for a lot of other type functions, not just Maybe a. Here, operator $ is just function application with the lowest precedence, which allows us to avoid brackets – without it you’d write square (divide' x y) – and <$> is another operator that also applies functions defined for concrete types, but it does so for values built with a special class of type functions from these types, e.g. Maybe a or List a - without the need to write nested case statements. It is a key part of something called a Functor in both Haskell and Category Theory, and we will look at it in more detail in the next chapter.

In the end, it’s all about making function composition easy - remember Chapter 1 and how to think about solving problems functionally? You break the problem down into smaller parts and then compose functions together, making sure input and output types “fit together”. All the complicated scary terms you may have heard or read about in Haskell - Functors, Monads, Arrows, Kleisli Arrows etc. - are all needed (mostly) to make composing different functions easy and concise, which, in turn, makes our task of decomposing a problem into smaller pieces so much easier and more enjoyable. So don’t be scared, it will all make perfect sense.

### Advanced: On Data Constructors and Types

It pays to understand how Haskell (GHC specifically) treats and stores values of different types. One of the powerful features of Haskell is that it is a statically typed language - which means no type information needs to be available at runtime!
This leads to much better performance with guaranteed type-safety at the same time, but it is a source of confusion for many programmers coming from an imperative background. Let’s look at the type Person we defined above, changing the names a bit to avoid confusion:

The name of the type is Person, and it has only one data constructor, called PersonConstructor, that takes a String and an Int and returns a “box” of them that has the type Person. Technically, it creates a tuple (n :: String, a :: Int) - so, for instance, when you write something like john = PersonConstructor "John" 42 you get a tuple ("John", 42) :: Person.

The beauty of Haskell is that during the type checking stage the compiler has only one job - to make sure your functions compose, i.e., the inputs to every function correspond to what it expects, and the output of every function corresponds to its type as well. Once this check is done, all the type information gets erased - it is not needed, as we have mathematically guaranteed that every function will receive correct values at runtime. This immensely increases performance.

So, if you have a function showPerson :: Person -> String that is defined something like showPerson (PersonConstructor name age) = name ++ " is " ++ show age ++ " years old" and then you call it on the John person from above, here is what happens. Under the hood it is simply a tuple ("John", 42), and according to the pattern match on the left side of the showPerson definition we get name = "John" and age = 42. Then these values get substituted in the function body and we get the resulting string. But nowhere is the type Person used! The typechecker guarantees that the first value in a tuple received by showPerson will have type String and the second - Int, so a runtime type error is impossible!

In the general case, if you have a data constructor <Cons> T1 T2 ... TN that belongs to some type MyType and you call it to create a value as <Cons> x1 x2 ... xn, Haskell stores it as a tuple (x1, x2, ..., xn) of type MyType. The type is used during the type-checking stage and then gets erased, while type-safety is guaranteed by math.

For sum types, the situation is a bit trickier. Recall our type of Shapes:

It has in this case two data constructors - Point and Circle, which behave exactly as described above. The function area gets rewritten during compilation using case analysis, in pseudocode:

This means that for sum types with more than one data constructor, Haskell needs a way to distinguish which data constructor a given tuple was created with. So, even though the information that s is of the Shapes type is erased, we keep the constructor number together with our tuple. Point 4 0 would thus be represented as something like (0, 4, 0) - in reality it’s done differently, but this illustrates the concept well enough - where the first value of the tuple is the constructor tag. GHC uses a lot of very smart optimizations under the hood, so this approach has minimal overhead at runtime.

## List, Recursion on Types and Patterns

Let’s come back to our card-dealing magic spells, available in the repl here. To (almost) fully understand them, there’s only one extremely important piece missing: the List type function and recursively defined types, which are extensively used in functional programming to implement various data structures. If you recall, in Chapter 1 we described List as some manipulation that takes all possible ordered combinations of values of another type and creates a new type from them. Try to define the List type function yourself before reading further.

Here is how we can operationalize this definition:

Remember how you need to read type definitions as type functions?
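One way to write it, following the Cell/Nil names used in the text:

```haskell
-- A hand-rolled list: Nil terminates the recursion,
-- Cell prepends a value to an existing list.
data List a = Nil | Cell a (List a)
  deriving (Eq, Show)

-- The list of 1, 2 and 3, built by repeated application of Cell:
oneTwoThree :: List Int
oneTwoThree = Cell 1 (Cell 2 (Cell 3 Nil))
```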
In pseudocode: typeFunction List (a : Type) = Cell (x : a, y : List a) + Nil(). List a is a type function of the variable a : Type, or a : * in real Haskell, that consists of 2 constructors: Nil, which is itself a value of type List a, and Cell, which takes a value of type a and a value of type List a (here is where recursion comes into play!) and returns another List a. Here Nil is the boundary case of our recursion, and Cell is a recursive constructor that builds a list until it is terminated by a Nil value as the last element.

Let’s construct a list from the values 1, 2 and 3. You have a constructor function Cell that takes 2 values - a value of some concrete type, in this case an Int, and a List of Int. What is the simplest List? Of course, it’s Nil! So, we can write Cell 3 Nil - and it is a List Int! Now, we can pass this list to another Cell call, like this: Cell 2 (Cell 3 Nil) - and we have yet another List Int. Apply it one more time, and you get Cell 1 (Cell 2 (Cell 3 Nil)). Easy, right? Basically, if you keep in mind that type functions and constructors are, well, functions, which follow all the same principles as “real” functions, this type-building business should become very clear and easy for you.

We looked at the principle of how the List a type function can be defined, but since in Haskell it is such a fundamental data structure, it is in fact defined with different constructor names and some syntactic sugar on top: Nil is in fact [] and Cell is defined as the operator :, so that our list of 1, 2 and 3 would be written as 1:2:3:[] or simply [1,2,3] - much more concise and convenient. You can write ["abc", "hello", "world"] - a value of type List String, or [1.0, 3.1415926, 2.71828] - a List Double, etc. In the last chapter we introduced the type String as a set of all possible ordered combinations of characters, or simply List Char.
In fact, that is exactly how String is defined in Haskell - which is quite logical from the math point of view, but at the same time very inefficient in practice. It’s much the same as if we defined our numbers as 0 being an empty set, 1 being a set containing the empty set and so on, and then defined the operations of addition and multiplication on top of that - instead of using CPU-supported ints, floats and operations on them. It would work, and might look very beautiful mathematically, but it would be thousands of times slower than asking a CPU to compute 2 + 2 via native operations. In a similar fashion, when you are writing real-world programs that do any noticeable amount of string manipulation, you should always use the type Text instead of String. It is represented much more efficiently internally, fully supports Unicode, and will generally give you better performance. With lists, the fastest operation is prepending an element to the beginning of the list - x:list. Inserting something in the middle or at the end is a pain that grows proportionally to the length of the list (in computer science we write $$O(n)$$).

To use Text, you should write:

The first line here tells the GHC compiler to treat string literals as Text. Without it, you’d have to write a lot of annoying String-to-Text-and-back conversion statements. The second explicitly imports the Text datatype for you.

Strings aside, lists are still very convenient for a lot of common tasks, as we will see down the line. Now that we know how to define functions and types, including recursive ones, we already have a very powerful toolbox at our disposal to start building something serious. Let’s practice by defining some useful functions.

#### Length of a List

Remember this from Chapter 1?
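A sketch of the definition discussed below (we hide the Prelude’s length, which is the real version of this function):

```haskell
import Prelude hiding (length)

-- Recursive, polymorphic list length: works on a list of any element type.
length :: [a] -> Int
length []     = 0               -- boundary case: empty list
length (x:xs) = 1 + length xs   -- one element plus the length of the rest
```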
Now that you know how a list is built, you can easily write it:

Here we use recursion (length calls itself) and pattern matching: if length is applied to an empty list ([]), it returns 0 (the boundary case for the recursion!), and if it is applied to a non-empty list, we return 1 plus length applied to the remainder of the list. (x:xs) is pattern matching on a specific constructor - x here is simply the first element of the list and xs is the rest of the list. It might make more sense if we wrote it via the Cell constructor: length (Cell x xs) = 1 + length xs. If you recall, Cell (or :) takes 2 values - a value of type a and a List a - and creates a list from them. Here we are telling Haskell: if our length function receives a value that is constructed with the help of the : constructor, take it apart into its constituents and do something with them.

We have just defined our first polymorphic function - it works on lists of any type of values! We don’t care what kind of value is stored in our list, we only care about the list’s structure - you will see this as a recurring theme again and again, and this is another powerful aspect of typed functional programming. You define functions that work on types created from other “argument” types regardless of what these argument types are - you only care about the structure that the type function gives to the new type.

Just like a “circle” is an abstraction of the sun, a vinyl record and a coin, a type function is an abstraction of a lot of different, potentially not very related entities. If you have already started learning Haskell and got stuck on monad tutorials - this might be exactly the issue: being misled by false analogies. A circle is not the sun or a vinyl record, but it captures some common quality of both - they are round, even though at the same time the sun is hot and huge and a vinyl record can make you feel sad. The same with Monads - a Monad is merely an abstraction of some useful quality of very, very different types.
You need to learn to see this abstraction, without being lost in and confused by analogies, recognize it as such, and it will all make sense. Just wait till Chapter 3.

#### Map a function over a list of values

Do you recall the mapping example from the last chapter? Here’s how we can define the extremely useful map function:

The type signature tells us that map takes a function that converts values of type a to values of type b, whatever they are, and a list of type a ([a]), and gives us a list of type b ([b]) by applying our function to all values of the first list. It is likewise defined recursively, with the boundary case being an empty list []. When map gets a value constructed with the help of : from x of type a and xs - the remainder of the list - of type [a], we construct a new list by applying f to x and gluing it together with the result of recursively calling map f over the remainder of the list.

Let us expand by hand what is going to happen if we call map (+2) [1,2]. (+2) is a function from Int to Int - it takes any number and adds 2 to it. We want to apply it to all the elements of the list [1,2] - which, if you recall, is internally represented as 1:2:[]:

The end result of such a call is what’s called a thunk. In GHC, it will actually be stored this way until the actual values of the list are required by some other function - e.g., showing it on the screen. Once you do that, the GHC runtime will calculate all the sums and produce the final list [3,4]. This is part of Haskell being lazy and using the “call by need” evaluation model. More on it later. Dwell on the expansion above a bit, and it should become very clear and easy to use. So much so that you can work on some more exercises.

One other piece of Haskell goodness is worth noting in our card dealing code:

It is called a comprehension, which might be familiar to you from math, and it basically tells Haskell to build a list of Card x y from all possible combinations of x and y from the right side of |.
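The comprehension in the card-dealing code might look like this (the card type definitions are repeated so the sketch stands alone; the repl listing may differ in details):

```haskell
data CardSuite = Clubs | Spades | Diamonds | Hearts
  deriving (Eq, Enum)
data CardValue = Two | Three | Four | Five | Six | Seven | Eight
               | Nine | Ten | Jack | Queen | King | Ace
  deriving (Eq, Enum)
data Card = Card CardValue CardSuite deriving (Eq)

-- A list comprehension: one Card for every combination of value and suite.
fullDeck :: [Card]
fullDeck = [Card x y | y <- [Clubs .. Hearts], x <- [Two .. Ace]]
```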
[Clubs .. Hearts] creates a list of all values of type CardSuite starting with Clubs and ending with Hearts, and the same goes for [Two .. Ace] with values of type CardValue. You can obviously use the same syntax for Int: [1 .. 100] would create a list of all values from 1 to 100. You can in fact use this syntax with all types that belong to the so-called Enum typeclass - that’s what the deriving (Enum) statement is about: it makes sure we can convert values of our type to Ints (details about typeclasses in the next chapter). Practice writing some comprehensions yourself - e.g., for pairs of Int values; just pretend you want to put some dots on the screen and need to quickly create a list of their coordinates.

Ok, so here’s the full text of the card-dealing program, which you should now be able to fully comprehend:

We looked in detail at how the card deck types are constructed in the beginning of the chapter. The keyword data is used to define new types and new type functions, as we have already learned. The keyword type defines a so-called type synonym. In this case we write type Deck = [Card], and then we can use Deck everywhere instead of [Card] - so it’s mostly convenience syntax for better readability; not that useful here, but when down the line you write State instead of StateT m Identity or some such, you’ll come to appreciate it.

The syntax instance <Typeclass> <Type name> where ... is used to make a type part of a certain typeclass. We will look at typeclasses in detail in the next chapter, but a basic typeclass is similar to an interface in Java (typeclasses can be much more powerful than Java interfaces, but that’s a future topic). A typeclass defines the type signatures, and possibly default implementations, of some functions, and then if you want to make a type part of a certain typeclass, you have to implement these functions for your type. Haskell also provides a powerful deriving mechanism, where a lot of typeclasses can be implemented automatically for your types.
In the code above, we are deriving the typeclasses Eq - which allows comparing values of our type - and Enum - which allows converting values of our type to Ints - and we hand-code the typeclass Show for the types CardValue, CardSuite and Card. We could have asked Haskell to derive Show as well, but then our cards wouldn’t look as pretty. Try it out - delete the instance Show... lines, add deriving (Show) to all types and run the program again!

A typeclass is another convenient and powerful mechanism to support abstraction and function polymorphism: without it, once you have defined a function show :: CardValue -> String, it would be strictly bound by its type signature. If you then tried to define a function show :: CardSuite -> String, converting a different type to a string with a function of the same name, our typechecker would complain and not let you do it. Luckily, there’s a typeclass Show that has a polymorphic (working on different types) function show :: a -> String, which lets us work around this limitation. A lot of the power of Haskell in the form of Monads, Arrows, Functors, Monoids etc. is implemented with the help of typeclasses.

Further down, the fullDeck function creates a full 52-card deck with the help of a list comprehension, and smallDeck is just three random cards thrown together in a list. Then our main function follows the already familiar logic of breaking the problem down into smaller pieces and connecting functions together, making sure the types fit. We show the smallDeck, wait for the user to press “enter”, and then map print over all elements of the fullDeck list to “deal” the full deck. mapM_ is almost like the map we already defined - it maps a function over the list’s elements.
The difference is that print is not really a mathematical function like the ones we have dealt with so far - it does not map a value of one type to a value of another type; it puts something on the screen by calling an operating system procedure that in turn manipulates GPU memory via a complex cascade of operations. That’s why mapM_ is constructed differently from map - even though its high-level purpose is the same. We call this kind of manipulation “side effects”. Side effects mess up our cute, perfect mathematical world, but the real world is messy, and all real-world programs are about side effects, so we must learn to deal with them! As we shall very soon see, just like the sun, a vinyl record and a coin are all round, Maybe a, List a, procedures with side effects and a bunch of other very useful type functions have something in common as well - something that we will abstract away and which will make writing efficient and robust Haskell code an enjoyable and elegant endeavor.

## Lift me up!

Recall how we changed this for the function square x = x * x:

into this:

This transformation hints that operator <$> somehow applies the function square to any value of type Maybe Double (since that’s what divide returns) using some sort of pattern matching or case statements.

What happens here is that we take a function between values of two concrete types a and b (in the case of square, both are Double, but they can be anything) and magically receive the ability to apply it to values of type Maybe a, returning Maybe b. We can say that we took a function a -> b and lifted it into the Maybe type function, turning it into a function Maybe a -> Maybe b.

By looking at this attentively and thinking hard, we can easily design the operator <$> for Maybe like this (note also the operator definition syntax): It takes a function between some concrete types a and b and turns it into a function between the corresponding Maybes. Now, Maybe a is a type function that you should read as Maybe (a:Type) = ... What other type functions do we know by now? Of course, it's the list, or [a]! Can you define an operator <$> that would take a function a -> b and turn it into a function between the corresponding lists - [a] -> [b]? I know you can. Do it. Do it now.
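The Maybe definition referenced above ("like this") is not shown in this text; a minimal sketch, hiding the Prelude's own <$> so our definition doesn't clash with it (the list version is left as your exercise):

```haskell
import Prelude hiding ((<$>))

-- lift a function a -> b to a function Maybe a -> Maybe b
(<$>) :: (a -> b) -> Maybe a -> Maybe b
_ <$> Nothing  = Nothing
f <$> (Just x) = Just (f x)
```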

Good. As they say, one is a coincidence, two is a trend, so why don't we take it further: what if we had some type function f (a : Type) = ... or, in Haskell, f :: * -> * (a function that takes a concrete type and creates another concrete type from it) that may have different constructor functions, just like Maybe or List - can we define the operator <$> for it? Something like: We want this operator, just like in the case of Maybe and List, to take a function g :: a -> b and turn it into a function that is lifted into our type function f - h :: f a -> f b. Let's write out full signatures in our pseudocode to make sure we are clear on what means what. The operator <$> takes a function g (x:a) : b (takes a value of type a and returns a value of type b) and turns it into a function h ( x : f(a:Type) ) : f(b:Type) (takes a value of type f a and returns a value of type f b)

Congratulations, you just invented a Functor. A Functor is merely any type function f :: * -> *, like Maybe a or the list [a], that has an operator <$> defined, which "converts" or lifts functions between two concrete types - g :: a -> b - to functions between values of types constructed from a and b with the help of our type function f - h :: f a -> f b. In Haskell, Functor is a typeclass - more on this in a bit, in Chapter 3. Why do we need functors? So that we can write g <$> lst and be sure our "simple" g function works on complicated values constructed with some type function - instead of resorting to verbose boilerplate nested case statements for each new type we define. Ah, yes - if you wrote the operator <$> for lists above, you should have noticed that it's merely the map function. So Functor is our first very useful abstraction. If you define some type function f that takes some concrete type and creates another concrete type from it - similar to Maybe or List - always check whether you can turn it into a Functor by defining a <$> operator. This will make your code concise, easy to read, and a breeze to maintain.

Most of the confusion in Haskell stems from the fact that the concepts here are named almost exactly as they are in category theory, one of the most actively developing areas of mathematics today and the cutting edge of abstraction. I mean, why not call this an elevator or an injector or something less scary than a Functor?! But it is what it is: a really simple, elegant concept named so as to confuse us simple folk. However, for those who wish to be more technical, we will have a chapter with a light Category Theory intro as well.

Let us open the floodgates!

# Chapter 3M: Very Gentle Type Theory and Category Theory Intro

Occasionally, we will have chapters with an M (for "Math") attached. In them, we will be a bit more rigorous and try to give a more technical description of the concepts involved. You may skip or skim such chapters on the first reading, but they may prove useful for deeper understanding or as basic reference material.

We will give a stricter Type Theory introduction in this chapter. We will not focus on proofs too much (or at all) but will provide definitions and a hierarchy of concepts, which you can use as reference material as you delve deeper into the Haskell world. Strictly speaking, Haskell does not use intuitionistic Type Theory per se but is rather based on the so-called System FC. However, we have found that having a more complete picture of type theory helps in reasoning about more advanced Haskell abstractions as well. Plus, in case you want to venture into Idris or Coq, or invent your own language, the bigger picture will be even more helpful.

Also, one common misconception is that Haskell does not have dependent types. Strictly speaking, and from the type-theoretical point of view, it does - for instance, typeclasses and type families are nothing more or less than $$\sum$$-types. So, understanding some basic type theory will prove useful for any serious Haskell student.

## Types and Functions

We have already defined a Type intuitively as "a bunch of stuff with similar characteristics thrown in a pot together". In intuitionistic Type Theory, types are usually referred to with capital letters, such as $$A$$, $$B$$, etc. In Haskell, concrete types (such as Int, Char, etc.), as well as type functions (we will define them strictly below) that take a type and construct a different type out of it, such as Maybe a, have to start with a capital letter as well.

In math, we construct all types from nothing, starting with Natural numbers $$\mathbb{N}$$, which are usually called $$Nat$$ in programming languages. The way to construct natural numbers is as follows:

We postulate the existence of zero as a value of type $$\mathbb{N}$$ and define a successor function $$succ : \mathbb{N} \rightarrow \mathbb{N}$$. This way, we can say that $$1 \equiv succ \: 0$$, $$2 \equiv succ \: 1$$, etc. In a similar way, we can construct integers, reals, rings, fields, and basically all of math.
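This Peano-style construction can be sketched directly in Haskell (Nat here is our own toy type for illustration, not a standard library one):

```haskell
-- a natural number is either zero or the successor of a natural number
data Nat = Zero | Succ Nat deriving (Show, Eq)

one, two :: Nat
one = Succ Zero
two = Succ one

-- addition defined by recursion on the first argument
add :: Nat -> Nat -> Nat
add Zero     m = m
add (Succ n) m = Succ (add n m)
```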

In Haskell (or any other real-life language, for that matter), we have a bunch of primitive types already provided to us as a basis by the operating system or a specific compilation target, upon which we build our Type Universe using the subset of tools and rules detailed below.

Functions between types are written as $$f : A \rightarrow B$$, a notation which denotes a function with domain equal to type $$A$$ and codomain equal to type $$B$$. In Haskell, the notation is very similar, with the only difference being that Haskell uses a double colon for the type signature. So, the function f above would be written as f :: A -> B, or recalling an example from the previous chapters, function whichNumberIsChar :: Char -> Int.

TK – picture of this function

This notation also means that if we have a value $$a$$ of type $$A$$, denoted as $$a:A$$, we can apply a function $$f : A \rightarrow B$$ to it and get a value $$f(a):B$$ - f(a) of type B. As an example, having a value 'Z' of type Char, we can apply the function whichNumberIsChar to it and receive a number - a value of type Int. In both type theory and Haskell, function application does not require parentheses and is written simply as f a.

It is important to note that functions are also first-class citizens of type theory, which means we can pass them around as values, apply other functions to them, and classify them into types. So, $$A \rightarrow B$$ is the type of all functions from A to B, just as $$Char \rightarrow Int$$ is the type of all functions from Char to Int.

Types in type theory are also categorized in a hierarchy of Universes denoted with the letter $$U$$. So, the type of a type is a universe - we can write $$A:U$$ or $$Char:U$$ to explicitly denote this fact. In math, universes are organized in a hierarchy $$U_0:U_1:...$$, and we can build a similar hierarchy for Haskell. By definition, all types that belong to a universe $$U_n$$ also belong to the universe $$U_{n+1}$$, so when we want to talk about a type, we will often write $$A:U$$ without referencing the universe number explicitly.

## Maybe and Advanced Generalized Functions

So far, we have described concrete types where certain objects - values - live as well as functions between them - converting a Char to an Int, or an Int to an Int with something like f x = x * x etc. But what other “arrows” can we introduce in our universe?

One example that we have already seen is what we've called a "type function" - a function that takes a type and creates another type from it. We have defined two new types this way - String from Char and ListInt from Int. Since the type of any concrete type is a universe U, the type of all such "type functions" is U -> U. Haskell in fact uses the asterisk * to denote something very close to U in type theory (even though there are some important differences, which we will discuss below), so this type in Haskell is written as * -> *, which simply means it's a function that takes some concrete type and creates another concrete type out of it. Ubiquitous examples are the two "type functions" we defined in the previous chapters - Maybe a and List a. Using GADT syntax, as it shows the relevant concepts better:
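The GADT listing itself is missing here; a sketch of what it plausibly looked like, with primed names so it doesn't clash with the Prelude's own Maybe (the original presumably used the plain names):

```haskell
{-# LANGUAGE GADTs #-}

-- each constructor's full type is spelled out explicitly in GADT syntax
data Maybe' a where
  Nothing' :: Maybe' a
  Just'    :: a -> Maybe' a

data List a where
  Nil  :: List a
  Cons :: a -> List a -> List a
```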

In effect, Maybe : U -> U is a function that takes any concrete type a:U and creates a new concrete type Maybe a out of it. We can even write in Haskell type MaybeInt = Maybe Int, where we give a new name - MaybeInt - to the type that we get by applying the type function Maybe to the concrete type Int. In practice, no one really does this with Maybe, as it's easier to just write Maybe Int, but you may want to introduce such named types for more complicated type functions, such as monad transformer stacks, which we will look at much later.

A very important question is: what are Just and Nothing in our case? Let's stare at the type signatures attentively. We can re-write Just as Just(x:a) : Maybe a, which means that Just takes a concrete value of type a and returns a concrete value of type Maybe a as a result. It is also called a constructor function. Same with Nothing - it takes no arguments and returns a value of type Maybe a; we can even say that Nothing simply is a value of type Maybe a.

You have to internalize this distinction: Maybe is a "type function" - it takes a concrete type as an argument and returns a new concrete type as a result. Under the hood, it combines two data constructors - Just and Nothing - which can be treated as "normal" functions.

So what happens if we do write type MaybeInt = Maybe Int? You simply need to substitute type variable a for the concrete type Int in our Maybe a definition:
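A sketch of this substitution (the names MaybeInt, JustInt, and NothingInt here are illustrative, not from the original listing):

```haskell
-- type MaybeInt = Maybe Int behaves as if we had written out
-- the Maybe definition with the type variable a replaced by Int:
data MaybeInt = NothingInt | JustInt Int
```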

So, our type function Maybe got a specific type (Int) and produced a new concrete type with two data constructors as a result. This is exactly what happens under the hood when you write something like x = Just 4 - Haskell deduces that 4 is of some concrete numeric type (it can be Int, Integer, etc.), creates a concrete type Maybe Int from Maybe a based on this information, and then applies the data constructor Just :: Int -> Maybe Int to our 4:Int value.

Here we can also note an important difference between U in type theory and * in Haskell. The latter only means a concrete type, but cannot mean a type function such as Maybe itself or List - i.e., you cannot write Maybe List in Haskell, but you can write Maybe (List Int). In type theory, however, both Maybe and List have a type U since they live in our universe of types. To be strict and respect this distinction, we can say that * is equal to $$U_0$$ in type theory, and then all type functions with signature * -> * or, equivalently, $$U_0 \rightarrow U_0$$, will live in the next-level universe $$U_1$$.

TK – picture illustrating this concept

How about functions of type U -> A, i.e., functions that take a concrete type as an argument and produce a value of a concrete type as a result? E.g., what if we want a function EN : U -> Nat that enumerates all the types living in our universe and returns a given type's number? We could say that EN Char = 0, EN Int = 1, etc. This is a perfectly valid construct in Type Theory, but not in Haskell.

What if we go in the other direction and want a function of type A -> U? This is what is normally called "dependent types" in CS, or type families in Type Theory, and it is implemented in languages such as Idris, but, strictly speaking, not in Haskell - even though there is pretty active research in this direction. Everyone's favorite example is the type of vectors of a given length n. Ensuring the safety of a bunch of operations that require vectors of the same size works much better if we can do it at the type level rather than by introducing runtime checks. This topic brings us to

## Dependent Function Types (PI-types)

A dependent function type is a type of functions, whose codomain (type of their result) depends on the argument. It is written as
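The notation itself did not survive in this text; in standard type-theoretic notation (matching the $$B(x)$$ used in the explanation that follows), a dependent function type over $$x:A$$ is written as:

$$\prod_{x:A} B(x)$$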

and means that a function of this type takes an argument x of type A and returns a result of type B(x) - so the result type may vary depending on what x is. In case the type B(x) does not depend on x and is constant, this becomes our "regular" function type A -> B.

As an example, let’s say we have a type Vec n:Nat a:U of vectors of a given length n that store elements of type a. E.g., we can use Vec 3 Double to store the coordinates of physical bodies. Then we should be able to define a concatenation function that takes two vectors of types Vec n:Nat a:U and Vec m:Nat a:U and produces a vector of the new type Vec (n+m):Nat a:U, which depends on the exact types of its arguments. The full type signature of such a function would be written as:
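The signature is omitted above; using the $$\prod$$-notation just introduced, it can be reconstructed as (the name $$concat$$ is an assumption):

$$concat : \prod_{n:Nat} \prod_{m:Nat} \prod_{a:U} Vec \: n \: a \rightarrow Vec \: m \: a \rightarrow Vec \: (n+m) \: a$$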

This is a mouthful. In this specific case, since the result type depends on the types of the arguments, not on the arguments themselves, we can actually use our “regular” function signature and write Vec n:Nat a:U -> Vec m:Nat a:U -> Vec (n+m):Nat a:U - which is in fact quite similar to how it is done in Idris.

UNFINISHED CHAPTER. TBD: Formal ADT, $$\sum$$-types, connection to haskell concepts (typeclasses, type and data families), better examples.

# Part 2: I Wanna Be FAMIOus!

Let us recap our high-level Hask picture so far. We have values grouped together in types, such as Char or Int. We have functions: mappings of one or several values of given types to one and only one value of some type, such as length :: String -> Int or fact :: Int -> Int. Functions are values, too: you can pass them to a function and define types consisting of functions as values. We have type functions, or parametric types: they take a whole type as a value and construct a different type out of it, such as the list type [a] or Maybe a, where the variable a can be of any concrete type. We have polymorphic functions: those that are defined on parametric types and work regardless of what the exact type a is, such as length :: [a] -> Int or map :: (a -> b) -> [a] -> [b].

The real power (and complexity) of Haskell comes from various abstractions, like polymorphic functions we looked at in the last chapter. The next such mechanism is Typeclasses.

What is a typeclass? If you read the “Type Theory intro” chapter, you already know that typeclasses, multiparameter typeclasses, as well as type and data families are all simply cases of $$\sum$$-types. In essence, they take a type (or several in case of multiparameter typeclasses) and add some structure to the type (or between types for multiparameter typeclasses). In the next few chapters, we will look at different kinds of important typeclasses using ubiquitous Haskell examples and try to show their increasing power of abstraction.

# Chapter 4: Basic Typeclasses or “Show Me A Monoid”

In this chapter:

* Show typeclass
* Magma, Semigroup, Monoid
* Defining Monoid as a typeclass

– What’s your name? - She looks at you in amazement, but with warm concern – You really don’t remember anything?
– I.. It starts to come back to me slowly - like I could deal those cards now - but it’s all very vague still..
– Well, not to worry. I will guide you! And by the way, my name is Lambda, and we are friends! - she says after a slight hesitation. “Lambda and Curry” - you think to yourself. Sounds strangely fitting. – Let me help you remember more of your magic, Curry! You’ll need it for the contest… Let’s start with discussing how the same spells can do different things depending on the types you apply them to!

The idea of some “contest” gives you another reason to worry, but you decide to learn as much as you can for now.

## Show Typeclass

The simplest case of a typeclass is a single-parameter typeclass that works on concrete types (as opposed to type functions such as Maybe a). Such typeclasses are similar to Java interfaces, but even the next kind of typeclass, described in the “Functor” section - which works on type functions (with types such as * -> *) rather than concrete types - does not exist in widespread imperative languages, making them much less concise and powerful than they could have been.

An easy example of a typeclass is Show:

It basically defines a function show that takes a value of some type a :: * and converts it to String. If we take our type Person defined above, we can make it an instance of this typeclass as follows:
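The two listings referenced above are not shown in this text; a self-contained sketch, shadowing the Prelude's own Show class (the Person fields name and age follow the surrounding text; the exact record from earlier chapters may differ):

```haskell
import Prelude hiding (Show(..))
import qualified Prelude

-- our own version of the Show typeclass
class Show a where
  show :: a -> String

data Person = Person { name :: String, age :: Int }

-- age is an Int, so it must be converted to a String explicitly -
-- Haskell performs no implicit type conversions
instance Show Person where
  show p = name p ++ " is " ++ Prelude.show (age p) ++ " years old"
```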

Notice how there are no implicit type conversions in Haskell, so we cannot simply write ... ++ age ++, since age is of type Int, not the String that operator ++ expects. Otherwise it’s pretty straightforward: you define function bodies for the types you want to make part of a certain typeclass, and these functions start working on your types.

Typeclass Show defines a useful (and straightforward) abstraction “how should a value of any type be represented as a string?”. In our type-theoretic notation from “Chapter 2M” Show would be represented as:
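The notation itself is missing here; as a $$\sum$$-type, in the conventions of this book's math chapters, Show can be reconstructed as:

$$Show \equiv \sum_{a:U} (a \rightarrow String)$$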

which basically says “it’s some type with the function show :: a -> String defined”.

## Algebra is cool

Now let’s get more abstract and consider some type $$A:U$$ - a bunch of objects with some similar properties grouped together. It can be Nat - natural numbers - or String, or Char, or whatever else you may think of. Taken by itself, it’s nothing more than that: a bunch of objects thrown together. We can’t really do anything with them; there is no structure. What if we add some to make things more interesting?

Let us define some binary operation $$\cdot$$ for our type, i.e., a function that takes two members of our type and turns them into a third - $$\cdot : A \rightarrow A \rightarrow A$$. It can be addition for Nat or integers, string concatenation for String, or the operation of “having sex” for people, which tends to produce other people. A type with such a binary operation is called a Magma in algebra. Such an operation adds a certain structure to our type and makes it more interesting to work with, as now we can combine any two elements of our type and get a third one. If the abstraction of a type describes pretty much everything we can think of, the abstraction of a Magma is much more specific - not every type allows the introduction of such an operation - but still general enough to describe both the natural numbers and humanity. If we continue adding structure via additional operations, functions, and laws on them, we gradually get more and more specific.

The art of programming is finding the right level of abstraction for our model so that it is general enough to use concise and powerful abstractions, yet specific enough to be able to correctly describe all the objects in our model and relationships between them. You can code anything in JavaScript or any other language by being very specific - but it tends to be verbose, difficult to understand and maintain, and error-prone. If you use good abstractions in a strongly and statically typed language such as Haskell - your code will be concise, efficient, fast, and easy to maintain.

In type theory, Magma is defined as:
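The definition itself is missing above; as a $$\sum$$-type it reads:

$$Magma \equiv \sum_{A:U} (A \rightarrow A \rightarrow A)$$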

which says exactly what we described above - it’s a type $$A$$ with a binary operation defined on it.

What if we want even more structure and require our $$\cdot$$ operation to be associative, i.e., to make sure that $$x \cdot (y \cdot z) = (x \cdot y) \cdot z$$ for all $$x, y, z$$? We turn our Magma into a Semigroup. It is easy to see that the natural numbers with addition form a Semigroup, as does String with string concatenation. However, people with the operation “having sex” are definitely not a Semigroup - and the equation itself is against the law in most countries. I think we’ve taken this analogy far enough.

Let us also add something called an identity element to our Semigroup, and we get a Monoid - a type $$A$$ with an associative binary operation and an identity element $$e$$. For an identity element to be called that, we require that $$x \cdot e = x$$ as well as $$e \cdot x = x$$. For natural numbers with addition, the identity element is 0; for strings with concatenation, the empty string “”. What is the identity element for natural numbers with multiplication?

Monoids are a cool abstraction: they have an interesting structure, but at the same time they describe a lot of very different types in Haskell - not just all the numerical types and strings, but also lists of any type and some special kinds of computations.

Monoid can be defined as:
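The definition is omitted here; a reconstruction as a $$\sum$$-type, carrying the operation, the identity element, and the laws discussed above:

$$Monoid \equiv \sum_{A:U} \sum_{(\cdot):A \rightarrow A \rightarrow A} \sum_{e:A} \Big(\prod_{x,y,z:A} x \cdot (y \cdot z) = (x \cdot y) \cdot z\Big) \times \Big(\prod_{x:A} (x \cdot e = x) \times (e \cdot x = x)\Big)$$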

which summarizes everything we discussed above. But how do we define and use these cool $$\sum$$-types in Haskell?

In Haskell, we could define Monoid as follows:

In reality, it is defined a bit differently, as typeclasses can extend each other - a very useful feature, since we can follow along our gradual structure build up in the same logic as described in the previous section. I.e., we could define a Magma, then extend it and get a Semigroup, then add an identity element and get a Monoid. As of this writing in the base haskell libraries, Monoid extends Semigroup roughly as follows:
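The listing is not reproduced here; a sketch of roughly how base structures it (shadowing the Prelude's own classes so the sketch compiles on its own; base's real Monoid also carries a legacy mappend method, omitted here), plus a list instance - jumping slightly ahead - so the classes can be exercised:

```haskell
import Prelude hiding (Semigroup(..), Monoid(..))

class Semigroup a where
  (<>) :: a -> a -> a   -- an associative binary operation (the law is not enforceable)

class Semigroup a => Monoid a where
  mempty  :: a          -- the identity element
  mconcat :: [a] -> a   -- fold a list using (<>); default definition provided
  mconcat = foldr (<>) mempty

-- an example instance so the classes can be tried out:
instance Semigroup [a] where
  (<>) = (++)

instance Monoid [a] where
  mempty = []
```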

Here we notice several interesting things. First of all, there is unfortunately no way to put the mathematical Semigroup or Monoid laws into a typeclass definition - Haskell has no way of enforcing or checking them. So typeclasses can only define function type signatures and default implementations for these functions. However, if you make your type an instance of a certain typeclass, it is extremely important to make sure it follows the relevant typeclass laws, which are always mentioned in the typeclass documentation. The reason is that the Haskell compiler (at least GHC) often uses these laws when running various optimizations - so if your type does not respect them, your code may start doing things you do not expect.

Second, the Monoid typeclass in Haskell has the mconcat function added, with a default definition provided. This function folds a list of values of a monoidal type using its (<>) binary operation. The beauty of default definitions is that when you make your type an instance of Semigroup and then Monoid, you only need to define the binary operation (<>) and the identity element mempty. Once you’ve done that, you can safely apply mconcat to lists of your type, and it will work “automagically”. As we will see down the road, a lot of more complex typeclasses provide many more methods with default definitions, which makes a programmer’s life much easier - once you learn to use the typeclasses well. You can also provide your own definition for a default method if you think you can make it more efficient.

The other very powerful feature of typeclasses is that you can define instances for parametric types as well as concrete ones. E.g., you can make sure that a list of any type [a] is a Semigroup by defining the binary operation to be equal to list concatenation:
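The instance declaration itself is missing above; a self-contained sketch (restating the Semigroup class from the previous section, since in plain Haskell we cannot redeclare base's existing list instance):

```haskell
import Prelude hiding (Semigroup(..))

-- the Semigroup class again, so this sketch compiles on its own
class Semigroup a where
  (<>) :: a -> a -> a

-- the instance works for lists of ANY element type a:
instance Semigroup [a] where
  (<>) = (++)
```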

Once you have this definition (and it is part of the base library), you can use the operator (<>) for any lists - e.g., [1,2] <> [4,5] produces [1,2,4,5]. If you design a library or a complex enough project, it really pays to think through the typeclass structure and define as many “automagical” instances as makes sense - this helps you avoid boilerplate and makes the code more readable and maintainable.

Now that we have seen a glimpse of what basic typeclasses can do for us, let’s move on to typeclasses defined for type functions, or parametric types, such as Maybe a or [a]. The first such typeclass, which we “invented” at the end of Chapter 2, is Functor.

# Chapter 5: Functor, Bifunctor, Trifunctor enter an elevator…

In this chapter:

* Functor definition
* 3D vector example
* Our Cards program extended
* Either and a Bifunctor

– Thank you Lambda! - you say enthusiastically as this beautiful hierarchy starts to make sense. - I really like the idea of using the same operators for different types using typeclasses. Now, about that contest…
– Ah, yes, the contest. You’ll have to conjure a card game using your spells and then hopefully win it. You created the deck already! Now you just have to finish the game! How about Blackjack?
– Wow, wow, hold on there, Lambda! I don’t remember nearly enough to finish the game yet! - you start doubting yourself again.
– Don’t you worry, silly! We’ll move slow. Let’s first add the way to track players to our magical code now using a Functor and type functions?

## Functor Typeclass Definition

We will help our mage further extend the card-playing program in this chapter. Before we do, we need to look at some theory in pictures.

Recall how we “invented” a Functor a couple of chapters ago, when we needed a good way to apply functions between two concrete types $$a \rightarrow b$$ to values of List a or Maybe a? To recap: a Functor is merely any type function f :: * -> *, like Maybe a or the list [a], that has an operator <$> defined, which “converts” or lifts functions between two concrete types - g :: a -> b - to functions between values of types constructed from a and b with the help of our type function f - h :: f a -> f b. In Haskell, Functor is also a typeclass, which we will fully define and discuss below. Making your types instances of Functor is a very good way to avoid writing a lot of boilerplate code. If you have some experience with imperative languages such as C++ or Java, it can be a good exercise to try to define a Functor for C++ templates or Java generics - which are as close as those languages get to typeclasses. Once you spend an hour writing <$> for a List and a Maybe and trying to generalize the approach, you will start appreciating the beauty and conciseness of Haskell a great deal.

Since both Maybe a and [a] are instances of Functor, you can write extremely concise code such as (*2) <$> (Just 4) or (++ " hello") <$> ["world", "me"] instead of writing a separate function with nested case analysis each time. That’s one of the reasons you can’t evaluate Haskell programmers’ performance in “lines of code” - if anything, we pride ourselves on having as few lines of code as possible to achieve a goal.

Strictly speaking, a Functor is not only a type function $$f : * \rightarrow *$$ with the “lifting” operator <$> defined - it also has to obey some laws, just like the Monoid we discussed in the previous section. The laws are as follows:

1. (<$>) id == id - the identity function should remain an identity function even when lifted.
2. (<$>) (f • g) == (<$>) f • (<$>) g - function composition must be respected: whether we lift the composed function f•g, or lift f first, then g, and then compose them, the result should be the same.

A formal(ish) type-theoretic definition of our Functor can be given as follows: Here fmap is simply a synonym for the infix operator <$>. As a Haskell typeclass, Functor is defined with a bunch of additional useful functions with default implementations - again, just as we have seen with Monoid:
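The class listing is missing here; a sketch of roughly how base defines it (shadowing the Prelude's names so it compiles on its own; in base, <$> is an infix synonym for fmap defined alongside the class, and the Maybe instance is included for illustration):

```haskell
import Prelude hiding (Functor(..), (<$>))

class Functor f where
  fmap :: (a -> b) -> f a -> f b
  (<$) :: a -> f b -> f a      -- replace all values; default definition provided
  (<$) = fmap . const

-- the lifting operator is just an infix synonym for fmap:
(<$>) :: Functor f => (a -> b) -> f a -> f b
(<$>) = fmap

instance Functor Maybe where
  fmap _ Nothing  = Nothing
  fmap g (Just x) = Just (g x)
```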

## You are Either Functor or a Bifunctor

By now you should have seen that there is nothing scary about a Functor save for the name. In this section, we will look at type functions of several type parameters and will easily invent something that sounds even scarier than a Functor - a Bifunctor - which is again just another useful and natural abstraction.

So far, we have looked at type functions with the type * -> *, such as Maybe a or [a]. However, nothing prevents us from writing type functions with several type parameters - however many we want, just as with “real” functions. The simplest example of such a type is Either:
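The listing is missing here; in the base library the definition is (up to strictness details) simply the following, shown shadowing the Prelude's Either so the sketch stands on its own:

```haskell
import Prelude hiding (Either(..))

-- Left usually carries an error, Right a "regular" result
data Either a b = Left a | Right b
  deriving (Show, Eq)
```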

If we write down Either with its full type, it will be Either :: * -> * -> *, which reads simply as “give me two concrete types and I’ll make a new concrete type out of them”. It has two data constructors and is very similar to Maybe, with the key difference that instead of a bare Nothing as the second data constructor, here we can actually store some info along with it. As such, its most popular use is also for handling errors or failure, just as with Maybe, but with additional information. Consider the safeDivide function we defined before:

In case someone divides by zero using our safeDivide, it won’t break or throw an exception - it will simply return Nothing, so we can continue our safe computations. But what if we want to return some more information in case of errors, or if we want to distinguish between errors? We can use Either a b! For instance:
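The listing did not survive here; a sketch consistent with the description that follows (the exact error message text is an assumption):

```haskell
-- errors travel in Left, "regular" results in Right
verySafeDivide :: Double -> Double -> Either String Double
verySafeDivide _ 0 = Left "Division by zero!"
verySafeDivide x y = Right (x / y)
```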

This function takes two Doubles and returns a value of type Either String Double. From its definition, the usage pattern is pretty obvious: errors are marked with the Left data constructor, “regular” results with Right. Pretty cool, and more useful than Maybe in many more complex cases (of course, we don’t really need Either for safe arithmetic, but in many real-world cases it is very handy).

However, here we immediately run into an issue by asking the same question as in Chapter 2: what if I want to square the result of our verySafeDivide operation? For Maybe, we have a Functor instance, so we can simply write square <$> safeDivide 4 3 and it will work like a charm. But we cannot turn Either a b into a Functor - it has the wrong type for that! Functor expects * -> *, and Either has * -> * -> *. So what do we do? Are we forced to write long and ugly case analysis functions to be able to use Either? Of course not - that is not the Haskell way! We can design another generic function and see if it can be abstracted away into some other typeclass similar to Functor.

What we need in order to apply square to verySafeDivide is a function that takes Double -> Double and makes it work with Either String Double, so that we can write something like eitherLift square (verySafeDivide 5 2) and it would work. Let’s write it: If we give an error (a Left value) to eitherLift, it simply passes the error along; if we give it a proper result (a Right value), we apply the function and wrap the result in another Right value. Clean and easy, and very similar to what we did when we invented lifting for Maybe Double, right?

So how about we generalize it further, by looking at the above and squinting hard, and deciding that what we really want is a function with the following type signature: Just like our maybeMap took a function a -> b and “lifted it” into Maybe by turning it into a function Maybe a -> Maybe b, here we take two functions - a -> c and b -> d - and lift them into Either by turning both of them into one function Either a b -> Either c d. But why stop here? Let’s take it a step further and make such a function not just for Either, but for any type function with type * -> * -> *. This is very similar to what we did when inventing a Functor, only now we have invented a Bifunctor: So, a Bifunctor is a type function with type * -> * -> *, i.e., one that takes two type parameters and creates a new type, with a function bimap defined, which lifts two functions f :: a -> c and g :: b -> d into our Bifunctor by turning them into one function h :: bf a b -> bf c d. Here is a picture to help you visualize the concept: Do you notice the beautiful symmetry? bf takes two types and “lifts” them into a new type; bimap takes two functions and “lifts” them into a new one.

Making Either an instance of Bifunctor is very straightforward: Again, in a couple of paragraphs of text, by solving the practical task of dealing with errors in our computations, we invented something with a very scary name, which is nothing more than a really cool and useful abstraction for type functions with two type parameters. You should now be able to rewrite our pretty specific eitherLift function from above via the general bimap function. Try doing it on your own before reading further. To solve this task, you need to realize that we were lifting the b -> d function (in our case Double -> Double), while the a -> c one (in our case String -> String) was added implicitly in the body of the Left case. Realizing this, we can define eitherLift as follows: Please dwell on these two lines for some time, until you fully understand what is happening here and are fully convinced this definition is equivalent to the one given above. Strictly speaking, the function above will have a more generic type: eitherLift :: (a -> b) -> Either String a -> Either String b, so with its help we will be able to lift not just functions of type Double -> Double, but functions between any two types a -> b. Convince yourself why this is the case.

## PlayerData as a Bifunctor

We defined a Functor instance for our PlayerData a type. However, values of this type store two values - a Deck and a state of type a. We may want to manipulate not just the state, but also the deck in our player data. Isn’t that exactly what Bifunctor allows us to do?
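Collecting the pieces referenced throughout this section into one hedged sketch (the names follow the text; the book's exact listings may differ): the Bifunctor class, the Either instance, and eitherLift expressed through bimap.

```haskell
class Bifunctor bf where
  bimap :: (a -> c) -> (b -> d) -> bf a b -> bf c d

-- making Either an instance is very straightforward:
instance Bifunctor Either where
  bimap f _ (Left x)  = Left  (f x)
  bimap _ g (Right y) = Right (g y)

-- eitherLift via bimap: the implicit String -> String part is simply id
eitherLift :: (a -> b) -> Either String a -> Either String b
eitherLift = bimap id
```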
# Chapter 6: Applicative Application of the Applicants

By now, you should have a basic intuition for how various Haskell abstractions arise. We have concrete types, such as Int and Char; we have “type functions” that create new types from other types, such as Maybe a and List a. In a real-world application, that is where you should always start: design your types. Once you have them, the only thing left is decomposing the problem into smaller pieces and solving it by composing functions together. Just like in the simplified picture from Chapter 1:

All of the abstractions we have looked at, and will still look at, in this book serve just this purpose: they help make our functions compose in a beautiful, concise, and maintainable fashion. In the previous chapter, we “invented” a Functor when we saw a need to apply functions between two concrete types - a -> b - to lifted values, in effect turning them into another function of type f a -> f b. For instance, to apply a function square :: Double -> Double to Just 4.0 :: Maybe Double. Then, we naturally generalized this useful abstraction to a Bifunctor by asking ourselves the question “how do we lift such functions to parametric types that take two type parameters instead of one?”, using Either a b as an example of such a type. These two allowed us to write concise code such as (*2) <$> (Just 4) or (++"hello") <$> ["world", "me"] instead of defining a new clumsy function for each new similar case.

## We need a bigger Lift

But what do we do with “normal” functions that take two parameters? For instance, what if instead of multiplying Just 4 by 2, as written above, we wanted to multiply Just 4 by Just 2, if both arise as results of some safe operation? We would need a function with the type Maybe Double -> Maybe Double -> Maybe Double - and we have no way to construct such a function easily.
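As an illustration, here is the kind of one-off function we would be forced to write by hand for this particular case (an ad-hoc sketch, before any generalization):

```haskell
-- Multiply two Maybe Doubles, succeeding only if both computations succeeded:
multiplyMaybes :: Maybe Double -> Maybe Double -> Maybe Double
multiplyMaybes (Just x) (Just y) = Just (x * y)
multiplyMaybes _        _        = Nothing  -- if either argument failed, so does the result
```

So multiplyMaybes (Just 4) (Just 2) gives Just 8.0, while multiplyMaybes (Just 4) Nothing gives Nothing. The point of what follows is precisely that we should not have to write such a function anew for every operation and every type.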
As another example, what if we have two lists of strings - ["hello", "goodby"] and ["world", "friend"] - and we want to stitch the strings in these lists together using (++) :: String -> String -> String? Here, the type of the function we need to construct is [String] -> [String] -> [String]. You should see how we would want to generalize this using the same approach as with Functor and Bifunctor. We need some operator, similar to <$> from Functor, that takes a function of type a -> b -> c and lifts it into our functorial types by turning it into a function of type f a -> f b -> f c. If you recall the picture for Functor from the previous chapter, this is a very simple extension of it from two types to three:

In Haskell, this is the operation liftA2 :: (a -> b -> c) -> f a -> f b -> f c, which is a part of the Applicative typeclass. However, it turns out that such a simple, almost trivial extension of the Functor actually gives us more power than it might seem at first glance. Before we discuss that, let’s define the liftA2 function for the Maybe type:
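The definition referenced here is not shown in this text; a sketch of it, specialized to Doubles and under an assumed name so it does not clash with the real liftA2 from base, might look like this:

```haskell
-- liftA2 for Maybe, specialized to Doubles:
maybeLift2 :: (Double -> Double -> Double) -> Maybe Double -> Maybe Double -> Maybe Double
maybeLift2 f (Just x) (Just y) = Just (f x y)  -- both computations succeeded
maybeLift2 _ _        _        = Nothing       -- at least one of them failed
```

With it, maybeLift2 (*) (Just 4) (Just 2) evaluates to Just 8.0 without any case analysis at the call site.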

If you wrote it for Doubles, the general variant should be straightforward for you.

Now, what if rather than multiplying Just 4 by Just 2 as in the example above, we have a function inside our Maybe type? As you know, functions are first-class citizens in Haskell, so we may pass them around and return them as results of other functions. So what if we have Just (+2) - a function that adds 2 to any number we pass to it, sitting inside a Maybe - and we want to apply it to Just 2?

Or, looking at List instead of Maybe, what if we take a list of functions such as [(+2), (*2)] and want to apply it to a list of numbers [1,2,3]? These two cases may seem unrelated at first glance, but you should already be able to read type signatures quite well and say that in the first case we have a value of type Maybe (Int -> Int) - i.e., a function Int -> Int lifted into our Maybe type - and we want to apply it to a value of type Maybe Int. In the second case, the list of functions is a value of type List (Int -> Int), written in Haskell as [Int -> Int], and we need to apply all of them to a value of type [Int].

A pattern emerges again: we have functions lifted into a type function, and we need a way to apply them to a value also lifted into the same type function (Maybe in the first case, List in the second). To do this, we need another operator, also a part of the Applicative: (<*>) :: f (a -> b) -> f a -> f b. The signature tells us that we have a function a -> b lifted inside our Functor type function f :: * -> *, and we need to somehow apply it to a value of type f a and get a value of type f b as a result.
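Hand-rolled versions of this operator for our two examples might look as follows (under assumed names, so as not to clash with the real <*> from base):

```haskell
-- Apply a function inside a Maybe to a value inside a Maybe:
maybeApply :: Maybe (a -> b) -> Maybe a -> Maybe b
maybeApply (Just f) (Just x) = Just (f x)
maybeApply _        _        = Nothing

-- For lists: apply every function in the first list to every value in the second.
listApply :: [a -> b] -> [a] -> [b]
listApply fs xs = [ f x | f <- fs, x <- xs ]
```

So maybeApply (Just (+2)) (Just 2) gives Just 4, and listApply [(+2), (*2)] [1,2,3] gives [3,4,5,2,4,6] - each function applied to each number, which is exactly how the real <*> behaves for lists.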

At first glance, what it does is not really connected to our liftA2 function, but it turns out you can express one in terms of the other - which hints that the two abstractions, “lift a function of two parameters into a type function” and “apply a lifted function of one parameter to a lifted value”, are very closely connected.
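A sketch of that connection, using fmap (as <$>) and the real <*>; in base, both liftA2 and <*> are methods of Applicative, and either can be defined in terms of the other:

```haskell
-- liftA2 via <*>: <$> partially applies the two-argument function under
-- the functor, producing a lifted one-argument function, which <*> then
-- applies to the second lifted value.
liftA2' :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2' f x y = f <$> x <*> y

-- The other direction: <*> is just liftA2 of function application ($).
apply' :: Applicative f => f (a -> b) -> f a -> f b
apply' = liftA2' ($)
```

The pattern f <$> x <*> y is the idiomatic way to apply a multi-argument function to lifted values, and it extends naturally: f <$> x <*> y <*> z for three arguments, and so on.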

## Action! Apply! Cut!

If you have read the docs on Applicative in the Haskell base library, you probably noticed there is a lot of talk about actions and computations, and that Applicative somehow helps to sequence them efficiently. To see what all the fuss is about, let us invent another interesting type function of one parameter f :: * -> *, as you are probably tired of Maybe and List by this point.

As we recalled in the beginning of the chapter, the functional way to solve a problem is to compose different functions together, making sure the types “fit”. So what if we want to record some information as we are executing our functions and then read it at the end of our program? Haskell is a pure language, and even though we can work with mutable variables, they should be our last resort, for many reasons we will discuss later on. But if we do not use mutable state - how do we record this information?! Our functions are pure mathematical functions: they take one value and give another value, and there are no side effects. Well, this hints that we need to somehow pass our information along the whole chain of our functions. Let us count the number of functions applied in our program as the simplest possible example.

So, we have a bunch of functions in our program of types a -> b, b -> c, c -> d, etc., that compose quite well. We need some way to amend our types to also pass the counter along. We can do it with a tuple! E.g., if we have a function String -> Int, and we apply it to "hello" and get 5 as a result, we need it to turn into a function that receives (counter, "hello") and gives us (counter+1, 5) as a result. This means we can define our new type function as:

Now, if we have some way to turn our a -> b functions into Countable a -> Countable b functions, we should be able to do what we want. What operator do we have that takes a -> b and turns it into f a -> f b? Of course, this is simply fmap from the Functor! Let’s make our type an instance of it:
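Putting the two listings the text refers to together, here is one possible sketch; the concrete representation of Countable is an assumption, and note the caveat in the comments:

```haskell
-- One possible shape for the type described above:
-- a value of type a paired with a counter.
newtype Countable a = Countable (Int, a) deriving (Show, Eq)

-- The lifting the text asks for: apply the function and bump the counter,
-- exactly as in the (counter, "hello") -> (counter+1, 5) example.
-- Caveat: with the increment this is not a lawful Functor (fmap id changes
-- the counter), which already hints that plain fmap is not quite the right
-- tool for threading extra information along a computation.
instance Functor Countable where
  fmap f (Countable (n, a)) = Countable (n + 1, f a)
```

With this instance, fmap length (Countable (0, "hello")) yields Countable (1, 5): the counter recorded that one function has been applied.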