This eBook will be updated occasionally so please periodically check the leanpub.com web page for this book for updates.

If you found a copy of this book on the web and find it of value then please consider buying a copy at leanpub.com/haskell-cookbook to support the author and fund work for future updates.

## Preface

This is the preface to the new second edition released summer of 2019.

It took me over a year of learning Haskell before I became comfortable with the language because I tried to learn too much at once. There are two aspects to Haskell development: writing pure functional code and writing impure code that needs to maintain state and generally deal with the world non-deterministically. I usually find writing pure functional Haskell code to be easy and a lot of fun. Writing impure code is sometimes a different story. This is why I am taking a different approach to teaching you to program in Haskell: we begin with techniques for writing concise, efficient pure Haskell code that is easy to read and understand. I will then show you patterns for writing impure code to deal with file IO, network IO, database access, and web access. You will see that the impure code tends to be (hopefully!) a small part of your application and is isolated in the impure main program and in a few impure helper functions used by the main program. Finally, we will look at a few larger Haskell programs.

### Additional Material in the Second Edition

In addition to updating the introduction to Haskell and tutorial material, I have added a few larger projects to the second edition.

The project knowledge_graph_creator helps to automate the process of creating Knowledge Graphs from raw text input. It generates data for the Neo4J open source graph database as well as RDF data for use in semantic web and linked data applications.

The project HybridHaskellPythonNlp is a hybrid project: a Python web service that provides access to the SpaCy natural language processing (NLP) library and select NLP deep learning models and a Haskell client for accessing this service. It sometimes makes sense to develop polyglot applications (i.e., applications written in multiple programming languages) to take advantage of language specific libraries and frameworks. We will also use a similar hybrid example HybridHaskellPythonCorefAnaphoraResolution that uses another deep learning model to replace pronouns in text with the original nouns that the pronouns refer to. This is a common processing step for systems that extract information from text.

I spent time writing this book to help you, dear reader. I release this book under the Creative Commons “share and share alike, no modifications, no commercial reuse” license and set the minimum purchase price to $6.00 in order to reach the most readers. You can also read this (and all of my books) for free on my website. Under this license you can share a PDF version of this book with your friends and coworkers. If you would like to support my work please consider purchasing my books on Leanpub and starring my git repositories that you find useful on GitHub. You can also interact with me on social media on Mastodon and Twitter. I enjoy writing and your support helps me write new editions and updates for my books and to develop new book projects. Thank you!

### Structure of the Book

The first section of this book contains two chapters:

• A tutorial on pure Haskell development: no side effects.
• A tutorial on impure Haskell development: dealing with the world (I/O, network access, database access, etc.). This includes examples of file IO and network programming as well as writing short applications: a mixture of pure and impure Haskell code.

After working through these tutorial chapters you will understand enough of Haskell development to understand the cookbook examples in the second section and to modify them for your own use. Some of the general topics will be covered again in the second book section, which contains longer sample applications. For example, you will learn the basics of interacting with Sqlite and Postgres databases in the tutorial on impure Haskell code, but you will see a much longer example later in the book when I provide code that implements a natural language processing (NLP) interface to relational databases.
The second section contains the following recipes implemented as complete programs:

• Text processing CSV files
• Text processing JSON files
• Natural Language Processing (NLP) interface to relational databases, including annotating English text with Wikipedia/DBPedia URIs for entities in the original text. Entities can be people, places, organizations, etc.
• Accessing and using Linked Data
• Querying Semantic Web RDF data sources
• Web scraping data on web sites
• Using Sqlite and Postgres relational databases
• Playing a simple version of the Blackjack card game

A new third section (added in 2019 for the second edition) has three examples that were derived from my own work.

### Code Examples

The code examples in this book are licensed under two software licenses and you can choose the license that works best for your needs: Apache 2 and GPL 3. To be clear, you can use the examples in commercial projects under the Apache 2 license, and if you like to write Free (Libre) software then use the GPL 3 license. We will use stack as a build system for all code examples. The code examples are provided as 22 separate stack based projects. These examples are found on GitHub.

### Functional Programming Requires a Different Mind Set

You will learn to look at problems differently when you write functional programs. We will use a bottom up approach in most of the examples in this book. I like to start by thinking about the problem domain and deciding how to represent the data required for the problem at hand. I prefer to use native data structures. This is the opposite of the object oriented approach, where considerable analysis and coding effort is required to define class hierarchies to represent data. In most of the code we use simple native data types like lists and maps. Once we decide how to represent data for a program, we then start designing and implementing simple functions to operate on and transform that data.
If we find ourselves writing functions that are too long or too complex, we can break the code up into simpler functions. Haskell has good language support for composing simple functions into more complex operations. I have spent many years engaged in object oriented programming, starting with CLOS for Common Lisp, then C++, Java, and Ruby. I now believe that in general (and I know it is sometimes a bad idea to generalize too much) functional programming is a superior paradigm to object oriented programming. Convincing you of this belief is one of my goals in writing this book!

### eBooks Are Living Documents

I wrote printed books for publishers like Springer-Verlag, McGraw-Hill, and Morgan Kaufmann before I started self-publishing my own books. I prefer eBooks because I can update already published books and their code examples. I encourage you to periodically check for free updates to both this book and the code examples on the leanpub.com web page for this book.

### Setting Up Your Development Environment

I strongly recommend that you use the stack tool from the stack website, which has instructions for installing stack on OS X, Windows, and Linux. If you don’t have stack installed yet please do so now and follow the “getting started” instructions for creating a small project. Appendix A contains material to help get you set up. It is important for you to learn the basics of using stack before jumping into this book because I have set up all of the example programs using stack. The github repository for the examples in this book is located at github.com/mark-watson/haskell_tutorial_cookbook_examples. Many of the example listings in this book are partial or full listings of files in my github repository. I show the file name, the listing, and the output.
To experiment with an example yourself you need to load it and execute the main function; for example, if the example file is TestSqLite1.hs in the sub-directory Database, then from the top level directory in the git repository for the book examples you would do the following: If you don’t want to run the example in a REPL in order to experiment with it interactively, you can instead just run it via stack. I include README.md files in the project directories with specific instructions.

I now use VSCode for most of my Haskell development. With the Haskell plugins VSCode offers auto-completion while typing and highlights syntax errors. Previously I used other editors for Haskell development. If you are an Emacs user I recommend that you follow the instructions in Appendix A, load the tutorial files into an Emacs buffer, build an example, and open a REPL frame. If one is not already open type control-c control-l, switch to the REPL frame, and run the main function. When you make changes to the tutorial files, doing another control-c control-l will re-build the example in less than a second. In addition to using Emacs I occasionally use the IntelliJ Community Edition (free) IDE with the Haskell plugin, the TextMate editor (OS X only) with the Haskell plugin, or the GNU GEdit editor (Linux only). Appendix A also shows you how to set up the stack Haskell build tool. Whether you use Emacs/VSCode or run a REPL in a terminal window (command window if you are using Windows), the important thing is to get used to and enjoy the interactive style of development that Haskell provides.

### Why Haskell?

I have been using Lisp programming languages professionally since 1982. Lisp languages are flexible and appropriate for many problems. Some might disagree with me but I find that Haskell has most of the advantages of Lisp with the added benefit of being strongly typed. Both Lisp and Haskell support a style of development using an interactive shell (or “repl”).
What does being a strongly typed language mean? In a practical sense it means that you will often encounter syntax errors caused by type mismatches that you must fix before your code will compile (or run in the GHCi shell interpreter). Once your code compiles it will likely work, barring logic errors. The other benefit is having to write fewer unit tests - at least that is my experience. So, using a strongly typed language is a tradeoff. When I don’t use Haskell I tend to use dynamic languages like Common Lisp or Python.

### Enjoy Yourself

I have worked hard to make learning Haskell as easy as possible for you. If you are new to the Haskell programming language then I have something to ask of you, dear reader: please don’t rush through this book; rather, take it slow and take time to experiment with the programming examples that most interest you.

### Acknowledgements

I would like to thank my wife Carol Watson for editing the manuscript for this book. I would like to thank Roy Marantz, Michel Benard, and Daniel Kroni for reporting errors.

## Section 1 - Tutorial

The first section of this book contains two chapters:

• A tutorial on pure Haskell development: no side effects.
• A tutorial on impure Haskell development: dealing with the world (I/O, network access, database access, etc.)

After working through these two tutorial chapters you will have sufficient knowledge of Haskell development to understand the cookbook examples in the second section and be able to modify them for your own use. Some of the general topics will be covered again in the second book section that contains longer example programs.

## Tutorial on Pure Haskell Programming

Pure Haskell code has no side effects and, if written properly, is easy to read and understand. I am assuming that you have installed stack using the directions in Appendix A.
It is important to keep a Haskell interactive repl open as you read the material in this book so you can experiment with the code examples as you read. I don’t believe that you will be able to learn the material in this chapter unless you work along, trying the examples and experimenting with them in an open Haskell repl! The directory Pure in the git repository contains the examples for this chapter. Many of the examples contain a small bit of impure code in a main function. We will cover how this impure code works in the next chapter, but let’s look at a short example of impure code that is contained inside a main function:

The function main is the entry point of this short two line program. When the program is run, the main function will be executed. Here the function main uses the do notation to execute a single IO action, but do can also execute a sequence of actions. The putStrLn function prints a string to the console. The printed string is constructed by concatenating two parts: “1 + 2 = ” and the result of the expression 1 + 2 (which is 3) converted to its string representation by calling the function show. It is worth noting that putStrLn writes a string to standard output followed by a newline character. In general, the function show is used to convert any value to a string; here it converts the result of 1 + 2 to a string so it can be concatenated with the preceding string.

Pure Haskell code performs no I/O, network access, access to shared in-memory data structures, etc. The first time you build an example program with stack it may take a while since library dependencies need to be loaded from the web. In each example directory, after an initial stack build or stack ghci (to run the repl), you should not notice this delay again.

### Interactive GHCi Shell

The interactive shell (often called a “repl”) is very useful for learning Haskell: understanding types and the values of expressions.
While simple expressions can be typed directly into the GHCi shell, it is usually better to use an external text editor and load Haskell source files into the shell (repl). Let’s get started. Assuming that you have installed stack as described in Appendix A, please try: If you are working in a repl and edit a file you just loaded with :l, you can then reload the last file loaded using :r without specifying the file name. This makes it quick and easy to edit a Haskell file with an external editor like Emacs or Vi and reload it in the repl after saving changes to the current file.

Here we have evaluated a simple expression “1 + 2” in line 10. Notice that in line 12 we can always place parentheses around an expression without changing its value. We will use parentheses when we need to change the default order of precedence of functions and operators and to make the code more readable. In line 14 we are using the GHCi :t command to show the type of the expression (1 + 2). The type Num is a type class (i.e., a more general purpose type that other types can inherit from) that contains several sub-types of numbers. As examples, two subtypes of Num are Fractional (e.g., 3.5) and Integer (e.g., 123). Type classes provide a form of function overloading since existing functions can be redefined to handle arguments that are instances of new classes.

In line 16 we are using the GHCi command :l to load the external file Simple.hs. This file contains a function called main so we can execute main after loading the file. Line 1 defines a module named Main; the rest of the file is the definition of the module. This form of the module declaration exports all symbols, so other code loading this module has access to sum2 and main. If we only wanted to export main then we could use: The function sum2 takes two arguments and adds them together. I didn’t declare the type of this function so Haskell determines it for us using type inference.
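Since the listing for Simple.hs is not reproduced here, the following is a plausible sketch consistent with the description above (the exact code in the repository may differ slightly):

```haskell
module Main where

-- sum2 is a pure function; no type signature is given,
-- so the compiler infers one using type inference
sum2 a b = a + b

main :: IO ()
main = do
  putStrLn ("1 + 2 = " ++ show (sum2 1 2))
```

To export only main, the first line would instead read module Main (main) where.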
What if you want to build a standalone executable program from the example in Simple.hs? Here is an example: Most of the time we will use simple types built into Haskell: characters, strings, lists, and tuples. The type Char is a single character. One type of string is a list of characters, [Char]. (Another type, ByteString, will be covered in later chapters.) Every element in a list must have the same type. A tuple is like a list but its elements can be of different types. Here is a quick introduction to these types, with many more examples later:

The GHCi repl command :t tells us the type of any expression or function. Much of your time developing Haskell will be spent with an open repl and you will find yourself checking types many times during a development session. In line 1 you see that the type of 's' is 's' :: Char and in line 3 that the type of the string “tree” is [Char], which is a list of characters. The abbreviation String is defined for [Char]; you can use either. In line 9 we see the “cons” operator : used to prepend a character to a list of characters. The cons operator : works with lists of any element type. All elements in a list must be of the same type. The type of the list of numbers [1,2,3,4] in line 11 is [1,2,3,4] :: Num t => [t]. The type Num is a general number type. The expression Num t => [t] is read as: “t is a type variable constrained to be a Num, and the type of the list is [t], a list of Num values”. It bears repeating: all elements in a list must be of the same type. The functions head and tail used in lines 19 and 21 return the first element of a list and the list without its first element, respectively. You will use lists frequently, but the restriction that all list elements have the same type can be too limiting, so Haskell also provides a type of sequence called a tuple whose elements can be of different types, as in the examples in lines 25-31.
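The list and tuple basics described above can be tried directly in a repl; here is a small runnable sketch of the same operations:

```haskell
main :: IO ()
main = do
  print ('t' : "ree")           -- the cons operator : prepends a Char to a [Char]
  print (head [1, 2, 3, 4])     -- first element of a list
  print (tail [1, 2, 3, 4])     -- the list without its first element
  print (1 :: Int, "cat", 'z')  -- a tuple may mix element types
```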
Tuples of length 2 are special because the functions fst and snd are provided to access the first and second values of a pair: Please note that fst and snd do not work with tuples that are not of length 2. Also note that if you use the function length on a tuple, the result is always one because of the way tuples are defined as Foldable types, which we will use later. Haskell provides a concise notation, called destructuring, to get values out of long tuples: Here, we defined a tuple geoData with values: index, street address, zip code, and temperature. In line two we extract the zip code and temperature. Another reminder: we use let in lines 1-2 because we are in a repl.

Like all programming languages, Haskell has operator precedence rules, as these examples show: The examples in lines 1-4 illustrate that the multiplication operator has a higher precedence than the addition operator. Note that the function length starts with a lower case letter. All Haskell functions start with a lower case letter except for type constructor functions, which we will get to later. A Foldable type can be iterated through and processed with map functions (which we will use shortly). We saw that the function + acts as an infix operator. We can convert infix operators to prefix functions by enclosing them in parentheses: In this last example we also saw how a prefix function like div can be used infix by enclosing it in back tick characters.

Usually we define functions in files and load them as we need them. Here is the contents of the file myfunc1.hs: The first line is a type signature for the function and is not required; here the input arguments are two lists and the output is the two lists concatenated together. In line 1 note that a is a type variable that can represent any type; however, all elements in the two input lists and the output list are constrained to be of the same type. Please note that the stack repl auto-completes using the tab character.
For example, when I was typing “:l myfunc1.hs” I actually just typed “:l myf” and then hit the tab character to complete the file name. Experiment with auto-completion; it will save you a lot of typing. In the following example, for instance, after defining the variable sentence I can just type “se” and the tab character to auto-complete the entire variable name: The function head returns the first element in a list and the function tail returns all but the first element of a list:

We can create new functions from existing functions by supplying fewer arguments than a function expects, a process known as partial application (made possible because Haskell functions are “curried”): In this last example the function + takes two arguments, but if we only supply one argument a function is returned as the value: in this case a function that adds 1 to an input value. We can also create new functions by composing existing functions using the infix function . which, when placed between two function names, produces a new function that combines the two. Let’s look at an example that uses . to combine the partial function (+ 1) with the function length: Note the order of the arguments to the infix function .: the function on the right side is applied first, then the function on the left side of the . is applied. This is the second example where we have seen the type Foldable, which means that a type can be mapped over, or iterated over. We will look at Haskell types in the next section.

### Introduction to Haskell Types

This is a good time to spend more time studying Haskell types. We will see more material on Haskell types throughout this book, so this is just an introduction using the data expression to define a type MyColors in the file MyColors.hs: This code defines a new data type in Haskell named MyColors that has five values: Orange, Red, Blue, Green, or Silver. The keyword data is used to define a new data type, and the “|” symbol is used to separate the different possible values (also known as constructors) of the type.
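A sketch of the MyColors definition described here (the repository file presumably uses the module name MyColors; this sketch uses Main so it runs standalone):

```haskell
module Main where

-- an enumeration type with five constructors and a derived Show instance
data MyColors = Orange | Red | Blue | Green | Silver deriving (Show)

main :: IO ()
main = putStrLn (show Red)  -- the derived show prints the constructor name
```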
The deriving (Show) clause at the end of the line tells the compiler to automatically generate an implementation of the Show type class for the MyColors type. In other words, we are asking the Haskell compiler to automatically generate a function show that can convert a value of this type to a string. show is a standard function and in general we want it defined for all types. The MyColors type defined here is an enumeration (i.e., it is a fixed set of values); it is an algebraic data type with no associated fields. This means that a value of type MyColors can only take one of the five values defined: Orange, Red, Blue, Green, or Silver. There is another way to think about this: this code defines a new data type called MyColors with five constructors Orange, Red, Blue, Green, and Silver.

What went wrong here? The infix function == checks for equality and we did not derive an equality function for our new type. Let’s fix the definition in the file colors.hs: Because we are deriving Eq we are also asking the compiler to generate code to check whether two values of this type are equal. If we wanted to be able to order our colors then we would also derive Ord. Now our new type has show, ==, and /= (inequality) defined: Let’s also now derive Ord to have the compiler generate a default function compare that operates on the type MyColors: Because we are now deriving Ord the compiler will generate functions to calculate the relative ordering of values of type MyColors. Let’s experiment with this: Notice that the compiler generates a compare function for the type MyColors that orders values by the order in which they appear in the data expression. What if you wanted to order them in string sort order?
This is very simple: we will remove Ord from the deriving clause and define our own function compare for the type MyColors instead of letting the compiler generate it for us: In line 5 I am using the function show (which the compiler wrote for us because we derived Show) to convert values of MyColors to strings; the version of compare that is then called is the standard compare for the type String. Now the ordering is in ascending string sort order:

Our new type MyColors is a simple type. Haskell also supports hierarchies of types called type classes, and the type we saw earlier, Foldable, is an example of a type class that other types can inherit from. For now, consider sub-types of Foldable to be collections like lists and trees that can be iterated over. I want you to get in the habit of using :type and :info (usually abbreviated to :t and :i) in the GHCi repl. Stop reading for a minute now and type :info Ord in an open repl. You will get a lot of output showing you all of the types that Ord is defined for. Here is a small bit of what gets printed: Lines 1 through 8 show you that Ord is a subtype of Eq that defines the functions compare, max, and min as well as the four operators <, <=, >, and >=. When we customized the compare function for the type MyColors, we only implemented compare. That is all that we needed to do since the other operators rely on the implementation of compare. Once again, I ask you to experiment with the example type MyColors in an open GHCi repl:

The following diagram shows a partial type hierarchy of a few types included in the standard Haskell Prelude (this is derived from the Haskell Report at haskell.org): Here you see that the types Num and Ord are sub-types of type Eq, Real is a sub-type of Num, etc. We will see the types Monad and Functor in the next chapter.

### Functions Are Pure

Again, it is worth pointing out that Haskell functions do not modify their input values.
The common pattern is to pass immutable values to a function and have modified values returned. As a first example of this pattern we will look at the standard function map that takes two arguments: a function that converts a value of any type a to another type b, and a list of type a. (Functions that take other functions as arguments are called higher order functions.) The result is another list of the same length whose elements are of type b, calculated by applying the function passed as the first argument. Let’s look at a simple example using the function (+ 1) that adds 1 to a value: In the first example, types a and b are the same, a Num. The second example uses a composed function that adds 1 and then converts the result to a string. Remember: the function show converts a Haskell data value to a string. In this second example types a and b are different because the function maps a number to a string.

The directory haskell_tutorial_cookbook_examples/Pure contains the examples for this chapter. We previously used the example file Simple.hs. Please note that in the rest of this book I will omit the git repository top level directory name haskell_tutorial_cookbook_examples and just specify the sub-directory name: For now let’s just look at the mechanics of executing this file without using the REPL (started with stack ghci). We can simply build and run this example using stack, which is covered in some detail in Appendix A: This command builds the project defined in the configuration files Pure.cabal and stack.yaml (the format and use of these files is briefly covered in Appendix A and there is more reference material online). This example defines two functions: sum2 and main. sum2 is a pure Haskell function with no state, no interaction with the outside world like file IO, etc., and no non-determinism. main is an impure function, and we will look at impure Haskell code in some detail in the next chapter.
As you might guess, the output of this code snippet is: To continue the tutorial on using pure Haskell functions, once again we will use stack to start an interactive repl during development: In this last listing I don’t show the information about your Haskell environment and the packages that were loaded. In repl listings in the remainder of this book I will continue to edit out this Haskell environment information for brevity. Line 4 shows the use of the repl shortcut :t to print out the type of a string, which is a list of characters [Char]. The function main has the type of an IO action, which we will explain in the next chapter: an IO action contains impure code that can read and write files, perform network operations, etc.

### Using Parentheses or the Special $ Character and Operator Precedence

We will look at operator and function precedence and the use of the $ character to reduce the need for parentheses in expressions. By the way, in Haskell there is not much difference between operators and function calls, except that operators like + are infix by default while functions are usually prefix. So, except for functions made infix by enclosing them in backticks (e.g., ``10 `div` 3``), Haskell usually uses prefix function application: a function followed by zero or more arguments. You can also use $, which acts as an opening parenthesis with an implicit closing parenthesis at the end of the current expression (which may span multiple lines). Here are some examples:
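For instance, both lines in this hedged sketch print the same value; the second uses $ in place of nested parentheses:

```haskell
main :: IO ()
main = do
  print (sum (map (+ 1) [1, 2, 3]))  -- nested parentheses
  print $ sum $ map (+ 1) [1, 2, 3]  -- each $ opens a parenthesis that
                                     -- implicitly closes at the end of the expression
```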

I use the GHCi command :info (abbreviated :i) to check both operator precedence and the function signature when an operator is converted to a function by enclosing it in parentheses:

Notice how + has lower precedence than *.

Just to be clear, understand how operators are used as functions and also how functions can be used as infix operators:
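A small sketch of both directions:

```haskell
main :: IO ()
main = do
  print ((+) 1 2)     -- the infix operator + used as a prefix function
  print (div 10 3)    -- div used as an ordinary prefix function
  print (10 `div` 3)  -- backticks let div be used as an infix operator
```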

Especially when you are just starting to use Haskell it is a good idea to also use :info to check the type signatures of standard functions that you use. For example:

### Lazy Evaluation

Haskell is referred to as a lazy language because expressions are not evaluated until they are used. Consider the following example:

In line 2 we are creating a list with 11 elements. In line 4 we are doing two things:

• Creating an infinitely long list containing ascending integers starting with 0.
• Fetching the first 11 elements of this infinitely long list.

It is important to understand that in line 4 only the first 11 elements are generated because that is all the take function requires.

In line 6 we are assigning another infinitely long list to the variable xs, but the value of xs is unevaluated and a placeholder is stored to calculate values as required. In line 7 we use GHCi’s :sprint command to show a value without evaluating it. The output _ in line 8 indicates that the expression has not yet been evaluated.

Lines 9 through 12 remind us that Haskell is a functional language: the take function used in line 9 does not change the value of its argument so xs as seen in lines 10 and 12 is still unevaluated.
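The :sprint command only exists inside GHCi, but the laziness itself can be demonstrated in a standalone sketch:

```haskell
main :: IO ()
main = do
  let xs = [0 ..] :: [Integer]  -- an infinite list; nothing is evaluated yet
  print (take 11 xs)            -- only the first 11 elements are ever computed
  print (take 3 xs)             -- take does not modify xs
```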

### Understanding List Comprehensions

Effectively using list comprehensions makes your code shorter, easier to understand, and easier to maintain. Let’s start out with a few GHCi repl examples. You will learn a new GHCi repl trick in this section: entering multiple line expressions by using :{ and :} to delay evaluation until an entire expression is entered in the repl (listings in this section are reformatted to fit the page width):

The list comprehension on line 1 assigns the elements of the list ["cat", "dog", "bird"] one at a time to the variable x and then collects all these values of x into a list that is the value of the list comprehension. The list comprehension in line 1 is hopefully easy to understand, but when we bind and collect multiple variables, as in the example in lines 4 and 5, the situation is not as easy to follow. The thing to remember is that the first variable is iterated as an “outer loop” and the second variable is iterated as the “inner loop.” List comprehensions can use many variables and the iteration ordering rule is the same: the last variable iterates fastest, etc.
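The two comprehensions discussed above might look like this as a runnable sketch:

```haskell
main :: IO ()
main = do
  -- one variable: collect each x into the result list
  print [x | x <- ["cat", "dog", "bird"]]
  -- two variables: x is the outer loop, y is the inner loop
  print [(x, y) | x <- [0 .. 3], y <- [1, 3 .. 10]]
```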

In this last example we are generating all combinations of [0..3] and [1,3..10] and storing the combinations as two element tuples. You could also store them as lists:

List comprehensions can also contain filtering operations. Here is an example with one filter:

Here is a similar example with two filters (we are also filtering out all possible values of x that start with the character ‘d’):
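These filtered comprehensions might be sketched as follows (the data here is hypothetical, chosen to match the description):

```haskell
main :: IO ()
main = do
  -- one filter: keep only the even numbers
  print [x | x <- [0 .. 10], even x]
  -- two filters: keep even y values and drop x values starting with 'd'
  print [(x, y) | x <- ["cat", "dog", "bird"], y <- [1 .. 4],
                  even y, head x /= 'd']
```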

For simple filtering cases I usually use the filter function but list comprehensions are more versatile. List comprehensions are extremely useful - I use them frequently.

Lists are instances of the class Monad that we will cover in the next chapter (check out the section “List Comprehensions Using the do Notation”).

List comprehensions are powerful. I would like to end this section with another trick that does not use list comprehensions for building lists of tuple values: using the zip function:
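A minimal sketch of the zip trick; because zip stops at the end of the shorter list, the index list [0..] can be lazy and infinite:

```haskell
module Main where

animals :: [String]
animals = ["cat", "dog", "bird"]

-- Pair each element with its index
indexed :: [(Int, String)]
indexed = zip [0..] animals

main :: IO ()
main = print indexed  -- [(0,"cat"),(1,"dog"),(2,"bird")]
```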

The function zip is often used in this way when we have a list of objects and we want to operate on the list while knowing the index of each element.

### Haskell Rules for Indenting Code

When a line of code is indented relative to the previous line of code, or several lines of code with additional indentation, then the indented lines act as if they were on the previous line. In other words, if code that should all be on one line must be split to multiple lines, then use indentation as a signal to the Haskell compiler.

Indentation of continuation lines should be uniform, starting in the same column. Here are some examples of good code, and code that will not compile:

If you use C style braces and semicolons to mark end of expressions, then indenting does not matter as seen in lines 20 through 24. Otherwise, uniform indentation is a hint to the compiler.

The same indentation rules apply to other kinds of expressions that we will see throughout this book, such as do, if, and case expressions.

### Understanding let and where

At first glance, let and where seem very similar in that they allow us to create temporary variables used inside functions. As the examples in the file LetAndWhere.hs show, there are important differences.

In the following code notice that when we use let in pure code inside a function, we then use in to indicate the start of an expression to be evaluated that uses any variables defined in the let expression. Inside a do code block the in token is not needed and will cause a parse error if you use it. do code blocks are syntactic sugar for use in impure Haskell code and we will use them frequently later in the book.

You also do not use in inside a list comprehension as seen in the function testLetComprehension in the next code listing:

Compare the let expressions starting on lines 4 and 24. The first let occurs in pure code and uses in to introduce one or more expressions using values bound in the let. In line 24 we are inside a monad, specifically using the do notation, and here let is used to define pure values that can be used later in the do expression.

Loading the last code example and running the main function produces the following output:

This output is self explanatory except for line 7, which is the result of calling testLetComprehension that returns an example list comprehension [(a,b) | a <- [0..5], let b = 10*a].

### Conditional do Expressions and Anonymous Functions

The examples in the next three sub-sections can be found in haskell_tutorial_cookbook_examples/Pure/Conditionals.hs. You should read the following sub-sections with this file loaded (some GHCi repl output removed for brevity):

#### Simple Pattern Matching

We previously used the built-in functions head that returns the first element of a list and tail that returns a list with the first element removed. We will define these functions ourselves using what is called wild card pattern matching. It is common to append a single quote character to the name of a built-in function when we redefine it, so we name our new functions head’ and tail’. Remember when we used destructuring to access elements of a tuple? Wild card pattern matching is similar:

The underscore character _ matches anything and ignores the matched value. Our head and tail definitions work as expected:
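A sketch of what head’ and tail’ might look like (the book’s listing may differ, for example in how the empty list case is handled):

```haskell
module Main where

-- head' returns the first element; the _ pattern ignores the rest
head' :: [a] -> a
head' (x:_) = x
head' []    = error "head': empty list"

-- tail' returns the list after the first element; _ ignores the head
tail' :: [a] -> [a]
tail' (_:xs) = xs
tail' []     = error "tail': empty list"

main :: IO ()
main = do
  print (head' [1, 2, 3 :: Int])  -- 1
  print (tail' [1, 2, 3 :: Int])  -- [2,3]
```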

Of course we frequently do not want to ignore matched values. Here is a contrived example that expects a list of numbers and doubles the value of each element. As for all of the examples in this chapter, the following function is pure: it cannot modify its argument(s) and always returns the same value given the same input argument(s):
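A sketch of a doubleList function: the first defining line (the empty list pattern) terminates the recursion, and the second line doubles the head and recurses on the tail:

```haskell
module Main where

doubleList :: Num a => [a] -> [a]
doubleList []     = []                      -- terminating case
doubleList (x:xs) = 2 * x : doubleList xs   -- double head, recurse

main :: IO ()
main = print (doubleList [1, 2, 3 :: Int])  -- [2,4,6]
```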

In line 1 we start by defining a pattern to match the empty list. It is necessary to define this terminating condition because we are using recursion in line 2 and eventually we reach the end of the input list and make the recursive call doubleList []. If you leave out line 1 you then will see a runtime error like “Non-exhaustive patterns in function doubleList.” As a Haskell beginner you probably hate Haskell error messages and as you start to write your own functions in source files and load them into a GHCi repl or compile them, you will initially probably hate compilation error messages also. I ask you to take on faith a bit of advice: Haskell error messages and warnings will end up saving you a lot of effort getting your code to work properly. Try to develop the attitude “Great! The Haskell compiler is helping me!” when you see runtime errors and compiler errors.

In line 2 notice how I didn’t need to use extra parentheses because of the operator and function application precedence rules.

This function doubleList seems very unsatisfactory because it is so specific. What if we wanted to triple or quadruple the elements of a list? Do we want to write two new functions? You might think of adding an argument that is the multiplier like this:

This version is better, being more abstract and more general purpose. However, we will do much better.

Before generalizing the list manipulation process further, I would like to make a comment on coding style, specifically on not using unneeded parentheses. In the last example defining bumpList, if you have superfluous parentheses like this:

then the code still works correctly and is fairly readable. I would like you to get in the habit of avoiding extra unneeded parentheses and one tool for doing this is running hlint (installing hlint is covered in Appendix A) on your Haskell code. Running hlint on a source file will produce warnings/suggestions like this:

hlint is not only a tool for improving your code but also for teaching you how to better program using Haskell. Please note that hlint provides other suggestions for Conditionals.hs that I am ignoring that mostly suggest that I replace our mapping operations with using the built-in map function and use functional composition. The sample code is specifically to show examples of pattern matching and is not as concise as it could be.

Are you satisfied with the generality of the function bumpList? I hope that you are not! We should write a function that will apply an arbitrary function to each element of a list. We will call this function map’ to avoid confusing our map’ function with the built-in function map.

The following is a simple implementation of a map function (we will see Haskell’s standard map functions in the next section):
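One possible two-line implementation sketch (the book names the function map’, written map' in ASCII):

```haskell
module Main where

-- map' applies f to each element; f x needs no parentheses because
-- function application binds tighter than the (:) operator
map' :: (a -> b) -> [a] -> [b]
map' _ []     = []
map' f (x:xs) = f x : map' f xs

main :: IO ()
main = do
  print (map' (* 7) [0..5])                -- [0,7,14,21,28,35]
  print (map' (\x -> (x + 1) * 2) [0..5])  -- [2,4,6,8,10,12]
```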

In line 2 we do not need parentheses around f x because function application has a higher precedence than the operator : which adds an element to the beginning of a list.

Are you pleased with how concise this definition of a map function is? Is concise code like map’ readable to you? Speaking as someone who has written hundreds of thousands of lines of Java code for customers, let me tell you that I love the conciseness and readability of Haskell! I appreciate the Java ecosystem with its many useful libraries and frameworks, augmented by fine languages like Clojure and JRuby, but in my opinion Haskell is a more enjoyable and generally more productive language and programming environment.

Let’s experiment with our map’ function:

Lines 1 and 3 should be understandable to you: we are creating a partial function like (* 7) and passing it to map’ to apply to the list [0..5].

The syntax for the function in line 5 is called an anonymous function. Lisp programmers, like myself, refer to this as a lambda expression. In any case, I often prefer using anonymous functions when a function will not be used elsewhere. In line 5 the argument to the anonymous inline function is x and the body of the function is (x + 1) * 2.

I do ask you to not get carried away with using too many anonymous inline functions because they can make code a little less readable. When we put our code in modules, by default every symbol (like function names) in a module is externally visible. However, if we explicitly export symbols in a module declaration then only the explicitly exported symbols are visible to other code that uses the module. Here is an example:
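A sketch of such a module, matching the names discussed below; only doubler appears in the export list, so map' and testFunc stay private to Test2 (the function bodies are assumptions):

```haskell
-- Only doubler is exported; map' and testFunc are private
module Test2 (doubler) where

map' :: (a -> b) -> [a] -> [b]
map' _ []     = []
map' f (x:xs) = f x : map' f xs

testFunc :: [Int] -> [Int]
testFunc = map' (+ 1)

doubler :: Num a => [a] -> [a]
doubler = map' (* 2)
```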

In this example map’ and testFunc are hidden: any other module that imports Test2 only has access to doubler. It might help for you to think of the exported functions roughly as an interface for a module.

#### Pattern Matching With Guards

We will cover two important concepts in this section: using guard pattern matching to make function definitions shorter and easier to read and we will look at the Maybe type and how it is used. The Maybe type is mostly used in non-pure Haskell code and we will use it heavily later. The Maybe type is a Monad (covered in the next chapter). I introduce the Maybe type here since its use fits naturally with guard patterns.

Guards are more flexible than the pattern matching seen in the last section. I use pattern matching for simple cases of destructuring data and guards when I need the flexibility. You may want to revisit the examples in the last section after experimenting with and understanding the examples seen here.

The examples for this section are in the file Guards.hs. As a first simple example we will implement the Ruby language “spaceship operator”:
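A sketch of a spaceship-style comparison using guards (the book’s exact definition may differ):

```haskell
module Main where

-- Each guard starts with |, has a condition, and a value after =.
-- The literal (-1) must be wrapped in parentheses.
spaceship :: Ord a => a -> a -> Int
spaceship a b
  | a < b     = (-1)
  | a == b    = 0
  | otherwise = 1

main :: IO ()
main = do
  print (spaceship 1 (2 :: Int))  -- -1
  print (spaceship 2 (2 :: Int))  -- 0
  print (spaceship 3 (2 :: Int))  -- 1
```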

Notice on line 1 that we do not use an = in the function definition when using guards. Each guard starts with |, contains a condition, and a value on the right side of the = sign.

Remember that a literal negative number as seen in line 1 must be wrapped in parentheses, otherwise the Haskell compiler will interpret - as an operator.

#### Case Expressions

Case expressions match a value against a list of possible values. It is common to use the wildcard pattern _ at the end of a case expression, which matches any value. Here is an example in the file Cases.hs:
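A sketch of the numberOpinion function discussed below:

```haskell
module Main where

-- Match n against 0, 1, and a final catch-all _ pattern
numberOpinion :: Int -> String
numberOpinion n =
  case n of
    0 -> "Too low"
    1 -> "just right"
    _ -> "OK, that is a number"

main :: IO ()
main = mapM_ (putStrLn . numberOpinion) [0, 1, 7]
```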

The code in lines 3-7 defines the function numberOpinion that takes a single argument n. We use a case expression to match the value of n against several possible cases. Each case is defined using the -> operator, followed by an expression to be evaluated if the case matches.

The first case, 0 -> “Too low”, matches the value of n against 0: if n is 0, the function returns the string “Too low”. The second case, 1 -> “just right”, matches the value of n against 1: if n is 1, the function returns the string “just right”. The last case is different in that it is a catch-all case using the _ wildcard pattern. So _ -> “OK, that is a number” matches any other value of n: if n is not 0 or 1, the function returns the string “OK, that is a number”.

#### If Then Else Expressions

Haskell has if then else syntax built into the language - if is not defined as a function. Personally I do not use if then else in Haskell very often. I mostly use simple pattern matching and guards. Here are some short examples from the file IfThenElses.hs:
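A minimal sketch (the classify function is a hypothetical example, not from IfThenElses.hs):

```haskell
module Main where

-- if/then/else is an expression: both branches are required and
-- must have the same type
classify :: Int -> String
classify n = if n < 0 then "negative" else "non-negative"

main :: IO ()
main = do
  putStrLn (classify (-3))  -- negative
  putStrLn (classify 5)     -- non-negative
```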

All if expressions must have both a then expression and an else expression.

### Maps

Maps are simple to construct using a list of key-value tuples and are by default immutable. There is an example using mutable maps in the next chapter.

We will look at the module Data.Map first in a GHCi repl, then later in a few full code examples. There is something new in line 1 of the following listing: I am assigning a short alias M to the module Data.Map. In referencing a function like fromList (which converts a list of tuples to a map) in the Data.Map module I can use M.fromList instead of Data.Map.fromList. This is a common practice so when you read someone else’s Haskell code, one of the first things you should do when reading a Haskell source file is to make note of the module name abbreviations at the top of the file.

The keys in a map must all be the same type and the values are also constrained to be of the same type. I almost always create maps using the helper function fromList in the module Data.Map. We will only be using this method of map creation in later examples in this book so I am skipping coverage of other map building functions. I refer you to the Data.Map documentation.

The following example shows one way to use the Just and Nothing return values:
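A sketch of the pattern: the function name getNumericValue matches the discussion below, but the map contents and the default value 0 are assumptions:

```haskell
module Main where

import qualified Data.Map as M

-- M.lookup returns a Maybe value which we unwrap with a
-- case expression, supplying a default for Nothing
getNumericValue :: String -> M.Map String Int -> Int
getNumericValue key m =
  case M.lookup key m of
    Nothing -> 0
    Just v  -> v

main :: IO ()
main = do
  let m = M.fromList [("cat", 1), ("dog", 2)]
  print (getNumericValue "cat" m)   -- 1
  print (getNumericValue "fish" m)  -- 0
```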

The function getNumericValue shows one way to extract a value from an instance of type Maybe. The function lookup returns a Maybe value and in this example I use a case statement to test for a Nothing value or extract a wrapped value in a Just instance. Using Maybe in Haskell is a better alternative to checking for null values in C or Java.

The output from running the main function in module MapExamples is:

### Sets

The documentation of Data.Set.Class can be found on Hackage and contains overloaded functions for several types of sets.

For most of my work and for the examples later in this book, I create immutable sets from lists and the only operation I perform is checking to see if a value is in the set. The following examples in a GHCi repl are what you need for the material in this book:
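The same operations in source file form, as a sketch (the set contents are hypothetical):

```haskell
module Main where

import qualified Data.Set as S

-- Build an immutable set from a list, then test membership:
-- the only set operations used in later examples
main :: IO ()
main = do
  let s = S.fromList ["cat", "dog", "bird"]
  print (S.member "dog" s)   -- True
  print (S.member "fish" s)  -- False
```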

Sets and Maps are immutable so I find creating maps using lists of key-value tuples and creating sets using lists to be fine. That said, coming from Java, Ruby, Python, and Lisp, languages with mutable data, it took me a while to get used to immutability in Haskell.

### More on Functions

In this section we will review what you have learned so far about Haskell functions and then look at a few more complex examples.

We have been defining and using simple functions and we have seen that operators behave like infix functions. We can make operators act as prefix functions by wrapping them in parentheses:

and we can make functions act as infix operators:

This back tick function to operator syntax works with functions we write also:

Because we are working in a GHCi repl, in line 1 we use let to define the function myAdd. If you defined this function in a file and then loaded it, you would not use a let.
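A sketch pulling these ideas together in a source file, where myAdd is defined without a let:

```haskell
module Main where

-- At the top level of a source file no let is needed
-- (let is only required when defining values in the repl)
myAdd :: Int -> Int -> Int
myAdd a b = a + b

main :: IO ()
main = do
  print ((+) 1 2)      -- an operator used as a prefix function
  print (10 `div` 3)   -- a function used as an infix operator
  print (1 `myAdd` 2)  -- our own function used as an infix operator
```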

In the map examples where we applied a function to a list of values, so far we have used functions that map input values to the same return type, like this (using both partial function evaluation and an anonymous inline function):

We can also map to different types; in this example we map from a list of Num values to a list containing sub-lists of Num values:

As usual, I recommend that when you work in a GHCi repl you check the types of functions and values you are working with:

In line 2 we see that for any type t the function signature is t -> [t] where the compiler determines that t is constrained to be a Num or Enum by examining how the input variable is used as a range parameter for constructing a list. Let’s make a new function that works on any type:
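A sketch of a make3 function matching the discussion below (the exact definition in the book may differ):

```haskell
module Main where

-- make3 works on any type a: it builds a three element list
-- from its argument
make3 :: a -> [a]
make3 x = [x, x, x]

main :: IO ()
main = do
  print (make3 "cat")            -- ["cat","cat","cat"]
  print (make3 (3.14 :: Double)) -- [3.14,3.14,3.14]
```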

Notice in line 3 that the function make3 takes any type of input and returns a list of elements of the same type as the input. We used make3 both with a string argument and with a fractional (floating point) number argument.

### Comments on Dealing With Immutable Data and How to Structure Programs

If you program in other programming languages that use mutable data then expect some feelings of disorientation initially when starting to use Haskell. It is common in other languages to maintain the state of a computation in an object and to mutate the value(s) in that object. While I cover mutable state in the next chapter the common pattern in Haskell is to create a data structure (we will use lists in examples here) and pass it to functions that return a new modified copy of the data structure as the returned value from the function. It is very common to keep passing the modified new copy of a data structure through a series of function calls. This may seem cumbersome when you are starting to use Haskell but quickly feels natural.

The following example shows a simple case where a list is constructed in the function main and passed through two functions doubleOddElements and times10Elements:
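A sketch of the two functions (their exact definitions are assumptions based on their names):

```haskell
module Main where

-- Returns a new list; the argument is unchanged
doubleOddElements :: [Int] -> [Int]
doubleOddElements = map (\x -> if odd x then 2 * x else x)

times10Elements :: [Int] -> [Int]
times10Elements = map (* 10)

main :: IO ()
main = do
  let aList = [1, 2, 3, 4]
  -- nested application and function composition give the same result
  print (times10Elements (doubleOddElements aList))
  print ((times10Elements . doubleOddElements) aList)
```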

Notice that the expressions being evaluated in lines 11 and 13 are the same. In line 11 we are applying the function doubleOddElements to the value of aList and passing the result to the outer function times10Elements. In line 13 we are creating a new function by composing two existing functions: times10Elements . doubleOddElements. The parentheses in line 13 are required because function application binds more tightly than the . operator, so without the parentheses line 13 would evaluate as times10Elements . (doubleOddElements aList), attempting to compose a function with a list, which is a type error.

The output is:

Using immutable data takes some getting used to. I am going to digress for a minute to talk about working with Haskell. The steps I take when writing new Haskell code are:

• Be sure I understand the problem
• How will data be represented - in Haskell I prefer using built-in types when possible
• Determine which Haskell standard functions, modules, and 3rd party modules might be useful
• Write and test the pure Haskell functions I think that I need for the application
• Write an impure main function that fetches required data, calls the pure functions, and saves the processed data (the functions themselves remain pure; only the main function that calls them is impure).

I am showing you many tiny examples but please keep in mind the entire process of writing longer programs.

### Error Handling

We have seen examples of handling soft errors when no value can be calculated: use Maybe, Just, and Nothing. In bug free pure Haskell code, runtime exceptions should be very rare and I usually do not try to trap them.

Using Maybe, Just, and Nothing is much better than, for example, throwing an error using the standard function error:

and then catching the errors in impure code (see the documentation for the standard Control.Exception module for reference).

In impure code that performs IO or accesses network resources that could possibly run out of memory, etc., runtime errors can occur and you could use the same try catch coding style that you have probably used in other programming languages. I admit this is my personal coding style but I don’t like to catch runtime errors. I spent a long time writing Java applications and when possible I preferred using uncaught exceptions and I usually do the same when writing impure Haskell code.

Because of Haskell’s type safety and excellent testing tools, it is possible to write nearly error free Haskell code. Later when we perform network IO we will rely on library support to handle errors and timeouts in a clean “Haskell like” way.

If you use stack to create a new project then the framework for testing is generated for you:

This stack generated project is more complex than the project I created manually in the directory haskell_tutorial_cookbook_examples/Pure. The file Setup.hs is a placeholder and uses any module named Main in the app directory. This module, defined in app/Main.hs, imports the module Lib defined in src/Lib.hs.

The generated test does not do anything, but let’s run it anyway:

In the generated project, I made a few changes:

• removed src/Lib.hs
• added src/MyColors.hs providing the type MyColors that we defined earlier
• modified app/Main.hs to use the MyColors type

Here is the contents of TestingHaskell/src/MyColors.hs:

And the new test/Spec.hs file:

Notice how two of the tests are meant to fail as an example. Let’s run the tests:

In line 1, with stack test, we are asking stack to run the app's tests in the subdirectory test. All Haskell source files in the subdirectory test are assumed to be test files. In the listing for the file test/Spec.hs we have two tests that fail on purpose and you see the output for the failed tests at lines 12-15 and 17-20.

Because the Haskell compiler does such a good job at finding type errors I have fewer errors in my Haskell code compared to languages like Ruby and Common Lisp. As a result I find myself writing fewer tests for my Haskell code than I would write in other languages. Still, I recommend some tests for each of your projects; decide for yourself how much relative effort you want to put into writing tests.

I hope you are starting to get an appreciation for using composition of functions and higher order functions to enable us to compose programs from smaller pieces that can be joined together.

This composition is made easier when using pure functions that always return the same value when called with the same type of arguments.

We will continue to see examples of how lazy evaluation simplifies code because we can use infinitely large lists with the assurance that values are not calculated until they are needed.

In addition to Haskell code generally having fewer errors (after it gets by the compiler!) other advantages of functional programming include more concise code that is easy to read and understand once you get some experience with the language.

## Tutorial on Impure Haskell Programming

One of the great things about Haskell is that the language encourages us to think of our code in two parts:

• Pure functional code (functions have no side effects) that is easy to write and test. Functional code tends to be shorter and less likely to be imperative (i.e., more functional, using maps and recursion, and less use of loops as in Java or C++).
• Impure code that deals with side effects like file and network IO, maintains state in a typesafe way, and isolates imperative code that has side effects.

In his excellent Functional Programming with Haskell class at edX, Erik Meijer described pure code as islands in an ocean, with the ocean representing impure code. He says that it is a design decision how much of your code is pure (islands) and how much is impure (the ocean). This model of looking at Haskell programs works for me.

My use of the word “impure” is common for referring to Haskell code with side effects. Haskell is a purely functional language and side effects like I/O are best handled in a pure functional way by wrapping values in Monads.

In addition to showing you reusable examples of impure code that you will likely need in your own programs, a major theme of this chapter is handling impure code in a convenient type safe fashion. Monads, which wrap values, are used to safely manage state. I will introduce you to using Monad types as required for the examples in this chapter. This tutorial style introduction will prepare you for understanding the sample applications later.

I showed you many examples of pure code in the last chapter but most examples in source files (as opposed to those shown in a GHCi repl) had a bit of impure code in them: a main function like the following that simply writes a string of characters to standard output:

The type of function main is:

The IO () monad is an IO value wrapped in a type safe way. Because Haskell is a lazy evaluation language, the value is not evaluated until it is used. Every IO () action returns exactly one value. Think of the word “mono” (or “one”) when you think of Monads because they always return one value. Monads are also used to connect together parts of a program.

What is it about the function main in the last example that makes its type an IO ()? Consider the simple main function here:

and its type:

OK, now you see that there is nothing special about a main function: it gets its type from the type of the value returned from the function. It is common to have the return type depend on the function argument types. The first example has the type IO () because it returns a print expression:

The function print shows the enclosing quote characters when displaying a string while putStrLn does not. In the first example, what happens when we stitch together several expressions that have type IO ()? Consider:
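A sketch of stitching several IO () actions together (the greeting string is hypothetical):

```haskell
module Main where

greeting :: String
greeting = "Hello Haskell!"

-- Three IO () actions stitched together in one do expression;
-- main itself still has the type IO ()
main :: IO ()
main = do
  putStrLn greeting
  print greeting     -- print keeps the enclosing quote characters
  putStrLn "Goodbye"
```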

Function main is still of type IO (). You have seen do expressions frequently in examples and now we will dig into what the do expression is and why we use it.

The do notation makes working with monads easier. There are alternatives to using do that we will look at later.

One thing to note is that if you make bindings inside a do expression using a let with an in expression, you need to wrap the bindings in a new (inner) do expression if there is more than one line of code following the let. The way to avoid requiring a nested do expression is to not use in in a let expression inside a do block of code. Yes, this sounds complicated, but let’s clear up any confusion by looking at the examples found in the file ImPure/DoLetExample.hs (you might also want to look at the similar example file ImPure/DoLetExample2.hs that uses bind operators instead of a do expression; we will look at bind operators in the next section):
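A sketch of the two patterns (the concrete expressions are hypothetical; see DoLetExample.hs for the book’s actual code):

```haskell
module Main where

answer :: Int
answer = 2 * 21

-- example1: let without in inside a do block (the preferred pattern)
example1 :: IO ()
example1 = do
  let x = answer
  putStrLn "example1:"
  print x

-- example2: let ... in needs a nested do when more than one
-- action follows the binding
example2 :: IO ()
example2 = do
  let x = answer in do
    putStrLn "example2:"
    print x

main :: IO ()
main = example1 >> example2
```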

You should use the pattern in function example1 and not the pattern in example2. The do expression is syntactic sugar that allows programmers to string together a sequence of operations that can mix pure and impure code.

To be clear, the left arrow <- is used when the expression on the right side is an IO action whose result needs to be extracted before being used. A let expression is used when the right side expression is a pure value.

On lines 6 and 12 we are using the function read to convert the string read from an IO String action to an integer value. Remember that the value bound to s comes from calling getLine, which has type IO String, so in the same way you might read a value from a file, in this example we are extracting a value from an IO action.

### A Note About >> and >>= Operators

So far in this book I have been using the syntactic sugar of the do expression to work with Monads like IO () and I will usually use this syntactic sugar for the rest of this book.

Even though I find it easier to write and read code using do, many Haskell programmers prefer >> and >>= so let’s go over these operators so you won’t be confused when reading Haskell code that uses them. Also, when we use do expressions in code the compiler generates similar code using these >> and >>= operators.

The Monad type class defines the operators >>= and return. We turn to the GHCi repl to experiment with and learn about these operators:

We start with the return function type return :: Monad m => a -> m a which tells us that for a monad m the function return takes a value and wraps it in the monad. We will see examples of the return function used to return a wrapped value from a function that returns IO () values. The sequencing operator (>>) is used to evaluate two expressions in sequence, discarding the result of the first. As an example, we can replace this do expression:

with the following:
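A sketch showing both forms side by side (the messages are hypothetical):

```haskell
module Main where

hello, goodbye :: String
hello = "Hello"
goodbye = "Goodbye"

-- The do form:
withDo :: IO ()
withDo = do
  putStrLn hello
  putStrLn goodbye

-- The same sequence using the sequencing operator (>>):
withSequencing :: IO ()
withSequencing = putStrLn hello >> putStrLn goodbye

main :: IO ()
main = withDo >> withSequencing
```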

The operator >>= is similar to >> except that it evaluates the left hand expression and pipes its result into the function on the right hand side. The left hand side is an IO action and the function on the right hand side consumes the value it produces. An example will make this simpler to understand:
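A sketch in the spirit of example3 (the book’s version reads a line from the user with getLine; here return supplies the value so the sketch runs without interactive input, and decorate is a hypothetical helper):

```haskell
module Main where

-- decorate is a hypothetical pure helper
decorate :: String -> String
decorate s = "You entered: " ++ s

-- The result of the left hand action is piped with >>= into the
-- anonymous function on the right
example3 :: IO ()
example3 = return "42" >>= \s -> putStrLn (decorate s)

main :: IO ()
main = example3
```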

Note that I could have used a do statement to define function example3 but used a bind operator instead. Let’s run this example and look at the function types. Please don’t just quickly read through the following listing; when you understand what is happening in this example then for the rest of your life programming in Haskell things will be easier for you:

The interesting part starts at line 11 when we define x to be the returned value from calling example3. Remember that Haskell is a lazy language: evaluation is postponed until a value is actually used.

Working inside a GHCi repl is like working interactively inside a do expression. When we evaluate x in line 12 the code in function example3 is actually executed (notice this is where the user prompt to enter a number occurs). In line 18 we are re-evaluating the value in x and passing the resulting IO String value to the function example4.

Haskell is a “piecemeal” programming language, as are the Lisp family of languages, where a repl is used to write little pieces of code that are collected into programs. For simple code in Haskell (and Lisp languages) I do sometimes directly enter code into a text editor but very often I start in a repl, experiment, debug, refine, and then copy into an edited file.

### Console IO Example with Stack Configuration

The directory CommandLineApps contains two simple applications that interact with STDIO, that is to write to the console and read from the keyboard. The first example can be found in file CommandLineApp/CommandLine1.hs:

Lines 3 and 4 import the entire System.IO module (that is, import all exported symbols from System.IO) and just the function toUpper from the module Data.Char. Both System.IO and Data.Char are standard Haskell modules in the base package, which is specified in the CommandLineApp.cabal configuration file that we will look at soon.

Use of the <- assignment in line 8 in the last Haskell listing is important to understand. It might occur to you to leave out line 8 and just place the getLine function call directly in line 9, like this:

If you try this (please do!) you will see compilation errors like:

The type of getLine is IO String: a wrapped IO computation. The value is not computed until it is used. The <- in line 8 runs the IO action and unwraps the result of the IO operation so that it can be used.

I don’t spend much time covering stack project configuration files in this book but I do recommend that as you work through the examples you also look for a file in each example directory ending with the file extension .cabal that specifies which packages need to be loaded. For some examples it might take a while to download and configure libraries the first time you run either stack build or stack ghci in an example directory.

The Haskell stack project in the CommandLineApp directory has five target applications as we can see in the CommandLineApp.cabal file. I am not going to go into much detail about the project cabal and stack.yaml files generated by stack when you create a new project except for configuration data that I had to add manually; in this case, I added two executable targets at the end of the cabal file (note: the project in the github repository for this book has more executable targets, I just show a few here):

The executable name determines the compiled and linked executable file name. For line 1, an executable file “CommandLine1” (or “CommandLine1.exe” on Windows) will be generated. The parameter hs-source-dirs is a comma separated list of source file directories. In this simple example all Haskell source files are in the project’s top level directory “../”. The build-depends is a comma separated list of module libraries; here we only use the base modules packaged with Haskell.

Let’s use a GHCi repl to poke at this code and understand it better. The project defined in CommandLineApp/CommandLineApp.cabal contains many executable targets so when we enter a GHCi repl, the available targets are shown and you can choose one; in this case I am selecting the first target defined in the cabal file. In later GHCi repl listings, I will edit out this output for brevity:

In line 36 the function getLine is of type getLine :: IO String which means that calling getLine returns a value that is a computation to get a line of text from stdio but the IO operation is not performed until the value is used.

Please note that it is unusual to put five executable targets in a project’s cabal file. I am only doing so here because I wanted to group five similar examples together in this subdirectory of the github repo for this book. This repo has 16 example subdirectories, and the number would be much greater if I didn’t collect similar examples together.

We will use the example in file CommandLine2.hs in the next section which is similar to this example but also appends the user input to a text file.

### File IO

We will now look at a short example of doing file IO. We will write simple Haskell String values to a file. If you are using the more efficient Haskell Text values, the code is the same. Text values are more efficient than simple String values when dealing with a lot of data, and we will later use a compiler setting to automatically convert between the underlying formats. The following listing shows CommandLineApp/CommandLine2.hs:

Note the use of recursion in line 11 to make this program loop forever until you use a Control-c to stop the program.

In line 10 we are using function appendFile to open a file, append a string to it, and then close the file. appendFile is of type appendFile :: FilePath -> String -> IO (). It looks like we are passing a simple string as a file name instead of type FilePath but if you look up the definition of FilePath you will see that it is just an alias for string: type FilePath = String.
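As a self-contained sketch of appendFile (the file name here is made up for illustration):

```haskell
main :: IO ()
main = do
  writeFile "append-demo.txt" ""               -- start with an empty file
  appendFile "append-demo.txt" "first line\n"  -- open, append, close
  appendFile "append-demo.txt" "second line\n"
  contents <- readFile "append-demo.txt"
  putStr contents
```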

Running this example in a GHCi repl, with much of the initial printout from running stack ghci not shown:

The file temp.txt was just created.

The next example, ReadTextFile.hs, reads the file temp.txt and processes the text by finding all of the words in the file:

readFile is a high-level function because it manages reading a file and closing the file handle it uses internally for you. The built-in function words splits a string on spaces and returns a list of strings [String] that is printed on line 7:
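Here is a self-contained sketch of the readFile and words pattern (my own example with a made-up file name, not the book's listing):

```haskell
main :: IO ()
main = do
  writeFile "words-demo.txt" "the quick brown fox"
  contents <- readFile "words-demo.txt"   -- readFile opens, reads, and closes the file
  print (words contents)                  -- words splits the string on whitespace
```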

What if the function readFile encounters an error? That is the subject for the next section.

### Error Handling in Impure Code

I know you have been patiently waiting to see how we handle errors in Haskell code. Your wait is over! We will look at several common types of runtime errors and how to deal with them. In the last section we used the function readFile to read the contents of a text file temp.txt. What if temp.txt does not exist? Well, then we get an error like the following when running the example program in ReadTextFile.hs:

Let’s modify this last example in a new file ReadTextFileErrorHandling.hs that catches a file not found error. The following example is derived from the first example in Michael Snoyman’s article Catching all exceptions. This example does not work inside threads; if you need to catch errors inside a thread then see the second example in Michael’s article.
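If you just need to catch a missing-file error outside of threads, a simpler variant (my own sketch using try from Control.Exception, not the exact code in ReadTextFileErrorHandling.hs) looks like this:

```haskell
import Control.Exception (IOException, try)

main :: IO ()
main = do
  -- try returns Left on an IOException, Right on success.
  result <- try (readFile "no-such-file.txt") :: IO (Either IOException String)
  case result of
    Left err       -> putStrLn ("Could not read file: " ++ show err)
    Right contents -> putStr contents
```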

I will run this twice: the first time without the file temp.txt present and a second time with temp.txt in the current directory:

Until you need to handle runtime errors in a multi-threaded Haskell program, following this example should be sufficient. In the next section we look at Network IO.

### Network IO

We will experiment with three network IO examples in this book:

• A simple socket client/server example in this section.
• Reading web pages in the chapter “Web Scraping”
• Querying remote RDF endpoints in the chapter “Linked Data and the Semantic Web”

We start by using a high level library, network-simple, for both the client and server examples in the next two sub-sections. The client and server examples are in the directory haskell_tutorial_cookbook_examples/ClientServer in the files Client.hs and Server.hs.

#### Server Using network-simple Library

The Haskell Network and Network.Simple modules use strings represented as Data.ByteString.Char8 data so as seen in line 1 I set the language type OverloadedStrings. The following example in file ClientServer/Server.hs is derived from an example in the network-simple project:

The server accepts a string, reverses the string, and returns the reversed string to the client.

I am assuming that you have done some network programming and are familiar with sockets, etc. The function reverseStringLoop defined in lines 9-13 accepts a socket as a parameter and returns a value of type MonadIO that wraps a byte-string value. In line 10 we use the T.recv function that takes two arguments: a socket and the maximum number of bytes to receive from the client. The case expression reverses the received byte string, sends the reversed string back to the client, and recursively calls itself to wait for new data from the client. If the client breaks the socket connection, then the function returns an empty MonadIO ().

The main function defined in lines 15-21 listens on port 3000 for new client socket connections. In line 19, the function T.acceptFork accepts as an argument a socket value and a function to execute; the complete type is:

Don’t let line 3 scare you; the GHCi repl is just showing you where this type of MonadIO is defined. The return type refers to a thread ID that is passed to the function forever :: Monad m => m a -> m b that is defined in the module Control.Monad and lets the thread run until it terminates.

The network-simple package is fairly high level and relatively simple to use. If you are interested you can find many client/server examples on the web that use the lower-level network package.

We will develop a client application to talk with this server in the next section but if you want to immediately try the server, start it and then run telnet in another terminal window:

And run telnet:

In the next section we write a simple client to talk with this service example.

#### Client Using network-simple Library

I want to use automatic conversion between strings represented as Data.ByteString.Char8 data and regular [Char] strings so as seen in line 1 I set the language type OverloadedStrings in the example in file Client.hs:

The function T.connect in line 9 accepts arguments for a host name, a port, and a function to call with the connection socket to the server and the server’s address. The body of this inline function, defined in the middle of line 9 and continuing in lines 10-15, prints the server address, sends a string “test123” to the server, and waits for a response back from the server (T.recv in line 12). The server response is printed, or a warning that no response was received.

While the example in file Server.hs is running in another terminal, we can run the client interactively:

### A Haskell Game Loop that Maintains State Functionally

The example in this section can be found in the file GameLoop2.hs in the directory haskell_tutorial_cookbook_examples/CommandLineApp. This example uses the random package to generate a seed random number for a simple number guessing game. An alternative implementation in GameLoop1.hs, which I won’t discuss, uses the system time to generate a seed.

This is an important example because it demonstrates one way to maintain state in a functional way. We have a read-only game state value that is passed to the function gameLoop which modifies the read-only game state passed as an argument and returns a newly constructed game state as the function’s returned value. This is a common pattern that we will see again later when we develop an application to play a simplified version of the card game Blackjack in the chapter “Haskell Program to Play the Blackjack Card Game.”

You will notice in line 12 that since we are inside a do expression we can unwrap the IO String value returned from getLine to a string value that we can use directly. This is a pattern we will use repeatedly. The value returned from getLine is not used until line 13, when we use the function read to parse the string value that getLine returned.

In the if expression in lines 14-16 we check if the user has input the correct value and can then simply return the input game state to the calling main function. If the user has not guessed the correct number then in line 16 we create a new game state value and call the function gameLoop recursively with the newly constructed game state.
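The state-threading pattern can be shown in a pure sketch (hypothetical names of my own; GameLoop2.hs runs interactively, while this version takes the guesses as a list so it can run without user input):

```haskell
data GameState = GameState { secret :: Int, guesses :: Int }
  deriving Show

-- Thread a read-only state through a recursive loop, returning a
-- newly constructed state each time instead of mutating anything.
gameLoop :: GameState -> [Int] -> GameState
gameLoop state [] = state
gameLoop state (g:gs)
  | g == secret state = state { guesses = guesses state + 1 }
  | otherwise         = gameLoop (state { guesses = guesses state + 1 }) gs

main :: IO ()
main = print (gameLoop (GameState 7 0) [3, 9, 7])
```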

The following listing shows a sample session playing the number guessing game.

We will use this pattern for maintaining state in a game in the later chapter “Haskell Program to Play the Blackjack Card Game.”

Except for the Client/Server example, so far we have been mostly using simple String values where String is a list of characters [Char]. For longer strings it is much more efficient to use the module Data.Text that is defined in package text (so text needs to be added to the dependencies in your cabal file).

Many Haskell libraries use the simple String type but the use of Data.Text is also common, especially in applications handling large amounts of string data. We have already seen examples of this in the client/server example programs. Fortunately Haskell is a strongly typed language that supports a language extension for automatically handling both simple strings and the more efficient text types. This language extension, as we have seen in a previous example, is activated by adding the following near the top of a Haskell source file:

As much as possible I am going to use simple strings in this book and when we need both simple strings and byte strings I will then use OverloadedStrings for automatic conversion. This conversion is performed by knowing the type signatures of data and functions in surrounding code. The compiler figures out what type of string is expected and does the conversion for you.
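A minimal sketch of OverloadedStrings in action (assuming the bytestring package, which ships with GHC): the string literal below is given the ByteString type directly.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.ByteString.Char8 as B
import Data.Char (toUpper)

greeting :: B.ByteString
greeting = "hello from a string literal"   -- the literal becomes a ByteString

main :: IO ()
main = B.putStrLn (B.map toUpper greeting)
```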

### A More Detailed Look at Monads

We have been casually using different types of IO () monads. In this section I will introduce you to the State monad and then we will take a deeper look at IO (). While we will be just skimming the surface of the topic of monads, my goal in this section is to teach you enough to work through the remaining examples in this book.

Monads are types belonging to the Monad type class that specifies one operator and one function:

The >>= operator takes two arguments: a monad wrapping a value (type a in the above listing) and a function taking the same type a and returning a monad wrapping a new type b. The return value of >>= is a new monad wrapping a value of type b.

The Monad type class function return takes any value and wraps it in a new monad. The naming of return is confusing because it does not alter the flow of execution in a program like a return statement in Java, rather, it wraps a value in a monad.
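A tiny sketch of >>= and return using the Maybe monad (my own example; the same operator and function work for IO, State, lists, and other monads):

```haskell
-- half succeeds (Just) only for even numbers.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (return 8 >>= half >>= half)   -- both binds succeed
  print (return 7 >>= half >>= half)   -- the first bind fails and Nothing propagates
```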

The definition for the constructor of a State monad is:

So far we have been using data to define new types; newtype is similar except that a newtype wrapper exists only at compile time, so no extra type information is present at runtime. All monads contain a value and for the State monad this value is a function. The >>= operator is called the bind operator.

The accessor function runState provides the means to access the value in the state. The following example is in the file StateMonad/State1.hs. In this example, incrementState is a state monad that increases its wrapped integer value by one when it is executed. Remember that the return function is perhaps poorly named because it does not immediately “return” from a computation block as it does in other languages; return simply wraps a value as a monad without redirecting the execution flow.

In order to make the following example more clear, I implement the increment state function twice, once using the do notation that you are already familiar with and once using the >>= bind operator:

Here we have used two very different looking, yet equivalent, styles for accessing and modifying state monad values. In lines 6-9 we are using the do notation. The function get in line 7 returns one value: the value wrapped in a state monad. Function put in line 8 replaces the wrapped value in the state monad, in this example by incrementing its numeric value. Finally return wraps the value in a monad.
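Here is a self-contained sketch of the do-notation version (simplified from the State1.hs idea; it assumes the mtl package, which ships with GHC):

```haskell
import Control.Monad.State

incrementState :: State Int Int
incrementState = do
  n <- get        -- read the wrapped value
  put (n + 1)     -- replace the wrapped value
  return n        -- the computation's result is the old value

main :: IO ()
main = print (runState incrementState 41)   -- prints (result, final state)
```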

I am using the runState function defined in lines 20-24 that returns a tuple: the first tuple value is the result of the computation performed by the function passed to runState (incrementState and incrementState2 in these examples) and the second tuple value is the final wrapped state.

In lines 12-15 I reimplemented increment state using the bind function (>>=). We have seen before that >>= passes the value on its left side to the computation on its right side, that is function calls in lines 13-15:

It is a matter of personal taste whether to code using bind or do. I almost always use the do notation in my own code but I wanted to cover bind both in case you prefer that notation and so you can also read and understand Haskell code using bind. We continue looking at alternatives to the do notation in the next section.

### Using Applicative Operators <$> and <*>: Finding Common Words in Files

My goal in this book is to show you a minimal subset of Haskell that is relatively easy to understand and use for coding. However, a big part of using a language is reading other people’s code, so I do need to introduce a few more constructs that are widely used: applicative operators.

Before we begin I need to introduce you to a new term: Functor, which is a typeclass that defines only one method, fmap. fmap is used to map a function over an IO action and has the type signature:

fmap can be used to apply a pure function like (a -> b) to an IO a and return a new IO b without unwrapping the original IO value. The following short example (in file ImPure/FmapExample.hs) will let you play with this idea:

In lines 8-9 I am unwrapping the result of the IO [String] returned by the function fileToWords and then applying the pure function words to the unwrapped value. Wouldn’t it be nice to operate on the words in the file without unwrapping the [String] value? You can do this using fmap as seen in lines 10-11. Please take a moment to understand what line 10 is doing. Here is line 10:

First we read the words in a file into an IO [String] monad:

Then we apply the pure function reverse to the values inside the IO [String] monad, creating a new copy:

Note that from the type of the fmap function, the input monad and output monad can wrap different types. For example, if we applied the function head to an IO [String] we would get an output of IO [Char].

Finally we unwrap the [String] value inside the monad and set words2 to this unwrapped value:

In summary, the Functor typeclass defines one method fmap that is useful for operating on data wrapped inside a monad.

We will now implement a small application that finds common words in two text files, implementing the primary function three times, using:

• The do notation.
• The >>= bind operator.
• The Applicative operators <$> and <*>
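Here is a tiny self-contained sketch of the fmap idea (my own example, separate from ImPure/FmapExample.hs; readGreeting stands in for a function that reads a file):

```haskell
import Data.Char (toUpper)

readGreeting :: IO String      -- stands in for reading a file
readGreeting = return "hello world"

main :: IO ()
main = do
  -- Apply a pure function inside the IO action without unwrapping it first:
  shouted <- fmap (map toUpper) readGreeting
  putStrLn shouted
```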

Let’s look at the types for these operators:

We will use both <$> and <*> in the function commonWords3 in this example and I will explain how these operators work after the following program listing. This practical example will give you a chance to experiment more with Haskell (you do have a GHCi repl open now, right?). The source file for this example is in the file ImPure/CommonWords.hs:

The function fileToWords defined in lines 6-8 simply reads a file, as in the last example, maps the contents of the file to lower case, uses words to convert a String to a [String] list of individual words, and uses the function Data.Set.fromList to create a set from a list of words that in general will have duplicates. We are returning an IO (Data.Set.Base.Set String) value so we can later perform a set intersection operation. In other applications you might want to apply Data.Set.toList before returning the value from fileToWords so the return type of the function would be IO [String].

The last listing defines three similar functions commonWords, commonWords2, and commonWords3. commonWords, defined in lines 10-13, should hopefully look routine and familiar to you now. We set the local variables to the unwrapped (i.e., extracted from a monad) contents of the unique words in two files, and then return a monad wrapping the intersection of the words in both files. The function commonWords2 is really the same as commonWords except that it uses the bind >>= operator instead of the do notation.

The interesting function in this example is commonWords3 in lines 20-23, which uses the applicative operators <$> and <*>. Notice the pure function defined inline in line 21: it takes two arguments of type set and returns the set intersection of the arguments. The operator <$> takes a function on the left side and a monad on the right side which contains the wrapped value to be passed as the argument f1. <*> supplies the value for the inline function argument f2.
To rephrase how lines 21-23 work: we are calling fileToWords twice, both times getting a monad. These two wrapped monad values are passed as arguments to the inline function in line 21 and the result of evaluating this inline function is returned as the value of the function commonWords3. I hope that this example has at least provided you with “reading knowledge” of the Applicative operators <$> and <*> and has also given you one more example of replacing the do notation with the use of the bind >>= operator.
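The commonWords3 pattern can be reduced to a self-contained sketch (my own example; the IO actions wrap literal word sets rather than reading real files):

```haskell
import qualified Data.Set as Set

wordsA, wordsB :: IO (Set.Set String)
wordsA = return (Set.fromList ["the", "cat", "sat"])
wordsB = return (Set.fromList ["the", "dog", "sat"])

main :: IO ()
main = do
  -- Apply the pure two-argument function Set.intersection to the values
  -- wrapped in two IO actions using <$> and <*>.
  common <- Set.intersection <$> wordsA <*> wordsB
  print (Set.toList common)
```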

### List Comprehensions Using the do Notation

We saw examples of list comprehensions in the last chapter on pure Haskell programming. We can use return to get list values, since lists are instances of the Monad type class:

We can get list comprehension behavior from the do notation (here I am using the GHCi repl :{ and :} commands to enter multiple line examples):

I won’t use this notation further but you now will recognize this pattern if you read it in other people’s code.
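For reference, here is a small runnable sketch comparing the two notations (my own example):

```haskell
pairs :: [(Int, Char)]
pairs = do          -- the list monad: each <- draws every element in turn
  x <- [1, 2]
  c <- ['a', 'b']
  return (x, c)

main :: IO ()
main = do
  print pairs                                    -- do notation result
  print [(x, c) | x <- [1, 2], c <- ['a', 'b']]  -- the equivalent comprehension
```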

### Dealing With Time

In the example in this section we will see how to time a block of code (using two different methods) and how to set a timeout for code that runs in the IO monad.

The first way we time a block of code uses getPOSIXTime and can be used to time pure or impure code. The second method using timeIt takes an IO () as an argument; in the following example I wrapped pure code in a print function call which returns an IO () as its value. The last example in the file TimerTest.hs shows how to run impure code wrapped in a timeout.
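A minimal sketch of the getPOSIXTime method (my own example, using the time package that ships with GHC; the timed computation is just a sum):

```haskell
import Data.Time.Clock.POSIX (getPOSIXTime)

main :: IO ()
main = do
  start <- getPOSIXTime
  let result = sum [1 .. 1000000] :: Int
  result `seq` return ()    -- force the lazy computation inside the timed region
  end <- getPOSIXTime
  putStrLn ("sum = " ++ show result)
  putStrLn ("elapsed seconds: " ++ show (end - start))
```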

I wanted a function that takes a while to run so for anyCalculationWillDo (lines 7 to 11) I implemented an inefficient prime number generator.

When running this example on my laptop, the last two timeout calls (lines 26 and 31) are terminated for taking more than 100000 microseconds to execute.

The last line 32 of code prints out the first 5 prime numbers greater than 1 so you can see the results of calling the time wasting test function anyCalculationWillDo.

The timeout function is useful for setting a maximum time that you are willing to wait for a calculation to complete. I mostly use timeout for timing out operations fetching data from the web.
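Here is a deterministic sketch of timeout behavior (my own example, using threadDelay to simulate a slow operation; times are in microseconds):

```haskell
import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

main :: IO ()
main = do
  r1 <- timeout 1000000 (return (2 + 2 :: Int))   -- finishes well within a second
  print r1
  r2 <- timeout 100000 (threadDelay 500000 >> return (0 :: Int))  -- too slow
  print r2
```

The first call completes, so timeout returns a Just value; the second is cut off after 100000 microseconds and returns Nothing.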

### Using Debug.Trace

Inside the IO monad you can use print statements to understand what is going on in your code when debugging. You cannot use print statements inside pure code, but the Haskell base library contains the trace functions that internally perform impure writes to the console (stderr). You do not want to use these debug tools in production code.

As an example, I have rewritten the example from the last section to use Debug.Trace.trace and Debug.Trace.traceShow:

In line 3 we import the trace and traceShow functions:

trace takes two arguments: the first is a string that is written to stderr and the second is an expression to be evaluated. traceShow is like trace except that the first argument can be any value with a Show instance, which is converted to a string. The output from running this example is:

I don’t usually like using the trace functions because debugging with them involves slightly rewriting my code. My preference is to get low level code written interactively in the GHCi repl so it does not need to be debugged. I very frequently use print statements inside IO code since adding them requires no significant modification of my code.
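As a minimal sketch of both functions (hypothetical functions of my own, not the rewritten timing example; the trace messages go to stderr while the final result prints normally):

```haskell
import Debug.Trace (trace, traceShow)

addOne :: Int -> Int
addOne x = trace ("addOne called with " ++ show x) (x + 1)

double :: Int -> Int
double x = traceShow x (x * 2)   -- traceShow prints any showable value

main :: IO ()
main = print (double (addOne 20))
```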

### Wrap Up

I tried to give you a general fast-start in this chapter for using monads and in general writing impure Haskell code. This chapter should be sufficient for you to be able to understand and experiment with the examples in the rest of this book.

This is the end of the first section. We will now look at a variety of application examples using the Haskell language.

While I expect you to have worked through the previous chapters in order, for the rest of the book you can skip around and read the material in any order that you wish.

## Section 2 - Cookbook

Now that you have worked through the pure and impure Haskell coding tutorials in the first two chapters we will look at a “cookbook” of techniques and sample applications to solve some common programming tasks as well as implement a program to play the card game Blackjack.

I expect you, dear reader, to have studied and absorbed the tutorial material on pure and impure Haskell programming in the first two chapters. If you are new to Haskell, or don’t have much experience yet, carefully working through these tutorial chapters is a requirement for understanding the material in the rest of this book.

This section contains the following “recipe” applications:

• Text processing CSV Files
• Text processing JSON Files
• Using sqlite and Postgres databases
• REST Server Providing JSON Data
• REST Client
• Accessing and Using Linked Data
• Querying Semantic Web RDF Data Sources
• Annotating English text with Wikipedia/DBPedia URIs for entities in the original text. Entities can be people, places, organizations, etc.
• Play the Blackjack card game
• Machine Learning
• Probabilistic Graph Models

## Text Processing

In my work in data science and machine learning, processing text is a core activity. I am a practitioner, not a research scientist, and in a practical sense, I spend a fair amount of time collecting data (e.g., web scraping and using semantic web/linked data sources), cleaning it, and converting it to different formats.

We will cover three useful techniques: parsing and using CSV (comma separated values) spreadsheet files, parsing and using JSON data, and cleaning up natural language text that contains noise characters.

The comma separated values (CSV) format is a plain text format that all spreadsheet applications support. The following example illustrates two techniques that we haven’t covered yet:

• Extracting values from the Either type.
• Using destructuring to concisely extract parts of a list.

The Either type Either a b contains either a Left a or a Right b value and is usually used to return an error in Left or a valid result in Right. We will use the Data.Either.Unwrap module to unwrap the Right part of a call to the Text.CSV.parseCSVFromFile function, which reads a CSV file and returns either a Left error or the data in the spreadsheet as a list in the Right value.
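Before the CSV code, here is a tiny sketch of the Either convention itself (my own example):

```haskell
-- Left carries an error description; Right carries a successful value.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv a b = Right (a `div` b)

main :: IO ()
main = do
  print (safeDiv 10 2)
  print (safeDiv 10 0)
```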

The destructuring trick in line 15 in the following listing lets us separate the head and rest of a list in one operation; for example:
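A runnable sketch of this head/rest destructuring (my own example, shaped like a parsed CSV with a header row):

```haskell
main :: IO ()
main = do
  -- The pattern (header : rows) splits the list in one operation.
  let (header : rows) = [["name", "age"], ["Alice", "31"], ["Bob", "27"]]
  print header
  print rows
```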

Here is how to read a CSV file:

Function readCsvFile reads from a file and returns a CSV. What is a CSV type? You could search the web for documentation, but dear reader, if you have worked this far learning Haskell, by now you know to rely on the GHCi repl:

So, a CSV is a list of records (rows in the spreadsheet file), each record is a list of fields (i.e., a string value).

The output when reading the CSV file test.csv is:

### JSON Data

JSON is the native data format for the Javascript language and JSON has become a popular serialization format for exchanging data between programs on a network. In this section I will demonstrate serializing a Haskell type to a string with JSON encoding and then perform the opposite operation of deserializing a string containing JSON encoded data back to an object.

The first example uses the module Text.JSON.Generic (from the json library) and the second example uses module Data.Aeson (from the aeson library).

In the first example, we set the language type to include DeriveDataTypeable so a new type definition can simply derive Typeable which allows the compiler to generate appropriate encodeJSON and decodeJSON functions for the type Person we define in the example:

Notice that in line 14 I specified the expected type in the decodeJSON call. This is not strictly required; the Haskell GHC compiler knows what to do in this case. I specified the type for code readability. The Haskell compiler wrote the name and email functions for me and I use these functions in lines 16 and 17 to extract these fields. Here is the output from running this example:

The next example uses the Aeson library and is similar to this example.

Using Aeson, we set a language type DeriveGeneric and in this case have the Person class derive Generic. The School of Haskell has an excellent Aeson tutorial that shows a trick I use in this example: letting the compiler generate required functions for types FromJSON and ToJSON as seen in lines 12-13.

I use a shortcut in line 19, assuming that the Maybe value returned from decode (which the compiler wrote automatically for the type FromJSON) contains a Just value instead of an empty Nothing value. So in line 19 I directly unwrap the Just value.

Here is the output from running this example:

Line 5 shows the result of printing the JSON encoded string value created by the call to encode in line 17 of the last code example. Line 6 shows the decoded value of type Person, and lines 7 and 8 show the inner wrapped values in the Person data.

### Cleaning Natural Language Text

I spend a lot of time working with text data because I have worked on NLP (natural language processing) projects for over 25 years. We will jump into some interesting NLP applications in the next chapter. I will finish this chapter with strategies for cleaning up text which is often a precursor to performing NLP.

You might be asking why we would need to clean up text. Here are a few common use cases:

• Text fetched from the web frequently contains garbage characters.
• Some types of punctuation need to be removed.
• Stop words (e.g., the, a, but, etc.) need to be removed.
• Special unicode characters are not desired.
• Sometimes we want white space around punctuation to make tokenizing text easier.

Notice the module statement on line 1 of the following listing: I am exporting the functions cleanText and removeStopWords so they will be visible and available for use by any other modules that import this module. In line 6 we import intercalate, which joins a [String] (i.e., a list of strings) into a single string using a separator such as a space character; here is an example where instead of adding a space character between the joined strings, I add “*” characters:

The function cleanText removes garbage characters and makes sure that any punctuation characters are surrounded by white space (this makes it easier, for example, to determine sentence boundaries). Function removeStopWords removes common words like “a”, “the”, etc. from text.

This example should be extended with additional noise characters and stop words, depending on your application. The function cleanText simply uses substring replacements.

Let’s look more closely at removeStopWords, which takes a single argument s, expected to be a string. removeStopWords uses a combination of several functions to remove stop words from the input string. The function words is used to split the input string s into a list of words. Then the function filter is used to remove any words that match a specific condition. Here the condition is defined as a lambda function, which is passed as the first argument to the filter function. The lambda function takes a single argument x and returns a Boolean value indicating whether the word should be included in the output or not. The lambda function uses the function notElem to check whether the lowercased version of the word x is present in a predefined list of stop words. Finally, we use the function intercalate to join the remaining words back into a single string. The first argument to intercalate is the separator that should be used to join the words; in this case it’s a single space.
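The description above can be condensed into a runnable sketch (with a shortened stop word list of my own; the book's list is longer):

```haskell
import Data.Char (toLower)
import Data.List (intercalate)

stopWords :: [String]
stopWords = ["a", "an", "the", "but", "and"]

-- Split into words, drop stop words (case-insensitively), and rejoin.
removeStopWords :: String -> String
removeStopWords s =
  intercalate " " (filter (\x -> map toLower x `notElem` stopWords) (words s))

main :: IO ()
main = putStrLn (removeStopWords "The cat sat on a mat")
```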

Here is the output from this example:

We will continue working with text in the next chapter.

## Natural Language Processing Tools

The tools developed in this chapter are modules you can reuse in your programs. We will develop a command line program that reads a line of text from STDIN and writes semantic information as output to STDOUT. I have used this in a Ruby program by piping input text data to a forked process and reading the output, which is a semantic representation of the input text.

We will be using this example as an external dependency to a later example in the chapter Knowledge Graph Creator.

A few of the data files I provide in this example are fairly large. For example, the file PeopleDbPedia.hs, which builds a map from people’s names to the Wikipedia/DBPedia URIs for information about them, is 2.5 megabytes in size. The first time you run stack build in the project directory it will take a while, so you might want to start building the project in the directory NlpTool and let it run while you read this chapter.

Here are three examples using the NlpTool command line application developed in this chapter:

credit: news text from abcnews.com

### Resolve Entities in Text to DBPedia URIs

The code for this application is in the directory NlpTool.

The software and data in this chapter can be used under the terms of either the GPL version 3 license or the Apache 2 license.

There are several automatically generated Haskell formatted data files that I created using Ruby scripts operating on the Wikipedia data. For the purposes of this book I include these data-specific files for your use and enjoyment but we won’t spend much time discussing them. These files are:

• CityNamesDbpedia.hs
• CompanyNamesDbpedia.hs
• CountryNamesDbpedia.hs
• PeopleDbPedia.hs
• PoliticalPartyNamesDbPedia.hs
• UniversityNamesDbPedia.hs

As an example, let’s look at a small sample of data in PeopleDbPedia.hs:

There are 35,146 names in the file PeopleDbPedia.hs. I have built Haskell maps for eight different types of entity names: each map takes entity names (String) and maps them to relevant DBPedia URIs. Simple in principle, but a lot of work preparing the data. As I mentioned, we will use these data-specific files to resolve entity references in text.

The next listing shows the file Entities.hs. In lines 5-7 I import the entity mapping files I just described. In this example and later code I make heavy use of the Data.Map and Data.Set modules in the collections library (see the NlpTools.cabal file).

The function isSubsetOf defined in line 39 tests whether one collection is contained in another. The built-in function all applies a function or operator to all elements in a collection and returns True if the function or operator returns True for each element in the collection.
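The idea can be sketched with a list-based subset test built from all (an illustrative version of my own, not the exact definition in Entities.hs):

```haskell
-- True when every element of the first list appears in the second.
isSubsetOf :: Eq a => [a] -> [a] -> Bool
isSubsetOf smaller larger = all (`elem` larger) smaller

main :: IO ()
main = do
  print (["cat"] `isSubsetOf` ["the", "cat", "sat"])
  print (["dog"] `isSubsetOf` ["the", "cat", "sat"])
```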

The local utility function namesHelper defined in lines 41-53 is simpler than it looks. The function filter in line 42 applies the inline function in lines 43-45 (this function returns true for Maybe values that contain data) to a second list defined in lines 48-55. This second list is calculated by mapping an inline function over the input argument ngrams. The inline function looks up an ngram in a DBPedia map (passed as the second function argument) and returns the lookup value if it is not empty and if it is empty looks up the same ngram in a word map (last argument to this function).

The utility function namesHelper is then used to define functions to recognize company names, country names, people names, city names, broadcast network names, political party names, trade union names, and university names:

The following output is generated by running the test main function defined at the bottom of the file app/NlpTool.hs:

Note that entities that do not have Wikipedia/DBPedia entries are not recognized.

### Bag of Words Classification Model

The file Categorize.hs contains a simple bag of words classification model. To prepare the classification models, I collected a large set of labelled text. Labels were “chemistry”, “computers”, etc. I ranked words based on how often they appeared in training texts for a classification category, normalized by how often they appeared in all training texts. This example uses two auto-generated and data-specific Haskell files, one for single words and the other for two adjacent word pairs:

• Category1Gram.hs
• Category2Gram.hs

In NLP work, single words are sometimes called 1grams and two word adjacent pairs are referred to as 2grams. Here is a small amount of data from Category1Gram.hs:

Here is a small amount of data from Category2Gram.hs:

It is very common to use term frequencies for single words in classification models. One problem with using only single words is that the evidence any word gives for a classification is independent of the surrounding words in the text being evaluated. By also using word pairs (2grams) we pick up patterns like “not good” giving evidence for negative sentiment even though the word “good” appears in the text. For my own work, I have a huge corpus of 1gram, 2gram, 3gram, and 4gram data sets. For the purposes of the following example program, I am only using 1gram and 2gram data.

The following listing shows the file Categorize.hs. Before looking at the entire example, let’s focus on some of the functions I have defined for using the word frequency data to categorize text.

stemScoredWordList is used to create a mapping from 1grams to word relevance scores for each category. The keys are word stems.

Notice that “chemistri” is the stemmed version of “chemistry”, “bank” for “banks”, etc. stem2 is a 2gram frequency score by category mapping where the keys are word stems:

stem1 is like stem2, but for stemmed 1grams, not 2grams:

score is called with a list of words and a word value mapping. Here is an example:

This output is more than a little opaque. The pair (0, 8.2) means that the input words [“atom”, “molecule”] have a score of 8.2 for the category indexed at 0 and the pair (25, 2.4) means that the input words have a score of 2.4 for the category at index 25. The category at index 0 is chemistry and the category at index 25 is physics, as we can see by using the higher level function bestCategories1 that calculates categories for a word sequence using 1gram word data:
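The core of this kind of scoring can be sketched as summing per-word relevance values from a map (a hypothetical simplified version: the real score function returns index/score pairs for every category, while this sketch scores against a single category's map with made-up values):

```haskell
import qualified Data.Map as M
import Data.Maybe (fromMaybe)

-- hypothetical sketch: sum the relevance scores of a word list against a
-- single category's word-score map; words not in the map contribute 0
scoreWords :: [String] -> M.Map String Double -> Double
scoreWords ws m = sum [fromMaybe 0 (M.lookup w m) | w <- ws]

main :: IO ()
main = do
  let chemistry = M.fromList [("atom", 4.1), ("molecule", 4.1)]  -- invented scores
  print (scoreWords ["atom", "molecule", "the"] chemistry)
```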

The top level function bestCategories uses 1gram data. Here is an example of using it:

Notice that these words were also classified as category “health_nutrition” but with a low score of 1.2. The score for “chemistry” is almost an order of magnitude larger. bestCategories sorts return values in “best first” order.

splitWords is used to split a string into word tokens before calling bestCategories.

Here is the entire example in file Categorize.hs:

Here is the output:

Given that the variable s contains some test text, line 4 of this output was generated by evaluating bestCategories1 (splitWords s), lines 5-6 by evaluating bestCategories1stem (splitWords s), lines 7-8 from score (splitWords s) onegrams, line 9 from score (bigram_s (splitWords s)) twograms, line 10 from bestCategories2 (splitWords s), line 11 from bestCategories2stem (splitWords s), and lines 12-13 from bestCategories (splitWords s).

I called all of the utility functions in function main to demonstrate what they do, but in practice I just call the function bestCategories in my applications.

### Text Summarization

This application uses both the Categorize.hs code and the 1gram data from the last section. The algorithm I devised for this example is based on a simple idea: we categorize text and keep track of which words provide the strongest evidence for the highest ranked categories. We then return a few sentences from the original text that contain the largest numbers of these important words.

Lazy evaluation allows us in function summarize to define summaries of various numbers of sentences, but not all of these possible summaries are calculated.
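The idea behind the algorithm can be illustrated with a short sketch (hypothetical code, not the book's summarize implementation): rank sentences by how many of the important words they contain and keep the top few.

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- hypothetical sketch of the summarization idea: rank sentences (word
-- lists) by how many "important" words they contain, keep the top n
summarizeSketch :: Int -> [String] -> [[String]] -> [[String]]
summarizeSketch n important =
  take n . sortBy (comparing (Down . overlap))
  where
    overlap s = length (filter (`elem` important) s)

main :: IO ()
main =
  print (summarizeSketch 1 ["atom", "molecule"]
           [["the", "dog", "ran"], ["the", "atom", "and", "molecule"]])
```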

### Part of Speech Tagging

We close out this chapter with the Haskell version of my part of speech (POS) tagger that I originally wrote in Common Lisp, then converted to Ruby and Java. The file LexiconData.hs is similar to the lexical data files seen earlier: I am defining a map where keys are words and map values are POS tokens like NNP (proper noun), RB (adverb), etc. The file README.md contains a complete list of POS tag definitions.

The example code and data for this section is in the directory FastTag.

This listing shows a tiny representative part of the POS definitions in LexiconData.hs:

Before looking at the code example listing, let’s see how the functions defined in fasttag.hs work in a GHCi repl:

Function bigram takes a list of words and returns a list of word pairs. We need the word pairs because parts of the tagging algorithm need to see a word together with its preceding word. In an imperative language, I would loop over the words and for a word at index i I would also have the word at index i - 1. In a functional language we avoid explicit loops, and in this case we create a list of adjacent word pairs instead. I like this style of functional programming, but if you come from years of using imperative languages like Java and C++ it takes some getting used to.
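A likely formulation of bigram (an assumption; the book's definition may differ slightly) pairs each word with its successor using zip:

```haskell
-- pair each word with its successor; drop 1 (rather than tail) handles
-- the empty list safely
bigram :: [a] -> [(a, a)]
bigram ws = zip ws (drop 1 ws)

main :: IO ()
main = print (bigram ["the", "dog", "ran"])  -- [("the","dog"),("dog","ran")]
```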

tagHelper converts a word into a list of the word and its likely tag. substitute applies tagHelper to a list of words, getting the most probable tag for each word. The function fixTags will occasionally override the default word tags based on a few rules that are derived from Eric Brill’s paper A Simple Rule-Based Part of Speech Tagger.

Here is the entire example:

The README.md file contains the complete list of POS tag definitions. Here are the ones used in this example:

### Natural Language Processing Wrap Up

NLP is a large topic. I have attempted to show you just the few tricks that I use often and are simple to implement. I hope that you reuse the code in this chapter in your own projects when you need to detect entities, classify text, summarize text, and assign part of speech tags to words in text.

## Linked Data and the Semantic Web

I am going to show you how to query semantic web data sources on the web and provide examples of how you might use this data in applications. I have written two previous books on the semantic web, one covering Common Lisp and the other covering the JVM languages Java, Scala, Clojure, and JRuby. You can get free PDF versions on the book page of www.markwatson.com. If you enjoy the light introduction in this chapter then please do download a free copy of my semantic web book for more material on RDF, RDFS, and SPARQL.

I like to think of the semantic web and linked data resources as:

• A source of structured data on the web. These resources are called SPARQL endpoints.
• Data is represented by data triples: subject, predicate, and object. The subject of one triple can be the object of another triple. Predicates are relationships; a few examples: “owns”, “is part of”, “author of”, etc.
• Data that is accessed via the SPARQL query language.
• A source of data that may or may not be available. SPARQL endpoints are typically available for free use and they are sometimes unavailable. Although not covered here, I sometimes work around this problem by adding a caching layer to SPARQL queries (access key being a SPARQL query string, the value being the query results). This caching speeds up development and running unit tests, and sometimes saves a customer demo when a required SPARQL endpoint goes offline at an inconvenient time.

DBPedia is the semantic web version of Wikipedia. The many millions of data triples that make up DBPedia are mostly derived from the structured “info boxes” on Wikipedia pages.

As you are learning SPARQL use the DBPedia SPARQL endpoint to practice. As a practitioner who uses linked data, for any new project I start by identifying SPARQL endpoints for possibly useful data. I then interactively experiment with SPARQL queries to extract the data I need. Only when I am satisfied with the choice of SPARQL endpoints and SPARQL queries do I write any code to automatically fetch linked data for my application.

Pro tip: I mentioned SPARQL query caching. I sometimes cache query results in a local database, saving the returned RDF data indexed by the SPARQL query. You can also store the cache timestamp and refresh the cache every few weeks as needed. In addition to making development and unit testing faster, your applications will be more resilient.

In the last chapter “Natural Language Processing Tools” we resolved entities in natural language text to DBPedia (the semantic web SPARQL endpoint for Wikipedia) URIs. Here we will use some of these URIs to demonstrate fetching real world knowledge that you might want to use in applications.

### The SPARQL Query Language

Example RDF N3 triples (subject, predicate, object) might look like:

Elements of triples can be URIs or string constants. Triples are often written all on one line; I split this one across three lines to fit the page width. Here the subject is the URI for my web site, the predicate is a URI defining an ownership relationship, and the object is a string literal.

If you want to see details for any property or other URI you see, then “follow your nose” and open the URI in a web browser. For example, remove the brackets from the owner property URI http://dbpedia.org/ontology/owner and open it in a web browser. For working with RDF data programmatically, it is convenient to use full URIs. For humans reading RDF, the N3 notation is better because it supports defining standard URI prefixes for use as abbreviations; for example:

If you wanted to find all things that I own (assuming this data were in a public RDF repository, which it isn’t) then you might match the pattern:

And return all URIs matching the variable ?subject as the query result. This is the basic idea of making SPARQL queries.

The following SPARQL query will be implemented later in Haskell using the HSparql library:

In this last SPARQL query example, the triple patterns we are trying to match are inside a WHERE clause. Notice that in the two triple patterns, the subject field of each is the variable ?s. The first pattern matches all DBPedia triples with a predicate http://dbpedia.org/property/genre and an object equal to http://dbpedia.org/resource/Web_browser. We then find all triples with the same subject but with a predicate equal to http://xmlns.com/foaf/0.1/name.

Each result from this query will contain two values for variables ?s and ?name: a DBPedia URI for some thing and the name for that thing. Later we will run this query using Haskell code and you can see what the output might look like.
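Based on the two triple patterns just described, the query would look something like the following (a reconstruction, not necessarily identical to the book's listing; the LIMIT clause is my addition to keep results small):

```sparql
SELECT ?s ?name WHERE {
  ?s <http://dbpedia.org/property/genre> <http://dbpedia.org/resource/Web_browser> .
  ?s <http://xmlns.com/foaf/0.1/name> ?name .
}
LIMIT 20
```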

Sometimes when I am using a specific SPARQL query in an application, I don’t bother defining prefixes and just use URIs in the query. As an example, suppose I want to return the Wikipedia (or DBPedia) abstract for IBM. I might use a query such as:

If you try this query using the web interface for DBPedia SPARQL queries you get just one result because of the FILTER option that only returns English language results. You could also use the language tag “fr” for French results, “de” for German results, etc.
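A query of this form (a reconstruction based on the description above; the book's exact query may differ) might be:

```sparql
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/IBM> <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = 'en')
}
```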

### A Haskell HTTP Based SPARQL Client

One approach to querying the DBPedia SPARQL endpoint is to build an HTTP GET request, send it to the SPARQL endpoint server, and parse the returned XML response. We will start with this simple approach. You will recognize the SPARQL query from the last section:

The function buildQuery defined in lines 11-13 takes any SPARQL query, URL encodes it so it can be passed as part of a URI, and builds a query string for the DBPedia SPARQL endpoint. The returned data is in XML format. In lines 23-24 I am using the HXT parsing library to extract the names (values bound to the variable ?o in the query in line 17). I covered the use of the HandsomeSoup parsing library in the chapter Web Scraping.

We use runX to execute a series of operations on an XML document (the doc variable). We first select all binding elements in doc using the css function. Next we extract the value of the name attribute from each selected element using getAttrValue and also extract the text inside the element using the function deep. The &&& operator combines the name attribute value and the element text into a tuple.
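The pipeline can be tried on a literal XML fragment instead of a live SPARQL response (a sketch; the element structure shown is a simplified stand-in for real DBPedia result XML):

```haskell
import Text.HandsomeSoup (css)
import Text.XML.HXT.Core

-- a sketch of the extraction pipeline, run on a literal XML fragment
-- (simplified stand-in for a SPARQL XML response)
main :: IO ()
main = do
  let xml = "<sparql><result><binding name=\"o\">Firefox</binding></result></sparql>"
      doc = readString [withParseHTML yes, withWarnings no] xml
  pairs <- runX $ doc >>> css "binding" >>> (getAttrValue "name" &&& deep getText)
  print pairs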

In the main function, we use the utility function simpleHttp in line 20 to fetch the results as a ByteString and in line 21 we unpack this to a regular Haskell String.

### Querying Remote SPARQL Endpoints

We will write some code in this section to make the example query that gets the names of web browsers from DBPedia. In the last section we made a SPARQL query using fairly low level Haskell libraries. Here we will use the higher level library HSparql to build SPARQL queries and call the DBPedia SPARQL endpoint.

The example in this section can be found in SparqlClient/TestSparqlClient.hs. In the main function notice how I have commented out printouts of the raw query results. Because Haskell is type safe, extracting the values wrapped in query results requires knowing RDF element return types. I will explain this matching after the program listing:

Notes on matching result types of query results:

You will notice how I have commented out print statements in the last example. When trying new queries you need to print out the results in order to know how to extract the wrapped query results. Let’s look at a few examples:

If we print the value for sq1:

we see that inside a Just we have a list of lists. Each inner list is a Bound wrapping types defined in HSparql. We would unwrap sq1 using:

In a similar way I printed out the values of sq2 and sq3 to see the form of case statement I would need to unwrap them.

The output from this example with three queries to the DBPedia SPARQL endpoint is:

### Linked Data and Semantic Web Wrap Up

If you enjoyed the material on linked data and DBPedia then please do get a free copy of one of my semantic web books on my website book page as well as other SPARQL and linked data tutorials on the web.

Structured and semantically labelled data, when it is available, is much easier to process and use effectively than raw text and HTML collected from web sites.

## Web Scraping

In my past work I usually used the Ruby scripting language for web scraping, but as I use Haskell more often for projects both large and small, I am now using Haskell for web scraping, data collection, and data cleaning tasks. If you worked through the tutorial chapter on impure Haskell programming then you already know most of what you need to understand this chapter. Here we will walk through a few short examples of common web scraping tasks.

Before we start a tutorial about web scraping I want to point out that much of the information on the web is copyrighted. The first thing you should do is read the terms of service for web sites to ensure that your use of scraped data conforms with the wishes of the people or organizations who own the content and pay to run the scraped web sites.

As we saw in the last chapter on linked data there is a huge amount of structured data available on the web via web services, semantic web/linked data markup, and APIs. That said, you will frequently find text (usually HTML) that is useful on web sites. However, this text is often at least partially unstructured and in a messy and frequently changing format because web pages are meant for human consumption and making them easy to parse and use by software agents is not a priority of web site owners.

Note: It takes a while to fetch all of the libraries in the directory WebScraping so please do a stack build now to get these examples ready to experiment with while you read this chapter.

### Using the Wreq Library

The Wreq library is an easy way to fetch data from the web. The example in this section fetches DBPedia (i.e., the semantic web version of Wikipedia) data in JSON and RDF N3 formats, and also fetches the index page from my web site. I will introduce you to the Lens library for extracting data from data structures, and we will also use Lens in a later chapter when writing a program to play Blackjack.

We will be using the function get in the Network.Wreq module, which has the type signature:

We will be using the OverloadedStrings language extension to facilitate using both [Char] strings and ByteString data types. Note: In the GHCi repl you can use :set -XOverloadedStrings.

We use function get to return JSON data; here is a bit of the JSON data returned from calling get using the URI for my web site:

As an example, the Lens expression for extracting the response status code is (r is the IO Response data returned from calling get):

responseStatus digs into the top level response structure and statusCode digs further in to fetch the code 200. To get the actual contents of the web page we can use the responseBody function:

Here is the code for the entire example:

This example produces a lot of printout, so I am just showing a small bit here (the text from the body is not shown):

You might want to experiment in the GHCi repl with the get function and Lens. If so, this will get you started:
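A small self-contained sketch using the Wreq and Lens APIs discussed above (the URL is the example site used elsewhere in this chapter):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens
import Network.Wreq

-- a minimal sketch: fetch a page, then use lenses to pull out the
-- status code and the response body
main :: IO ()
main = do
  r <- get "http://markwatson.com"
  print (r ^. responseStatus . statusCode)
  print (r ^. responseBody)
```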

In the following section we will use the HandsomeSoup library for parsing HTML.

### Using the HandsomeSoup Library for Parsing HTML

We will now use the HandsomeSoup library to parse HTML. HandsomeSoup allows us to use CSS style selectors to extract specific elements from the HTML of a web page. The lower level HXT library models HTML (and XML) as a tree structure and provides an Arrow style interface for traversing tree structures and extracting data. Arrows are a generalization of monads for managing calculations in a context. I will touch upon just enough material on Arrows for you to understand the examples in this chapter. HandsomeSoup also provides a high level utility function fromUrl to fetch web pages; the type of fromUrl is:

We will not work directly with the tree structure of the returned data; we will simply use accessor functions to extract the data we need. Before looking at the example code listing, let’s look at this extraction process (doc is the tree structured data returned from calling fromUrl):

The runX function runs arrow computations for us. doc is a tree data structure, and css allows us to pattern match on specific HTML elements.

Here we are using CSS style selection for all “a” anchor HTML elements and digging into the element to return the element attribute “href” value for each “a” anchor element. In a similar way, we can select all “img” image elements and dig down into the matched elements to fetch the “src” attributes:

We can get the full body text:

The operator //> applied to the function getText will get all text in all nested elements inside the body element. If we had used the operator /> then we would only have fetched the text at the top level of the body element.
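Putting these selectors together gives a sketch like the following (a simplified version of the chapter's example; the URL is the example site used elsewhere in this chapter):

```haskell
import Text.HandsomeSoup
import Text.XML.HXT.Core

-- a sketch combining the selectors discussed above: link hrefs,
-- image srcs, and all nested body text
main :: IO ()
main = do
  let doc = fromUrl "http://markwatson.com"
  hrefs <- runX $ doc >>> css "a" ! "href"
  srcs  <- runX $ doc >>> css "img" ! "src"
  text  <- runX $ doc >>> css "body" //> getText
  mapM_ putStrLn (take 5 hrefs)
  mapM_ putStrLn (take 5 srcs)
  putStrLn (concat (take 3 text))
```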

Here is the full example source listing:

This example prints out several hundred lines; here is the first bit of output:

I find HandsomeSoup to be very convenient for picking apart HTML data fetched from web pages. Writing a good spider for any given web site is a process of understanding how the HTML for the site is structured and what information you need to collect. I strongly suggest that you work with the web page to be spidered open in a web browser, with “show source code” in another browser tab. Then open an interactive GHCi repl and experiment with the HandsomeSoup APIs to get the data you need.

### Web Scraping Wrap Up

There are many Haskell library options for web scraping and cleaning data. In this chapter I showed you just what I use in my projects.

The material in this chapter and the chapters on text processing and linked data should be sufficient to get you started using online data sources in your applications.

## Using Relational Databases

We will see how to use popular libraries for accessing the Sqlite and Postgres (also called PostgreSQL) databases in this chapter. I assume that you are already familiar with SQL.

### Database Access for Sqlite

We will use the sqlite-simple library in this section to access Sqlite databases, and the similar library postgresql-simple in the next section with Postgres.

There are other good libraries for database connectivity like Persistent but I like sqlite-simple and it has a gentle learning curve so that is what we will use here. You will learn the basics of database connectivity in this and the next section. Setting up and using sqlite is easy because the sqlite-simple library includes the compiled code for sqlite so configuration requires only the file path to the database file.

The type Only used in line 20 acts as a container for a single value and is defined in the sqlite-simple library. It can also be used to pass values for queries like:
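A sketch of both uses of Only, receiving single-column results and passing a single query parameter (the table and column names here are hypothetical):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.SQLite.Simple

-- sketch: Only wraps a single parameter on the way in and a
-- single-column row on the way out; table/column names are hypothetical
main :: IO ()
main = do
  conn <- open "test.db"
  names <- query conn "SELECT name FROM test WHERE id = ?"
             (Only (1 :: Int)) :: IO [Only String]
  mapM_ (\(Only name) -> putStrLn name) names
  close conn
```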

To run this example start by creating a sqlite database that is stored in the file test.db:

Then build and run the example:

### Database Access for Postgres

Setting up and using a database in the last section was easy because the sqlite-simple library includes the compiled code for sqlite, so configuration only requires the file path to the database file. The Haskell examples for Postgres will be similar to those for Sqlite. There is some complication in setting up Postgres if you do not already have it installed and configured.

In any case, you will need to have Postgres installed and set up with a user account for yourself. When I am installing and configuring Postgres on my Linux laptop, I create a database role markw. You will certainly create a different role/account name so substitute your role name for markw in the following code examples.

If you are using Ubuntu you can install Postgres and create a role using:

We will need to install postgresql-server-dev-9.5 in order to use the Haskell Postgres bindings. Note that your version of Ubuntu Linux may have a different version of the server dev package which you can find using:

If you are using Mac OS X you can then install Postgres as an application which is convenient for development. A role is automatically created with the same name as your OS X “short name.” You can use the “Open psql” button on the interface to open a command line shell that functions like the psql command on Ubuntu (or other Linux distributions).

You will then want to create a database named haskell and set the password for role/account markw to test1 for running the example in this section:

If you are not familiar with using Postgres then take a minute to experiment with the psql command line utility: connect to the database you just created and perform practice queries:

You can change default database settings using ConnectInfo:

In the following example on lines 9-10 I use defaultConnectInfo, which lets me override just some settings, leaving the rest at default values. The code to access a database using postgresql-simple is similar to that in the last section, with a few API changes.

The type Only used in line 20 acts as a container for a single value and is defined in the postgresql-simple library. It can also be used to pass values for queries like:

The monadic mapping function mapM_ used in line 22 is like mapM but is used when we do not need the resulting collection from the map operation. mapM_ is used for side effects, in this case extracting the value from each Only in a collection and printing it. I removed some output from building the example in the following listing:

Postgres is my default database and I use it unless there is a compelling reason not to. Work for specific customers has sometimes mandated alternative data stores (e.g., BigTable while working at Google and MongoDB at Compass Labs), but Postgres covers most of my needs: it supports relational tables, free text search, and structured data like JSON.

## Haskell Program to Play the Blackjack Card Game

For much of my work using Haskell I deal mostly with pure code with smaller bits of impure code for network and file IO, etc. Realizing that my use case for using Haskell (mostly pure code) may not be typical, I wanted the last example “cookbook recipe” in this book to be an example dealing with changing state, a program to play the Blackjack card game.

The game state is maintained in the type Table that holds information on a randomized deck of cards, the number of players in addition to the game user and the card dealer, the cards in the current hand, and the number of betting chips that all players own. Table data is immutable so all of the major game playing functions take a table and any other required inputs, and generate a new table as the function result.

This example starts by asking how many players, besides the card dealer and the game user, should play a simulated Blackjack game. The game user controls when they want another card while the dealer and any other simulated players play automatically (they always hit when their card score is less than 17).

I define the types for playing cards and an entire card deck in the file Card.hs:

As usual, the best way to understand this code is to go to the GHCi repl:

So, we have a sorted deck of cards and a utility function for returning the numerical value of a card (we always count ace cards as 11 points, deviating from standard Blackjack rules).

The next thing we need is randomly shuffled lists. The Haskell Wiki has a good writeup on randomizing list elements and we are borrowing their function randomizedList (you can see the source code in the file RandomizedList.hs). Here is a sample use:

Much of the complexity in this example is implemented in Table.hs which defines the type Table and several functions to deal and score hands of dealt cards:

• createNewTable :: Players -> Table. Players is the integer number of other players at the table.
• setPlayerBet :: Int -> Table -> Table. Given a new value to bet and a table, generate a new modified table.
• showTable :: Table -> [Char]. Given a table, generate a string describing the table (in a format useful for development)
• initialDeal :: [Card] -> Table -> Int -> Table. Given a randomized deck of cards, a table, and the number of other players, generate a new table.
• changeChipStack :: Int -> Int -> Table -> Table. Given a player index (index order: user, dealer, and other players), a new number of betting chips for the player, and a table, then generate a new modified table.
• setCardDeck :: [Card] -> Table -> Table. Given a randomized card deck and a table, generate a new table containing the new randomized card list; all other table data is unchanged.
• dealCards :: Table -> [Int] -> Table. Given a table and a list of player indices for players wanting another card, generate a new modified table.
• resetTable :: [Card] -> Table -> Int -> Table. Given a new randomized card deck, a table, and a new number of other players, generate a new table.
• scoreHands :: Table -> Table. Given a table, score all dealt hands and generate a new table with these scores. There is no score field in the table type; rather, we “score” by changing the number of chips each player (including the dealer) has.
• dealCardToUser :: Table -> Int -> Table. For the game user, always deal a card. For the dealer and other players, deal another card if their hand score is less than 17.
• handOver :: Table -> Bool. Determine if the current hand is over.
• setPlayerPasses :: Table -> Table. Call this function when the player passes. The other players and the dealer are then played out automatically.

The implementation in the file Table.hs is fairly simple, with the exception of the use of Haskell lenses to access nested data in the table type. I will discuss the use of lenses after the program listing, but as you read the code, look for variables starting with the underscore character _, which alerts the Lens system that it should create data accessors for these variables:

In line 48 we use the function makeLenses to generate access functions for the type Table. We will look in some detail at lines 54-56 where we use the lens function over to modify a nested value in a table, returning a new table:

The expression in line 3 evaluates to a partial function that takes another argument, a table, and returns a new table with the card deck modified. Function over expects a function as its second argument. In this example, the inline function ignores the argument it is called with, which would be the old card deck value, and returns the new card deck value which is placed in the table value.

Using lenses can greatly simplify the code to manipulate complex types.
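The pattern can be shown with a stripped-down sketch (the real Table type has more fields and uses the Card type; here the deck is simplified to a list of Int):

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Control.Lens

-- stripped-down sketch of the Table.hs pattern; the real type has more
-- fields and uses Card instead of Int
data Table = Table
  { _cardDeck  :: [Int]
  , _playerBet :: Int
  } deriving Show

makeLenses ''Table

-- over applies a function to the focused field; const ignores the old
-- deck and installs the new one, returning a new table
setCardDeck :: [Int] -> Table -> Table
setCardDeck newDeck = over cardDeck (const newDeck)

main :: IO ()
main = print (setCardDeck [1, 2, 3] (Table [] 10))
```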

Another place where I am using lenses is in the definition of function scoreHands (lines 88-109). On line 109 we are using the over function to replace the old player betting chip counts with the new value we have just calculated:

Similarly, we use over in line 113 to change the current player bet. In function handOver on line 157, notice how I am using the generated function _userPasses to extract the value of the user passes boolean flag from a table.

The function main, defined in the file Main.hs, uses the code we have just seen to represent and modify a table, and is fairly simple. A main game loop repeatedly accepts game user input and calls the appropriate functions to modify the current table, producing a new table. Remember that the table data is immutable: we always generate a new table from the old table when we need to modify it.

I encourage you to try playing the game yourself, but if you don’t, here is a sample game:

Here the game user has four cards with values of [10,6,3,2] for a winning score of 21. The dealer has [10,7] for a score of 17 and the other player has [8,10,6], a value greater than 21 so the player went “bust.”

I hope that you enjoyed this last example that demonstrates a reasonable approach for managing state when using immutable data.

## Section 3 - Larger Projects

This section is new for the second edition of this book. So far we have covered the basics of Haskell programming and seen many examples. In this section we look at a few new projects that I derived from my own work and these new examples will hopefully further encourage you to think of novel uses for Haskell in your own work.

The project knowledge_graph_creator helps to automate the process of creating Knowledge Graphs from raw text input and generates data for both the Neo4J open source graph database as well as RDF data for use in semantic web and linked data applications. I have also implemented this same application in Common Lisp that is also a new example in the latest edition of my book Loving Common Lisp, Or The Savvy Programmer’s Secret Weapon (released September 2019).

The next two chapters in this section are similar: both use Python for Natural Language Processing (NLP) tasks, wrap the Python code as a REST service, and then implement Haskell clients for these services.

The project HybridHaskellPythonNlp uses web services written in Python for natural language processing. The Python web services use the SpaCy library.

The project HybridHaskellPythonCorefAnaphoraResolution uses web services written in Python to allow Haskell applications to use deep learning models created with TensorFlow and Keras.

In these last two examples I use REST APIs to access code written in Python. A good alternative that I don’t cover in this book is using the servant library for generating distributed applications.

## Knowledge Graph Creator

The large project described here processes raw text inputs and generates data for knowledge graphs in formats for both the Neo4J graph database and in RDF format for semantic web and linked data applications.

This application works by identifying entities in text. Example entity types are people, companies, country names, city names, broadcast network names, political party names, and university names. We saw code for detecting entities earlier, in the chapter on natural language processing (NLP), and we will reuse that code. Later we will discuss three strategies for reusing code from different projects.

The following figure shows part of a Neo4J Knowledge Graph created with the example code. The graph in this figure shows shortened labels in the displayed nodes; Neo4J offers a web browser-based console that lets you interactively explore Knowledge Graphs. We don’t cover setting up Neo4J here so please refer to the Neo4J documentation. As an introduction to RDF data, the semantic web, and linked data you can get free copies of my two books Practical Semantic Web and Linked Data Applications, Common Lisp Edition and Practical Semantic Web and Linked Data Applications, Java, Scala, Clojure, and JRuby Edition.

There are two versions of this project that deal with generating duplicate data in two ways:

• As either Neo4J Cypher data or RDF triples data are created, store generated data in a SQLite embedded database. Check this database before writing new output data.
• Ignore the problem of generating duplicate data and filter out duplicates in the outer processing pipeline that uses the Knowledge Graph Creator as one processing step.

For my own work I chose the second method since filtering duplicates is as easy as a few Makefile targets (the following listing is in the file Makefile in the directory haskell_tutorial_cookbook_examples/knowledge_graph_creator_pure):

The Haskell KGCreator application we develop here writes the output files out.n3 (N3 is an RDF data format) and out.cypher (Cypher is the import format and query language for the Neo4J open source and commercial graph database). The awk commands remove duplicate lines and write de-duplicated data to output.n3 and output.cypher.

We will use this second approach but the next section provides sufficient information and a link to alternative code in case you are interested in using SQLite to prevent duplicate data generation.

#### Notes for Using SQLite to Avoid Duplicates (Optional Material)

We saw two methods for avoiding duplicates in generated data in the last section. Implementing the first method is left as an exercise, but here are some notes to get you started: modify the example code to use the utility module BlackBoard in the directory knowledge_graph_creator_pure/src/fileutils and implement the logic seen below for checking whether newly generated data is already in the SQLite database. The first method is also a good example of wrapping the embedded SQLite library in the IO monad. If you prefer the second method, you can skip the rest of this section.

Before you write either an RDF statement or a Neo4J Cypher data import statement, check to see if the statement has already been written using something like:

and after writing an RDF statement or a Neo4J Cypher data import statement, record it in the temporary SQLite database using something like:
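A minimal sketch of this check-then-write logic, assuming the sqlite-simple library and a hypothetical single-column table named seen (the function names here are mine, not from the example code):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.SQLite.Simple (Connection, Only (..), execute, execute_, query)

-- Create the scratch table if it does not already exist:
initSeenTable :: Connection -> IO ()
initSeenTable conn =
  execute_ conn "CREATE TABLE IF NOT EXISTS seen (stmt TEXT PRIMARY KEY)"

-- Check whether a generated statement was already written:
alreadyWritten :: Connection -> String -> IO Bool
alreadyWritten conn stmt = do
  rows <- query conn "SELECT stmt FROM seen WHERE stmt = ?" (Only stmt) :: IO [Only String]
  return (not (null rows))

-- Record a statement after writing it to the output file:
recordWritten :: Connection -> String -> IO ()
recordWritten conn stmt =
  execute conn "INSERT OR IGNORE INTO seen (stmt) VALUES (?)" (Only stmt)
```

With helpers along these lines, the generation code would call alreadyWritten before emitting each statement and recordWritten after emitting it.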

For the rest of the chapter we will use the approach of not keeping track of generated data in SQLite and instead remove duplicates during postprocessing using the standard awk command line utility.

This section is optional. In the rest of this chapter we use the example code in knowledge_graph_creator_pure.

### Code Layout for the KGCreator Project and Strategies for Sharing Haskell Code Between Projects

We will reuse the code for finding entities that we studied in an earlier chapter. There are several ways to reuse code from multiple local Haskell projects:

• In a project’s cabal file, use relative paths to the source code for other projects. This is my preferred way to work, but it has the drawback that the stack command sdist (which makes a distribution tarball) will not work with relative paths. If this is a problem for you, then create relative symbolic file links to the source directories in the other projects.
• In your project’s stack.yaml file, add the other project’s name and path as an extra-deps entry.
• In library projects, define a packages definition and install the library globally on your system.

I almost always use the first method for projects that depend on other local projects I work on, and it is also the approach we use here. The relevant lines in the file KGCreator.cabal are:

This is a standard-looking cabal file except for lines 37 and 38, where the source paths reference the example code for the NlpTool application developed in a previous chapter. The exposed module BlackBoard (line 8) is not used, but I leave it in the cabal file in case you want to experiment with recording generated data in SQLite to avoid data duplication. You are also likely to want to use BlackBoard if you modify this example to continuously process incoming data in a production system. This is left as an exercise.

Before going into too much detail on the implementation let’s look at the layout of the project code:

As mentioned before, we are using the Haskell source files in the relative path ../NlpTool/src/… as well as the local src directory. We discuss this code in the next few sections.

### The Main Event: Detecting Entities in Text

A primary task in KGCreator is identifying entities (people, places, etc.) in text. We then create RDF and Neo4J Cypher data statements using these entities, knowledge of the origin of the text data, and general relationships between entities.

We will use the top level code that we developed earlier that is located in the directory ../NlpTool/src/nlp (please see the chapter Natural Language Processing Tools for more detail):

• Categorize.hs - categorizes text into categories like news, religion, business, politics, science, etc.
• Entities.hs - identifies entities like people, companies, places, broadcast networks, labor unions, etc. in text
• Summarize.hs - creates an extractive summary of text

The KGCreator Haskell application looks in a specified directory for text files to process. For each file with a .txt extension there should be a matching file with the extension .meta that contains a single line: the URI of the web location where the corresponding text was found. The reason we need this is that we want to create graph knowledge data from information found in text sources and the original location of the data is important to preserve. In other words, we want to know where the data elements in our knowledge graph came from.
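This file-pairing convention can be expressed as a small pure function. The sketch below is illustrative (the function names are mine, not from the example code): given a directory listing, it returns the .txt files that have a matching .meta file.

```haskell
import Data.List (isSuffixOf, sort)

-- For "article1.txt" the matching meta file is "article1.meta".
metaFileFor :: FilePath -> FilePath
metaFileFor txtPath = take (length txtPath - 4) txtPath ++ ".meta"

-- Keep only the .txt files that have a matching .meta file,
-- pairing each text file with its meta file:
pairTextAndMeta :: [FilePath] -> [(FilePath, FilePath)]
pairTextAndMeta files =
  [ (f, metaFileFor f)
  | f <- sort files
  , ".txt" `isSuffixOf` f
  , metaFileFor f `elem` files
  ]
```

A .txt file with no matching .meta file is simply skipped, since without a source URI we cannot record where the knowledge graph data came from.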

We have not yet looked at an example of using command line arguments, so let’s go into some detail on how we do this. Previously, when we defined an output target executable in our .cabal file (in this case KGCreator-exe), we could use stack to build the executable and run it with:

Now, we have an executable that requires two arguments: a source input directory and the file root for generated RDF and Cypher output files. We can pass command line arguments using this notation:

The two command line arguments are:

• test_data which is the file path of a local directory containing the input files
• outtest which is the root file name for generated Neo4J Cypher and RDF output files

If you are using KGCreator in production, then you will want to copy the compiled and linked executable file KGCreator-exe to somewhere on your PATH like /usr/local/bin.

The following listing shows the file app/Main.hs, the main program for this example that handles command line arguments and calls two top level functions in src/toplevel/Apis.hs:

Here we use getArgs in line 8 to fetch a list of command line arguments and verify that at least two arguments have been provided. We then call the functions processFilesToRdf and processFilesToNeo4j; we will look at these functions and the helpers they call in the next three sections.
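The argument-handling pattern can be sketched as follows. This is a hedged outline, not the actual Main.hs: the helper name parseArgs and the usage message are mine, and the real program calls processFilesToRdf and processFilesToNeo4j where this sketch just prints.

```haskell
import System.Environment (getArgs)

-- Validate the command line: we need an input directory and an output file root.
parseArgs :: [String] -> Maybe (FilePath, FilePath)
parseArgs (inDir : outRoot : _) = Just (inDir, outRoot)
parseArgs _                     = Nothing

main :: IO ()
main = do
  args <- getArgs
  case parseArgs args of
    Nothing ->
      putStrLn "usage: KGCreator-exe <input directory> <output file root>"
    Just (inDir, outRoot) -> do
      -- In the real program these calls are processFilesToRdf and processFilesToNeo4j:
      putStrLn ("processing files in " ++ inDir)
      putStrLn ("writing output files starting with " ++ outRoot)
```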

### Utility Code for Generating RDF

The code for generating RDF and for generating Neo4J Cypher data is similar. We start with the code to generate RDF triples. Before we look at the code, let’s start with a few lines of generated RDF:

The next listing shows the file src/sw/GenTriples.hs that finds entities like broadcast network names, city names, company names, people’s names, political party names, and university names in text and generates RDF triple data. If you need to add more entity types for your own applications, then use the following steps:

• Look at the format of entity data for the NlpTool example and add names for the new entity type you are adding.
• Add a utility function to find instances of the new entity type to NlpTools. For example, if you are adding a new entity type “park names”, then copy the code for companyNames to parkNames, modify as necessary, and export parkNames.
• In the following code, add new code for the new entity helper function after lines 10, 97, 151, and 261. Use the code for companyNames as an example.

The map category_to_uri_map created in lines 36 to 84 maps a topic name to a linked data URI that describes the topic. For example, we would not refer to an information source as being about the topic “economics”, but would instead refer to a linked data URI like http://knowledgebooks.com/schema/topic/economics. The utility function uri_from_category takes a text description of a topic like “economy” and converts it to an appropriate URI using the map category_to_uri_map.
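The idea can be sketched with a few entries, assuming the containers library. The entries and the fallback “unknown” URI below are my own illustrations; the real map in GenTriples.hs has many more entries and may handle missing topics differently.

```haskell
import qualified Data.Map.Strict as M

-- A few illustrative entries; the real map covers many more topics.
categoryToUriMap :: M.Map String String
categoryToUriMap = M.fromList
  [ ("economics", "<http://knowledgebooks.com/schema/topic/economics>")
  , ("politics",  "<http://knowledgebooks.com/schema/topic/politics>")
  , ("sports",    "<http://knowledgebooks.com/schema/topic/sports>")
  ]

-- Convert a plain-text topic name to a linked data URI, falling back
-- to a generic "unknown" URI for topics not in the map:
uriFromCategory :: String -> String
uriFromCategory topic =
  M.findWithDefault "<http://knowledgebooks.com/schema/topic/unknown>"
                    topic categoryToUriMap
```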

The utility function textToTriple takes a path to a text input file and a path to the corresponding meta file, calculates the text string representing the generated triples for the input text file, and returns the result wrapped in the IO monad.
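To make the shape of the generated output concrete, here is a sketch of the kind of pure formatting helper such code relies on. The function name, predicate URI, and example values are mine, chosen for illustration only.

```haskell
-- Format one RDF triple in N3 style. The subject, predicate, and object
-- strings are assumed to already be wrapped in angle brackets or quoted.
formatTriple :: String -> String -> String -> String
formatTriple subj pred obj = unwords [subj, pred, obj, "."]

-- Hypothetical example: recording that a source document is about a topic.
exampleTriple :: String
exampleTriple =
  formatTriple "<http://example.com/news/1>"
               "<http://knowledgebooks.com/schema/aboutTopic>"
               "<http://knowledgebooks.com/schema/topic/economics>"
```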

The code in this file could be shortened but having repetitive code for each entity type hopefully makes it easier for you to understand how it works.

### Utility Code for Generating Cypher Input Data for Neo4J

Now we will generate Neo4J Cypher data. In order to keep the implementation simple, both the RDF and Cypher generation code starts with raw text and performs the NLP analysis to find entities. This example could be refactored to perform the NLP analysis just one time, but in practice you will likely be working with either RDF or Neo4J, so you will probably extract just the code you need from this example (i.e., either the RDF or the Cypher generation code).

Before we look at the code, let’s start with a few lines of generated Neo4J Cypher import data:

The following listing shows the file src/sw/GenNeo4jCypher.hs. This code is very similar to the code for generating RDF in the last section. The notes in the last section for adding your own new entity types are also relevant here.

Notice that in line 29 we import the map category_to_uri_map that was defined in the last section. The function neo4j_category_node_defs, defined in lines 35 to 43, creates a category graph node for each category in the map category_to_uri_map. These nodes are referenced by graph nodes created in the functions create_neo4j_node, create_neo4j_lin, create_summary_node, and create_entity_node. The top level function is textToCypher, which is similar to the function textToTriples in the last section.
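A sketch of the kind of string formatting this involves is shown below. The node label and property name are hypothetical, not the exact ones used in GenNeo4jCypher.hs, and a production version would also escape quotes in the name.

```haskell
-- Build a Cypher CREATE statement for a category node. The label
-- "CategoryType" and property "name" are illustrative placeholders.
createCategoryNode :: String -> String
createCategoryNode name =
  "CREATE (" ++ name ++ ":CategoryType {name:\"" ++ name ++ "\"})"
```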

Because the top level function textToCypher returns a string wrapped in the IO monad, it is possible to add “debug” print statements in textToCypher. I left many such debug statements in the example code to help you understand the data being operated on. I leave it as an exercise to remove these print statements if you use this code in your own projects and no longer need the debug output.

### Top Level API Code for Handling Knowledge Graph Data Generation

So far we have looked at processing command line arguments and processing individual input files. Now we look at higher level utility APIs for processing an entire directory of input files. The following listing shows the file Apis.hs, which contains the two top level helper functions we saw in app/Main.hs.

The functions processFilesToRdf and processFilesToNeo4j both have the type signature FilePath -> FilePath -> IO () and are very similar, differing only in the helper functions they call to generate RDF triples or Cypher input graph data:
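Their shared shape can be sketched like this, assuming the directory library and parameterizing over the per-file worker function. The names txtFilesIn and processFilesWith are mine, introduced only for this sketch.

```haskell
import Data.List (isSuffixOf)
import System.Directory (listDirectory)

-- Pure helper: select the .txt files from a directory listing and
-- prepend the directory path.
txtFilesIn :: FilePath -> [FilePath] -> [FilePath]
txtFilesIn dir files = [dir ++ "/" ++ f | f <- files, ".txt" `isSuffixOf` f]

-- Apply a per-file generator (for example, a function like textToTriple)
-- to every .txt file in a directory, concatenating the results into
-- a single output file:
processFilesWith :: (FilePath -> IO String) -> FilePath -> FilePath -> IO ()
processFilesWith generate inDir outFile = do
  files <- listDirectory inDir
  results <- mapM generate (txtFilesIn inDir files)
  writeFile outFile (concat results)
```

With this shape, the RDF and Cypher variants differ only in which generator function is passed in.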

Since both of these functions return values in the IO monad, it is possible to add “debug” print statements that are helpful in understanding the data being operated on.

### Wrapup for Automating the Creation of Knowledge Graphs

The code in this chapter will provide you with a good start for creating both test knowledge graphs and for generating data for production. In practice, generated data should be reviewed before use and additional data manually generated as needed. It is good practice to document required manual changes because this documentation can be used in the requirements for updating the code in this chapter to more closely match your knowledge graph requirements.

## Hybrid Haskell and Python Natural Language Processing

Here we will write a Haskell client for a Natural Language Processing (NLP) server written in Python. There is some common material in this chapter and the next chapter, Hybrid Haskell and Python For Coreference Resolution, because I wanted both chapters to be self-contained.

### Example Use of the Haskell NLP Client

Before learning how to set up the Python NLP server and understanding the Haskell client code, let’s look at an example of running the client so you understand the type of processing that we are performing:

Notice on line 5 that each of the three entities is tagged with the entity type. GPE is the tag for a country and the tag ORG can refer to an entity that is a company or a non-profit organization.

There is some overlap in functionality between the Python SpaCy NLP library and my pure Haskell code in the NLP Tools chapter. SpaCy has the advantage of using state-of-the-art deep learning models.

### Setting up the Python NLP Server

I assume that you have some familiarity with using Python. If not, you will still be able to follow these directions as long as you have the utilities pip and python installed. I recommend installing Python and pip using Anaconda.

The server code is in the subdirectory HybridHaskellPythonNlp/python_spacy_nlp_server, where you will work when performing a one-time initialization. After the server is installed you can run it from the command line from any directory on your laptop.

I recommend that you use virtual Python environments to separate the dependencies required for each application or development project. Here I assume that you are running in a Python 3.6 (or higher) environment. First install the dependencies:

Then change directory to the subdirectory HybridHaskellPythonNlp/python_spacy_nlp_server and install the NLP server:

Once you install the server, you can run it from any directory on your laptop or server using:

I use deep learning models written in Python with TensorFlow or PyTorch in applications I write in Haskell or Common Lisp. While it is possible to embed models directly in Haskell and Common Lisp programs, I find it much easier and more developer friendly to wrap the deep learning models I use as REST services, as I have done here. Deep learning models often require only about a gigabyte of memory, and using pre-trained models is light on CPU resources, so while developing on my laptop I might have two or three models running and available as wrapped REST services. For production, I configure both the Python services and my Haskell and Common Lisp applications to start automatically on system startup.

This is not a Python programming book, so I will not discuss the simple Python wrapping code, but if you are also a Python developer you can easily read and understand it.

### Understanding the Haskell NLP Client Code

The Python server returns JSON data. We saw earlier the use of the Haskell aeson library for parsing JSON data stored in a string into native Haskell data, and we used the wreq library to access remote web services. We use both of these libraries here:
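As a hedged sketch of how these two libraries fit together in a client like this one (the host, port, route, and form parameter name below are assumptions for illustration; see the example source for the actual values):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((^.))
import Data.Aeson (Value)
import Network.Wreq

-- POST text to the local NLP service and decode the JSON response
-- into a generic aeson Value. The URL and parameter name are placeholders.
nlpRequest :: String -> IO Value
nlpRequest text = do
  r <- asJSON =<< post "http://127.0.0.1:8008/nlp" ["text" := text]
  return (r ^. responseBody)
```

In real client code you would usually decode into an application-specific record type with a FromJSON instance instead of a raw Value.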

The following listing shows the main command line program that uses the client library:

### Wrapup for Using the Python SpaCy NLP Service

The example in this chapter shows a technique that I often use for working with libraries and frameworks that are not written in Haskell: wrap the functionality implemented in another programming language as a REST web service. While it is possible to use a foreign function interface (FFI) to call out to code written in other languages, for my own work I prefer calling out to a separate service, especially when I run other services on remote servers so I do not need to run them on my development laptop. For production it is also useful to be able to easily scale horizontally across servers.

## Hybrid Haskell and Python For Coreference Resolution

Here we will write a Haskell client for a server written in Python that performs coreference resolution (more on this later). There is some common material in this chapter and the last chapter, Hybrid Haskell and Python Natural Language Processing, because I wanted both chapters to be self-contained. The code for this chapter is in the subdirectory HybridHaskellPythonCorefAnaphoraResolution.

Coreference resolution, also called anaphora resolution, is the process of replacing pronouns in text with the original nouns, proper nouns, or noun phrases that the pronouns refer to.

Before discussing setting up the Python library for performing coreference analysis and the Haskell client, let’s run the client so you can see and understand anaphora resolution:

In this example notice that the words “He” and “it” in the second sentence are replaced by “John Smith” and “a car” which makes it easier to write information extraction applications.

### Installing the Python Coreference Server

I recommend that you use virtual Python environments to separate the dependencies required for each application or development project. Here I assume that you are running in a Python 3.6 (or higher) environment. If you want to install the neuralcoref library using pip you must use an older version of spaCy. First install the dependencies:

As I write this chapter, the neuralcoref model and library require a slightly older version of spaCy (the current latest version of spaCy is 2.3.0).

After installing all dependencies, change directory to the subdirectory python_coreference_anaphora_resolution_server and install the coref server:

Once you install the server, you can run it from any directory on your laptop or server using:

I use deep learning models written in Python with TensorFlow or PyTorch in applications I write in Haskell or Common Lisp. While it is possible to embed models directly in Haskell and Common Lisp programs, I find it much easier and more developer friendly to wrap the deep learning models I use as REST services, as I have done here. Deep learning models often require only about a gigabyte of memory, and using pre-trained models is light on CPU resources, so while developing on my laptop I might have two or three models running and available as wrapped REST services. For production, I configure both the Python services and my Haskell and Common Lisp applications to start automatically on system startup.

This is not a Python programming book, so I will not discuss the simple Python wrapping code, but if you are also a Python developer you can easily read and understand it.

### Understanding the Haskell Coreference Client Code

The code for the library for fetching data from the Python service is in the subdirectory src in the file CorefWebClient.hs.

We use the wreq library to access the remote web service and the lens library to access the response from the Python server. Here the response is plain text with pronouns replaced by the nouns they represent, so we do not use the aeson library to parse JSON data as we did in the previous chapter.
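A hedged sketch of this plain-text client pattern follows. The host, port, route, and query parameter name are assumptions for illustration only; check the Python server code for the actual values.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~), (^.))
import qualified Data.ByteString.Lazy.Char8 as LB
import qualified Data.Text as T
import Network.Wreq (defaults, getWith, param, responseBody)

-- GET the coreference service with the input text as a query parameter
-- and return the plain-text response body as a String.
corefRequest :: String -> IO String
corefRequest text = do
  let opts = defaults & param "text" .~ [T.pack text]
  r <- getWith opts "http://127.0.0.1:8000/coref"
  return (LB.unpack (r ^. responseBody))
```

Using wreq's param lens handles URL encoding of the query string for us, which matters here because the input is free-form English text.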

The code for the main application is in the subdirectory app in the file Main.hs.

### Wrapup for Using the Python Coreference NLP Service

The example in this chapter is fairly simple but shows a technique that I often use for working with libraries and frameworks that are not written in Haskell: wrap the functionality implemented in another programming language as a REST web service. While it is possible to use a foreign function interface (FFI) to call out to code written in other languages, for my own work I prefer calling out to a separate service, especially when I run other services on remote servers so I do not need to run them on my development laptop. For production it is also useful to be able to easily scale horizontally across servers.

## Book Wrap Up

As I mentioned in the Preface, I had a slow start learning Haskell because I tried to learn too much at one time. In this book I have attempted to show you a subset of Haskell that is sufficient to write interesting programs - a gentle introduction.

Haskell beginners often dislike the large error listings from the compiler. The correct attitude is to recognize that these error messages are there to help you. That is easier said than done, but try to be happy when the compiler points out an error - in the long run I find using Haskell’s fussy compiler saves me time and lets me refactor code knowing that if I miss something in my refactoring the compiler will immediately let me know what needs to be fixed.

The other thing that I hope you learned working through this book is how effective REPL-based programming is. Most code I write, unless it is very trivial, starts its life in a GHCi REPL. When you are working with someone else’s Haskell code it is similarly useful to have their code loaded in a REPL as you read it.

I have been programming professionally for forty years and I use many programming languages. Once I worked my way through early difficulties using Haskell it has become a favorite programming language. I hope that you enjoy Haskell development as much as I do.

## Appendix A - Haskell Tools Setup

If you are new to Haskell, I recommend that you at least do a minimal installation of stack and work through the first chapter using an interactive REPL. After experimenting with the REPL, please come back to Appendix A and install support for the editor of your choice (or an IDE) and hlint.

### stack

I assume that you have the Haskell package manager stack installed. If you have not installed stack yet please follow these directions.

After installing stack and running it, you will have a directory “.stack” in your home directory where stack keeps compiled libraries and configuration data. You will want to create a file “~/.stack/config.yaml” with contents similar to my stack configuration file:
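The exact contents will vary; a minimal config.yaml along these lines sets the author information that stack uses when creating new projects (the name, email, and username below are placeholders to replace with your own):

```yaml
templates:
  params:
    author-name: Your Name
    author-email: you@example.com
    copyright: 'Copyright (c) 2019 Your Name'
    github-username: yourname
```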

Replace my name and email address with yours. You might also want to install the package manager Cabal and the “lint” program hlint:

These installs might take a while so go outside for ten minutes and get some fresh air.

You should get in the habit of running hlint on your code and consider trying to remove all or at least most warnings. You can customize the types of warnings hlint shows: read the documentation for hlint.

#### Creating a New Stack Project

I have already created stack projects for the examples in this book. When you have worked through them, then please refer to the stack documentation for creating projects.

### Emacs Setup

There are several good alternatives to using the Emacs editor:

• GEdit on Linux
• TextMate on OS X
• IntelliJ with the Haskell plugin (all platforms)

I use all three of these alternatives on occasion, but Emacs with haskell-mode is my favorite environment. There are instructions for adding haskell-mode to Emacs on the project home page on GitHub. If you follow these instructions you will have syntax highlighting and Emacs will understand Haskell indentation rules.

### Do you want more of an IDE-like Development Environment?

I recommend and use the Intero Emacs package to get auto completions and real time syntax error warnings. Intero is designed to work with stack.

I add the following to the bottom of my .emacs file: