Loving Common Lisp, or the Savvy Programmer's Secret Weapon
Mark Watson
Buy on Leanpub

Table of Contents

Copyright 2011-2023 Mark Watson. All rights reserved. This book may be shared using the Creative Commons “share and share alike, no modifications, no commercial reuse” license.

This eBook will be updated occasionally so please periodically check the leanpub.com web page for this book for updates.

If you read my eBooks free online then please consider tipping me https://markwatson.com/#tip.

This is the eighth edition released August 2022.

Please visit the author’s website.

If you found a copy of this book on the web and find it of value then please consider buying a copy at leanpub.com/lovinglisp.

If you would like to support my work please consider purchasing my books on Leanpub and star my git repositories that you find useful on GitHub. You can also interact with me on social media on Mastodon and Twitter.

Preface

Notes on the Eighth Edition Published August 2022

The main change is splitting the Knowledge Graph Navigator (KGN) chapter that features the LispWorks CAPI UI APIs into three chapters for a library for KGN functionality, a text based (console) UI, and a CAPI based UI. I added examples using the OpenAI GPT-3 APIs. There are other small corrections and improvements.

Notes on the Seventh Edition Published March 2021

I added two short chapters to the previous edition: Knowledge Graph Sampler for Creating Small Custom Knowledge Graphs and Using Common Lisp With Wolfram/One.

Notes on the Sixth Edition Published June 2020

Two examples optionally use the CAPI user interface toolkit provided with LispWorks Common Lisp and work with the free personal edition. The first CAPI application is Knowledge Graph Navigator and the second CAPI example is Knowledge Graph Creator. Both of these examples build up utilities for working with Knowledge Graphs and the Semantic Web.

I expanded the Plot Library chapter to generate either PNG graphics files or, if you are using the free personal edition of LispWorks, to direct plotting output to a new window in interactive programs.

I added a new chapter on using the py4cl library to embed Python libraries and application code into a Common Lisp system. I provide new examples for embedding spaCy and TensorFlow applications in Common Lisp applications. In earlier editions, I used a web services interface to wrap Python code using spaCy and TensorFlow. I am leaving that chapter intact, renaming it from “Using Python Deep Learning Models In Common Lisp” to “Using Python Deep Learning Models In Common Lisp With a Web Services Interface.” The new chapter for this edition is “Using the PY4CL Library to Embed Python in Common Lisp.”

Notes on the Fifth Edition Published September 2019

There were two chapters added:

  • A complete application for processing text to generate data for Knowledge Graphs (targeting the open source Neo4J graph database and also supporting RDF semantic web/linked data).
  • A library for accessing the state of the art spaCy natural language processing (NLP) library and also a state of the art deep learning model. These models are implemented in thin Python wrappers that use Python libraries like spaCy, PyTorch, and TensorFlow. These examples replace a simple hybrid Java and Common Lisp example in previous editions.

I have added text and explanations as appropriate throughout the book and I removed the CouchDB examples.

I have made large changes to how the code for this book is packaged. I have reorganized the example code on GitHub by providing the examples as multiple Quicklisp libraries or applications. I now do this with all of my Common Lisp code and it makes it easier to write smaller libraries that can be composed into larger applications. In my own workflow, I also like to use Makefile targets to build standalone applications that can be run on other computers without installing Lisp development environments. Please follow the directions at the end of the Preface for configuring Quicklisp for easy builds and use of the example software for this book.

Why Use Common Lisp?

Why Common Lisp? Isn’t Common Lisp an old language? Do many people still use Common Lisp?

I believe that Lisp languages like Common Lisp, Clojure, Racket, and Scheme are secret weapons for agile software development. An interactive development process and live production updates feel like a breath of fresh air if you have been developing on heavyweight platforms like Java Enterprise Edition (JEE).

Yes, Common Lisp is an old language, but with age comes stability and extremely good compiler technology. There is some inconsistency between different Common Lisp systems in areas such as thread handling, but with a little up-front knowledge you can choose which Common Lisp systems will support your requirements.

A Request from the Author

I spent time writing this book to help you, dear reader. I release this book under the Creative Commons License and set the minimum purchase price to $0.00 (free!) in order to reach the most readers. Under this license you can share a PDF version of this book with your friends and coworkers and I encourage you to do so. If you found this book on the web (or it was given to you) and if it provides value to you, then please consider supporting my future writing efforts and future updates to this book, for example by purchasing a copy at leanpub.com/lovinglisp or leaving a tip at https://markwatson.com/#tip.

I enjoy writing and your support helps me write new editions and updates for my books and to develop new book projects. Thank you!

Older Book Editions

The fourth edition of this book was released in May 2017 and the major changes were:

  • Added an example application KGCreator that processes text data to automatically generate data for Knowledge Graphs. This example application supports the Neo4J graph database as well as semantic web/linked data systems.
  • Added a backpropagation neural network example
  • Added a deep learning example using the Java based Armed Bear Common Lisp with the popular DeepLearning4j library
  • Added a heuristic search example
  • Added two machine learning examples (K-Means clustering and SVM classification) using the CLML library
  • A few edits to the previous text

The third edition was released in October 2014. The major changes made in the 2014 edition are:

  • I reworked the chapter Common Lisp Basics.
  • I added material to the chapter on using QuickLisp.

The second edition was released in 2013 and was derived from the version that I distributed on my web site and I moved production of the book to leanpub.com.

Acknowledgments

I would like to thank Jans Aasman for contributing as technical editor for the fourth edition of this book. Jans is CEO of Franz.com which sells Allegro Common Lisp as well as tools for semantic web and linked data applications.

I would like to thank the following people who made suggestions for improving previous editions of this book:

Sam Steingold, Andrew Philpot, Kenny Tilton, Mathew Villeneuve, Eli Draluk, Erik Winkels, Adam Shimali, and Paolo Amoroso.

I would like to also thank several people who pointed out typo errors in this book and for specific suggestions: Martin Lightheart, Tong-Kiat Tan, Rainer Joswig, Gerold Rupprecht, HN member rurban, David Cortesi. I would like to thank the following Reddit /r/lisp readers who pointed out mistakes in the fifth edition of this book: arnulfslayer, rpiirp, and itmuckel. I would like to thank Ted Briscoe for pointing out a problem with the spacy web client example in the 6th edition.

I would like to thank Paul Graham for coining the phrase “The Secret Weapon” (in his excellent paper “Beating the Averages”) in discussing the advantages of Lisp and giving me permission to reuse his phrase.

I would especially like to thank my wife Carol Watson for her fine work in editing this book.

Setting Up Your Common Lisp Development System and Quicklisp

These instructions assume the use of SBCL. See comments for LispWorks, Franz Common Lisp, and Clozure Common Lisp at the end of this section. I assume that you have installed SBCL and Quicklisp by following the instructions at lisp-lang.org/learn/getting-started. These instructions also guide you through installing the Slime extensions for Emacs. I use both Emacs + Slime and VSCode with Common Lisp plugins for editing Common Lisp. If you like VSCode then I recommend Yasuhiro Matsumoto’s Lisp plugin for syntax highlighting. For both Emacs and VSCode I usually run a separate REPL in a terminal window and don’t run an editor-integrated REPL. I think that I am in the minority in using a separate REPL running in a shell.

I have been using Common Lisp since about 1982 and Quicklisp (developed and maintained by Zach Beane) has been the most revolutionary change in my Common Lisp development (even more so than getting a hardware Lisp Machine and the availability of Coral Common Lisp on the Macintosh).

You can follow the directions on the main GitHub repository for this book: https://github.com/mark-watson/loving-common-lisp to get the examples set up to run on your computer. Starting with the 8th edition, I have a new scheme for distributing the book examples on GitHub:

  • A few short example Common Lisp code snippets are still kept in the main repository for the book: https://github.com/mark-watson/loving-common-lisp.
  • The longer examples are now stored in separate GitHub repositories to facilitate using them as reusable Quicklisp libraries.
  • Clone the main GitHub repository and copy the Makefile to the directory ~/quicklisp/local-projects/ on your computer.
  • Change directory to ~/quicklisp/local-projects/ and run the Makefile target make fetch to copy all separate GitHub repositories to subdirectories of ~/quicklisp/local-projects/.
  • You can now load any book example using Quicklisp, for example: (ql:quickload :sparql).

For example, the subdirectory loving-common-lisp/src/spacy-py4cl contains a package named spacy-py4cl that can now be accessed from any directory on your system using:

$ sbcl
* (ql:quickload "spacy-py4cl")
* (spacy-py4cl:nlp "My sister has a dog Henry. She loves him.")
* (defvar x (spacy-py4cl:nlp "President Bill Clinton went to Congress. He gave a speech on taxes and Mexico."))

This example uses the deep learning NLP models in spaCy which is a Python library - see the chapter on NLP for details on installing the Python dependencies. Note that only a few examples in this book require Python dependencies.

I have used the SBCL implementation of Common Lisp in this book. There are many fine Common Lisp implementations from Franz, LispWorks, Clozure Common Lisp, etc. I usually use LispWorks for my professional development work. If you have any great difficulty adapting the examples to your choice of Common Lisp implementation and performing a web search does not suggest a solution, then you can reach me through my web site, markwatson.com.

Introduction

This book is intended to get you, the reader, programming quickly in Common Lisp. Although the Lisp programming language is often associated with artificial intelligence, this introduction is on general Common Lisp programming techniques. Later we will look at general example applications and artificial intelligence examples.

The Common Lisp program examples are distributed in the GitHub repository for this book.

Why Did I Write this Book?

Why the title “Loving Common Lisp”? Simple! I have been using Lisp for almost 40 years and seldom do I find a better match between a programming language and the programming job at hand. I am not a total fanatic on Lisp, however. I often use Python for deep learning. I like Ruby, Java, and JavaScript for server side programming, and during the few years that I spent working on Nintendo video games and virtual reality systems for SAIC and Disney, I found C++ to be a good bet because of stringent runtime performance requirements. For some jobs, I find the logic-programming paradigm useful: I also enjoy the Prolog language.

In any case, I love programming in Lisp, especially the industry standard Common Lisp. As I wrote the second edition of this book over a decade ago, I had been using Common Lisp almost exclusively for an artificial intelligence project for a health care company and for commercial product development. While working on the third edition of this book, I was not using Common Lisp professionally, but since the release of the Quicklisp Common Lisp package manager I have found myself enjoying using Common Lisp more for small side projects. I use Quicklisp throughout the third edition example code so you can easily install required libraries. For the fourth and fifth editions of this book I added more examples using neural networks and deep learning. In this new sixth edition I have added a complete application that uses CAPI for the user interface.

As programmers, we all (hopefully) enjoy applying our experience and brains to tackling interesting problems. My wife and I recently watched a two-night, 7-hour PBS special “Joseph Campbell and the Power of Myth.” Campbell, a college professor for almost 40 years, said that he always advised his students to “follow their bliss” and not to settle for jobs and avocations that are not what they truly want to do. That said, I always feel that when a job calls for using Java, Python or other languages besides Lisp, even though I may get a lot of pleasure from the job, I am not following my bliss.

My goal in this book is to introduce you to one of my favorite programming languages, Common Lisp. I assume that you already know how to program in another language but if you are a complete beginner you can still master the material in this book with some effort. I challenge you to make this effort.

Free Software Tools for Common Lisp Programming

There are several Common Lisp compilers and runtime tools available for free on the web:

  • CLISP – licensed under the GNU GPL and is available for Windows, Macintosh, and Linux/Unix
  • Clozure Common Lisp (CCL) – open source with good Mac OS X and Linux support
  • CMU Common Lisp – open source implementation
  • SBCL – derived from CMU Common Lisp
  • ECL – compiles using a separate C/C++ compiler
  • ABCL – Armed Bear Common Lisp for the JVM

There are also fine commercial Common Lisp products:

  • LispWorks – high quality and reasonably priced system for Windows and Linux. There is no charge for distributing compiled applications. lispworks.com
  • Allegro Common Lisp - high quality, great support and higher cost. franz.com
  • MCL – Macintosh Common Lisp. I used this Lisp environment in the late 1980s. MCL was so good that I gave away my Xerox 1108 Lisp Machine and switched to a Mac and MCL for my development work. Now open source but only runs on the old MacOS

I currently (mostly) use SBCL, CCL, and LispWorks. The SBCL compiler produces very fast code and its compiler warnings can be of great value in finding potential problems with your code. I like CCL because it compiles quickly, so it is often preferable for development.

For working through this book, I will assume that you are using SBCL or CCL. For the example in the last chapter you will need LispWorks; the free Personal edition is fine for the purposes of experimenting with the example application and the CAPI user interface library.

Making Book Examples Run Portably on Most Common Lisp Implementations

Many of the book examples require making web service calls. In general, when I am writing Common Lisp applications that require making REST calls, I prefer using third-party Common Lisp libraries like Drakma or Dexador. However, it is sometimes a little tricky to set up Common Lisp on different operating systems and CPU architectures with libopenssl, libcrypto, etc. Because of this, in the book examples I run the external curl program using uiop:run-program and collect the output as a string that is then parsed as JSON or CSV data. The overhead of starting an external process is very small compared to calling a web service, so in your own applications you can either follow my example of using curl or use the Drakma or Dexador libraries. Using the Apple M1 processor on macOS can be particularly problematic with OpenSSL issues.
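
Here is a minimal sketch of this curl-wrapping approach; the function name fetch-url-as-string and the example URL are illustrative, not from the book example code:

(defun fetch-url-as-string (uri)
  "Run the external curl program and return the response body as a string."
  (uiop:run-program (list "curl" "-s" uri) ; -s suppresses curl's progress output
                    :output :string))

;; hypothetical usage:
;; (fetch-url-as-string "https://api.example.com/data.json")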

I also use the excellent Common Lisp to Python bridge library py4cl in a few book examples. Usually py4cl installs without problems.

How is Lisp Different from Languages like Java and C++?

This is a trick question! Lisp is slightly more similar to Java than to C++ because of automated memory management, so we will start by comparing Lisp and Java.

In Java, variables are strongly typed while in Common Lisp values are strongly typed. For example, consider the Java code:

1   Float x = new Float(3.14f);
2   String s = "the cat ran" ;
3   Object any_object = null;
4   any_object = s;
5   x = s;  // illegal: generates a
6           // compilation error

Here, in Java, variables are strongly typed so a variable x of type Float can’t legally be assigned a string value: the code in line 5 would generate a compilation error. Lisp code can assign a value to a variable and then reassign another value of a different type.
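
For contrast, here is a short Common Lisp repl sketch of the same sequence of assignments that Java rejects; in Lisp the value, not the variable, carries the type:

* (defvar x 3.14)
X
* (setq x "the cat ran")
"the cat ran"
* (type-of x)
(simple-array character (11))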

Java and Lisp both provide automatic memory management. In either language, you can create new data structures and not worry about freeing memory when the data is no longer used, or to be more precise, is no longer referenced.

Common Lisp is an ANSI standard language. Portability between different Common Lisp implementations and on different platforms is very good. I have used Clozure Common Lisp, SBCL, Allegro Lisp (from Franz Inc), LispWorks, and CLISP that all run well on Windows, Mac OS X, and Linux. As a Common Lisp developer you will have great flexibility in tools and platforms.

ANSI Common Lisp was the first object oriented language to become an ANSI standard language. The Common Lisp Object System (CLOS) is probably the best platform for object oriented programming.

In C++ programs, a common bug that affects a program’s efficiency is forgetting to free memory that is no longer used. In a virtual memory system, the effect of a program’s increasing memory usage is usually just poorer system performance but can lead to system crashes or failures if all available virtual memory is exhausted. A worse type of C++ error is to free memory and then try to use it. Can you say “program crash”? C programs suffer from the same types of memory related errors.

Since computer processing power is usually much less expensive than the costs of software development, it is almost always worthwhile to give up a few percent of runtime efficiency and let the programming environment and runtime libraries manage memory for you. Languages like Lisp, Ruby, Python, and Java are said to perform automatic garbage collection.

I have written six books on Java, and I have been quoted as saying that for me, programming in Java is about twice as efficient (in terms of my time) as programming in C++. I base this statement on approximately ten years of C++ experience on projects for SAIC, PacBell, Angel Studios, Nintendo, and Disney. I find Common Lisp and other Lisp languages like Clojure and Scheme to be about twice as efficient (again, in terms of my time) as Java. That is correct: I am claiming a four times increase in my programming productivity when using Common Lisp vs. C++.

What do I mean by programming productivity? Simple: for a given job, how long does it take me to design, code, debug, and later maintain the software for a given task.

Advantages of Working in a Lisp Environment

We will soon see that Lisp is not just a language; it is also a programming environment and runtime environment.

The beginning of this book introduces the basics of Lisp programming. In later chapters, we will develop interesting and non-trivial programs in Common Lisp that I argue would be more difficult to implement in other languages and programming environments.

The big win in programming in a Lisp environment is that you can set up an environment and interactively write and test new code in small pieces. We will cover programming with large amounts of data in the Chapter on Natural Language Processing, but let me share a general use case for work that I do that is far more efficient in Lisp:

Much of my Lisp programming used to be writing commercial natural language processing (NLP) programs for my company www.knowledgebooks.com. My Lisp NLP code uses a large amount of memory resident data; for example: hash tables for different types of words, hash tables for text categorization, 200,000 proper nouns for place names (cities, counties, rivers, etc.), and about 40,000 common first and last names of various nationalities.

If I was writing my NLP products in C++, I would probably use a relational database to store this data because if I read all of this data into memory for each test run of a C++ program, I would wait 30 seconds every time that I ran a program test. When I start working in any Common Lisp environment, I do have to load the linguistic data into memory one time, but then can code/test/code/test… for hours with no startup overhead for reloading the data that my programs need to run. Because of the interactive nature of Lisp development, I can test small bits of code when tracking down errors and when writing new code.
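
A sketch of this load-once pattern: defvar evaluates its initialization form only when the variable is not already bound, so re-loading a source file in a long-running repl session does not re-read the data. The file name place-names.txt is hypothetical:

(defvar *place-names*
  (with-open-file (in "place-names.txt") ; hypothetical data file, one name per line
    (let ((table (make-hash-table :test #'equal)))
      (loop for line = (read-line in nil nil) ; the nil nil arguments: return nil at end of file
            while line
            do (setf (gethash line table) t))
      table)))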

It is a personal preference, but I find the combination of the stable Common Lisp language and an iterative Lisp programming environment to be much more productive than other languages and programming environments.

Common Lisp Basics

The material in this chapter will serve as an introduction to Common Lisp. I have attempted to make this book a self contained resource for learning Common Lisp and to provide code examples to perform common tasks. If you already know Common Lisp and bought this book for the code examples later in this book then you can probably skip this chapter.

For working through this chapter we will be using the interactive shell, or repl, built into SBCL and other Common Lisp systems. For this chapter it is sufficient for you to download and install SBCL. Please install SBCL right now, if you have not already done so.

Getting Started with SBCL

When we start SBCL, we see an introductory message and then an input prompt. We will start with a short tutorial, walking you through a session using the SBCL repl (other Common Lisp systems are very similar). A repl is an interactive console where you type expressions and see the results of evaluating these expressions. An expression can be a large block of code pasted into the repl, a use of the load function to load Lisp code into the repl, a function call to test code, etc. Assuming that SBCL is installed on your system, start SBCL by running the SBCL program:

 1 % sbcl
 2 (running SBCL from: /Users/markw/sbcl)
 3 This is SBCL 2.0.2, an implementation of ANSI Common Lisp.
 4 More information about SBCL is available at <http://www.sbcl.org/>.
 5 
 6 SBCL is free software, provided as is, with absolutely no warranty.
 7 It is mostly in the public domain; some portions are provided under
 8 BSD-style licenses.  See the CREDITS and COPYING files in the
 9 distribution for more information.
10 
11 * (defvar x 1.0)
12 
13 X
14 * x
15 
16 1.0
17 * (+ x 1)
18 
19 2.0
20 * x
21 
22 1.0
23 * (setq x (+ x 1))
24 
25 2.0
26 * x
27 
28 2.0
29 * (setq x "the dog chased the cat")
30 
31 "the dog chased the cat"
32 * x
33 
34 "the dog chased the cat"
35 * (quit)

We started by defining a new variable x in line 11. Notice how the value of the defvar macro is the symbol that is defined. The Lisp reader prints X capitalized because symbols are made upper case (we will look at the exception later).

In Lisp, a variable can reference any data type. We start by assigning a floating point value to the variable x, using the + function to add 1 to x in line 17, and using setq to change the value of x in lines 23 and 29, first to another floating point value and finally to a string value. One thing that you will have noticed: function names always occur first, followed by the arguments to the function. Also, parentheses are used to delimit expressions.

I learned to program Lisp in 1976 and my professor half-jokingly told us that Lisp was an acronym for “Lots of Irritating Superfluous Parentheses.” There may be some truth in this when you are just starting with Lisp programming, but you will quickly get used to the parentheses, especially if you use an editor like Emacs that automatically indents Lisp code for you and highlights the opening parenthesis for every closing parenthesis that you type. Many other editors support coding in Lisp, but I personally use Emacs or sometimes VSCode (with Common Lisp plugins) to edit Lisp code.

Before you proceed to the next chapter, please take the time to install SBCL on your computer and try typing some expressions into the Lisp listener. If you get errors, or want to quit, try using the quit function:

* (+ 1 2 3 4)

10
* (quit)
Bye.

If you get an error you can enter help to get options for handling the error. When I get an error and have a good idea of what caused it, I just enter :a to abort out of the error.

As we discussed in the introduction, there are many different Lisp programming environments that you can choose from. I recommend a free set of tools: Emacs, Quicklisp, Slime, and SBCL. Emacs is a fine text editor that is extensible to work well with many programming languages and document types (e.g., HTML and XML). Slime is an Emacs extension package that greatly facilitates Lisp development. SBCL is a robust Common Lisp compiler and runtime system that is often used in production.

We will cover the Quicklisp package manager and using Quicklisp to setup Slime and Emacs in a later chapter.

I will not spend much time covering the use of Emacs as a text editor in this book since you can try most of the example code snippets in the book text by copying and then pasting them into an SBCL repl and by loading the book example source files directly into a repl. If you already use Emacs then I recommend that you do set up Slime sooner rather than later and start using it for development. If you are not already an Emacs user and do not mind spending the effort to learn Emacs, then search the web first for an Emacs tutorial. That said, you will easily be able to use the example code from this book using any text editor you like with an SBCL repl. I don’t use the vi or vim editors but if vi is your weapon of choice for editing text then a web search for “common lisp vi vim repl” should get you going for developing Common Lisp code with vi or vim. If you are not already an Emacs or vi user then using VSCode with a Common Lisp plugin is recommended.

Here, we will assume that under Windows, Unix, Linux, or Mac OS X you will use one command window to run SBCL and a separate editor that can edit plain text files.

Making the repl Nicer using rlwrap

While reading the last section you (hopefully!) played with the SBCL interactive repl. If you haven’t played with the repl, I won’t get too judgmental except to say that if you do not play with the examples as you read you will not get the full benefit from this book.

Did you notice that the backspace key does not work in the SBCL repl? The way to fix this is to install the GNU rlwrap utility. On OS X, assuming that you have Homebrew installed, install rlwrap with:

brew install rlwrap

If you are running Ubuntu Linux, install rlwrap using:

sudo apt-get install rlwrap

You can then create an alias for bash or zsh using something like the following to define a command rsbcl:

alias rsbcl='rlwrap sbcl'

This is fine, just remember to run sbcl if you don’t need rlwrap command line editing or run rsbcl when you do need command line editing. That said, I find that I always want to run SBCL with command line editing, so I redefine sbcl on my computers using:

->  ~  which sbcl
/Users/markw/sbcl/sbcl
->  ~  alias sbcl='rlwrap /Users/markw/sbcl/sbcl'

This alias is different on my laptops and servers, since I don’t usually install SBCL in the default installation directory. For each of my computers, I add an appropriate alias in my .zshrc file (if I am running zsh) or my .bashrc file (if I am running bash).

The Basics of Lisp Programming

Although we will use SBCL in this book, any Common Lisp environment will do fine. In previous sections, we saw the top-level Lisp prompt and how we could type any expression that would be evaluated:

 1 * 1
 2 1
 3 * 3.14159
 4 3.14159
 5 * "the dog bit the cat"
 6 "the dog bit the cat"
 7 * (defun my-add-one (x)
 8 (+ x 1))
 9 MY-ADD-ONE
10 * (my-add-one -10)
11 -9

Notice that when we defined the function my-add-one in lines 7 and 8, we split the definition over two lines, and on line 8 you don’t see the “*” prompt from SBCL – this lets you know that you have not yet entered a complete expression. The top level Lisp evaluator counts parentheses and considers an expression complete only when the numbers of opening and closing parentheses match. I tend to count in my head, adding one for every opening parenthesis and subtracting one for every closing parenthesis – when I get back down to zero then the expression is complete. When we evaluate a number (or a variable), there are no parentheses, so evaluation proceeds when we hit a new line (or carriage return).

The Lisp reader by default tries to evaluate any form that you enter. There is a reader macro that prevents the evaluation of an expression: you can use either the ' character or the special form quote:

* (+ 1 2)
3
* '(+ 1 2)
(+ 1 2)
* (quote (+ 1 2))
(+ 1 2)
* 

Lisp supports both global and local variables. Global variables can be declared using defvar:

* (defvar *x* "cat")
*X*
* *x*
"cat"
* (setq *x* "dog")
"dog"
* *x*
"dog"
* (setq *x* 3.14159)
3.14159
* *x*
3.14159

One thing to be careful of when defining global variables with defvar: the declared global variable is dynamically scoped. We will discuss dynamic versus lexical scoping later, but for now a warning: if you define a global variable avoid redefining the same variable name inside functions. Lisp programmers usually use a global variable naming convention of beginning and ending dynamically scoped global variables with the * character. If you follow this naming convention and also do not use the * character in local variable names, you will stay out of trouble. For convenience, I do not always follow this convention in short examples in this book.

Lisp variables have no type. Rather, values assigned to variables have a type. In this last example, the variable *x* was set to a string, then to a floating-point number. Lisp types support inheritance and can be thought of as a hierarchical tree with the type t at the top. (Actually, the type hierarchy is a DAG, but we can ignore that for now.) Common Lisp also has powerful object oriented programming facilities in the Common Lisp Object System (CLOS) that we will discuss in a later chapter.

Here is a partial list of types (note that indentation denotes being a subtype of the preceding type):

t  [top level type (all other types are a sub-type)]
     sequence
          list
          array
               vector
                    string
     number
          float
          rational
               integer
               ratio
          complex
     character
     symbol
     structure
     function
     hash-table

We can use the typep function to test the type of the value of any variable or expression, or use type-of to get type information for any value:

* (setq x '(1 2 3))
(1 2 3)
* (typep x 'list)
T
* (typep x 'sequence)
T
* (typep x 'number)
NIL
* (typep (+ 1 2 3) 'number)
T
* (type-of 3.14159)
single-float
* (type-of "the dog ran quickly")
(simple-array character (19))
* (type-of 100193)
(integer 0 4611686018427387903)

A useful feature of all ANSI standard Common Lisp implementations’ top-level listener is that it sets * to the value of the last expression evaluated. For example:

1 * (+ 1 2 3 4 5)
2 15
3 * *
4 15
5 * (setq x *)
6 15
7 * x
8 15

All Common Lisp environments set * to the value of the last expression evaluated. This example may be slightly confusing because * is also the prompt character in the SBCL repl that indicates that you can enter a new expression for evaluation. For example, in line 3 the first * character is the repl prompt and the second * is typed in to see the value of the previous expression entered into the repl.

Frequently, when you are interactively testing new code, you will call a function that you just wrote with test arguments; it is useful to save intermediate results for later testing. It is the ability to create complex data structures and then experiment with code that uses or changes these data structures that makes Lisp programming environments so effective.
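
In addition to *, standard Common Lisp repls also maintain ** and *** for the second and third most recent values, which helps when saving intermediate results. A short sketch (the * inside the form (* 10 10) is ordinary multiplication):

* (+ 1 2)
3
* (* 10 10)
100
* (list * **)
(100 3)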

Common Lisp is a lexically scoped language, which means that variable declarations and function definitions can be nested and that the same variable names can be used in nested let forms; when a variable is used, the current let form is searched for a definition of that variable and if it is not found there, then the next outer let form is searched. Of course, this search for the correct declaration of a variable is done at compile time so there need not be extra runtime overhead. We should not nest defun special forms inside each other or inside let expressions. Instead we use the special forms flet and labels to define functions inside a scoped environment. Functions defined inside a labels special form can be recursive while functions defined inside a flet special form cannot be recursive. Consider the following example in the file nested.lisp (all example files are in the src directory):

 1 (flet ((add-one (x)
 2          (+ x 1))
 3        (add-two (x)
 4          (+ x 2)))
 5   (format t "redefined variables: ~A  ~A~%" (add-one 100) (add-two 100)))
 6 
 7 (let ((a 3.14))
 8   (defun test2 (x) ; this works, but don't do it!
 9     (print x))
10   (test2 a))
11 
12 (test2 50)
13 
14 (let ((x 1)
15       (y 2))
16   ;; properly define a test function nested inside a let statement:
17   (flet ((test (a b)
18            (let ((z (+ a b)))
19              ;; define a helper function nested inside a let/function/let:
20              (flet ((nested-function (a)
21                       (+ a a)))
22                (nested-function z)))))
23     ;; call nested function 'test':
24     (format t "test result is ~A~%" (test x y))))
25 
26 (let ((z 10))
27   (labels ((test-recursion (a)
28              (format t "test-recursion ~A~%" (+ a z))
29              (if (> a 0)
30                  (test-recursion (- a 1)))))
31     (test-recursion 5)))

We define a top level flet special form in lines 1-5 that defines two nested functions add-one and add-two and then calls each nested function in the body of the flet special form. For many years I have used nested defun special forms inside let expressions for defining local functions but I now try to avoid doing this. Functions defined inside defun special forms have global visibility so they are not hidden in the local context where they are defined. The example of a nested defun in lines 7-12 shows that the function test2 has global visibility inside the current package.

Functions defined inside of a flet special form have access to variables defined in the outer scope containing the flet (also applies to labels). We see this in lines 14-24 where the local variables x and y defined in the let expression are visible inside the function nested-function defined inside the flet.

The final example in lines 26-31 shows a recursive function defined inside a labels special form.

Assuming that we started SBCL in the src directory, we can use the Lisp load function to evaluate the contents of the file nested.lisp in the sub-directory code_snippets_for_book:

* (load "./code_snippets_for_book/nested.lisp")
redefined variables: 101  102

3.14 
50 
test result is 6
test-recursion 15
test-recursion 14
test-recursion 13
test-recursion 12
test-recursion 11
test-recursion 10
T
*

The function load returned a value of t (prints in upper case as T) after successfully loading the file.

We will use Common Lisp vectors and arrays frequently in later chapters, but we will also briefly introduce them here. A singly dimensioned array is also called a vector. Although there are often more efficient functions for handling vectors, we will just look at generic functions that handle any type of array, including vectors. Common Lisp provides support for functions with the same name that take different argument types; we will discuss this in some detail in the later chapter on CLOS. We will start by defining three vectors v1, v2, and v3:

1 * (setq v1 (make-array '(3)))
2 #(NIL NIL NIL)
3 * (setq v2 (make-array '(4) :initial-element "lisp is good"))
4 #("lisp is good" "lisp is good" "lisp is good" "lisp is good")
5 * (setq v3 #(1 2 3 4 "cat" '(99 100)))
6 #(1 2 3 4 "cat" '(99 100))

In line 1, we are defining a one-dimensional array, or vector, with three elements. In line 3 we specify the default value assigned to each element of the array v2. In line 5 I use the form for specifying array literals using the special character #. The function aref can be used to access any element in an array:

* (aref v3 3)
4
* (aref v3 5)
'(99 100)
* 

Notice how indexing of arrays is zero-based; that is, indices start at zero for the first element of a sequence. Also notice that array elements can be any Lisp data type. So far, we have used the special operator setq to set the value of a variable. Common Lisp has a generalized version of setq called setf that can set any value in a list, array, hash table, etc. You can use setf instead of setq in all cases, but not vice-versa. Here is a simple example:

* v1
#(NIL NIL NIL)
* (setf (aref v1 1) "this is a test") 
"this is a test"
* v1
#(NIL "this is a test" NIL)
* 

When writing new code or doing quick programming experiments, it is often easiest (i.e., quickest to program) to use lists to build interesting data structures. However, as programs mature, it is common to modify them to use more efficient (at runtime) data structures like arrays and hash tables.

Symbols

We will discuss symbols in more detail in the Chapter on Common Lisp Packages. For now, it is enough for you to understand that symbols can be names that refer to variables. For example:

* (defvar *cat* "bowser")
*CAT*
* *cat*
"bowser"
* (defvar *l* (list *cat*))
*L*
* *l*
("bowser")
*

Note that the first defvar returns the defined symbol as its value. Symbols are almost always converted to upper case. An exception to this “upper case rule” is when we define symbols that may contain white space using vertical bar characters:

* (defvar |a symbol with Space Characters| 3.14159)
|a symbol with Space Characters|
* |a symbol with Space Characters|
3.14159
* 

Operations on Lists

Lists are a fundamental data structure of Common Lisp. In this section, we will look at some of the more commonly used functions that operate on lists. All of the functions described in this section have something in common: they do not modify their arguments.

In Lisp, a cons cell is a data structure containing two pointers. Usually, the first pointer in a cons cell will point to the first element in a list and the second pointer will point to another cons representing the start of the rest of the original list.

The function cons takes two arguments that it stores in the two pointers of a new cons data structure. For example:

* (cons 1 2)
(1 . 2)
* (cons 1 '(2 3 4))
(1 2 3 4)
* 

The first form evaluates to a cons data structure while the second evaluates to a cons data structure that is also a proper list. The difference is that in the second case the second pointer of the freshly created cons data structure points to another cons cell.
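
We can see the difference by looking at the second pointer (the cdr) of each freshly created cons cell:

* (cdr (cons 1 2))
2
* (cdr (cons 1 '(2 3 4)))
(2 3 4)
* 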

First, we will declare two global variables l1 and l2 that we will use in our examples. The list l1 contains five elements and the list l2 contains four elements:

* (defvar l1 '(1 2 (3) 4 (5 6)))
L1
* (length l1)
5
* (defvar l2 '(the "dog" calculated 3.14159))
L2
* l1
(1 2 (3) 4 (5 6))
* l2
(THE "dog" CALCULATED 3.14159)
*

You can also use the function list to create a new list; the arguments passed to function list are the elements of the created list:

* (list 1 2 3 'cat "dog")
(1 2 3 CAT "dog")
*

The function car returns the first element of a list and the function cdr returns a list with its first element removed (but does not modify its argument):

* (car l1)
1
* (cdr l1)
(2 (3) 4 (5 6))
*

Combinations of car and cdr calls can be used to extract any element of a list:

* (car (cdr l1))
2
* (cadr l1)
2
*

Notice that we can combine calls to car and cdr into a single function call, in this case the function cadr. Common Lisp defines all functions of the form cXXr, cXXXr, and cXXXXr where X can be either a or d.

Suppose that we want to extract the value 5 from the nested list l1. Some experimentation with using combinations of car and cdr gets the job done:

* l1
(1 2 (3) 4 (5 6))
* (cadr l1)
2
* (caddr l1)
(3)
* (car (caddr l1))
3
* (caar (last l1))
5
* (caar (cddddr l1))
5
*

The function last returns the last cons cell of a list (i.e., a list containing just the last top-level element):

* (last l1)
((5 6))
*

Common Lisp supplies alternative functions to car and cdr that you might find more readable: first, second, third, fourth, and rest. Here are some examples:

* (defvar *x* '(1 2 3 4 5))

*X*
* (first *x*)

1
* (rest *x*)

(2 3 4 5)
* (second *x*)

2
* (third *x*)

3
* (fourth *x*)

4

The function nth takes two arguments: an index of a top-level list element and a list. The first index argument is zero based:

* l1
(1 2 (3) 4 (5 6))
* (nth 0 l1)
1
* (nth 1 l1)
2
* (nth 2 l1)
(3)
*

The function cons adds an element to the beginning of a list and returns as its value a new list (it does not modify its arguments). An element added to the beginning of a list can be any Lisp data type, including another list:

* (cons 'first l1)
(FIRST 1 2 (3) 4 (5 6))
* (cons '(1 2 3) '(11 22 33))
((1 2 3) 11 22 33)
* 

The function append takes two or more lists as arguments and returns as its value the lists appended together:

* l1
(1 2 (3) 4 (5 6))
* l2
(THE "dog" CALCULATED 3.14159)
* (append l1 l2)
(1 2 (3) 4 (5 6) THE "dog" CALCULATED 3.14159)
* (append '(first) l1)
(FIRST 1 2 (3) 4 (5 6))
* 

A frequent error that beginning Lisp programmers make is not understanding shared structures in lists. Consider the following example where we build a list y that reuses the same list x three times:

* (setq x '(0 0 0 0))
(0 0 0 0)
* (setq y (list x x x))
((0 0 0 0) (0 0 0 0) (0 0 0 0))
* (setf (nth 2 (nth 1 y)) 'x)
X
* x
(0 0 X 0)
* y
((0 0 X 0) (0 0 X 0) (0 0 X 0))
* (setq z '((0 0 0 0) (0 0 0 0) (0 0 0 0)))
((0 0 0 0) (0 0 0 0) (0 0 0 0))
* (setf (nth 2 (nth 1 z)) 'x)
X
* z
((0 0 0 0) (0 0 X 0) (0 0 0 0))
* 

When we change the shared structure referenced by the variable x that change is reflected three times in the list y. When we create the list stored in the variable z we are not using a shared structure.
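
If you want three independent sublists rather than shared structure, the standard function copy-list returns a fresh top-level copy of its argument each time it is called. A short sketch continuing the example above:

* (setq x '(0 0 0 0))
(0 0 0 0)
* (setq y (list (copy-list x) (copy-list x) (copy-list x)))
((0 0 0 0) (0 0 0 0) (0 0 0 0))
* (setf (nth 2 (nth 1 y)) 'x)
X
* x
(0 0 0 0)
* y
((0 0 0 0) (0 0 X 0) (0 0 0 0))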

Using Arrays and Vectors

Using lists is easy, but the time spent accessing a list element is proportional to the length of the list. Arrays and vectors are more efficient at runtime than long lists because list elements are stored in a chain of cons cells that must be traversed. Accessing any element of a short list is fast, but for sequences with thousands of elements, it is faster to use vectors and arrays.

By default, elements of arrays and vectors can be any Lisp data type. There are options when creating arrays to tell the Common Lisp compiler that a given array or vector will only contain a single data type (e.g., floating point numbers) but we will not use these options in this book.

Vectors are a specialization of arrays; vectors are arrays that only have one dimension. For efficiency, there are functions that only operate on vectors, but since array functions also work on vectors, we will concentrate on arrays. In the next section, we will look at character strings that are a specialization of vectors.

We could use the generalized make-sequence function to make a singly dimensioned array (i.e., a vector). Restart SBCL and try:

* (defvar x (make-sequence 'vector 5 :initial-element 0))
X
* x
#(0 0 0 0 0)
* 

In this example, notice the print format for vectors: it looks like a list preceded by a # character. As seen in the last section, we use the function make-array to create arrays:

* (defvar y (make-array '(2 3) :initial-element 1))
Y
* y
#2A((1 1 1) (1 1 1))
*

Notice the print format of an array: it looks like a list preceded by a # character and the integer number of dimensions.

Instead of using make-sequence to create vectors, we can pass an integer as the first argument of make-array instead of a list of dimension values. We can also create a vector by using the function vector and providing the vector contents as arguments:

* (make-array 10)  
#(NIL NIL NIL NIL NIL NIL NIL NIL NIL NIL)
* (vector 1 2 3 'cat)
#(1 2 3 CAT)
* 

The function aref is used to access sequence elements. The first argument is an array and the remaining argument(s) are array indices. For example:

* x
#(0 0 0 0 0)
* (aref x 2)
0
* (setf (aref x 2) "parrot")
"parrot"
* x
#(0 0 "parrot" 0 0)
* (aref x 2)
"parrot"
* y
#2A((1 1 1) (1 1 1))
* (setf (aref y 1 2) 3.14159)
3.14159
* y
#2A((1 1 1) (1 1 3.14159))
* 

Using Strings

It is likely that even your first Lisp programs will involve the use of character strings. In this section, we will cover the basics: creating strings, concatenating strings to create new strings, searching for substrings in a string, and extracting substrings from longer strings. The string functions that we will look at here do not modify their arguments; rather, they return new strings as values. For efficiency, Common Lisp does include destructive string functions that do modify their arguments but we will not discuss these destructive functions here.

We saw earlier that a string is a type of vector, which in turn is a type of array (which in turn is a type of sequence). A full coverage of the Common Lisp type system is outside the scope of this tutorial introduction to Common Lisp; a very good treatment of Common Lisp types is in Guy Steele’s “Common Lisp, The Language” which is available both in print and for free on the web. Many of the built in functions for handling strings are actually more general because they are defined for the type sequence. The Common Lisp Hyperspec is another great free resource that you can find on the web. I suggest that you download an HTML version of Guy Steele’s excellent reference book and the Common Lisp Hyperspec and keep both on your computer. If you continue using Common Lisp, eventually you will want to read all of Steele’s book and use the Hyperspec for reference.

The following text was captured from input and output from a Common Lisp repl. First, we will declare two global variables s1 and space that contain string values:

* (defvar s1 "the cat ran up the tree")
S1
* (defvar space " ")
SPACE
* 

One of the most common operations on strings is to concatenate two or more strings into a new string:

* (concatenate 'string s1 space "up the tree")
"the cat ran up the tree up the tree"
*

Notice that the first argument of the function concatenate is the type of the sequence that the function should return; in this case, we want a string. Another common string operation is searching for a substring:

* (search "ran" s1)
8
* (search "zzzz" s1)
NIL
*

If the search string (first argument to function search) is not found, function search returns nil, otherwise search returns an index into the second argument string. Function search takes several optional keyword arguments (see the next chapter for a discussion of keyword arguments):

  (search search-string a-longer-string :from-end :test
                                        :test-not :key
                                        :start1 :start2
                                        :end1 :end2)

For our discussion, we will just use the keyword argument :start2 for specifying the starting search index in the second argument string and the :from-end flag to specify that search should start at the end of the second argument string and proceed backwards to the beginning of the string:

* (search " " s1)
3
* (search " " s1 :start2 5)
7
* (search " " s1 :from-end t)
18
*

The sequence function subseq can be used for strings to extract a substring from a longer string:

* (subseq s1 8)
"ran up the tree"
*

Here, the second argument specifies the starting index; the substring from the starting index to the end of the string is returned. An optional third index argument specifies one greater than the last character index that you want to extract:

* (subseq s1 8 11)
"ran"
*

It is frequently useful to remove white space (or other) characters from the beginning or end of a string:

* (string-trim '(#\space #\z #\a) " a boy said pez")
"boy said pe"
* 

The character #\space is the space character. Other common characters that are trimmed are #\tab and #\newline. There are also utility functions for making strings upper or lower case:

* (string-upcase "The dog bit the cat.")
"THE DOG BIT THE CAT."
* (string-downcase "The boy said WOW!")
"the boy said wow!"
*

We have not yet discussed equality of variables. The function eq returns true if two variables refer to the same data in memory. The function eql returns true if the arguments refer to the same data in memory or if they are equal numbers or characters. The function equal is more lenient: it returns true if two values print the same when evaluated. More formally, for lists the function equal returns true when the cars and cdrs are recursively equal. An example will make this clearer:

* (defvar x '(1 2 3))
X
* (defvar y '(1 2 3))
Y
* (eql x y)
NIL
* (equal x y)
T
* x
(1 2 3)
* y
(1 2 3)
* 

For strings, the function string= is slightly more efficient than using the function equal:

* (eql "cat" "cat")
NIL
* (equal "cat" "cat")
T
* (string= "cat" "cat")
T
* 

Common Lisp strings are sequences of characters. The function char is used to extract individual characters from a string:

* s1
"the cat ran up the tree"
* (char s1 0)
#\t
* (char s1 1)
#\h
* 

Using Hash Tables

Hash tables are an extremely useful data type. While it is true that you can get the same effect by using lists and the assoc function, hash tables are much more efficient than lists if the lists contain many elements. For example:

* (defvar x '((1 2) ("animal" "dog")))
X
* (assoc 1 x)
(1 2)
* (assoc "animal" x)
NIL
* (assoc "animal" x :test #'equal)
("animal" "dog")
*

The second argument to function assoc is a list of cons cells. Function assoc searches for a sub-list (in the second argument) that has its car (i.e., first element) equal to the first argument to function assoc. The perhaps surprising thing about this example is that assoc seems to work with an integer as the first argument but not with a string. The reason for this is that by default the test for equality is done with eql, which tests whether two variables refer to the same memory location or whether they are identical numbers or characters. In the last call to assoc we used :test #'equal to make assoc use the function equal to test for equality.

The problem with using lists and assoc is that they are very inefficient for large lists. We will see that it is no more difficult to code with hash tables.

A hash table stores associations between key and value pairs, much like our last example using the assoc function. By default, hash tables use eql to test for equality when looking for a key match. We will duplicate the previous example using hash tables:

* (defvar h (make-hash-table))
H
* (setf (gethash 1 h) 2)
2
* (setf (gethash "animal" h) "dog")
"dog"
* (gethash 1 h)
2 ;
T
* (gethash "animal" h)
NIL ;
NIL
*

Notice that gethash returns multiple values: the first value is the value matching the key passed as the first argument to function gethash and the second returned value is true if the key was found and nil otherwise. The second returned value could be useful if hash values are nil.

Since we have not yet seen how to handle multiple returned values from a function, we will digress and do so here (there are many ways to handle multiple return values and we are just covering one of them):

* (multiple-value-setq (a b) (gethash 1 h))
2
* a
2
* b
T
* 

Assuming that variables a and b are already declared, the variable a will be set to the first returned value from gethash and the variable b will be set to the second returned value.
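
Another standard way to receive multiple values, without declaring the variables in advance, is the macro multiple-value-bind:

* (multiple-value-bind (value found) (gethash 1 h)
    (if found
        (format t "value for key 1: ~A~%" value)
        (format t "key 1 not found~%")))
value for key 1: 2
NIL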

If we use symbols as hash table keys, then using eql for testing for equality with hash table keys is fine:

* (setf (gethash 'bb h) 'aa)
AA
* (gethash 'bb h)
AA ;
T
* 

However, we saw that eql will not match keys with character string values. The function make-hash-table has optional key arguments and one of them will allow us to use strings as hash key values:

  (make-hash-table &key :test :size :rehash-size :rehash-threshold)

Here, we are only interested in the first optional key argument :test that allows us to use the function equal to test for equality when matching hash table keys. For example:

* (defvar h2 (make-hash-table :test #'equal))
H2
* (setf (gethash "animal" h2) "dog")
"dog"
* (setf (gethash "parrot" h2) "Brady")
"Brady"
* (gethash "parrot" h2)
"Brady" ;
T
* 

It is often useful to be able to enumerate all the key and value pairs in a hash table. Here is a simple example of doing this by first defining a function my-print that takes two arguments, a key and a value. We can then use the maphash function to call our new function my-print with every key and value pair in a hash table:

* (defun my-print (a-key a-value)
    (format t "key: ~A value: ~A~%" a-key a-value))
MY-PRINT
* (maphash #'my-print h2)
key: parrot value: Brady
key: animal value: dog
NIL
* 

The function my-print is applied to each key/value pair in the hash table. There are a few other useful hash table functions that we demonstrate here:

* (hash-table-count h2)
2
* (remhash "animal" h2)
T
* (hash-table-count h2)
1
* (clrhash h2)
#S(HASH-TABLE EQUAL)
* (hash-table-count h2)
0
* 

The function hash-table-count returns the number of key and value pairs in a hash table. The function remhash can be used to remove a single key and value pair from a hash table. The function clrhash clears out a hash table by removing all key and value pairs in a hash table.

It is interesting to note that clrhash and remhash are the first Common Lisp functions that we have seen so far that modify their arguments, except for setq and setf, which are special forms and macros rather than functions.

Using Eval to Evaluate Lisp Forms

We have seen how we can type arbitrary Lisp expressions in the Lisp repl listener and have them evaluated. We will see in the Chapter on Input and Output that the Lisp function read reads expressions (or forms) from an input stream, and indeed the Lisp repl uses the function read.

In this section, we will use the function eval to evaluate arbitrary Lisp expressions inside a program. As a simple example:

* (defvar x '(+ 1 2 3 4 5))
X
* x
(+ 1 2 3 4 5)
* (eval x)
15
* 

Using the function eval, we can build lists containing Lisp code and evaluate generated code inside our own programs. We get the effect of “data is code”. A classic Lisp program, the OPS5 expert system tool, stored snippets of Lisp code in a network data structure and used the function eval to execute Lisp code stored in the network. A warning: the use of eval is likely to be inefficient in non-compiled code. For efficiency, the OPS5 program contained its own version of eval that only interpreted a subset of Lisp used in the network.
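
Here is a small sketch of the “data is code” idea: we build a list that happens to be a valid defun form and then evaluate it. The names make-adder-form and incrementer are purely illustrative:

* (defun make-adder-form (n)
    (list 'defun 'incrementer '(x) (list '+ 'x n)))
MAKE-ADDER-FORM
* (make-adder-form 5)
(DEFUN INCREMENTER (X) (+ X 5))
* (eval (make-adder-form 5))
INCREMENTER
* (incrementer 10)
15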

Using a Text Editor to Edit Lisp Source Files

I usually use Emacs, but we will briefly discuss the editor vi also. If you use vi (e.g., enter “vi nested.lisp”) the first thing that you should do is to configure vi to indicate the matching opening parenthesis whenever a closing parenthesis is typed; you do this by typing “:set sm” after vi is running.

If you choose to learn Emacs, enter the following in your .emacs file (or your _emacs file in your home directory if you are running Windows):

(set-default 'auto-mode-alist
             (append '(("\\.lisp$" . lisp-mode)
                       ("\\.lsp$" . lisp-mode)
                       ("\\.cl$" . lisp-mode))
                     auto-mode-alist))

Now, whenever you open a file with the extension of “lisp”, “lsp”, or “cl” (for “Common Lisp”) then Emacs will automatically use a Lisp editing mode. I recommend searching the web using keywords “Emacs tutorial” to learn how to use the basic Emacs editing commands - we will not repeat this information here.

I do my professional Lisp programming using free software tools: Emacs, SBCL, Clozure Common Lisp, and Clojure. I will show you how to configure Emacs and Slime in the last section of the Chapter on Quicklisp.

Recovering from Errors

When you enter forms (or expressions) in a Lisp repl listener, you will occasionally make a mistake and an error will be thrown. Here is an example; I am not showing all of the output printed after entering help when an error is thrown:

* (defun my-add-one (x) (+ x 1))

MY-ADD-ONE
* (my-add-one 10)

11
* (my-add-one 3.14159)

4.14159
* (my-add-one "cat")

debugger invoked on a SIMPLE-TYPE-ERROR: Argument X is not a NUMBER: "cat"

Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
  0: [ABORT] Exit debugger, returning to top level.

(SB-KERNEL:TWO-ARG-+ "cat" 1)
0] help

The debug prompt is square brackets, with number(s) indicating the current
  control stack level and, if you've entered the debugger recursively, how
  deeply recursed you are.

 ...

Getting in and out of the debugger:
  TOPLEVEL, TOP  exits debugger and returns to top level REPL
  RESTART        invokes restart numbered as shown (prompt if not given).
  ERROR          prints the error condition and restart cases.

 ...

Inspecting frames:
  BACKTRACE [n]  shows n frames going down the stack.
  LIST-LOCALS, L lists locals in current frame.
  PRINT, P       displays function call for current frame.
  SOURCE [n]     displays frame's source form with n levels of enclosing forms.

Stepping:
  START Selects the CONTINUE restart if one exists and starts
        single-stepping. Single stepping affects only code compiled with
        under high DEBUG optimization quality. See User Manual for details.
  STEP  Steps into the current form.
  NEXT  Steps over the current form.
  OUT   Stops stepping temporarily, but resumes it when the topmost frame that
        was stepped into returns.
  STOP  Stops single-stepping.

 ...

0] list-locals
SB-DEBUG::ARG-0  =  "cat"
SB-DEBUG::ARG-1  =  1

0] backtrace 2

Backtrace for: #<SB-THREAD:THREAD "main thread" RUNNING {1002AC32F3}>
0: (SB-KERNEL:TWO-ARG-+ "cat" 1)
1: (MY-ADD-ONE "cat")
0] :0

*

Here, I first used the list-locals command to inspect the arguments in the current stack frame and then the backtrace command to print the sequence of function calls that caused the error. If it is obvious where the error is in the code that I am working on then I do not bother using the backtrace command. I then invoked the ABORT restart by typing :0 to recover back to the top level Lisp listener (i.e., back to the * prompt). Sometimes, you must abort more than once to fully recover to the top level prompt.

Garbage Collection

Like Java and Python, Common Lisp provides garbage collection (GC), or automatic memory management.

In simple terms, GC occurs to free memory in a Lisp environment that is no longer accessible from any global variable (or function closure, which we will cover in the next chapter). If a global variable *variable-1* is first set to a list and we later set *variable-1* to, for example, nil, and if the data referenced in the original list is not referenced by any other accessible data, then this now unused data is subject to GC.
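
As a minimal sketch of this idea:

(defvar *variable-1* (list 1 2 3)) ;; the list is reachable from *variable-1*
(setf *variable-1* nil)            ;; the list (1 2 3) is now unreachable
                                   ;; and its memory is subject to GC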

In practice, memory for Lisp data is allocated in time-ordered batches, and ephemeral (or generational) garbage collectors collect recently allocated memory far more often than memory that has been allocated for a longer period of time.

Loading your Working Environment Quickly

When you start using Common Lisp for large projects, you will likely have many files to load into your Lisp environment when you start working. Common Lisp development environments typically provide a system-definition facility called defsystem (today usually ASDF’s defsystem) that works somewhat like the Unix make utility. While I strongly recommend defsystem for large multi-person projects, I usually use a simpler scheme when working on my own: I place a file loadit.lisp in the top directory of each project that I work on. For any project, its loadit.lisp file loads all source files and initializes any global data for the project.
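
A loadit.lisp file can be as simple as a few load expressions. Here is a hypothetical sketch (the file names are made up for illustration):

;; loadit.lisp -- load all source files for a project
(load "utils.lisp")
(load "parser.lisp")
(load "main.lisp")
;; initialize any global data for the project:
(defvar *project-data* (make-hash-table :test #'equal))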

The last two chapters of this book provide example applications that are configured to work with Quicklisp, which we will study in the next chapter.

Another good technique is to create a Lisp image containing all the code and data for all your projects. There is an example of this in the first section of the Chapter on NLP. In this example, it takes a few minutes to load the code and data for my NLP (natural language processing) library so when I am working with it I like to be able to quickly load a SBCL Lisp image.

All Common Lisp implementations have a mechanism for dumping a working image containing code and data.
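
The exact function is implementation specific. For example, assuming you use SBCL, you can dump an image (or even a standalone executable) with sb-ext:save-lisp-and-die:

;; run from a fresh SBCL repl after loading your code and data;
;; this writes the image file and exits SBCL:
(sb-ext:save-lisp-and-die "my-image" :executable t)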

Functional Programming Concepts

There are two main styles for doing Common Lisp development. Object oriented programming is well supported (see the Chapter on CLOS) as is functional programming. In a nutshell, functional programming means that we should write functions with no side effects. First let me give you a non-functional example with side effects:

(defun non-functional-example (car)
  (set-color car "red"))

This example using CLOS is non-functional because we modify the value of an argument to the function. Some functional languages, like Clojure (a Lisp) and Haskell, discourage or prevent modifying arguments to functions. With Common Lisp you should decide which approach you want to use.

Functional programming means that we avoid maintaining state inside of functions and treat data as immutable (i.e., once an object is created, it is never modified). We could modify the last example to be functional by creating a new car object inside the function, copying the attributes of the car passed as an argument, changing the color of the new car object to “red”, and returning the new car instance as the value of the function.
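
Here is a minimal sketch of that functional version, assuming a simple CLOS class my-car with model and color slots (these names are hypothetical; we cover CLOS in a later chapter):

(defclass my-car ()
  ((model :accessor model :initarg :model)
   (color :accessor color :initarg :color)))

(defun functional-example (a-car)
  ;; copy the attributes into a new instance instead of modifying
  ;; the argument; the original object is unchanged:
  (make-instance 'my-car :model (model a-car) :color "red"))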

Functional programming prevents many types of programming errors, makes unit testing simpler, and makes programming for modern multi-core CPUs easier because read-only objects are inherently thread safe. Modern best practices for the Java language also prefer immutable data objects and a functional approach.

Quicklisp

For several decades managing packages and libraries was a manual process when developing Lisp systems. I used to package the source code for specific versions of libraries as part of my Common Lisp projects. Early package management systems like mk-defsystem and ASDF were very useful, but I did not totally give up my practice of keeping third party library source code with my projects until Zach Beane created the Quicklisp package system. You will need to have Quicklisp installed for many of the examples later in this book so please take the time to install it now as per the instructions on the Quicklisp web site.

Using Quicklisp to Find Packages

We will need the Common Lisp Hunchentoot library later in the Chapter on Network Programming so we will install it now using Quicklisp as an example for getting started with Quicklisp.

We already know the package name we want, but as an example of discovering packages let’s start by using Quicklisp to search for all packages with “hunchentoot” in the package name:

 1 * (ql:system-apropos "hunchentoot")
 2 #<SYSTEM clack-handler-hunchentoot / clack-20131111-git / quicklisp 2013-11-11>
 3 #<SYSTEM hunchentoot / hunchentoot-1.2.21 / quicklisp 2013-11-11>
 4 #<SYSTEM hunchentoot-auth / hunchentoot-auth-20101107-git / quicklisp 2013-11-11>
 5 #<SYSTEM hunchentoot-cgi / hunchentoot-cgi-20121125-git / quicklisp 2013-11-11>
 6 #<SYSTEM hunchentoot-dev / hunchentoot-1.2.21 / quicklisp 2013-11-11>
 7 #<SYSTEM hunchentoot-single-signon / hunchentoot-single-signon-20131111-git / quickl\
 8 isp 2013-11-11>
 9 #<SYSTEM hunchentoot-test / hunchentoot-1.2.21 / quicklisp 2013-11-11>
10 #<SYSTEM hunchentoot-vhost / hunchentoot-vhost-20110418-git / quicklisp 2013-11-11>

We want the base package seen in line 3 and we can install the base package as seen in the following example:

1 * (ql:quickload :hunchentoot)
2 To load "hunchentoot":
3   Load 1 ASDF system:
4     hunchentoot
5 ; Loading "hunchentoot"
6 .......
7 (:HUNCHENTOOT)

In line 1, I refer to the package name using a symbol :hunchentoot but using the string “hunchentoot” would have worked the same. The first time you ql:quickload a library you may see additional printout and the load takes longer because the source code is downloaded from the web and cached locally under your ~/quicklisp directory. In most of the rest of this book, when I install or use a package by calling the ql:quickload function I do not show the output from this function in the repl listings.

Now, we can use the fantastically useful Common Lisp function apropos to see what was just installed:

 1 * (apropos "hunchentoot")
 2 
 3 HUNCHENTOOT::*CLOSE-HUNCHENTOOT-STREAM* (bound)
 4 HUNCHENTOOT:*HUNCHENTOOT-DEFAULT-EXTERNAL-FORMAT* (bound)
 5 HUNCHENTOOT::*HUNCHENTOOT-STREAM*
 6 HUNCHENTOOT:*HUNCHENTOOT-VERSION* (bound)
 7 HUNCHENTOOT:HUNCHENTOOT-CONDITION
 8 HUNCHENTOOT:HUNCHENTOOT-ERROR (fbound)
 9 HUNCHENTOOT::HUNCHENTOOT-OPERATION-NOT-IMPLEMENTED-OPERATION (fbound)
10 HUNCHENTOOT::HUNCHENTOOT-SIMPLE-ERROR
11 HUNCHENTOOT::HUNCHENTOOT-SIMPLE-WARNING
12 HUNCHENTOOT::HUNCHENTOOT-WARN (fbound)
13 HUNCHENTOOT:HUNCHENTOOT-WARNING
14 HUNCHENTOOT-ASD:*HUNCHENTOOT-VERSION* (bound)
15 HUNCHENTOOT-ASD::HUNCHENTOOT
16 :HUNCHENTOOT (bound)
17 :HUNCHENTOOT-ASD (bound)
18 :HUNCHENTOOT-DEV (bound)
19 :HUNCHENTOOT-NO-SSL (bound)
20 :HUNCHENTOOT-TEST (bound)
21 :HUNCHENTOOT-VERSION (bound)
22 * 

While the new tool Quicklisp is fresh in your mind, you may want to install most of the packages and libraries that you will need for working through the rest of this book. I will show the statements needed to load more libraries without showing the output printed in the repl as each package is loaded:

1 (ql:quickload "clsql")
2 (ql:quickload "clsql-postgresql")
3 (ql:quickload "clsql-mysql")
4 (ql:quickload "clsql-sqlite3")
5 (ql:quickload :drakma)
6 (ql:quickload :hunchentoot)
7 (ql:quickload :cl-json)
8 (ql:quickload "clouchdb")  ;; for CouchDB access
9 (ql:quickload "sqlite")

You need to have the Postgres and MySQL client developer libraries installed on your system for the clsql-postgresql and clsql-mysql installations to work. If you are unlikely to use relational databases with Common Lisp then you might skip the effort of installing Postgres and MySQL. The example in the Chapter on the Knowledge Graph Navigator uses the SQLite database for caching. You don’t need any extra dependencies for the sqlite package.

Using Quicklisp to Configure Emacs and Slime

I assume that you have Emacs installed on your system. In a repl you can setup the Slime package that allows Emacs to connect to a running Lisp environment:

(ql:quickload "quicklisp-slime-helper")

Pay attention to the output in the repl. On my system the output contained the following:

1 [package quicklisp-slime-helper]
2 slime-helper.el installed in "/Users/markw/quicklisp/slime-helper.el"
3 
4 To use, add this to your ~/.emacs:
5 
6   (load (expand-file-name "~/quicklisp/slime-helper.el"))
7   ;; Replace "sbcl" with the path to your implementation
8   (setq inferior-lisp-program "sbcl")

If you installed rlwrap and defined an alias for running SBCL, make sure you set the inferior lisp program to the absolute path of the SBCL executable; on my system I set the following in my .emacs file:

1   (setq inferior-lisp-program "/Users/markw/sbcl/sbcl")

I am not going to cover using Emacs and Slime here; there are many good tutorials on the web that you can read.

In later chapters we will write libraries and applications as Quicklisp projects so that you will be able to load your own libraries, making it easier to write small libraries that you can compose into larger applications.

Defining Lisp Functions

In the previous chapter, we defined a few simple functions. In this chapter, we will discuss how to write functions that take a variable number of arguments, optional arguments, and keyword arguments.

The special form defun is used to define new functions either in Lisp source files or at the top level Lisp listener prompt. Usually, it is most convenient to place function definitions in a source file and use the function load to load them into our Lisp working environment.

In general, it is bad form to use global variables inside Lisp functions. Rather, we prefer to pass all required data into a function via its argument list and to get the results of the function as the value (or values) returned from a function. Note that if we do require global variables, it is customary to name them with beginning and ending * characters; for example:

1 (defvar *lexical-hash-table*
2         (make-hash-table :test #'equal :size 5000))

Then in this example, if you see the variable *lexical-hash-table* inside a function definition, you will know that at least by naming convention, that this is a global variable.

In Chapter 1, we saw an example of using lexically scoped local variables inside a function definition (in the example file nested.lisp).

There are several options for defining the arguments that a function can take. The fastest way to introduce the various options is with a few examples.

First, we can use the &aux keyword to declare local variables for use in a function definition:

1 * (defun test (x &aux y)
2        (setq y (list x x))
3        y)
4 TEST
5 * (test 'cat)
6 (CAT CAT)
7 * (test 3.14159)
8 (3.14159 3.14159)

It is considered better coding style to use the let special operator for defining auxiliary local variables; for example:

1 * (defun test (x)
2        (let ((y (list x x)))
3          y))
4 TEST
5 * (test "the dog bit the cat")
6 ("the dog bit the cat" "the dog bit the cat")
7 * 

You will probably not use &aux very often, but there are two other options for specifying function arguments: &optional and &key.

The following code example shows how to use optional function arguments. Note that optional arguments must occur after required arguments.

 1 * (defun test (a &optional b (c 123))
 2         (format t "a=~A b=~A c=~A~%" a b c))
 3 TEST
 4 * (test 1)
 5 a=1 b=NIL c=123
 6 NIL
 7 * (test 1 2)
 8 a=1 b=2 c=123
 9 NIL
10 * (test 1 2 3)
11 a=1 b=2 c=3
12 NIL
13 * (test 1 2 "Italian Greyhound")
14 a=1 b=2 c=Italian Greyhound
15 NIL
16 * 

In this example, the optional argument b was not given a default value so if unspecified it will default to nil. The optional argument c is given a default value of 123.

We have already seen the use of keyword arguments in built-in Lisp functions. Here is an example of how to specify keyword arguments in your own functions:

 1 * (defun test (a &key b c)
 2         (format t "a=~A b=~A c=~A~%" a b c))
 3 TEST
 4 * (test 1)
 5 a=1 b=NIL c=NIL
 6 NIL
 7 * (test 1 :c 3.14159)
 8 a=1 b=NIL c=3.14159
 9 NIL
10 * (test "cat" :b "dog")
11 a=cat b=dog c=NIL
12 NIL
13 * 
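
This chapter opened by mentioning functions that take a variable number of arguments. As a quick sketch (my own example, not from the book's src directory), the &rest lambda-list keyword collects any remaining arguments into a list:

* (defun test (a &rest more)
        (format t "a=~A more=~A~%" a more))
TEST
* (test 1)
a=1 more=NIL
NIL
* (test 1 2 3)
a=1 more=(2 3)
NIL
* 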

Using Lambda Forms

It is often useful to define unnamed functions. We can define an unnamed function using lambda; for example, let’s look at the example file src/lambda1.lisp. But first, we will introduce the Common Lisp function funcall that takes one or more arguments; the first argument is a function and any remaining arguments are passed to the function bound to the first argument. For example:

1 * (funcall 'print 'cat)
2 CAT 
3 CAT
4 * (funcall '+ 1 2)
5 3
6 * (funcall #'- 2 3)
7 -1
8 * 

In the first two calls to funcall here, we simply quote the symbol naming the function that we want to call. In the third example, we use the better notation #’, which quotes a function name and yields the function object itself.

Consider the following repl listing where we will look at a primary difference between quoting a symbol with ' and quoting with #’:

1 $ ccl
2 Clozure Common Lisp Version 1.12  DarwinX8664
3 ? 'barfoo531
4 BARFOO531
5 ? (apropos "barfoo")
6 BARFOO531
7 ? #'bar987
8 > Error: Undefined function: BAR987

On line 3 we create a new symbol BARFOO531 that is interned, as you can see by listing all interned symbols containing the string “barfoo”. Line 7 throws an error because #’ looks up the function binding of a symbol rather than interning a new symbol, and there is no function named BAR987.

Here is the example file src/lambda1.lisp:

1 (defun test ()
2   (let ((my-func
3           (lambda (x) (+ x 1))))
4     (funcall my-func 1)))

Here, we define a function using lambda and set the value of the local variable my-func to the unnamed function’s value. Here is output from the function test:

1 * (test)
2 2
3 
4 * 

The ability to use functions as data is surprisingly useful. For now, we will look at a simple example:

 1 * (defvar f1 #'(lambda (x) (+ x 1)))
 2 
 3 F1
 4 * (funcall f1 100)
 5 
 6 101
 7 * (funcall #'print 100)
 8 
 9 100 
10 100

Notice that the call (funcall #'print 100) prints “100” twice: the first time as a side effect of calling the function print and the second time as the value returned by funcall (the function print returns what it is printing as its value).

Using Recursion

Later, we will see how to use special Common Lisp macros for programming repetitive loops. In this section, we will use recursion for both coding simple loops and as an effective way to solve a variety of problems that can be expressed naturally using recursion.

As usual, the example programs for this section are found in the src directory. In the file src/recursion1.lisp, we see our first example of recursion:

1 ;; a simple loop using recursion
2 
3 (defun recursion1 (value)
4   (format t "entering recursion1(~A)~%" value)
5   (if (< value 5)
6       (recursion1 (1+ value))))

This example is simple, but it is useful for discussing a few points. First, notice how the function recursion1 calls itself with an argument value of one greater than its own input argument only if the input argument “value” is less than 5. This test keeps the function from getting in an infinite loop. Here is some sample output:

 1 * (load "recursion1.lisp")
 2 ;; Loading file recursion1.lisp ...
 3 ;; Loading of file recursion1.lisp is finished.
 4 T
 5 * (recursion1 0)
 6 entering recursion1(0)
 7 entering recursion1(1)
 8 entering recursion1(2)
 9 entering recursion1(3)
10 entering recursion1(4)
11 entering recursion1(5)
12 NIL
13 * (recursion1 -3)
14 entering recursion1(-3)
15 entering recursion1(-2)
16 entering recursion1(-1)
17 entering recursion1(0)
18 entering recursion1(1)
19 entering recursion1(2)
20 entering recursion1(3)
21 entering recursion1(4)
22 entering recursion1(5)
23 NIL
24 * (recursion1 20)
25 entering recursion1(20)
26 NIL
27 * 

Why did the call on line 24 not loop via recursion? Because the input argument is not less than 5, no recursion occurs.
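
Recursion is also a natural fit for processing lists. Here is a minimal sketch (the function my-sum is my own example, not from the book's src directory) that sums the numbers in a list by recursing on the rest of the list:

(defun my-sum (a-list)
  (if (null a-list)
      0                            ;; base case: the empty list sums to 0
      (+ (first a-list)            ;; otherwise, the first element plus
         (my-sum (rest a-list))))) ;; the sum of the rest of the list

;; (my-sum '(1 2 3 4 5)) => 15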

Closures

We have seen that functions can take other functions as arguments and return new functions as values. A function that references an outer lexically scoped variable is called a closure. The example file src/closure1.lisp contains a simple example:

 1 (let* ((fortunes
 2         '("You will become a great Lisp Programmer"
 3           "The force will not be with you"
 4           "Take time for meditation"))
 5        (len (length fortunes))
 6        (index 0))
 7   (defun fortune ()
 8     (let ((new-fortune (nth index fortunes)))
 9       (setq index (1+ index))
10       (if (>= index len) (setq index 0))
11       new-fortune)))

Here the function fortune is defined inside a let form. Because the local variable fortunes is referenced inside the function fortune, the variable fortunes exists after the let form is evaluated. It is important to understand that usually a local variable defined inside a let form “goes out of scope” and can no longer be referenced after the let form is evaluated.

However, in this example, there is no way to access the contents of the variable fortunes except by calling the function fortune. At a minimum, closures are a great way to hide variables. Here is some output from loading the src/closure1.lisp file and calling the function fortune several times:

 1 * (load "closure1.lisp")
 2 ;; Loading file closure1.lisp ...
 3 ;; Loading of file closure1.lisp is finished.
 4 T
 5 * (fortune)
 6 "You will become a great Lisp Programmer"
 7 * (fortune)
 8 "The force will not be with you"
 9 * (fortune)
10 "Take time for meditation"
11 * (fortune)
12 "You will become a great Lisp Programmer"
13 * 
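
Closures are also useful as return values. Here is a minimal sketch (make-counter is my own example name) of a function that returns a new closure each time it is called, with each closure keeping its own private counter variable:

(defun make-counter ()
  (let ((count 0))
    (lambda () (incf count)))) ;; the returned closure captures count

;; * (defvar c1 (make-counter))
;; * (funcall c1)   => 1
;; * (funcall c1)   => 2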

Using the Function eval

In Lisp languages we often say that code is data. The function eval can be used to execute code that is stored as Lisp data. Let’s look at an example:

 1 $ ccl
 2 Clozure Common Lisp Version 1.12  DarwinX8664
 3 ? '(+ 1 2.2)
 4 (+ 1 2.2)
 5 ? (eval '(+ 1 2.2))
 6 3.2
 7 ? (eval '(defun foo2 (x) (+ x x)))
 8 FOO2
 9 ? (foo2 4)
10 8

I leave it up to you, dear reader, how often you are motivated to use eval. In forty years of using Lisp languages my principal use of eval has been in modifying the standard version of the Ops5 programming language for production systems to support features like multiple data worlds, along with new actions to spawn off new data worlds and to remove them. Ops5 works by finding common expressions in a set of production rules (also referred to as “expert systems”) and factoring them into a network (a Rete network, if you want to look it up) with common expressions in rules stored in just a single place. eval is used a lot in Ops5 and I used it for my extensions to Ops5.

Defining Common Lisp Macros

We saw in the last chapter how the Lisp function eval could be used to evaluate arbitrary Lisp code stored in lists. Because eval is inefficient, a better way to generate Lisp code automatically is to define macro expressions that are expanded inline when they are used. In most Common Lisp systems, using eval requires the Lisp compiler to compile a form on-the-fly which is not very efficient. Some Lisp implementations use an interpreter for eval which is likely to be faster but might lead to obscure bugs if the interpreter and compiled code do not function identically.

The ability to add functionality and syntax to the Common Lisp language, to in effect extend the language as needed, is truly a super power of languages like Common Lisp and Scheme.

Example Macro

The file src/macro1.lisp contains both a simple macro and a function that uses the macro. This macro example is a bit contrived since it could be just a function definition, but it does show the process of creating and using a macro. We are using the gensym function to define a new unique symbol to reference a temporary variable:

 1 ;; first simple macro example:
 2 
 3 (defmacro double-list (a-list)
 4   (let ((ret (gensym)))
 5     `(let ((,ret nil))
 6        (dolist (x ,a-list)
 7          (setq ,ret (append ,ret (list x x))))
 8        ,ret)))
 9 
10 ;; use the macro:
11 
12 (defun test (x)
13   (double-list x))

The backquote character seen at the beginning of line 5 is used to quote a list in a special way: nothing in the list is evaluated during macro expansion unless it is immediately preceded by a comma character. In this case, we specify ,a-list because we want the value of the macro’s argument a-list to be substituted into the specially quoted list. We will look at dolist in some detail in the next chapter but for now it is sufficient to understand that dolist is used to iterate through the top-level elements of a list, for example:

1 * (dolist (x '("the" "cat" "bit" "the" "rat"))
2        (print x))
3 "the" 
4 "cat" 
5 "bit" 
6 "the" 
7 "rat" 
8 NIL
9 * 

Notice that the example macro double-list itself uses the macro dolist. It is common to nest macros in the same way functions can be nested.

Returning to our macro example in the file src/macro1.lisp, we will try the function test that uses the macro double-list:

1 * (load "macro1.lisp")
2 ;; Loading file macro1.lisp ...
3 ;; Loading of file macro1.lisp is finished.
4 T
5 * (test '(1 2 3))
6 (1 1 2 2 3 3)
7 *

Using the Splicing Operator

Another similar example is in the file src/macro2.lisp:

 1 ;; another macro example that uses ,@:
 2 
 3 (defmacro double-args (&rest args)
 4   `(let ((ret nil))
 5     (dolist (x ,@args)
 6       (setq ret (append ret (list x x))))
 7     ret))
 8 
 9 ;; use the macro:
10 
11 (defun test (&rest x)
12   (double-args x))

Here, the splicing operator ,@ is used to substitute in the list args in the macro double-args.

Using macroexpand-1

The function macroexpand-1 is used to transform macros with arguments into new Lisp expressions. For example:

1 * (defmacro double (a-number)                       
2         (list '+ a-number a-number))
3 DOUBLE
4 * (macroexpand-1 '(double n))
5 (+ N N) ;
6 T
7 * 
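
We can also use macroexpand-1 on the double-args macro from the previous section to see the code that the splicing operator generated (a sketch; the exact printed layout of the expansion will vary between Common Lisp implementations):

* (macroexpand-1 '(double-args x))
(LET ((RET NIL))
  (DOLIST (X X)
    (SETQ RET (APPEND RET (LIST X X))))
  RET)
T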

Writing macros is an effective way to extend the Lisp language because you can control the code passed to the Common Lisp compiler. In both macro example files, when the function test was defined, the macro expansion is done before the compiler processes the code. We will see in the next chapter several useful macros included in Common Lisp.

We have only “scratched the surface” looking at macros; the interested reader is encouraged to search the web using, for example, “Common Lisp macros.” There are two books in particular that I recommend that take a deep dive into Common Lisp macros: Paul Graham’s “On Lisp” and Doug Hoyte’s “Let Over Lambda.” Both are deep books and will change the way you experience software development. A good plan of study is spending a year absorbing “On Lisp” before tackling “Let Over Lambda.”

Using Common Lisp Loop Macros

In this chapter, we will discuss several useful macros for performing iteration (we saw how to use recursion for iteration in Chapter 2):

  • dolist – a simple way to process the elements of a list
  • dotimes – a simple way to iterate with an integer valued loop variable
  • do – the most general looping macro
  • loop – a complex looping macro that I almost never use in my own code because it does not look “Lisp like.” I don’t use the loop macro in this book. Many programmers do like the loop macro so you are likely to see it when reading other people’s code.

dolist

We saw a quick example of dolist in the last chapter. The arguments of the dolist macro are:

   (dolist (a-variable a-list [optional-result-value])   ...body... )

Usually, the dolist macro returns nil as its value, but we can add a third optional argument which will be returned as the generated expression’s value; for example:

1 * (dolist (a '(1 2) 'done) (print a))
2 1 
3 2 
4 DONE
5 * (dolist (a '(1 2)) (print a))
6 1 
7 2 
8 NIL
9 * 

The first argument to the dolist macro is a local lexically scoped variable. Once the code generated by the dolist macro finishes executing, this variable is undefined.

dotimes

The dotimes macro is used when you need a loop with an integer loop index. The arguments of the dotimes macro are:

   (dotimes (an-index-variable max-index-plus-one [optional-result-value])
         ...body... )

Usually, the dotimes macro returns nil as its value, but we can add a third optional argument that will be returned as the generated expression’s value; for example:

1 * (dotimes (i 3 "all-done-with-test-dotimes-loop") (print i))
2 
3 0 
4 1 
5 2 
6 "all-done-with-test-dotimes-loop"
7 * 

As with the dolist macro, you will often use a let form inside a dotimes macro to declare additional temporary (lexical) variables.

do

The do macro is more general purpose than either dotimes or dolist but it is more complicated to use. Here is the general form for using the do looping macro:

  (do ((variable-1 variable-1-init-value variable-1-update-expression)
          (variable-2 variable-2-init-value variable-2-update-expression)
          .
          .
          (variable-N variable-N-init-value variable-N-update-expression))
        (loop-termination-test  loop-return-value)
        optional-variable-declarations
        expressions-to-be-executed-inside-the-loop)

There is a similar macro do* that is analogous to let* in that loop variable values can depend on the values of previously declared loop variables.

As a simple example, here is a loop to print out the integers from 0 to 3. This example is in the file src/do1.lisp:

;; example do macro use

(do ((i 0 (1+ i)))
        ((> i 3) "value-of-do-loop")
  (print i))

In this example, we only declare one loop variable so we might as well have used the simpler dotimes macro.

Here we load the file src/do1.lisp:

1 * (load "do1.lisp")
2 ;; Loading file do1.lisp ...
3 0 
4 1 
5 2 
6 3 
7 ;; Loading of file do1.lisp is finished.
8 T
9 * 

You will notice that we do not see the return value of the do loop (i.e., the string “value-of-do-loop”) because the top-level form that we are evaluating is a call to the function load; we do see the return value of load printed. If we had manually typed this example loop in the Lisp listener, then you would see the final value value-of-do-loop printed.
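
Here is a minimal sketch (my own example, not in the book's src directory) that uses two loop variables to show the more general form of do, with j counting down as i counts up:

(do ((i 0 (1+ i))
     (j 10 (1- j)))
    ((> i 3) "done")              ;; termination test and return value
  (format t "i=~A j=~A~%" i j))   ;; prints i=0 j=10 through i=3 j=7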

Using the loop Special Form to Iterate Over Vectors or Arrays

We previously used dolist to iterate over the elements in lists. For efficiency we will often use vectors (one dimensional arrays), and we can use the loop macro to similarly iterate over vectors:

  (loop for td across testdata
     do
       (print td))

where testdata is a one dimensional array (a vector) and inside the do block the local variable td is bound to each element in the vector.
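
Here is a minimal self-contained sketch of this pattern:

(let ((testdata (vector 10 20 30)))
  (loop for td across testdata
        do (print td))) ;; prints 10, 20, and 30 on separate lines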

Common Lisp Package System

In later chapters we will see two complete applications that are defined as Quicklisp projects: the chapter on the Knowledge Graph Creator and the chapter on the Knowledge Graph Navigator. Another example of setting up a Quicklisp project can be seen in the chapter Plotting Data.

While these later chapters provide practical examples for bundling up your own projects in packages, the material here will give you general background information that you should know.

In the simple examples that we have seen so far, all newly created Lisp symbols have been placed in the default package. You can always check the current package by evaluating the variable *package*:

> *package*
#<PACKAGE COMMON-LISP-USER>
> 

As we will use in the following example, :cl is a nickname for the package :common-lisp.

We will define a new package :my-new-package and two functions foo1 and foo2 inside the package. Externally to this package, assuming that it is loaded, we can access foo2 using my-new-package:foo2. foo1 is not exported so it cannot be accessed this way. However, we can always start a symbol name with a package name and two colon characters if we want to use a symbol defined in another package so we can use my-new-package::foo1. Using :: allows us access to symbols not explicitly exported.

When I leave package :my-new-package on line 22 and return to package :cl, notice that an error is thrown when I try to access my-new-package:foo1.

On line 3 we define the nickname :p1 for the package :my-new-package and we use this nickname on line 44. The main point of the following example is that we define two functions in a package but only export one of these functions. By default the other function is not visible outside of the new package.

 1 * (defpackage "MY-NEW-PACKAGE"
 2     (:use :cl)
 3     (:nicknames "P1")
 4     (:export :FOO2))
 5 
 6 #<PACKAGE "MY-NEW-PACKAGE">
 7 * (in-package my-new-package)
 8 
 9 #<PACKAGE "MY-NEW-PACKAGE">
10 * (defun foo1 () "foo1")
11 
12 FOO1
13 * (defun foo2 () "foo2")
14 
15 FOO2
16 * (foo1)
17 
18 "foo1"
19 * (foo2)
20 
21 "foo2"
22 * (in-package :cl)
23 
24 #<PACKAGE "COMMON-LISP">
25 * (my-new-package:foo2)
26 
27 "foo2"
28 * (my-new-package:foo1)
29 
30 debugger invoked on a SB-INT:SIMPLE-READER-PACKAGE-ERROR in thread
31 #<THREAD "main thread" RUNNING {1001F1ECE3}>:
32   The symbol "FOO1" is not external in the MY-NEW-PACKAGE package.
33 
34     Stream: #<SYNONYM-STREAM :SYMBOL SB-SYS:*STDIN* {100001C343}>
35 
36 Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
37 
38 restarts (invokable by number or by possibly-abbreviated name):
39   0: [CONTINUE] Use symbol anyway.
40   1: [ABORT   ] Exit debugger, returning to top level.
41 
42 * 1
43  
44 * (p1:foo2)
45 
46 "foo2"

Since we specified a nickname in the defpackage expression, Common Lisp allows the use of the nickname (in this case P1) in calling function foo2 that is exported from package :my-new-package.

Near the end of the last example, we switched back to the package COMMON-LISP so we had to specify the package name (here the nickname p1) when calling the function foo2 on line 44.

What about the error on line 28 where my-new-package:foo1 is inaccessible because the function foo1 is not exported (see line 4)? It turns out that you can easily use symbols not exported from a package by using :: instead of a single :. Here, (my-new-package::foo1) would work.

When you are writing very large Common Lisp programs, it is useful to be able to break up the program into different modules and place each module and all its required data in different name spaces by creating new packages. Remember that all symbols, including variables, generated symbols, CLOS methods, functions, and macros are in some package.

For small packages I sometimes put a defpackage expression at the top of the file immediately followed by an in-package expression to switch to the new package. In the general case, please properly use separate project and asdf files as I do in the later chapters Knowledge Graph Creator and Knowledge Graph Navigator.
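
Here is a minimal sketch of that single-file style (my-utils is a hypothetical package name):

;; top of a small single-file library:
(defpackage #:my-utils
  (:use #:cl)
  (:export #:double))

(in-package #:my-utils)

(defun double (x) (* 2 x))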

Input and Output

We will see that the input and output of Lisp data is handled using streams. Streams are powerful abstractions that support common libraries of functions for writing to the terminal, files, sockets, and to strings.

In all cases, if an input or output function is called without specifying a stream, the default input stream is *standard-input* and the default output stream is *standard-output*. These default streams are connected to the Lisp listener that we discussed in Chapter 2. In the later chapter on the Knowledge Graph Navigator application, which has a user interface, we will use output streams bound to different scrolling output areas of the application window to write color-highlighted text. The stream formalism is general purpose, covering many common I/O use cases.

The Lisp read and read-line Functions

The function read is used to read one Lisp expression. The function read stops reading after reading one expression and ignores new line characters. We will look at a simple example of reading a file test.dat using the example Lisp program in the file read-test-1.lisp. Both of these files can be found in the directory src/code_snippets_for_book that came bundled with this web book; start your Lisp repl in that directory. The contents of the file test.dat is:

1 1 2 3
2 4 "the cat bit the rat"
3         read with-open-file

In the function read-test-1, we use the macro with-open-file to read from a file. To write to a file (which we will do later), we can use the keyword arguments :direction :output. The first argument to the macro with-open-file is a symbol that is bound to a newly created input stream (or an output stream if we are writing a file); this symbol can then be used in calling any function that expects a stream argument.

Notice that we call the function read with three arguments: an input stream, a flag to indicate whether an error should be signaled when the end of the file is reached, and the value that read should return if the end of the file (or stream) is reached and no error is signaled. When calling read with these three arguments, either the next expression from the file test.dat will be returned, or the value nil will be returned when the end of the file is reached. If we do reach the end of the file, the local variable x will be assigned the value nil and the return macro will break out of the dotimes loop. One big advantage of using the macro with-open-file over using the open function (which we will not cover) is that the file stream is automatically closed when leaving the code generated by the with-open-file macro. The contents of the file read-test-1.lisp is:

(defun read-test-1 ()
  "read a maximum of 1000 expressions from the file 'test.dat'"
  (with-open-file
   (input-stream "test.dat" :direction :input)
   (dotimes (i 1000)
     (let ((x (read input-stream nil nil)))
       (if (null x) (return)) ;; break out of the 'dotimes' loop
       (format t "next expression in file: ~S~%" x)))))

Here is the output that you will see if you load the file read-test-1.lisp and execute the expression (read-test-1):

 1 * (load "read-test-1.lisp")
 2 ;; Loading file read-test-1.lisp ...
 3 ;; Loading of file read-test-1.lisp is finished.
 4 T
 5 * (read-test-1)
 6 next expression in file: 1
 7 next expression in file: 2
 8 next expression in file: 3
 9 next expression in file: 4
10 next expression in file: "the cat bit the rat"
11 NIL

Note: the string “the cat bit the rat” prints as a string (with quotes) because we used a ~S instead of a ~A in the format string in the call to function format.
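
As a quick sketch of the difference between the two directives:

* (format t "~A~%" "the cat bit the rat")
the cat bit the rat
NIL
* (format t "~S~%" "the cat bit the rat")
"the cat bit the rat"
NIL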

In this last example, we passed the file name as a string to the macro with-open-file. This is not generally portable across all operating systems. Instead, we could have created a pathname object and passed that instead. The make-pathname function can take eight different keyword arguments, but we will use only the two most common in the example in the file read-test-2.lisp in the src directory. The following listing shows just the differences between this example and the last:

  (let ((a-path-name
          (make-pathname :directory "testdata"
                         :name "test.dat")))
    (with-open-file
     (input-stream a-path-name :direction :input)

Here, we are specifying that we want to use the file test.dat in the subdirectory testdata. Note: I almost never use pathnames. Instead, I specify files using a string and the character / as a directory delimiter. I find this to be portable for the Macintosh, Windows, and Linux operating systems using all Common Lisp implementations.

The file readline-test.lisp is identical to the file read-test-1.lisp except that we call the function read-line instead of the function read and we change the output format message to indicate that an entire line of text has been read:

(defun readline-test ()
  "read a maximum of 1000 expressions from the file 'test.dat'"
  (with-open-file
   (input-stream "test.dat" :direction :input)
   (dotimes (i 1000)
     (let ((x (read-line input-stream nil nil)))
       (if (null x) (return)) ;; break out of the 'dotimes' loop
       (format t "next line in file: ~S~%" x)))))

When we execute the expression (readline-test), notice that the string contained in the second line of the input file has the quote characters escaped:

1 * (load "readline-test.lisp")
2 ;; Loading file readline-test.lisp ...
3 ;; Loading of file readline-test.lisp is finished.
4 T
5 * (readline-test)
6 next line in file: "1 2 3"
7 next line in file: "4 \"the cat bit the rat\""
8 NIL
9 *

We can also create an input stream from the contents of a string. The file read-from-string-test.lisp is very similar to the example file read-test-1.lisp except that we use the macro with-input-from-string (notice how I escaped the quote characters used inside the test string):

(defun read-from-string-test ()
  "read a maximum of 1000 expressions from a string"
  (let ((str "1 2 \"My parrot is named Brady.\" (11 22)"))
    (with-input-from-string
     (input-stream str)
     (dotimes (i 1000)
       (let ((x (read input-stream nil nil)))
         (if (null x) (return)) ;; break out of the 'dotimes' loop
         (format t "next expression in string: ~S~%" x))))))

We see the following output when we load the file read-from-string-test.lisp:

 1 * (load "read-from-string-test.lisp")
 2 ;; Loading file read-from-string-test.lisp ...
 3 ;; Loading of file read-from-string-test.lisp is finished.
 4 T
 5 * (read-from-string-test)
 6 next expression in string: 1
 7 next expression in string: 2
 8 next expression in string: "My parrot is named Brady."
 9 next expression in string: (11 22)
10 NIL
11 *

We have seen how the stream abstraction is useful for allowing the same operations on a variety of stream data. In the next section, we will see that this generality also applies to the Lisp printing functions.

Lisp Printing Functions

All of the printing functions that we will look at in this section take an optional last argument that is an output stream. The exception is the format function that can take a stream value as its first argument (or t to indicate *standard-output*, or a nil value to indicate that format should return a string value).

Here is an example of specifying the optional stream argument:

1 * (print "testing")
2 
3 "testing" 
4 "testing"
5 * (print "testing" *standard-output*)
6 
7 "testing" 
8 "testing"
9 *

The function print prints Lisp objects so that they can be read back using function read. The corresponding function princ is used to print for “human consumption”. For example:

1 * (print "testing")
2 
3 "testing" 
4 "testing"
5 * (princ "testing")
6 testing
7 "testing"
8 * 

Both print and princ return their first argument as their return value, which you see in the previous output. Notice that princ also does not print a new line character, so princ is often used with terpri (which also takes an optional stream argument).

We have also seen many examples in this book of using the format function. Here is a different use of format, building a string by specifying the value nil for the first argument:

1 * (let ((l1 '(1 2))
2            (x 3.14159))
3        (format nil "~A~A" l1 x))
4 "(1 2)3.14159"
5 * 

We have not yet seen an example of writing to a file. Here, we will use the with-open-file macro with options to write a file and to delete any existing file with the same name:

(with-open-file (out-stream "test1.dat"
                   :direction :output
                   :if-exists :supersede)
       (print "the cat ran down the road" out-stream)
       (format out-stream "1 + 2 is: ~A~%" (+ 1 2))
       (princ "Stoking!!" out-stream)
       (terpri out-stream))

Here is the result of evaluating this expression (i.e., the contents of the newly created file test1.dat in the src directory):

1 % cat test1.dat 
2 
3 "the cat ran down the road" 1 + 2 is: 3
4 Stoking!!

Notice that print generates a new line character before printing its argument.

Plotting Data

We will use Zach Beane’s vecto library for plotting data, with the results written to files. Ideally we would like to have interactive plotting capability, but for the purposes of this book I need to support all combinations of Common Lisp implementations and operating systems, and interactive plotting libraries are usually implementation and OS dependent. The plotlib library that we develop here is used in later chapters, for example the chapter on Backpropagation Neural Networks.

Implementing the Library

The examples here are all contained in the directory src/plotlib and are packaged as a Quicklisp loadable library. This library will be used in later chapters.

When I work on my macOS laptop, I leave the output graphics file open in the Preview app, and whenever I rerun a program producing graphics in the repl, making the Preview window active refreshes the graphics display.

PNG file generated by running plotlib test

The following listing shows the file plotlib.lisp that is a simple wrapper for the vecto Common Lisp plotting library. Please note that I only implemented wrappers for vecto functionality that I need for later examples in this book, so the following code is not particularly general but should be easy enough for you to extend for the specific needs of your projects.

 1 ;; Misc. plotting examples using the vecto library
 2 
 3 (ql:quickload :vecto) ;; Zach Beane's plotting library
 4 (defpackage #:plotlib
 5   (:use #:cl #:vecto))
 6 
 7 (in-package #:plotlib)
 8 
 9 ;; the coordinate (0,0) is the lower left corner of the plotting area.
10 ;; Increasing the y coordinate is "up page" and increasing x is "to the right"
11 
12 ;; fills a rectangle with a gray-scale value
13 (defun plot-fill-rect (x y width height gscale) ; 0 < gscale < 1
14   (set-rgb-fill gscale gscale gscale)
15   (rectangle x y width height)
16   (fill-path))
17 
18 ;; plots a frame rectangle
19 (defun plot-frame-rect (x y width height)
20   (set-line-width 1)
21   (set-rgb-fill 1 1 1)
22   (rectangle x y width height)
23   (stroke))
24 
25 (defun plot-line(x1 y1 x2 y2)
26   (set-line-width 1)
27   (set-rgb-fill 0 0 0)
28   (move-to x1 y1)
29   (line-to x2 y2)
30   (stroke))
31 
32 (defun plot-string(x y str)
33   (let ((font (get-font "OpenSans-Regular.ttf")))
34     (set-font font 15)
35     (set-rgb-fill 0 0 0)
36     (draw-string x y str)))
37 
38 (defun plot-string-bold(x y str)
39   (let ((font (get-font "OpenSans-Bold.ttf")))
40     (set-font font 15)
41     (set-rgb-fill 0 0 0)
42     (draw-string x y str)))
43 
44 
45 (defun test-plotlib (file)
46   (with-canvas (:width 90 :height 90)
47     (plot-fill-rect 5 10 15 30 0.2) ; black
48     (plot-fill-rect 25 30 30 7 0.7) ; medium gray
49     (plot-frame-rect 10 50 30 7)
50     (plot-line 90 5 10 5)
51     (plot-string 10 65 "test 1 2 3")
52     (plot-string-bold 10 78 "Hello")
53     (save-png file)))
54 
55 ;;(test-plotlib "test-plotlib.png")

This plot library is used in later examples in the chapters on search, backpropagation neural networks, and Hopfield neural networks. I prefer using implementation- and operating-system-specific plotting libraries for generating interactive plots, but the advantage of writing plot data to a file using the vecto library is that the code is portable across operating systems and Common Lisp implementations.

Packaging as a Quicklisp Project

The two files src/plotlib/plotlib.asd and src/plotlib/package.lisp configure the library. The file package.lisp declares the required library vecto and lists the functions that are publicly exported from the library:

(defpackage #:plotlib
  (:use #:cl #:vecto)
  (:export save-png plot-fill-rect plot-frame-rect
     plot-size-rect plot-line plot-string plot-string-bold
     pen-width))

To run the test function provided with this library, load the library and preface function names with the package name. Note that test-plotlib is not in the export list, so we access it with two colons:

(ql:quickload "plotlib")
(plotlib::test-plotlib "test-plotlib.png")

In addition to a package.lisp file we also use a system definition file with the extension .asd:

(asdf:defsystem #:plotlib
  :description "Describe plotlib here"
  :author "mark.watson@gmail.com"
  :license "Apache 2"
  :depends-on (#:vecto)
  :components ((:file "package")
               (:file "plotlib")))

If you have specified a dependency that is not already downloaded to your computer, Quicklisp will install the dependency for you.

Common Lisp Object System - CLOS

CLOS was the first ANSI standardized object oriented programming facility. While I do not use classes and objects as often in my Common Lisp programs as I do when using Java and Smalltalk, it is difficult to imagine a Common Lisp program of any size that did not define and use at least a few CLOS classes.

The example program for this chapter is in the file src/loving_snippets/HTMLstream.lisp. I used this CLOS class about ten years ago in a demo for my commercial natural language processing product to automatically generate demo web pages.

We are going to start our discussion of CLOS somewhat backwards by first looking at a short test function that uses the HTMLstream class. Once we see how to use this example CLOS class, we will introduce a small subset of CLOS by discussing in some detail the implementation of the HTMLstream class and finally, at the end of the chapter, see a few more CLOS programming techniques. This book only provides a brief introduction to CLOS; the interested reader is encouraged to do a web search for “CLOS tutorial”.

The macros and functions defined to implement CLOS are a standard part of Common Lisp. Common Lisp supports generic functions, that is, functions whose behavior is split across multiple methods with the same name that are distinguished by the types of their arguments.

Example of Using a CLOS Class

The file src/loving_snippets/HTMLstream.lisp contains a short test program at the end of the file:

1 (defun test (&aux x)
2     (setq x (make-instance 'HTMLstream))
3     (set-header x "test page")
4     (add-element x "test text - this could be any element")
5     (add-table
6          x
7          '(("<b>Key phrase</b>" "<b>Ranking value</b>")
8            ("this is a test" 3.3)))
9     (get-html-string x))

The generic function make-instance takes the following arguments:

1     make-instance class-name &rest initial-arguments &key ...

There are four generic functions used in the function test:

  • set-header - required to initialize class and also defines the page title
  • add-element - used to insert a string that defines any type of HTML element
  • add-table - takes a list of lists and uses the list data to construct an HTML table
  • get-html-string - closes the stream and returns all generated HTML data as a string

The first thing to notice in the function test is that the first argument for calling each of these generic functions is an instance of the class HTMLstream. You are also free to define other methods, for example an add-element method that specializes on a class other than HTMLstream, and calls to add-element will be dispatched to the correct method definition based on the argument types.

We will see that the macro defmethod acts similarly to defun except that it allows us to define many methods (i.e., functions for a class) with the same function name that are differentiated by the types of their arguments.
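
Here is a minimal sketch of this kind of dispatch (the class and method names are hypothetical and not from the book's example files):

(defclass dog () ())
(defclass cat () ())

;; two methods share the generic function name speak; calls are
;; dispatched on the class of the argument:
(defmethod speak ((d dog)) "Woof")
(defmethod speak ((c cat)) "Meow")

;; (speak (make-instance 'dog)) => "Woof"
;; (speak (make-instance 'cat)) => "Meow"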

Implementation of the HTMLstream Class

The class HTMLstream is very simple and will serve as a reasonable introduction to CLOS programming. Later we will see more complicated class examples that use multiple inheritance. Still, this is a good example because the code is simple and the author uses this class frequently (some proof that it is useful!). The code fragments listed in this section are all contained in the file src/loving_snippets/HTMLstream.lisp. We start defining a new class using the macro defclass that takes the following arguments:

1     defclass class-name list-of-super-classes
2              list-of-slot-specifications class-specifications

The class definition for HTMLstream is fairly simple:

1 (defclass HTMLstream ()
2   ((out :accessor out))
3   (:documentation "Provide HTML generation services"))

Here, the class name is HTMLstream, the list of super classes is an empty list (), the list of slot specifications contains only one slot specification for the slot named out, and there is only one class specification: a documentation string. Slots are like instance variables in languages like Java and Smalltalk. Most CLOS classes inherit from at least one super class but we will wait until the next section to see examples of inheritance. There is only one slot (or instance variable) and we define an accessor with the same name as the slot. It is a personal preference of mine to name read/write accessors with the same name as the slot.

The method set-header initializes the string output stream used internally by an instance of this class. This method uses the convenience macro with-accessors that binds a set of local variables to one or more class slot accessors. We will list the entire method and then discuss it:

1 (defmethod set-header ((ho HTMLstream) title)
2   (with-accessors
3       ((out out))
4       ho
5     (setf out (make-string-output-stream))
6     (princ "<HTML><head><title>" out)
7     (princ title out)
8     (princ "</title></head><BODY>" out)
9     (terpri out)))

The first interesting thing to notice about the defmethod is the argument list: there are two arguments ho and title but we are constraining the argument ho to be either a member of the class HTMLstream or a subclass of HTMLstream. Now, it makes sense that since we are passing an instance of the class HTMLstream to this generic function (or method – I use the terms “generic function” and “method” interchangeably) we would want access to the slot defined for this class. The convenience macro with-accessors is exactly what we need to get read and write access to the slot inside a generic function (or method) for this class. In the term ((out out)), the first out is a local variable bound to the value of the slot named out for this instance ho of class HTMLstream. Inside the with-accessors macro, we can now use setf to set the slot value to a new string output stream. Note: we have not covered the Common Lisp type string-output-stream yet in this book, but we will explain its use on the next page.

By the time a call to the method set-header (with arguments of an HTMLstream instance and a string title) finishes, the instance has its slot set to a new string-output-stream and HTML header information is written to the newly created string output stream. Note: this string output stream is now available for use by any class methods called after set-header.

There are several methods defined in the file src/loving_snippets/HTMLstream.lisp, but we will look at just four of them: add-H1, add-element, add-table, and get-html-string. The remaining methods are very similar to add-H1 and the reader can read the code in the source file.

As in the method set-header, the method add-H1 uses the macro with-accessors to access the string output stream slot as a local variable out. In add-H1 we use the function princ that we discussed in the Chapter on Input and Output to write HTML text to the string output stream:

1 (defmethod add-H1 ((ho HTMLstream) some-text)
2   (with-accessors
3    ((out out))
4    ho
5    (princ "<H1>" out)
6    (princ some-text out)
7    (princ "</H1>" out)
8    (terpri out)))

The method add-element is very similar to add-H1 except that the string passed as the second argument element is written directly to the string output stream slot:

1 (defmethod add-element ((ho HTMLstream) element)
2   (with-accessors
3       ((out out))
4       ho
5     (princ element out)
6     (terpri out)))

The method add-table converts a list of lists into an HTML table. The Common Lisp function princ-to-string is a useful utility function for writing the value of any variable to a string. The functions string-left-trim and string-right-trim are string utility functions that take two arguments: a list of characters and a string and respectively remove these characters from either the left or right side of a string. Note: another similar function that takes the same arguments is string-trim that removes characters from both the front (left) and end (right) of a string. All three of these functions do not modify the second string argument; they return a new string value. Here is the definition of the add-table method:

 1 (defmethod add-table ((ho HTMLstream) table-data)
 2   (with-accessors
 3       ((out out))
 4       ho
 5     (princ "<TABLE BORDER=\"1\" WIDTH=\"100\%\">" out)
 6     (dolist (d table-data)
 7       (terpri out)
 8       (princ "  <TR>" out)
 9       (terpri out)
10       (dolist (w d)
11         (princ "    <TD>" out)
12         (let ((str (princ-to-string w)))
13           (setq str (string-left-trim '(#\() str))
14           (setq str (string-right-trim '(#\)) str))
15           (princ str out))
16         (princ "</TD>" out)
17         (terpri out))
18       (princ "  </TR>" out)
19       (terpri out))
20     (princ "</TABLE>" out)
21     (terpri out)))

The method get-html-string gets the string stored in the string output stream slot by using the function get-output-stream-string:

1 (defmethod get-html-string ((ho HTMLstream))
2   (with-accessors
3       ((out out))
4       ho
5   (princ "</BODY></HTML>" out)
6   (terpri out)
7   (get-output-stream-string out)))

CLOS is a rich framework for object oriented programming, providing a superset of features found in languages like Java, Ruby, and Smalltalk. I have barely scratched the surface in this short CLOS example for generating HTML. Later in the book, whenever you see calls to make-instance, that lets you know we are using CLOS even if I don’t specifically mention CLOS in the examples.

Using Defstruct or CLOS

You might notice from my own code that I use the Common Lisp defstruct macro to define data structures more often than I use CLOS. The defclass macro used to create CLOS classes is much more flexible, but for simple data structures I find that using defstruct is much more concise. In the simplest case, a defstruct can be just the name of the new type followed by slot names. For each slot like my-slot-1, accessor functions are generated automatically. Here is a simple example:

1 $ ccl
2 Clozure Common Lisp Version 1.12  DarwinX8664
3 ? (defstruct struct1 s1 s2)
4 STRUCT1
5 ? (make-struct1 :s1 1 :s2 2)
6 #S(STRUCT1 :S1 1 :S2 2)
7 ? (struct1-s1 (make-struct1 :s1 1 :s2 2))
8 1

We defined a struct struct1 on line 3 with two slots named s1 and s2, show the use of the automatically generated constructor make-struct1 on line 5, and use one of the two automatically generated accessor functions struct1-s1 on line 7. The names of accessor functions are formed from the structure name and the slot name.
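
defstruct also generates a type predicate and a copier, and the accessor functions work with setf. Here is a quick repl sketch continuing the session above (the variable *s* is introduced here just for illustration):

? (defvar *s* (make-struct1 :s1 1 :s2 2))
*S*
? (struct1-p *s*)            ; generated predicate
T
? (setf (struct1-s2 *s*) 99) ; accessors are setf-able
99
? *s*
#S(STRUCT1 :S1 1 :S2 99)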

Network Programming

Distributed computing is pervasive: you need to look no further than the World Wide Web, Internet chat, etc. Of course, as a Lisp programmer, you will want to do at least some of your network programming in Lisp! The previous editions of this book provided low level socket network programming examples. I decided that for this new edition, I would remove those examples and instead encourage you to “move further up the food chain” and work at a higher level of abstraction that makes sense for the projects you will likely be developing. Starting in the 1980s, a lot of my work entailed low level socket programming for distributed networked applications. As I write this, it is 2013, and there are better ways to structure distributed applications.

Specifically, since many of the examples later in this book fetch information from the web and linked data sources, we will start by learning how to use Edi Weitz’s Drakma HTTP client library. In order to have a complete client server example we will also look briefly at Edi Weitz’s Hunchentoot web server, including a web service that uses JSON as a data serialization format. I used to use XML for data serialization but JSON has many advantages: it is easier for a human to read and it plays nicely with Javascript code and some data stores like Postgres (new in versions 9.x), MongoDB, and CouchDB that support JSON as a native data format.

The code snippets in the first two sections of this chapter are derived from examples in the Drakma and Hunchentoot documentation.

An introduction to Drakma

Edi Weitz’s Drakma library supports fetching data via HTTP requests. As you can see in the Drakma documentation, you can use this library for authenticated HTTP requests (i.e., it allows you to access web sites that require a login); it supports HTTP GET and PUT operations and deals with cookies. The top level API that we will use is drakma:http-request, which returns multiple values. In the following example, I want only the first three values and ignore the others, like the original URI that was fetched and an IO stream object. We use the built-in Common Lisp macro multiple-value-setq:

1 * (ql:quickload :drakma)
2 * (multiple-value-setq
3     (data http-response-code headers)
4     (drakma:http-request "http://markwatson.com"))

I manually formatted the last statement entered in this repl listing and I will continue to manually edit the repl listings in the rest of this book to make them more easily readable.

The following shows some of the data bound to the variables data, http-response-code, and headers:

1 * data
2 
3 "<!DOCTYPE html>
4 <html>
5   <head>
6 	<title>Mark Watson: Consultant and Author</title>

The value of http-response-code is 200 which means that there were no errors:

1 * http-response-code
2 
3 200

The HTTP response headers will be useful in many applications; for fetching the home page of my web site the headers are:

 1 * headers
 2 
 3 ((:SERVER . "nginx/1.1.19")
 4  (:DATE . "Fri, 05 Jul 2013 15:18:27 GMT")
 5  (:CONTENT-TYPE . "text/html; charset=utf-8")
 6  (:TRANSFER-ENCODING . "chunked")
 7  (:CONNECTION . "close")
 8  (:SET-COOKIE
 9    .
10    "ring-session=cec5d7ba-e4da-4bf4-b05e-aff670e0dd10;Path=/"))

We will use Drakma later in this book for several examples. In the next section we will write a web app using Hunchentoot and test it with a Drakma client.

An introduction to Hunchentoot

Edi Weitz’s Hunchentoot project is a flexible library for writing web applications and web services. We will also use Edi’s CL-WHO library in this section for generating HTML from Lisp code. Hunchentoot will be installed the first time you quickload it in the example code for this section:

1 (ql:quickload "hunchentoot")

I will use only the easy handler framework in the Hunchentoot examples in this section. I leave it to you to read the documentation on using custom acceptors after you experiment with the examples in this section.

The following code will work for both multi-threading installations of SBCL and single thread installations (e.g., some default installations of SBCL on OS X):

 1 (ql:quickload :hunchentoot)
 2 (ql:quickload :cl-who)
 3 
 4 (in-package :cl-user)
 5 (defpackage hdemo
 6   (:use :cl
 7         :cl-who
 8         :hunchentoot))
 9 (in-package :hdemo)
10 
11 (defvar *h* (make-instance 'easy-acceptor :port 3000))
12 
13 ;; define a handler with the arbitrary name my-greetings:
14 
15 (define-easy-handler (my-greetings :uri "/hello") (name)
16   (setf (hunchentoot:content-type*) "text/html")
17   (with-html-output-to-string (*standard-output* nil :prologue t)
18     (:html
19      (:head (:title "hunchentoot test"))
20      (:body
21       (:h1 "hunchentoot form demo")
22       (:form
23        :method :post
24        (:input :type :text
25 	       :name "name"
26 	       :value name)
27        (:input :type :submit :value "Submit your name"))
28       (:p "Hello " (str name))))))
29 
30 (hunchentoot:start *h*)

In lines 5 through 9 we create and use a new package that includes support for generating HTML in Lisp code (CL-WHO) and the Hunchentoot library. On line 11 we create an instance of an easy acceptor on port 3000 that provides useful default behaviors for providing HTTP services.

The Hunchentoot macro define-easy-handler is used in lines 15 through 28 to define an HTTP request handler and add it to the easy acceptor instance. The first argument, my-greetings in this example, is an arbitrary name and the keyword :uri argument provides a URL pattern that the easy acceptor server object uses to route requests to this handler. For example, when you run this example on your computer, this URL routing pattern would handle requests like:

1 http://localhost:3000/hello

In lines 17 through 28 we are using the CL-WHO library to generate HTML for a web page. As you might guess, :html generates the outer <html></html> tags for a web page. Line 19 would generate HTML like:

1   <head>
2 	<title>hunchentoot test</title>
3   </head>

Lines 22 through 27 generate an HTML input form and line 28 displays any value the user entered in the input field before clicking the submit button. Notice the handler argument name on the first line of the easy handler definition (line 15). If the argument name is not supplied, its nil value is displayed on line 28 as an empty string.

You should run this example and access the generated web page in a web browser, and enter text, submit, etc. You can also fetch the generated page HTML using the Drakma library that we saw in the last section. Here is a code snippet using the Drakma client library to access this last example:

 1 * (drakma:http-request "http://127.0.0.1:3000/hello?name=Mark")
 2 
 3 "Hello Mark"
 4 200
 5 ((:CONTENT-LENGTH . "10")
 6  (:DATE . "Fri, 05 Jul 2013 15:57:22 GMT")
 7  (:SERVER . "Hunchentoot 1.2.18")
 8  (:CONNECTION . "Close")
 9  (:CONTENT-TYPE . "text/plain; charset=utf-8"))
10 #<PURI:URI http://127.0.0.1:3000/hello?name=Mark>
11 #<FLEXI-STREAMS:FLEXI-IO-STREAM {10095654A3}>
12 T
13 "OK"

We will use both Drakma and Hunchentoot in the next section.

Complete REST Client Server Example Using JSON for Data Serialization

A reasonable way to build modern distributed systems is to write REST web services that serve JSON data to client applications. These client applications might be rich web apps written in Javascript, other web services, and applications running on smartphones that fetch and save data to a remote web service.

We will use the cl-json Quicklisp package to encode Lisp data into a string representing JSON encoded data. Here is a quick example:

1 * (ql:quickload :cl-json)
2 * (defvar y (list (list '(cat . "the cat ran") '(dog . 101)) 1 2 3 4 5))
3 
4 Y
5 * y
6 
7 (((CAT . "the cat ran") (DOG . 101)) 1 2 3 4 5)
8 * (json:encode-json-to-string y)
9 "[{\"cat\":\"the cat ran\",\"dog\":101},1,2,3,4,5]"

The following list shows the contents of the file src/web-hunchentoot-json.lisp:

 1 (ql:quickload :hunchentoot)
 2 (ql:quickload :cl-json)
 3 
 4 (defvar *h* (make-instance 'hunchentoot:easy-acceptor :port 3000))
 5 
 6 ;; define a handler with the name animal:
 7 
 8 (hunchentoot:define-easy-handler (animal :uri "/animal") (name)
 9   (print name)
10   (setf (hunchentoot:content-type*) "text/plain")
11   (cond
12     ((string-equal name "cat")
13      (json:encode-json-to-string
14        (list
15         (list
16          '(average_weight . 10)
17          '(friendly . nil))
18         "A cat can live indoors or outdoors.")))
19     ((string-equal name "dog")
20      (json:encode-json-to-string
21        (list
22         (list
23          '(average_weight . 40)
24          '(friendly . t))
25         "A dog is a loyal creature, much valued by humans.")))
26     (t
27      (json:encode-json-to-string
28        (list
29         ()
30         "unknown type of animal")))))
31 
32 (hunchentoot:start *h*)

This example is very similar to the web application example in the last section. The difference is that this application is not intended to be viewed on a web page because it returns JSON data as HTTP responses. The easy handler definition on line 8 specifies a handler argument name. In lines 12 and 19 we check to see if the value of the argument name is “cat” or “dog” and if it is, we return the appropriate JSON example data for those animals. If there is no match, the default cond clause starting on line 26 returns a warning string as a JSON encoded string.

While running this test service in one repl, you can use the Drakma library in another repl to test it (not all output is shown in the next listing):

 1 * (ql:quickload :drakma)
 2 * (drakma:http-request "http://127.0.0.1:3000/animal?name=dog")
 3 
 4 "[{\"average_weight\":40,
 5    \"friendly\":true},
 6    \"A dog is a loyal creature, much valued by humans.\"]"
 7 200
 8 * (drakma:http-request "http://127.0.0.1:3000/animal?name=cat")
 9 
10 "[{\"average_weight\":10,
11    \"friendly\":null},
12    \"A cat can live indoors or outdoors.\"]"
13 200

You can use the cl-json library to decode a string containing JSON data to Lisp data:

 1 * (ql:quickload :cl-json)
 2 To load "cl-json":
 3   Load 1 ASDF system:
 4     cl-json
 5 ; Loading "cl-json"
 6 .
 7 (:CL-JSON)
 8 * (cl-json:decode-json-from-string
 9     (drakma:http-request "http://127.0.0.1:3000/animal?name=dog"))
10 
11 (((:AVERAGE--WEIGHT . 40) (:FRIENDLY . T))
12  "A dog is a loyal creature, much valued by humans.")

For most of my work, REST web services are “read-only” in the sense that clients don’t modify state on the server. However, there are use cases where a client application needs to modify server state; for example, letting clients add new animals to the last example:

 1 (defparameter *animal-hash* (make-hash-table :test #'equal))
 2 
 3 ;; handle HTTP POST requests:
 4 (hunchentoot:define-easy-handler (some-handler :uri "/add") ()
 5   (setf (hunchentoot:content-type*) "text/plain")
 6   (let* ((data-string (hunchentoot:raw-post-data :force-text t))
 7          (data (cl-json:decode-json-from-string data-string))
 8          ;; cl-json decodes JSON objects into alists with keyword keys:
 9          (animal-name (cdr (assoc :name data))))
10     (setf (gethash animal-name *animal-hash*) data))
11   "OK")

In line 4 we define an additional easy handler. The raw data from the HTTP POST body is read as a string and decoded into Lisp data in lines 6 and 7. Since cl-json decodes JSON objects into association lists with keyword keys, we look up the animal name with assoc in line 9. We save the data to the global hash table *animal-hash* (note that the hash table uses an equal test because its keys are strings).

In this example, we are storing data sent from a client in an in-memory hash table. In a real application new data might be stored in a database.
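
Here is a hypothetical client call for this handler, assuming the server defined above is running locally on port 3000 (the JSON field names are just for illustration):

* (drakma:http-request "http://127.0.0.1:3000/add"
                       :method :post
                       :content-type "application/json"
                       :content (cl-json:encode-json-to-string
                                 '((:name . "parrot") (:friendly . t))))
"OK"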

Network Programming Wrap Up

You have learned the basics for writing web services and writing clients to use web services. Later, we will use web services written in Python by writing Common Lisp clients: we will wrap pre-trained deep learning models and access them from Common Lisp.

Using the Brave Search APIs

Note: I started using the Brave search APIs in June 2024 and replaced the Microsoft Bing search chapter in previous editions with the following material.

You will need to get a free API key at https://brave.com/search/api/ to use the following code examples. You can use the search API 2000 times a month for free or pay $5/month to get 20 million API calls a month.

Setting an Environment Variable for the Access Key for Brave Search APIs

Once you get a key for https://brave.com/search/api/ set the following environment variable:

export BRAVE_SEARCH_API_KEY=BSGhQ-Nd-......

That is not my real subscription key!

Example Search Script

Get the code for this example using the following commands (change directory to your Quicklisp local-projects directory first):

1 cd ~/.roswell/local-projects  # if you use Roswell to install and update SBCL, etc.
2 cd ~/quicklisp/local-projects # if you DO NOT use Roswell
3 git clone https://github.com/mark-watson/brave_search.git
4 cd brave_search

It takes very little Common Lisp code to access the Brave search APIs. The function websearch makes a generic web search query. I will list the entire library with comments to follow:

 1 ;; Copyright Mark Watson 2024. All Rights Reserved.  https://markwatson.com
 2 ;; Apache 2 license.
 3 
 4 (in-package #:brave_search)
 5 
 6 (defun websearch (query)
 7   (let* ((key (uiop:getenv "BRAVE_SEARCH_API_KEY"))
 8          (command
 9            (concatenate
10             'string
11             "curl https://api.search.brave.com/res/v1/web/search?q="
12             (drakma:url-encode query :utf-8)
13             " -H \"X-Subscription-Token: " key "\""
14             " -H \"Content-Type: application/json\""))
15          (response
16            (uiop:run-program command :output :string)))
17     ;;(print response) ;; weird: comment this out, and a runtime error is thrown
18     (with-input-from-string
19         (s response)
20       (let ((results (cdar (cddr (assoc :web (json:decode-json s))))))
21 	(mapcar (lambda (x)
22 		  (let ((title (cdr (assoc :title x)))
23 			(url (cdr (assoc :url x)))
24 			(description (cdr (assoc :description x))))
25 		    (list url title description)))
26 		results)))))
27 
28 ;; Example usage:
29 ;; (brave_search:websearch "Sedona Arizona")

We get the Brave access key from an environment variable in line 7. Lines 8-14 build a complete call to the curl command line utility. We spawn a process to run curl and capture its string output in the variable response in lines 15-16. You might want to add a few print statements to see typical values for the variables command and response. The response data is JSON encoded in a string, and the straightforward code in lines 18-26 parses out the values we want.
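
As an aside, the same request can be made directly with Drakma instead of spawning a curl process. The following is a hypothetical, untested sketch that assumes the API accepts the same query parameter and headers; since Drakma returns the body of a JSON response as octets, we convert it to a string before decoding:

(defun websearch-drakma (query)
  ;; hypothetical alternative to websearch that avoids shelling out to curl:
  (let* ((key (uiop:getenv "BRAVE_SEARCH_API_KEY"))
         (body (drakma:http-request
                "https://api.search.brave.com/res/v1/web/search"
                :parameters (list (cons "q" query))
                :additional-headers
                (list (cons "X-Subscription-Token" key)
                      (cons "Accept" "application/json")))))
    (cl-json:decode-json-from-string
     (if (stringp body)
         body
         (flexi-streams:octets-to-string body :external-format :utf-8)))))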

The following repl listing shows this library in use (most output not shown):

 $ sbcl
*  (ql:quickload "brave_search")
To load "brave_search":
  Load 1 ASDF system:
    brave_search
; Loading "brave_search"
..................
("brave_search")
* (brave_search:websearch "Sedona Arizona")
(("https://visitsedona.com/"
  "Visit Sedona | The official site of the Sedona Tourism Bureau"
  "The official site of the <strong>Sedona</strong>, AZ tourism bureau. Find out the\
 best places to stay, eat, and relax in our beautiful central <strong>Arizona</stron\
g> resort town.")
 ("https://www.sedonaaz.gov/" "City of Sedona | Home"
  "The City of <strong>Sedona</strong> wastewater department has experienced ...

Accessing Relational Databases

There are good options for accessing relational databases from Common Lisp. Personally I almost always use Postgres and in the past I used either native foreign client libraries or the socket interface to Postgres. Recently, I decided to switch to CLSQL which provides a common interface for accessing Postgres, MySQL, SQLite, and Oracle databases. There are also several recent forks of CLSQL on github. We will use CLSQL in examples in this book. Hopefully while reading the Chapter on Quicklisp you installed CLSQL and the back end for one or more databases that you use for your projects.

For some database applications when I know that I will always use the embedded SQLite database (i.e., that I will never want to switch to Postgres or another database) I will just use the sqlite library as I do in the chapter Knowledge Graph Navigator.

If you have not installed CLSQL yet, then please install it now:

(ql:quickload "clsql")

You also need to install one or more CLSQL backends, depending on which relational databases you use:

(ql:quickload "clsql-postgresql")
(ql:quickload "clsql-mysql")
(ql:quickload "clsql-sqlite3")

The directory src/clsql_examples contains the standalone example files for this chapter.

While I often prefer hand crafting SQL queries, there seems to be a general movement in software development towards the data mapper and active record design patterns. CLSQL provides Object Relational Mapping (ORM) functionality based on CLOS.

You will need to create a new database news in order to follow along with the examples in this chapter and later in this book. I will use Postgres for examples in this chapter and use the following to create a new database (my account is “markw” and the following assumes that I have Postgres configured to not require a password for this account when accessing the database from “localhost”):

1 ->  ~  psql
2 psql (9.1.4)
3 Type "help" for help.
4 markw=# create database news;
5 CREATE DATABASE

We will use three example programs that you can find in the src/clsql_examples directory in the book repository on github:

  • clsql_create_news_schema.lisp to create table “articles” in database “news”
  • clsql_write_to_news.lisp to write test data to table “articles”
  • clsql_read_from_news.lisp to read from the table “articles”

The following listing shows the file src/clsql_examples/clsql_create_news_schema.lisp:

 1 (ql:quickload :clsql)
 2 (ql:quickload :clsql-postgresql)
 3 
 4 ;; Postgres connection specification:
 5 ;;    (host db user password &optional port options tty).
 6 ;; The first argument to **clsql:connect** is a connection
 7 ;; specification list:
 8 
 9 (clsql:connect '("localhost" "news" "markw" nil)
10                :database-type :postgresql)
11 
12 (clsql:def-view-class articles ()
13   ((id
14     :db-kind :key
15     :db-constraints :not-null
16     :type integer
17     :initarg :id)
18    (uri
19     :accessor uri
20     :type (string 60)
21     :initarg :uri)
22    (title
23     :accessor title
24     :type (string 90)
25     :initarg :title)
26    (text
27     :accessor text
28     :type (string 500)
29     :nulls-ok t
30     :initarg :text)))
31 
32 (defun create-articles-table ()
33   (clsql:create-view-from-class 'articles))

In this repl listing, we create the database table “articles” using the function create-articles-table that we just defined:

1 ->  src git:(master) sbcl
2 (running SBCL from: /Users/markw/sbcl)
3 * (load "clsql_create_news_schema.lisp")
4 * (create-articles-table)
5 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
6          "article_pk" for table "articles"
7 T
8 *

The following listing shows the file src/clsql_examples/clsql_write_to_news.lisp:

 1 (ql:quickload :clsql)
 2 (ql:quickload :clsql-postgresql)
 3 
 4 ;; Open connection to database and create CLOS class and database view
 5 ;; for table 'articles':
 6 (load "clsql_create_news_schema.lisp")
 7 
 8 (defvar *a1*
 9   (make-instance
10     'articles
11     :uri "http://test.com"
12     :title "Trout Season is Open on Oak Creek"
13     :text "State Fish and Game announced the opening of trout season"))
14 
15 (clsql:update-records-from-instance *a1*)
16 ;; modify a slot value and update database:
17 (setf (slot-value *a1* 'title) "Trout season is open on Oak Creek!!!")
18 (clsql:update-records-from-instance *a1*)
19 ;; warning: the last statement changes the "id" column in the table

You should load the file clsql_write_to_news.lisp one time in a repl to create the test data. The following listing shows file clsql_read_from_news.lisp:

 1 (ql:quickload :clsql)
 2 (ql:quickload :clsql-postgresql)
 3 
 4 ;; Open connection to database and create CLOS class and database view
 5 ;; for table 'articles':
 6 (load "clsql_create_news_schema.lisp")
 7 
 8 (defun pp-article (article)
 9   (format t
10     "~%URI: ~S ~%Title: ~S ~%Text: ~S ~%"
11     (slot-value article 'uri)
12     (slot-value article 'title)
13     (slot-value article 'text)))
14 
15 (dolist (a (clsql:select 'articles))
16   (pp-article (car a)))

Loading the file clsql_read_from_news.lisp produces the following output:

1 URI: "http://test.com" 
2 Title: "Trout season is open on Oak Creek!!!" 
3 Text: "State Fish and Game announced the opening of trout season"
4 
5 URI: "http://example.com" 
6 Title: "Longest day of year" 
7 Text: "The summer solstice is on Friday."

You can also embed SQL where clauses in queries:

(dolist (a (clsql:select 'articles :where "title like '%season%'"))
  (pp-article (car a)))

which produces this output:

1 URI: "http://test.com" 
2 Title: "Trout season is open on Oak Creek!!!" 
3 Text: "State Fish and Game announced the opening of
4        trout season" 

In this example, I am using a SQL like expression to perform partial text matching.
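
Exact-match where clauses work the same way. Here is a hypothetical query, assuming the test data created earlier in this chapter:

(dolist (a (clsql:select 'articles :where "uri = 'http://test.com'"))
  (pp-article (car a)))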

Database Wrap Up

You learned the basics for accessing relational databases. When I am designing new systems for processing data I like to think of my Common Lisp code as being purely functional: my Lisp functions accept arguments that they do not modify and return results. I like to avoid side effects, that is changing global state. When I do have to handle mutable state (or data) I prefer storing mutable state in an external database. I use this same approach when I use the Haskell functional programming language.

Using MongoDB and Solr NoSQL Data Stores

Non-relational data stores are commonly used for applications that don’t need full relational algebra or that must scale beyond a single server.

The MongoDB example code is in the file src/loving_snippets/mongo_news.lisp. The Solr example code is in the subdirectory src/solr_examples.

Note for the fifth edition: The Common Lisp cl-mongo library is now unsupported for versions of MongoDB later than 2.6 (released in 2016). You can install an old version of MongoDB for macOS or for Linux. I have left the MongoDB examples in this section but I can’t recommend that you use cl-mongo and MongoDB for any serious applications.

Brewer’s CAP theorem states that a distributed data storage system comprised of multiple nodes can provide at most two of the following three guarantees: all nodes always have a Consistent view of the state of data, general Availability of data even if not all nodes are functioning, and Partition tolerance so clients can still communicate with the data storage system when parts of the system are unavailable because of network failures. The basic idea is that different applications have different requirements and sometimes it makes sense to reduce system cost or improve scalability by easing back on one of these requirements.

A good example is that some applications may not need strict consistency (the first guarantee) because it is not important if clients sometimes get data that is a few seconds out of date.

MongoDB allows you to choose consistency vs. availability vs. efficiency.

I cover the Solr indexing and search service (based on Lucene) both because a Solr indexed document store is a type of NoSQL data store and also because I believe that you will find Solr very useful for building systems, if you don’t already use it.

MongoDB

The following discussion of MongoDB is based on just my personal experience, so I am not covering all use cases. I have used MongoDB for:

  • Small clusters of MongoDB nodes to analyze social media data, mostly text mining and sentiment analysis. In all cases for each application I ran MongoDB with one write master (i.e., I wrote data to this one node but did not use it for reads) and multiple read-only slave nodes. Each slave node would run on the same server that was usually performing a single bit of analytics.
  • Multiple very large independent clusters for web advertising. Problems faced included trying to have some level of consistency across data centers. Replica sets were used within each data center.
  • Running a single node MongoDB instance for low volume data collection and analytics.

One of the advantages of MongoDB is that it is very “developer friendly” because it supports ad-hoc document schemas and interactive queries. I mentioned that MongoDB allows you to choose consistency vs. availability vs. efficiency. When you perform MongoDB writes you can specify some granularity of what constitutes a “successful write” by requiring that a write is performed at a specific number of nodes before the client gets acknowledgement that the write was successful. This requirement adds overhead to each write operation and can cause writes to fail if some nodes are not available.

The MongoDB online documentation is very good. You don’t have to read it in order to have fun playing with the following Common Lisp and MongoDB examples, but if you find that MongoDB is a good fit for your needs after playing with these examples then you should read the documentation. I usually install MongoDB myself but it is sometimes convenient to use a hosting service. There are several well regarded services and I have used MongoHQ.

At this time there is no official Common Lisp support for accessing MongoDB, but there is a useful project, Alfons Haffmans’ cl-mongo, that will allow us to write Common Lisp client applications and have access to most of the capabilities of MongoDB.

The file src/mongo_news.lisp contains the example code used in the next three sections.

Adding Documents

The following repl listing shows the cl-mongo APIs for creating a new document, adding elements (attributes) to it, and inserting it in a MongoDB data store:

(ql:quickload "cl-mongo")

(cl-mongo:db.use "news")

(defun add-article (uri title text)
  (let ((doc (cl-mongo:make-document)))
	(cl-mongo:add-element "uri" uri doc)
	(cl-mongo:add-element "title" title doc)
	(cl-mongo:add-element "text" text doc)
	(cl-mongo:db.insert "article" doc)))

;; add a test document:
(add-article "http://test.com" "article title 1" "article text 1")

In this example, three string attributes were added to a new document before it was saved.

Fetching Documents by Attribute

We will start by fetching and pretty-printing all documents in the collection article, and then fetching all articles as a list of nested lists where each inner list contains a document URI, title, and text:

 1 (defun print-articles ()
 2   (cl-mongo:pp (cl-mongo:iter (cl-mongo:db.find "article" :all))))
 3 
 4 ;; for each document, use the cl-mongo:get-element on
 5 ;; each element we want to save:
 6 (defun article-results->lisp-data (mdata)
 7   (let ((ret '()))
 8     ;;(print (list "size of result=" (length mdata)))
 9     (dolist (a mdata)
10       ;;(print a)
11       (push
12         (list
13          (cl-mongo:get-element "uri" a)
14          (cl-mongo:get-element "title" a)
15          (cl-mongo:get-element "text" a))
16         ret))
17     ret))
18 
19 (defun get-articles ()
20   (article-results->lisp-data
21     (cadr (cl-mongo:db.find "article" :all))))

Output for these two functions looks like:

 1 * (print-articles)
 2 
 3 {
 4   "_id" -> objectid(99778A792EBB4F76B82F75C6)
 5   "uri"  ->  http://test.com/3
 6   "title"  ->  article title 3
 7   "text"  ->  article text 3
 8 }
 9 
10 {
11   "_id" -> objectid(D47DEF3CFDB44DEA92FD9E56)
12   "uri"  ->  http://test.com/2
13   "title"  ->  article title 2
14   "text"  ->  article text 2
15 }
16 
17 * (get-articles)
18 
19 (("http://test.com/2" "article title 2" "article text 2")
20  ("http://test.com/3" "article title 3" "article text 3"))

By reusing the function article-results->lisp-data defined in the last section, we can also search for JSON documents using regular expressions matching attribute values:

 1 ;; find documents where substring 'str' is in the title:
 2 (defun search-articles-title (str)
 3   (article-results->lisp-data
 4     (cadr
 5       (cl-mongo:iter
 6        (cl-mongo:db.find
 7          "article"
 8          (cl-mongo:kv
 9             "title"     // TITLE ATTRIBUTE
10             (cl-mongo:kv "$regex" str)) :limit 10)))))
11 
12 ;; find documents where substring 'str' is in the text element:
13 (defun search-articles-text (str) 
14   (article-results->lisp-data
15     (cadr
16       (cl-mongo:db.find
17         "article"
18         (cl-mongo:kv
19            "text"        // TEXT ATTRIBUTE
20            (cl-mongo:kv "$regex" str)) :limit 10))))

I set the limit to return a maximum of ten documents. If you do not set the limit, this example code only returns one search result. The following repl listing shows the results from calling function search-articles-text:

1 * (SEARCH-ARTICLES-TEXT "text")
2 
3 (("http://test.com/2" "article title 2" "article text 2")
4  ("http://test.com/3" "article title 3" "article text 3"))
5 * (SEARCH-ARTICLES-TEXT "3")
6 
7 (("http://test.com/3" "article title 3" "article text 3"))

I find using MongoDB to be especially effective when experimenting with data and code. The schema-free JSON document format, interactive queries in the mongo shell, and easy-to-use client libraries like cl-mongo for Common Lisp will let you experiment with a lot of ideas in a short period of time. The following listing shows the use of the interactive mongo shell. The database news is the database used in the MongoDB examples in this chapter; you will notice that I also have other databases for other projects on my laptop:

 1 ->  src git:(master) mongo
 2 MongoDB shell version: 2.4.5
 3 connecting to: test
 4 > show dbs
 5 kbsportal	0.03125GB
 6 knowledgespace	0.03125GB
 7 local	(empty)
 8 mark_twitter	0.0625GB
 9 myfocus	0.03125GB
10 news	0.03125GB
11 nyt	0.125GB
12 twitter	0.125GB
13 > use news
14 switched to db news
15 > show collections
16 article
17 system.indexes
18 > db.article.find()
19 { "uri" : "http://test.com/3",
20  "title" : "article title 3",
21  "text" : "article text 3",
22  "_id" : ObjectId("99778a792ebb4f76b82f75c6") }
23 { "uri" : "http://test.com/2",
24  "title" : "article title 2",
25  "text" : "article text 2",
26  "_id" : ObjectId("d47def3cfdb44dea92fd9e56") }
27 > 

Line 1 of this listing shows starting the mongo shell. Line 4 shows how to list all databases in the data store. In line 13 I select the database “news” to use. Line 15 prints out the names of all collections in the current database “news”. Line 18 prints out all documents in the “article” collection. You can read the documentation for the mongo shell for more options like selective queries, adding indices, etc.

When you run a MongoDB service on your laptop, also try the admin interface on http://localhost:28017/.

A Common Lisp Solr Client

The Lucene project is one of the most widely used Apache Foundation projects. Lucene is a flexible library for preprocessing, indexing, and searching text. I have personally used Lucene on so many projects that it would be difficult to count them. The Apache Solr Project adds a network interface to the Lucene text indexer and search engine. Solr also adds other utility features to Lucene:

  • While Lucene is a library to embed in your programs, Solr is a complete system.
  • Solr provides good defaults for preprocessing and indexing text and also provides rich support for managing structured data.
  • Solr provides both XML and JSON APIs using HTTP and REST.
  • Solr supports faceted search and geospatial search, and provides utilities for highlighting search terms in the surrounding text of search results.
  • If your system ever grows to a very large number of users, Solr supports scaling via replication.

I hope that you will find the Common Lisp example Solr client code in the following sections helps you make Solr part of large systems that you write using Common Lisp.

Installing Solr

Download a binary Solr distribution and un-tar or un-zip this Solr distribution, cd to the distribution directory, then cd to the example directory and run:

1  ~/solr/example>  java -jar start.jar

You can access the Solr Admin Web App at http://localhost:8983/solr/#/. This web app can be seen in the following screen shot:

Solr Admin Web App
Solr Admin Web App

There is no data in the Solr example index yet, so following the Solr tutorial instructions:

 1  ~/> cd ~/solr/example/exampledocs
 2  ~/solr/example/exampledocs> java -jar post.jar *.xml
 3 SimplePostTool version 1.5
 4 Posting files to base url http://localhost:8983/solr/update
 5         using content-type application/xml..
 6 POSTing file gb18030-example.xml
 7 POSTing file hd.xml
 8 POSTing file ipod_other.xml
 9 POSTing file ipod_video.xml
10 POSTing file manufacturers.xml
11 POSTing file mem.xml
12 POSTing file money.xml
13 POSTing file monitor.xml
14 POSTing file monitor2.xml
15 POSTing file mp500.xml
16 POSTing file sd500.xml
17 POSTing file solr.xml
18 POSTing file utf8-example.xml
19 POSTing file vidcard.xml
20 14 files indexed.
21 COMMITting Solr index changes
22            to http://localhost:8983/solr/update..
23 Time spent: 0:00:00.480

You will learn how to add documents to Solr directly in your Common Lisp programs in a later section.

Assuming that you have a fast Internet connection so that downloading Solr was quick, you have hopefully spent less than five or six minutes getting Solr installed and running with enough example search data for the Common Lisp client examples we will play with. Solr is a great tool for storing, indexing, and searching data. I recommend that you put off reading the official Solr documentation for now and instead work through the Common Lisp examples in the next two sections. Later, if you want to use Solr then you will need to carefully read the Solr documentation.

Solr’s REST Interface

The Solr REST Interface Documentation documents how to perform search using HTTP GET requests. All we need to do is implement this in Common Lisp which you will see is easy.

Assuming that you have Solr running and the example data loaded, we can try searching for documents with, for example, the word “British” using the URL http://localhost:8983/solr/select?q=British. This is a REST request URL and you can use utilities like curl or wget to fetch the XML data. I fetched the data in a web browser, as seen in the following screen shot of a Firefox web browser (I like the way Firefox formats and displays XML data):

Solr Search Results as XML Data
Solr Search Results as XML Data

The attributes in the returned search results need some explanation. We indexed several example XML data files, one of which contained the following XML element that we just saw as a search result:

 1 <doc>
 2   <field name="id">GBP</field>
 3   <field name="name">One British Pound</field>
 4   <field name="manu">U.K.</field>
 5   <field name="manu_id_s">uk</field>
 6   <field name="cat">currency</field>
 7   <field name="features">Coins and notes</field>
 8   <field name="price_c">1,GBP</field>
 9   <field name="inStock">true</field>
10 </doc>

So, the search result has the same attributes as the structured XML data that was added to the Solr search index. Solr’s capability for indexing structured data is a superset of just indexing plain text. If for example we were indexing news stories, then example input data might look like:

1 <doc>
2   <field name="id">new_story_0001</field>
3   <field name="title">Fishing Season Opens</field>
4   <field name="text">Fishing season opens on Friday in Oak Creek.</field>
5 </doc>

With this example, a search result that returned this document as a result would return attributes id, title, and text, and the values of these three attributes.

By default the Solr web service returns XML data as seen in the last screen shot. For our examples, I prefer using JSON so we are going to always add a request parameter wt=json to all REST calls. The following screen shot shows the same data returned in JSON serialization format instead of XML format, viewed in a Chrome web browser (I like the way Chrome formats and displays JSON data with the JSONView Chrome Browser extension):

Solr Search Results as JSON Data
Solr Search Results as JSON Data

You can read the full JSON REST Solr documentation later, but for our use here we will use the following search patterns:

  • http://localhost:8983/solr/select?q=British+One&wt=json - search for documents with either of the words “British” or “one” in them. Note that in URIs the “+” character is used to encode a space character. If you wanted a literal “+” character you would encode it with “%2B” and a space character can also be encoded as “%20”. The default Solr search option is an OR of the search terms, unlike, for example, Google Search.
  • http://localhost:8983/solr/select?q=British+AND+one&wt=json - search for documents that contain both of the words “British” and “one” in them. The search term in plain text is “British AND one”.

As we saw earlier in the chapter on Network Programming, it is fairly simple to use the drakma and cl-json Common Lisp libraries to call REST services that return JSON data. The function do-search defined in the next listing (all the Solr example code is in the file src/solr-client.lisp) constructs a query URI as we saw in the last section, uses the Drakma library to perform an HTTP GET operation, and uses the cl-json library to parse the returned string containing JSON data into Lisp data structures:

(ql:quickload :drakma)
(ql:quickload :cl-json)

(defun do-search (&rest terms)
  (let ((query-string (format nil "~{~A~^+AND+~}" terms)))
   (cl-json:decode-json-from-string
     (drakma:http-request
       (concatenate 
        'string
        "http://localhost:8983/solr/select?q="
        query-string
        "&wt=json")))))

This example code does return the search results as Lisp list data; for example:

 1 * (do-search "British" "one")
 2 
 3 ((:RESPONSE-HEADER (:STATUS . 0) (:*Q-TIME . 1)
 4   (:PARAMS (:Q . "British+AND+one") (:WT . "json")))
 5  (:RESPONSE (:NUM-FOUND . 6) (:START . 0)
 6   (:DOCS
 7    ((:ID . "GBP") (:NAME . "One British Pound") (:MANU . "U.K.")
 8     (:MANU--ID--S . "uk") (:CAT "currency")
 9     (:FEATURES "Coins and notes")
10     (:PRICE--C . "1,GBP") (:IN-STOCK . T)
11     (:--VERSION-- . 1440194917628379136))
12    ((:ID . "USD") (:NAME . "One Dollar")
13     (:MANU . "Bank of America")
14     (:MANU--ID--S . "boa") (:CAT "currency")
15     (:FEATURES "Coins and notes")
16     (:PRICE--C . "1,USD") (:IN-STOCK . T)
17     (:--VERSION-- . 1440194917624184832))
18    ((:ID . "EUR") (:NAME . "One Euro")
19     (:MANU . "European Union")
20     (:MANU--ID--S . "eu") (:CAT "currency")
21     (:FEATURES "Coins and notes")
22     (:PRICE--C . "1,EUR") (:IN-STOCK . T)
23     (:--VERSION-- . 1440194917626281984))
24    ((:ID . "NOK") (:NAME . "One Krone")
25     (:MANU . "Bank of Norway")
26     (:MANU--ID--S . "nor") (:CAT "currency")
27     (:FEATURES "Coins and notes")
28     (:PRICE--C . "1,NOK") (:IN-STOCK . T)
29     (:--VERSION-- . 1440194917631524864))
30    ((:ID . "0579B002")
31     (:NAME . "Canon PIXMA MP500 All-In-One Photo Printer")
32     (:MANU . "Canon Inc.")
33     (:MANU--ID--S . "canon")
34     (:CAT "electronics" "multifunction printer"
35      "printer" "scanner" "copier")
36     (:FEATURES "Multifunction ink-jet color photo printer"
37      "Flatbed scanner, optical scan resolution of 1,200 x 2,400 dpi"
38      "2.5\" color LCD preview screen" "Duplex Copying"
39      "Printing speed up to 29ppm black, 19ppm color" "Hi-Speed USB"
40      "memory card: CompactFlash, Micro Drive, SmartMedia,
41       Memory Stick, Memory Stick Pro, SD Card, and MultiMediaCard")
42     (:WEIGHT . 352.0) (:PRICE . 179.99)
43     (:PRICE--C . "179.99,USD")
44     (:POPULARITY . 6) (:IN-STOCK . T)
45     (:STORE . "45.19214,-93.89941")
46     (:--VERSION-- . 1440194917651447808))
47    ((:ID . "SOLR1000")
48     (:NAME . "Solr, the Enterprise Search Server")
49     (:MANU . "Apache Software Foundation")
50     (:CAT "software" "search")
51     (:FEATURES "Advanced Full-Text Search Capabilities using Lucene"
52      "Optimized for High Volume Web Traffic"
53      "Standards Based Open Interfaces - XML and HTTP"
54      "Comprehensive HTML Administration Interfaces"
55      "Scalability - Efficient Replication to other Solr Search Servers"
56      "Flexible and Adaptable with XML configuration and Schema"
57      "Good unicode support: héllo (hello with an accent over the e)")
58     (:PRICE . 0.0) (:PRICE--C . "0,USD") (:POPULARITY . 10) (:IN-STOCK . T)
59     (:INCUBATIONDATE--DT . "2006-01-17T00:00:00Z")
60     (:--VERSION-- . 1440194917671370752)))))

I might modify the search function to return just the fetched documents as a list, discarding the returned Solr meta data:

 1 * (cdr (cadddr (cadr (do-search "British" "one"))))
 2 
 3 (((:ID . "GBP") (:NAME . "One British Pound") (:MANU . "U.K.")
 4   (:MANU--ID--S . "uk") (:CAT "currency") (:FEATURES "Coins and notes")
 5   (:PRICE--C . "1,GBP") (:IN-STOCK . T)
 6   (:--VERSION-- . 1440194917628379136))
 7  ((:ID . "USD") (:NAME . "One Dollar") (:MANU . "Bank of America")
 8   (:MANU--ID--S . "boa") (:CAT "currency") (:FEATURES "Coins and notes")
 9   (:PRICE--C . "1,USD") (:IN-STOCK . T)
10   (:--VERSION-- . 1440194917624184832))
11  ((:ID . "EUR") (:NAME . "One Euro") (:MANU . "European Union")
12   (:MANU--ID--S . "eu") (:CAT "currency") (:FEATURES "Coins and notes")
13   (:PRICE--C . "1,EUR") (:IN-STOCK . T)
14   (:--VERSION-- . 1440194917626281984))
15  ((:ID . "NOK") (:NAME . "One Krone") (:MANU . "Bank of Norway")
16   (:MANU--ID--S . "nor") (:CAT "currency")
17   (:FEATURES "Coins and notes")
18   (:PRICE--C . "1,NOK") (:IN-STOCK . T)
19   (:--VERSION-- . 1440194917631524864))
20  ((:ID . "0579B002")
21   (:NAME . "Canon PIXMA MP500 All-In-One Photo Printer")
22   (:MANU . "Canon Inc.") (:MANU--ID--S . "canon")
23   (:CAT "electronics" "multifunction printer" "printer"
24    "scanner" "copier")
25   (:FEATURES "Multifunction ink-jet color photo printer"
26    "Flatbed scanner, optical scan resolution of 1,200 x 2,400 dpi"
27    "2.5\" color LCD preview screen" "Duplex Copying"
28    "Printing speed up to 29ppm black, 19ppm color" "Hi-Speed USB"
29    "memory card: CompactFlash, Micro Drive, SmartMedia, Memory Stick,
30     Memory Stick Pro, SD Card, and MultiMediaCard")
31   (:WEIGHT . 352.0) (:PRICE . 179.99) (:PRICE--C . "179.99,USD")
32   (:POPULARITY . 6) (:IN-STOCK . T) (:STORE . "45.19214,-93.89941")
33   (:--VERSION-- . 1440194917651447808))
34  ((:ID . "SOLR1000") (:NAME . "Solr, the Enterprise Search Server")
35   (:MANU . "Apache Software Foundation") (:CAT "software" "search")
36   (:FEATURES "Advanced Full-Text Search Capabilities using Lucene"
37    "Optimized for High Volume Web Traffic"
38    "Standards Based Open Interfaces - XML and HTTP"
39    "Comprehensive HTML Administration Interfaces"
40    "Scalability - Efficient Replication to other Solr Search Servers"
41    "Flexible and Adaptable with XML configuration and Schema"
42    "Good unicode support: héllo (hello with an accent over the e)")
43   (:PRICE . 0.0) (:PRICE--C . "0,USD") (:POPULARITY . 10) (:IN-STOCK . T)
44   (:INCUBATIONDATE--DT . "2006-01-17T00:00:00Z")
45   (:--VERSION-- . 1440194917671370752)))
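
Instead of the positional list access with cadddr shown above, a slightly more robust hypothetical helper can pull out the document list with assoc (the keyword names follow cl-json’s decoding of the Solr response shown earlier):

(defun do-search-docs (&rest terms)
  ;; hypothetical wrapper that returns only the matching documents,
  ;; discarding Solr's response metadata:
  (cdr (assoc :docs
              (cdr (assoc :response
                          (apply #'do-search terms))))))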

There are a few more important details if you want to add Solr search to your Common Lisp applications. When there are many search results you might want to fetch a limited number of results and then “page” through them. The following strings can be added to the end of a search query:

  • &rows=2 - this example returns a maximum of two “rows,” or two query results.
  • &start=4 - this example skips the first 4 available results.

A query that combines skipping results and limiting the number of returned results looks like this:

1 http://localhost:8983/solr/select?q=British+One&wt=json&start=2&rows=2
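
Here is a hypothetical variant of do-search with paging parameters, following the same pattern as do-search:

(defun do-search-paged (start rows &rest terms)
  ;; like do-search, but skip the first start results and return
  ;; at most rows results:
  (let ((query-string (format nil "~{~A~^+AND+~}" terms)))
    (cl-json:decode-json-from-string
     (drakma:http-request
      (format nil
              "http://localhost:8983/solr/select?q=~A&wt=json&start=~D&rows=~D"
              query-string start rows)))))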

Common Lisp Solr Client for Adding Documents

In the last example we relied on adding example documents to the Solr search index using the directions for setting up a new Solr installation. In a real application, in addition to performing search requests for indexed documents, you will need to add new documents from your Lisp applications. Using the Drakma library we will see that it is very easy to add documents.

We need to construct a bit of XML containing new documents in the form:

1 <add>
2     <doc>
3         <field name="id">123456</field>
4         <field name="title">Fishing Season</field>
5     </doc>
6 </add>

You can specify whatever field names (attributes) that are required for your application. You can also pass multiple <doc></doc> elements in one add request. We will want to specify documents in a Lisp-like way: a list of cons values where each cons value is a field name and a value. For the last XML document example we would like an API that lets us just deal with Lisp data like:

 (do-add '(("id" . "12345")
           ("title" . "Fishing Season")))

One thing to note: the attribute names and values must be passed as strings. Other data types like integers, floating point numbers, structs, etc. will not work.
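
If you do need to pass non-string values, a hypothetical convenience wrapper could coerce keys and values to strings with princ-to-string before calling do-add (defined below):

(defun do-add-any (keys-values-list)
  ;; hypothetical helper: coerce each key and value to a string:
  (do-add (mapcar (lambda (kv)
                    (cons (princ-to-string (car kv))
                          (princ-to-string (cdr kv))))
                  keys-values-list)))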

This is nicer than having to use XML, right? The first thing we need is a function to convert a list of cons values to XML. I could have used the XML Builder functionality in the cxml library that is available via Quicklisp, but for something this simple I just wrote it in pure Common Lisp with no other dependencies (this code is also in the example file src/solr-client.lisp):

 1 (defun keys-values-to-xml-string (keys-values-list)
 2  (with-output-to-string (stream)
 3    (format stream "<add><doc>")
 4    (dolist (kv keys-values-list)
 5      (format stream "<field name=\"")
 6      (format stream "~A" (car kv))
 7      (format stream "\">")
 8      (format stream "~A" (cdr kv))
 9      (format stream "</field>"))
10    (format stream "</doc></add>")))

The macro with-output-to-string on line 2 of the listing is my favorite way to generate strings. Everything written to the variable stream inside the macro call is appended to a string; this string is the return value of the macro.
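
For example, this trivial snippet (not from the example code) returns the string that it writes:

* (with-output-to-string (s)
    (format s "~A + ~A = ~A" 1 2 (+ 1 2)))
"1 + 2 = 3"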

The following function adds documents to the Solr document input queue but does not actually index them:

1 (defun do-add (keys-values-list)
2   (drakma:http-request
3    "http://localhost:8983/solr/update"
4    :method :post
5    :content-type "application/xml"
6    :content ( keys-values-to-xml-string  keys-values-list)))

You have noticed in line 3 that I am accessing a Solr server running on localhost and not a remote server. In an application using a remote Solr server you would need to modify this to reference your server; for example:

1 "http://solr.knowledgebooks.com:8983/solr/update"

For efficiency Solr does not immediately add new documents to the index until you commit the additions. The following function should be called after you are done adding documents to actually add them to the index:

(defun commit-adds ()
  (drakma:http-request
   "http://localhost:8983/solr/update"
   :method :post
   :content-type "application/xml"
   :content "<commit></commit>"))

Notice that all we need is an empty element <commit></commit> that signals the Solr server that it should index all recently added documents. The following repl listing shows everything working together (I am assuming that the contents of the file src/solr-client.lisp has been loaded); not all of the output is shown in this listing:

* (do-add '(("id" . "12345") ("title" . "Fishing Season")))

200
((:CONTENT-TYPE . "application/xml; charset=UTF-8")
 (:CONNECTION . "close"))
#<PURI:URI http://localhost:8983/solr/update>
#<FLEXI-STREAMS:FLEXI-IO-STREAM {1009193133}>
T
"OK"
* (commit-adds)

200
((:CONTENT-TYPE . "application/xml; charset=UTF-8")
 (:CONNECTION . "close"))
#<PURI:URI http://localhost:8983/solr/update>
#<FLEXI-STREAMS:FLEXI-IO-STREAM {10031F20B3}>
T
"OK"
* (do-search "fishing")

((:RESPONSE-HEADER (:STATUS . 0) (:*Q-TIME . 2)
  (:PARAMS (:Q . "fishing") (:WT . "json")))
 (:RESPONSE (:NUM-FOUND . 1) (:START . 0)
  (:DOCS
   ((:ID . "12345\"") (:TITLE "Fishing Season\"")
    (:--VERSION-- . 1440293991717273600)))))
*

Common Lisp Solr Client Wrap Up

Solr has a lot of useful features that we have not used here like supporting faceted search (drilling down in previous search results), geolocation search, and looking up indexed documents by attribute. In the examples I have shown you, all text fields are indexed but Solr optionally allows you fine control over indexing, spelling correction, word stemming, etc.

Solr is a very capable tool for storing, indexing, and searching data. I have seen Solr used effectively on projects as a replacement for a relational database or other NoSQL data stores like CouchDB or MongoDB. There is a higher overhead for modifying or removing data in Solr so for applications that involve frequent modifications to stored data Solr might not be a good choice.

NoSQL Wrapup

There are more convenient languages than Common Lisp to use for accessing MongoDB. To be honest, my favorites are Ruby and Clojure. That said, for applications where the advantages of Common Lisp are compelling, it is good to know that your Common Lisp applications can play nicely with MongoDB.

I am a polyglot programmer: I like to use the best programming language for any specific job. When we design and build systems with more than one programming language, there are several options to share data:

  • Use foreign function interfaces to call one language from another from inside one process.
  • Use a service architecture and send requests using REST or SOAP.
  • Use shared data stores, like relational databases, MongoDB, CouchDB and Solr.

Hopefully this chapter and the last chapter will provide most of what you need for the last option.

Natural Language Processing

Natural Language Processing (NLP) is the automated processing of natural language text with several goals:

  • Determine the parts of speech (POS tagging) of words based on the surrounding words.
  • Detect if two text documents are similar.
  • Categorize text (e.g., is it about the economy, politics, sports, etc.)
  • Summarize text
  • Determine the sentiment of text
  • Detect names (e.g., place names, people’s names, product names, etc.)

We will use a library that I wrote that performs POS tagging, categorization (classification), summarization, and detects proper names.

My example code for this chapter is contained in separate Quicklisp projects located in the subdirectories:

  • src/fasttag: performs part of speech tagging and tokenizes text
  • src/categorize_summarize: performs categorization (e.g., detects the topic of text is news, politics, economy, etc.) and text summarization
  • src/kbnlp: the top level APIs for my pure Common Lisp natural language processing (NLP) code. In later chapters we will take a different approach by using Python deep learning models for NLP that we call as a web service. I use both approaches in my own work.

I worked on this Lisp code, and also similar code in Java, from about 2001 to 2011, and again in 2019 for my application for generating knowledge graph data automatically (this is an example in a later chapter). I am going to begin the next section with a quick explanation of how to run the example code. If you find the examples interesting then you can also read the rest of this chapter where I explain how the code works.

The approach that I used in my library for categorization (word counts) is now dated. I recommend that you consider taking Andrew Ng’s course on Machine Learning on the free online Coursera system and then take one of the Coursera NLP classes for a more modern treatment of NLP.

In addition to the code for my library you might also find the linguistic data in src/linguistic_data useful.

Loading and Running the NLP Library

I repackaged the NLP example code into one long file. The code used to be split over 18 source files. The code should be loaded from the src/kbnlp directory:

1 %  loving-common-lisp git:(master) > cd src/kbnlp
2 %  src/kbnlp git:(master) > sbcl
3 * (ql:quickload "kbnlp")
4 
5 "Startng to load data...." 
6 "....done loading data." 
7 *

This also loads the projects in src/fasttag and src/categorize_summarize.

Unfortunately, it takes about a minute using SBCL to load the required linguistic data so I recommend creating a Lisp image that can be reloaded to avoid the time required to load the data:

1 * (sb-ext:save-lisp-and-die "nlp-image" :purify t)
2 [undoing binding stack and other enclosing state... done]
3 [saving current Lisp image into nlp-image:
4 writing 5280 bytes from the read-only space at 0x0x20000000
5 writing 3088 bytes from the static space at 0x0x20100000
6 writing 80052224 bytes from the dynamic space at 0x0x1000000000
7 done]
8 %  src git:(master) > ls -lh nlp-image
9 -rw-r--r--  1 markw  staff    76M Jul 13 12:49 nlp-image

In line 1 in this repl listing, I use the SBCL built-in function save-lisp-and-die to create the Lisp image file. Using save-lisp-and-die is a great technique to use whenever it takes a while to set up your work environment. Saving a Lisp image for use the next time you work on a Common Lisp project is reminiscent of working in Smalltalk where your work is saved between sessions in an image file.

Note: I often use Clozure-CL (CCL) instead of SBCL for developing my NLP libraries because CCL loads my data files much faster than SBCL.

You can now start SBCL with the NLP library and data preloaded using the Lisp image that you just created:

 1 %  src git:(master) > sbcl --core nlp-image 
 2 * (in-package :kbnlp)
 3 
 4 #<PACKAGE "KBNLP">
 5 * (defvar
 6    *x*
 7    (make-text-object
 8      "President Bob Smith talked to Congress about the economy and taxes"))
 9 
10 *X*
11 
12 * *X*
13 
14 #S(TEXT
15    :URL ""
16    :TITLE ""
17    :SUMMARY "<no summary>"
18    :CATEGORY-TAGS (("news_politics.txt" 0.01648)
19                    ("news_economy.txt" 0.01601))
20    :KEY-WORDS NIL
21    :KEY-PHRASES NIL
22    :HUMAN-NAMES ("President Bob Smith")
23    :PLACE-NAMES NIL
24    :TEXT #("President" "Bob" "Smith" "talked" "to" "Congress" "about" "the"
25            "economy" "and" "taxes")
26    :TAGS #("NNP" "NNP" "NNP" "VBD" "TO" "NNP" "IN" "DT" "NN" "CC" "NNS")
27    :STEMS #("presid" "bob" "smith" "talk" "to" "congress" "about" "the"
28             "economi" "and" "tax"))
29 * 

At the end of the file src/knowledgebooks_nlp.lisp in comments is some test code that processes much more text so that a summary is also generated; here is a bit of the output you will see if you load the test code into your repl:

 1 (:SUMMARY
 2   "Often those amendments are an effort to change government policy
 3    by adding or subtracting money for carrying it out. The initial
 4    surge in foreclosures in 2007 and 2008 was tied to subprime
 5    mortgages issued during the housing boom to people with shaky
 6    credit. 2 trillion in annual appropriations bills for funding
 7    most government programsusually low profile legislation that
 8    typically dominates the work of the House in June and July.
 9    Bill Clinton said that banking in Europe is a good business.
10    These days homeowners who got fixed rate prime mortgages because
11    they had good credit cannot make their payments because they are
12    out of work. The question is whether or not the US dollar remains
13    the world s reserve currency if not the US economy will face
14    a depression."
15 :CATEGORY-TAGS (("news_politics.txt" 0.38268)
16                 ("news_economy.txt" 0.31182)
17                 ("news_war.txt" 0.20174))
18 :HUMAN-NAMES ("President Bill Clinton")
19 :PLACE-NAMES ("Florida"))

The top-level function make-text-object takes one required argument that can be either a string containing text or an array of strings where each string is a word or punctuation. Function make-text-object has two optional keyword parameters: the URL where the text was found and a title.

 1 (defun make-text-object (words &key (url "") (title ""))
 2   (if (typep words 'string) (setq words (words-from-string words)))
 3   (let* ((txt-obj (make-text :text words :url url :title title)))
 4     (setf (text-tags txt-obj) (part-of-speech-tagger words))
 5     (setf (text-stems txt-obj) (stem-text txt-obj))
 6     ;; note: we must find human and place names before calling
 7     ;; pronoun-resolution:
 8     (let ((names-places (find-names-places txt-obj)))
 9       (setf (text-human-names txt-obj) (car names-places))
10       (setf (text-place-names txt-obj) (cadr names-places)))
11     (setf (text-category-tags txt-obj)
12           (mapcar
13             #'(lambda (x)
14                 (list
15                   (car x)
16                   (/ (cadr x) 1000000.0)))
17             (get-word-list-category (text-text txt-obj))))
18     (setf (text-summary txt-obj) (summarize txt-obj))
19     txt-obj))

In line 2, we check whether this function was called with a string, in which case the function words-from-string is used to tokenize the text into an array of string tokens. Line 3 defines the local variable txt-obj, a new text object with only three slots (attributes) defined: text, url, and title. Line 4 sets the slot text-tags to the part of speech tags returned by the function part-of-speech-tagger, and line 5 similarly stores the word stems. We use the function find-names-places in line 8 to get person and place names and store these values in the text object in lines 9 and 10. In lines 11 through 17 we use the function get-word-list-category to set the categories in the text object. In line 18 we similarly use the function summarize to calculate a summary of the text and also store it in the text object. We will discuss these NLP helper functions throughout the rest of this chapter.

The function make-text-object returns a struct that is defined as:

(defstruct text
  url
  title
  summary
  category-tags
  key-words
  key-phrases
  human-names
  place-names
  text
  tags
  stems)
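
As a quick usage sketch, here is how an application might call make-text-object with the optional keyword arguments and then read slots using the accessors that defstruct generates (the URL and title values here are made up for illustration):

(in-package :kbnlp)

(defvar *doc*
  (make-text-object
   "President Bob Smith talked to Congress about the economy and taxes"
   :url "http://example.com/news/1"  ; hypothetical URL
   :title "Economy and Taxes"))      ; hypothetical title

(text-human-names *doc*) ; => ("President Bob Smith")
(text-title *doc*)       ; => "Economy and Taxes"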

Part of Speech Tagging

This tagger is the Common Lisp implementation of my FastTag open source project. I based this project on Eric Brill’s PhD thesis (1995). He used machine learning on annotated text to learn tagging rules. I used a subset of the tagging rules that he generated that were most often used when he tested his tagger. I hand coded his rules in Lisp (and Ruby, Java, and Pascal). My tagger is less accurate, but it is fast - thus the name FastTag.

If you just need part of speech tagging (and not the summarization, categorization, and top level APIs used in the last section) you can load just the tagger:

1 (ql:quickload "fasttag")

You can find the tagger implementation in the function part-of-speech-tagger. We already saw sample output from the tagger in the last section:

1 :TEXT #("President" "Bob" "Smith" "talked" "to" "Congress" "about" "the"
2         "economy" "and" "taxes")
3 :TAGS #("NNP" "NNP" "NNP" "VBD" "TO" "NNP" "IN" "DT" "NN" "CC" "NNS")

The following table shows the meanings of the tags and a few example words:

| Tag | Definition | Example words |
|------|------------|---------------|
| CC | Coordinating conjunction | and, but, or |
| CD | Cardinal number | one, two |
| DT | Determiner | the, some |
| EX | Existential there | there |
| FW | Foreign word | mon dieu |
| IN | Preposition | of, in, by |
| JJ | Adjective | big |
| JJR | Adjective, comparative | bigger |
| JJS | Adjective, superlative | biggest |
| LS | List item marker | 1, One |
| MD | Modal | can, should |
| NN | Noun, singular or mass | dog |
| NNS | Noun, plural | dogs, cats |
| NNP | Proper noun, singular | Edinburgh |
| NNPS | Proper noun, plural | Smiths |
| PDT | Predeterminer | all, both |
| POS | Possessive ending | ’s |
| PP | Personal pronoun | I, you, she |
| PP$ | Possessive pronoun | my, one’s |
| RB | Adverb | quickly |
| RBR | Adverb, comparative | faster |
| RBS | Adverb, superlative | fastest |
| RP | Particle | up, off |
| SYM | Symbol | +, %, & |
| TO | “to” | to |
| UH | Interjection | oh, oops |
| VB | Verb, base form | eat, run |
| VBD | Verb, past tense | ate |
| VBG | Verb, gerund | eating |
| VBN | Verb, past participle | eaten |
| VBP | Verb, present | eat |
| VBZ | Verb, present, 3rd person singular | eats |
| WDT | Wh-determiner | which, that |
| WP | Wh-pronoun | who, what |
| WP$ | Possessive wh-pronoun | whose |
| WRB | Wh-adverb | how, where |
| $ | Dollar sign | $ |
| # | Pound sign | # |
| ( | Left paren | ( |
| ) | Right paren | ) |
| , | Comma | , |
| . | Sentence-final punctuation | . ! ? |
| : | Mid-sentence punctuation | : ; — |

The function part-of-speech-tagger loops through all input words and initially assigns the most likely part of speech as specified in the lexicon. Then a subset of Brill’s rules are applied. Rules operate on the current word and the previous word.

As an example of a rule implemented in Common Lisp, here is one that looks for words tagged as common nouns that end in “ing” and so should be tagged as gerunds (a verb form):

  ; rule 8: convert a common noun to a present
  ;         participle verb (i.e., a gerund)
  (if (equal (search "NN" r) 0)
    (let ((i (search "ing" w :from-end t)))
      (if (equal i (- (length w) 3))
          (setq r "VBG"))))

You can find the lexicon data in the file src/linguistic_data/FastTagData.lisp. This file is Lisp code instead of plain data (in retrospect, plain data would have been better because it would load faster) and looks like:

(defvar lex-hash (make-hash-table :test #'equal :size 110000))
(setf (gethash "shakeup" lex-hash) (list "NN"))
(setf (gethash "Laurance" lex-hash) (list "NNP"))
(setf (gethash "expressing" lex-hash) (list "VBG"))
(setf (gethash "citybred" lex-hash) (list "JJ"))
(setf (gethash "negative" lex-hash) (list "JJ" "NN"))
(setf (gethash "investors" lex-hash) (list "NNS" "NNPS"))
(setf (gethash "founding" lex-hash) (list "NN" "VBG" "JJ"))

I generated this file automatically from lexicon data using a small Ruby script. Notice that words can have more than one possible part of speech. The most common part of speech for a word is the first entry in the lexicon.
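
Since the most common part of speech for a word is stored first, looking up a word's default tag is just a matter of taking the first element of its lexicon entry. Here is a minimal sketch (most-likely-tag is a hypothetical helper, not a function in the library):

(defun most-likely-tag (word)
  "Return the most common part of speech for WORD, or NIL
   if WORD is not in the lexicon."
  (first (gethash word lex-hash)))

;; (most-likely-tag "founding")  => "NN"
;; (most-likely-tag "investors") => "NNS"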

Categorizing Text

The code to categorize text is fairly simple, using a technique often called “bag of words.” I collected sample text in several different categories and for each category (like politics, sports, etc.) I calculated the evidence or weight that words contribute to supporting a category. For example, the word “president” has a strong weight for the category “politics” but not for the category “sports.” The reason is that the word “president” occurs frequently in articles and books about politics. The data file that contains the word weightings for each category is src/data/cat-data-tables.lisp; we will look at a very small part of this file below.

If you only need categorization and not the other libraries developed in this chapter, you can just load this library and run the example in the comment at the bottom of the file categorize_summarize.lisp:

(ql:quickload "categorize_summarize")
(defvar x "President Bill Clinton <<2 pages text not shown>> ")
(defvar words1 (myutils:words-from-string x))
(print words1)
(setq cats1 (categorize_summarize:categorize words1))
(print cats1)
(defvar sum1 (categorize_summarize:summarize words1 cats1))
(print sum1)

Let’s look at the implementation, starting with creating hash tables for storing word count data for each category or topic:

;;;  Starting topic: news_economy.txt

(setf *h* (make-hash-table :test #'equal :size 1000))

  (setf (gethash "news" *h*) 3915)
  (setf (gethash "debt" *h*) 3826)
  (setf (gethash "money" *h*) 1809)
  (setf (gethash "work" *h*) 1779)
  (setf (gethash "business" *h*) 1631)
  (setf (gethash "tax" *h*) 1572)
  (setf (gethash "poverty" *h*) 1512)

This file was created by a simple Ruby script (not included with the book’s example code) that processes a list of sub-directories, one sub-directory per category. The following listing shows the implementation of function get-word-list-category that calculates category tags for input text:

 1 (defun get-word-list-category (words)
 2   (let ((x nil)
 3         (ss nil)
 4         (cat-hash nil)
 5         (word nil)
 6         (len (length words))
 7         (num-categories (length categoryHashtables))
 8         (category-score-accumulation-array
 9           (make-array num-categories :initial-element 0)))
10 
11     (defun list-sort (list-to-sort)
12       ;;(pprint list-to-sort)
13       (sort list-to-sort
14 	    #'(lambda (list-element-1 list-element-2)
15 		(> (cadr list-element-1) (cadr list-element-2)))))
16 
17     (do ((k 0 (+ k 1)))
18         ((equal k len))
19       (setf word (string-downcase (aref words k)))
20       (do ((i 0 (+ i 1)))
21           ((equal i num-categories))
22         (setf cat-hash (nth i categoryHashtables))
23         (setf x (gethash word cat-hash))
24         (if x
25             (setf 
26               (aref category-score-accumulation-array i)
27               (+ x (aref category-score-accumulation-array i))))))
28     (setf ss '())
29     (do ((i 0 (+ i 1)))
30         ((equal i num-categories))
31       (if (> (aref category-score-accumulation-array i) 0.01)
32           (setf
33             ss
34             (cons
35               (list
36                 (nth i categoryNames)
37                 (round (* (aref category-score-accumulation-array i) 10)))
38               ss))))
39     (setf ss (list-sort ss))
40     (let ((cutoff (/ (cadar ss) 2))
41           (results-array '()))
42       (dolist (hit ss)
43         (if (> (cadr hit) cutoff)
44             (setf results-array (cons hit results-array))))
45       (reverse results-array))))

One thing to notice in this listing is lines 11 through 15, where I define a nested function list-sort that takes a list of sub-lists and sorts the sub-lists based on the second value (which is a number) in each sub-list. I often nest functions when the “inner” functions are only used in the “outer” function.

Lines 2 through 9 define several local variables used in the outer function. The global variable categoryHashtables is a list of word weighting score hash tables, one for each category. The local variable category-score-accumulation-array is initialized to an array containing the number zero in each element and will be used to “keep score” of each category. The highest scored categories will be the return value for the outer function.

Lines 17 through 27 are two nested loops. The outer loop is over each word in the input word array and the inner loop is over the number of categories. The logic is simple: for each word, check whether it has a weighting score in each category’s word weighting score hash table and, if it does, add that weight to the matching category’s score.

The local variable ss is set to an empty list on line 28, and in the loop in lines 29 through 38 I copy over categories and their scores when the score is over a threshold value of 0.01. We sort the list in ss on line 39 using the inner function and then return the categories with a score greater than half of the highest category score.
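
As a concrete example of the cutoff logic, suppose the sorted list ss held the following scores (these numbers are made up for illustration):

;; ss = (("news_politics.txt" 40) ("news_economy.txt" 30)
;;       ("news_war.txt" 12))
;; cutoff = (/ (cadar ss) 2) = 40 / 2 = 20
;; "news_war.txt" scores below the cutoff, so the function returns:
;; (("news_politics.txt" 40) ("news_economy.txt" 30))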

Detecting People’s Names and Place Names

The code for detecting people and place names is in the top level API code in the package defined in src/kbnlp. This package is loaded using:

(ql:quickload "kbnlp")
(kbnlp:make-text-object "President Bill Clinton ran for president of the USA")

The functions that support identifying people’s names and place names in text are in the Common Lisp package kbnlp:

  • find-names (words tags exclusion-list) – words is an array of strings for the words in the text, tags are the part of speech tags (from FastTag), and exclusion-list is an array of words that you want to exclude from being considered as parts of people’s names. The list of found names records starting and stopping indices for names in the array words.
  • not-in-list-find-names-helper (a-list start end) – returns true if a found name has not already been added to the list used for saving people’s names in the text
  • find-places (words exclusion-list) – this is similar to find-names, but it finds place names. The list of found place names records starting and stopping indices for place names in the array words.
  • not-in-list-find-places-helper (a-list start end) – returns true if a found place name has not already been added to the list used for saving place names in the text
  • build-list-find-name-helper (v indices) – converts lists of start/stop word indices to strings containing the names
  • find-names-places (txt-object) – this is the top level function that your application will call. It takes a defstruct text object as input and modifies it by adding the people’s and place names it finds in the text. You saw an example of this earlier in this chapter.

I will let you read the code and just list the top level function:

 1 (defun find-names-places (txt-object)
 2   (let* ((words (text-text txt-object))
 3 	 (tags (text-tags txt-object))
 4 	 (place-indices (find-places words nil))
 5 	 (name-indices (find-names words tags place-indices))
 6 	 (name-list
 7 	   (remove-duplicates
 8 	     (build-list-find-name-helper words name-indices) :test #'equal))
 9 	 (place-list
10 	   (remove-duplicates
11 	     (build-list-find-name-helper words place-indices) :test #'equal)))
12     (let ((ret '()))
13       (dolist (x name-list)
14 	(if (search " " x)
15 	    (setq ret (cons x ret))))
16       (setq name-list (reverse ret)))
17     (list
18      (remove-shorter-names name-list)
19      (remove-shorter-names place-list))))

In line 2 we use the slot accessor text-text to fetch the array of word tokens from the text object. In lines 3, 4, and 5 we do the same for the part of speech tags, place name indices in the words array, and person name indices in the words array.

In lines 6 through 11 we use the function build-list-find-name-helper twice to construct the person names and place names as strings, given the indices into the words array. We also use the Common Lisp built-in function remove-duplicates to get rid of duplicate names.

In lines 12 through 16 we discard any person names that do not contain a space, that is, we only keep names that are at least two word tokens long. Lines 17 through 19 define the return value for the function: a list of lists of people and place names, using the function remove-shorter-names twice to remove shorter versions of the same names from the lists. For example, if we had the two names “Mr. John Smith” and “John Smith” then we would want to drop the shorter name “John Smith” from the return list.
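
Here is a minimal sketch of the idea behind remove-shorter-names; this is not the library's actual implementation, just an illustration of the substring test:

(defun remove-shorter-names-sketch (names)
  "Drop any name that appears as a substring of a longer name in NAMES."
  (remove-if
   #'(lambda (shorter)
       (some #'(lambda (longer)
                 (and (not (equal shorter longer))
                      (search shorter longer)))
             names))
   names))

;; (remove-shorter-names-sketch '("Mr. John Smith" "John Smith"))
;; => ("Mr. John Smith")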

Summarizing Text

The code for summarizing text is located in the directory src/categorize_summarize and can be loaded using:

(ql:quickload "categorize_summarize")

The code for summarization depends on the categorization code we saw earlier.

There are many applications for summarizing text. As an example, if you are writing a document management system you will certainly want to use something like Solr to provide search functionality. Solr will return highlighted matches in snippets of indexed document field values. Using summarization, when you add documents to a Solr (or other) search index you could create a new unindexed field that contains a document summary. Then when the users of your system see search results they will see the type of highlighted matches in snippets they are used to seeing in Google, Bing, or DuckDuckGo search results, and they will also see a summary of the document.

Sounds good? The problem to solve is getting good summaries of text and the technique used may have to be modified depending on the type of text you are trying to summarize. There are two basic techniques for summarization: a practical way that almost everyone uses, and an area of research that I believe has so far seen little practical application. The techniques are sentence extraction and abstraction of text into a shorter form by combining and altering sentences. We will use sentence extraction.

How do we choose which sentences in the text to extract for the summary? The idea I had in 1999 was simple: since I usually categorize text in my NLP processing pipeline anyway, why not use the words that gave the strongest evidence for categorizing the text, and find the sentences with the largest number of these words? As a concrete example, if I categorize text as being “politics”, I identify the words in the text like “president”, “congress”, “election”, etc. that triggered the “politics” classification, and find the sentences with the largest concentrations of these words.

Summarization is something that you will probably need to experiment with depending on your application. My old summarization code contained a lot of special cases, blocks of commented out code, etc. I have attempted to shorten and simplify my old summarization code for the purposes of this book as much as possible and still maintain useful functionality.

The function for summarizing text is fairly simple because when the function summarize is called by the top level NLP library function make-text-object, the input text has already been categorized. Remember from the example at the beginning of the chapter that the category data looks like this:

1 :CATEGORY-TAGS (("news_politics.txt" 0.38268)
2                 ("news_economy.txt" 0.31182)
3                 ("news_war.txt" 0.20174))

This category data is saved in the local variable cats on line 4 of the following listing.

 1 (defun summarize (txt-obj)
 2   (let* ((words (text-text txt-obj))
 3          (num-words (length words))
 4          (cats (text-category-tags txt-obj))
 5          (sentence-count 0)
 6          best-sentences sentence (score 0))
 7     ;; loop over sentences:
 8     (dotimes (i num-words)
 9       (let ((word (svref words i)))
10         (dolist (cat cats)
11           (let* ((hash (gethash (car cat) categoryToHash))
12                  (value (gethash word hash)))
13             (if value
14                 (setq score (+ score (* 0.01 value (cadr cat)))))))
15         (push word sentence)
16         (if (or (equal word ".") (equal word "!") (equal word ";"))
17             (let ()
18               (setq sentence (reverse sentence))
19               (setq score (/ score (1+ (length sentence))))
20               (setq sentence-count (1+ sentence-count))
21               (format t "~%~A : ~A~%" sentence score)
22               ;; process this sentence:
23               (if (and
24                    (> score 0.4)
25                    (> (length sentence) 4)
26                    (< (length sentence) 30))
27                   (progn
28                     (setq sentence
29                           (reduce
30                            #'(lambda (x y) (concatenate 'string x " " y))
31                            (coerce sentence 'list)))
32                     (push (list sentence score) best-sentences)))
33               (setf sentence nil score 0)))))
34     (setf
35      best-sentences
36      (sort
37       best-sentences
38       #'(lambda (x y) (> (cadr x) (cadr y)))))
39     (if best-sentences
40         (replace-all
41          (reduce #'(lambda (x y) (concatenate 'string x " " y))
42                  (mapcar #'(lambda (x) (car x)) best-sentences))
43          " ." ".")
44         "<no summary>")))

The nested loops in lines 8 through 33 look a little complicated, so let’s walk through them. Our goal is to calculate an importance score for each word token in the input text and to then select a few sentences containing highly scored words. The outer loop is over the word tokens in the input text. For each word token we loop over the list of categories, looking up the current word in each category hash and incrementing the score for the current word token. As we increment the word token scores we also look for sentence breaks and save sentences.

The complicated bit of code is in lines 16 through 32, where I construct sentences and their scores and store sentences with a score above a threshold value in the list best-sentences. After the two nested loops, in lines 34 through 44 we simply sort the sentences by score and select the “best” sentences for the summary. The extracted sentences are no longer in their original order, which can have strange effects, but I like seeing the most relevant sentences first.

Text Mining

Text mining in general refers to finding data in unstructured text. We have covered several text mining techniques in this chapter:

  • Named entity recognition - the NLP library covered in this chapter recognizes person and place entity names. I leave it as an exercise for you to extend this library to handle company and product names. You can start by collecting company and product names in the files src/kbnlp/linguistic_data/names/names.companies and src/kbnlp/data/names/names.products and extend the library code.
  • Categorizing text - you can increase the accuracy of categorization by adding more weighted words/terms that support categories. If you are already using Java in the systems you build, I recommend the Apache OpenNLP library, which is more accurate than the simpler “bag of words” approach I used in my Common Lisp NLP library. If you use Python, then I recommend that you also try the NLTK library.
  • Summarizing text.

In the next chapter I am going to cover another “data centric” topic: performing information gathering on the web. You will likely find some synergy between gathering information from the web and using NLP to create structured data from unstructured text.

Information Gathering

This chapter covers information gathering on the web using data sources and general techniques that I have found useful. When I was planning this new book edition I had intended to also cover some basics for using the Semantic Web from Common Lisp, basically distilling some of the data from my previous book “Practical Semantic Web and Linked Data Applications, Common Lisp Edition” published in 2011. However, since a free PDF of that book is now available, I decided to simply refer you to my previous work if you are interested in the Semantic Web and Linked Data. You can also find the Java edition of this previous book on my web site.

Gathering information from the web in realtime has some real advantages:

  • You don’t need to worry about storing data locally.
  • Information is up to date (depending on which web data resources you choose to use).

There are also a few things to consider:

  • Data on the web may have legal restrictions on its use so be sure to read the terms and conditions on web sites that you would like to use.
  • Authorship and validity of data may be questionable.

DBPedia Lookup Service

Wikipedia is a great source of information. As you may know, you can download a data dump of all Wikipedia data with or without version information and comments. When I want fast access to the entire set of English language Wikipedia articles I choose the second option and just get the current pages with no comments or versioning information. This is the direct download link for current Wikipedia articles. There are no comments or user pages in this GZIP file. This is not as much data as you might think: only about 9 gigabytes compressed, or about 42 gigabytes uncompressed.

To load and run an example, try:

(ql:quickload "dbpedia")
(dbpedia:dbpedia-lookup "berlin")

Wikipedia is a great resource to have on hand, but in this section I am going to show you how to access the Semantic Web version of Wikipedia, DBPedia, using the DBPedia Lookup Service. The next code listing shows the contents of the example file dbpedia-lookup.lisp in the directory src/dbpedia:

 1 (ql:quickload :drakma)
 2 (ql:quickload :babel)
 3 (ql:quickload :s-xml)
 4 
 5 ;; utility from http://cl-cookbook.sourceforge.net/strings.html#manip:
 6 (defun replace-all (string part replacement &key (test #'char=))
 7   "Returns a new string in which all the occurrences of the part 
 8 is replaced with replacement."
 9   (with-output-to-string (out)
10     (loop with part-length = (length part)
11        for old-pos = 0 then (+ pos part-length)
12        for pos = (search part string
13                          :start2 old-pos
14                          :test test)
15        do (write-string string out
16                         :start old-pos
17                         :end (or pos (length string)))
18        when pos do (write-string replacement out)
19        while pos)))
20 
21 (defstruct dbpedia-data uri label description)
22 
23 (defun dbpedia-lookup (search-string)
24   (let* ((s-str (replace-all search-string " " "+"))
25          (s-uri 
26           (concatenate
27            'string
28            "http://lookup.dbpedia.org/api/search.asmx/KeywordSearch?QueryString="
29            s-str))
30          (response-body nil)
31          (response-status nil)
32          (response-headers nil)
33          (xml nil)
34          ret)
35     (multiple-value-setq
36         (response-body response-status response-headers)
37       (drakma:http-request
38        s-uri
39        :method :get
40        :accept "application/xml"))
41     ;; (print (list "raw response body as XML:" response-body))
42     ;;(print (list ("status:" response-status "headers:" response-headers)))
43     (setf xml
44           (s-xml:parse-xml-string
45            (babel:octets-to-string response-body)))
46     (dolist (r (cdr xml))
47       ;; assumption: data is returned in the order:
48       ;;   1. label
49       ;;   2. DBPedia URI for more information
50       ;;   3. description
51       (push
52        (make-dbpedia-data
53         :uri (cadr (nth 2 r))
54         :label (cadr (nth 1 r))
55         :description
56         (string-trim
57          '(#\Space #\NewLine #\Tab)
58          (cadr (nth 3 r))))
59        ret))
60     (reverse ret)))
61 
62 ;; (dbpedia-lookup "berlin")

I am only capturing the attributes for DBPedia URI, label and description in this example code. If you uncomment line 41 and look at the entire response body from the call to DBPedia Lookup, you can see other attributes that you might want to capture in your applications.

Here is a sample call to the function dbpedia:dbpedia-lookup (only some of the returned data is shown):

 1 * (ql:quickload "dbpedia")
 2 * (dbpedia:dbpedia-lookup "berlin")
 3 
 4 (#S(DBPEDIA-DATA
 5   :URI "http://dbpedia.org/resource/Berlin"
 6   :LABEL "Berlin"
 7   :DESCRIPTION
 8   "Berlin is the capital city of Germany and one of the 16 states of Germany.
 9    With a population of 3.5 million people, Berlin is Germany's largest city
10    and is the second most populous city proper and the eighth most populous
11    urban area in the European Union. Located in northeastern Germany, it is
12    the center of the Berlin-Brandenburg Metropolitan Region, which has 5.9
13    million residents from over 190 nations. Located in the European Plains,
14    Berlin is influenced by a temperate seasonal climate.")
15  ...)

Wikipedia, and the DBPedia linked data version of Wikipedia, are great sources of online data. If you get creative, you will be able to think of ways to modify the systems you build to pull data from DBPedia. One warning: Semantic Web/Linked Data sources on the web are not available 100% of the time. If your business applications depend on DBPedia always being available, you can follow the instructions on the DBPedia web site to install the service on one of your own servers.

Web Spiders

When you write web spiders to collect data from the web there are two things to consider:

  • Make sure you read the terms of service for web sites whose data you want to use. I have found that calling or emailing web site owners explaining how I want to use the data on their site usually works to get permission.
  • Make sure you don’t access a site too quickly. It is polite to wait a second or two between fetching pages and other assets from a web site.

We have already used the Drakma web client library in this book. See the files src/dbpedia/dbpedia-lookup.lisp (covered in the last section) and src/solr_examples/solr-client.lisp (covered in the Chapter on NoSQL). Paul Nathan has written a library using Drakma to crawl a web site, with an example that prints out links as they are found. His code is available under the AGPL license at articulate-lisp.com/src/web-trotter.lisp and I recommend it as a starting point.

I find it is sometimes easier during development to make local copies of a web site so that I don’t have to use excess resources from web site hosts. Assuming that you have the wget utility installed, you can mirror a site like this:

1 wget -m -w 2 http://knowledgebooks.com/
2 wget -mk -w 2 http://knowledgebooks.com/

Both of these examples pause between HTTP requests: the -w 2 option delays for two seconds between requests for resources. The option -m tells wget to mirror the site, recursively following all links. The variant -mk additionally converts URI references to local file references in your local mirror, so the second example on line 2 is more convenient.

We covered reading from local files in the Chapter on Input and Output. One trick I use is to simply concatenate all web pages into one file. Assuming that you created a local mirror of a web site, cd to the top level directory and use something like this:

1 cd knowledgebooks.com
2 cat *.html */*.html > ../web_site.html

You can then open the file and search for text in p, div, h1, etc. HTML elements to process an entire web site as one file.
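
As a sketch of this kind of processing, the following uses the cl-ppcre regular expression library, along with the read-file-string utility from UIOP, to pull the full text of all elements with a given tag out of the concatenated file. The function name and regular expression here are my own, written for illustration:

(ql:quickload :cl-ppcre)

(defun extract-tag-text (path tag)
  "Return a list of the TAG elements found in the HTML file at PATH."
  (let ((html (uiop:read-file-string path)))
    (cl-ppcre:all-matches-as-strings
     ;; (?s) lets . match newlines; .*? is a non-greedy match:
     (format nil "(?s)<~a[^>]*>.*?</~a>" tag tag)
     html)))

;; (extract-tag-text "web_site.html" "h1")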

Using Apache Nutch

Apache Nutch, like Solr, is built on Lucene search technology. I use Nutch as a “search engine in a box” when I need to spider web sites and I want a local copy with a good search index.

Nutch addresses a different developer use case than Solr, which we covered in the Chapter on NoSQL. As we saw, Solr is an effective tool for indexing and searching structured data as documents. With very little setup, Nutch can automatically keep an up-to-date index of a list of web sites and optionally follow links to some desired depth from these “seed” web sites.

You can use the same Common Lisp client code that we used for Solr with one exception: you will need to change the root URI for the search service to:

1  http://localhost:8080/opensearch?query=

So the modified client code src/solr_examples/solr-client.lisp needs one line changed:

1 (defun do-search (&rest terms)
2   (let ((query-string (format nil "~{~A~^+AND+~}" terms)))
3    (cl-json:decode-json-from-string
4      (drakma:http-request
5        (concatenate 
6         'string
7         "http://localhost:8080/opensearch?query="
8         query-string
9         "&wt=json")))))

Early versions of Nutch were very simple to install and configure. Later versions of Nutch have been more complex, more performant, and have more services, but it will take you longer to get set up than earlier versions. If you just want to experiment with Nutch, you might want to start with an earlier version.

The OpenSearch.org web site contains many public OpenSearch services that you might want to try. If you want to modify the example client code in src/solr_examples/solr-client.lisp, OpenSearch services that return JSON data are the easiest to work with, and the OpenSearch Community JSON formats web page is a good place to start. Some of the services on this web page, like the New York Times service, require that you sign up for a developer’s API key.

When I start writing an application that requires web data (no matter which programming language I am using) I start by finding services that may provide the type of data I need and do my initial development with a web browser with plugin support to nicely format XML and JSON data. I do a lot of exploring and take a lot of notes before I write any code.

Wrap Up

I tried to provide some examples and advice in this short chapter to show you that even though other languages like Ruby and Python have more libraries and tools for gathering information from the web, Common Lisp also has good libraries for information gathering, and they are easily installed via Quicklisp.

Using The CL Machine-Learning Library

The CL Machine-Learning (CLML) library was originally developed by MSI (NTT DATA Mathematical Systems Inc. in Japan) and is supported by many developers. You should visit the CLML web page for project documentation and follow the installation directions and read about the project before using the examples in this chapter. However if you just want to quickly try the following CLML examples then you can install CLML using Quicklisp:

1 mkdir -p ~/quicklisp/local-projects
2 cd ~/quicklisp/local-projects
3 git clone https://github.com/mmaul/clml.git
4 sbcl --dynamic-space-size 2560
5 > (ql:quickload :clml :verbose t)

The installation will take a while to run, but after installation, loading the libraries via quickload is fast. You can now run the example Quicklisp project in src/clml_examples:

$ sbcl --dynamic-space-size 2560
* (ql:quickload "clmltest")
* (clmltest:clml-tests-example)

Please be patient the first time you load the example project: the one-time installation of CLML will take a while to run, but after installation the example project loads quickly. CLML installation involves downloading and installing BLAS, LAPACK, and other libraries.

Other resources for CLML are the tutorials and contributed extensions that include support for plotting (using several libraries) and for fetching data sets.

Although CLML is fairly portable, we will be using SBCL, and we need to increase the heap space when starting SBCL in order to use the CLML library:

sbcl --dynamic-space-size 5000

You can refer to the documentation at https://github.com/mmaul/clml. This documentation lists the packages with some information for each package, but realistically I keep the source code for CLML open in an editor or IDE and read the source while writing code that uses CLML. I will show you, with short examples, how to use the KNN (K nearest neighbors) and SVM (support vector machines) APIs. We will not cover other useful CLML APIs like time series processing, Naive Bayes, PCA (principal component analysis), and general matrix and tensor operations.

Even though the learning curve is a bit steep, CLML provides a lot of functionality for machine learning, dealing with time series data, and general matrix and tensor operations.

Using the CLML Data Loading and Access APIs

The CLML project uses several data sets and since the few that we will use are small files, they are included in the book’s repository in directory machine_learning_data under the src directory. The first few lines of labeled_cancer_training_data.csv are:

Cl.thickness,Cell.size,Cell.shape,Marg.adhesion,Epith.c.size,Bare.nuclei,Bl.cromatin\
,Normal.nucleoli,Mitoses,Class
5,4,4,5,7,10,3,2,1,benign
6,8,8,1,3,4,3,7,1,benign
8,10,10,8,7,10,9,7,1,malignant
2,1,2,1,2,1,3,1,1,benign

The first line in the CSV data files specifies names for each attribute with the name of the last column being “Class” which here takes on values benign or malignant. Later, the goal will be to create models that are constructed from training data and then make predictions of the “Class” of new input data. We will look at how to build and use machine learning models later but here we concentrate on reading and using input data.

The example file clml_data_apis.lisp shows how to open a file and loop over the values for each row:

 1 ;; note; run SBCL using: sbcl --dynamic-space-size 2560
 2 
 3 (ql:quickload '(:clml
 4                 :clml.hjs)) ; read data sets
 5 
 6 (defpackage #:clml-data-test
 7   (:use #:cl #:clml.hjs.read-data))
 8 
 9 (in-package #:clml-data-test)
10 
11 (defun read-data ()
12   (let ((train1
13          (clml.hjs.read-data:read-data-from-file
14           "./machine_learning_data/labeled_cancer_training_data.csv"
15           :type :csv
16           :csv-type-spec (append
17                            (make-list 9 :initial-element 'double-float)
18                            '(symbol)))))
19     (loop-over-and-print-data train1)))
20 
21 (defun loop-over-and-print-data (clml-data-set)
22   (print "Loop over and print a CLML data set:")
23   (let ((testdata (clml.hjs.read-data:dataset-points clml-data-set)))
24     (loop for td across testdata
25        do
26          (print td))))
27 
28 (read-data)

The function read-data defined in lines 11-19 uses the utility function clml.hjs.read-data:read-data-from-file to read a CSV (comma separated value) spreadsheet file from disk. The CSV file is expected to contain 10 columns (set in lines 16-18), with the first nine columns containing floating point values and the last column containing a symbol.

The function loop-over-and-print-data defined in lines 21-26 reads the CLML data set object, looping over each data sample (i.e., each row in the original spreadsheet file) and printing it.

Here is some output from loading this file:

 1 $ sbcl --dynamic-space-size 2560
 2 This is SBCL 1.3.16, an implementation of ANSI Common Lisp.
 3 More information about SBCL is available at <http://www.sbcl.org/>.
 4 
 5 SBCL is free software, provided as is, with absolutely no warranty.
 6 It is mostly in the public domain; some portions are provided under
 7 BSD-style licenses.  See the CREDITS and COPYING files in the
 8 distribution for more information.
 9 * (load "clml_data_apis.lisp")
10 
11 "Loop over and print a CLML data set:" 
12 #(5.0d0 4.0d0 4.0d0 5.0d0 7.0d0 10.0d0 3.0d0 2.0d0 1.0d0 |benign|) 
13 #(6.0d0 8.0d0 8.0d0 1.0d0 3.0d0 4.0d0 3.0d0 7.0d0 1.0d0 |benign|) 
14 #(8.0d0 10.0d0 10.0d0 8.0d0 7.0d0 10.0d0 9.0d0 7.0d0 1.0d0 |malignant|) 
15 #(2.0d0 1.0d0 2.0d0 1.0d0 2.0d0 1.0d0 3.0d0 1.0d0 1.0d0 |benign|) 

In the next section we will use the same cancer data training file, and another test data file in the same format, to cluster this cancer data into similar sets: one set for non-malignant samples and one for malignant samples.

KNN Classification of Cancer Data Set

We will now read the same University of Wisconsin cancer data set and cluster the input samples (one sample per row of the spreadsheet file) into similar classes. We will find after training a model that the data is separated into two clusters, representing non-malignant and malignant samples.

The function cancer-data-cluster-example-read-data defined in lines 33-47 is very similar to the function read-data in the last section, except that here we read two data files: one for training and one for testing.

The function cluster-using-k-nn defined in lines 13-30 uses the training and test data objects to first train a model and then test it with test data that was not used for training. Notice how we call this function in line 47: the first two arguments are the two data set objects, the third is the string “Class” that labels the 10th column of the original spreadsheet CSV files, and the last argument is the type of distance measurement used to compare two data samples (i.e., to compare any two rows of the training CSV data file).

 1 ;; note; run SBCL using: sbcl --dynamic-space-size 2560
 2 
 3 (ql:quickload '(:clml
 4                 :clml.hjs ; utilities
 5                 :clml.clustering))
 6 
 7 (defpackage #:clml-knn-cluster-example1
 8   (:use #:cl #:clml.hjs.read-data))
 9 
10 (in-package #:clml-knn-cluster-example1)
11 
12 ;; following is derived from test code in CLML:
13 (defun cluster-using-k-nn (test train objective-param-name  manhattan)
14   (let (original-data-column-length)
15     (setq original-data-column-length
16           (length (aref (clml.hjs.read-data:dataset-points train) 0)))
17     (let* ((k 5)
18            (k-nn-estimator
19             (clml.nearest-search.k-nn:k-nn-analyze train
20               k
21               objective-param-name :all
22               :distance manhattan :normalize t)))
23       (loop for data across
24           (dataset-points
25             (clml.nearest-search.k-nn:k-nn-estimate k-nn-estimator test))
26          if (equal (aref data 0) (aref data original-data-column-length))
27          do
28            (format t "Correct: ~a~%" data)
29          else do
30            (format t "Wrong:   ~a~%" data)))))
31 
32 ;; following is derived from test code in CLML:
33 (defun cancer-data-cluster-example-read-data ()
34   (let ((train1
35          (clml.hjs.read-data:read-data-from-file
36           "./machine_learning_data/labeled_cancer_training_data.csv"
37           :type :csv
38           :csv-type-spec (append (make-list 9 :initial-element 'double-float)
39                                  '(symbol))))
40         (test1
41          (clml.hjs.read-data:read-data-from-file
42           "./machine_learning_data/labeled_cancer_test_data.csv"
43           :type :csv
44           :csv-type-spec (append (make-list 9 :initial-element 'double-float)
45                                  '(symbol)))))
46     ;;(print test1)
47     (print (cluster-using-k-nn test1 train1 "Class" :double-manhattan))))
48 
49 (cancer-data-cluster-example-read-data)

The following listing shows the output from running the last code example:

 1 Number of self-misjudgement : 13
 2 Correct: #(benign 5.0d0 1.0d0 1.0d0 1.0d0 2.0d0 1.0d0 3.0d0 1.0d0 1.0d0 benign)
 3 Correct: #(benign 3.0d0 1.0d0 1.0d0 1.0d0 2.0d0 2.0d0 3.0d0 1.0d0 1.0d0 benign)
 4 Correct: #(benign 4.0d0 1.0d0 1.0d0 3.0d0 2.0d0 1.0d0 3.0d0 1.0d0 1.0d0 benign)
 5 Correct: #(benign 1.0d0 1.0d0 1.0d0 1.0d0 2.0d0 10.0d0 3.0d0 1.0d0 1.0d0 benign)
 6 Correct: #(benign 2.0d0 1.0d0 1.0d0 1.0d0 2.0d0 1.0d0 1.0d0 1.0d0 5.0d0 benign)
 7 Correct: #(benign 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 1.0d0 3.0d0 1.0d0 1.0d0 benign)
 8 Wrong:   #(benign 5.0d0 3.0d0 3.0d0 3.0d0 2.0d0 3.0d0 4.0d0 4.0d0 1.0d0
 9            malignant)
10 Correct: #(malignant 8.0d0 7.0d0 5.0d0 10.0d0 7.0d0 9.0d0 5.0d0 5.0d0 4.0d0
11            malignant)
12 Correct: #(benign 4.0d0 1.0d0 1.0d0 1.0d0 2.0d0 1.0d0 2.0d0 1.0d0 1.0d0 benign)
13 Correct: #(malignant 10.0d0 7.0d0 7.0d0 6.0d0 4.0d0 10.0d0 4.0d0 1.0d0 2.0d0
14            malignant)
15  ...

SVM Classification of Cancer Data Set

We will now reuse the same cancer data set but use a different way to classify data into non-malignant and malignant categories: Support Vector Machines (SVM). SVMs are linear classifiers, which means that they work best when data is linearly separable. In the case of the cancer data, there are nine dimensions of values that (hopefully) predict one of the two output classes (or categories). If we think of the first 9 columns of data as defining a 9-dimensional space, then SVM will work well when an 8-dimensional hyperplane separates the samples into the two output classes (categories).

To make this simpler to visualize, if we had just two input columns defining a two-dimensional space, and a straight line could separate most of the examples into the two output categories, then the data would be linearly separable and SVM would be a good technique to use. The SVM algorithm effectively determines the parameters defining this one-dimensional line (or, in the cancer data case, the 8-dimensional hyperplane).
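
To make the geometry concrete, here is a tiny sketch (not CLML code) of what a trained linear decision function computes: a weighted sum of the inputs plus a bias term, with the sign of the result selecting the class:

(defun linear-decision (weights bias inputs)
  "Return T if INPUTS fall on the positive side of the hyperplane
   defined by WEIGHTS and BIAS, NIL otherwise."
  (> (+ bias (reduce #'+ (mapcar #'* weights inputs)))
     0.0))

;; with two inputs the separating hyperplane is a line:
;; (linear-decision '(1.0 -1.0) 0.0 '(0.7 0.2)) => T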

What if data is not linearly separable? Then use the backpropagation neural network code in the chapter “Backpropagation Neural Networks” or the deep learning code in the chapter “Using Armed Bear Common Lisp With DeepLearning4j” to create a model.

SVM is very efficient so it often makes sense to first try SVM and if trained models are not accurate enough then use neural networks, including deep learning.

The following listing of file clml_svm_classifier.lisp shows how to read data, build a model, and evaluate the model with different test data. In line 15 we use the function clml.svm.mu:svm, which requires a kernel function and the positive and negative training data. Just for reference, we usually use Gaussian kernel functions for processing numeric data and linear kernel functions for handling text in natural language processing applications. Here we use a Gaussian kernel.

The function cancer-data-svm-example-read-data defined on line 40 differs from how we read and processed data earlier because we need to separate out the positive and negative training examples. The data is split by the nested function split-positive-negative-cases defined in lines 42-52. The last block of code in lines 54-85 is just top-level test code that gets executed when the file clml_svm_classifier.lisp is loaded.

 1 ;; note; run SBCL using: sbcl --dynamic-space-size 2560
 2 
 3 (ql:quickload '(:clml
 4                 :clml.hjs ; utilities
 5                 :clml.svm))
 6 
 7 (defpackage #:clml-svm-classifier-example1
 8   (:use #:cl #:clml.hjs.read-data))
 9 
10 (in-package #:clml-svm-classifier-example1)
11 
12 (defun svm-classifier-test (kernel train test)
13   "train and test are lists of lists, with first elements being negative
14    samples and the second elements being positive samples"
15   (let ((decision-function (clml.svm.mu:svm kernel (cadr train) (car train)))
16         (correct-positives 0)
17         (wrong-positives 0)
18         (correct-negatives 0)
19         (wrong-negatives 0))
20     ;; type: #<CLOSURE (LAMBDA (CLML.SVM.MU::Z) :IN CLML.SVM.MU::DECISION)>
21     (print decision-function)
22     (princ "***** NEGATIVE TESTS: calling decision function:")
23     (terpri)
24     (dolist (neg (car test))  ;; negative test examples
25       (let ((prediction (funcall decision-function neg)))
26         (print prediction)
27         (if prediction (incf wrong-negatives) (incf correct-negatives))))
28     (princ "***** POSITIVE TESTS: calling decision function:")
29     (terpri)
30     (dolist (pos (cadr test)) ;; positive test examples
31       (let ((prediction (funcall decision-function pos)))
32         (print prediction)
33         (if prediction (incf correct-positives) (incf wrong-positives))))
34     (format t "Number of correct negatives ~a~%" correct-negatives)
35     (format t "Number of wrong negatives ~a~%" wrong-negatives)
36     (format t "Number of correct positives ~a~%" correct-positives)
37     (format t "Number of wrong positives ~a~%" wrong-positives)))
38 
39 
40 (defun cancer-data-svm-example-read-data ()
41 
42   (defun split-positive-negative-cases (data)
43     (let ((negative-cases '())
44           (positive-cases '()))
45       (dolist (d data)
46         ;;(print (list "*  d=" d))
47         (if (equal (symbol-name (first (last d))) "benign")
48             (setf negative-cases
49                   (cons (reverse (cdr (reverse d))) negative-cases))
50             (setf positive-cases
51                   (cons (reverse (cdr (reverse d))) positive-cases))))
52       (list negative-cases positive-cases)))
53 
54   (let* ((train1
55           (clml.hjs.read-data:read-data-from-file
56            "./machine_learning_data/labeled_cancer_training_data.csv"
57            :type :csv
58            :csv-type-spec (append (make-list 9 :initial-element 'double-float)
59                                   '(symbol))))
60          (train-as-list
61           (split-positive-negative-cases
62            (coerce
63             (map 'list
64                  #'(lambda (x) (coerce x 'list))
65                  (coerce (clml.hjs.read-data:dataset-points train1) 'list))
66             'list)))
67          (test1
68           (clml.hjs.read-data:read-data-from-file
69            "./machine_learning_data/labeled_cancer_test_data.csv"
70            :type :csv
71            :csv-type-spec (append (make-list 9 :initial-element 'double-float)
72                                   '(symbol))))
73          (test-as-list
74           (split-positive-negative-cases
75            (coerce
76             (map 'list
77                  #'(lambda (x) (coerce x 'list))
78                  (coerce (clml.hjs.read-data:dataset-points test1) 'list))
79             'list))))
80 
81     ;; we will use a gaussian kernel for numeric data.
82     ;; note: for text classification, use a clml.svm.mu:+linear-kernel+
83     (svm-classifier-test
84      (clml.svm.mu:gaussian-kernel 2.0d0)
85      train-as-list test-as-list)))
86 
87 (cancer-data-svm-example-read-data)

The sample code prints the prediction values for the test data which I will not show here. Here are the last four lines of output showing the cumulative statistics for the test data:

1 Number of correct negatives 219
2 Number of wrong negatives 4
3 Number of correct positives 116
4 Number of wrong positives 6

CLML Wrap Up

The CLML machine learning library is under fairly active development and I showed you enough to get started: understanding the data APIs and examples for KNN classification and SVM classification.

A good alternative to CLML is MGL, which supports backpropagation neural networks, Boltzmann machines, and Gaussian processes.

In the next two chapters we continue with the topic of machine learning with backpropagation and Hopfield neural networks.

Backpropagation Neural Networks

Let’s start with an overview of how these networks work and then fill in more detail later. Backpropagation networks are trained by applying training inputs to the network input layer, propagating values through the network to the output neurons, and comparing the errors (or differences) between these propagated output values and the training data output values. These output errors are backpropagated through the network, and the magnitudes of the backpropagated errors are used to adjust the weights in the network.

The example we look at here uses the plotlib package from an earlier chapter and the source code for the example is the file loving_snippet/backprop_neural_network.lisp.

We will use the following diagram to make this process more clear. There are four weights in this very simple network:

  • W1,1 is the floating point number representing the connection strength between input_neuron1 and output_neuron1
  • W2,1 connects input_neuron2 to output_neuron1
  • W1,2 connects input_neuron1 to output_neuron2
  • W2,2 connects input_neuron2 to output_neuron2
Understanding how connection weights connect neurons in adjacent layers

Before any training the weight values are all small random numbers.

Consider a training data element where the input neurons have the values [0.1, 0.9] and the desired output neuron values are [0.9, 0.1], that is, flipping the input values. If the propagated output values for the current weights are [0.85, 0.5], then the value of the first output neuron has a small error: abs(0.85 - 0.9), which is 0.05. However, the propagated error of the second output neuron is high: abs(0.5 - 0.1), which is 0.4. Informally we see that the weights feeding output neuron 1 (W1,1 and W2,1) don’t need to be changed much, but the weights feeding output neuron 2 (W1,2 and W2,2) need modification (the value of W2,2 is too large).

Of course, we would never try to manually train a network like this but it is important to have at least an informal understanding of how weights connect the flow of value (we will call this activation value later) between neurons.

In the neural network seen in the first figure we have four weights connecting the input and output neurons. Think of these four weights as forming a four-dimensional space, where the range in each dimension is constrained to small positive and negative floating point values. At any point in this “weight space,” the numeric values of the weights define a model that maps the inputs to the outputs. The error seen at the output neurons is accumulated for each training example (applied to the input neurons). The training process finds a point in this four-dimensional space that has low error summed across the training data. We will use gradient descent: start with a random point in the four-dimensional space (i.e., an initial random set of weights) and move the point towards a local minimum that represents a set of weights in a model that is (hopefully) “good enough” at representing the training data.
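
The core weight update in gradient descent is simple. Here is a sketch (the names here are illustrative, not taken from the example program): each weight is nudged by a small step, scaled by a learning rate, in the direction that reduces the output error:

(defun update-weight (weight learning-rate back-prop-error activation)
  "Nudge WEIGHT using the backpropagated error term of the
   destination neuron and the activation of the source neuron."
  (+ weight (* learning-rate back-prop-error activation)))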

This process is simple enough but there are a few practical considerations:

  • Sometimes the accumulated error at a local minimum is too large even after many training cycles and it is best to just restart the training process with new random weights.
  • If we don’t have enough training data then the network may have enough memory capacity to memorize the training examples. This is not what we want: we want a model with just enough memory capacity (as represented by the number of weights) to form a generalized predictive model, but not so specific that it just memorizes the training examples. The solution is to start with small networks (few hidden neurons) and increase the number of neurons until the training data can be learned. In general, having a lot of training data is good and it is also good to use as small a network as possible.

In practice using backpropagation networks is an iterative process of experimenting with the size of a network.

In the example program (in the file backprop_neural_network.lisp) we use the plotting library developed earlier to visualize neuron activation and connecting weight values while the network trains.

The following three screen shots from running the function test3, defined at the bottom of the file backprop_neural_network.lisp, illustrate the process: we start with random weights and get random outputs during initial training, then, as delta weights are used to adjust the weights in the network, the training examples are learned:

At the start of the training run with random weights and large delta weights

In the previous figure the initial weights are random, so we get random mid-range values at the output neurons.

The trained weights start to produce non-random output

As we start to train the network, adjusting the weights, we start to see variation in the output neurons as a function of what the inputs are.

After training many cycles the training examples are learned, with only small output errors

In the last figure the network is trained sufficiently well to map inputs [0, 0, 0, 1] to output values that are approximately [0.8, 0.2, 0.2, 0.3] which is close to the expected value [1, 0, 0, 0].

The example source file backprop_neural_network.lisp is long so we will only look at the more interesting parts here. Specifically we will not look at the code to plot neural networks using plotlib.

The activation values of individual neurons are limited to the range [0, 1] by first calculating each value as the sum of the activation values of the neurons in the previous layer times the values of the connecting weights, and then using the Sigmoid function to map that sum into the desired range. The Sigmoid function and the derivative of the Sigmoid function (dSigmoid) look like:

Sigmoid and Derivative of the Sigmoid Functions

Here are the definitions of these functions:

(defun Sigmoid (x)
  (/ 1.0 (+ 1.0 (exp (- x)))))

(defun dSigmoid (x)
  (let ((temp (Sigmoid x)))
    (* temp (- 1.0 temp))))
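
A few sample values show the squashing behavior; the Sigmoid of zero is 0.5 and its slope there is at the maximum value of 0.25:

;; (Sigmoid 0.0)   => 0.5
;; (dSigmoid 0.0)  => 0.25        ; 0.5 * (1 - 0.5)
;; (Sigmoid 10.0)  => 0.9999546
;; (Sigmoid -10.0) => 4.5397868e-5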

The function NewDeltaNetwork creates a new neural network object. This code allocates storage for the input, hidden, and output layers (I sometimes refer to neuron layers as “slabs”) and for the connection weights. Connection weights are initialized to small random values.

 1 ; (NewDeltaNetwork sizeList)
 2 ;       Args:   sizeList = list of sizes of slabs. This also defines
 3 ;                          the number of slabs in the network.
 4 ;                          (e.g.,  '(10 5 4) ==> a 3-slab network with 10
 5 ;                           input neurons, 5 hidden neurons, and 4 output
 6 ;                           neurons).
 7 ;
 8 ;       Returned value = a list describing the network:
 9 ;          (nLayers sizeList
10 ;           (activation-array[1] .. activation-array[nLayers])
11 ;           (weight-array[2] .. weight-array[nLayers])
12 ;           (sum-of-products[2] .. sum-of-products[nLayers[nLayers])
13 ;           (back-prop-error[2] .. back-prop-error[nLayers]))
14 ;           (old-delta-weights[2] .. for momentum term
15 
16                                        :initial-element 0.0))
17            (reverse old-dw-list)))
18 
19     ;;
20      ;  Initialize values for all activations:
21      ;;
22     (mapc
23      (lambda (x)
24         (let ((num (array-dimension x 0)))
25           (dotimes (n num)
26             (setf (aref x n) (frandom 0.01 0.1)))))
27      a-list)
28 
29     ;;
30      ;  Initialize values for all weights:
31      ;;
32     (mapc
33      (lambda (x)
34         (let ((numI (array-dimension x 0))
35               (numJ (array-dimension x 1)))
36           (dotimes (j numJ)
37             (dotimes (i numI)
38               (setf (aref x i j) (frandom -0.5 0.5))))))
39      w-list)
40     (list numLayers sizeList a-list s-list w-list dw-list
41           d-list old-dw-list alpha beta)))
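
Based on the return expression on listing lines 40-41 above, a freshly created network can be taken apart with ordinary list accessors. Here is a short repl sketch (the comments show the values we expect from the documentation comment; note that the returned list also carries the learning parameters alpha and beta at its end):

* (defvar net (NewDeltaNetwork '(5 4 5)))
* (first net)   ; number of layers => 3
* (second net)  ; layer sizes => (5 4 5)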

In the following listing the function DeltaLearn processes one pass through all of the training data. Function DeltaLearn is called repeatedly until the return value is below a desired error threshold. The main loop over the training examples is implemented in lines 69-164. Inside this loop there are two phases for each training example: a forward pass propagating activation from the input neurons through any hidden layers to the output neurons (lines 77-101), and then backpropagation of the output errors with accumulation of the delta weight changes (lines 102-164). After the loop, the weights are updated from the accumulated delta weights plus a momentum term (lines 166-186):

  1 ;;
  2  ;  Utility function for training a delta rule neural network.
  3  ;  The first argument is the name of an output PNG plot file
  4  ;  and a nil value turns off plotting the network during training.
  5  ;  The second argument is a network definition (as returned from
  6  ;  NewDeltaNetwork), the third argument is a list of training
  7  ;  data cases (see the example test functions at the end of this
  8 ;  file for examples).
  9  ;;
 10 
 11 (defun DeltaLearn (plot-output-file-name
 12 		   netList trainList)
 13   (let ((nLayers (car netList))
 14         (sizeList (cadr netList))
 15         (activationList (caddr netList))
 16         (sumOfProductsList (car (cdddr netList)))
 17         (weightList (cadr (cdddr netList)))
 18         (deltaWeightList (caddr (cdddr netList)))
 19         (deltaList (cadddr (cdddr netList)))
 20         (oldDeltaWeightList (cadddr (cdddr (cdr netList))))
 21         (alpha (cadddr (cdddr (cddr netList))))
 22         (beta (cadddr (cdddr (cdddr netList))))
 23         (inputs nil)
 24         (targetOutputs nil)
 25         (iDimension nil)
 26         (jDimension nil)
 27         (iActivationVector nil)
 28         (jActivationVector nil)
 29         (n nil)
 30         (weightArray nil)
 31         (sumOfProductsArray nil)
 32         (iDeltaVector nil)
 33         (jDeltaVector nil)
 34         (deltaWeightArray nil)
 35         (oldDeltaWeightArray nil)
 36         (sum nil)
 37         (iSumOfProductsArray nil)
 38         (error nil)
 39         (outputError 0)
 40         (delta nil)
 41         (eida nil)
 42         (inputNoise 0))
 43 
 44     ;;
 45      ; Zero out deltas:
 46      ;;
 47     (dotimes (n (- nLayers 1))
 48       (let* ((dw (nth n deltaList))
 49              (len1 (array-dimension dw 0)))
 50         (dotimes (i len1)
 51           (setf (aref dw i) 0.0))))
 52 
 53     ;;
 54      ; Zero out delta weights:
 55      ;;
 56     (dotimes (n (- nLayers 1))
 57       (let* ((dw (nth n deltaWeightList))
 58              (len1 (array-dimension dw 0))
 59              (len2 (array-dimension dw 1)))
 60         (dotimes (i len1)
 61           (dotimes (j len2)
 62             (setf (aref dw i j) 0.0)))))
 63 
 64     (setq inputNoise *delta-default-input-noise-value*)
 65 
 66     ;;
 67      ;  Main loop on training examples:
 68      ;;
 69     (dolist (tl trainList)
 70 
 71       (setq inputs (car tl))
 72       (setq targetOutputs (cadr tl))
 73 
 74       (if *delta-rule-debug-flag*
 75         (print (list "Current targets:" targetOutputs)))
 76 
 77       (setq iDimension (car sizeList)) ; get the size of the input slab
 78       (setq iActivationVector (car activationList)) ; input activations
 79       (dotimes (i iDimension) ; copy training inputs to input slab
 80         (setf
 81          (aref iActivationVector i)
 82          (+ (nth i inputs) (frandom (- inputNoise) inputNoise))))
 83       ;;
 84        ; Propagate activation through all of the slabs:
 85        ;;
 86       (dotimes (n-1 (- nLayers 1))  ; propagate activation from layer i to layer j
 87         (setq n (+ n-1 1))
 88         (setq jDimension (nth n sizeList)) ; get the size of the j'th layer
 89         (setq jActivationVector (nth n activationList)) ; activation  for slab j
 90         (setq weightArray (nth n-1 weightList))
 91         (setq sumOfProductsArray (nth n-1 sumOfProductsList))
 92         (dotimes (j jDimension) ; process each neuron in slab j
 93           (setq sum 0.0) ; init sum of products to zero
 94           (dotimes (i iDimension) ; activation from neurons in previous slab
 95             (setq
 96              sum
 97              (+ sum (* (aref weightArray i j) (aref iActivationVector i)))))
 98           (setf (aref sumOfProductsArray j) sum) ; save sum of products
 99           (setf (aref jActivationVector j) (Sigmoid sum)))
100         (setq iDimension jDimension)     ; reset index for next slab pair
101         (setq iActivationVector jActivationVector))
102       ;;
103        ; Activation is  spread through the network and sum of products
104        ; calculated. Now modify the weights in the network using back
105        ; error propagation. Start by calculating the error signal for
106        ; each neuron in the output layer:
107        ;;
108       (setq jDimension (nth (- nLayers 1) sizeList)) ; size of last layer
109       (setq jActivationVector (nth (- nLayers 1) activationList))
110       (setq jDeltaVector (nth (- nLayers 2) deltaList))
111       (setq sumOfProductsArray (nth (- nLayers 2) sumOfProductsList))
112       (setq outputError 0)
113       (dotimes (j jDimension)
114         (setq delta (- (nth j targetOutputs) (aref jActivationVector j)))
115         (setq outputError (+ outputError (abs delta)))
116         (setf
117          (aref jDeltaVector j)
118          (+
119           (aref jDeltaVector j)
120           (* delta (dSigmoid (aref sumOfProductsArray j))))))
121       ;;
122        ; Now calculate the backpropagated error signal for all hidden slabs:
123        ;;
124       (dotimes (nn (- nLayers 2))
125         (setq n (- nLayers 3 nn))
126         (setq iDimension (nth (+ n 1) sizeList))
127         (setq iSumOfProductsArray (nth n sumOfProductsList))
128         (setq iDeltaVector (nth n deltaList))
129         (dotimes (i iDimension)
130           (setf (aref iDeltaVector i) 0.0))
131         (setq weightArray (nth (+ n 1) weightList))
132         (dotimes (i iDimension)
133           (setq error 0.0)
134           (dotimes (j jDimension)
135             (setq error
136                   (+ error (* (aref jDeltaVector j) (aref weightArray i j)))))
137           (setf
138            (aref iDeltaVector i)
139            (+
140             (aref iDeltaVector i)
141             (* error (dSigmoid (aref iSumOfProductsArray i))))))
142         (setq jDimension iDimension)
143         (setq jDeltaVector iDeltaVector))
144 
145       ;;
146        ; Update all delta weights in the network:
147        ;;
148       (setq iDimension (car sizeList))
149       (dotimes (n (- nLayers 1))
150         (setq iActivationVector (nth n activationList))
151         (setq jDimension (nth (+ n 1) sizeList))
152         (setq jDeltaVector (nth n deltaList))
153         (setq deltaWeightArray (nth n deltaWeightList))
154         (setq weightArray (nth n weightList))
155         (setq eida (nth n eidaList))
156 
157         (dotimes (j jDimension)
158           (dotimes (i iDimension)
159             (setq delta (* eida (aref jDeltaVector j) (aref iActivationVector i)))
160             (setf
161              (aref DeltaWeightArray i j)
162              (+ (aref DeltaWeightArray i j) delta)))) ; delta weight changes
163 
164         (setq iDimension jDimension))
165 
166     ;;
167      ; Update all weights in the network:
168      ;;
169     (setq iDimension (car sizeList))
170     (dotimes (n (- nLayers 1))
171       (setq iActivationVector (nth n activationList))
172       (setq jDimension (nth (+ n 1) sizeList))
173       (setq jDeltaVector (nth n deltaList))
174       (setq deltaWeightArray (nth n deltaWeightList))
175       (setq oldDeltaWeightArray (nth n oldDeltaWeightList))
176       (setq weightArray (nth n weightList))
177       (dotimes (j jDimension)
178         (dotimes (i iDimension)
179           (setf
180            (aref weightArray i j)
181            (+ (aref weightArray i j)
182               (* alpha (aref deltaWeightArray i j))
183               (* beta  (aref oldDeltaWeightArray i j))))
184            (setf (aref oldDeltaWeightArray i j) ; save current delta weights
185                  (aref deltaWeightArray i j)))) ; ...for next momentum term.
186       (setq iDimension jDimension))
187 
188     (if plot-output-file-name
189         (DeltaPlot netList plot-output-file-name)))
190 
191     (/ outputError jDimension)))
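
Here is a minimal driver-loop sketch showing the calling pattern described above (train-until is my name for illustration, not part of the book's code); the nil first argument to DeltaLearn turns off plotting:

(defun train-until (net train-data threshold max-passes)
  "Call DeltaLearn repeatedly until the returned error is below THRESHOLD."
  (dotimes (pass max-passes)
    (let ((err (DeltaLearn nil net train-data))) ; nil => no plot files
      (when (< err threshold)
        (return (values err pass))))))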

The function DeltaRecall in the next listing can be used with a trained network to calculate outputs for new input values:

 1 ;;
 2  ;  Utility for using a trained neural network in the recall mode.
 3  ;  The first argument to this function is a network definition (as
 4  ;  returned from NewDeltaNetwork) and the second argument is a list
 5  ;  of input neuron activation values to drive through the network.
 6  ;  The output is a list of the calculated activation energy for
 7  ;  each output neuron.
 8  ;;
 9 (defun DeltaRecall (netList inputs)
10   (let ((nLayers (car netList))
11         (sizeList (cadr netList))
12         (activationList (caddr netList))
13         (weightList (cadr (cdddr netList)))
14         (iDimension nil)
15         (jDimension nil)
16         (iActivationVector nil)
17         (jActivationVector nil)
18         (n nil)
19         (weightArray nil)
20         (returnList nil)
21         (sum nil))
22     (setq iDimension (car sizeList)) ; get the size of the input slab
23     (setq iActivationVector (car activationList)) ; get input activations
24     (dotimes (i iDimension) ; copy training inputs to input slab
25       (setf (aref iActivationVector i) (nth i inputs)))
26     (dotimes (n-1 (- nLayers 1))  ; propagate activation from layer i to layer j
27       (setq n (+ n-1 1))
28       (setq jDimension (nth n sizeList)) ; get the size of the j'th layer
29       (setq jActivationVector (nth n activationList)) ; activation for slab j
30       (setq weightArray (nth n-1 weightList))
31       (dotimes (j jDimension) ; process each neuron in slab j
32         (setq sum 0.0) ; init sum of products to zero
33         (dotimes (i iDimension) ; get activation from each neuron in last slab
34           (setq
35            sum
36            (+ sum (* (aref weightArray i j) (aref iActivationVector i)))))
37         (if *delta-rule-debug-flag*
38           (print (list "sum=" sum)))
39         (setf (aref jActivationVector j) (Sigmoid sum)))
40       (setq iDimension jDimension) ; get ready for next slab pair
41       (setq iActivationVector jActivationVector))
42       (dotimes (j jDimension)
43         (setq returnList (append returnList (list (aref jActivationVector j)))))
44       returnList))
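
For example, after running the function test3 (shown below) the global variable temp holds a trained network and recall is a single call; the result is a list of five output activations which, for this rotation training data, should be close to the input pattern rotated one position to the right (exact values depend on the random initial weights):

* (DeltaRecall temp '(1 0 0 0 0)) ; ideally close to (0 1 0 0 0)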

We saw three output plots earlier that were produced during a training run using the following code:

 1 (defun test3 (&optional (restart 'yes) &aux RMSerror) ; three layer network
 2   (if
 3     (equal restart 'yes)
 4     (setq temp (newdeltanetwork '(5 4 5))))
 5   (dotimes (ii 3000)
 6     (let ((file-name
 7      (if (equal (mod ii 400) 0)
 8          (concatenate 'string "output_plot_" (format nil "~12,'0d" ii) ".png")
 9          nil)))
10       (setq
11        RMSerror
12        (deltalearn
13           file-name temp
14           '(((1 0 0 0 0) (0 1 0 0 0))
15             ((0 1 0 0 0) (0 0 1 0 0))
16             ((0 0 1 0 0) (0 0 0 1 0))
17             ((0 0 0 1 0) (0 0 0 0 1))
18             ((0 0 0 0 1) (1 0 0 0 0)))))
19       (if (equal (mod ii 50) 0) ;; print error out every 50 cycles
20         (progn
21           (princ "....training cycle #")
22           (princ ii)
23           (princ " RMS error = ")
24           (princ RMSerror)
25           (terpri))))))

Here the function test3 defines training data for a very small test network and a moderately difficult function to learn: rotate the values of the input neurons one position to the right, wrapping around to the first neuron. The main loop starting on line 5 calls the training function 3000 times, creating a plot of the network every 400 times through the loop.

Backpropagation networks have been used successfully in production for about 25 years. In the next chapter we will look at a less practical type of network, the Hopfield network, which is still interesting because in some sense Hopfield networks model how our brains work. In the final chapter we will look at deep learning neural networks.

Hopfield Neural Networks

A Hopfield network (named after John Hopfield) is a recurrent network since the flow of activation through the network has loops. These networks are trained by applying input patterns and letting the network settle in a state that stores the input patterns.

The example code is in the file src/loving_snippets/Hopfield_neural_network.lisp.

The example we look at recognizes patterns that are similar to the patterns seen in training examples and maps input patterns to the most similar training pattern. The following figure shows output from the example program: an original training pattern, a similar pattern with one cell turned on and another turned off, and the reconstructed pattern:

To be clear, we have taken one of the original input patterns that the network has learned, slightly altered it, and applied it as input to the network. After cycling the network, the slightly scrambled input pattern acts as an associative memory key: the network looks up the original pattern and rewrites the input values with the original learned pattern. Hopfield networks are very different from backpropagation networks: neuron activations are forced to values of -1 or +1, so they are not differentiable, and there are no separate output neurons.

The next example has the values of three cells modified from the original and the original pattern is still reconstructed correctly:

This last example has four of the original cells modified:

The following example program shows a type of content-addressable memory. After a Hopfield network learns a set of input patterns it can reconstruct the original patterns when shown similar patterns. This reconstruction is not always perfect.

The following function Hopfield-Init (in file Hopfield_neural_network.lisp) is passed a list of lists of training examples that will be remembered in the network. This function returns a list containing the data defining a Hopfield neural network. All data for the network is encapsulated in the list returned by this function, so multiple Hopfield neural networks can be used in an application program.

In lines 9-12 we allocate global arrays for data storage and in lines 14-18 the training data is copied.

The inner function adjustInput defined on lines 20-23 maps data values to -1.0 or +1.0, and lines 25-29 apply it to the training data. In lines 31-33 we initialize all of the weights in the Hopfield network to zero.

The nested loops on lines 35-47 calculate the autocorrelation weight matrix from the input test patterns, and lines 48-52 precompute, for each cell, a sum over the weights that is used later in the energy calculation.

On lines 54-56, the function returns a representation of the Hopfield network that will be used later in the function HopfieldNetRecall to find the most similar “remembered” pattern given a new (fresh) input pattern.

 1 (defun Hopfield-Init (training-data
 2                       &aux temp *num-inputs* *num-training-examples*
 3                            *training-list* *inputCells* *tempStorage*
 4                            *HopfieldWeights*)
 5 
 6   (setq *num-inputs* (length (car training-data)))
 7   (setq *num-training-examples* (length training-data))
 8 
 9   (setq *training-list* (make-array (list *num-training-examples* *num-inputs*)))
10   (setq *inputCells* (make-array (list *num-inputs*)))
11   (setq *tempStorage* (make-array (list *num-inputs*)))
12   (setq *HopfieldWeights* (make-array (list *num-inputs* *num-inputs*)))
13 
14   (dotimes (j *num-training-examples*) ;; copy training data
15     (dotimes (i *num-inputs*)
16       (setf
17        (aref *training-list* j i)
18        (nth i (nth j training-data)))))
19 
20   (defun adjustInput (value)  ;; this function is lexically scoped
21     (if (< value 0.1)
22       -1.0
23       +1.0))
24 
25   (dotimes (i *num-inputs*) ;; adjust training data
26     (dotimes (n *num-training-examples*)
27       (setf
28        (aref *training-list* n i)
29        (adjustInput (aref *training-list* n i)))))
30 
31   (dotimes (i *num-inputs*) ;; zero weights
32     (dotimes (j *num-inputs*)
33       (setf (aref *HopfieldWeights* i j) 0)))
34 
35   (dotimes (j-1 (- *num-inputs* 1)) ;; autocorrelation weight matrix
36     (let ((j (+ j-1 1)))
37       (dotimes (i j)
38         (dotimes (s *num-training-examples*)
39           (setq temp
40                 (truncate
41                  (+
42                   (*  ;; the 2 adjustInput calls truncate values to -1 or +1:
43                    (adjustInput (aref *training-list* s i))
44                    (adjustInput (aref *training-list* s j)))
45                   (aref *HopfieldWeights* i j))))
46           (setf (aref *HopfieldWeights* i j) temp)
47           (setf (aref *HopfieldWeights* j i) temp)))))
48   (dotimes (i *num-inputs*)
49     (setf (aref *tempStorage* i) 0)
50     (dotimes (j i)
51       (setf (aref *tempStorage* i)
52             (+ (aref *tempStorage* i) (aref *HopfieldWeights* i j)))))
53 
54   (list ;; return the value of the Hopfield network data object
55    *num-inputs* *num-training-examples* *training-list*
56    *inputCells* *tempStorage* *HopfieldWeights*))

The following function HopfieldNetRecall iterates the network to let it settle in a stable pattern which we hope will be the original training pattern most closely resembling the noisy test pattern.

The inner (lexically scoped) function deltaEnergy defined on lines 9-12 calculates a change in energy from the current input values and the autocorrelation weight matrix. The main code on lines 14-19 iterates over the input cells, setting the cell at index i to one when its delta energy is greater than zero and to zero otherwise. Remember that the lexically scoped inner function has access to the variables for the number of inputs, the number of training examples, the list of training examples, the input cell values, temporary storage, and the Hopfield network weights.

 1 (defun HopfieldNetRecall (aHopfieldNetwork numberOfIterations)
 2   (let ((*num-inputs* (nth 0 aHopfieldNetwork))
 3         (*num-training-examples*  (nth 1 aHopfieldNetwork))
 4         (*training-list* (nth 2 aHopfieldNetwork))
 5         (*inputCells* (nth 3 aHopfieldNetwork))
 6         (*tempStorage* (nth 4 aHopfieldNetwork))
 7         (*HopfieldWeights* (nth 5 aHopfieldNetwork)))
 8 
 9     (defun deltaEnergy (row-index y &aux (temp 0.0))  ;; lexically scoped
10       (dotimes (j *num-inputs*)
11         (setq temp (+ temp (* (aref *HopfieldWeights* row-index j) (aref y j)))))
12       (- (* 2.0 temp) (aref *tempStorage* row-index)))
13 
14     (dotimes (ii numberOfIterations) ;; main code
15       (dotimes (i *num-inputs*)
16         (setf (aref *inputCells* i)
17               (if (> (deltaEnergy i *inputCells*) 0)
18                   1
19                   0))))))
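
Putting Hopfield-Init and HopfieldNetRecall together, here is a minimal usage sketch (my own illustration, simpler than the test function shown below): learn two six-cell patterns, load the first pattern into the input cells, corrupt one cell, and let the network settle:

(let* ((net (Hopfield-Init '((1 0 1 0 1 0)
                             (0 1 0 1 0 1))))
       (training (nth 2 net))  ; the adjusted training patterns
       (cells (nth 3 net)))    ; the input cell array
  (dotimes (i 6)               ; copy the first pattern to the input cells
    (setf (aref cells i) (aref training 0 i)))
  (setf (aref cells 0) -1)     ; corrupt one cell
  (HopfieldNetRecall net 5)    ; let the network settle for 5 iterations
  cells)                       ; the cells now hold the recalled 1/0 pattern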

The function test in the next listing uses three different patterns for each test. Note that only the last pattern gets plotted to the output graphics PNG file, for the purpose of producing the figures for this chapter. If you want to produce plots of other patterns, edit the third pattern defined on line 5 of the listing. The following plotting functions are lexically scoped inner functions, so they have access to the data defined in the enclosing let expression on lines 16-21:

  • plotExemplar - plots a vector of data
  • plot-original-inputCells - plots the original input cells from training data
  • plot-inputCells - plots the modified input cells (a few cells randomly flipped in value)
  • modifyInput - scrambles training inputs
 1 (defun test (&aux aHopfieldNetwork)
 2   (let ((tdata '(  ;; sample sine wave data with different periods:
 3                  (1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 1 0 0 0)
 4                  (0 1 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 1 0)
 5                  (0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 1 1 0 1 1)))
 6         (width 300)
 7         (height 180))
 8     (vecto::with-canvas (:width width :height height)
 9       (plotlib:plot-string-bold 10 (- height 14) "Hopfield pattern classifier")
10 
11       ;; Set up network:
12       (print tdata)
13       (setq aHopfieldNetwork (Hopfield-Init tdata))
14 
15       ;; lexically scoped variables are accessible by inner functions:
16       (let ((*num-inputs* (nth 0 aHopfieldNetwork))
17             (*num-training-examples*  (nth 1 aHopfieldNetwork))
18             (*training-list* (nth 2 aHopfieldNetwork))
19             (*inputCells* (nth 3 aHopfieldNetwork))
20             (*tempStorage* (nth 4 aHopfieldNetwork))
21             (*HopfieldWeights* (nth 5 aHopfieldNetwork)))
22 
23         (defun plotExemplar (row &aux (dmin 0.0) (dmax 1.0) (x 20) (y 40))
24           (let ((YSize (array-dimension *training-list* 1)))
25             (plotlib:plot-string (+ x 20) (- height (- y 10))
26                                  "Original Training Exemplar")
27             (dotimes (j Ysize)
28               (plotlib:plot-fill-rect
29                (+ x (* j plot-size+1)) (- height y) plot-size plot-size
30                (truncate (*
31                            (/  (- (aref *training-list* row j) dmin)
32                                (- dmax dmin))
33                            5)))
34               (plotlib:plot-frame-rect (+ x (* j plot-size+1))
35                                        (- height y) plot-size plot-size))))
36 
37         (defun plot-original-inputCells (&aux (dmin 0.0) (dmax 1.0) (x 20) (y 80))
38           (let ((Xsize (array-dimension *inputCells* 0)))
39             (plotlib:plot-string (+ x 20) (- height (- y 10)) "Scrambled Inputs")
40             (dotimes (j Xsize)
41               (plotlib:plot-fill-rect
42                (+ x (* j plot-size+1)) (- height y) plot-size plot-size
43                (truncate (*
44                            (/  (- (aref *inputCells* j) dmin) (- dmax dmin))
45                            5)))
46               (plotlib:plot-frame-rect (+ x (* j plot-size+1))
47                                        (- height y) plot-size plot-size))))
48 
49         (defun plot-inputCells (&aux (dmin 0.0) (dmax 1.0) (x 20) (y 120))
50           (let ((Xsize (array-dimension *inputCells* 0)))
51             (plotlib:plot-string (+ x 20) (- height (- y 10))
52                                  "Reconstructed Inputs")
53             (dotimes (j Xsize)
54               (plotlib:plot-fill-rect
55                (+ x (* j plot-size+1)) (- height y) plot-size plot-size
56                (truncate (* (/
57                              (- (aref *inputCells* j) dmin)
58                              (- dmax dmin))
59                             5)))
60               (plotlib:plot-frame-rect
61                  (+ x (* j plot-size+1)) (- height y) plot-size plot-size))))
62 
63         (defun modifyInput (arrSize arr)  ;; modify input array for testing
64           (dotimes (i arrSize)
65             (if (< (random 50) 5)
66                 (if (> (aref arr i) 0)
67                     (setf (aref arr i) -1)
68                     (setf (aref arr i) 1)))))
69 
70         ;; Test network on training data that is randomly modified:
71 
72         (dotimes (iter 10) ;; cycle 10 times and make 10 plots
73           (dotimes (s *num-training-examples*)
74             (dotimes (i *num-inputs*)
75               (setf (aref *inputCells* i) (aref *training-list* s i)))
76             (plotExemplar s)
77             (modifyInput *num-inputs* *inputCells*)
78             (plot-original-inputCells)
79             (dotimes (call-net 5) ;; iterate Hopfield net 5 times
80               (HopfieldNetRecall aHopfieldNetwork 1)  ;; calling with 1 iteration
81               (plot-inputCells)))
82 
83           (vecto::save-png
84            (concatenate
85             'string
86             "output_plot_hopfield_nn_" (format nil "~5,'0d" iter) ".png")))))))

The plotting functions in lines 23-61 use the plotlib library to make the plots you saw earlier. The function modifyInput in lines 63-68 randomly flips the values of input cells, taking an original pattern and slightly modifying it.

Hopfield neural networks, at least to some extent, seem to model some aspects of human brains in the sense that they can function as content-addressable (also called associative) memories. Ideally a partial input pattern from a remembered input can reconstruct the complete original pattern. Another interesting feature of Hopfield networks is that these memories really are stored in a distributed fashion: some of the weights can be randomly altered and patterns are still remembered, but with more recall errors.

Using Python Deep Learning Models In Common Lisp With a Web Services Interface

In older editions of this book I had an example of using the Java DeepLearning4J deep learning library from Armed Bear Common Lisp (a Common Lisp implemented in Java). I no longer use hybrid Java and Common Lisp applications in my own work, so I removed that example and replaced it with two projects that use simple Python web services as wrappers for state of the art deep learning models, with Common Lisp clients in the subdirectories:

  • src/spacy_web_client: use the spaCy deep learning models for general NLP. I sometimes use my own pure Common Lisp NLP libraries we saw in earlier chapters and sometimes I use a Common Lisp client calling deep learning libraries like spaCy and TensorFlow.
  • src/coref_web_client: coreference (or anaphora) resolution is the act of replacing pronouns in text with the nouns that they refer to. This was traditionally a very difficult and only partially solved problem until recent advances in deep learning models like BERT.

Note: in the next chapter we will cover similar functionality but we will use the py4cl library to more directly use Python and libraries like spaCy by starting another Python process and using streams for communication.

Setting up the Python Web Services Used in this Chapter

You will need python and pip installed on your system. The source code for the Python web services is found in the directory loving-common-lisp/python.

Installing the spaCy NLP Services

I assume that you have some familiarity with using Python. If not, you will still be able to follow these directions as long as you have the utilities pip and python installed. I recommend installing Python and pip using Anaconda.

The server code is in the subdirectory python/python_spacy_nlp_server where you will work when performing a one-time initialization. After the server is installed you can run it from the command line from any directory on your laptop.

I recommend that you use virtual Python environments when using Python applications to separate the dependencies required for each application or development project. Here I assume that you are running in a Python version 3.6 or higher environment. First you must install the dependencies:

1 pip install -U spacy
2 python -m spacy download en
3 pip install falcon

Then change directory to the subdirectory python/python_spacy_nlp_server in the git repo for this book and install the NLP server:

1 cd python/python_spacy_nlp_server
2 python setup.py install

Once you install the server, you can run it from any directory on your laptop or server using:

1 spacynlpserver

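With the server running, you can make a quick test request from a Common Lisp repl before using the client library (a sketch; it assumes the server listens on port 8008, the port used by the client code later in this chapter):

* (ql:quickload "drakma")
* (drakma:http-request
   "http://127.0.0.1:8008?text=President%20Bill%20Clinton%20went%20to%20Congress")
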
I use deep learning models written in Python using TensorFlow or PyTorch, and I provide Python web services that can be used in applications I write in Haskell or Common Lisp using web client interfaces to those services. While it is possible to directly embed models in Haskell and Common Lisp, I find it much easier and more developer friendly to wrap the deep learning models I use as REST services, as I have done here. Deep learning models often require only about a gigabyte of memory, and using pre-trained models has lightweight CPU resource needs, so while developing on my laptop I might have two or three models running and available as wrapped REST services. For production, I configure both the Python services and my Haskell and Common Lisp applications to start automatically on system startup.

This is not a Python programming book and I will not discuss the simple Python wrapping code but if you are also a Python developer you can easily read and understand the code.

Installing the Coreference NLP Services

I recommend that you use virtual Python environments when using Python applications to separate the dependencies required for each application or development project. Here I assume that you are running in a Python version 3.6 environment. First you should install the dependencies:

1 pip install spacy==2.1.0
2 pip install neuralcoref 
3 pip install falcon

As I write this chapter the neuralcoref model and library require a slightly older version of spaCy (the latest version is currently 2.1.4).

Then change directory to the subdirectory python/python_coreference_anaphora_resolution_server in the git repo for this book and install the coref server:

1 cd python_coreference_anaphora_resolution_server
2 python setup.py install

Once you install the server, you can run it from any directory on your laptop or server using:

1 corefserver

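As with the spaCy service, you can make a quick test request from a repl (a sketch; the port 8000 and the no_detail parameter match the Common Lisp client code shown later in this chapter):

* (ql:quickload "drakma")
* (drakma:http-request
   "http://127.0.0.1:8000?text=My%20sister%20has%20a%20dog%20Henry.%20She%20loves%20him.&no_detail=1")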

Common Lisp Client for the spaCy NLP Web Services

Before looking at the code, I will show you typical output from running this example:

 1 $ sbcl          
 2 This is SBCL 1.3.16, an implementation of ANSI Common Lisp.
 3 * (ql:quickload "spacy-web-client")
 4 To load "spacy-web-client":
 5   Load 1 ASDF system:
 6     spacy-web-client
 7 ; Loading "spacy-web-client"
 8 .........
 9 ("spacy-web-client")
10 * (defvar x
11    (spacy-web-client:spacy-client
12   "President Bill Clinton went to Congress. He gave a speech on taxes and Mexico."))
13 * (spacy-web-client:spacy-data-entities x)
14 "Bill Clinton/PERSON"
15 * (spacy-web-client:spacy-data-tokens x)
16 ("President" "Bill" "Clinton" "went" "to" "Congress" "." "He" "gave" "a"
17  "speech" "on" "taxes" "and" "Mexico" ".")

The client library is implemented in the file src/spacy_web_client/spacy-web-client.lisp:

 1 (in-package spacy-web-client)
 2 
 3 (defvar base-url "http://127.0.0.1:8008?text=")
 4 
 5 (defstruct spacy-data entities tokens)
 6   
 7 (defun spacy-client (query)
 8   (let* ((the-bytes
 9 	  (drakma:http-request
10 	   (concatenate 'string
11 			base-url
12 			(do-urlencode:urlencode  query))
13 	   :content-type "application/text"))
14 	 (fetched-data
15 	  (flexi-streams:octets-to-string the-bytes :external-format :utf-8))
16 	 (lists (with-input-from-string (s fetched-data)
17 		  (json:decode-json s))))
18     (print lists)
19     (make-spacy-data :entities (cadar lists) :tokens (cdadr lists))))

On line 3 we define the base URL for accessing the spaCy web service, assuming that it is running on your laptop and not on a remote server. On line 5 we define a defstruct named spacy-data that has two fields: a list of the entities found in the input text and a list of the word tokens in the input text.

The function spacy-client builds a query string on lines 10-12 that consists of the base-url and the URL-encoded input query text. The drakma library, which we used before, is used to make an HTTP request to the Python spaCy server. Lines 14-15 use the flexi-streams package to convert the raw byte data to UTF-8 characters. Lines 16-17 use the json package to parse the UTF-8 encoded string, producing two lists of strings. I left the debug printout expression on line 18 so that you can see the results of parsing the JSON data. The function make-spacy-data was generated for us by the defstruct on line 5.

Common Lisp Client for the Coreference NLP Web Services

Let’s look at some typical output from this example, then we will look at the code:

 1 $ sbcl
 2 This is SBCL 1.3.16, an implementation of ANSI Common Lisp.
 3 More information about SBCL is available at <http://www.sbcl.org/>.
 4 
 5 SBCL is free software, provided as is, with absolutely no warranty.
 6 It is mostly in the public domain; some portions are provided under
 7 BSD-style licenses.  See the CREDITS and COPYING files in the
 8 distribution for more information.
 9 
10 #P"/Users/markw/quicklisp/setup.lisp" 
11 "starting up quicklisp" 
12 * (ql:quickload "coref")
13 To load "coref":
14   Load 1 ASDF system:
15     coref
16 ; Loading "coref"
17 ..................................................
18 [package coref]
19 ("coref")
20 * (coref:coref-client "My sister has a dog Henry. She loves him.")
21 
22 "My sister has a dog Henry. My sister loves a dog Henry."
23 * (coref:coref-client "My sister has a dog Henry. He often runs to her.")
24 
25 "My sister has a dog Henry. a dog Henry often runs to My sister."

Notice that the pronouns in the input text are correctly replaced by the noun phrases that they refer to.

The implementation of the coref client is in the file src/coref_web_client/coref.lisp:

 1 (in-package #:coref)
 2 
 3 ;; (ql:quickload :do-urlencode)
 4 
 5 (defvar base-url "http://127.0.0.1:8000?text=")
 6 
 7 (defun coref-client (query)
 8   (let ((the-bytes
 9 	 (drakma:http-request
10 	  (concatenate 'string
11 		       base-url
12 		       (do-urlencode:urlencode  query)
13 		       "&no_detail=1")
14 	  :content-type "application/text")))
15     (flexi-streams:octets-to-string the-bytes :external-format :utf-8)))

This code is similar to the example in the last section that set up a call to http-request, but it is simpler: here the Python coreference web service accepts a string as input and returns a string as output with the pronouns replaced by the nouns or noun phrases that they refer to. The example in the last section had to parse returned JSON data; this example does not.

Troubleshooting Possible Problems - Skip if this Example Works on Your System

If you run Common Lisp in an IDE (for example the LispWorks IDE or VSCode with a Common Lisp plugin) make sure you start the IDE from the command line so that your PATH environment variable will be set as it is in your bash or zsh shell.

Make sure you are starting your Common Lisp program or running a Common Lisp repl with the same Python installation (if you have Quicklisp installed, then you also have the package uiop installed):

1 $ which python
2 /Users/markw/bin/anaconda3/bin/python
3 $ sbcl
4 This is SBCL 2.0.2, an implementation of ANSI Common Lisp.
5 * (uiop:run-program "which python" :output :string)
6 "/Users/markw/bin/anaconda3/bin/python"
7 nil
8 0
9 * 

Python Interop Wrap-up

Much of my professional work in the last five years has involved deep learning models, and currently most available software for this is written in Python. While there are libraries for calling Python code from Common Lisp, they tend to not work well for Python code that uses libraries like TensorFlow, spaCy, and PyTorch, especially if the Python code is configured to use GPUs via CUDA or special hardware like TPUs. I find it simpler to wrap the functionality implemented in Python as a simple web service.

Using the PY4CL Library to Embed Python in Common Lisp

We will tackle the same problem as in the previous chapter but take a different approach. Now we will use Ben Dudson's project Py4CL, which automatically starts a Python process and communicates with it via a stream interface. The approach we took before is appropriate for large scale systems where you might want to scale horizontally by having Python processes running on different servers than the servers used for the Common Lisp parts of your application. The approach we take now is much more convenient for what I call “laptop development,” where the management of a Python process and the communication with it are handled for you by the Py4CL library. If you need to build multi-server distributed systems for scaling reasons then use the examples in the last chapter.

While Py4CL provides a lot of flexibility for passing primitive types between Common Lisp and Python (in both directions), I find it easiest to write small Python wrappers that use only lists, arrays, numbers, and strings as arguments and return types. You might want to experiment with the examples on the Py4CL GitHub page that let you directly call Python libraries without writing wrappers. When I write code for my own projects I try to make the code as simple as possible, so that when I revisit my own code later it is immediately obvious what it is doing. Since I have been using Common Lisp for almost 40 years, I often find myself reusing bits of my own old code, and I optimize for making this as easy as possible. In other words, I favor readability over “clever” code.
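
To give a feel for the no-wrapper style, here is a small repl sketch that calls Python directly through the py4cl API functions python-exec and python-eval (this assumes a python executable is on your PATH):

* (ql:quickload "py4cl")
* (py4cl:python-exec "import math")
* (py4cl:python-eval "[math.sqrt(x) for x in range(4)]")
;; the Python list is returned as a Lisp vector of floats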

Project Structure, Building the Python Wrapper, and Running an Example

The packaging of the Lisp code for my spacy-py4cl package is simple. Here is the listing of package.lisp for this project:

1 ;;;; package.lisp
2 
3 (defpackage #:spacy-py4cl
4   (:use #:cl #:py4cl)
5   (:export #:nlp))

Listing of spacy-py4cl.asd:

 1 ;;;; spacy-py4cl.asd
 2 
 3 (asdf:defsystem #:spacy-py4cl
 4   :description "Use py4cl to use Python spaCy library embedded in Common Lisp"
 5   :author "Mark Watson <markw@markwatson.com>"
 6   :license "Apache 2"
 7   :depends-on (#:py4cl)
 8   :serial t
 9   :components ((:file "package")
10                (:file "spacy-py4cl")))

You need to run a Python setup procedure to install the Python wrapper for spacy-py4cl on your system. Some output is removed for conciseness:

 1 $ cd loving-common-lisp/src/spacy-py4cl
 2 $ cd PYTHON_SPACY_SETUP_install/spacystub
 3 $ pip install -U spacy
 4 $ python -m spacy download en
 5 $ python setup.py install
 6 running install
 7 running build
 8 running build_py
 9 running install_lib
10 running install_egg_info
11 Writing /Users/markw/bin/anaconda3/lib/python3.7/site-packages/spacystub-0.21-py3.7.\
12 egg-info

You only need to do this once unless you update to a later version of Python on your system.

If you are familiar with Python it is worth looking at the wrapper implementation; otherwise you can skip the next few paragraphs.

$ ls -R PYTHON_SPACY_SETUP_install 
spacystub

PYTHON_SPACY_SETUP_install/spacystub:
README.md		setup.py	spacystub

PYTHON_SPACY_SETUP_install/spacystub/build/lib:
spacystub

PYTHON_SPACY_SETUP_install/spacystub/spacystub:
parse.py

Here is the implementation of setup.py that specifies how to build and install the wrapper globally for use on your system:

1 from distutils.core import setup
2 
3 setup(name='spacystub',
4       version='0.21',
5       packages=['spacystub'],
6       license='Apache 2',
7       py_modules=['pystub'],
8       long_description=open('README.md').read())

The definition of the library in file PYTHON_SPACY_SETUP_install/spacystub/spacystub/parse.py:

 1 import spacy
 2 
 3 nlp = spacy.load("en")
 4 
 5 def parse(text):
 6   doc = nlp(text)
 7   response = {}
 8   response['entities'] = [(ent.text, ent.start_char, ent.end_char, ent.label_) for e\
 9 nt in doc.ents]
10   response['tokens'] = [token.text for token in doc]
11   return [response['tokens'], response['entities']]

Here is a Common Lisp repl session showing you how to use the library implemented in the next section:

 1 $ ccl 
 2 Clozure Common Lisp Version 1.12  DarwinX8664
 3 
 4 For more information about CCL, please see http://ccl.clozure.com.
 5 
 6 CCL is free software.  It is distributed under the terms of the Apache Licence, Vers\
 7 ion 2.0.
 8 ? (ql:quickload "spacy-py4cl")
 9 To load "spacy-py4cl":
10   Load 1 ASDF system:
11     spacy-py4cl
12 ; Loading "spacy-py4cl"
13 [package spacy-py4cl]
14 ("spacy-py4cl")
15 ? (spacy-py4cl:nlp "The President of Mexico went to Canada")
16 #(#("The" "President" "of" "Mexico" "went" "to" "Canada") #(("Mexico" 17 23 "GPE") (\
17 "Canada" 32 38 "GPE")))
18 ? (spacy-py4cl:nlp "Bill Clinton bought a red car. He drove it to the beach.")
19 #(#("Bill" "Clinton" "bought" "a" "red" "car" "." "He" "drove" "it" "to" "the" "beac\
20 h" ".") #(("Bill Clinton" 0 12 "PERSON")))

Entities in the text are identified by starting and ending character indices into the input string. For example, the entity “Mexico” starts at character position 17, and character index 23 is the character just after the entity name in the input string. The entity type “GPE” refers to a geopolitical entity such as a country, and “PERSON” refers to a person's name in the input text.
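
Since Py4CL returns the Python lists as Lisp vectors, with the inner Python tuples as Lisp lists (as seen in the repl output above), pulling out just the entity names is ordinary sequence code. A small sketch:

(let* ((result (spacy-py4cl:nlp "The President of Mexico went to Canada"))
       (entities (aref result 1)))  ; second element holds the entity descriptions
  (map 'list #'first entities))     ; => ("Mexico" "Canada")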

Implementation of spacy-py4cl

The Common Lisp implementation for this package is simple. On line 5 the call to py4cl:python-exec starts a process running Python and imports the function parse from my Python wrapper. The call to py4cl:import-function on line 6 finds the function named “parse” in the attached Python process and generates a Common Lisp function with the same name that handles calling into Python and converting the returned values to Common Lisp values:

1 ;;;; spacy-py4cl.lisp
2 
3 (in-package #:spacy-py4cl)
4 
5 (py4cl:python-exec "from spacystub.parse import parse")
6 (py4cl:import-function "parse")
7 
8 (defun nlp (text)
9   (parse text))

While it is possible to call Python libraries directly using Py4CL, when I need to frequently use Python libraries like spaCy, TensorFlow, fast.ai, etc. in Common Lisp, I like to use wrappers with data types and APIs that are as simple as possible for communicating between the Common Lisp process and the spawned Python process.

Troubleshooting Possible Problems - Skip if this Example Works on Your System

When you have installed my wrapper library in Python from the command line in whatever shell you use (bash, zsh, etc.), you should then try to import the library in a Python repl:

1 $ python
2 Python 3.7.4 (default, Aug 13 2019, 15:17:50) 
3 [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
4 Type "help", "copyright", "credits" or "license" for more information.
5 >>> from spacystub.parse import parse
6 >>> parse("John Smith is a Democrat")
7 [['John', 'Smith', 'is', 'a', 'Democrat'], [('John Smith', 0, 10, 'PERSON'), ('Democ\
8 rat', 16, 24, 'NORP')]]
9 >>>

If this works and the Common Lisp library spacy-py4cl does not, then make sure you are starting your Common Lisp program or running a Common Lisp repl with the same Python installation (if you have Quicklisp installed, then you also have the package uiop installed):

1 $ which python
2 /Users/markw/bin/anaconda3/bin/python
3 $ sbcl
4 This is SBCL 2.0.2, an implementation of ANSI Common Lisp.
5 * (uiop:run-program "which python" :output :string)
6 "/Users/markw/bin/anaconda3/bin/python"
7 nil
8 0
9 * 

If you run Common Lisp in an IDE (for example the LispWorks IDE or VSCode with a Common Lisp plugin) make sure you start the IDE from the command line so that your PATH environment variable will be set as it is in your bash or zsh shell.

Wrap-up for Using Py4CL

While I prefer Common Lisp for general development and also for AI research, there are useful Python libraries that I want to integrate into my projects. I hope that the last chapter and this chapter provide you with two solid approaches for taking advantage of Python libraries in your own work.

Semantic Web and Linked Data

I have written two previous books on the semantic web and linked data and most of my programming books have semantic web examples. Please note that the background material here on the semantic web standards RDF, RDFS, and SPARQL is shared with my book Practical Artificial Intelligence Programming With Java so if you have read that book then the first several pages of this chapter will seem familiar.

Construction of Knowledge Graphs, as we will do in later chapters, is a core technology at many corporations and organizations to prevent data silos, where different database systems are poorly connected and not as useful in combination as they could be. The use of RDF data stores is a powerful technique for data interoperability within organizations. Semantic Web standards like RDF, RDFS, and SPARQL support building Knowledge Graphs and are also key technologies for automating the collection and use of web data.

I worked as a contractor at Google on an internal Knowledge Graph project and I currently work at Olive AI on their Knowledge Graph team.

The semantic web is intended to provide a massive linked set of data for use by software systems, just as the World Wide Web provides a massive collection of linked web pages for human reading and browsing. The semantic web is like the web in that anyone can generate any content that they want. This freedom to publish anything works for the web because we use our ability to understand natural language to interpret what we read, and often to dismiss material that, based upon our own knowledge, we consider to be incorrect.

Semantic web and linked data technologies are also useful for smaller amounts of data, an example being a Knowledge Graph containing information for a business. We will further explore Knowledge Graphs in the next two chapters.

The core concept for the semantic web is data integration and use from different sources. As we will soon see, the tools for implementing the semantic web are designed for encoding data and sharing data from many different sources.

I cover the semantic web in this book because I believe that semantic web technologies are complementary to AI systems for gathering and processing data on the web. As more web pages are generated by applications (as opposed to simply showing static HTML files) it becomes easier to produce both HTML for human readers and semantic data for software agents.

There are several very good semantic web toolkits for the Java language and platform. Here we use Apache Jena because it is what I often use in my own work and I believe that it is a good starting technology for your first experiments with semantic web technologies. This chapter provides an incomplete coverage of semantic web technologies and is intended as a gentle introduction to a few useful techniques and how to implement those techniques in your own programs. This chapter is the start of a journey in a technology that I think is as important as technologies like deep learning that get more public mindshare.

The following figure shows a layered hierarchy of data models that are used to implement semantic web applications. To design and implement these applications we need to think in terms of physical models (storage and access of RDF, RDFS, and perhaps OWL data), logical models (how we use RDF and RDFS to define relationships between data represented as unique URIs and string literals, and how we logically combine data from different sources), and conceptual modeling (higher level knowledge representation and reasoning using OWL). Originally RDF data was serialized as XML, but other formats have become much more popular because they are easier to read and to create manually. The top three layers in the figure might be represented as XML, as JSON-LD (linked data JSON), or in formats like N-Triples and N3 that we will use later.

Semantic Web Data Models

Resource Description Framework (RDF) Data Model

The Resource Description Framework (RDF) is used to encode information and the RDF Schema (RDFS) facilitates using data with different RDF encodings without the need to convert one set of schemas to another. Later, using OWL, we can declare that one predicate is the same as another, or that one predicate is a sub-property of another (e.g., a property containsCity can be declared to be a sub-property of containsPlace, so if something contains a city then it also contains a place). The predicate part of an RDF statement often refers to a property.

RDF data was originally encoded as XML and intended for automated processing. In this chapter we will use two easy-to-read formats called “N-Triples” and “N3.” Apache Jena can be used to convert between all RDF formats, so we might as well use formats that are easier to read and understand. RDF data consists of a set of triple values:

  • subject
  • predicate
  • object

Some of my work with semantic web technologies deals with processing news stories, extracting semantic information from the text, and storing it in RDF. I will use this application domain for the examples in this chapter and the next chapter when we implement code to automatically generate RDF for Knowledge Graphs. I deal with triples like the following (a small Lisp sketch of this triple data model appears after the list):

  • subject: a URL (or URI) of a news article.
  • predicate: a relation like “containsPerson”.
  • object: a literal value like “Bill Clinton” or a URI representing Bill Clinton.
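
To make the triple data model concrete before we look at the RDF serialization formats, here is a toy Common Lisp sketch (my own illustration of the data model, not how an RDF data store is actually implemented) that represents triples as three-element lists and selects objects by predicate:

(defparameter *news-triples*
  '(("http://news.com/201234/" "kb:containsPerson"  "Bill Clinton")
    ("http://news.com/201234/" "kb:containsCountry" "China")))

(defun objects-for-predicate (predicate triples)
  "Collect the object of every triple whose predicate matches PREDICATE."
  (loop for (nil pred object) in triples
        when (string= pred predicate)
          collect object))

;; (objects-for-predicate "kb:containsPerson" *news-triples*) => ("Bill Clinton")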

In the next chapter we will use the entity recognition library we developed in an earlier chapter to create RDF from text input.

We will use either URIs or string literals as values for objects. We will always use URIs for representing subjects and predicates. In any case URIs are usually preferred to string literals. We will see an example of this preferred use but first we need to learn the N-Triple and N3 RDF formats.

I proposed the idea that RDF was more flexible than Object Modeling in programming languages, relational databases, and XML with schemas. If we can tag new attributes on the fly to existing data, how do we prevent what I might call “data chaos” as we modify existing data sources? It turns out that the solution to this problem is also the solution for encoding real semantics (or meaning) with data: we usually use unique URIs for RDF subjects, predicates, and objects, and usually with a preference for not using string literals. The definitions of predicates are tied to a namespace and later with OWL we will state the equivalence of predicates in different namespaces with the same semantic meaning. I will try to make this idea more clear with some examples and Wikipedia has a good writeup on RDF.

Any part of a triple (subject, predicate, or object) is either a URI or a string literal. URIs encode namespaces. For example, the containsPerson predicate in the last example could be written as:

http://knowledgebooks.com/ontology/#containsPerson

The first part of this URI is considered to be the namespace for the predicate “containsPerson.” When different RDF triples use this same predicate, we have some assurance that all users of the predicate agree on its meaning. Furthermore, we will see later that we can use RDFS to state equivalency between this predicate (in the namespace http://knowledgebooks.com/ontology/) and predicates represented by different URIs used in other data sources. In an “artificial intelligence” sense, software that we write does not understand predicates like “containsCity”, “containsPerson”, or “isLocation” in the way that a human reader can by combining understood common meanings for the words “contains”, “city”, “is”, “person”, and “location”, but for many interesting and useful types of applications that is fine as long as the predicate is used consistently. We will see shortly that we can define abbreviation prefixes for namespaces, which makes RDF and RDFS files shorter and easier to read.

The Jena library supports most serialization formats for RDF:

  • Turtle
  • N3
  • N-Triples
  • NQuads
  • TriG
  • JSON-LD
  • RDF/XML
  • RDF/JSON
  • TriX
  • RDF Binary

A statement in N-Triple format consists of three URIs (or two URIs and a string literal for the object) followed by a period to end the statement. While statements are often written one per line in a source file, they can be broken across lines; it is the ending period that marks the end of a statement. The standard file extension for N-Triple format files is *.nt and the standard extension for N3 format files is *.n3.

My preference is to use N-Triple format files as output from programs that I write to save data as RDF. N-Triple files don’t use any abbreviations and each RDF statement is self-contained. I often use tools like the command line commands in Jena or RDF4J to convert N-Triple files to N3 or other formats if I will be reading them or even hand editing them. Here is an example using the N3 syntax:

@prefix kb:  <http://knowledgebooks.com/ontology#> .

<http://news.com/201234/> kb:containsCountry "China" .

The N3 format adds prefixes (abbreviations) to the N-Triple format. In practice it would be better to use the URI http://dbpedia.org/resource/China instead of the literal value “China.”

Here we see the use of an abbreviation prefix “kb:” for the namespace for my company KnowledgeBooks.com ontologies. The first term in the RDF statement (the subject) is the URI of a news article. The second term (the predicate) is “containsCountry” in the “kb:” namespace. The last item in the statement (the object) is a string literal “China.” I would describe this RDF statement in English as, “The news article at URI http://news.com/201234 mentions the country China.”

This was a very simple N3 example which we will expand to show additional features of the N3 notation. As another example, suppose that this news article also mentions the USA. Instead of adding a whole new unrelated statement, we can combine the statements using N3 notation. First, here are the two RDF statements written separately:

@prefix kb:  <http://knowledgebooks.com/ontology#> .

<http://news.com/201234/>
  kb:containsCountry
  <http://dbpedia.org/resource/China>  .
  
<http://news.com/201234/>
  kb:containsCountry
  <http://dbpedia.org/resource/United_States>  .

We can collapse multiple RDF statements that share the same subject and optionally the same predicate:

@prefix kb:  <http://knowledgebooks.com/ontology#> .

<http://news.com/201234/>
  kb:containsCountry
    <http://dbpedia.org/resource/China> ,
    <http://dbpedia.org/resource/United_States>  .

The indentation and placement on separate lines is arbitrary - use whatever style you like that is readable. We can also add predicates that use the same subject (I am going to use string literals here instead of URIs for objects to make the following example more concise, but in practice prefer using URIs):

@prefix kb:  <http://knowledgebooks.com/ontology#> .

<http://news.com/201234/>
        kb:containsCountry "China" ,
                           "USA" .
        kb:containsOrganization "United Nations" ;
        kb:containsPerson "Ban Ki-moon" , "Gordon Brown" ,
                          "Hu Jintao" , "George W. Bush" ,
                          "Pervez Musharraf" ,
                          "Vladimir Putin" , 
                          "Mahmoud Ahmadinejad" .

This single N3 statement represents ten individual RDF triples. Groups of triples sharing the same subject and predicate have their objects separated by commas, groups with different predicates are separated by semicolons, and the whole statement ends with a period. Please note that whatever RDF storage system you use (we will be using Jena) it makes no difference whether we load RDF as XML, N-Triple, or N3 format files: internally, subject, predicate, and object triples are stored and used in the same way. RDF triples in a data store represent directed graphs that may not all be connected.

I promised you that the data in RDF data stores was easy to extend. As an example, let us assume that we have written software that is able to read online news articles and create RDF data that captures some of the semantics in the articles. If we extend our program to also recognize dates when the articles are published, we can simply reprocess articles and for each article add a triple to our RDF data store using a form like:

@prefix kb:  <http://knowledgebooks.com/ontology#> .

<http://news.com/201234/> kb:datePublished "2008-05-11" .

Here we just represent the date as a string. We can add a type to the object representing a specific date:

@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix kb:  <http://knowledgebooks.com/ontology#> .
 
 <http://news.com/201234/> kb:datePublished "2008-05-11"^^xsd:date .

Furthermore, if we do not have dates for all news articles, that is often acceptable because when constructing SPARQL queries you can match optional patterns. If, for example, you are looking up articles on a specific subject, then some results may have a publication date attached and some might not. In practice RDF supports types and we would use a date type as seen in the last example, not a string. However, in designing the example programs for this chapter I decided to simplify our representation of URIs and often use plain string literals. For many applications this isn't a real limitation.
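
As a preview, here is a hypothetical sketch of a SPARQL query using an OPTIONAL pattern with the kb:datePublished property we just defined: it returns articles that mention China, with the publication date included only when a kb:datePublished triple exists for an article:

PREFIX kb:  <http://knowledgebooks.com/ontology#>
SELECT ?article ?date
  WHERE {
      ?article kb:containsCountry "China" .
      OPTIONAL { ?article kb:datePublished ?date }
  }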

Extending RDF with RDF Schema

RDF Schema (RDFS) supports the definition of classes and properties based on set inclusion. In RDFS classes and properties are orthogonal. Let’s start with looking at an example using additional namespaces:

@prefix kb:  <http://knowledgebooks.com/ontology#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dbo: <http://dbpedia.org/ontology/> .

<http://news.com/201234/>
  kb:containsCountry
  <http://dbpedia.org/resource/China>  .
  
<http://news.com/201234/>
  kb:containsCountry
  <http://dbpedia.org/resource/United_States>  .
  
<http://dbpedia.org/resource/China>
  rdfs:label "China"@en ;
  rdf:type dbo:Place ,
           dbo:Country .

Because the semantic web is intended to be processed automatically by software systems, it is encoded as RDF. There is a problem that must be solved in implementing and using the semantic web: everyone who publishes semantic web data is free to create their own RDF schemas for storing data; for example, there is usually no single standard RDF schema definition for topics like news stories and stock market data. SKOS provides standard vocabularies, and the most widely used standard schema collection is schema.org. Understanding how to integrate data sources that use different schemas helps in understanding the design decisions behind semantic web applications. In this chapter I often use my own schemas in the knowledgebooks.com namespace for the simple examples you see here. When you build your own production systems, part of the work is searching through schema.org and SKOS so that you can use standard namespaces and schemas when possible. The use of standard schemas helps when you link internal proprietary Knowledge Graphs used in an organization with public open data from sources like WikiData and DBPedia.

We will start with an example that extends the example in the last section to also use RDFS. We add a few additional RDF statements:

@prefix kb:  <http://knowledgebooks.com/ontology#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

kb:containsCity rdfs:subPropertyOf kb:containsPlace .
kb:containsCountry rdfs:subPropertyOf kb:containsPlace .
kb:containsState rdfs:subPropertyOf kb:containsPlace .

The last three lines declare that:

  • The property containsCity is a sub-property of containsPlace.
  • The property containsCountry is a sub-property of containsPlace.
  • The property containsState is a sub-property of containsPlace.

Why is this useful? For at least two reasons:

  • You can query an RDF data store for all triples that use property containsPlace and also match triples with properties equal to containsCity, containsCountry, or containsState. There may not even be any triples that explicitly use the property containsPlace (see the query sketch after this list).
  • Consider a hypothetical case where you are using two different RDF data stores that use different properties for naming cities: cityName and city. You can define cityName to be a sub-property of city and then write all queries against the single property name city. This removes the necessity to convert data from different sources to use the same Schema. You can also use OWL to state property and class equivalency.
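
As a sketch of the first point above, with RDFS inference enabled this hypothetical query returns results even though our sample data only uses the more specific sub-properties:

PREFIX kb:  <http://knowledgebooks.com/ontology#>
SELECT ?subject ?place
  WHERE {
      ?subject kb:containsPlace ?place .
  }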

In addition to providing a vocabulary for describing properties and class membership by properties, RDFS is also used for logical inference to infer new triples, combine data from different RDF data sources, and to allow effective querying of RDF data stores. We will see examples of all of these features of RDFS when we later start using the Jena libraries to perform SPARQL queries.

The SPARQL Query Language

SPARQL is a query language used to query RDF data stores. While SPARQL may initially look like SQL, we will see that there are some important differences like support for RDFS and OWL inferencing and graph-based instead of relational matching operations. We will cover the basics of SPARQL in this section, see more examples later when we set up the Apache Jena Fuseki RDF server and query it from Common Lisp, and see still more examples in the last chapter Knowledge Graph Navigator.

We will use the N3 format RDF file test_data/news.n3 for the examples. I created this file automatically by spidering Reuters news stories on the news.yahoo.com web site and automatically extracting named entities from the text of the articles. We saw techniques for extracting named entities from text in earlier chapters; in this chapter we simply use the resulting sample RDF file.

You have already seen snippets of this file and I list the entire file here for reference, edited to fit the line width of a book page. You may find the file news.n3 easier to read if you open it in a text editor on your computer:

@prefix kb:  <http://knowledgebooks.com/ontology#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

kb:containsCity rdfs:subPropertyOf kb:containsPlace .

kb:containsCountry rdfs:subPropertyOf kb:containsPlace .

kb:containsState rdfs:subPropertyOf kb:containsPlace .

<http://yahoo.com/20080616/usa_flooding_dc_16/>
        kb:containsCity "Burlington" , "Denver" ,
                        "St. Paul" ," Chicago" ,
                        "Quincy" , "CHICAGO" ,
                        "Iowa City" ;
        kb:containsRegion "U.S. Midwest" , "Midwest" ;
        kb:containsCountry "United States" , "Japan" ;
        kb:containsState "Minnesota" , "Illinois" , 
                         "Mississippi" , "Iowa" ;
        kb:containsOrganization "National Guard" ,
                         "U.S. Department of Agriculture" ,
                         "White House" ,
                         "Chicago Board of Trade" ,
                         "Department of Transportation" ;
        kb:containsPerson "Dena Gray-Fisher" ,
                          "Donald Miller" ,
                          "Glenn Hollander" ,
                          "Rich Feltes" ,
                          "George W. Bush" ;
        kb:containsIndustryTerm "food inflation" , "food" ,
                                "finance ministers" ,
                                "oil" .

<http://yahoo.com/78325/ts_nm/usa_politics_dc_2/>
        kb:containsCity "Washington" , "Baghdad" ,
                        "Arlington" , "Flint" ;
        kb:containsCountry "United States" ,
                           "Afghanistan" ,
                           "Iraq" ;
        kb:containsState "Illinois" , "Virginia" ,
                         "Arizona" , "Michigan" ;
        kb:containsOrganization "White House" ,
                                "Obama administration" ,
                                "Iraqi government" ;
        kb:containsPerson "David Petraeus" ,
                          "John McCain" ,
                          "Hoshiyar Zebari" ,
                          "Barack Obama" ,
                          "George W. Bush" ,
                          "Carly Fiorina" ;
        kb:containsIndustryTerm "oil prices" .

<http://yahoo.com/10944/ts_nm/worldleaders_dc_1/>
        kb:containsCity "WASHINGTON" ;
        kb:containsCountry "United States" , "Pakistan" ,
                           "Islamic Republic of Iran" ;
        kb:containsState "Maryland" ;
        kb:containsOrganization "University of Maryland" ,
                                "United Nations" ;
        kb:containsPerson "Ban Ki-moon" , "Gordon Brown" ,
                          "Hu Jintao" , "George W. Bush" ,
                          "Pervez Musharraf" ,
                          "Vladimir Putin" ,
                          "Steven Kull" ,
                          "Mahmoud Ahmadinejad" .

<http://yahoo.com/10622/global_economy_dc_4/>
        kb:containsCity "Sao Paulo" , "Kuala Lumpur" ;
        kb:containsRegion "Midwest" ;
        kb:containsCountry "United States" , "Britain" ,
                           "Saudi Arabia" , "Spain" ,
                           "Italy" , India" , 
                           ""France" , "Canada" ,
                           "Russia" , "Germany" , "China" ,
                           "Japan" , "South Korea" ;
        kb:containsOrganization "Federal Reserve Bank" ,
                                "European Union" ,
                                "European Central Bank" ,
                                "European Commission" ;
        kb:containsPerson "Lee Myung-bak" , "Rajat Nag" ,
                          "Luiz Inacio Lula da Silva" ,
                          "Jeffrey Lacker" ;
        kb:containsCompany "Development Bank Managing" ,
                           "Reuters" ,
                           "Richmond Federal Reserve Bank" ;
        kb:containsIndustryTerm "central bank" , "food" ,
                                "energy costs" ,
                                "finance ministers" ,
                                "crude oil prices" ,
                                "oil prices" ,
                                "oil shock" ,
                                "food prices" ,
                                "Finance ministers" ,
                                "Oil prices" , "oil" .

In the following examples, we will use the Apache Jena Fuseki server (set up in a later section) that allows us to load multiple RDF input files and then interactively enter SPARQL queries.

We will start with a simple SPARQL query for subjects (news article URLs) and objects (matching countries) with the value for the predicate equal to containsCountry. Variables in queries start with a question mark character and can have any names:

SELECT ?subject ?object
      WHERE {
        ?subject
        <http://knowledgebooks.com/ontology#containsCountry>
        ?object .
}

It is important for you to understand what is happening when we apply the last SPARQL query to our sample data. Conceptually, all the triples in the sample data are scanned, keeping the ones where the predicate part of a triple is equal to http://knowledgebooks.com/ontology#containsCountry. In practice RDF data stores supporting SPARQL queries index RDF data so a complete scan of the sample data is not required. This is analogous to relational databases where indices are created to avoid needing to perform complete scans of database tables.

In practice, when you are exploring a Knowledge Graph like DBPedia or WikiData (that are just very large collections of RDF triples), you might run a query and discover a useful or interesting entity URI in the triple store, then drill down to find out more about the entity. In a later chapter Knowledge Graph Navigator we attempt to automate this exploration process using the DBPedia data as a Knowledge Graph.

We will be using the same code to access the small example of RDF statements in our sample data as we will for accessing DBPedia or WikiData.

We can make this last query easier to read and reduce the chance of misspelling errors by using a namespace prefix:

PREFIX kb:  <http://knowledgebooks.com/ontology#>
SELECT ?subject ?object
  WHERE {
      ?subject kb:containsCountry ?object .
  }

Later in the chapter Knowledge Graph Navigator we will write an application that automatically generates SPARQL queries for the DBPedia public Knowledge Graph. These queries will be more complex than the simpler examples here. Reading this chapter before Knowledge Graph Navigator is recommended.

Case Study: Using SPARQL to Find Information about Board of Directors Members of Corporations and Organizations

Before we write software to automate the process of using SPARQL queries to find information on DBPedia, let’s perform a few manual queries to find information on boards of directors of corporations. To start with, we would like to find an RDF property that indicates board membership. There is a common expression for finding information on the web using search engines that also applies to SPARQL queries: “follow your nose,” that is, when you see something interesting, dig down with more queries on whatever interests you.

SELECT DISTINCT ?s 
WHERE { 
    ?s ?p "Board of Directors"@en .
    FILTER (?p IN (<http://www.w3.org/2000/01/rdf-schema#label>,
                   <http://xmlns.com/foaf/0.1/name>)
            && !regex(str(?s), "category", "i"))
}

We will find the resource:

http://dbpedia.org/resource/Board_of_Directors

Next, let’s see which properties link subjects to this resource:

select ?s ?p { ?s ?p <http://dbpedia.org/resource/Board_of_Directors> } limit 6

s	p
http://en.wikipedia.org/wiki/Board_of_Directors	http://xmlns.com/foaf/0.1/primaryTopic
http://dbpedia.org/resource/Lynn_D._Stewart_(businessman)	http://dbpedia.org/ontology/board
http://dbpedia.org/resource/Advance_America_Cash_Advance	http://dbpedia.org/ontology/keyPerson
http://dbpedia.org/resource/Railways_of_Slovak_Republic	http://dbpedia.org/ontology/keyPerson
http://dbpedia.org/resource/Divine_Word_University_of_Tacloban__DWU_Jubilee_Foundation,_Inc.__1	http://dbpedia.org/ontology/keyPerson
http://dbpedia.org/resource/Mathys_Medical

The property http://dbpedia.org/ontology/board is what we are looking for. Let’s keep “following our nose” to find examples of board members and the companies they serve:

select ?person ?company { ?person <http://dbpedia.org/ontology/board> ?company } limit 6

The results are:

person	company
http://dbpedia.org/resource/Matthew_Buckland	http://dbpedia.org/resource/Creative_Commons
http://dbpedia.org/resource/Jimmy_Wales	http://dbpedia.org/resource/Creative_Commons
http://dbpedia.org/resource/Nabeel_Rajab	http://dbpedia.org/resource/Human_Rights_Watch
http://dbpedia.org/resource/Vincent_Tewson	http://dbpedia.org/resource/International_Confederation_of_Free_Trade_Unions
http://dbpedia.org/resource/William_T._Young	http://dbpedia.org/resource/KFC
http://dbpedia.org/resource/Colonel_Sanders	http://dbpedia.org/resource/KFC

Let’s see what information we can find on the founder of Wikipedia, Jimmy Wales:

select ?p ?o { <http://dbpedia.org/resource/Jimmy_Wales> ?p ?o } limit 200

A few of the many results are:

p	o
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://www.w3.org/2002/07/owl#Thing
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://xmlns.com/foaf/0.1/Person
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://dbpedia.org/ontology/Person
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://www.wikidata.org/entity/Q215627
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://www.wikidata.org/entity/Q24229398
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://www.wikidata.org/entity/Q5
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://dbpedia.org/ontology/Agent
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://schema.org/Person
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://dbpedia.org/class/yago/WikicatAmericanComputerScientists
http://www.w3.org/1999/02/22-rdf-syntax-ns#type	http://dbpedia.org/class/yago/WikicatAmericanExpatriatesInTheUnitedKingdom
http://www.w3.org/2000/01/rdf-schema#label	"Jimmy Wales"@en

Installing the Apache Jena Fuseki RDF Server

I have a GitHub repository mark-watson/fuseki-semantic-web-dev-setup that you should clone:

1 git clone https://github.com/mark-watson/fuseki-semantic-web-dev-setup.git
2 cd fuseki-semantic-web-dev-setup
3 ./fuseki-server --file RDF/sample_news.nt /news

This will run the SPARQL server Fuseki locally on your laptop with the default graph “news,” and you will see output like:

 1 $ ./fuseki-server --file RDF/sample_news.nt /news
 2 [2020-11-07 09:31:13] Server     INFO  Dataset: in-memory: load file: RDF/sample_new\
 3 s.nt
 4 [2020-11-07 09:31:14] Server     INFO  Running in read-only mode for /news
 5 [2020-11-07 09:31:14] Server     INFO  Apache Jena Fuseki 3.16.0
 6 [2020-11-07 09:31:14] Config     INFO  FUSEKI_HOME=/Users/markw/GITHUB/fuseki-semant\
 7 ic-web-dev-setup/.
 8 [2020-11-07 09:31:14] Config     INFO  FUSEKI_BASE=/Users/markw/GITHUB/fuseki-semant\
 9 ic-web-dev-setup/run
10 [2020-11-07 09:31:14] Config     INFO  Shiro file: file:///Users/markw/GITHUB/fuseki\
11 -semantic-web-dev-setup/run/shiro.ini
12 [2020-11-07 09:31:15] Server     INFO  Dataset Type: in-memory, with files loaded
13 [2020-11-07 09:31:15] Server     INFO  Path = /news
14 [2020-11-07 09:31:15] Server     INFO  System
15 [2020-11-07 09:31:15] Server     INFO    Memory: 4.0 GiB
16 [2020-11-07 09:31:15] Server     INFO    Java:   14.0.1
17 [2020-11-07 09:31:15] Server     INFO    OS:     Mac OS X 10.15.7 x86_64
18 [2020-11-07 09:31:15] Server     INFO    PID:    3855
19 [2020-11-07 09:31:15] Server     INFO  Started 2020/11/07 09:31:15 MST on port 3030

You can access a web interface for SPARQL queries at http://localhost:3030 or http://127.0.0.1:3030.

Common Lisp Client Examples for the Apache Jena Fuseki RDF Server

Later in the chapter “Knowledge Graph Navigator” we will develop a simple Common Lisp SPARQL query library and use it for querying DBPedia. Here we will use it to query our local Fuseki server.

 1 $ sbcl
 2 This is SBCL 2.0.7, an implementation of ANSI Common Lisp.
 3 More information about SBCL is available at <http://www.sbcl.org/>.
 4 
 5 SBCL is free software, provided as is, with absolutely no warranty.
 6 It is mostly in the public domain; some portions are provided under
 7 BSD-style licenses.  See the CREDITS and COPYING files in the
 8 distribution for more information.
 9 
10 #P"/Users/markw/quicklisp/setup.lisp" 
11 "starting up quicklisp" 
12 * (quicklisp:quickload "sparql")
13 To load "sparql":
14   Load 1 ASDF system:
15     sparql
16 ; Loading "sparql"
17 ...............
18 ("sparql")
19 * (sparql::fuseki "select ?s ?p ?o { ?s ?p ?o } limit 20")
20 (((:s "http://kbsportal.com/trout_season/")
21   (:p "http://knowledgebooks.com/ontology/#storyType")
22   (:o "http://knowledgebooks.com/ontology/#recreation"))
23  ((:s "http://kbsportal.com/trout_season/")
24   (:p "http://knowledgebooks.com/ontology/#storyType")
25   (:o "http://knowledgebooks.com/ontology/#sports"))
26  ((:s "http://kbsportal.com/bear_mountain_fire/")
27   (:p "http://knowledgebooks.com/ontology/#storyType")
28   (:o "http://knowledgebooks.com/ontology/#disaster"))
29  ((:s "http://kbsportal.com/bear_mountain_fire/")
30   (:p "http://knowledgebooks.com/ontology/#summary")
31   (:o "The fire on Bear Mountain was caused by lightening"))
32  ((:s "http://kbsportal.com/jc_basketball/")
33   (:p "http://knowledgebooks.com/ontology/#storyType")
34   (:o "http://knowledgebooks.com/ontology/#sports"))
35  ((:s "http://kbsportal.com/oak_creek_flooding/")
36   (:p "http://knowledgebooks.com/ontology/#storyType")
37   (:o "http://knowledgebooks.com/ontology/#disaster"))
38  ((:s "http://kbsportal.com/oak_creek_flooding/")
39   (:p "http://knowledgebooks.com/ontology/#summary")
40   (:o "Oak Creek flooded last week affecting 5 businesses")))

Here is an example of using the same library to query the public DBPedia SPARQL endpoint (most output is not shown):

 1 * (sparql:dbpedia "select ?s ?p { ?s ?p \"Bill Gates\"@en }")
 2 
 3 ("ndbpeia SPARQL:n" "select ?s ?p { ?s ?p \"Bill Gates\"@en }" "n") 
 4 (((:s "http://dbpedia.org/resource/Category:Bill_Gates")
 5   (:p "http://www.w3.org/2000/01/rdf-schema#label"))
 6  ((:s "http://www.wikidata.org/entity/Q5284")
 7   (:p "http://www.w3.org/2000/01/rdf-schema#label"))
 8  ((:s "http://dbpedia.org/resource/Bill_Gates")
 9   (:p "http://xmlns.com/foaf/0.1/name"))
10  )

The SPARQL library in the GitHub repository for this book also supports the commercial AllegroGraph and Stardog RDF servers.

Implementing a Simple RDF Datastore and Partial SPARQL Support in Common Lisp

This chapter explores a Common Lisp implementation of a basic RDF (Resource Description Framework) datastore with partial SPARQL (SPARQL Protocol and RDF Query Language) support. You can find the source code on GitHub https://github.com/mark-watson/simple_rdf_sparql.

Before we look at the implementation let’s look at code for an example use case and run it:

 1 (defun test ()
 2   (setf *rdf-store* nil)
 3 
 4   (add-triple "John" "age" "30")
 5   (add-triple "John" "likes" "pizza")
 6   (add-triple "Mary" "age" "25")
 7   (add-triple "Mary" "likes" "sushi")
 8   (add-triple "Bob" "age" "35")
 9   (add-triple "Bob" "likes" "burger")
10 
11   (print-all-triples)
12 
13   (defun print-query-results (query-string)
14     (format t "Query: ~A~%" query-string)
15     (let ((results (execute-sparql-query query-string)))
16       (format t "Final Results:~%")
17       (if results
18           (dolist (result results)
19             (format t "  ~{~A: ~A~^, ~}~%"
20                     (loop for (var . value) in result
21                           collect var collect value)))
22           (format t "  No results~%"))
23       (format t "~%")))
24 
25   (print-query-results "select * where { ?name age ?age . ?name likes ?food }")
26   (print-query-results "select ?s ?o where { ?s likes ?o }")
27   (print-query-results "select * where { ?name age ?age . ?name likes pizza }"))

Assuming that you cloned the repo into ~/quicklisp/local-projects, the library simple_rdf_sparql is available, for example:

 1  $ sbcl
 2 * (ql:quickload :simple_rdf_sparql)
 3 To load "simple_rdf_sparql":
 4   Load 1 ASDF system:
 5     simple_rdf_sparql
 6 ; Loading "simple_rdf_sparql"
 7 [package simple_rdf_sparql]..
 8 (:simple_rdf_sparql)
 9 * (simple_rdf_sparql:test)
10 All triples in the datastore:
11 Bob likes burger
12 Bob age 35
13 Mary likes sushi
14 Mary age 25
15 John likes pizza
16 John age 30
17 
18 Query: select * where { ?name age ?age . ?name likes ?food }
19 Final Results:
20   ?age: 35, ?food: burger, ?name: Bob
21   ?age: 25, ?food: sushi, ?name: Mary
22   ?age: 30, ?food: pizza, ?name: John
23 
24 Query: select ?s ?o where { ?s likes ?o }
25 Final Results:
26   ?s: Bob, ?o: burger
27   ?s: Mary, ?o: sushi
28   ?s: John, ?o: pizza
29 
30 Query: select * where { ?name age ?age . ?name likes pizza }
31 Final Results:
32   ?age: 30, ?name: John
33 
34 nil
35 * 

The GitHub repository https://github.com/mark-watson/simple_rdf_sparql contains two project files package.lisp and simple_rdf_sparql.asd that I won’t list here.

We’ll now break down the implementation code into several key components and explain each part step by step.

1. RDF Triple Structure

The foundation of our RDF datastore is the triple structure:

1 (defstruct triple
2   subject
3   predicate
4   object)

This structure represents the basic unit of data in RDF, consisting of a subject, predicate, and object. Each triple acts as a simple statement of fact, linking a subject to an object through a predicate. For example, consider the statement: “Mark authored the book.” Here, “Mark” is the subject, “authored” is the predicate, and “book” is the object.

In RDF, triples allow us to represent complex relationships between entities in a highly structured manner, which can then be queried and analyzed. The subject typically represents the entity or resource being described, while the predicate denotes the type of relationship, and the object identifies the value or entity linked to the subject. These three components work together to form a directed graph, making the RDF model highly expressive for semantic web and linked data applications.
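
As a quick sketch, here is how the example statement above could be represented using the constructor and accessor functions that defstruct generates automatically:

(let ((tr (make-triple :subject "Mark"
                       :predicate "authored"
                       :object "book")))
  ;; defstruct generates triple-subject, triple-predicate, and triple-object
  (list (triple-subject tr) (triple-predicate tr) (triple-object tr)))
;; => ("Mark" "authored" "book")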

2. RDF Datastore

The datastore itself is a simple global variable:

1 (defvar *rdf-store* nil)

This variable will hold a list of triples.

3. Basic Datastore Operations

Triples are stored in memory and two utility functions support creating new triples and deleting triples.

Adding Triples

1 (defun add-triple (subject predicate object)
2   (push (make-triple :subject subject
3                      :predicate predicate
4                      :object object)
5         *rdf-store*))

This function creates a new triple and adds it to the datastore.

Removing Triples

1 (defun remove-triple (subject predicate object)
2   (setf *rdf-store*
3         (remove-if (lambda (triple)
4                      (and (equal (triple-subject triple) subject)
5                           (equal (triple-predicate triple) predicate)
6                           (equal (triple-object triple) object)))
7                    *rdf-store*)))

This function removes a specific triple from the datastore.
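
For example, a minimal sketch using the same test data we saw earlier:

(add-triple "John" "likes" "pizza")
(remove-triple "John" "likes" "pizza")
;; *rdf-store* no longer contains the ("John" "likes" "pizza") triple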

4. Query Support

Identifying Variables

1 (defun variable-p (str)
2   (and (stringp str) (> (length str) 0) (char= (char str 0) #\?)))

This helper function identifies if a string represents a variable (starting with ‘?’).
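
For example, a quick REPL check (not from the book’s test code):

(variable-p "?name") ; => T   (starts with a question mark)
(variable-p "age")   ; => NIL
(variable-p "")      ; => NIL (empty strings are handled safely)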

Converting Triples to Bindings

The function triple-to-binding (triple &optional pattern) is designed to convert a given RDF triple into a set of variable bindings based on a specified pattern. This is a crucial step in evaluating SPARQL queries like:

1 select * where { ?name age ?age . ?name likes pizza }

In this context, the function matches variables in the SPARQL pattern against actual values in the datastore’s triples. Let’s break down what the function does:

Function parameters:

  • triple: This is the triple we want to match against. In RDF terms, a triple has three components: subject, predicate, and object.
  • pattern (optional): The pattern here is a list of three elements, potentially containing variables (e.g., ?name, ?age).

In SPARQL a variable is written with a leading ? character; in this implementation pattern elements are strings, and the function variable-p checks whether an element in the pattern is a variable.

Breakdown of the Function Logic

The function starts by setting up an empty list called binding, which will store any matched variable-value pairs:

1 (defun triple-to-binding (triple &optional pattern)
2   (let ((binding nil))

Subject Matching in function triple-to-binding:

  • The first when clause checks if the first element of the pattern is a variable ((variable-p (first pattern))).
  • If true, it pairs the first variable in the pattern (?name in the example query) with the subject of the triple ((triple-subject triple)).
1   (when (and pattern (variable-p (first pattern)))
2     (push (cons (first pattern)
3                 (triple-subject triple))
4           binding))

This might generate a binding like (?name . "Alice") if Alice is the subject in a matching triple.

Predicate Matching:

  • Similarly, the second when clause in function triple-to-binding checks if the predicate in the pattern is a variable.
  • If so, it pairs the second element of the pattern with the predicate value in the triple.
1   (when (and pattern (variable-p (second pattern)))
2     (push (cons (second pattern)
3                 (triple-predicate triple))
4           binding))

This will match patterns like ?relation in ?name ?relation ?age.

Object Matching:

  • The final when clause in function triple-to-binding checks if the object in the pattern is a variable.
  • If so, it pairs the third element of the pattern with the object value in the triple.
1   (when (and pattern (variable-p (third pattern)))
2     (push (cons (third pattern)
3                 (triple-object triple))
4           binding))

For example, this would produce a binding like (?age . 25) if a triple has 25 as the object and the third element of the pattern was ?age.

Finally, the function returns the value of binding: after processing the pattern, the generated bindings are returned as a list of (variable . value) pairs:

1   binding))

This function converts a triple to a binding, matching variables in the pattern to values in the triple.

If we call triple-to-binding with a triple whose subject is "Alice", predicate is "age", and object is 25, along with the pattern ("?name" "age" "?age"), the function performs the following steps:

It checks whether ?name is a variable (using function variable-p); since it is, it creates a binding for ?name:

1 (?name . "Alice")

and it similarly binds ?age:

1 (?age . 25)
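
Putting these steps together, here is a hypothetical REPL call; the ordering of the returned pairs follows from the use of push, which prepends each new binding:

(triple-to-binding
  (make-triple :subject "Alice" :predicate "age" :object 25)
  '("?name" "age" "?age"))
;; => (("?age" . 25) ("?name" . "Alice"))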

How This Fits into SPARQL Processing

In the larger scope of evaluating a SPARQL query:

  • Pattern Matching: triple-to-binding is applied to match patterns like { ?name age ?age } against all triples in the datastore.
  • Generating Bindings: Each matching triple generates a set of bindings for the variables in the query.
  • Query Evaluation: These bindings are used to check additional conditions (like { ?name likes pizza }) and produce the final results for the select * query.

Querying Triples

The query-triples function performs a search over an RDF datastore (rdf-store) to find triples that match the given subject, predicate, and object parameters. It iterates through all triples stored in the datastore and uses function remove-if-not to filter out those that do not match the specified criteria. For each triple, the function query-triples evaluates whether its subject, predicate, and object match the respective parameters provided in the function call.

Specifically, function query-triples checks each parameter using a combination of and and or conditions: if a parameter is nil, it is treated as a wildcard and will match any value; if the parameter is a variable (identified by the variable-p function), it is considered to match any value as well; otherwise, the parameter must be equal to the corresponding component of the triple (checked using equal). If a triple satisfies all three conditions, either matching the specified values or variables, it is kept in the result set; otherwise, it is filtered out. The function ultimately returns a list of triples from rdf-store that match the given subject, predicate, and object pattern, enabling flexible querying capabilities over the RDF datastore:

 1 (defun query-triples (subject predicate object)
 2   (remove-if-not
 3     (lambda (triple)
 4        (and (or (null subject)
 5                 (variable-p subject)
 6                 (equal (triple-subject triple) subject))
 7             (or (null predicate)
 8                 (variable-p predicate)
 9                 (equal (triple-predicate triple) predicate))
10             (or (null object)
11                 (variable-p object)
12                 (equal (triple-object triple) object))))
13      *rdf-store*))

This function queries the datastore for triples matching the given pattern.
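
For example, after running the add-triple calls from the test function shown earlier, these hypothetical calls illustrate the wildcard behavior:

(query-triples nil "likes" nil)   ; every triple whose predicate is "likes"
(query-triples "John" nil nil)    ; every triple whose subject is "John"
(query-triples "?s" "likes" "?o") ; variables also act as wildcards here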

5. SPARQL Query Structure

1 (defstruct sparql-query
2   select-vars
3   where-patterns)

This structure represents a simplified SPARQL query with select variables and where patterns. Please note that this defstruct creates the function make-sparql-query that we use in the next section.

6. SPARQL Query Parsing

The parse-sparql-query function takes a SPARQL query string as input and processes it to extract its main components: the variables to be selected and the patterns in the WHERE clause. It starts by splitting the query string into individual tokens and removes curly braces ({ and }), which are not needed for the parsing. It then identifies the positions of the “select” and “where” keywords within the tokenized list using position. Using these indices, the function extracts the variables listed after the “select” keyword up to the “where” keyword, storing them in the select-vars variable. Next, it extracts the tokens following the “where” keyword, storing them as where-clause. This where-clause is further processed using a helper function, parse-where-patterns, which converts the clause into a structured list of patterns that represent the triple patterns in the query. Finally, the function returns a new sparql-query object (constructed using make-sparql-query) that encapsulates both the extracted variables and parsed patterns, making it suitable for further evaluation against an RDF datastore:

 1 (defun parse-sparql-query (query-string)
 2   (let* ((tokens
 3           (remove-if
 4             (lambda (token)
 5               (member token '("{" "}") :test #'string=))
 6             (split-string query-string)))
 7          (select-index (position "select" tokens :test #'string-equal))
 8          (where-index (position "where" tokens :test #'string-equal))
 9          (select-vars (subseq tokens (1+ select-index) where-index))
10          (where-clause (subseq tokens (1+ where-index)))
11          (where-patterns (parse-where-patterns where-clause)))
12     (make-sparql-query :select-vars select-vars
13                        :where-patterns where-patterns)))

This function parses a SPARQL query string into a structured representation.
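
Here is a hypothetical REPL sketch, assuming the helper parse-where-patterns (not listed here) groups the WHERE tokens into three-element patterns:

(let ((q (parse-sparql-query "select ?s ?o where { ?s likes ?o }")))
  (list (sparql-query-select-vars q)      ; => ("?s" "?o")
        (sparql-query-where-patterns q))) ; => (("?s" "likes" "?o"))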

7. Query Execution

The execute-where-patterns function recursively evaluates a list of SPARQL WHERE patterns against an RDF datastore, returning a list of variable bindings that satisfy the entire pattern sequence. If the patterns list is empty, it returns a list containing an empty binding ((list nil)), indicating that no more patterns need to be matched. Otherwise, it starts by extracting the first pattern and the remaining patterns, and uses query-triples to find triples in the RDF datastore that match the first pattern. It then generates a set of initial bindings for each matching triple using the triple-to-binding function. If there are no more patterns left to evaluate, it returns these bindings as the final results. If there are remaining patterns, it recursively processes the rest of the patterns by invoking execute-where-patterns-with-bindings, using each binding to constrain the search for subsequent patterns. For each binding in the current level, it combines the results of the recursive call using mapcan and merges the bindings (merge-bindings) to form a unified set of results that meet all the patterns in sequence, thereby producing a comprehensive solution for the entire WHERE clause:

 1 (defun execute-where-patterns (patterns)
 2   (if (null patterns)
 3       (list nil)
 4       (let* ((pattern (first patterns))
 5              (remaining-patterns (rest patterns))
 6              (matching-triples (apply #'query-triples pattern)))
 7         (let ((bindings
 8                (mapcar
 9                  (lambda (triple)
10                    (triple-to-binding triple pattern))
11                  matching-triples)))
12           (if (null remaining-patterns)
13               bindings
14               (mapcan
15                 (lambda (binding)
16                   (let ((results
17                          (execute-where-patterns-with-bindings
18                             remaining-patterns binding)))
19                     (mapcar (lambda (result)
20                               (merge-bindings binding result))
21                             results)))
22                       bindings))))))

This function executes the WHERE patterns of a SPARQL query, finding matching triples and generating bindings.

8. Result Projection

The project-results function processes a list of query results to filter or project only the variables specified in the select-vars list. If select-vars is set to “*”, indicating that all variables should be included, it simply returns the results after removing any duplicate bindings. Otherwise, it iterates over each result, extracts only the variables listed in select-vars, and constructs a new binding list for each result that includes these selected variables and their corresponding values. Afterward, it removes any duplicate bindings in the filtered results and returns the final projected set, ensuring that only the desired variables are included in the output:

1 (defun project-results (results select-vars)
2   (if (equal select-vars '("*"))
3       (mapcar #'remove-duplicate-bindings results)
4       (mapcar (lambda (result)
5                 (remove-duplicate-bindings
6                  (mapcar (lambda (var)
7                            (cons var (cdr (assoc var result :test #'string=))))
8                          select-vars)))
9               results)))

This function projects the query results based on the SELECT variables.
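
As a sketch, assuming remove-duplicate-bindings (not listed here) simply removes repeated (variable . value) pairs, projecting a single result onto one variable looks like:

(project-results '((("?s" . "John") ("?o" . "pizza")))
                 '("?o"))
;; => ((("?o" . "pizza")))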

9. Main Query Execution

The execute-sparql-query function orchestrates the entire process of executing a SPARQL query by integrating various helper functions to parse, evaluate, and project the query results. It starts by parsing the input query string using parse-sparql-query, extracting the WHERE patterns and SELECT variables from the parsed query structure. It then uses execute-where-patterns to evaluate the WHERE clause against the RDF datastore, generating a list of variable bindings that satisfy the given patterns. After obtaining these intermediate results, it applies project-results to filter and project only the variables specified in the SELECT clause, resulting in the final output. This function effectively takes a complete SPARQL query, processes it step-by-step, and returns the matching results in the desired format:

1 (defun execute-sparql-query (query-string)
2   (let* ((query (parse-sparql-query query-string))
3          (where-patterns (sparql-query-where-patterns query))
4          (select-vars (sparql-query-select-vars query))
5          (results (execute-where-patterns where-patterns))
6          (projected-results (project-results results select-vars)))
7     projected-results))

This function ties everything together, executing a SPARQL query from start to finish.

Conclusion

This implementation provides a basic framework for an RDF datastore with partial SPARQL support in Common Lisp. While it lacks many features of a full-fledged RDF database and SPARQL engine, it demonstrates the core concepts and can serve as a starting point for more complex implementations. The code showcases Common Lisp’s strengths in list processing and symbolic computation, making it well-suited for working with semantic data structures like RDF.

Automatically Generating Data for Knowledge Graphs

We develop a complete application in this chapter. The Knowledge Graph Creator (KGCreator) is a tool for automating the generation of data for Knowledge Graphs from raw text data. We will see how to create a single standalone executable file using SBCL Common Lisp. The application can also be run during development from a repl. This application also implements a web application interface. In addition to the KGCreator application, we will close the chapter with a utility library that processes a file of RDF in N-Triple format and generates an extension file with triples pulled from DBPedia defining URIs found in the input data file.

KGCreator generates data in two formats:

  • Neo4j graph database format (text format)
  • RDF triples suitable for loading into any linked data/semantic web data store.

This example application works by identifying entities in text. Example entity types are people, companies, country names, city names, broadcast network names, political party names, and university names. We saw code for detecting entities in the earlier chapter on natural language processing (NLP) and we will reuse it here. Later we will discuss three strategies for reusing code from different projects.

When I originally wrote KGCreator I intended to develop a commercial product. I wrote two research prototypes, one in Common Lisp (the example in this chapter) and one in Haskell (which I also use as an example in my book Haskell Tutorial and Cookbook). I decided to open source both versions of KGCreator and if you work with Knowledge Graphs I hope you find KGCreator useful in your work.

The following figure shows part of a Neo4j Knowledge Graph created with the example code. This graph has shortened labels in displayed nodes but Neo4j offers a web browser-based console that lets you interactively explore Knowledge Graphs. We don’t cover setting up Neo4j here so please use the Neo4j documentation. As an introduction to RDF data, the semantic web, and linked data you can get free copies of my two books Practical Semantic Web and Linked Data Applications, Common Lisp Edition and Practical Semantic Web and Linked Data Applications, Java, Scala, Clojure, and JRuby Edition.

Part of a Knowledge Graph shown in the Neo4j web application console

Here is a detail view:

Detail of the Neo4j console

Implementation Notes

As seen in the file src/kgcreator/package.lisp, this application uses several other packages:

1 (defpackage #:kgcreator
2   (:use #:cl
3         #:entities_dbpedia #:categorize_summarize #:myutils
4         #:cl-who #:hunchentoot #:parenscript)
5   (:export kgcreator))

The packages shown on line 3 were implemented in previous chapters. The package myutils contains mostly miscellaneous string utilities that we won’t look at here; I leave it to you to read the source code.

As seen in the configuration file src/kgcreator/kgcreator.asd we split the implementation of the application into four source files:

 1 ;;;; kgcreator.asd
 2 
 3 (asdf:defsystem #:kgcreator
 4   :description "KGCreator"
 5   :author "Mark Watson <mark.watson@gmail.com>"
 6   :license "AGPL version 3"
 7   :depends-on (#:entities_dbpedia #:categorize_summarize
 8                #:myutils #:unix-opts #:cl-who
 9                #:hunchentoot #:parenscript)
10   :components
11     ((:file "package")
12                 (:file "kgcreator")
13                 (:file "neo4j")
14                 (:file "rdf")
15                 (:file "web"))
16                  )

The application is separated into four source files:

  • kgcreator.lisp: top level APIs and functionality. Uses the code in neo4j.lisp and rdf.lisp. Later we will generate a standalone application that uses these top level APIs.
  • neo4j.lisp: generates Cypher text files that can be imported into Neo4j.
  • rdf.lisp: generates RDF text data that can be loaded or imported into RDF data stores.
  • web.lisp: a simple web application for running KGCreator.

Generating RDF Data

I leave it to you to find a tutorial on RDF data on the web, or you can get a PDF of my book “Practical Semantic Web and Linked Data Applications, Common Lisp Edition” and read the tutorial sections on RDF.

RDF data is comprised of triples, where each triple has a subject, a predicate, and an object. Subjects are URIs, predicates are usually URIs, and objects are either literal values or URIs. Here are two triples written by this example application:

<http://dbpedia.org/resource/The_Wall_Street_Journal> 
  <http://knowledgebooks.com/schema/aboutCompanyName> 
  "Wall Street Journal" .
<https://newsshop.com/june/z902.html>
  <http://knowledgebooks.com/schema/containsCountryDbPediaLink>
  <http://dbpedia.org/resource/Canada> .

The following listing of the file src/kgcreator/rdf.lisp generates RDF data:

 1 (in-package #:kgcreator)
 2 
 3 (let ((*rdf-nodes-hash*))
 4   
 5   (defun rdf-from-files (output-file-path text-and-meta-pairs)
 6     (setf *rdf-nodes-hash* (make-hash-table :test #'equal :size 200))
 7     (print (list "==> rdf-from-files" output-file-path text-and-meta-pairs ))
 8     (with-open-file
 9         (str output-file-path
10              :direction :output
11              :if-exists :supersede
12              :if-does-not-exist :create)
13 
14       (defun rdf-from-files-handle-single-file (text-input-file meta-input-file)
15         (let* ((text (file-to-string text-input-file))
16                (words (myutils:words-from-string text))
17                (meta (file-to-string meta-input-file)))
18                          
19           (defun generate-original-doc-node-rdf ()
20             (let ((node-name (node-name-from-uri meta)))
21               (if (null (gethash node-name *rdf-nodes-hash*))
22                   (let* ((cats (categorize words))
23                          (sum (summarize words cats)))
24 		    (print (list "$$$$$$  cats:" cats))
25                     (setf (gethash node-name *rdf-nodes-hash*) t)
26 		    (format str (concatenate 'string "<" meta
27 					     "> <http://knowledgebooks.com/schema/summary> \""
28 					     sum "\" . ~%"))
29                     (dolist (cat cats)
30                       (let ((hash-check (concatenate 'string node-name (car cat))))
31                         (if (null (gethash hash-check *rdf-nodes-hash*))
32                             (let ()
33                               (setf (gethash hash-check *rdf-nodes-hash*) t)
34 			      (format str
35 				      (concatenate 'string "<" meta
36 						   "> <http://knowledgebooks.com/schema/"
37 						   "topicCategory> "
38 						   "<http://knowledgebooks.com/schema/"
39 						   (car cat) "> . ~%"))))))))))
40           
41           (defun generate-dbpedia-contains-rdf (key value)
42             (generate-original-doc-node-rdf)
43             (let ((relation-name (concatenate 'string key "DbPediaLink")))
44               (dolist (entity-pair value)
45                 (let* ((node-name (node-name-from-uri meta))
46                        (object-node-name (node-name-from-uri (cadr entity-pair)))
47                        (hash-check (concatenate 'string node-name object-node-name)))
48                   (if (null (gethash hash-check *rdf-nodes-hash*))
49                       (let ()
50                         (setf (gethash hash-check *rdf-nodes-hash*) t)
51 			(format str (concatenate 'string "<" meta
52 					       "> <http://knowledgebooks.com/schema/contains/"
53 					       key "> " (cadr entity-pair) " .~%"))))))))))
54 
55       
56       ;; start code for rdf-from-files (output-file-path text-and-meta-pairs)
57       (dolist (pair text-and-meta-pairs)
58         (rdf-from-files-handle-single-file (car pair) (cadr pair))
59         (let ((h (entities_dbpedia:find-entities-in-text (file-to-string (car pair))\
60 )))
61           (entities_dbpedia:entity-iterator #'generate-dbpedia-contains-rdf h))))))
62 
63 
64 (defvar test_files '((#P"~/GITHUB/common-lisp/kgcreator/test_data/test3.txt"
65                       #P"~/GITHUB/common-lisp/kgcreator/test_data/test3.meta")))
66 (defvar test_filesZZZ '((#P"~/GITHUB/common-lisp/kgcreator/test_data/test3.txt"
67                          #P"~/GITHUB/common-lisp/kgcreator/test_data/test3.meta")
68                         (#P"~/GITHUB/common-lisp/kgcreator/test_data/test2.txt"
69                          #P"~/GITHUB/common-lisp/kgcreator/test_data/test2.meta")
70                         (#P"~/GITHUB/common-lisp/kgcreator/test_data/test1.txt"
71                          #P"~/GITHUB/common-lisp/kgcreator/test_data/test1.meta")))
72 
73 (defun test3a ()
74   (rdf-from-files "out.rdf" test_files))

You can load all of KGCreator but just execute the test function at the end of this file using:

(ql:quickload "kgcreator")
(in-package #:kgcreator)
(kgcreator:test3a)

This code works on a list of paired files for text data and the meta data for each text file. As an example, if there is an input text file test123.txt then there would be a matching meta file test123.meta that contains the source of the data in the file test123.txt. This data source will be a URI on the web or a local file URI. The top level function rdf-from-files takes an output file path for writing the generated RDF data and a list of pairs of text and meta file paths.
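
For example, using the hypothetical file names from the paragraph above (the .meta file would contain a single line with the source URI), a call might look like this:

(rdf-from-files "out.rdf"
                '((#P"test_data/test123.txt" #P"test_data/test123.meta")))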

The variable *rdf-nodes-hash* (closed over by the let form at the top of the file) is used to remember the nodes in the RDF graph as it is generated. Please note that the function rdf-from-files is not re-entrant: it uses this shared *rdf-nodes-hash* binding, so it will not work to execute rdf-from-files simultaneously in multiple threads of execution.

The function rdf-from-files and its nested functions are straightforward. I left a few debug printout statements in the code; when you run the test code at the bottom of the file, it should be clear what rdf.lisp is doing.

Generating Data for the Neo4j Graph Database

Now we will generate Neo4j Cypher data. In order to keep the implementation simple, both the RDF and Cypher generation code start with raw text and perform the NLP analysis to find entities. This example could be refactored to perform the NLP analysis just once, but in practice you will likely be working with either RDF or Neo4j, so you will probably extract just the code you need from this example (i.e., either the RDF or the Cypher generation code).

Before we look at the code, let’s start with a few lines of generated Neo4J Cypher import data:

CREATE (newsshop_com_june_z902_html_news)-[:ContainsCompanyDbPediaLink]->(Wall_Stree\
t_Journal)
CREATE (Canada:Entity {name:"Canada", uri:"<http://dbpedia.org/resource/Canada>"})
CREATE (newsshop_com_june_z902_html_news)-[:ContainsCountryDbPediaLink]->(Canada)
CREATE (summary_of_abcnews_go_com_US_violent_long_lasting_tornadoes_threaten_oklahom\
a_texas_storyid63146361:Summary {name:"summary_of_abcnews_go_com_US_violent_long_las\
ting_tornadoes_threaten_oklahoma_texas_storyid63146361", uri:"<https://abcnews.go.co\
m/US/violent-long-lasting-tornadoes-threaten-oklahoma-texas/story?id=63146361>", sum\
mary:"Part of the system that delivered severe weather to the central U.S. over the \
weekend is moving into the Northeast today, producing strong to severe storms -- dam\
aging winds, hail or isolated tornadoes can't be ruled out. Severe weather is foreca\
st to continue on Tuesday, with the western storm moving east into the Midwest and p\
arts of the mid-Mississippi Valley."})

The following listing of file src/kgcreator/neo4j.lisp is similar to the code that generated RDF in the last section:

 1 (in-package #:kgcreator)
 2 
 3 (let ((*entity-nodes-hash*))
 4   
 5   (defun cypher-from-files (output-file-path text-and-meta-pairs)
 6     (setf *entity-nodes-hash* (make-hash-table :test #'equal :size 200))
 7     ;;(print (list "==> cypher-from-files"output-file-path text-and-meta-pairs ))
 8     (with-open-file
 9 	(str output-file-path
10 	     :direction :output
11 	     :if-exists :supersede 
12 	     :if-does-not-exist :create)
13 
14       (defun generateNeo4jCategoryNodes ()
15 	(let* ((names categorize_summarize::categoryNames))
16 	  (dolist (name names)
17 	    (format str
18 		    (myutils:replace-all
19 		     (concatenate
20 		      'string "CREATE (" name ":CategoryType {name:\"" name "\"})~%")
21 		     "/" "_"))))
22 	(format str "~%"))
23       
24 
25       (defun cypher-from-files-handle-single-file (text-input-file meta-input-file)
26 	(let* ((text (file-to-string text-input-file))
27 	       (words (myutils:words-from-string text))
28 	       (meta (file-to-string meta-input-file)))
29 	       	  
30 	  (defun generate-original-doc-node ()
31 	    (let ((node-name (node-name-from-uri meta)))
32 	      (if (null (gethash node-name *entity-nodes-hash*))
33 		  (let* ((cats (categorize words))
34 			 (sum (summarize words cats)))
35 		    (setf (gethash node-name *entity-nodes-hash*) t)
36 		    (format str (concatenate 'string "CREATE (" node-name ":News {name:\""
37 					     node-name "\", uri: \"" meta
38                "\", summary: \"" sum "\"})~%"))
39 		    (dolist (cat cats)
40 		      (let ((hash-check (concatenate 'string node-name (car cat))))
41 			(if (null (gethash hash-check *entity-nodes-hash*))
42 			    (let ()
43 			      (setf (gethash hash-check *entity-nodes-hash*) t)
44 			      (format str (concatenate 'string "CREATE (" node-name
45                                      ")-[:Category]->("
46 						       (car cat) ")~%"))))))))))
47 	  
48 	  (defun generate-dbpedia-nodes (key entity-pairs)
49 	    (dolist (entity-pair entity-pairs)
50 	      (if (null (gethash (node-name-from-uri (cadr entity-pair))
51                            *entity-nodes-hash*))
52 		  (let ()
53 		    (setf (gethash (node-name-from-uri (cadr entity-pair)) *entity-nodes-hash*) t)
54 		    (format str
55 			    (concatenate 'string "CREATE (" (node-name-from-uri (cadr entity-pair)) ":"
56 					 key " {name: \"" (car entity-pair)
57 					 "\", uri: \"" (cadr entity-pair) "\"})~%"))))))
58 	  
59 	  (defun generate-dbpedia-contains-cypher (key value)
60 	    (generate-original-doc-node)
61 	    (generate-dbpedia-nodes key value)
62 	    (let ((relation-name (concatenate 'string key "DbPediaLink")))
63 	      (dolist (entity-pair value)
64 		(let* ((node-name (node-name-from-uri meta))
65 		       (object-node-name (node-name-from-uri (cadr entity-pair)))
66 		       (hash-check (concatenate 'string node-name object-node-name)))
67 		  (if (null (gethash hash-check *entity-nodes-hash*))
68 		      (let ()
69 			(setf (gethash hash-check *entity-nodes-hash*) t)
70 			(format str (concatenate 'string
71 						 "CREATE (" node-name ")-[:"
72 						 relation-name "]->(" object-node-name ")~%"))))))))))
73 
74       
75       ;; start code for cypher-from-files (output-file-path text-and-meta-pairs)
76       (generateNeo4jCategoryNodes) ;; just once, not for every input file
77       (dolist (pair text-and-meta-pairs)
78 	(cypher-from-files-handle-single-file (car pair) (cadr pair))
79 	(let ((h (entities_dbpedia:find-entities-in-text (file-to-string (car pair)))))
80 	  (entities_dbpedia:entity-iterator #'generate-dbpedia-contains-cypher h))))))
81 
82 
83 (defvar test_files '((#P"~/GITHUB/common-lisp/kgcreator/test_data/test3.txt"
84 		      #P"~/GITHUB/common-lisp/kgcreator/test_data/test3.meta")
85 		     (#P"~/GITHUB/common-lisp/kgcreator/test_data/test2.txt"
86 		      #P"~/GITHUB/common-lisp/kgcreator/test_data/test2.meta")
87 		     (#P"~/GITHUB/common-lisp/kgcreator/test_data/test1.txt"
88 		      #P"~/GITHUB/common-lisp/kgcreator/test_data/test1.meta")))
89 
90 (defun test2a ()
91   (cypher-from-files "out.cypher" test_files))

You can load all of KGCreator but just execute the test function at the end of this file using:

(ql:quickload "kgcreator")
(in-package #:kgcreator)
(kgcreator:test2a)

Implementing the Top Level Application APIs

The code in the file src/kgcreator/kgcreator.lisp uses both rdf.lisp and neo4j.lisp that we saw in the last two sections. The function get-files-and-meta looks at the contents of an input directory to generate a list of pairs, each pair containing the path to a text file and the meta file for the corresponding text file.

We are using the opts package to parse command line arguments. This will be used when we build a single file standalone executable file for the entire KGCreator application, including the web application that we will see in a later section.

 1 ;; KGCreator main program
 2 
 3 (in-package #:kgcreator)
 4 
 5 (ensure-directories-exist "temp/")
 6 
 7 (defun get-files-and-meta (fpath)
 8   (let ((data (directory (concatenate 'string fpath "/" "*.txt")))
 9 	      (meta (directory (concatenate 'string fpath "/" "*.meta"))))
10     (if (not (equal (length data) (length meta)))
11 	(let ()
12 	  (princ "Error: must be matching *.meta files for each *.txt file")
13 	  (terpri)
14 	  '())
15 	(let ((ret '()))
16 	  (dotimes (i (length data))
17 	    (setq ret (cons (list (nth i data) (nth i meta)) ret)))
18 	  ret))))
19 
20 (opts:define-opts
21     (:name :help
22 	   :description
23      "KGcreator command line example: ./KGcreator -i test_data -r out.rdf -c out.cyp\
24 er"
25 	   :short #\h
26 	   :long "help")
27     (:name :rdf
28 	   :description "RDF output file name"
29 	   :short #\r
30 	   :long "rdf"
31 	   :arg-parser #'identity) ;; <- takes an argument
32 
33   (:name :cypher
34 	 :description "Cypher output file name"
35 	 :short #\c
36 	 :long "cypher"
37 	 :arg-parser #'identity) ;; <- takes an argument
38   (:name :inputdir
39 	 :description "Input directory path"
40 	 :short #\i
41 	 :long "inputdir"
42 	 :arg-parser #'identity)) ;; <- takes an argument
43 
44 
45 (defun kgcreator () ;; don't need: &aux args sb-ext:*posix-argv*)
46   (handler-case
47       (let* ((opts (opts:get-opts))
48 	     (input-path
49 	      (if (find :inputdir opts)
50 		  (nth (1+ (position :inputdir opts)) opts)))
51 	     (rdf-output-path
52 	      (if (find :rdf opts)
53 		  (nth (1+ (position :rdf opts)) opts)))
54 	     (cypher-output-path
55 	      (if (find :cypher opts)
56 		  (nth (1+ (position :cypher opts)) opts))))
57 	(format t "input-path: ~a  rdf-output-path: ~a cypher-output-path:~a~%"
58 		input-path rdf-output-path cypher-output-path)
59 	(if (not input-path)
60 	    (format t "You must specify an input path.~%")
61 	    (locally
62 		(declare #+sbcl(sb-ext:muffle-conditions sb-kernel:redefinition-warning))
63 	      (handler-bind
64 		  (#+sbcl(sb-kernel:redefinition-warning #'muffle-warning))
65 		;; stuff that emits redefinition-warning's
66 		(let ()
67 		  (if rdf-output-path
68 		      (rdf-from-files rdf-output-path (get-files-and-meta input-path)))
69 		  (if cypher-output-path
70 		      (cypher-from-files cypher-output-path (get-files-and-meta input-path))))))))
71     (t (c)
72       (format t "We caught a runtime error: ~a~%" c)
73       (values 0 c)))
74   (format t "~%Shutting down KGcreator - done processing~%~%"))
75 
76 (defun test1 ()
77     (get-files-and-meta
78      "~/GITHUB/common-lisp/kgcreator/test_data"))
79 
80 (defun print-hash-entry (key value)
81   (format t "The value associated with the key ~S is ~S~%" key value))
82 
83 (defun test2 ()
84   (let ((h (entities_dbpedia:find-entities-in-text "Bill Clinton and George Bush wen\
85 t to Mexico and England and watched Univision. They enjoyed Dakbayan sa Dabaw and sh\
86 opped at Best Buy and listened to Al Stewart. They agree on República de Nicaragua a\
87 nd support Sweden Democrats and Leicestershire Miners Association and both sent thei\
88 r kids to Darul Uloom Deoband.")))
89     (entities_dbpedia:entity-iterator #'print-hash-entry h)))
90 
91 (defun test7 ()
92   (rdf-from-files "out.rdf" (get-files-and-meta "test_data")))

You can load all of KGCreator and then execute just the three test functions at the end of this file using:

(ql:quickload "kgcreator")
(in-package #:kgcreator)
(kgcreator:test1)
(kgcreator:test2)
(kgcreator:test7)

Implementing The Web Interface

When we build a standalone single-file application for KGCreator, we include a simple web application interface that allows users to enter input text and see generated RDF and Neo4j Cypher data.

The file src/kgcreator/web.lisp uses the libraries cl-who, hunchentoot, and parenscript that we used earlier. The function write-files-run-code (lines 8-43) takes raw text and writes generated RDF and Neo4j Cypher data to local temporary files that are then read and formatted to HTML for display. The code in rdf.lisp and neo4j.lisp is file oriented, and I wrote web.lisp as an afterthought, so it was easier to write temporary files than to refactor rdf.lisp and neo4j.lisp to write to strings.

 1 (in-package #:kgcreator)
 2 
 3 (ql:quickload '(cl-who hunchentoot parenscript))
 4 
 5 
 6 (setf (html-mode) :html5)
 7 
 8 (defun write-files-run-code (a-uri raw-text)
 9   (if (< (length raw-text) 10)
10       (list "not enough text" "not enough text")
11     ;; generate random file number
12     (let* ((filenum (+ 1000 (random 5000)))
13 	   (meta-name (concatenate 'string "temp/" (write-to-string filenum) ".meta"))
14 	   (text-name (concatenate 'string "temp/" (write-to-string filenum) ".txt"))
15 	   (rdf-name (concatenate 'string "temp/" (write-to-string filenum) ".rdf"))
16 	   (cypher-name (concatenate 'string "temp/" (write-to-string filenum) ".cypher"))
17 	   ret)
18       ;; write meta file
19       (with-open-file (str meta-name
20 			   :direction :output
21 			   :if-exists :supersede
22 			   :if-does-not-exist :create)
23 		      (format str a-uri))
24       ;; write text file
25       (with-open-file (str text-name
26 			   :direction :output
27 			   :if-exists :supersede
28 			   :if-does-not-exist :create)
29 	(format str raw-text))
30       ;; generate rdf and cypher files
31       (rdf-from-files rdf-name (list (list text-name meta-name)))
32       (cypher-from-files cypher-name (list (list text-name meta-name)))
33       ;; read files and return results
34       (setf ret
35 	    (list
36 	     (replace-all
37 	      (replace-all
38 	       (uiop:read-file-string rdf-name)
39 	       ">" "&gt;")
40 	      "<" "&lt;")
41 	     (uiop:read-file-string cypher-name)))
42       (print (list "ret:" ret))
43       ret)))
44   
45 (defvar *h* (make-instance 'easy-acceptor :port 3000))
46 
47 ;; define a handler with the arbitrary name my-greetings:
48 
49 (define-easy-handler (my-greetings :uri "/") (text)
50   (setf (hunchentoot:content-type*) "text/html")
51   (let ((rdf-and-cypher (write-files-run-code "http://test.com/1" text)))
52     (print (list "*** rdf-and-cypher:" rdf-and-cypher))
53     (with-html-output-to-string
54      (*standard-output* nil :prologue t)
55      (:html
56       (:head (:title "KGCreator Demo")
57 	     (:link :rel "stylesheet" :href "styles.css" :type "text/css"))
58       (:body
59        :style "margin: 90px"
60        (:h1 "Enter plain text for the demo to create RDF and Cypher")
61        (:p "For more information on the KGCreator product please visit the web site:"
62 	   (:a :href "https://markwatson.com/products/" "Mark Watson's commercial products"\
63 ))
64        (:p "The KGCreator product is a command line tool that processes all text "
65 	   "files in a source directory and produces both RDF data triples for "
66 	   "semantic web applications and Cypher input data files for the Neo4j graph database. "
67 	   "For the purposes of this demo the URI for your input text is hardwired to "
68 	   "&lt;http://test.com/1&gt; but the KGCreator product offers flexibility "
69 		 "for assigning URIs to data sources and further, "
70 	   "creates links for relationships between input sources.")
71        (:p :style "text-align: left"
72 	   "To try the demo paste plain text into the following form that contains "
73 		 "information on companies, news, politics, famous people, broadcasting "
74 		 "networks, political parties, countries and other locations, etc. ")
75        (:p "Do not include any special characters or character sets:")
76        (:form
77 	:method :post
78 	(:textarea
79 	 :rows "20"
80 	 :cols "90"
81 	 :name "text"
82 	 :value text)
83 	(:br)
84 	(:input :type :submit :value "Submit text to process"))
85        (:h3 "RDF:")
86        (:pre (str (car rdf-and-cypher)))
87        (:h3 "Cypher:")
88        (:pre (str (cadr rdf-and-cypher))))))))
89 
90 (defun kgcweb ()
91   (hunchentoot:start *h*))

You can load all of KGCreator and start the web application using:

(ql:quickload "kgcreator")
(in-package #:kgcreator)
(kgcweb)

You can access the web app at http://localhost:3000.
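
You can also exercise the handler directly from a REPL using the drakma HTTP client (a hypothetical test call, not part of the example source; the input text is arbitrary but must be at least ten characters long, and the return value is the generated HTML page):

(ql:quickload "drakma")
(drakma:http-request
  "http://localhost:3000/"
  :method :post
  :parameters '(("text" . "Bill Gates founded Microsoft and lived near Seattle.")))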

Creating a Standalone Application Using SBCL

When I originally wrote KGCreator I intended to develop a commercial product, so it was important to be able to create standalone single-file executables. This is simple to do using SBCL:

1 $ sbcl
2 (ql:quickload "kgcreator")
3 (in-package #:kgcreator)
4 (sb-ext:save-lisp-and-die "KGcreator"
5   :toplevel #'kgcreator :executable t)

As an example, you could run the application on the command line using:

1 ./KGcreator -i test_data -r out.rdf -c out.cyper

Augmenting RDF Triples in a Knowledge Graph Using DBPedia

You can augment RDF-based Knowledge Graphs that you build with the KGcreator application by using the library in the directory kg-add-dbpedia-triples.

As seen in the kg-add-dbpedia-triples.asd and package.lisp configuration files, we use two other libraries developed in this book:

;;;; kg-add-dbpedia-triples.asd

(asdf:defsystem #:kg-add-dbpedia-triples
  :description "Add DBPedia triples from an input N-Triples RDF file"
  :author "markw@markwatson.com"
  :license "Apache 2"
  :depends-on (#:myutils #:sparql)
  :components ((:file "package")
               (:file "add-dbpedia-triples")))
;;;; package.lisp

(defpackage #:kg-add-dbpedia-triples
  (:use #:cl #:myutils #:sparql)
  (:export #:add-triples))

The library is implemented in the file kg-add-dbpedia-triples.lisp:

 1 (in-package #:kg-add-dbpedia-triples)
 2 
 3 (defun augmented-triples (a-uri ostream)
 4   (let ((results
 5          (sparql:dbpedia
 6           (format nil "construct { ~A ?p ?o } where { ~A ?p ?o } limit 5" a-uri a-ur\
 7 i))))
 8     (dolist (x results)
 9       (dolist (sop x)
10         (let ((val (second sop)))
11           (if (and
12                (stringp val)
13                (> (length val) 9)
14                (or
15                 (equal (subseq val 0 7) "http://")
16                 (equal (subseq val 0 8) "https://")))
17               (format ostream "<~A> " val)
18             (format ostream "~A " val))))
19       (format ostream " .~%"))))
20 
21 (defun add-triples (in-file-name out-file-name)
22   (let* ((nt-data (myutils:file-to-string in-file-name))
23          (tokens (myutils:tokenize-string-keep-uri nt-data))
24          (uris
25           (remove-duplicates
26            (mapcan #'(lambda (s) (if
27                                      (and
28                                       (stringp s)
29                                       (> (length s) 19)
30                                       (equal (subseq s 0 19) "<http://dbpedia.org"))
31                                      (list s)))
32                    tokens)
33            :test #'equal)))
34     (with-open-file (str out-file-name
35                          :direction :output
36                          :if-exists :supersede
37                          :if-does-not-exist :create)
38     (dolist (uri uris)
39       (augmented-triples uri str)))))

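Here is a hypothetical usage example (the file names are placeholders): add-triples reads an input N-Triples file, collects the distinct DBPedia resource URIs found in it, and writes up to five additional triples for each URI to the output file:

(ql:quickload "kg-add-dbpedia-triples")
(kg-add-dbpedia-triples:add-triples "sample-KG.nt" "sample-KG-augmented.nt")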

KGCreator Wrap Up

When developing applications or systems using Knowledge Graphs it is useful to be able to quickly generate test data, which is the primary purpose of KGCreator. A secondary use is to generate Knowledge Graphs for production use from text data sources. In this second use case you will want to manually inspect the generated data to verify its correctness and usefulness for your application.

Knowledge Graph Sampler for Creating Small Custom Knowledge Graphs

I find it convenient to be able to “sample” small parts of larger knowledge graphs. The example program in this chapter accepts a list of DBPedia entity URIs, attempts to find links between these entities, and writes these nodes and discovered edges to an RDF triples file.

The code is in the directory src/kgsampler. As seen in the configuration files kgsampler.asd and package.lisp, we will use the sparql library we developed earlier as well as the libraries uiop and drakma:

;;;; kgsampler.asd

(asdf:defsystem #:kgsampler
  :description "sample knowledge graphs"
  :author "Mark Watson markw@markwatson.com"
  :license "Apache 2"
  :depends-on (#:uiop #:drakma #:sparql)
  :components ((:file "package")
               (:file "kgsampler")))
;;;; package.lisp

(defpackage #:kgsampler
  (:use #:cl #:uiop #:sparql)
  (:export #:sample))

The program starts with a list of entities and tries to find links on DBPedia between the entities. A small sample graph of the input entities and any discovered links is written to a file. The function dbpedia-as-nt spawns a process to run the curl utility to make an HTTP request to DBPedia. The function construct-from-dbpedia takes a list of entities and writes SPARQL CONSTRUCT statements, with the entity as the subject and the object filtered to English language string values, to an output stream. The function find-relations runs in O(N^2) time where N is the number of input entities, so you should avoid using this program with a large number of input entities.

I offer this code with little explanation since much of it is similar to the techniques you saw in the previous chapter Knowledge Graph Navigator.

;; kgsampler main program

(in-package #:kgsampler)

(defun dbpedia-as-nt (query)
  (print query)
  (uiop:run-program 
   (list
    "curl" 
    (concatenate 'string
                 "https://dbpedia.org/sparql?format=text/ntriples&query="
                 ;; formats that work: csv, text/ntriples, text/ttl
                 (drakma:url-encode query :utf-8)))
   :output :string))

(defun construct-from-dbpedia (entity-uri-list &key (output-stream t))
  (dolist (entity-uri entity-uri-list)
    (format output-stream "~%~%# ENTITY NAME: ~A~%~%" entity-uri)
    (format
     output-stream
     (dbpedia-as-nt
      (format nil
              "CONSTRUCT { ~A ?p ?o } where { ~A ?p ?o  . FILTER(lang(?o) = 'en') }"\
  
              entity-uri entity-uri)))))

(defun ensure-angle-brackets (s)
  "make sure URIs have angle brackets"
  (if (equal #\< (char s 0))
      s
      (concatenate 'string "<" s ">")))

(defun find-relations (entity-uri-list &key (output-stream t))
  (dolist (entity-uri1 entity-uri-list)
    (dolist (entity-uri2 entity-uri-list)
      (if (not (equal entity-uri1 entity-uri2))
          (let ((possible-relations
                 (mapcar #'cadar
                         (sparql::dbpedia
                          (format nil 
                                  "select ?p where { ~A ?p ~A . filter(!regex(str(?p\
), \"page\", \"i\"))} limit 50"
                                  entity-uri1 entity-uri2)))))
            (print "** possible-relations:") (print possible-relations)
            (dolist (pr possible-relations)
              (format output-stream "~A ~A ~a .~%"
		      entity-uri1
		      (ensure-angle-brackets pr)
		      entity-uri2)))))))

(defun sample (entity-uri-list output-filepath)
  (with-open-file (ostream  (pathname output-filepath) :direction :output :if-exists\
 :supersede)
    (construct-from-dbpedia entity-uri-list :output-stream ostream)
    (find-relations entity-uri-list :output-stream ostream)))

Let’s start by running the two helper functions interactively so you can see their output (output edited for brevity). The top level function kgsampler:sample for this example takes a list of entity URIs and an output file name, and uses the functions construct-from-dbpedia and find-relations to write triples for the entities and then for the relationships discovered between entities. The following listing also calls the helper function kgsampler::find-relations to show you what its output looks like.

$ sbcl
* (ql:quickload "kgsampler")
*   (kgsampler::construct-from-dbpedia '("<http://dbpedia.org/resource/Bill_Gates>" \
"<http://dbpedia.org/resource/Steve_Jobs>") :output-stream nil)

"CONSTRUCT { <http://dbpedia.org/resource/Bill_Gates> ?p ?o } where { <http://dbpedi\
a.org/resource/Bill_Gates> ?p ?o  . FILTER (lang(?o) = 'en') }" 
"CONSTRUCT { <http://dbpedia.org/resource/Bill_Gates> <http://purl.org/dc/terms/subj\
ect> ?o } where { <http://dbpedia.org/resource/Bill_Gates> <http://purl.org/dc/terms\
/subject> ?o  }" 

 ...

* (kgsampler::find-relations '("<http://dbpedia.org/resource/Bill_Gates>" "<http://d\
bpedia.org/resource/Microsoft>") :output-stream nil)

("dbpedia SPARQL:"
 "select ?p where { <http://dbpedia.org/resource/Bill_Gates> ?p <http://dbpedia.org/\
resource/Microsoft> . filter(!regex(str(?p), \"page\", \"i\"))} limit 50"
 "n") 
"** possible-relations:" 
("http://dbpedia.org/ontology/knownFor") 
"http://dbpedia.org/ontology/knownFor" 
("dbpedia SPARQL:"
 "select ?p where { <http://dbpedia.org/resource/Microsoft> ?p <http://dbpedia.org/r\
esource/Bill_Gates> . filter(!regex(str(?p), \"page\", \"i\"))} limit 50"
 "n") 
"** possible-relations:" 
("http://dbpedia.org/property/founders" "http://dbpedia.org/ontology/foundedBy") 
"http://dbpedia.org/property/founders" 
"http://dbpedia.org/ontology/foundedBy" 
nil

We now use the main function to generate an output RDF triple file:

 1 $ sbcl
 2 * (ql:quickload "kgsampler")
 3 * (kgsampler:sample '("<http://dbpedia.org/resource/Bill_Gates>" "<http://dbpedia.or\
 4 g/resource/Steve_Jobs>" "<http://dbpedia.org/resource/Microsoft>")  "test.nt")
 5 "CONSTRUCT { <http://dbpedia.org/resource/Bill_Gates> ?p ?o } where { <http://dbpedi\
 6 a.org/resource/Bill_Gates> ?p ?o  . FILTER (lang(?o) = 'en') }" 
 7 ("ndbpedia SPARQL:n"
 8  "select ?p where { <http://dbpedia.org/resource/Bill_Gates> ?p <http://dbpedia.org/\
 9 resource/Microsoft> . filter(!regex(str(?p), \"page\", \"i\"))} limit 50"
10  "n") 
11 "** possible-relations:" 
12 ("http://dbpedia.org/ontology/board") 
13 ("dbpedia SPARQL:"
14  "select ?p where { <http://dbpedia.org/resource/Steve_Jobs> ?p <http://dbpedia.org/\
15 resource/Bill_Gates> . filter(!regex(str(?p), \"page\", \"i\"))} limit 50"
16  "n") 

Output RDF N-Triple data is written to the file sample-KG.nt. A very small part of this file is listed here:

 1 # ENTITY NAME: <http://dbpedia.org/resource/Bill_Gates>
 2 
 3 <http://dbpedia.org/resource/Bill_Gates>        <http://dbpedia.org/ontology/abstrac\
 4 t>  "William Henry \"Bill\" Gates III (born October 28, 1955) is an American busines\
 5 s magnate,...."@en .
 6 <http://dbpedia.org/resource/Bill_Gates>
 7         <http://xmlns.com/foaf/0.1/name>
 8         "Bill Gates"@en .
 9 <http://dbpedia.org/resource/Bill_Gates>
10         <http://xmlns.com/foaf/0.1/surname>
11         "Gates"@en .
12 <http://dbpedia.org/resource/Bill_Gates>
13         <http://dbpedia.org/ontology/title>
14         "Co-Chairmanof theBill & Melinda Gates Foundation"@en .

The same data in Turtle RDF format can be seen in the file sample-KG.ttl, which was produced by importing the triples file into the free edition of GraphDB and then exporting it to Turtle, a format I find easier to read. GraphDB has visualization tools which I use here to generate an interactive graph display of this data:

GraphDB Visual graph of generated RDF triples

This example is also set up for people and companies. I may expand it in the future to other types of entities as I need them.

This example program takes several minutes to run since many SPARQL queries are made to DBPedia. I am a non-corporate member of the DBPedia organization. Here is a membership application if you are interested in joining me there.

Knowledge Graph Navigator Common Library Implementation

The Knowledge Graph Navigator (which I will often refer to as KGN) is a tool for processing a set of entity names and automatically exploring the public Knowledge Graph DBPedia using SPARQL queries. I started to write KGN for my own use, to automate some things I used to do manually when exploring Knowledge Graphs, and later thought that KGN might also be useful for educational purposes. KGN shows the user the auto-generated SPARQL queries, so hopefully the user will learn by seeing examples. KGN uses NLP code developed in earlier chapters and we will reuse that code here with a short review of the APIs.

In previous versions of this book, this example was hard-wired to use LispWorks CAPI for the user interface. This old version is in src/kgn in the main GitHub repository for this book (https://github.com/mark-watson/loving-common-lisp) and has a few UI components, like a progress bar, that I removed since the previous edition. The new version has separate GitHub repositories for:

  • kgn-common - the common library developed in this chapter
  • kgn-text-ui - a text-based (console) user interface
  • kgn-capi-ui - a LispWorks CAPI user interface

If you followed the code example setup instructions in the book Preface or in the README file in the main repo https://github.com/mark-watson/loving-common-lisp, then all three of these projects are available for loading via Quicklisp on your computer.

After looking at the SPARQL generated for an example query, we will start a process of bottom-up development, first writing low-level functions to automate SPARQL queries and then writing utilities we will need for the UIs developed in later chapters.

Since the DBPedia SPARQL queries are time consuming, we will also implement a caching layer using SQLite that will make the app more responsive. The cache is especially helpful during development when the same queries are repeatedly used for testing.

The code for this reusable library is in the directory src/kgn-common. This is a common library that will be used by the user interfaces developed in later chapters. There is a lot of code in the following program listings, and I hope to provide you with a roadmap overview of it, diving into code that you might want to reuse for your own projects and into some representative code for generating SPARQL queries.

Let’s start by looking at the files for the common library:

  • Makefile - contains development shortcuts.
  • data - data used to remove stop words from text.
  • kgn-common.lisp - main code file for library.
  • package.lisp - standard Common Lisp package definition.
  • utils.lisp - miscellaneous utility functions.
  • README.txt
  • kgn-common.asd - standard Common Lisp ASDF definition.

Example Output

Before we get started studying the implementation, let’s look at sample output in order to help give meaning to the code we will look at later. Consider a query that a user might type into the top query field in the KGN app:

1     Steve Jobs lived near San Francisco and was
2     a founder of <http://dbpedia.org/resource/Apple_Inc.>

The system will try to recognize entities in a query. If you know the DBPedia URI of an entity, like the company Apple in this example, you can use that URI directly. Note that in SPARQL, URIs are surrounded with angle bracket characters.

The application prints out automatically generated SPARQL queries. For the above listed example query the following output will be generated (some editing to fit page width):

Trying to get entity by name = Steve Jobs using SPARQL with type:
select distinct ?s ?comment { ?s ?p "Steve Jobs"@en . 
 ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?comment .
   FILTER ( lang ( ?comment ) = 'en' ) . 
 ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
    <http://dbpedia.org/ontology/Person> . 
 } LIMIT 15 
Trying to get entity by name = San Francisco using SPARQL with type:
select distinct ?s ?comment { ?s ?p "San Francisco"@en . 
 ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?comment . 
   FILTER ( lang ( ?comment ) = 'en' ) . 
 ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
    <http://dbpedia.org/ontology/City> . 
 } LIMIT 15 
SPARQL to get PERSON data for <http://dbpedia.org/resource/Steve_Jobs>:
SELECT DISTINCT ?label ?comment 
 ( GROUP_CONCAT ( DISTINCT ?birthplace ; SEPARATOR=' | ' ) AS ?birthplace ) 
 ( GROUP_CONCAT ( DISTINCT ?almamater ; SEPARATOR=' | ' ) AS ?almamater ) 
 ( GROUP_CONCAT ( DISTINCT ?spouse ; SEPARATOR=' | ' ) AS ?spouse ) { 
 <http://dbpedia.org/resource/Steve_Jobs> 
   <http://www.w3.org/2000/01/rdf-schema#comment>
   ?comment . 
 FILTER ( lang ( ?comment ) = 'en' ) . 
 OPTIONAL { <http://dbpedia.org/resource/Steve_Jobs>
            <http://dbpedia.org/ontology/birthPlace>
            ?birthplace } . 
 OPTIONAL { <http://dbpedia.org/resource/Steve_Jobs> 
            <http://dbpedia.org/ontology/almaMater> 
            ?almamater } . 
 OPTIONAL { <http://dbpedia.org/resource/Steve_Jobs> 
            <http://dbpedia.org/ontology/spouse> 
            ?spouse } . 
 OPTIONAL { <http://dbpedia.org/resource/Steve_Jobs> 
            <http://www.w3.org/2000/01/rdf-schema#label> 
            ?label . 
 FILTER ( lang ( ?label ) = 'en' ) } 
 } LIMIT 10 

Remember, the SPARQL is generated by KGN from natural language queries. Some more examples:

SPARQL to get CITY data for <http://dbpedia.org/resource/San_Francisco>:
SELECT DISTINCT ?label ?comment 
 ( GROUP_CONCAT ( DISTINCT ?latitude_longitude ; SEPARATOR=' | ' ) 
     AS ?latitude_longitude ) 
 ( GROUP_CONCAT ( DISTINCT ?populationDensity ; SEPARATOR=' | ' ) 
     AS ?populationDensity ) 
 ( GROUP_CONCAT ( DISTINCT ?country ; SEPARATOR=' | ' ) 
     AS ?country ) { 
 <http://dbpedia.org/resource/San_Francisco> 
   <http://www.w3.org/2000/01/rdf-schema#comment>
   ?comment . 
      FILTER ( lang ( ?comment ) = 'en' ) . 
 OPTIONAL { <http://dbpedia.org/resource/San_Francisco> 
            <http://www.w3.org/2003/01/geo/wgs84_pos#geometry> 
            ?latitude_longitude } . 
 OPTIONAL { <http://dbpedia.org/resource/San_Francisco> 
            <http://dbpedia.org/ontology/PopulatedPlace/populationDensity> 
            ?populationDensity } . 
 OPTIONAL { <http://dbpedia.org/resource/San_Francisco> 
            <http://dbpedia.org/ontology/country> 
            ?country } . 
 OPTIONAL { <http://dbpedia.org/resource/San_Francisco> 
            <http://www.w3.org/2000/01/rdf-schema#label> 
            ?label . } 
 } LIMIT 30 
SPARQL to get COMPANY data for <http://dbpedia.org/resource/Apple_Inc.>:
SELECT DISTINCT ?label ?comment ( GROUP_CONCAT ( DISTINCT ?industry ; SEPARATOR=' | \
' ) 
      AS ?industry ) 
 ( GROUP_CONCAT ( DISTINCT ?netIncome ; SEPARATOR=' | ' ) 
      AS ?netIncome ) 
 ( GROUP_CONCAT ( DISTINCT ?numberOfEmployees ; SEPARATOR=' | ' ) 
      AS ?numberOfEmployees ) { 
 <http://dbpedia.org/resource/Apple_Inc.> 
    <http://www.w3.org/2000/01/rdf-schema#comment> ?comment . 
 FILTER ( lang ( ?comment ) = 'en' ) . 
 OPTIONAL { <http://dbpedia.org/resource/Apple_Inc.> 
            <http://dbpedia.org/ontology/industry> 
            ?industry } . 
 OPTIONAL { <http://dbpedia.org/resource/Apple_Inc.> 
            <http://dbpedia.org/ontology/netIncome> ?netIncome } . 
 OPTIONAL { <http://dbpedia.org/resource/Apple_Inc.> 
            <http://dbpedia.org/ontology/numberOfEmployees> ?numberOfEmployees } . 
 OPTIONAL { <http://dbpedia.org/resource/Apple_Inc.> 
            <http://www.w3.org/2000/01/rdf-schema#label> ?label . 
              FILTER ( lang ( ?label ) = 'en' ) } 
 } LIMIT 30 

Once KGN has identified DBPedia entity URIs, it also searches for relationships between these entities:

DISCOVERED RELATIONSHIP LINKS:
<http://dbpedia.org/resource/Steve_Jobs>    ->
    <http://dbpedia.org/ontology/birthPlace>    ->
    <http://dbpedia.org/resource/San_Francisco>
<http://dbpedia.org/resource/Steve_Jobs>    -> 
    <http://dbpedia.org/ontology/occupation>    -> 
    <http://dbpedia.org/resource/Apple_Inc.>
<http://dbpedia.org/resource/Steve_Jobs>    -> 
    <http://dbpedia.org/ontology/board>         -> 
    <http://dbpedia.org/resource/Apple_Inc.>
<http://dbpedia.org/resource/Steve_Jobs>    -> 
    <http://www.w3.org/2000/01/rdf-schema#seeAlso> -> 
    <http://dbpedia.org/resource/Apple_Inc.>
<http://dbpedia.org/resource/Apple_Inc.>    -> 
    <http://dbpedia.org/property/founders>      -> 
    <http://dbpedia.org/resource/Steve_Jobs>

After listing the generated SPARQL for finding information for the entities in the query, KGN searches for relationships between these entities. These discovered relationships can be seen at the end of the last listing. Please note that this step makes O(n^2) SPARQL queries, where n is the number of entities. Local caching of SPARQL queries to DBPedia helps make processing many entities possible.

In addition to showing generated SPARQL and discovered relationships in the middle text pane of the application, KGN generates formatted results that are displayed in the bottom text pane:

- - - ENTITY TYPE: PEOPLE - - -

LABEL: Steve Jobs

COMMENT: Steven Paul "Steve" Jobs was an American information technology 
entrepreneur and inventor. He was the co-founder, chairman, and chief 
executive officer (CEO) of Apple Inc.; CEO and majority shareholder
of Pixar Animation Studios; a member of The Walt Disney Company's 
board of directors following its acquisition of Pixar; and founder, 
chairman, and CEO of NeXT Inc. Jobs is widely recognized as a pioneer of 
the microcomputer revolution of the 1970s and 1980s, along with Apple 
co-founder Steve Wozniak. Shortly after his death, Jobs's official 
biographer, Walter Isaacson, described him as a "creative entrepreneur 
whose passion for perfection and ferocious drive revolutionized six industries:
personal computers, animated movies, music, phones

BIRTHPLACE: http://dbpedia.org/resource/San_Francisco

ALMAMATER: http://dbpedia.org/resource/Reed_College

SPOUSE: http://dbpedia.org/resource/Laurene_Powell_Jobs

- - - ENTITY TYPE: CITIES - - -

LABEL:  San Francisco

COMMENT: San Francisco, officially the City and County of San Francisco, is the
cultural, commercial, and financial center of Northern California and
the only consolidated city-county in California. San Francisco encompasses a
land area of about 46.9 square miles (121 km2) on the northern end of the 
San Francisco Peninsula, which makes it the smallest county in the state. 
It has a density of about 18,451 people per square mile (7,124 people per km2),
making it the most densely settled large city (population greater than 
200,000) in the state of California and the second-most densely populated 
major city in the United States after New York City. San Francisco is 
the fourth-most populous city in California, after Los Angeles, San Diego, and 
San Jose, and the 13th-most populous cit

LATITUDE--LONGITUDE: POINT(-122.41666412354 37.783332824707)

POPULATION-DENSITY: 7123.97092726667

COUNTRY: http://dbpedia.org/resource/United_States

- - - ENTITY TYPE: COMPANIES - - -

LABEL: Apple Inc.

COMMENT: Apple Inc. is an American multinational technology company headquartered
in Cupertino, 
California, that designs, develops, and sells consumer electronics, 
computer software, and online services. Its hardware products include the 
iPhone smartphone, the iPad tablet computer, the Mac personal computer, the 
iPod portable media player, the Apple Watch smartwatch, and the Apple TV digital 
media player. Apple's consumer software includes the macOS and iOS operating 
systems, the iTunes media player, the Safari web browser, and the iLife and 
iWork creativity and productivity suites. Its online services include the 
iTunes Store, the iOS App Store and Mac App Store, Apple Music, and iCloud.

INDUSTRY: http://dbpedia.org/resource/Computer_hardware | 
          http://dbpedia.org/resource/Computer_software | 
          http://dbpedia.org/resource/Consumer_electronics | 
          http://dbpedia.org/resource/Corporate_Venture_Capital | 
          http://dbpedia.org/resource/Digital_distribution |
          http://dbpedia.org/resource/Fabless_manufacturing

NET-INCOME: 5.3394E10

NUMBER-OF-EMPLOYEES: 115000

Hopefully after reading through the sample output you now have a better idea of what this example application does. Now we will look at the project configuration and then the implementation.

Project Configuration and Running the Application

The following listing of kgn-common.asd shows the nine packages this example depends on (five of these are also examples in this book, and four are in the public Quicklisp repository):

 1 ;;;; kgn-common.asd
 2 
 3 (asdf:defsystem #:kgn-common
 4   :description "common utilities for Knowledge Graph Navigator"
 5   :author "Mark Watson <markw@markwatson.com>"
 6   :license "Apache 2"
 7   :depends-on (#:sqlite #:cl-json #:alexandria #:drakma #:myutils #:entities #:entit\
 8 y-uris #:kbnlp #:sparql-cache)
 9   :components ((:file "package")
10                (:file "utils")
11                (:file "kgn-common")))

Listing of package.lisp:

 1 ;;;; package.lisp
 2 
 3 (defpackage #:kgn-common
 4 (:use #:cl #:alexandria #:myutils #:sparql-cache
 5    #:entities #:entity-uris #:kbnlp)
 6 (:export #:kgn-common #:remove-stop-words        
 7          #:entity-results->relationship-links
 8          #:get-entity-data-helper #:handle-URIs-in-query
 9          #:remove-uris-from-query #:get-URIs-in-query #:display-entity-results
10          #:string-shorten #:prompt-string #:dbpedia-get-product-detail
11          #:dbpedia-get-person-detail #:dbpedia-get-country-detail
12          #:dbpedia-get-city-detail #:dbpedia-get-company-detail #:clean-results
13          #:dbpedia-get-entities-by-name #:clean-comment))

We use ql:quickload to load the KGN common library and call a few APIs (some output removed for brevity):

  1 $ sbcl
  2 This is SBCL 2.1.10, an implementation of ANSI Common Lisp.
  3 * (ql :kgn-common)
  4 To load "kgn-common":
  5   Load 1 ASDF system:
  6     kgn-common
  7 ; Loading "kgn-common"
  8 ..................
  9 "Starting to load data...." 
 10 "....done loading data." 
 11 To load "sqlite":
 12   Load 1 ASDF system:
 13     sqlite
 14 ; Loading "sqlite"
 15 
 16 To load "cl-json":
 17   Load 1 ASDF system:
 18     cl-json
 19 ; Loading "cl-json"
 20 
 21 To load "drakma":
 22   Load 1 ASDF system:
 23     drakma
 24 ; Loading "drakma"
 25 [package kgn-common]..
 26 (:kgn-common)
 27 *
 28 *  (kgn-common::dbpedia-get-relationships "<http://dbpedia.org/resource/Bill_Gates>"\
 29  "<http://dbpedia.org/resource/Microsoft>")
 30 ("<http://dbpedia.org/ontology/knownFor>")
 31 *
 32 *  (kgn-common:dbpedia-get-entities-by-name "Bill Gates" "<http://dbpedia.org/ontolo\
 33 gy/Person>" "<http://schema.org/Person>" :message-stream nil)
 34 
 35 (((:s "http://dbpedia.org/resource/Bill_Gates")
 36   (:comment
 37    "William Henry Gates III (born October 28, 1955) is an American business magnate,\
 38  software developer, investor, author, and philanthropist. He is a co-founder of Mic\
 39 rosoft, along with his late childhood friend Paul Allen. During his career at Micros\
 40 oft, Gates held the positions of chairman, chief executive officer (CEO), president \
 41 and chief software architect, while also being the largest individual shareholder un\
 42 til May 2014. He is considered one of the best known entrepreneurs of the microcompu\
 43 ter revolution of the 1970s and 1980s."))
 44  ((:s "http://dbpedia.org/resource/Harry_R._Lewis")
 45   (:comment
 46    "Harry Roy Lewis (born 1947) is an American computer scientist, mathe 00ADma 00AD\
 47 ti 00ADcian, and uni 00ADver 00ADsity admin 00ADi 00ADstra 00ADtor known for his res\
 48 earch in com 00ADpu 00ADta 00ADtional logic, textbooks in theoretical computer scien\
 49 ce, and writings on computing, higher education, and technology. He is Gordon McKay \
 50 Professor of Computer Science at Harvard University, and was Dean of Harvard College\
 51  from 1995 to 2003. A new professorship in Engineering and Applied Sciences, endowed\
 52  by a former student, will be named for Lewis and his wife upon their retirements.")\
 53 )
 54  ((:s "http://dbpedia.org/resource/Cascade_Investment")
 55   (:comment
 56    "Cascade Investment, L.L.C. is an American holding company and private investment\
 57  firm headquartered in Kirkland, Washington, United States. It is controlled by Bill\
 58  Gates, and managed by Michael Larson. More than half of Gates' fortune is held in a\
 59 ssets outside his holding of Microsoft shares. Cascade is the successor company to D\
 60 ominion Income Management, the former investment vehicle for Gates' holdings, which \
 61 was managed by convicted felon Andrew Evans."))
 62  ((:s "http://dbpedia.org/resource/Jerry_Dyer")
 63   (:comment
 64    "Jerry P. Dyer (born May 3, 1959) is an American politician and former law enforc\
 65 ement officer. He is the 26th and current mayor of Fresno, California. Previously, h\
 66 e served as the chief of the Fresno Police Department."))) 
 67 
 68 select distinct ?s ?comment { ?s ?p "Bill Gates"@en . @@ ?s <http://www.w3.org/2000/\
 69 01/rdf-schema#comment>  ?comment  . FILTER  (lang(?comment) = 'en') . @@ ?s <http://\
 70 www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/ontology/Person> . @@ \
 71 } LIMIT 15
 72 (((:s "http://dbpedia.org/resource/Bill_Gates")
 73   (:comment
 74    "William Henry Gates III (born October 28, 1955) is an American business magnate,\
 75  software developer, investor, author, and philanthropist. He is a co-founder of Mic\
 76 rosoft, along with his late childhood friend Paul Allen. During his career at Micros\
 77 oft, Gates held the positions of chairman, chief executive officer (CEO), president \
 78 and chief software architect, while also being the largest individual shareholder un\
 79 til May 2014. He is considered one of the best known entrepreneurs of the microcompu\
 80 ter revolution of the 1970s and 1980s."))
 81  ((:s "http://dbpedia.org/resource/Harry_R._Lewis")
 82   (:comment
 83    "Harry Roy Lewis (born 1947) is an American computer scientist, mathe 00ADma 00AD\
 84 ti 00ADcian, and uni 00ADver 00ADsity admin 00ADi 00ADstra 00ADtor known for his res\
 85 earch in com 00ADpu 00ADta 00ADtional logic, textbooks in theoretical computer scien\
 86 ce, and writings on computing, higher education, and technology. He is Gordon McKay \
 87 Professor of Computer Science at Harvard University, and was Dean of Harvard College\
 88  from 1995 to 2003. A new professorship in Engineering and Applied Sciences, endowed\
 89  by a former student, will be named for Lewis and his wife upon their retirements.")\
 90 )
 91  ((:s "http://dbpedia.org/resource/Cascade_Investment")
 92   (:comment
 93    "Cascade Investment, L.L.C. is an American holding company and private investment\
 94  firm headquartered in Kirkland, Washington, United States. It is controlled by Bill\
 95  Gates, and managed by Michael Larson. More than half of Gates' fortune is held in a\
 96 ssets outside his holding of Microsoft shares. Cascade is the successor company to D\
 97 ominion Income Management, the former investment vehicle for Gates' holdings, which \
 98 was managed by convicted felon Andrew Evans."))
 99  ((:s "http://dbpedia.org/resource/Jerry_Dyer")
100   (:comment
101    "Jerry P. Dyer (born May 3, 1959) is an American politician and former law enforc\
102 ement officer. He is the 26th and current mayor of Fresno, California. Previously, h\
103 e served as the chief of the Fresno Police Department.")))
104 * 

In this last example, using :message-stream nil effectively turns off printing generated SPARQL queries used by these APIs. You can use :message-stream t to see generated SPARQL.

Every time the KGN common library makes a web service call to DBPedia, the query and response are cached in the SQLite database ~/.kgn_cache.db, which can greatly speed up the program, especially during development when the same queries are repeatedly used for testing. This caching also takes some load off of the public DBPedia endpoint, which is a polite thing to do.

Review of NLP Utilities Used in Application

Here is a quick review of NLP utilities we saw in an earlier chapter:

  • kbnlp:make-text-object
  • kbnlp::text-human-names
  • kbnlp::text-place-names
  • entity-uris:find-entities-in-text
  • entity-uris:pp-entities

The following code snippets show example calls to the relevant NLP functions and the generated output:

KGN 39 > (setf text "Bill Clinton went to Canada")
"Bill Clinton went to Canada"

KGN 40 > (setf txtobj (kbnlp:make-text-object text))
#S(TEXT :URL "" :TITLE "" :SUMMARY "<no summary>" :CATEGORY-TAGS (("computers_micros\
oft.txt" 0.00641) ("religion_islam.txt" 0.00357)) :KEY-WORDS NIL :KEY-PHRASES NIL :H\
UMAN-NAMES ("Bill Clinton") :PLACE-NAMES ("Canada") :COMPANY-NAMES NIL :TEXT #("Bill\
" "Clinton" "went" "to" "Canada") :TAGS #("NNP" "NNP" "VBD" "TO" "NNP"))

KGN 41 > (kbnlp::text-human-names txtobj)
("Bill Clinton")

KGN 42 > 
(loop for key being the hash-keys of  (entity-uris:find-entities-in-text text)
  using (hash-value value)
  do (format t "key: ~S value: ~S~%" key value))
key: "people" value: (("Bill Clinton" "<http://dbpedia.org/resource/Bill_Clinton>"))
key: "countries" value: (("Canada" "<http://dbpedia.org/resource/Canada>"))
NIL

The code using loop at the end of the last REPL listing, which prints the keys and values of a hash table, is from the Common Lisp Cookbook web site, in the section “Traversing a Hash Table.”

Developing Low-Level SPARQL Utilities

I use the standard command line curl utility program with the Common Lisp package uiop to make HTTP GET requests to the DBPedia public Knowledge Graph, and the package drakma to URL-encode parts of a query. The source code is in a separate Quicklisp library located in src/sparql-cache/sparql.lisp. A non-caching version is also available in src/sparql/sparql.lisp.

In the following listing of src/sparql-cache/sparql.lisp, on lines 8, 24, 39, and 55 I use some caching code that we will look at later. The nested replace-all calls on lines 12-13 are a kluge to remove Unicode characters that occasionally caused runtime errors in the KGN application.

 1 (in-package #:kgn)
 2 
 3 (ql:quickload "cl-json")
 4 (ql:quickload "drakma")
 5 
 6 (defun sparql-dbpedia (query)
 7   (let* (ret
 8          (cr (fetch-result-dbpedia query))
 9          (response
10           (or
11            cr
12            (replace-all
13             (replace-all
14              (uiop:run-program 
15               (list
16                "curl" 
17                (concatenate 'string
18                             "https://dbpedia.org/sparql?query="
19                             (drakma:url-encode query :utf-8)
20                             "&format=json"))
21               :output :string)
22              "\\u2013" " ")
23             "\\u" " "))))
24     (save-query-result-dbpedia query response)
25     (ignore-errors
26       (with-input-from-string
27           (s response)
28         (let ((json-as-list (json:decode-json s)))
29           (setf
30            ret
31            (mapcar #'(lambda (x)
32                        ;;(pprint x)
33                        (mapcar #'(lambda (y)
34                                    (list (car y) (cdr (assoc :value (cdr y))))) x))
35                    (cdr (cadddr (cadr json-as-list))))))))
36     ret))
37 
38 (defun sparql-ask-dbpedia (query)
39   (let* ((cr (fetch-result-dbpedia query))
40          (response
41           (or
42            cr
43            (replace-all
44             (replace-all
45              (uiop:run-program 
46               (list
47                "curl" 
48                (concatenate 'string
49                             "https://dbpedia.org/sparql?query="
50                             (drakma:url-encode query :utf-8)
51                             "&format=json"))
52               :output :string)
53              "\\u2013" " ")
54             "\\u" " "))))
55     (save-query-result-dbpedia query response)
56     (if  (search "true" response)
57         t
58       nil)))

The code for replacing Unicode characters is messy but prevents problems later when we are using the query results in the example application.

The code (json-as-list (json:decode-json s)) on line 28 converts a deeply nested JSON response to nested Common Lisp lists. You may want to print out the list to better understand the mapcar expression on lines 31-35. There is no magic to writing expressions like this: in a REPL I set json-as-list to the result of one query and spent a minute or two experimenting with the nested mapcar expression to get it to work with my test case.
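
To make the shape concrete, here is a small hand-constructed value (abbreviated and hypothetical) in the form that json:decode-json returns for a SPARQL JSON response, with the nested mapcar expression applied to it:

(defvar json-as-list
  '((:head (:vars "s" "comment"))
    (:results (:distinct) (:ordered . t)
     (:bindings
      ((:s (:type . "uri")
           (:value . "http://dbpedia.org/resource/Bill_Gates"))
       (:comment (:type . "literal")
                 (:value . "William Henry Gates III ...")))))))

(mapcar #'(lambda (x)
            (mapcar #'(lambda (y)
                        (list (car y) (cdr (assoc :value (cdr y))))) x))
        (cdr (cadddr (cadr json-as-list))))
;; => (((:S "http://dbpedia.org/resource/Bill_Gates")
;;      (:COMMENT "William Henry Gates III ...")))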

The implementation of sparql-ask-dbpedia in lines 38-58 is simpler because we don’t have to fully parse the returned SPARQL query results. A SPARQL ASK query returns a true/false answer. We will use this to determine the types of entities in query text. While our NLP library identifies entity types, making additional ASK queries to DBPedia to verify entity types provides better automated results.
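
For example, verifying that Steve Jobs is typed as a person in DBPedia reduces to a single ASK query (a hypothetical call; the function returns t or nil):

(sparql-ask-dbpedia
 "ASK { <http://dbpedia.org/resource/Steve_Jobs>
        <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
        <http://dbpedia.org/ontology/Person> }")
;; => T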

Implementing the Caching Layer

Both while developing KGN and when using it as an end user, many SPARQL queries to DBPedia repeat the same entity names, so it makes sense to add a caching layer. We use the SQLite database “~/.kgn_cache.db” to store queries and responses.

The caching layer is implemented in the file src/sparql-cache/sparql.lisp and some of the relevant code is listed here:

 1 ;;; SQLite caching for SPARQL queries:
 2 
 3 (defvar *db-path* (pathname "~/.kgn_cache.db"))
 4 
 5 (defun create-dbpedia ()
 6   (sqlite:with-open-database (d *db-path*)
 7     (ignore-errors
 8       (sqlite:execute-single d
 9         "CREATE TABLE dbpedia (query string  PRIMARY KEY ASC, result string)"))))
10 
11 (defun save-query-result-dbpedia (query result)
12   (sqlite:with-open-database (d *db-path*)
13     (ignore-errors
14       (sqlite:execute-to-list d
15                        "insert into dbpedia (query, result) values (?, ?)"
16                        query result))))
17 (defun fetch-result-dbpedia (query)
18   (sqlite:with-open-database (d *db-path*)
19     (cadar
20      (sqlite:execute-to-list d
21                       "select * from dbpedia where query = ?" query))))

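A short hypothetical REPL session shows how these three functions fit together (the query and result strings are abbreviated):

* (create-dbpedia)                      ;; create the cache table (safe to call repeatedly)
* (fetch-result-dbpedia "select ...")   ;; cache miss
NIL
* (save-query-result-dbpedia "select ..." "{ ... JSON response ... }")
* (fetch-result-dbpedia "select ...")   ;; cache hit returns the saved response
"{ ... JSON response ... }"

Deleting the file ~/.kgn_cache.db clears the cache entirely.
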
This caching layer greatly speeds up my own personal use of KGN. Without caching, queries that contain many entity references simply take too long to run. The UIs for the KGN applications in later chapters have a menu option for clearing the local cache, but I almost never use it because growing a large cache tailored to the types of information I search for makes the entire system much more responsive.

Utilities in the Main Library File kgn-common.lisp

The utilities in the file src/kgn-common/kgn-common.lisp can be seen in this complete code listing:

  1 (in-package #:kgn-common)
  2 
  3 (defun pprint-results (results &key (stream t))
  4   (pprint results stream))
  5 
  6 (defun colorize-sparql-local (s  &key (stream nil))
  7   "placeholder - some applications, like the kgn-capi-ui example need to
  8    colorize the sparql output"
  9   (princ s stream))
 10 
 11 (defun check-uri (uri)
 12   "sloppy code fix: URIs have different forms - normalize these"
 13   (if (equal (type-of uri) 'cons) (setf uri (second uri)))
 14   (entity-uris:ensure-uri-brackets uri))
 15 
 16 (defun clean-comment (comment-string)
 17   "When getting comment strings from DBPedia, there are parts
 18    that I remove for display"
 19   (let ((i1 (search "(" comment-string))
 20         (i2 (search ")" comment-string)))
 21     (if (and i1 i2 (> i2 i1) (> i1 0))
 22         (concatenate 'string (subseq comment-string 0 (- i1 1))
 23                              (subseq comment-string (+ i2 1)))
 24         (let ((j1 (search " / " comment-string)))
 25           (if j1
 26               (let ((j2 (search "/" comment-string :start2 (+ j1 2))))
 27                 (if (and j1 j2 (> j2 j1) (< (+ j2 1) (length comment-string)))
 28                     (concatenate 'string (subseq comment-string 0 j1)
 29                                          (subseq comment-string (+ j2 1)))
 30                     comment-string))
 31               comment-string)
 32           comment-string))))
 33 
 34 (defun clean-results (results)
 35   "This function is replaced when we later build GUI apps"
 36   results)
 37 
 38 (defun get-name-and-description-for-uri (uri)
 39   (let* ((sparql
 40            (replace-all
 41             (format nil
 42   "select distinct ?name ?comment { @@ ~
 43     values ?nameProperty {<http://www.w3.org/2000/01/rdf-schema#label>
 44                           <http://xmlns.com/foaf/0.1/name> } . @@ ~
 45     ~A ?nameProperty ?name . @@ ~
 46     ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment  .
 47          FILTER  (lang(?comment) = 'en') . @@ ~
 48    } LIMIT 1" uri uri)
 49              "@@" " "))
 50           (results (sparql-cache:dbpedia sparql)))
 51     (list (second (assoc :name (car results)))
 52           (second (assoc :comment (car results))))))
 53 
 54 ;; (kgn-common::get-name-and-description-for-uri
 55 ;;   "<http://dbpedia.org/resource/Apple_Inc.>")
 56 
 57 (defun ask-is-type-of (entity-uri type-value-uri) ;; both URIs expected to use surro\
 58 unding < > brackets for SPARQL
 59   (let* ((sparql
 60            (format nil
 61               "ASK { ~A <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ~A }"
 62               (check-uri entity-uri) (check-uri type-value-uri)))
 63          (results (sparql-cache:ask-dbpedia sparql)))
 64     (print sparql)
 65     results))
 66 
 67 ;; (kgn-common::ask-is-type-of
 68 ;;   "<http://dbpedia.org/resource/Apple_Inc.>"
 69 ;;   "<http://dbpedia.org/ontology/Company>")
 70 
 71 
 72 (defun dbpedia-get-entities-by-name (name dbpedia-type schema-org-type
 73           &key (message-stream t)
 74                (colorize-sparql-function #'colorize-sparql-local))
 75   ;; http://www.w3.org/1999/02/22-rdf-syntax-ns#type <http://schema.org/Person>
 76   (let* ((sparql
 77            (format nil
 78 "select distinct ?s ?comment { 
 79    ?s ?p \"~A\"@en . @@ ~
 80    ?s <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment  . FILTER  (lang(?com\
 81 ment) = 'en') . @@ ~
 82    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ~A . @@ ~
 83 } LIMIT 15" name dbpedia-type))
 84          (results (sparql-cache:dbpedia (replace-all sparql "@@" " "))))
 85     (print results)
 86     (terpri message-stream)
 87     (format message-stream
 88       "Trying to get entity by name = ~A using SPARQL with type:"
 89       name dbpedia-type)
 90     (terpri message-stream)
 91     (apply colorize-sparql-function (list sparql :stream message-stream))
 92     (if (null results)
 93         (let* ((sparql2
 94                  (format nil
 95 "select distinct ?s ?comment {
 96     ?s ?p \"~A\"@en . @@ ~
 97     ?s <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment  . FILTER  (lang(?co\
 98 mment) = 'en') . @@ ~
 99     ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ~A . @@ ~
100 } LIMIT 15" name schema-org-type)))
101           (format
102            t
103            "No results for ~A for last SPARQL query using type ~A so trying type ~A"\
104  name dbpedia-type schema-org-type)
105           (terpri message-stream)
106           (setf results (sparql-cache:dbpedia (replace-all sparql2 "@@" " ")))
107           (if (null results)
108               (format
109                t
110                "No results for ~A for last SPARQL query using type ~A"
111                name schema-org-type)
112               (let* ((filtered (remove-if
113                                 #'(lambda (x)
114                                     (or
115                                      (search "," (cadar x))
116                                      (and
117                                       (not (equal (first x) :comment))
118                                       (not (search "/resource/" (cadar x))))))
119                                 results))
120                      (uris (remove-duplicates
121                             (map
122                               'list
123                               #'(lambda (x)
124                                 (list (concatenate 'string "<" (cadar x) ">") (cadad\
125 r x)))
126                                  filtered) :test #'equal)))
127                 (format t "~%~%********** dbpedia-get-entities-by-name: uris:~%")
128                 (pprint uris) (terpri)
129                 uris))))
130     results))
131 
132 ;; (kgn-common:dbpedia-get-entities-by-name
133 ;;    "Bill Gates" "<http://dbpedia.org/ontology/Person>"
134 ;;    "<http://schema.org/Person>" :message-stream nil)
135 ;; in above, pass t for message-stream to see generated SPARQL queries
136 
137 (defun dbpedia-get-person-detail (person-uri-raw
138     &key (message-stream t)
139          (colorize-sparql-function #'colorize-sparql-local))
140   ;; http://dbpedia.org/ontology/birthPlace 
141   (let* ((person-uri (check-uri person-uri-raw))
142          (query
143            (format nil
144       "SELECT DISTINCT ?label ?comment@@ ~
145         (GROUP_CONCAT (DISTINCT ?birthplace; SEPARATOR=' | ') AS ?birthplace) @@ ~
146         (GROUP_CONCAT (DISTINCT ?almamater; SEPARATOR=' | ') AS ?almamater) @@ ~
147         (GROUP_CONCAT (DISTINCT ?spouse; SEPARATOR=' | ') AS ?spouse) { @@ ~
148          ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .@@
149               FILTER  (lang(?comment) = 'en') . @@ ~
150         OPTIONAL { ~A <http://dbpedia.org/ontology/birthPlace> ?birthplace } . @@ ~
151         OPTIONAL { ~A <http://dbpedia.org/ontology/almaMater> ?almamater } . @@ ~
152         OPTIONAL { ~A <http://dbpedia.org/ontology/spouse> ?spouse } . @@ ~
153         OPTIONAL { ~A  <http://www.w3.org/2000/01/rdf-schema#label> ?label .@@ ~
154                 FILTER  (lang(?label) = 'en') } @@ ~
155       } LIMIT 10@@" person-uri person-uri person-uri person-uri person-uri))
156          (results (sparql-cache:dbpedia  (replace-all query "@@" " "))))
157     (format message-stream "~%SPARQL to get PERSON data for ~A:~%~%" person-uri)
158     (apply colorize-sparql-function (list query :stream message-stream))
159     (format message-stream "~%")
160     ;;results))
161     (clean-results results)))
162 
163 ;; (kgn-common:dbpedia-get-person-detail "<http://dbpedia.org/resource/Bill_Gates>")
164 
165 (defun dbpedia-get-company-detail (company-uri-raw
166      &key (message-stream t)
167           (colorize-sparql-function #'colorize-sparql-local))
168   (let* ((company-uri (check-uri company-uri-raw))
169          (query
170            (format nil
171 "SELECT DISTINCT ?label ?comment (GROUP_CONCAT (DISTINCT ?industry; SEPARATOR=' | ')\
172  AS ?industry)@@ ~
173   (GROUP_CONCAT (DISTINCT ?netIncome; SEPARATOR=' | ') AS ?netIncome)@@ ~
174   (GROUP_CONCAT (DISTINCT ?numberOfEmployees; SEPARATOR=' | ') AS ?numberOfEmployees\
175 ) {@@ ~
176   ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .@@
177           FILTER  (lang(?comment) = 'en') .@@ ~
178   OPTIONAL { ~A <http://dbpedia.org/ontology/industry> ?industry } .@@  ~
179   OPTIONAL { ~A <http://dbpedia.org/ontology/netIncome> ?netIncome } .@@  ~
180   OPTIONAL { ~A <http://dbpedia.org/ontology/numberOfEmployees> ?numberOfEmployees }\
181  .@@  ~
182   OPTIONAL { ~A <http://www.w3.org/2000/01/rdf-schema#label> ?label . FILTER (lang(?\
183 label) = 'en') } @@ ~
184 } LIMIT 30@@"
185            company-uri company-uri company-uri company-uri company-uri))
186          (results (sparql-cache:dbpedia  (replace-all query "@@" " "))))
187     (format message-stream "~%SPARQL to get COMPANY data for ~A:~%~%" company-uri)
188     (apply colorize-sparql-function (list query :stream message-stream))
189     (format message-stream "~%")
190     (clean-results results)))
191 
192 ;; (kgn-common:dbpedia-get-company-detail "<http://dbpedia.org/resource/Microsoft>")
193 
194 (defun dbpedia-get-country-detail (country-uri-raw
195     &key (message-stream t)
196          (colorize-sparql-function #'colorize-sparql-local))
197   (let* ((country-uri (check-uri country-uri-raw))
198          (query
199            (format nil
200 "SELECT DISTINCT ?label ?comment (GROUP_CONCAT (DISTINCT ?areaTotal; SEPARATOR=' | '\
201 ) AS ?areaTotal)@@ ~
202   (GROUP_CONCAT (DISTINCT ?populationDensity; SEPARATOR=' | ') AS ?populationDensity\
203 ) {@@ ~
204   ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .@@
205       FILTER  (lang(?comment) = 'en') .@@ ~
206   OPTIONAL { ~A <http://dbpedia.org/ontology/areaTotal> ?areaTotal } .@@  ~
207   OPTIONAL { ~A <http://dbpedia.org/ontology/populationDensity> ?populationDensity }\
208  .@@  ~
209   OPTIONAL { ~A <http://www.w3.org/2000/01/rdf-schema#label> ?label . }@@ ~
210 } LIMIT 30@@"
211                    country-uri country-uri country-uri country-uri country-uri))
212          (results (sparql-cache:dbpedia  (replace-all query "@@" " "))))
213     (format message-stream "~%SPARQL to get COUNTRY data for ~A:~%~%" country-uri)
214     (apply colorize-sparql-function (list query :stream message-stream))
215     (format message-stream "~%")
216     (clean-results results)))
217 
218 ;; (kgn-common:dbpedia-get-country-detail "<http://dbpedia.org/resource/Canada>")
219 
220 (defun dbpedia-get-city-detail (city-uri-raw
221     &key (message-stream t)
222          (colorize-sparql-function #'colorize-sparql-local))
223   (let* ((city-uri (check-uri city-uri-raw))
224          (query
225            (format
226             nil
227 "SELECT DISTINCT ?label ?comment @@ ~
228   (GROUP_CONCAT (DISTINCT ?latitude_longitude; SEPARATOR=' | ')  AS ?latitude_longit\
229 ude) @@ ~
230   (GROUP_CONCAT (DISTINCT ?populationDensity; SEPARATOR=' | ') AS ?populationDensity\
231 ) @@ ~
232   (GROUP_CONCAT (DISTINCT ?country; SEPARATOR=' | ') AS ?country) { @@ ~
233   ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment . FILTER  (lang(?comme\
234 nt) = 'en') . @@ ~
235   OPTIONAL { ~A <http://www.w3.org/2003/01/geo/wgs84_pos#geometry> ?latitude_longitu\
236 de } . @@ ~
237   OPTIONAL { ~A <http://dbpedia.org/ontology/PopulatedPlace/populationDensity> ?popu\
238 lationDensity } . @@ ~
239   OPTIONAL { ~A <http://dbpedia.org/ontology/country> ?country } .@@ ~
240   OPTIONAL { ~A <http://www.w3.org/2000/01/rdf-schema#label> ?label . } @@ ~
241 } LIMIT 30@@"
242             city-uri city-uri city-uri city-uri city-uri))
243          (results (sparql-cache:dbpedia (replace-all query "@@" " "))))
244     (format message-stream "~%SPARQL to get CITY data for ~A:~%~%" city-uri)
245     (apply colorize-sparql-function (list query :stream message-stream))
246     (format message-stream "~%")
247     (clean-results results)))
248 
249 ;; (kgn-common:dbpedia-get-city-detail "<http://dbpedia.org/resource/London>")
250 
251 (defun dbpedia-get-product-detail (product-uri-raw
252      &key (message-stream t)
253           (colorize-sparql-function #'colorize-sparql-local))
254   (let* ((product-uri (check-uri product-uri-raw))
255          (query
256            (format
257             nil
258 "SELECT DISTINCT ?label ?comment {  @@ ~
259   ~A <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment . FILTER  (lang(?comme\
260 nt) = 'en') . @@ ~
261   OPTIONAL { ~A <http://www.w3.org/2000/01/rdf-schema#label> ?label . } ~
262 } LIMIT 30@@"
263             product-uri product-uri))
264          (results (sparql-cache:dbpedia (replace-all query "@@" " "))))
265     (format message-stream "~%SPARQL to get PRODUCT data for ~A:~%~%" product-uri)
266     (apply colorize-sparql-function (list query :stream message-stream))
267     (format message-stream "~%")
268     (clean-results results)))
269 
270 ;; (kgn-common:dbpedia-get-product-detail "<http://dbpedia.org/resource/Pepsi>")
271 
272 
273 (defun dbpedia-get-relationships (s-uri o-uri) ;;  &key (message-stream t))
274   (let* ((query
275            (format
276             nil
277 "SELECT DISTINCT ?p { ~A ?p ~A . FILTER (!regex(str(?p), 'wikiPage', 'i'))} LIMIT 5"
278             (check-uri s-uri) (check-uri  o-uri)))
279          (results (sparql-cache:dbpedia query)))
280     (alexandria:flatten (map 'list
281                              #'(lambda (x)
282                                  (format nil "~{<~A>~}" (cdar x)))
283                              results))))
284 
285 ;; (kgn-common::dbpedia-get-relationships
286 ;;   "<http://dbpedia.org/resource/Bill_Gates>"
287 ;;   "<http://dbpedia.org/resource/Microsoft>")
288 
289 (defun entities (text)
290   (let ((txt-obj (kbnlp:make-text-object text)))
291     (list (kbnlp::text-human-names txt-obj) (kbnlp::text-place-names txt-obj) (kbnlp\
292 ::text-company-names txt-obj))))
293 
294 ;; (kgn-common::entities "Bill Clinton went to Canada")
295 
296 (defun entities-dbpedia (text)
297   (let ((e-hash (entity-uris:find-entities-in-text text)))
298     (list
299       (gethash "people" e-hash)
300       (gethash "companies" e-hash)
301       (gethash "countries" e-hash)
302       (gethash "cities" e-hash))))
303 
304 ;; (kgn-common::entities-dbpedia "Bill Clinton went to Canada")

Wrap-up

This is a long example application for a book, split between this chapter and the next two chapters, which offer different user interface implementations.

I got the idea for the KGN application because I was spending quite a bit of time manually setting up SPARQL queries for DBPedia (and other public sources like WikiData) and I wanted to experiment with partially automating this process. In the next two chapters we will write user interfaces for this KGN common library.

Knowledge Graph Navigator Text-Based User Interface

We developed the Knowledge Graph Navigator (which I will often refer to as KGN) common library in the last chapter. Here we write a simple console (text-based) user interface for the library. In later chapters we implement UIs using LispWorks CAPI, McCLIM, and Franz Common Graphics.

This Quicklisp library can be found in a separate GitHub repository https://github.com/mark-watson/kgn-text-ui and contains the files:

  • kgn-text-ui.asd - specifies the system and its dependencies, including the KGN common library (a sketch of this file follows the list)
  • kgn-text-ui.lisp - contains the complete user interface
  • package.lisp - defines the kgn-text-ui package and the packages it uses
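Here is a plausible sketch of what the system definition looks like. The exact dependency list is an assumption (the code calls myutils:tokenize-string, for example), so check the repository for the actual file:

;;;; kgn-text-ui.asd (a sketch, not the repository file)
(asdf:defsystem #:kgn-text-ui
  :description "Text based (console) UI for Knowledge Graph Navigator"
  :depends-on (#:kgn-common #:myutils)
  :components ((:file "package")
               (:file "kgn-text-ui")))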

We start by looking at sample output using the text user interface and then look at the implementation.

Example Output

We will look at a very simple example query, “Bill Gates worked at Microsoft and his competitor was IBM”, that only contains a few entities. In practice, I usually use queries with five to ten entities to get more discovered relationships. I remove a lot of the output in the following listing for brevity, especially the many SPARQL queries that the code generates and uses (comments on the output appear after this listing):

  1 $ sbcl
  2 *(ql:quickload :kgn-text-ui)
  3 ; Loading "kgn-common"
  4 ; Loading "sqlite"
  5 ; Loading "cl-json"
  6 ; Loading "drakma"
  7 
  8 * (kgn-text-ui:kgn-text-ui)
  9 
 10 "Enter entity names (people, places, companies, etc.":
 11 Bill Gates worked at Microsoft and his competitor was IBM
 12 
 13 - - - - Enter zero or more indices for your desired selections:
 14 
 15 0  -   "William Henry Gates III (born October 28, 1955) is an American business magn\
 16 ate, software developer, investor, author, and philanthropist. He is a co-founder of\
 17  Microsoft, along with his late childhood friend Paul Allen. During his career at Mi\
 18 crosoft, Gates held the positions of chairman, chief executive officer (CEO), presid\
 19 ent and chief software architect, while also being the largest individual shareholde\
 20 r until May 2014. He is considered one of the best known entrepreneurs of the microc\
 21 omputer revolution of the 1970s and 1980s." 
 22 
 23 1  -   "Harry Roy Lewis (born 1947) is an American computer scientist, mathe 00ADma \
 24 00ADti 00ADcian, and uni 00ADver 00ADsity admin 00ADi 00ADstra 00ADtor known for his\
 25  research in com 00ADpu 00ADta 00ADtional logic, textbooks in theoretical computer s\
 26 cience, and writings on computing, higher education, and technology. He is Gordon Mc\
 27 Kay Professor of Computer Science at Harvard University, and was Dean of Harvard Col\
 28 lege from 1995 to 2003. A new professorship in Engineering and Applied Sciences, end\
 29 owed by a former student, will be named for Lewis and his wife upon their retirement\
 30 s." 
 31 
 32 2  -   "Cascade Investment, L.L.C. is an American holding company and private invest\
 33 ment firm headquartered in Kirkland, Washington, United States. It is controlled by \
 34 Bill Gates, and managed by Michael Larson. More than half of Gates' fortune is held \
 35 in assets outside his holding of Microsoft shares. Cascade is the successor company \
 36 to Dominion Income Management, the former investment vehicle for Gates' holdings, wh\
 37 ich was managed by convicted felon Andrew Evans." 
 38 
 39 3  -   "Jerry P. Dyer (born May 3, 1959) is an American politician and former law en\
 40 forcement officer. He is the 26th and current mayor of Fresno, California. Previousl\
 41 y, he served as the chief of the Fresno Police Department." 
 42 
 43 0
 44 
 45 - - - - Enter zero or more indices for your desired selections:
 46 
 47 0  -   "Kenexa, an IBM Company, provides employment and retention services. This inc\
 48 ludes recruitment process outsourcing onboarding tools, employee assessment, abiliti\
 49 es assessment for employment candidates (Kenexa Prove It); and Kenexa Interview Buil\
 50 der, a structured interview archive with example questions." 
 51 
 52 1  -   "Sequent Computer Systems was a computer company that designed and manufactur\
 53 ed multiprocessing computer systems. They were among the pioneers in high-performanc\
 54 e symmetric multiprocessing (SMP) open systems, innovating in both hardware (e.g., c\
 55 ache management and interrupt handling) and software (e.g., read-copy-update). Vesti\
 56 ges of Sequent's innovations live on in the form of data clustering software from Po\
 57 lyServe (subsequently acquired by HP), various projects within OSDL, IBM contributio\
 58 ns to the Linux kernel, and claims in the SCO v. IBM lawsuit." 
 59 
 60 2  -   "i2 Limited was the UK-based arm of software company i2 Group which produced \
 61 visual intelligence and investigative analysis software for military intelligence, l\
 62 aw enforcement and commercial agencies. After a number of acquisitions, in 2011 it b\
 63 ecame part of IBM." 
 64 
 65 3  -   "The International Technology Alliance in Distributed Analytics and Informati\
 66 on Sciences (DAIS-ITA) is a research program initiated by the UK Ministry of Defence\
 67  (United Kingdom) (MOD) and the US Army Research Laboratory (ARL), in September 2016\
 68 . It is led by IBM Research in the U.S. and IBM Hursley in the UK. DAIS ITA is the s\
 69 econd International Technology Alliance started by the two countries, succeeding the\
 70  previous ten year alliance NIS-ITA, which was of similar nature." 
 71 
 72 4  -   "The International Technology Alliance in Network and Information Sciences (N\
 73 IS-ITA) was a research program initiated by the UK Ministry of Defence (United Kingd\
 74 om) (MoD) and the US Army Research Laboratory (ARL), which was active for 10 years f\
 75 rom May 2006 to May 2016. It was led by IBM Research in the U.S. and IBM Hursley in \
 76 the UK. NIS ITA was the first International Technology Alliance started by the two c\
 77 ountries." 
 78 
 79 5  -   "Applix Inc. was a computer software company founded in 1983 based in Westbor\
 80 ough, Massachusetts that published Applix TM1, a multi-dimensional online analytical\
 81  processing (MOLAP) database server, and related presentation tools, including Appli\
 82 x Web and Applix Executive Viewer. Together, Applix TM1, Applix Web and Applix Execu\
 83 tive Viewer were the three core components of the Applix Business Analytics Platform\
 84 . (Executive Viewer was subsequently discontinued by IBM.)" 
 85 
 86 6  -   "Ounce Labs (an IBM company) is a Waltham, Massachusetts-based security softw\
 87 are vendor. The company was founded in 2002 and created a software analysis product \
 88 that analyzes source code to identify and remove security vulnerabilities. The secur\
 89 ity software looks for a range of vulnerabilities that leave an application open to \
 90 attack. Customers have included GMAC, Lockheed Martin, and the U.S. Navy. On July 28\
 91 , 2009, Ounce was acquired by IBM, for an undisclosed sum, with the intention of int\
 92 egrating it into IBM's Rational Software business." 
 93 
 94 7  -   "IBM Watson Health is a digital tool that helps clients facilitate medical re\
 95 search, clinical research, and healthcare solutions, through the use of artificial i\
 96 ntelligence, data, analytics, cloud computing, and other advanced information techno\
 97 logy. It is a division of the International Business Machines Corporation, (IBM), an\
 98  American multinational information technology company headquartered in Armonk, New \
 99 York." 
100 
101 8  -   "International Business Machines Corporation (IBM) is an American multination\
102 al technology corporation headquartered in Armonk, New York, with operations in over\
103  171 countries. The company began in 1911, founded in Endicott, New York by trust bu\
104 sinessman Charles Ranlett Flint, as the Computing-Tabulating-Recording Company (CTR)\
105  and was renamed \"International Business Machines\" in 1924. IBM is incorporated in\
106  New York." 
107 
108 9  -   "Microsoft Corporation is an American multinational technology corporation wh\
109 ich produces computer software, consumer electronics, personal computers, and relate\
110 d services. Its best known software products are the Microsoft Windows line of opera\
111 ting systems, the Microsoft Office suite, and the Internet Explorer and Edge web bro\
112 wsers. Its flagship hardware products are the Xbox video game consoles and the Micro\
113 soft Surface lineup of touchscreen personal computers. Microsoft ranked No. 21 in th\
114 e 2020 Fortune 500 rankings of the largest United States corporations by total reven\
115 ue; it was the world's largest software maker by revenue as of 2016. It is considere\
116 d one of the Big Five companies in the U.S. information technology industry, along w\
117 ith Amazon, Google (Alphabet), Apple, and Facebook (" 
118 
119 10  -   "The CSS Working Group (Cascading Style Sheets Working Group) is a working g\
120 roup created by the World Wide Web Consortium (W3C) in 1997, to tackle issues that h\
121 ad not been addressed with CSS level 1. As of December 2019, the CSSWG had 142 membe\
122 rs. The working group is co-chaired by and ." 
123 
124 11  -   "The AMD Professional Gamers League (PGL), founded around 1997, was one of t\
125 he first professional computer gaming eSports leagues. The PGL was run by Total Ente\
126 rtainment Network and was sponsored by AMD. The first professional tournament they h\
127 eld was for StarCraft in September 1997. The league was official unveiled at a press\
128  conference at Candlestick Park in San Francisco on November 3, 1997. It was sponsor\
129 ed by Microsoft, Nvidia, and Levi Strauss & Co. The organization raised over $1.2mil\
130  USD in sponsorship money." 
131 
132 12  -   "Secure Islands Technologies Ltd. was an Israeli privately held technology c\
133 ompany headquartered in Beit Dagan which was subsequently acquired by Microsoft. The\
134  company develops and markets Information Protection and Control (IPC) solutions." 
135 
136 13  -   "Microsoft Innovation Centers (MICs) are local government organizations, uni\
137 versities, industry organizations, or software or hardware vendors who partner with \
138 Microsoft with a common goal to foster the growth of local software economies. These\
139  are state of the art technology facilities which are open to students, developers, \
140 IT professionals, entrepreneurs, startups and academic researchers. While each Cente\
141 r tunes its programs to local needs, they all provide similar content and services d\
142 esigned to accelerate technology advances and stimulate local software economies thr\
143 ough skills and professional training, industry partnerships and innovation. As of 1\
144 0 September 2010, there are 115 Microsoft Innovation Centers worldwide, most of whic\
145 h are open to the public. Recently it was reported th" 
146 
147 14  -   "Press Play ApS was a Danish video game development studio based in central \
148 Copenhagen in Denmark. Since 2006, Press Play have released five titles, including t\
149 he Max & the Magic Marker, Max: The Curse of Brotherhood and Kalimba. On November 10\
150 , 2016, Flashbulb acquired Press Play and its library of games to republish under th\
151 e Flashbulb name including Kalimba, Tentacles: Enter the Mind, and Max: The Curse of\
152  Brotherhood." 
153 
154 8 9
155 
156 - - - ENTITY TYPE: people - - -
157 
158 SPARQL to get PERSON data for <http://dbpedia.org/resource/Bill_Gates>:
159 
160 "SELECT DISTINCT ?label ?comment@@ (GROUP_CONCAT (DISTINCT ?birthplace; SEPARATOR=' \
161 | ') AS ?birthplace) @@ (GROUP_CONCAT (DISTINCT ?almamater; SEPARATOR=' | ') AS ?alm\
162 amater) @@ (GROUP_CONCAT (DISTINCT ?spouse; SEPARATOR=' | ') AS ?spouse) { @@ <http:\
163 //dbpedia.org/resource/Bill_Gates> <http://www.w3.org/2000/01/rdf-schema#comment>  ?\
164 comment .@@
165                            FILTER  (lang(?comment) = 'en') . @@ OPTIONAL { <http://d\
166 bpedia.org/resource/Bill_Gates> <http://dbpedia.org/ontology/birthPlace> ?birthplace\
167  } . @@ OPTIONAL { <http://dbpedia.org/resource/Bill_Gates> <http://dbpedia.org/onto\
168 logy/almaMater> ?almamater } . @@ OPTIONAL { <http://dbpedia.org/resource/Bill_Gates\
169 > <http://dbpedia.org/ontology/spouse> ?spouse } . @@ OPTIONAL { <http://dbpedia.org\
170 /resource/Bill_Gates>  <http://www.w3.org/2000/01/rdf-schema#label> ?label .@@ FILTE\
171 R  (lang(?label) = 'en') } @@ } LIMIT 10@@"
172 
173 
174 label: Bill Gates
175 
176 comment: William Henry Gates III (born October 28, 1955) is an American business mag\
177 nate, software developer, investor, author, and philanthropist. He is a co-founder o\
178 f Microsoft, along with his late childhood friend Paul Allen. During his career at M\
179 icrosoft, Gates held the positions of chairman, chief executive officer (CEO), presi\
180 dent and chief software architect, while also being the largest individual sharehold\
181 er until May 2014. He is considered one of the best known entrepreneurs of the micro\
182 computer revolution of the 1970s and 1980s.
183 
184 birthplace: http://dbpedia.org/resource/Seattle | http://dbpedia.org/resource/Washin\
185 gton_(state)
186 
187 almamater: 
188 
189 spouse: http://dbpedia.org/resource/Melinda_French_Gates
190 
191 - - - ENTITY TYPE: companies - - -
192 
193 SPARQL to get COMPANY data for <http://dbpedia.org/resource/IBM>:
194 
195 
196 "SELECT DISTINCT ?label ?comment (GROUP_CONCAT (DISTINCT ?industry; SEPARATOR=' | ')\
197  AS ?industry)@@ (GROUP_CONCAT (DISTINCT ?netIncome; SEPARATOR=' | ') AS ?netIncome)\
198 @@ (GROUP_CONCAT (DISTINCT ?numberOfEmployees; SEPARATOR=' | ') AS ?numberOfEmployee\
199 s) {@@ <http://dbpedia.org/resource/IBM> <http://www.w3.org/2000/01/rdf-schema#comme\
200 nt>  ?comment .@@
201                            FILTER  (lang(?comment) = 'en') .@@ OPTIONAL { <http://db\
202 pedia.org/resource/IBM> <http://dbpedia.org/ontology/industry> ?industry } .@@  OPTI\
203 ONAL { <http://dbpedia.org/resource/IBM> <http://dbpedia.org/ontology/netIncome> ?ne\
204 tIncome } .@@  OPTIONAL { <http://dbpedia.org/resource/IBM> <http://dbpedia.org/onto\
205 logy/numberOfEmployees> ?numberOfEmployees } .@@  OPTIONAL { <http://dbpedia.org/res\
206 ource/IBM> <http://www.w3.org/2000/01/rdf-schema#label> ?label . FILTER (lang(?label\
207 ) = 'en') } @@ } LIMIT 30@@"
208 
209 
210 label: IBM
211 
212 comment: International Business Machines Corporation (IBM) is an American multinatio\
213 nal technology corporation headquartered in Armonk, New York, with operations in ove\
214 r 171 countries. The company began in 1911, founded in Endicott, New York by trust b\
215 usinessman Charles Ranlett Flint, as the Computing-Tabulating-Recording Company (CTR\
216 ) and was renamed "International Business Machines" in 1924. IBM is incorporated in \
217 New York.
218 
219 industry: http://dbpedia.org/resource/Artificial_intelligence | http://dbpedia.org/r\
220 esource/Automation | http://dbpedia.org/resource/Blockchain | http://dbpedia.org/res\
221 ource/Cloud_computing | http://dbpedia.org/resource/Computer_hardware | http://dbped\
222 ia.org/resource/Quantum_computing | http://dbpedia.org/resource/Robotics | http://db\
223 pedia.org/resource/Software
224 
225 net-income: 5.59E9
226 
227 number-of-employees: 345900
228 
229 SPARQL to get COMPANY data for <http://dbpedia.org/resource/Microsoft>:
230 
231 
232 "SELECT DISTINCT ?label ?comment (GROUP_CONCAT (DISTINCT ?industry; SEPARATOR=' | ')\
233  AS ?industry)@@ (GROUP_CONCAT (DISTINCT ?netIncome; SEPARATOR=' | ') AS ?netIncome)\
234 @@ (GROUP_CONCAT (DISTINCT ?numberOfEmployees; SEPARATOR=' | ') AS ?numberOfEmployee\
235 s) {@@ <http://dbpedia.org/resource/Microsoft> <http://www.w3.org/2000/01/rdf-schema\
236 #comment>  ?comment .@@
237                            FILTER  (lang(?comment) = 'en') .@@ OPTIONAL { <http://db\
238 pedia.org/resource/Microsoft> <http://dbpedia.org/ontology/industry> ?industry } .@@\
239   OPTIONAL { <http://dbpedia.org/resource/Microsoft> <http://dbpedia.org/ontology/ne\
240 tIncome> ?netIncome } .@@  OPTIONAL { <http://dbpedia.org/resource/Microsoft> <http:\
241 //dbpedia.org/ontology/numberOfEmployees> ?numberOfEmployees } .@@  OPTIONAL { <http\
242 ://dbpedia.org/resource/Microsoft> <http://www.w3.org/2000/01/rdf-schema#label> ?lab\
243 el . FILTER (lang(?label) = 'en') } @@ } LIMIT 30@@"
244 
245 
246 label: Microsoft
247 
248 comment: Microsoft Corporation is an American multinational technology corporation w\
249 hich produces computer software, consumer electronics, personal computers, and relat\
250 ed services. Its best known software products are the Microsoft Windows line of oper\
251 ating systems, the Microsoft Office suite, and the Internet Explorer and Edge web br\
252 owsers. Its flagship hardware products are the Xbox video game consoles and the Micr\
253 osoft Surface lineup of touchscreen personal computers. Microsoft ranked No. 21 in t\
254 he 2020 Fortune 500 rankings of the largest United States corporations by total reve\
255 nue; it was the world's largest software maker by revenue as of 2016. It is consider\
256 ed one of the Big Five companies in the U.S. information technology industry, along \
257 with Amazon, Google (Alphabet), Apple, and Facebook (
258 
259 industry: http://dbpedia.org/resource/Cloud_computing | http://dbpedia.org/resource/\
260 Computer_hardware | http://dbpedia.org/resource/Consumer_electronics | http://dbpedi\
261 a.org/resource/Corporate_venture_capital | http://dbpedia.org/resource/Internet | ht\
262 tp://dbpedia.org/resource/Social_networking_service | http://dbpedia.org/resource/So\
263 ftware_development | http://dbpedia.org/resource/Video_game_industry
264 
265 net-income: 6.06E10
266 
267 number-of-employees: 182268
268 
269 DISCOVERED RELATIONSHIP LINKS:
270 
271   <http://dbpedia.org/resource/Bill_Gates>   
272     <http://dbpedia.org/ontology/knownFor>     
273     <http://dbpedia.org/resource/Microsoft> .
274 
275 
276   <http://dbpedia.org/resource/Microsoft>    
277     <http://dbpedia.org/ontology/foundedBy>    
278     <http://dbpedia.org/resource/Bill_Gates> .
279 
280 
281   <http://dbpedia.org/resource/Microsoft>    
282     <http://dbpedia.org/property/founders>     
283     <http://dbpedia.org/resource/Bill_Gates> .
284 
285 "Enter entity names (people, places, companies, etc.":

On line 10 I entered the test phrase “Bill Gates worked at Microsoft and his competitor was IBM.” In lines 13-41 the test program prints out matching human entities from DBPedia, indexed starting at 0. On line 43 I entered 0 to choose just the first entity, “William Henry Gates III”.

The prompt on line 45 asks the user to enter the indices for the company DBPedia entities they want to use. These companies are listed in lines 47-152. On line 154 I entered “8 9” to select two entities to use.

Lines 156-171 show the automatically generated SPARQL query to get information about Bill Gates. This information is printed on lines 174-189. The listing continues with more generated SPARQL queries and results, which we will not discuss further.

Lines 269-283 show discovered links found between the entities in the input text.

In the LispWorks CAPI user interface developed in the next chapter I use two text output stream window panes, one for the generated SPARQL and one for the results.

Text User Interface Implementation

We will skip looking at the kgn-text-ui.asd and package.lisp files for this library and instead look at src/kgn-text-ui/kgn-text-ui.lisp in its entirety. When entities are identified in the input text, we find candidate DBPedia entity URIs that we present to the user. We precede each DBPedia description with an index starting at 0 and the user enters the indices for the entities to further process. For example, in the example listing in the previous section I entered “8 9” to indicate two company URIs.

  1 (in-package #:kgn-text-ui)
  2 
  3 (defun pprint-results (results)
  4   (dolist (result (car results))
  5     (terpri)
  6     (format t  "~A:" (first result))
  7     (format t " ~A~%" (second result))))
  8 
  9 
 10 (defun multiple-selections (sel-list)
 11   (if (not (null sel-list))
 12       (let ()
 13         (pprint sel-list)
 14         (format t
 15          "~%- - - - Enter zero or more indices for your desired selections:~%~%")
 16         (let ((count 0))
 17           (dolist (sel sel-list)
 18             (format t "~A  -   ~S ~%~%" count (cadr (assoc :comment (car sel))))
 19             (setf count (1+ count))))
 20         (let* ((line (read-line))
 21                (indices
 22                 (if (> (length line) 0)
 23                     (mapcar
 24                      #'parse-integer
 25                      (myutils:tokenize-string line)))))
 26           (print indices)
 27     ;(dolist (index indices)
 28     ;  (setf ret (cons (nth index str-list)
 29           indices))))
 30 
 31 ;; (kgn-text-ui::multiple-selections
 32 ;;   '("Option 1" "Option 2" "And yet another option 3"))
 33 
 34 
 35 (defun prompt-selection-list (a-list-of-choices) 
 36   ;; e.g., '((:people (("11" "data1")  ("22" "data2"))) (:places (("p1" "data3"))))
 37   (let (ret)
 38     (dolist (choice a-list-of-choices)
 39       (setf choice (remove-if #'null choice))
 40       (let* ((topic-type (car choice))
 41              (choice-list-full (rest choice))
 42              (choice-list (remove-duplicates
 43                            (map 'list #'(lambda (z)
 44                                           (list
 45                                            z
 46                                            (string-shorten
 47                                             (kgn-common:clean-comment
 48                                              (kgn-common:clean-comment (cadr z)))
 49                                             140 :first-remove-stop-words t)))
 50                                 ;; top level list flatten:
 51                                 (apply #'append choice-list-full))
 52                            :test #'equal)))
 53         (let (ret2
 54               (dialog-results (multiple-selections choice-list)))
 55           (dolist (index dialog-results)
 56             (setf ret2 (cons (nth index choice-list) ret2)))
 57           (if (> (length ret2) 0)
 58               (setf ret (cons (list topic-type (reverse ret2)) ret))))))
 59     (reverse ret)))
 60 
 61 ;; (kgn-text-ui::prompt-selection-list 
 62 ;;   '((:people (("11" "data1")  ("22" "data2")))
 63 ;;     (:places (("p1" "data3") ("p2" "data4") ("p3" "data5")))))
 64 ;; (kgn-text-ui::prompt-selection-list
 65 ;;   (get-entity-data-helper "Bill Gates went to Seattle to Microsoft"))
 66 
 67 (defun colorize-sparql (str &key (stream t))
 68   " this could be used to colorize text (as it is in kgn-capi-ui example)"
 69   ;;(declare (ignore message-stream))
 70   (declare (ignore stream))
 71   (format t "~%~S~%" str))
 72 
 73 (defun get-string-from-user (text-prompt)
 74   (format t "~%~S:~%" text-prompt)
 75   (read-line))
 76 
 77 
 78 ;; Main function
 79 
 80 (defun kgn-text-ui ()
 81   (let (prompt
 82         (message-stream t)
 83         (results-stream t))
 84     (loop
 85        while
 86         (>
 87          (length
 88            (setf prompt
 89              (get-string-from-user
 90                "Enter entity names (people, places, companies, etc.")))
 91            0)
 92          do
 93          (let* ((entity-data (get-entity-data-helper prompt :message-stream t)))
 94            (let ((user-selections (prompt-selection-list entity-data)))
 95              (dolist (ev user-selections)
 96                (if (> (length (cadr ev)) 0)
 97                    (let ()
 98                      (terpri results-stream)
 99                      (format results-stream "- - - ENTITY TYPE: ~A - - -" (car ev))
100                      ;;(terpri results-stream)
101                      (dolist (uri (cadr ev))
102                        (setf uri (assoc :s (car uri)))
103                        (case (car ev)
104                          (:people
105                           (pprint-results 
106                            (kgn-common:dbpedia-get-person-detail  
107                              uri
108                              :message-stream message-stream
109                              :colorize-sparql-function #'colorize-sparql)))
110                          (:companies
111                           (pprint-results 
112                            (kgn-common:dbpedia-get-company-detail uri
113                              :message-stream message-stream
114                              :colorize-sparql-function #'colorize-sparql)))
115                          (:countries
116                           (pprint-results
117                            (kgn-common:dbpedia-get-country-detail uri
118                              :message-stream message-stream
119                              :colorize-sparql-function #'colorize-sparql)))
120                          (:cities
121                           (pprint-results
122                            (kgn-common:dbpedia-get-city-detail    uri
123                              :message-stream message-stream
124                              :colorize-sparql-function #'colorize-sparql)))
125                          (:products
126                           (pprint-results
127                            (kgn-common:dbpedia-get-product-detail uri
128                            :message-stream message-stream
129                            :colorize-sparql-function #'colorize-sparql)))))))))))))
130 
131 ;; (kgn-text-ui:kgn-text-ui)

The utility function multiple-selections listed in lines 10-29 displays a list of user choices, prefixing each list item with a zero-based index, and reads the indices the user enters. The function prompt-selection-list listed in lines 35-59 calls multiple-selections for each entity type and collects the entities the user selected.

The commented-out code in lines 31-32 and 61-65 can be used to test these two functions.

The main function kgn-text-ui is listed in lines 80-129.
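The overall control flow of kgn-text-ui reduces to a simple prompt/process loop. Here is a minimal sketch of that pattern in isolation; process-query is a hypothetical stand-in for the entity lookup and SPARQL processing that the real function performs:

;; Minimal prompt/process loop with the same shape as kgn-text-ui.
;; PROCESS-QUERY is a hypothetical placeholder for the real work.
(defun mini-text-ui (process-query)
  (loop for line = (progn
                     (format t "~%Enter a query (empty line to quit): ")
                     (finish-output)
                     (read-line))
        while (> (length line) 0)
        do (funcall process-query line)))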

Wrap-up

In the previous chapter we implemented the Knowledge Graph Navigator library. Here we developed a text-based user interface. In the next chapter we use the library to develop a LispWorks-specific CAPI user interface.

Knowledge Graph Navigator User Interface Using LispWorks CAPI

As we have seen in the last two chapters, the Knowledge Graph Navigator (which I will often refer to as KGN) is a tool for processing a set of entity names and automatically exploring the public Knowledge Graph DBPedia using SPARQL queries. I started writing KGN for my own use, to automate some things I used to do manually when exploring Knowledge Graphs, and later thought that KGN might also be useful for educational purposes. KGN shows the user the auto-generated SPARQL queries, so hopefully the user will learn by seeing examples. KGN uses NLP code developed in earlier chapters and we will reuse that code with a short review of the APIs. Here is a screenshot showing the application we develop here:

UI for the Knowledge Graph Navigator

We will use the KGN common library developed earlier. This example replaces the text-based UI from the last chapter and requires either the free or professional version of LispWorks to run.

The code for the CAPI user interface is found in the GitHub repository https://github.com/mark-watson/kgn-capi-ui.

Project Configuration and Running the Application

The following listing of the system definition file kgn-capi-ui.asd shows the five libraries this example depends on, including #:kgn-common that was developed in an earlier chapter and is also referenced in the file package.lisp:

 1 ;;;; knowledgegraphnavigator.asd
 2 
 3 (asdf:defsystem #:kgn-capi-ui
 4   :description "top level Knowledge Graph Navigator package"
 5   :author "Mark Watson <markw@markwatson.com>"
 6   :license "Apache 2"
 7   :depends-on (#:kgn-common #:sparql #:kbnlp #:lw-grapher #:trivial-open-browser)
 8   :components ((:file "package")
 9                 (:file "kgn-capi-ui")
10                 (:file "option-pane")
11                 (:file "colorize")
12                 (:file "user-interface")))

Other dependencies specified in kgn-capi-ui.asd include trivial-open-browser, which we will use to open a web browser on DBPedia URIs with human-readable information, and the sparql library that was developed in an earlier chapter.
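The trivial-open-browser API is a single exported function that takes a URL string; for example, to open the DBPedia page for Microsoft in your default web browser:

(trivial-open-browser:open-browser
  "http://dbpedia.org/resource/Microsoft")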

Listing of package.lisp:

1 ;;;; package.lisp
2 
3 (defpackage #:kgn-capi-ui
4   (:use #:cl)
5   (:use #:kgn-common #:sparql #:lw-grapher #:trivial-open-browser)
6   (:export #:kgn-capi-ui))

The free personal edition of LispWorks does not support initialization files, so you must manually load Quicklisp from the Listener window when you first start LispWorks Personal, as seen in the following REPL listing. Once Quicklisp is loaded we use ql:quickload to load the example in this chapter (some output is removed for brevity):

CL-USER 1 > (load "~/quicklisp/setup.lisp")
; Loading text file /Users/markw/quicklisp/setup.lisp
; Loading /Applications/LispWorks Personal 7.1/...
;; Creating system "COMM"
#P"/Users/markw/quicklisp/setup.lisp"

CL-USER 2 > (ql:quickload "kgn")
To load "kgn":
  Load 1 ASDF system:
    kgn
; Loading "kgn"
.
"Starting to load data...." 
"....done loading data." 
"#P\"/Users/markw/GITHUB/common-lisp/entity-uris/entity-uris.lisp\"" 
"current directory:" 
"/Users/markw/GITHUB/common-lisp/entity-uris" 
"Starting to load data...." 
"....done loading data."
[package kgn]
To load "sqlite":
  Load 1 ASDF system:
    sqlite
; Loading "sqlite"
To load "cl-json":
  Load 1 ASDF system:
    cl-json
; Loading "cl-json"
To load "drakma":
  Load 1 ASDF system:
    drakma
; Loading "drakma"
.To load "entity-uris":
  Load 1 ASDF system:
    entity-uris
; Loading "entity-uris"
("kgn")
CL-USER 3 > (kgn:kgn)
#<KGN::KGN-INTERFACE "Knowledge Graph Navigator" 40201E91DB>

Please note that I assume you have configured all of the examples for this book for discoverability by Quicklisp as per the section Setup for Local Quicklisp Projects in Appendix A.
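If you have not done that setup yet, a quick alternative is to add the parent directory of your clones to Quicklisp's local project search list. This is a sketch; the path shown is an assumption, so substitute the directory where you cloned the example repositories:

;; Make locally cloned example libraries visible to Quicklisp.
;; The path below is an assumption; use your own clone location.
(push #p"~/GITHUB/common-lisp/" ql:*local-project-directories*)
(ql:register-local-projects)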

When the KGN application starts, a sample query is randomly chosen. Queries with many entities can take a while to process, especially when you first start using this application. Every time KGN makes a web service call to DBPedia, the query and response are cached in a SQLite database in ~/.kgn_cache.db, which can greatly speed up the program, especially during development when testing a fixed set of queries. This caching also takes some load off of the public DBPedia endpoint, which is a polite thing to do.
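The caching pattern itself is simple. The following is a minimal sketch of the idea using the sqlite library; it is not the book's sparql-cache implementation, and fetch-fn is a hypothetical stand-in for the code that calls the remote SPARQL endpoint:

;; Sketch of query caching with SQLite: return a cached response
;; if one exists, otherwise fetch it remotely and store it.
(defun cached-query (db query fetch-fn)
  (sqlite:execute-non-query
   db "create table if not exists query_cache (query text primary key, response text)")
  (or (sqlite:execute-single
       db "select response from query_cache where query = ?" query)
      (let ((response (funcall fetch-fn query)))
        (sqlite:execute-non-query
         db "insert into query_cache (query, response) values (?, ?)"
         query response)
        response)))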

I use LispWorks Professional and add two utility functions to the bottom of my ~/.lispworks configuration file (you can’t do this with LispWorks Personal):

 1 ;;; The following lines added by ql:add-to-init-file:
 2 #-quicklisp
 3 (let ((quicklisp-init
 4          (merge-pathnames
 5            "quicklisp/setup.lisp"
 6            (user-homedir-pathname))))
 7   (when (probe-file quicklisp-init)
 8     (load quicklisp-init)))
 9 
10 (defun ql (x) (ql:quickload x))
11 (defun qlp (x)
12   (ql:quickload x)
13   (SYSTEM::%IN-PACKAGE (string-upcase x) :NEW T))

Function ql is just a short alias to avoid frequently typing ql:quickload, and qlp loads a Quicklisp project and then switches (via in-package) to the Common Lisp package with the same name as the Quicklisp project.
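With these aliases defined, loading an example project and switching into its package shortens to the following (an illustrative transcript; it assumes the project defines a package with the same name as the Quicklisp system):

CL-USER 1 > (ql "kgn-common")
CL-USER 2 > (qlp "kgn-text-ui")
KGN-TEXT-UI 3 >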

Utilities to Colorize SPARQL and Generated Output

When I first had the basic functionality of KGN with a CAPI UI working, I was disappointed by how the application looked: all black text on a white background. Every editor and IDE I use colorizes text in an appropriate way, so I took advantage of the function capi::write-string-with-properties to easily implement color highlighting for SPARQL queries.

The code in the following listing is in the file kgn/colorize.lisp. When I generate SPARQL queries to show the user, I use the characters “@@” as placeholders for ends of lines in the generated output. In line 5 I ensure that there are spaces around these characters so they get tokenized properly. In the loop starting at line 7 I process the tokens, checking each one to see if it should have a color associated with it when it is written to the output stream.

 1 (in-package #:kgn)
 2 
 3 (defun colorize-sparql (s  &key (stream nil))
 4   (let ((tokens (tokenize-string-keep-uri
 5                     (replace-all s "@@" " @@ ")))
 6         in-var)
 7     (dolist (token tokens)
 8       (if (> (length token) 0)
 9           (if (or in-var (equal token "?"))
10               (capi::write-string-with-properties
11                 token
12                 '(:highlight :compiler-warning-highlight)
13                 stream)
14             (if (find token '("where" "select" "distinct" "option" "filter"
15                               "FILTER" "OPTION" "DISTINCT"
16                               "SELECT" "WHERE")
17                       :test #'equal)
18                 (capi::write-string-with-properties 
19                   token
20                   '(:highlight :compiler-note-highlight)
21                   stream)
22               (if (equal (subseq token 0 1) "<")
23                   (capi::write-string-with-properties
24                     token
25                     '(:highlight :bold)
26                     stream)
27                 (if (equal token "@@")
28                     (terpri stream)
29                   (if (not (equal token "~")) (write-string token stream)))))))
30       (if (equal token "?")
31           (setf in-var t)
32         (setf in-var nil))
33       (if (and
34            (not in-var)
35            (not (equal token "?")))
36           (write-string " " stream)))
37     (terpri stream)))

Here is an example call to function colorize-sparql:

KGN 25 > (colorize-sparql "select ?s ?p  where {@@  ?s ?p \"Microsoft\" } @@  FILTER\
 (lang(?comment) = 'en')")
select ?s ?p where { 
 ?s ?p "Microsoft" } 
 FILTER ( lang ( ?comment ) = 'en' ) 

Main Implementation File kgn-capi-ui.lisp

  1 ;;----------------------------------------------------------------------------
  2 ;; To try it, compile and load this file and then execute:
  3 ;;
  4 ;;      (kgn-capi-ui:kgn-capi-ui)
  5 ;;
  6 ;;----------------------------------------------------------------------------
  7 ;; Copyright (c) 2020-2022 Mark Watson. All rights reserved.
  8 ;;----------------------------------------------------------------------------
  9 
 10 (in-package #:kgn-capi-ui)
 11 
 12 (defvar *width* 1370)
 13 (defvar *best-width* 1020)
 14 (defvar *show-info-pane* t)
 15 
 16 (defvar *pane2-message*
 17   "In order to process your query a series of SPARQL queries will be formed based on\
 18  the query. These generated SPARQL queries will be shown here and the results of the\
 19  queries will be formatted and displayed in the results display pane below.")
 20 
 21 (defvar *pane3-message*
 22   "Enter a query containing entities like people's names, companies, places, etc. fol\
 23 lowing by the RETURN key to start processing your query. You can also directly use a\
 24  DBPedia URI for an entity, for example: <http://dbpedia.org/resource/Apple_Inc.> Wh\
 25 en you start this application, a sample query is randomly chosen to get you started.\
 26 ")
 27 
 28 (defun test-callback-click (selected-node-name)
 29   (ignore-errors
 30     (format nil "* user clicked on node: ~A~%" selected-node-name)))
 31 
 32 (defun test-callback-click-shift (selected-node-name)
 33   (ignore-errors
 34     (if (equal (subseq selected-node-name 0 5) "<http")
 35         (trivial-open-browser:open-browser 
 36          (subseq selected-node-name 1 (- (length selected-node-name) 1))))
 37     (format nil "* user shift-clicked on node: ~A - OPEN WEB BROWSER~%" selected-nod\
 38 e-name)))
 39 
 40 (defun cache-callback (&rest x) (declare (ignore x))
 41   (if *USE-CACHING*
 42       (capi:display (make-instance 'options-panel-interface))))
 43 
 44 (defun website-callback (&rest x) (declare (ignore x)) (trivial-open-browser:open-br\
 45 owser "http://www.knowledgegraphnavigator.com/"))
 46 
 47 (defun toggle-grapher-visibility (&rest x)
 48   (declare (ignore x))
 49   (setf *show-info-pane* (not *show-info-pane*)))
 50 
 51 (defvar *examples*)
 52 (setf *examples* '("Bill Gates and Melinda Gates at Microsoft in Seattle"
 53                    "Bill Clinton <http://dbpedia.org/resource/Georgia_(U.S._state)>"
 54                    "Bill Gates and Steve Jobs visited IBM and Microsoft in Berlin, S\
 55 an Francisco, Toronto, Canada"
 56                    "Steve Jobs lived near San Francisco and was a founder of <http:/\
 57 /dbpedia.org/resource/Apple_Inc.>"
 58                    "<http://dbpedia.org/resource/Bill_Gates> visited IBM"
 59                    "<http://dbpedia.org/resource/Bill_Gates> visited <http://dbpedia\
 60 .org/resource/Apple_Inc.>"
 61                    "Bill Gates visited <http://dbpedia.org/resource/Apple_Inc.>"))
 62 
 63 (capi:define-interface kgn-interface ()
 64   ()
 65   (:menus
 66    (action-menu 
 67     "Actions"
 68     (
 69      ("Copy generated SPARQL to clipboard"
 70       :callback
 71       #'(lambda (&rest x) (declare (ignore x))
 72           (let ((messages (capi:editor-pane-text text-pane2)))
 73             (capi::set-clipboard text-pane2 (format nil "---- Generated SPARQL and c\
 74 omments:~%~%~A~%~%" messages) nil))))
 75      ("Copy results to clipboard"
 76       :callback
 77       #'(lambda (&rest x) (declare (ignore x))
 78           (let ((results (capi:editor-pane-text text-pane3)))
 79             (capi::set-clipboard text-pane2 (format nil "---- Results:~%~%~A~%" resu\
 80 lts) nil))))
 81      ("Copy generated SPARQL and results to clipboard"
 82       :callback
 83       #'(lambda (&rest x) (declare (ignore x))
 84           (let ((messages (capi:editor-pane-text text-pane2))
 85                 (results (capi:editor-pane-text text-pane3)))
 86             (capi::set-clipboard
 87              text-pane2
 88              (format nil "---- Generated SPARQL and comments:~%~%~A~%~%---- Results:\
 89 ~%~%~A~%" messages results) nil))))
 90      ("Visit Knowledge Graph Navigator Web Site" :callback 'website-callback)
 91      ("Clear query cache" :callback 'cache-callback)
 92      ((if *show-info-pane*
 93           "Stop showing Grapher window for new results"
 94         "Start showing Grapher window for new results")
 95       :callback 'toggle-grapher-visibility)
 96      )))
 97   (:menu-bar action-menu)
 98   (:panes
 99    (text-pane1
100     capi:text-input-pane
101     :text (nth (random (length *examples*)) *examples*)
102     :title "Query"
103     :min-height 80
104     :max-height 100
105     :max-width *width*
106     ;;:min-width (- *width* 480)
107     :width *best-width*
108     :callback 'start-background-thread)
109 
110    (text-pane2
111     capi:collector-pane
112     :font "Courier"
113     :min-height 210
114     :max-height 250
115     :title "Generated SPARQL queries to get results"
116     :text "Note: to answer queries, this app makes multiple SPARQL queries to DBPedia\
117 . These SPARQL queries will be shown here."
118     :vertical-scroll t
119     :create-callback #'(lambda (&rest x)
120                          (declare (ignore x))
121                          (setf (capi:editor-pane-text text-pane2) *pane2-message*))
122     :max-width *width*
123     :width *best-width*
124     :horizontal-scroll t)
125 
126    (text-pane3
127     capi:collector-pane ;; capi:display-pane ;; capi:text-input-pane
128     :text *pane3-message*
129     :font "Courier"
130     :line-wrap-marker nil
131     :wrap-style :split-on-space
132     :vertical-scroll :with-bar
133     :title "Results"
134     :horizontal-scroll t
135     :min-height 220
136     :width *best-width*
137     :create-callback #'(lambda (&rest x)
138                          (declare (ignore x))
139                          (setf (capi:editor-pane-text text-pane3) *pane3-message*))
140     :max-height 240
141     :max-width *width*)
142    (info
143     capi:title-pane
144     :text "Use natural language queries to generate SPARQL"))
145   (:layouts
146    (main-layout
147     capi:grid-layout
148     '(nil info
149       nil text-pane1
150       nil text-pane2
151       nil text-pane3)
152      :x-ratios '(1 99)
153     :has-title-column-p t))
154   (:default-initargs
155    :layout 'main-layout
156    :title "Knowledge Graph Navigator"
157    :best-width *best-width*
158    :max-width *width*))
159 
160 (defun start-background-thread (query-text self)
161   (format t "~%** ** entering start-progress-bar-test-from-background-thread:~%~%sel\
162 f=~S~%~%" self)
163   (with-slots (text-pane2 text-pane3) self
164     (print text-pane2)
165     (mp:process-run-function "progress-bar-test-from-background-thread"
166                               '()
167                               'run-and-monitor-progress-background-thread
168                               query-text text-pane2 text-pane3)))
169 
170 ;; This function runs in a separate thread.
171 
172 (defun run-and-monitor-progress-background-thread (text text-pane2 text-pane3)
173   (unwind-protect
174       (setf (capi:editor-pane-text text-pane2) "")
175     (setf (capi:editor-pane-text text-pane3) "")
176     ;;(capi:display-message "done")
177     (let ((message-stream (capi:collector-pane-stream text-pane2))
178           (results-stream (capi:collector-pane-stream text-pane3)))
179       (format message-stream "# Starting to process query....~%")
180       (format results-stream *pane3-message*)
181       (let ((user-selections (prompt-selection-list (get-entity-data-helper text :me\
182 ssage-stream message-stream))))
183         (print "***** from prompt selection list:") (print user-selections)
184         (setf (capi:editor-pane-text text-pane3) "")
185         (dolist (ev user-selections)
186           (if (> (length (cadr ev)) 0)
187               (let ()
188                 (terpri results-stream)
189                 (capi::write-string-with-properties
190                  (format nil "- - - ENTITY TYPE: ~A - - -" (car ev))
191                  '(:highlight :compiler-error-highlight) results-stream)
192                 ;;(terpri results-stream)
193                 (dolist (uri (cadr ev))
194                   (setf uri (car uri))
195                   (case (car ev)
196                     (:people
197                      (pprint-results 
198                       (kgn-common:dbpedia-get-person-detail  uri :message-stream mes\
199 sage-stream :colorize-sparql-function #'colorize-sparql)
200                       :stream results-stream))
201                     (:companies
202                      (pprint-results 
203                       (kgn-common:dbpedia-get-company-detail uri :message-stream mes\
204 sage-stream :colorize-sparql-function #'colorize-sparql)
205                       :stream results-stream))
206                     (:countries
207                      (pprint-results
208                       (kgn-common:dbpedia-get-country-detail uri :message-stream mes\
209 sage-stream :colorize-sparql-function #'colorize-sparql)
210                       :stream results-stream))
211                     (:cities
212                      (pprint-results
213                       (kgn-common:dbpedia-get-city-detail    uri :message-stream mes\
214 sage-stream :colorize-sparql-function #'colorize-sparql)
215                       :stream results-stream))
216                     (:products
217                      (pprint-results
218                       (kgn-common:dbpedia-get-product-detail uri :message-stream mes\
219 sage-stream :colorize-sparql-function #'colorize-sparql)
220                       :stream results-stream)))))))
221 
222         (let (links x)
223           (dolist (ev user-selections)
224             (dolist (uri (second ev))
225               (setf uri (car uri))
226               (if (> (length ev) 2)
227                   (setf x (caddr ev)))
228               (setf links (cons (list (symbol-name (first ev)) uri x) links)))
229 
230           (setf
231            links
232            (append
233             links
234             (entity-results->relationship-links
235              user-selections
236              :message-stream message-stream))))
237 
238           (if
239               *show-info-pane*
240               (lw-grapher:make-info-panel-grapher '("PEOPLE" "COMPANIES" "COUNTRIES"\
241  "CITIES" "PRODUCTS" "PLACES")
242                                                   links 'test-callback-click 'test-c\
243 allback-click-shift))) ;; do  not use #' !!
244         (terpri results-stream)
245         (princ "** Done with query **" results-stream)))))
246 
247    
248 
249 ;; MAIN entry point for application:
250 
251 (defun kgn-capi-ui ()
252   ;;(ignore-errors (create-dbpedia))
253   (capi:display (make-instance 'kgn-interface)))
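After the system is loaded with Quicklisp, the application is started by calling the exported main entry point:

CL-USER 3 > (ql:quickload "kgn-capi-ui")
CL-USER 4 > (kgn-capi-ui:kgn-capi-ui)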

User Interface Utilities File user-interface.lisp

In the previous chapter the function prompt-selection-list was defined in the file kgn-text-ui/kgn-text-ui.lisp for a text-based (console) UI. Here it is implemented in a separate file user-interface.lisp in the project directory kgn-capi-ui.

 1 (in-package #:kgn-capi-ui)
 2 
 3 ;; (use-package "CAPI")
 4 
 5 (defun prompt-selection-list (a-list-of-choices) 
 6   (let (ret)
 7     (dolist (choice a-list-of-choices)
 8       (setf choice (remove-if #'null choice))
 9       (let* ((topic-type (car choice))
10              (choice-list-full (rest choice))
11              (choice-list (remove-duplicates
12                            (map 'list #'(lambda (z)
13                                           (list
14                                            z ;; (first z)
15                                            (string-shorten
16                                             (kgn-common:clean-comment
17                                              (kgn-common:clean-comment (cadr z)))
18                                             140 :first-remove-stop-words t)))
19                                 (apply #'append choice-list-full))
20                            :test #'equal)))
21         (let ((dialog-results (alexandria:flatten
22                                (capi:prompt-with-list ;; SHOW SELECTION LIST
23                                    (map 'list #'second choice-list)
24                                    (symbol-name topic-type)
25                                    :interaction :multiple-selection
26                                    :choice-class 'capi:button-panel
27                                    :pane-args '(:visible-min-width 910
28                                    :layout-class capi:column-layout))))
29               (ret2))
30           (dolist (x choice-list)
31             (if (find (second x) dialog-results)
32                 (setf ret2 (cons (car x) ret2))))
33           (if (> (length ret2) 0)
34               (setf ret (cons (list topic-type (reverse ret2)) ret))))))
35     (reverse ret)))
36 
37 ;; (get-entity-data-helper "Bill Gates went to Seattle to Microsoft")
38 ;; (prompt-selection-list
39 ;;   (get-entity-data-helper
40 ;;     "Bill Gates went to Seattle to Microsoft"))
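The heart of this version is capi:prompt-with-list, which pops up a modal selection dialog and returns the chosen items. A minimal sketch of the call (LispWorks only; the fruit list is just illustrative data):

;; Pop up a multiple-selection dialog and return the chosen items.
(capi:prompt-with-list '("apple" "banana" "cherry")
                       "Choose one or more fruits"
                       :interaction :multiple-selection)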

User Interface CAPI Options Panes Definition File option-pane.lisp

In the following listing we define the options panel interface that shows the size of the local query cache, lets the user clear the cache, and toggles the graph display window:

  1 (in-package #:kgn-capi-ui)
  2 
  3 ;; options for:
  4 ;;  1. programming language to generate code snippets for
  5 ;;  2. colorization options (do we really need this??)
  6 ;;  3. show disk space used by caching
  7 ;;  4. option to remove local disk cache
  8 
  9 (defvar *width-options-panel* 800)
 10 
 11 (defun get-cache-disk-space ()
 12   (let ((x (ignore-errors
 13              (floor
 14                (/
 15                 (with-open-file
 16                   (file "~/Downloads/knowledge_graph_navigator_cache.db")
 17                  (file-length file)) 1000000)))))
 18     (or x  0))) ;; units in megabytes
 19 
 20 (defun clear-cache-callback (&rest val)
 21   (declare (ignore val))
 22   (ignore-errors (delete-file "~/Downloads/knowledge_graph_navigator_cache.db")))
 23 
 24 (defvar *code-snippet-language* nil)
 25 (defun set-prog-lang (&rest val)
 26   (format t "* set-prog-lang: val=~S~%" val)
 27   (setf *code-snippet-language* (first val)))
 28 
 29 (capi:define-interface options-panel-interface ()
 30   ()
 31   (:panes
 32    #|
 33    (prog-lang-pane
 34     capi:option-pane
 35     :items '("No language set" "Python" "Common Lisp")
 36     :visible-items-count 6
 37     :selection (if (equal *code-snippet-language* nil)
 38                    0
 39                  (if (equal *code-snippet-language* "No language set")
 40                      0
 41                    (if (equal *code-snippet-language* "Python")
 42                        1
 43                      (if (equal *code-snippet-language* "Common Lisp")
 44                          2
 45                        0))))
 46     :interaction :single-selection
 47     :selection-callback
 48     'set-prog-lang)|#
 49    (disk-space-pane
 50      capi:text-input-pane
 51     :text (format nil "~A (megabytes)"
 52                   (let ((x
 53                          (ignore-errors
 54                            (floor
 55                              (/
 56                                (with-open-file (file "~/.kgn_cache.db")
 57                                  (file-length file))
 58                                 1000000)))))
 59                     (if x
 60                         x
 61                       0)))
 62     :title "Current size of cache:"
 63     :min-width 170
 64     :max-width *width-options-panel*)
 65    (clear-disk-cache-pane
 66     capi:push-button-panel
 67     ;;:title "Clear local query cache:"
 68     :items 
 69     '("Clear local query cache")
 70     :selection-callback 
 71     #'(lambda (&rest val)
 72         (declare (ignore val))
 73         (ignore-errors (delete-file "~/.kgn_cache.db"))
 74         (ignore-errors (setf (capi:text-input-pane-text disk-space-pane)
 75                              "0 (megabytes)"))))
 76    (toggle-graph-display
 77     capi:option-pane
 78     :items '("Show Graph Info Pane Browser" "Hide Graph Info Pane Browser")
 79     :selected-item (if *show-info-pane* 0 1)
 80     ;;:title ""
 81     :selection-callback 'toggle-grapher-visibility))
 82 
 83   (:layouts
 84    (main-layout
 85     capi:grid-layout
 86     '(nil disk-space-pane
 87     nil clear-disk-cache-pane)
 88     :x-ratios '(1 99)
 89     :has-title-column-p nil))
 90   (:default-initargs
 91    :layout 'main-layout
 92    :title "Knowledge Graph Navigator Options"
 93    :max-width *width-options-panel*))
 94 
 95 ;; MAIN entry point for application:
 96 
 97 
 98 ;;         (capi:display (make-instance 'options-panel-interface))
 99 
100 (defun ui2 () (capi:display (make-instance 'options-panel-interface)))

The popup list in the last example looks like:

Popup list shows the user possible entity resolutions for each entity found in the input query. The user selects the resolved entities to use.

In this example there were two “Bill Gates” entities, one an early American frontiersman and the other the founder of Microsoft; I chose the latter person and continued finding information about him.

Using LispWorks CAPI UI Toolkit

You can use the free LispWorks Personal Edition for running KGN. Other Common Lisp implementations like Clozure-CL and SBCL will not work because the CAPI user interface library is proprietary to LispWorks. Good resources for learning CAPI are the official CAPI documentation and the example programs that are included with LispWorks.

I am not going to spend too much time in this chapter explaining my CAPI-based code. If you use LispWorks (either the free Personal or the Professional edition) you are likely to use CAPI, and spending time with the official documentation, and especially the included example programs, is strongly recommended.

In the next section I will review the KGN specific application parts of the CAPI-based UI.

The following figure shows a popup window displaying a graph of discovered entities and relationships:

UI for info-pane-grapher

Since I just showed the info-pane-grapher, this is a good time to digress to its implementation. It is in a different package and you will find the source code in src/lw-grapher/info-pane-grapher.lisp. I used the graph layout algorithm from the ISI-Grapher Manual (by Gabriel Robbins). There is another utility in src/lw-grapher/lw-grapher.lisp that displays a graph without mouse support or an attached information pane; it is not used here, but you might prefer it for reuse in your projects if you don’t need mouse interactions.

The graph nodes are derived from the class capi:pinboard-object:

1 (defclass text-node (capi:pinboard-object)
2   ((text :initarg :text :reader text-node-text)
3    (string-x-offset :accessor text-node-string-x-offset)
4    (string-y-offset :accessor text-node-string-y-offset)))

I customized how my graph nodes are drawn in a graph pane (this is derived from LispWorks example code):

 1 (defmethod capi:draw-pinboard-object (pinboard (self text-node)
 2                                                &key &allow-other-keys)
 3   (multiple-value-bind (X Y  width height)
 4       (capi:static-layout-child-geometry self)
 5     (let* ((half-width  (floor (1- width)  2))
 6            (half-height (floor (1- height) 2))
 7            (circle-x (+ X half-width))
 8            (circle-y (+ Y half-height))
 9            (background :white)
10            (foreground (if background
11                            :black
12                          (capi:simple-pane-foreground pinboard)))
13            (text (text-node-text self)))
14         (gp:draw-ellipse pinboard
15                            circle-x circle-y
16                            half-width half-height
17                            :filled t
18                            :foreground background)
19         (gp:draw-ellipse pinboard
20                          circle-x circle-y
21                          half-width half-height
22                          :foreground foreground)
23         (gp:draw-string pinboard
24                         text
25                         (+ X (text-node-string-x-offset self))
26                         (+ Y (text-node-string-y-offset self))
27                         :foreground foreground))))

Most of the work is done in the graph layout method that uses Gabriel Robins’ algorithm. Here I just show the signature; we won’t go into the implementation. If you are interested in modifying the layout code, I include a screen shot from the ISI-Grapher manual showing the algorithm on a single page; see the file src/lw-grapher/Algorithm from ISI-Grapher Manual.png.

The following code snippets show the method signature for the layout algorithm function in the file src/lw-grapher/grapher.lisp. I also include the call to capi:graph-pane-nodes that is the CLOS reader method for getting the list of node objects in a graph pane:

1 (defun graph-layout (self &key force)
2   (declare (ignore force))
3   (let* ((nodes (capi:graph-pane-nodes self))
4     ...

The CAPI graph node model uses a function that is passed a node object and returns a list of this node’s child node objects. There are several examples of this in the CAPI graph examples that are included with LispWorks (see the CAPI documentation).
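For example, a children function for a graph whose edges are stored as a list of (parent child) node name pairs might look like the following minimal sketch (the edge data here is made up for illustration):

;; Hypothetical edge data: a list of (parent child) node name pairs:
(defvar *edges* '(("n1" "n2") ("n1" "n3") ("n2" "n4")))

(defun node-children (node-name)
  "Return the child node names of NODE-NAME according to *edges*."
  (loop for (from to) in *edges*
        when (equal from node-name)
          collect to))

;; (node-children "n1") => ("n2" "n3")

This is essentially what the local function info-panel-node-children-helper does in the listing below.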

In src/lw-grapher/lw-grapher.lisp I wrote a function that builds a graph layout and instead of passing in a “return children” function I found it more convenient to wrap this process, accepting a list of graph nodes and graph edges as function arguments:

  1 (in-package :lw-grapher)
  2 
  3 ;; A Grapher (using the layout algorithm from the ISI-Grapher
  4 ;; user guide) with an info panel
  5 
  6 (defun make-info-panel-grapher (h-root-name-list h-edge-list
  7                                 h-callback-function-click
  8                                 h-callback-function-shift-click)
  9   (let (edges roots last-selected-node node-callback-click
 10 	  node-callback-click-shift output-pane)
 11     (labels
 12         ((handle-mouse-click-on-pane (pane x y)
 13            (ignore-errors
 14              (let ((object (capi:pinboard-object-at-position pane x y)))
 15                (if object
 16                    (let ()
 17                      (if last-selected-node
 18                          (capi:unhighlight-pinboard-object pane
 19                              last-selected-node))
 20                      (setf last-selected-node object)
 21                      (capi:highlight-pinboard-object pane object)
 22                      (let ((c-stream (collector-pane-stream output-pane))) 
 23                        (format c-stream
 24                          (funcall node-callback-click
 25                            (text-node-full-text object)))
 26                        (terpri c-stream)))))))
 27          (handle-mouse-click-shift-on-pane (pane x y)
 28            (ignore-errors
 29              (let ((object
 30                     (capi:pinboard-object-at-position pane x y)))
 31                (if object
 32                    (let ()
 33                      (if last-selected-node
 34                          (capi:unhighlight-pinboard-object
 35                            pane last-selected-node))
 36                      (setf last-selected-node object)
 37                      (capi:highlight-pinboard-object pane object)
 38                      (let ((c-stream
 39                              (collector-pane-stream output-pane)))
 40                        (format c-stream
 41                          (funcall node-callback-click-shift
 42                            (text-node-full-text object)))
 43                        (terpri c-stream)))))))
 44          
 45          (info-panel-node-children-helper (node-text)
 46            (let (ret)
 47              (dolist (e edges)
 48                (if (equal (first e) node-text)
 49                    (setf ret (cons (second e) ret))))
 50              (reverse ret)))
 51          
 52          (make-info-panel-grapher-helper
 53            (root-name-list edge-list callback-function-click
 54             callback-function-click-shift)
 55            ;; example: root-name-list: '("n1") edge-list:
 56            ;;   '(("n1" "n2") ("n1" "n3"))
 57            (setf edges edge-list
 58                  roots root-name-list
 59                  node-callback-click callback-function-click
 60                  node-callback-click-shift callback-function-click-shift)
 61            (capi:contain 
 62 
 63             (make-instance
 64              'column-layout
 65              :title "Entity Browser"
 66              :description
 67              (list
 68               (make-instance 'capi:graph-pane
 69                              :min-height 330
 70                              :max-height 420
 71                              :roots roots
 72                              :layout-function 'graph-layout
 73                              :children-function #'info-panel-node-children-helper
 74                              :edge-pane-function 
 75                              #'(lambda(self from to)
 76                                  (declare (ignore self))
 77                                  (let ((prop-name ""))
 78                                    (dolist (edge edge-list)
 79                                      (if (and
 80                                           (equal from (first edge))
 81                                           (equal to (second edge)))
 82                                          (if (and (> (length edge) 2) (third edge))
 83                                              (let ((last-index
 84                                                      (search
 85                                                       "/" (third edge) 
 86                                                       :from-end t)))
 87                                                (if last-index
 88                                                    (setf prop-name 
 89                                                     (subseq (third edge) 
 90                                                     (1+ last-index)))
 91                                                  (setf prop-name (third edge)))))))
 92                                    (make-instance 
 93                                     'capi:labelled-arrow-pinboard-object
 94                                     :data (format nil "~A" prop-name))))
 95                              :node-pinboard-class 'text-node
 96                              :input-model `(((:button-1 :release)
 97                                              ,#'(lambda (pane x y)
 98                                                   (handle-mouse-click-on-pane 
 99                                                     pane x y)))
100                                             ((:button-1 :release :shift) ;; :press)
101                                              ,#'(lambda (pane x y)
102                                                   (handle-mouse-click-shift-on-pane 
103                                                     pane x y))))
104                              :node-pane-function 'make-text-node)
105               (setf
106                output-pane
107                (make-instance 'capi:collector-pane
108                               :min-height 130
109                               :max-height 220
110                               :title "Message collection pane"
111                               :text "..."
112                               :vertical-scroll t
113                               :horizontal-scroll t))))
114             :title 
115     "Info Pane Browser: mouse click for info, mouse click + shift for web browser"
116             
117             :best-width 550 :best-height 450)))
118       (make-info-panel-grapher-helper h-root-name-list
119         h-edge-list h-callback-function-click
120         h-callback-function-shift-click))))

Wrap-up

This is a long example application for a book so I did not discuss all of the code in the project. If you enjoy running and experimenting with this example and want to modify it for your own projects then I hope that I provided a sufficient road map for you to do so.

I got the idea for the KGN application because I was spending quite a bit of time manually setting up SPARQL queries for DBPedia (and other public sources like WikiData) and I wanted to experiment with partially automating this process. I wrote the CAPI user interface for fun, since this example application could have provided similar functionality as a command line tool.

Using the OpenAI and Mistral APIs

I have been working as an artificial intelligence practitioner since 1982 and the capability of the beta OpenAI APIs is the most impressive thing that I have seen in my career so far. These APIs use the GPT-3 and GPT-4 models.

Note: in January 2024 I expanded this chapter to also include the hosted Mistral LLM APIs. I recently added material for locally hosted open weight models like mistral-7b and llama2 to my Racket Scheme AI (link to read online) book. I plan on adding that material to this book soon.

History of OpenAI

OpenAI has been working on large language models (LLMs) since they were founded in December 2015. LLMs are artificial neural networks that can contain up to a trillion weights, and are trained using self-supervised learning and semi-supervised learning. OpenAI’s LLMs can respond to written prompts with various types of content. The release of ChatGPT in November 2022 mainstreamed the idea that generative artificial intelligence could be used by companies and consumers to automate tasks, help with creative ideas, and write software in many different programming languages.

Common Lisp Library for Using OpenAI APIs

I recommend reading the online documentation for the APIs to see all the capabilities of the beta OpenAI APIs. Let’s start by jumping into the example code. As seen in the package.lisp file we use the UIOP and cl-json libraries and we export three top level functions:

1 ;;;; package.lisp
2 
3 (defpackage #:openai
4   (:use #:cl #:uiop #:cl-json)
5   (:export #:completions #:summarize #:answer-question))

The library that I wrote for this chapter supports three functions that are exported from the package openai: for completing text, summarizing text, and answering general questions. The single OpenAI model that the beta OpenAI APIs use is fairly general purpose: it can generate cooking directions when given an ingredient list, correct grammar, write an advertisement from a product description, generate spreadsheet data from data descriptions in English text, etc.

Given the examples from https://beta.openai.com and the Common Lisp examples here, you should be able to modify my example code to use any of the functionality that OpenAI documents.

1 ;;;; openai.asd
2 
3 (asdf:defsystem #:openai
4   :description "Library for using the beta OpenAI APIs"
5   :author "Mark Watson"
6   :license "Apache 2"
7   :depends-on (#:uiop #:cl-json)
8   :components ((:file "package")
9                (:file "openai")))

We will look closely at the function completions and then just look at the small differences in the other two example functions. The definitions for all three exported functions are kept in the file openai.lisp. You need to request an API key (I had to wait a few weeks to receive my key) and set the value of the environment variable OPENAI_KEY to your key. You can add a statement like:

export OPENAI_KEY=sa-hdffds7&dhdhsdgffd

to your .profile or other shell resource file.

While I sometimes use pure Common Lisp libraries to make HTTP requests, I prefer running the curl utility as a separate process for these reasons:

  • No problems with system specific dependencies.
  • Use the standard library UIOP to run a shell command and capture the output as a string.
  • I use curl from the command line when experimenting with web services. After I get working curl options, it is very easy to translate this into Common Lisp code.

An example curl command line call to the beta OpenAI APIs is:

1 curl \
2   https://api.openai.com/v1/engines/davinci/completions \
3    -H "Content-Type: application/json"
4    -H "Authorization: Bearer sa-hdffds7&dhdhsdgffd" \
5    -d '{"prompt": "The President went to Congress", \
6         "max_tokens": 22}'

Here the API token “sa-hdffds7&dhdhsdgffd” on line 4 is made up - that is not my API token. All of the OpenAI APIs expect JSON data with query parameters. To use the completion API, we set values for prompt and max_tokens. The value of max_tokens is the requested number of returned words (or tokens). We will look at several examples later.

In the file openai.lisp we start with a helper function openai-helper that takes a string with the OpenAI API call arguments encoded as a curl command, calls the service, and then extracts the results from the returned JSON data:

 1 (defun openai-helper (curl-command)
 2   (let ((response
 3           (uiop:run-program
 4            curl-command
 5            :output :string)))
 6     (with-input-from-string
 7         (s response)
 8       (let* ((json-as-list (json:decode-json s)))
 9         ;; extract text (this might change if OpenAI changes JSON return format):
10         (cdr (assoc :content
11                (cdr (assoc :message
12                       (cadr (assoc :choices json-as-list))))))))))

I convert JSON data to a Lisp list in line 8 and in line 10 I reach into the nested results list for the generated text string. You might want to add a debug printout statement to see the value of json-as-list.
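For reference, a chat completion response has roughly the following shape after json:decode-json converts it into nested association lists (this is a hypothetical, trimmed example used only to trace the assoc chain; real responses contain more fields):

;; Hypothetical decoded response (cl-json turns JSON objects into
;; alists and JSON arrays into lists):
(defvar *example-response*
  '((:choices ((:index . 0)
               (:message (:role . "assistant")
                         (:content . "Hello!"))))))

;; Tracing the extraction used in openai-helper:
(cdr (assoc :content
       (cdr (assoc :message
              (cadr (assoc :choices *example-response*))))))
;; => "Hello!"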

The three example functions all use this openai-helper function. The first example function completions sets the parameters to complete a text fragment. You have probably seen examples of the OpenAI GPT-4 model writing stories, given a starting sentence. We are using the same model and functionality here:

 1 (defun completions (starter-text max-tokens)
 2   (let* ((input-text (write-to-string starter-text))
 3          (d
 4           (cl-json:encode-json-to-string
 5            `((:messages . (((:role . "user") (:content . ,input-text))))
 6              (:model . "gpt-4")
 7              (:max_tokens . ,max-tokens))))
 8          (curl-command
 9           (concatenate
10            'string
11            "curl " model-host
12            " -H \"Content-Type: application/json\""
13            " -H \"Authorization: Bearer " (uiop:getenv "OPENAI_KEY") "\" " 
14            " -d '" d "'")))
15     (openai-helper curl-command)))
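Note that this listing references model-host, which is defined elsewhere in the openai.lisp source file and is not shown here. Presumably it holds the chat completions endpoint, matching the message-style JSON payload built above and parsed by openai-helper, along the lines of:

(defvar model-host "https://api.openai.com/v1/chat/completions")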

Note that the OpenAI API models are stochastic. When generating output words (or tokens), the model assigns probabilities to possible words to generate and samples a word using these probabilities. As a simple example, suppose that for the prompt text “it fell and” the model could generate only three words, with these probabilities:

  • the 0.8
  • that 0.1
  • a 0.1

The model would emit the word the 80% of the time, the word that 10% of the time, or the word a 10% of the time. As a result, the model can generate different completion text for the same text prompt.
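Sampling from such a distribution can be sketched in a few lines of Common Lisp (an illustration of the idea only, not OpenAI’s actual sampler, which also applies temperature and other controls):

(defun sample-token (choices)
  "CHOICES is a list of (token . probability) pairs summing to 1.0.
Return a token sampled according to the probabilities."
  (let ((r (random 1.0))
        (cumulative 0.0))
    (dolist (pair choices (car (first (last choices))))
      (incf cumulative (cdr pair))
      (when (< r cumulative)
        (return (car pair))))))

;; (sample-token '(("the" . 0.8) ("that" . 0.1) ("a" . 0.1)))
;; returns "the" about 80% of the time

Let’s look at some examples. We request 22 output tokens (words or punctuation) in the first two examples and 100 tokens in the third example: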

 1 cl-user> (openai:completions "The President went to Congress" 22)
 2 " yesterday and proposed a single tax rate for all corporate taxpayers, which he env\
 3 isions will be lower than what our"
 4 
 5 cl-user> (openai:completions "The President went to Congress" 22)
 6 " last month, asking for authorization of a program, which had previously been appro\
 7 ved by the Foreign Intelligence Surveillance court as"
 8 
 9 cl-user> (openai:completions "The President went to Congress" 100)
10 " worried about what the massive unpopular bill would do to his low approvals. Democ\
11 rats lost almost every situation to discuss any legislation about this controversial\
12  subject. Even more so, President Obama failed and had to watch himself be attacked \
13 by his own party for not leading.
14 
15 There were also two celebrated (in DC) pieces of student loan legislation, which aim\
16 ed to make college cheaper. Harkin teamed up with Congressman Roddenbery on one, Stu\
17 dent Loan Affordability Act, and Senator Jack Reed (D"
18 cl-user> 

The function summarize is very similar to the function completions except the JSON data passed to the API has a few additional parameters that let the API know that we want a text summary:

  • presence_penalty - penalize words found in the original text (we set this to zero)
  • temperature - higher values increase the randomness used to select output tokens. If you set this to zero, then the same prompt text will always yield the same results (I never use a zero value).
  • top_p - also affects randomness. All examples I have seen use a value of 1.
  • frequency_penalty - penalize using the same words repeatedly (I usually set this to zero, but you should experiment with different values)

When summarizing text, try varying the number of generated tokens to get shorter or longer summaries; in the following two examples we ask for 15 output tokens and 50 output tokens:

 1  (defvar s "Jupiter is the fifth planet from the Sun and the largest in the Solar Sy\
 2 stem. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-ha\
 3 lf times that of all the other planets in the Solar System combined. Jupiter is one \
 4 of the brightest objects visible to the naked eye in the night sky, and has been kno\
 5 wn to ancient civilizations since before recorded history. It is named after the Rom\
 6 an god Jupiter.[19] When viewed from Earth, Jupiter can be bright enough for its ref\
 7 lected light to cast visible shadows,[20] and is on average the third-brightest natu\
 8 ral object in the night sky after the Moon and Venus.")
 9 
10 cl-user> (openai:summarize s 15)
11 "Jupiter is a gas giant because it is primarily composed of hydrogen"
12 
13 cl-user> (openai:summarize s 50)
14 "Jupiter is a gas giant because it is predominantly composed of hydrogen and helium;\
15  it has a solid core that is composed of heavier elements. It is the largest of the \
16 four giant planets in the Solar System and the largest in the Solar System"

The function answer-question is very similar to the function summarize except the JSON data passed to the API has one additional parameter that lets the API know that we want a question answered:

  • stop - The OpenAI API examples use the value: [\n], which is what I use here.

Additionally, the model returns a series of answers with the string “nQ:” acting as a delimiter between the answers.

1      (let ((index (search "nQ:" answer)))
2        (if index
3          (string-trim " " (subseq answer 0 index))
4          (string-trim " " answer)))

I strongly urge you to add a debug printout to the question answering code to print the full answer before we check for the delimiter string. For some questions, the OpenAI APIs generate a series of answers that increase in generality. In the example code we just take the most specific answer.
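For example, the delimiter check could be preceded by a debug printout (a one-line addition to the snippet above):

(format t "~%full answer from API: ~S~%" answer) ; debug printout
(let ((index (search "nQ:" answer)))
  (if index
      (string-trim " " (subseq answer 0 index))
      (string-trim " " answer)))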

Let’s look at a few question answering examples and we will discuss possible problems and workarounds:

1 cl-user> (openai:answer-question "Where is the Valley of Kings?" 60)
2 "It's in Egypt."

Let’s explore some issues with the question answering model. In the last example there is one good answer and the model works well. The next example “What rivers are in Arizona?” shows some problems because there are many rivers in Arizona. Sometimes the model misses a few rivers and often river names are repeated in the output. You also don’t necessarily get the same answer for the same input arguments. Here are three examples requesting 70, 90, and 160 output tokens:

 1 cl-user> (openai:answer-question "What rivers are in Arizona?" 70)
 2 "The Colorado River, the Gila River, the Little Colorado River, the Salt River, the \
 3 Verde River, the San Pedro River, the Santa Cruz River, the San Juan River, the Agua\
 4  Fria River, the Hassayampa River, the Bill Williams River, the Little Colorado Rive\
 5 r, the San Francisco River, the San Pedro River"
 6 
 7 cl-user> (openai:answer-question "What rivers are in Arizona?" 90)
 8 "The Colorado River, the Gila River, the Little Colorado River, the Salt River, the \
 9 Verde River, the San Pedro River, the Santa Cruz River, the San Juan River, the Agua\
10  Fria River, the Hassayampa River, the Bill Williams River, the Little Colorado Rive\
11 r, the San Francisco River, the San Pedro River, the Santa Cruz River, the San Juan \
12 River, the Agua Fria River, the Hass"
13 cl-user> (openai:answer-question "What rivers are in Arizona?" 160)
14 "Colorado, Gila, Salt, Verde, and the Little Colorado."
15 
16 cl-user> (openai:answer-question "What rivers are in Arizona?" 160)
17 "The Colorado River, the Gila River, the Little Colorado River, the Salt River, the \
18 Verde River, the San Pedro River, the Santa Cruz River, the San Juan River, the Agua\
19  Fria River, the Hassayampa River, the Bill Williams River, the Little Colorado Rive\
20 r, the San Francisco River, the San Pedro River, the Santa Cruz River, the San Juan \
21 River, the Agua Fria River, the Hassayampa River, the Bill Williams River, the Littl\
22 e Colorado River, the San Francisco River, the San Pedro River, the Santa Cruz River\
23 , the San Juan River, the Agua Fria River, the Hassayampa River, the Bill Williams R\
24 iver, the Little Colorado River, the San Francisco River, the San Pedro River, the S\
25 anta Cruz"

My library does not handle embedded single quote characters in questions (the JSON data passed to curl is wrapped in single quotes for the shell), so the question “Who is Bill Clinton’s wife?” will throw an error. Leaving out the single quote character works fine:

1 cl-user> (openai:answer-question "Who is Bill Clintons wife?" 120)
2 "Hillary Clinton."
3 cl-user> 

The function embeddings is used to convert a chunk of text to an embedding. What are embeddings? Embeddings take complex content like natural language words and sentences or software code and convert the text into a special sequence of numbers. This process lets machines model the underlying concepts and relationships within the content, just like understanding the main ideas in a book even if you don’t know every word. Embeddings are often used in RAG (Retrieval Augmented Generation) applications to work around the limits on the amount of context text an LLM can process. For example, if an LLM can only accept 8192 tokens, a RAG application might “chunk” input text into 2K segments. Using embeddings with a vector store database, we could find the 3 chunks of original text that most closely match a query. The text for these three matched chunks could be supplied to an LLM as context text for answering a question or query.

 1 ;; text-embedding-3-large embeddings are 3072 floats
 2 ;; text-embedding-3-small embeddings are 1536 floats
 3 
 4 (defun embeddings (text)
 5   (let* ((curl-command
 6           (concatenate
 7            'string
 8            "curl  https://api.openai.com/v1/embeddings "
 9            " -H \"Content-Type: application/json\""
10            " -H \"Authorization: Bearer " (uiop:getenv "OPENAI_KEY") "\" " 
11            " -d '{\"input\": \"" text "\", \"model\": \"text-embedding-3-small\"}'")\
12 ))
13 ;;           " -d '{\"input\": \"" text "\", \"model\": \"text-embedding-3-large\"}'\
14 ")))
15     (let ((response
16            (uiop:run-program
17             curl-command
18             :output :string)))
19       ;;(princ curl-command)
20       ;;(pprint response)
21       (with-input-from-string
22           (s response)
23         (let* ((json-as-list (json:decode-json s)))
24           ;;(pprint json-as-list)
25           (cdr (nth 2 (cadr (cadr json-as-list)))))))))

The following output is edited for brevity:

CL-USER 5 > (openai::embeddings "John bought a new car")
(0.0038357755 0.007082982 -7.8207086E-4 -0.003108127 -0.038506914 ...
CL-USER 6 > (length (openai::embeddings "John bought a new car"))
1536
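A common use of these embedding vectors is comparing two texts. Here is a minimal sketch (OpenAI embedding vectors are, per OpenAI’s documentation, normalized to unit length, so a plain dot product approximates cosine similarity):

(defun text-similarity (text-a text-b)
  "Return the dot product of the embedding vectors of two texts."
  (let ((a (openai::embeddings text-a))
        (b (openai::embeddings text-b)))
    (reduce #'+ (mapcar #'* a b))))

;; Related sentences, e.g. (text-similarity "John bought a new car"
;; "Mary purchased an automobile"), should score noticeably higher
;; than unrelated sentences.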

In addition to reading the beta OpenAI API documentation you might want to read general material on the use of OpenAI’s GPT-3 model. Since the APIs we are using are beta they may change. I will update this chapter and the source code on GitHub if the APIs change.

History of Mistral AI

Mistral AI is a French company that is a leader in the development and utilization of Large Language Models (LLMs). The company was founded in 2023 by a team of AI experts who previously worked at Google and Meta to harness the power of language models for various applications.

I am a fan of Mistral because they supply hosted APIs for their models as well as publicly releasing the weights for their smaller models like mistral-7b and mixtral-8x7b with commercial-friendly licensing. I run both of these models locally for much of my personal research on my Mac mini with 32G of memory. They have a larger model mistral-medium that is only available through their hosted API.

Client Library for Mistral APIs

Mistral designed their API schemas to be similar to those of OpenAI, so the client code for Mistral is very similar to what we saw in the previous sections.

The following code is found in the GitHub repository https://github.com/mark-watson/mistral:

 1 (in-package #:mistral)
 2 
 3 ;; define the environment variable "MISTRAL_API_KEY" with the value of your mistral \
 4 API key
 5 
 6 
 7 (defvar model-host "https://api.mistral.ai/v1/chat/completions")
 8 
 9 ;; Available models:
10 ;;
11 ;;   "mistral-tiny" powered by Mistral-7B-v0.2
12 ;;   "mistral-small" powered Mixtral-8X7B-v0.1, a sparse mixture of experts model wi\
13 th 12B active parameters
14 ;;   "mistral-medium" powered by a larger internal prototype model
15 ;;
16 
17 (defun mistral-helper (curl-command)
18   (let ((response
19           (uiop:run-program
20            curl-command
21            :output :string)))
22     (with-input-from-string
23         (s response)
24       (let* ((json-as-list (json:decode-json s)))
25         ;; extract text (this might change if mistral changes JSON return format):
26         (cdr (assoc :content (cdr (assoc :message (cadr (assoc :choices json-as-list\
27 ))))))))))
28 
29 (defun completions (starter-text max-tokens)
30   (let* ((d
31           (cl-json:encode-json-to-string
32            `((:messages . (((:role . "user") (:content . ,starter-text))))
33              (:model . "mistral-small")
34              (:max_tokens . ,max-tokens))))
35          (curl-command
36           (concatenate
37            'string
38            "curl " model-host
39            " -H \"Content-Type: application/json\""
40            " -H \"Authorization: Bearer " (uiop:getenv "MISTRAL_API_KEY") "\" " 
41            " -d '" d "'")))
42     (mistral-helper curl-command)))
43 
44 (defun summarize (some-text max-tokens)
45   (let* ((curl-command
46           (concatenate
47            'string
48            "curl " model-host
49            " -H \"Content-Type: application/json\""
50            " -H \"Authorization: Bearer " (uiop:getenv "MISTRAL_API_KEY") "\" " 
51            " -d '{\"messages\": [{\"role\": \"user\", \"content\": \"Summarize: " som\
52 e-text 
53            "\"}], \"model\": \"mistral-small\", \"max_tokens\": " (write-to-string m\
54 ax-tokens)  "}'")))
55     (mistral-helper curl-command)))
56 
57 (defun answer-question (question-text max-tokens)
58   (let ((q-text
59           (concatenate
60            'string
61            "\nQ: " question-text "\nA:")))
62     (completions q-text max-tokens)))
63 
64 (defun embeddings (text)
65   (let* ((curl-command
66           (concatenate
67            'string
68            "curl  https://api.mistral.ai/v1/embeddings "
69            " -H \"Content-Type: application/json\""
70            " -H \"Authorization: Bearer " (uiop:getenv "MISTRAL_API_KEY") "\" " 
71            " -d '{\"input\": [\"" text "\"], \"model\": \"mistral-embed\"}'")))
72     (let ((response
73            (uiop:run-program
74             curl-command
75             :output :string)))
76       ;;(princ curl-command)
77       ;;(pprint response)
78       (with-input-from-string
79           (s response)
80         (let ((json-as-list (json:decode-json s)))
81           ;; return a list of 1024 floats:
82           (cdr (assoc :embedding (cadr (assoc :data json-as-list)))))))))
83 
84 (defun dot-product-recursive (a b) ;; generated by Bing+ChatGPT Search
85   ;; recursive dot product of two lists of numbers
86   (if (or (null a) (null b))
87       0
88       (+ (* (first a) (first b))
89          (dot-product-recursive (rest a) (rest b)))))
90 
91 (defun dot-product (list1 list2)
92   (let ((sum 0))
93     (loop for x in list1
94           for y in list2
95           do
96                (setf sum (+ sum (* x y))))
97     sum))

Here is a simple example using the Mistral completion API:

 1 cl-user> (ql:quickload :mistral)
 2 cl-user> (mistral:completions "The President went to Congress" 200)
 3 "When the President of a country goes to Congress, it typically means that they are \
 4 making a formal address to a joint session of the legislative body. This is often do\
 5 ne to present the State of the Union address, which outlines the administration's go\
 6 als and priorities for the upcoming year. The President may also go to Congress to p\
 7 ropose new legislation, rally support for existing bills, or address important natio\
 8 nal issues.
 9 
10 During the address, members of Congress from both parties are usually present in the\
11  chamber, and they may respond with applause, standing ovations, or other forms of e\
12 xpression. The President's speech is typically broadcast live on television and radi\
13 o, and it is covered extensively by the news media.
14 
15 The practice of the President going to Congress to deliver a State of the Union addr\
16 ess dates back to the early years of the United States, when President George Washin\
17 gton gave the first such address in 1790. Since then, it has become a regular tradit\
18 ion for"

Using the Anthropic Claude LLM Completion API

The Common Lisp implementation for the Anthropic Claude APIs in this chapter is similar to what we saw in the last chapter accessing the OpenAI GPT APIs.

You will want to use the Anthropic completion API documentation for reference.

History of Anthropic and How They Differ From OpenAI

Anthropic is an AI startup and public benefit corporation founded by former members of OpenAI. The company was founded in 2021 by siblings Daniela Amodei and Dario Amodei, who were senior members of OpenAI. Dario Amodei served as OpenAI’s Vice President of Research. They left OpenAI due to directional differences, specifically regarding OpenAI’s ventures with Microsoft in 2019. As I write this in October 2023 Anthropic has just taken investment from Amazon and will use AWS infrastructure in the future.

Anthropic specializes in developing general AI systems and language models, with a company goal of supporting responsible AI usage. They have developed their own AI chatbot named Claude. Claude uses a messaging interface where users can submit questions or requests and receive highly detailed and relevant responses. We will not use the Claude chatbot here; instead we will use their text completion API. You will need to get an API key:

https://docs.anthropic.com/claude/reference/getting-started-with-the-api

The differences between Anthropic and OpenAI are mainly in their focus areas and outputs. While both companies conduct cutting-edge machine learning research, Anthropic’s research is mainly focused on generative models and how to align them with human values. On the other hand OpenAI has developed the popular ChatGPT generative AI platform that is now widely used.

Another key difference is their strategic partnerships. Anthropic has a partnership with Google and Amazon, while OpenAI has Microsoft’s backing. These partnerships influence the direction of their research and the development of their products.

In terms of products Anthropic’s Claude 2 goes head-to-head against ChatGPT. However, like OpenAI’s models, Claude 2 still exhibits bias and sometimes hallucinates incorrect responses.

Common Lisp Library for the Anthropic LLM Completion API

The following Common Lisp code is very similar to what we used in the last chapter to call OpenAI’s APIs.

The GitHub repository for this Quicklisp compatible library can be found here:

https://github.com/mark-watson/anthropic

 1 (in-package #:anthropic)
 2 
 3 ;; define the environment variable "ANTHROPIC_API_KEY" with
 4 ;; the value of your ANTHROPIC_API_KEY API key
 5 
 6 (defvar anthropic-host "https://api.anthropic.com/v1/complete")
 7 
 8 (defun anthropic-helper (curl-command)
 9   (let ((response
10           (uiop:run-program
11            curl-command
12            :output :string)))
13     (with-input-from-string
14         (s response)
15       (let* ((json-as-list (json:decode-json s)))
16         (string-trim
17           " "
18           (cdar json-as-list))))))
19 
20 (defun completions (text max-tokens)
21   (let* ((curl-command
22           (concatenate
23            'string
24            "curl --request POST --url " anthropic-host
25            " -H \"accept: application/json\""
26            " -H \"anthropic-version: 2023-06-01\""
27            " -H \"content-type: application/json\""
28            " -H \"x-api-key: " (uiop:getenv "ANTHROPIC_API_KEY") "\""
29            " --data '{ \"prompt\": \"\\n\\nHuman: "
30            text
31            "\\n\\nAssistant: \", \"max_tokens_to_sample\": "
32            (princ-to-string max-tokens)
33            ", \"model\": \"claude-instant-1\" }'")))
34     (anthropic-helper curl-command)))

Here is an example:

1 cl-user> (ql:quickload :anthropic)
2 cl-user> (anthropic:completions "The President went to Congress" 200)
3 "I don't have enough context to comment on a specific President going to Congress. A\
4 s branches of the U.S. government, it's common for Presidents to engage with Congres\
5 s on policy matters, budget proposals, nominations, and other issues within their co\
6 nstitutional roles and responsibilities."
7 cl-user> (anthropic:completions "Answer concisely. Mary is 30 years old and Bob is 2\
8 5. Who is older?" 12)
9 "Mary is older than Bob."

Anthropic Text Completion API Wrapup

I have been using the OpenAI APIs for about a year, while I just received access to Anthropic’s APIs last week, so I am still experimenting with running the same prompts against both OpenAI and Anthropic. Since there are no standards for interacting with and calling LLM APIs, there is a small overhead for switching between the Anthropic and OpenAI APIs. There is at least one open source project that provides a uniform interface for OpenAI and Anthropic if portability is important to you.

Using the Hugging Face Deep Learning Natural Language Processing APIs

Accessing the Hugging Face NLP APIs is similar to the code we used previously to access the OpenAI and Anthropic APIs.

History of Hugging Face and How They Differ from OpenAI and Anthropic

Hugging Face was founded in 2016 by French entrepreneurs Clement Delangue, Julien Chaumond, and Thomas Wolf. Hugging Face initially developed a chatbot application; after open sourcing the model behind the chatbot, the company decided to use its experience building a platform for running its own models to work instead on a general purpose platform for machine learning. They acquired Gradio, a software library used to make interactive browser demos of machine learning models. While we won’t use Gradio here because it is a Python library and platform, it is worth mentioning that I use Gradio when working on deep learning/LLM projects on Google’s Colab.

Comparing Hugging Face with OpenAI and Anthropic, Hugging Face provides an open-source platform for the machine learning community to collaborate on models, datasets, and applications. It supports users in hosting and sharing their own AI models. While OpenAI does allow developers to fine tune models on their platform, I find Hugging Face to be designed for development and collaboration. Anthropic does not currently support fine tuning their models with your data.

Common Lisp Library for Hugging Face APIs

The following Common Lisp code is very similar to what we used in the last chapter to call OpenAI’s and Anthropic’s APIs.

The GitHub repository for this Quicklisp compatible library can be found here:

https://github.com/mark-watson/huggingface

 1 (in-package #:huggingface)
 2 
 3 ;; define the environment variable "HF_API_TOKEN" with the value
 4 ;;  of your Hugging Face API key
 5 
 6 (defun huggingface-helper (curl-command)
 7   (let ((response
 8           (uiop:run-program
 9            curl-command
10            :output :string)))
11     (with-input-from-string
12         (s response)
13       (let* ((json-as-list (json:decode-json s)))
14         json-as-list))))
15 
16 (defun summarize (some-text max-tokens)
17   (let* ((curl-command
18           (concatenate
19            'string
20            "curl https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
21            " -H \"Content-Type: application/json\""
22            " -H \"Authorization: Bearer " (uiop:getenv "HF_API_TOKEN") "\" " 
23            " -d '{\"inputs\": \"" some-text "\", \"max_length\": "
24            (write-to-string max-tokens) " }'")))
25     (cdaar (huggingface-helper curl-command))))
26 
27 (defun answer-question (question-text context-text)
28   (let* ((curl-command
29           (concatenate
30            'string
31            "curl https://api-inference.huggingface.co/models/deepset/roberta-base-sq\
32 uad2"
33            " -H \"Content-Type: application/json\""
34            " -H \"Authorization: Bearer " (uiop:getenv "HF_API_TOKEN") "\" " 
35            " -d '{\"question\": \"" question-text "\", \"context\": \""
36            context-text "\" }'"))
37          (answer (huggingface-helper curl-command)))
38     (cdar (last answer))))

Here are two examples using this code:

 1 CL-USER>  (ql:quickload :huggingface)
 2 To load "huggingface":
 3   Load 1 ASDF system:
 4     huggingface
 5 ; Loading "huggingface"
 6 
 7 (:HUGGINGFACE)
 8 CL-USER> (huggingface:summarize "Jupiter is the fifth planet from the Sun and the la\
 9 rgest in the Solar System. It is a gas giant with a mass one-thousandth that of the \
10 Sun, but two-and-a-half times that of all the other planets in the Solar System comb\
11 ined. Jupiter is one of the brightest objects visible to the naked eye in the night \
12 sky, and has been known to ancient civilizations since before recorded history. It i\
13 s named after the Roman god Jupiter.[19] When viewed from Earth, Jupiter can be brig\
14 ht enough for its reflected light to cast visible shadows,[20] and is on average the\
15  third-brightest natural object in the night sky after the Moon and Venus." 30)
16 "Jupiter is the fifth planet from the Sun and the largest in the Solar System. When \
17 viewed from Earth, Jupiter can be bright enough for its reflected light to cast visi\
18 ble shadows. It is on average the third-brightest natural object in the night sky af\
19 ter the Moon and Venus. It has been known to ancient civilizations since before reco\
20 rded history."
21 
22 "Jupiter is the fifth planet from the Sun and the largest in the Solar System. When \
23 viewed from Earth, Jupiter can be bright enough for its reflected light to cast visi\
24 ble shadows. It is on average the third-brightest natural object in the night sky af\
25 ter the Moon and Venus. It has been known to ancient civilizations since before reco\
26 rded history."
27 
28 CL-USER> (huggingface:answer-question "Where were the 1992 Olympics held?" "The 1992\
29  Summer Games were the first since the end of the Cold War, and the first unaffected\
30  by boycotts since the 1972 Summer Games. The 1992 Olympics were in Greece. 1992 was\
31  also the first year South Africa was re-invited to the Olympic Games by the Interna\
32 tional Olympic Committee, after a 32-year ban from participating in international sp\
33 ort.")
34 
35 "Greece"
36 CL-USER> 

Hugging Face APIs Wrapup

I believe in supporting Hugging Face because they allow individual developers and smaller organizations to do meaningful custom work with LLMs. Hugging Face has emerged as an important public resource in the AI industry by providing an open platform for experimenting with LLMs. This platform has democratized access to AI resources, allowing researchers and developers to collaborate and share LLMs. I like Hugging Face’s commitment to open source principles. This is particularly important in an era where there is a growing concern that a few large AI companies might monopolize AI resources. By providing an open platform Hugging Face is acting as an antidote to this potential issue. It ensures that access to cutting-edge AI technology is not restricted to a select few but is available to anyone with the interest and capability to use it. This democratization of AI resources promotes diversity in AI development and helps prevent the concentration of power in a few hands. It encourages a more equitable distribution of AI benefits and mitigates the risks associated with the monopolization of AI technology.

Using a Local Document Embeddings Vector Database With OpenAI GPT3 APIs for Semantically Querying Your Own Data

This project is inspired by the Python LangChain and LlamaIndex projects, with just the parts I need for my projects written from scratch in Common Lisp. I wrote a Python book “LangChain and LlamaIndex Projects Lab Book: Hooking Large Language Models Up to the Real World Using GPT-3, ChatGPT, and Hugging Face Models in Applications” in March 2023: https://leanpub.com/langchain that you might also be interested in.

The GitHub repository for this example can be found here: https://github.com/mark-watson/docs-qa. This code also requires my OpenAI Common Lisp library https://github.com/mark-watson/openai.

Overview of Local Embeddings Vector Database to Enhance the Use of GPT3 APIs With Local Documents

In this example we will use the SqLite database to store the text from documents as well as OpenAI embedding vectors for the text. Each embedding vector is 1536 floating point numbers. Two documents are semantically similar if the dot product of their embedding vectors is large.
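For intuition, here is the dot product idea with toy 3-element vectors (real embedding vectors have 1536 elements); openai::dot-product is the helper function used later in this chapter:

;; (0.2 * 0.1) + (0.5 * 0.6) + (0.8 * 0.7) = 0.88
(openai::dot-product '(0.2 0.5 0.8) '(0.1 0.6 0.7)) ; => 0.88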

For long documents, we extract the text and create multiple chunks of text. Each chunk is stored as a row in a SqLite database table. This is an easy way to implement a vector datastore. There are many open source and commercial vector datastores if you reach performance limits with the simple techniques we use here.

For each text chunk we call an OpenAI API to get an embedding vector. Later when we want to have a GPT enabled conversation or just semantically query our local documents, we take the user’s query and call an OpenAI API to get an embedding vector for the query text. We then compute the vector dot product between the query embedding vector and each chunk embedding vector. We save the text of the chunks that are semantically similar to the query embedding vector and use this text as “context text” that we pass to an OpenAI Large Language Model (LLM) API along with the user’s original query text.

What does this process really do? Normally when we query ChatGPT or similar LLMs, we are querying against knowledge gained from all of the original model training text. This process can lead to so-called “model hallucinations” where the model “makes stuff up.” The advantage of using the Python libraries LangChain and LlamaIndex is that an LLM effectively uses all of its original training data but is also primed with hopefully relevant context text from your local documents that might be useful for answering the user’s query. We will replicate a small amount of this functionality in Common Lisp.

At the end of this chapter we will extend our code for single queries with a conversational example. Our approach to this is simple: when we pass context text and a query, we also pass previous conversational queries from the user. I am still experimenting with the ideas in this chapter so please do occasionally look for updates to the GitHub repository https://github.com/mark-watson/docs-qa and updates to this book.

Implementing a Local Vector Database for Document Embeddings

In the following listing of the file docs-qa.lisp we start in lines 6-31 with a few string utility functions we will need: write-floats-to-string, read-file, concat-strings, truncate-string, and break-into-chunks.

The function break-into-chunks is a work in progress. For now we simply cut long input texts into specific chunk lengths, often cutting words in half. A future improvement will be detecting sentence boundaries and breaking text on sentences. The Python libraries LangChain and LlamaIndex have multiple chunking strategies.
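As a sketch of that future improvement (not the code in the repository), a naive sentence-aware chunker could split after periods and pack whole sentences into chunks so that words are never cut in half:

(defun break-into-sentence-chunks (text chunk-size)
  "Sketch: split TEXT after each period and pack whole sentences
into chunks of at most CHUNK-SIZE characters (a sentence longer
than CHUNK-SIZE becomes its own chunk)."
  (let ((chunks nil)
        (current "")
        (start 0))
    (loop
      (let* ((dot (position #\. text :start start))
             (end (if dot (1+ dot) (length text)))
             (sentence (subseq text start end)))
        (when (zerop (length sentence)) (return))
        (if (> (+ (length current) (length sentence)) chunk-size)
            (progn
              (when (plusp (length current)) (push current chunks))
              (setf current sentence))
            (setf current (concatenate 'string current sentence)))
        (setf start end)
        (when (>= start (length text)) (return))))
    (when (plusp (length current)) (push current chunks))
    (nreverse chunks)))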

In lines 33-37 the function decode-row takes data from a SQL query that fetches a database table row and extracts the original chunk text and the embedding vector. Because of the overhead of making many calls to the OpenAI APIs, the time spent running the local Common Lisp example code is comparatively small, so I have not yet worked on making my code efficient.

 1 (ql:quickload :sqlite)
 2 (use-package :sqlite)
 3 
 4 ;; define the environment variable "OPENAI_KEY" with the value of your OpenAI API key
 5 
 6 (defun write-floats-to-string (lst)
 7   (with-output-to-string (out)
 8     (format out "( ")
 9     (loop for i in lst
10           do (format out "~f " i))
11     (format out " )")))
12 
13 (defun read-file (infile) ;; from Bing+ChatGPT
14   (with-open-file (instream infile
15                             :direction :input
16                             :if-does-not-exist nil)
17     (when instream
18       (let ((string (make-string (file-length instream))))
19         (read-sequence string instream)
20         string))))
21 
22 (defun concat-strings (list)
23   (apply #'concatenate 'string list))
24 
25 (defun truncate-string (string length)
26   (subseq string 0 (min length (length string))))
27 
28 (defun break-into-chunks (text chunk-size)
29   "Breaks TEXT into chunks of size CHUNK-SIZE."
30   (loop for start from 0 below (length text) by chunk-size
31         collect (subseq text start (min (+ start chunk-size) (length text)))))
32 
33 (defun decode-row (row)
34   (let ((id (nth 0 row))
35         (context (nth 1 row))
36         (embedding (read-from-string (nth 2 row))))
37     (list id context embedding)))
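For example, break-into-chunks cuts on fixed character boundaries (which is why words can be split in half):

(break-into-chunks "abcdefghij" 4) ; => ("abcd" "efgh" "ij")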

The next listing shows the parts of docs-qa.lisp that contain code to use SqLite. I wrapped the calls that initialize the database inside of handler-case for convenience during development (file reloads don’t throw top level errors and the existing database is untouched).

 1 (defvar *db* (connect ":memory:"))
 2 ;;(defvar *db* (connect "test.db"))
 3 
 4 (pprint *db*)
 5 (handler-case
 6     (progn
 7       (execute-non-query
 8        *db*
 9        "CREATE TABLE documents (document_path TEXT, content TEXT, embedding TEXT);")
10       (execute-non-query
11        *db*
12        "CREATE INDEX idx_documents_id ON documents (document_path);")
13       (execute-non-query
14        *db*
15        "CREATE INDEX idx_documents_content ON documents (content);")
16       (execute-non-query
17        *db*
18        "CREATE INDEX idx_documents_embedding ON documents (embedding);"))
19  (error (c)
20    (print "Database and indices are already created")))
21 
22 (defun insert-document (document_path content embedding)
23   ;;(format t "insert-document:~% content:~A~%  embedding: ~A~%" content embedding)
24   (format t "~%insert-document:~%  content:~A~%~%" content)
25   (execute-non-query
26    *db*
27    "INSERT INTO documents (document_path, content, embedding) VALUES (?, ?, ?);"
28    document_path content (write-floats-to-string embedding)))
29 
30 (defun get-document-by-document_path (document_path)
31   (mapcar #'decode-row
32             (execute-to-list *db*
33                              "SELECT * FROM documents WHERE document_path = ?;"
34                              document_path)))
35 
36 (defun get-document-by-content (content)
37   (mapcar #'decode-row 
38     (execute-to-list *db*
39                      "SELECT * FROM documents WHERE content LIKE ?;" content)))
40 
41 (defun get-document-by-embedding (embedding)
42  (mapcar #'decode-row 
43    (execute-to-list *db*
44                     "SELECT * FROM documents WHERE embedding LIKE ?;" embedding)))
45 
46 (defun all-documents ()
47   (mapcar #'decode-row 
48     (execute-to-list *db* "SELECT * FROM documents;")))
49 
50 (defun create-document (fpath)
51   (let ((contents (break-into-chunks (read-file fpath) 200)))
52     (dolist (content contents)
53       (handler-case	  
54 	  (let ((embedding (openai::embeddings content)))
55 	    (insert-document fpath content embedding))
56 	(error (c)
57 	       (format t "Error: ~&~a~%" c))))))

Using Local Embeddings Vector Database With OpenAI GPT APIs

The next listing shows the parts of docs-qa.lisp that interface with the OpenAI APIs:

 1 (defun qa (question)
 2   (let ((answer (openai:answer-question question 60)))
 3     (format t "~&~a~%" answer)))
 4 
 5 (defun semantic-match (query custom-context &optional (cutoff 0.7))
 6   (let ((emb (openai::embeddings query))
 7         (ret))
 8     (dolist (doc (all-documents))
 9       (let ((context (nth 1 doc)) ;; ignore fpath for now
10 	    (embedding (nth 2 doc)))
11 	(let ((score (openai::dot-product emb embedding)))
12 	  (when (> score cutoff)
13 	    (push context ret)))))
14     (format t "~%semantic-search: ret=~A~%" ret)
15     (let* ((context (join-strings " . " (reverse ret)))
16            (query-with-context
17              (join-strings
18                " "
19                (list context custom-context
20                  "Question:" query))))
21       (openai:answer-question query-with-context 40))))
22 
23 (defun QA (query &optional (quiet nil))
24   (let ((answer (semantic-match query "")))
25     (if (not quiet)
26         (format t "~%~%** query: ~A~%** answer: ~A~%~%" query answer))
27     answer))
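Note that this listing calls a join-strings helper that is not among the utility functions shown earlier. A minimal sketch of such a helper (hypothetical; the version in the repository may differ) is:

(defun join-strings (separator strings)
  "Join STRINGS into one string with SEPARATOR between elements."
  (format nil (concatenate 'string "~{~A~^" separator "~}") strings))

;; (join-strings " . " '("a" "b" "c")) => "a . b . c"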

Testing Local Embeddings Vector Database With OpenAI GPT APIs

In the next part of the listing of docs-qa.lisp we write a test function to create two documents. The two calls to create-document actually save text and embeddings for about 20 text chunks in the database.

1 (defun test()
2   "Test code for Semantic Document Search Using
3    OpenAI GPT APIs and local vector database"
4   (create-document "data/sports.txt")
5   (create-document "data/chemistry.txt")
6   (QA "What is the history of the science of chemistry?")
7   (QA "What are the advantages of engaging in sports?"))

The output is (with a lot of debug printout not shown):

 1 $ sbcl
 2 * (quicklisp:quickload :docs-qa)
 3 To load "docs-qa":
 4   Load 1 ASDF system:
 5     docs-qa
 6 ; Loading "docs-qa"
 7 ..................................................
 8 [package docs-qa]To load "sqlite":
 9   Load 1 ASDF system:
10     sqlite
11 ; Loading "sqlite"
12 
13 #<sqlite-handle {7005CA3783}>
14 (:docs-qa)
15 * (in-package :docs-qa)
16 #<package "DOCS-QA">
17 * (test)
18 
19 ** query: What is the history of the science of chemistry?
20 ** answer: The history of chemistry as a science began in the 6th century BC, when t\
21 he Greek philosopher Leucippus and his student Democritus posited the existence of a\
22 n endless number of worlds
23 
24 ** query: What are the advantages of engaging in sports?
25 ** answer: The advantages of engaging in sports are:n1. It helps to develop the body\
26  and mind.n2. It helps to develop the character.n3. It helps to develop the personal\
27 ity.

Adding Chat History

In the last part of the listing of docs-qa.lisp we experiment with supporting a conversation/chat of multiple semantic queries against our local documents.

 1 (defun CHAT ()
 2   (let ((messages '(""))
 3         (responses '("")))
 4     (loop
 5        (format t "~%Enter chat (STOP or empty line to stop) >> ")
 6        (let ((string (read-line))
 7              response)
 8          (cond ((or (string= string "STOP") (< (length string) 1)) (return))
 9                (t (let (prompt
10                         custom-context)
11                     (setf custom-context
12                           (concatenate
13                            'string
14                            "PREVIOUS CHAT: "
15                            (join-strings  " "
16                                           (reverse messages))))
17                     (push string messages)
18                     (print messages) ;; (print responses)
19                     (print prompt)
20                     (setf response (semantic-match string custom-context))
21                     (push response responses)
22                     (format t "~%Response: ~A~%" response))))))
23     (list (reverse messages) (reverse responses))))

The output (with lots of debug printouts removed) looks like:

 1 $ sbcl
 2 * (quicklisp:quickload :docs-qa)
 3 To load "docs-qa":
 4   Load 1 ASDF system:
 5     docs-qa
 6 ; Loading "docs-qa"
 7 ..................................................
 8 [package docs-qa].To load "sqlite":
 9   Load 1 ASDF system:
10     sqlite
11 ; Loading "sqlite"
12 #<sqlite-handle {7005D9B9D3}>
13 * (in-package :docs-qa)
14 #<package "DOCS-QA">
15 * (create-document "data/chemistry.txt")
16 
17 insert-document:
18   content:Amyl alcohol is an organic compound with the formula C 5 H 12 O. All eight\
19  isomers of amyl alcohol are known. The most important is isobutyl carbinol, this be\
20 ing the chief constituent of fermentation 
21  ;; output from all other document chunks is not shown
22  
23  * (CHAT)
24 
25 Enter chat (STOP or empty line to stop) >> what is the history of chemistry?
26 
27 Response: Chemistry is the science of matter, its composition, structure and its pro\
28 perties. Chemistry is concerned with atoms and their interactions with other atoms, \
29 and thus is central to all other sciences. Chemistry is also concerned
30 
31 Enter chat (STOP or empty line to stop) >> what is the boiling temperature?
32 
33 Response: The boiling temperature of a liquid is the temperature at which the vapor \
34 pressure of the liquid equals the pressure surrounding the liquid, and the liquid ch\
35 anges into a vapor. At the boiling temperature, bubbles of vapor
36 
37 Enter chat (STOP or empty line to stop) >> 

Wrap Up for Using Local Embeddings Vector Database to Enhance the Use of GPT3 APIs With Local Documents

As I write this in early April 2023, I have been working almost exclusively with OpenAI APIs for the last year and using the Python libraries for LangChain and LlamaIndex for the last three months.

I prefer using Common Lisp over Python when I can, so I am implementing a tiny subset of the LangChain and LlamaIndex libraries in Common Lisp for my own use. By writing about my Common Lisp experiments here I hope that I get pull requests for https://github.com/mark-watson/docs-qa from readers who are interested in helping to extend the Common Lisp library.

Prompt Engineering for Large Language Models

We have been using prompt engineering in an informal way in the last few chapters. In this chapter we look at prompt engineering in more detail. My hope is that, dear reader, you find this chapter to be both a short reference for Large Language Models (LLMs) and a useful source of prompt examples at the end of the chapter.

Two Types of LLMs

In order to avoid “model hallucinations” (i.e., models making up answers with no basis in fact) when using Large Language Models (LLMs), it is important to provide detailed instructions and to add context text to help the model better answer questions or solve problems.

Base LLMs and Instruction Tuned LLMs are both types of advanced AI language models. While they share some similarities in their underlying architecture and training processes, they have distinct differences in terms of their fine-tuning and usage. Here, we will discuss the similarities and differences in detail.

The similarities are:

  • Architecture: Both models are based on the Transformer architecture, which has been shown to be highly effective for various natural language processing tasks. This architecture relies on self-attention mechanisms to process input data and generate outputs.
  • Pretraining: Both models undergo a pretraining phase where they learn from a large corpus of text, such as websites, books, and articles. This helps the models acquire a general understanding of language, grammar, facts, and some reasoning abilities.
  • Transfer learning: Both models can use transfer learning, which involves fine-tuning the pretrained models on specific tasks or datasets. This process adapts the model’s general knowledge to the target application, improving its performance on the desired task.

The differences are:

  • Fine-tuning objectives: The key difference between the two models lies in their fine-tuning process. Base Large Language Models are fine-tuned to generate human-like text with minimal explicit guidance, whereas Instruction Tuned Large Language Models are specifically optimized to follow instructions provided in the input prompt.
  • Task performance: Due to their different fine-tuning objectives, Instruction Tuned Large Language Models tend to perform better on tasks that require following explicit instructions, while Base Large Language Models may produce more diverse and creative responses that may not always adhere to the given instructions.
  • Usage: Base Large Language Models can be used for a wide variety of tasks, including text generation, summarization, translation, and question-answering. Instruction Tuned Large Language Models, on the other hand, are designed to be more controllable and responsive to input instructions, making them more suitable for applications that require precise output based on specific instructions.
  • Prompt engineering: To get the desired output from a Base Large Language Model, users often need to carefully craft prompts, which can be a trial-and-error process. With Instruction Tuned Large Language Models, users can provide more explicit instructions to achieve the desired output more consistently.

In summary, both Base and Instruction Tuned Large Language Models share the same foundation in terms of architecture and pretraining but differ in their fine-tuning objectives and application strengths. While Base Large Language Models generate more diverse and creative text, Instruction Tuned Large Language Models excel at tasks that require following specific instructions and are more controllable.

Prompt Examples

We have used both OpenAI and Hugging Face models so far in this book. As you practice prompt engineering and learn new techniques, that knowledge can be applied to both the OpenAI and Hugging Face APIs and models.

As Andrew Ng and Isa Fulford point out in their one hour class ChatGPT Prompt Engineering for Developers, you can craft prompts as if you were instructing an intelligent person who is not necessarily familiar with the problem you are trying to solve. There is a bit of a tradeoff here: you pay per input and output token with the OpenAI APIs, so you may be tempted to compress your context text, instructions, and questions when creating prompts. In general, however, the more context and information you provide, the better the quality of the resulting output.

The following sections show examples of prompt engineering that I use in my own work. To learn more techniques, I recommend also performing a web search for prompt engineering along with a description of the problem you are working on. As an example, I asked Microsoft Bing+ChatGPT in a chat: “I need help with prompt engineering using the OpenAI APIs. My problem is: I have a text file that contains customer names and phone numbers. I need a prompt to list names and phone numbers.” The advice was a useful starting point. I received similarly useful advice when chatting with Google Bard.
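
If you prefer to experiment with the prompts in the following sections programmatically instead of in a web interface, here is a minimal sketch of a helper that sends a prompt to the OpenAI chat completions REST endpoint. This helper is not part of the book’s example repositories: it assumes the dexador and cl-json Quicklisp libraries and an OPENAI_API_KEY environment variable, and the function name chat-completion is my own.

;; A minimal sketch (not from the book's example code) for sending a prompt
;; to the OpenAI chat completions endpoint. Assumes the dexador and cl-json
;; Quicklisp libraries and the OPENAI_API_KEY environment variable.
(ql:quickload '("dexador" "cl-json"))

(defun chat-completion (prompt &key (model "gpt-4"))
  "Send PROMPT to the OpenAI chat completions API and return the reply text."
  (let ((payload (make-hash-table :test #'equal))
        (message (make-hash-table :test #'equal)))
    ;; Hash tables reliably encode as JSON objects with cl-json.
    (setf (gethash "role" message) "user"
          (gethash "content" message) prompt
          (gethash "model" payload) model
          (gethash "messages" payload) (list message))
    (let* ((response
            (dex:post "https://api.openai.com/v1/chat/completions"
                      :headers `(("Content-Type" . "application/json")
                                 ("Authorization"
                                  . ,(concatenate 'string "Bearer "
                                                  (uiop:getenv "OPENAI_API_KEY"))))
                      :content (cl-json:encode-json-to-string payload)))
           (decoded (cl-json:decode-json-from-string response)))
      ;; cl-json decodes JSON objects into alists keyed by keywords.
      (cdr (assoc :content
                  (cdr (assoc :message
                              (first (cdr (assoc :choices decoded))))))))))

For example, (princ (chat-completion "What is the boiling temperature of water?")) should print a short answer. The prompts in the rest of this chapter can be passed to this helper as ordinary Lisp strings.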

Find Entities and Relationships

Prompt:

 1 The task is to extract as many relevant entities and the relationships between these\
 2  entities.\n
 3 When possible, return the type of an entity using the Wikipedia class system.\n
 4 Extract all relevant relationships between identified entities.\n
 5 The relationships should follow the Wikipedia schema type.
 6 The output of a relationship should be in a form of a triple Head, Relationship, Tai\
 7 l, for example
 8 Bill Gates, BORN_AT, Seatle.\n
 9 As an example "Bill Gates worked at Microsoft" should have an output with the follow\
10 ing format:\n
11 entity
12 Bill Gates, person
13 Microsoft, organization
14 
15 relationships
16 Bill Gates, WORKET AT, Microsoft\n
17 Task: process this text:
18 Steve Jobes founded Apple Computer and Bill Gates started Microsoft. Seattle is in t\
19 he state of Washington.
20 Steve lived in Palo Alto. Palo Alto is located in California.

ChatGPT 4 returns:

 1 entities
 2 Steve Jobs, person
 3 Apple Computer, organization
 4 Bill Gates, person
 5 Microsoft, organization
 6 Seattle, city
 7 Washington, administrative territorial entity
 8 
 9 relationships
10 Steve Jobs, FOUNDED, Apple Computer
11 Bill Gates, FOUNDED, Microsoft
12 Seattle, IS_IN, Washington
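
Using the hypothetical chat-completion helper sketched at the start of this chapter, the same experiment can be run from a REPL by pasting the prompt text into a string (truncated here for space):

;; Sketch: *entity-prompt* is a hypothetical variable that would hold the
;; complete prompt text shown above; it is truncated here for space.
(defparameter *entity-prompt*
  "The task is to extract as many relevant entities and the relationships
between these entities. ...
Task: process this text:
Steve Jobes founded Apple Computer and Bill Gates started Microsoft.")

(princ (chat-completion *entity-prompt*))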

Disambiguate Relationship Names

Prompt:

 1 Given a list of relationships like:
 2 
 3 brother
 4 located in
 5 sister
 6 sibling
 7 found at
 8 father
 9 location
10 mother
11 relative
12 
13 Then return a list with an index for each unique relationship, like:
14 
15 brother, 1
16 located in, 2
17 sister, 1
18 sibling, 1
19 found at, 2
20 father, 1
21 location, 2
22 mother, 1
23 relative, 1
24 
25 Task: process the following text:
26 
27 Sam has a brother Mike. Suzy has a sister Sally. Suszy ahs a mother Jill, and a fath\
28 er Bill.
29 Mike is in San Diego. Sam is located is Oakland. Suzy's family all live in Los Angel\
30 es.

ChatGPT 4 results are:

 1 Based on the provided list of relationships, here is the processed text with an inde\
 2 x for each unique relationship:
 3 
 4 Sam has a brother Mike. (brother, 1)
 5 Suzy has a sister Sally. (sister, 1)
 6 Suzy has a mother Jill, and a father Bill. (mother, 1; father, 1)
 7 Mike is in San Diego. (located in, 2)
 8 Sam is located in Oakland. (located in, 2)
 9 Suzy's family all live in Los Angeles. (location, 2)
10 
11 Here, relationships like "brother," "sister," "mother," and "father" are assigned an\
12  index of 1, while relationships like "located in" and "location" are assigned an in\
13 dex of 2.
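
When a response contains simple name, index lines like the indexed list shown in the prompt above, it is convenient to parse them into an association list. Here is a small sketch (my own helper, not from the book’s example code):

;; Sketch: parse lines like "brother, 1" into an alist (("brother" . 1) ...).
(defun parse-relation-indices (text)
  (loop for line in (uiop:split-string text :separator '(#\Newline))
        for comma = (position #\, line)
        for index = (and comma
                         (parse-integer line :start (1+ comma)
                                             :junk-allowed t))
        when index
          collect (cons (string-trim " " (subseq line 0 comma)) index)))

Lines whose first comma is not followed by an integer are simply skipped.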

Disambiguate Relationship Names and Rewrite Text

Prompt:

 1 We want to disambiguate relationship names and then rewrite text using canonical rel\
 2 ationship names.
 3 
 4 Given a list of relationships like:
 5 
 6 brother
 7 located in
 8 sister
 9 sibling
10 found at
11 father
12 location
13 mother
14 relative
15 
16 Then generate a list with an index for each unique relationship, like:
17 
18 brother, 1
19 located in, 2
20 sister, 1
21 sibling, 1
22 found at, 2
23 father, 1
24 location, 2
25 mother, 1
26 relative, 1
27 
28 And then rewrite the original text with the relation indices.
29 
30 Task: process the following text:
31 
32 Sam has a brother Mike. Suzy has a sister Sally. Suszy ahs a mother Jill, and a fath\
33 er Bill.
34 Mike is in San Diego. Sam is located is Oakland. Suzy's family all live in Los Angel\
35 es.

The ChatGPT 4 output is:

 1 First, we create the indexed list of unique relationships:
 2 
 3 brother, 1
 4 located in, 2
 5 sister, 1
 6 sibling, 1
 7 found at, 2
 8 father, 1
 9 location, 2
10 mother, 1
11 relative, 1
12 
13 Now, we rewrite the original text using the relation indices:
14 
15 Sam has a 1 Mike. Suzy has a 1 Sally. Suszy has a 1 Jill, and a 1 Bill.
16 Mike is 2 San Diego. Sam is 2 Oakland. Suzy's family all 2 Los Angeles.

In the first sentence, “Sam has a 1 Mike,” the number 1 refers to the relationship brother.

Disambiguate Relationship Names and Rewrite Text, Only Output Rewritten Text

Here we ask the model to output just the results, in JSON format.

Prompt:

 1 We want to disambiguate relationship names and then rewrite text using canonical rel\
 2 ationship names.
 3 
 4 Given a list of relationships like:
 5 
 6 brother
 7 located in
 8 sister
 9 sibling
10 found at
11 father
12 location
13 mother
14 relative
15 
16 Then generate a list with an index for each unique relationship, like:
17 
18 brother, <1>
19 located in, <2>
20 sister, <1>
21 sibling, <1>
22 found at, <2>
23 father, <1>
24 location, <2>
25 mother, <1>
26 relative, <1>
27 
28 And then rewrite the original text using the relation indices, and list relation nam\
29 es with indices. Only output rewritten text and names with indices.
30 
31 Task: process the following text outputing JSON format:
32 
33 Sam has a brother Mike. Suzy has a sister Sally. Suzy has a mother Jill, and a fathe\
34 r Bill.
35 Mike is in San Diego. Sam is located is Oakland. Suzy's family all live in Los Angel\
36 es.

The ChatGPT 4 output is:

 1 {
 2   "relationships": {
 3     "brother": "<1>",
 4     "located in": "<2>",
 5     "sister": "<1>",
 6     "sibling": "<1>",
 7     "found at": "<2>",
 8     "father": "<1>",
 9     "location": "<2>",
10     "mother": "<1>",
11     "relative": "<1>"
12   },
13   "rewritten_text": "Sam has a brother Mike. Suzy has a sister Sally. Suzy has a mot\
14 her Jill, and a father Bill.\nMike is <2> San Diego. Sam is <2> Oakland. Suzy's fami\
15 ly all live <2> Los Angeles."
16 }
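
Structured JSON output like this is easy to consume from Common Lisp. Here is a sketch using the yason library (my choice here; any JSON parser such as cl-json also works), where the hypothetical variable *json-response* holds the model’s raw output:

;; Sketch: yason parses JSON objects into hash tables with string keys.
(ql:quickload "yason")

(let ((parsed (yason:parse *json-response*)))
  (format t "rewritten text: ~a~%" (gethash "rewritten_text" parsed)))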

Topic Modeling for Document Classification

In this example we show a complete template that includes document text. To use this in your programs, replace the document text with a placeholder token that your code substitutes with the text of the document you want to classify.

Prompt:

 1 Topic modeling involves reading text and assigning a topic name (also known as a doc\
 2 ument classification).
 3 The topics that you know how to classify are:
 4 
 5 sports
 6 health
 7 chemistry
 8 economy
 9 politics
10 
11 The output format will be:
12 
13 Known topics: <list of all topics>
14 Topic: <topic of text>
15 
16 Task: what is the best topic name for the following text:
17 
18 John and Sam went to the playing field yesterday. Sam sprained his ankle.

The result is:

1 Known topics: sports, health, chemistry, economy, politics
2 
3 Topic: sports

In this prompt template I used the placeholder document text “John and Sam went to the playing field yesterday. Sam sprained his ankle.” that you would replace with the longer text from the document whose topic or classification you want to determine.

In this template, I like to ask the LLM to repeat the list of topics (or classifications) it knows.
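
As a sketch of using this template from code, you can store the template with a placeholder token and substitute the document text before sending the prompt. The token {document} and the names below are my own conventions; myutils:replace-all is the utility function used elsewhere in the book’s examples:

;; Sketch: fill a topic modeling prompt template with document text.
(defparameter *topic-template*
  "Topic modeling involves reading text and assigning a topic name (also
known as a document classification).
The topics that you know how to classify are:

sports
health
chemistry
economy
politics

The output format will be:

Known topics: <list of all topics>
Topic: <topic of text>

Task: what is the best topic name for the following text:

{document}")

(defun topic-prompt (document-text)
  (myutils:replace-all *topic-template* "{document}" document-text))

The result of (topic-prompt document-text) can then be passed to an OpenAI API call, for example the hypothetical chat-completion helper from earlier in this chapter.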

Prompt Engineering Wrapup

When you experiment with the examples in this chapter you can use the ChatGPT web interface to modify and develop prompts, and you can also revisit the chapter on OpenAI APIs to experiment with prompts programmatically. My personal preference is to perform initial experiments in the ChatGPT web interface but to do most of the iterative prompt development in Emacs with a SLIME REPL.

I mentioned earlier Andrew Ng’s and Isa Fulford’s one hour prompt engineering class. They devote time to, and provide good advice on, interactive prompt development. Their examples use the Python language.

Using Common Lisp with Wolfram/One

If you use Wolfram/One then the material in this short chapter may interest you. The interface that I wrote is simple: I use uiop:run-program to spawn a new process that runs the Wolfram Language command line tool and writes results to a temporary file. I then use uiop:read-file-string to read the results and parse them into a convenient form for use.

Before we build and use an interface to Wolfram/One, let’s look at two screen shots of the Wolfram/One interface with examples that we will run later in Common Lisp. The first example finds entities in text:

Using Wolfram/One to find entities in text

The second example uses a deep learning model to answer a question given text containing the answer to the question:

Using Wolfram/One to answer natural language questions

Here is the package.lisp file for this example:

1 (defpackage #:wolfram
2   (:use #:cl #:uiop)
3   (:export #:wolfram #:cleanup-lists
4            #:find-answer-in-text #:entities))

And the wolfram.asd file:

1 (asdf:defsystem #:wolfram
2   :description "Wolfram Language interface experiments"
3   :author "Mark Watson <markw@markwatson.com>"
4   :license "Apache 2"
5   :depends-on (#:uiop #:cl-json #:myutils)
6   :components ((:file "package")
7                (:file "wolfram")))

The implementation in wolfram.lisp is simple enough. In lines 6-7 I create a Common Lisp path object in /tmp (an absolute pathname is required) and then use file-namestring to get just the file name as a string. In lines 8-10 we create an operating system shell and run the Wolfram Language command line tool with arguments that execute the query and write the results to the temporary file. In lines 11-15 we read the contents of the temporary file, delete the file, and decode the returned string as JSON data.

The data returned from calling the Wolfram Language command line tool contains excess structure that we don’t need (a sample of the raw returned data is shown later), so the function cleanup-lists shown in lines 17-18 discards the heads of lists when the first value in a list or sublist is Rule or List. The function recursive-remove in lines 20-24 removes all occurrences of an item from a nested list.

 1 (in-package #:wolfram)
 2 
 3 ;; General query utilities
 4 
 5 (defun wolfram (statement)
 6   (let ((temp-file-path
 7          (file-namestring (uiop:tmpize-pathname "/tmp/wolfram"))))
 8     (uiop:run-program (concatenate 'string "wolframscript -code 'Export[\""
 9                                    temp-file-path "\"," statement
10                                    ",\"ExpressionJSON\"]'"))
11     (let* ((ret (uiop:read-file-string temp-file-path)))
12       (delete-file temp-file-path)
13       (with-input-from-string (s (myutils:replace-all
14                                   (myutils:replace-all ret "\"'" "\"") "'\"" "\""))
15         (json:decode-json s)))))
16 
17 (defun cleanup-lists (r)
18   (cdr (recursive-remove "Rule" (recursive-remove "List" r))))
19 
20 (defun recursive-remove (item tree)
21   (if (atom tree)
22       tree
23       (mapcar (lambda (nested-list) (recursive-remove item nested-list))
24               (remove item tree :test #'equal))))
25 
26 ;; Higher level utilities for specific types of queries
27 
28 (defun entities (text)
29   (let* ((noquotes (myutils:replace-all (myutils:replace-all text "\"" " ") "'" " "))
30          (query2
31           (concatenate
32            'string "TextCases['" noquotes
33            "',  {'City', 'Country', 'Date', 'Person'} ->"
34            " {'String', 'Interpretation', 'Probability'}]"))
35          (query (myutils:replace-all query2 "'" "\"")))
36     (remove-if #'(lambda (a) (null (cadr a)))
37                (cleanup-lists (wolfram query)))))
38 
39 (defun find-answer-in-text (text question)
40   (let* ((nqtext (myutils:replace-all (myutils:replace-all text "\"" " ") "'" " "))
41          (nqquestion (myutils:replace-all
42                       (myutils:replace-all question "\"" " ") "'" " "))
43          (query2 (concatenate 'string "FindTextualAnswer['" nqtext
44                               "', '" nqquestion "']"))
45          (query (myutils:replace-all query2 "'" "\"")))
46     (wolfram query)))

The last two functions in the previous code listing, entities and find-answer-in-text, are higher-level functions intended to work with the Wolfram Language procedures TextCases (see the Wolfram documentation for TextCases) and FindTextualAnswer (see the Wolfram documentation for FindTextualAnswer).

The functions cleanup-lists and recursive-remove can be used to clean up results. First, we will just call function wolfram and show the raw results:

 1 $ sbcl
 2 * (ql:quickload "wolfram")
 3 To load "wolfram":
 4   Load 1 ASDF system:
 5     wolfram
 6 ; Loading "wolfram"
 7 [package myutils].................................
 8 [package wolfram]
 9 ("wolfram")
10 * (setf example "TextCases['NYC, Los Angeles, and Chicago are the largest cities in \
11 the USA in 2018 according to Pete Wilson.', {'City', 'Country', 'Date', 'Person'} ->\
12  {'String', 'Interpretation', 'Probability'}]")
13 "TextCases['NYC, Los Angeles, and Chicago are the largest cities in the USA in 2018 \
14 according to Pete Wilson.', {'City', 'Country', 'Date', 'Person'} -> {'String', 'Int\
15 erpretation', 'Probability'}]"
16 * (setf example-str (myutils:replace-all  example "'" "\""))
17 "TextCases[\"NYC, Los Angeles, and Chicago are the largest cities in the USA in 2018\
18  according to Pete Wilson.\", {\"City\", \"Country\", \"Date\", \"Person\"} -> {\"St\
19 ring\", \"Interpretation\", \"Probability\"}]"
20 * (setf results (wolfram:wolfram example-str))
21 * (pprint results)
22 
23 ("Association"
24  ("Rule" "City"
25   ("List"
26    ("List" "NYC" ("Entity" "City" ("List" "NewYork" "NewYork" "UnitedStates"))
27     0.75583166)
28    ("List" "Los Angeles"
29     ("Entity" "City" ("List" "LosAngeles" "California" "UnitedStates"))
30     0.84206486)
31    ("List" "Chicago"
32     ("Entity" "City" ("List" "Chicago" "Illinois" "UnitedStates"))
33     0.91092855)))
34  ("Rule" "Country"
35   ("List" ("List" "USA" ("Entity" "Country" "UnitedStates") 0.9285077)))
36  ("Rule" "Date"
37   ("List"
38    ("List" "2018" ("DateObject" ("List" 2018) "Year" "Gregorian" -7.0)
39     0.8364356)))
40  ("Rule" "Person"
41   ("List"
42    ("List" "Pete Wilson" ("Entity" "Person" "PeteWilson::s7259") 0.9274548))))
43 * 

Now we clean up the output:

 1 * (defvar results-cleaned (wolfram:cleanup-lists results))
 2 * (pprint results-cleaned)
 3 
 4 (("City"
 5   (("NYC" ("Entity" "City" ("NewYork" "NewYork" "UnitedStates")) 0.75583166)
 6    ("Los Angeles" ("Entity" "City" ("LosAngeles" "California" "UnitedStates"))
 7     0.84206486)
 8    ("Chicago" ("Entity" "City" ("Chicago" "Illinois" "UnitedStates"))
 9     0.91092855)))
10  ("Country" (("USA" ("Entity" "Country" "UnitedStates") 0.9285077)))
11  ("Date" (("2018" ("DateObject" (2018) "Year" "Gregorian" -7.0) 0.8364356)))
12  ("Person" (("Pete Wilson" ("Entity" "Person" "PeteWilson::s7259") 0.9274548))))
13 * 
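
Given this cleaned structure, ordinary list operations suffice. For example, this expression (my own, not part of the example library) pulls out just the city name strings:

* (mapcar #'first (cadr (assoc "City" results-cleaned :test #'string=)))
("NYC" "Los Angeles" "Chicago")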

Next we will try the two higher-level utility functions. The first example shows finding entities in text:

1 CL-USER 21 > (pprint
2                (wolfram:entities "Sedona Arizona is home to Mark Louis Watson"))
3 
4 (("City"
5   (("Sedona" ("Entity" "City" ("Sedona" "Arizona" "UnitedStates")) 0.8392784)))
6  ("Person" (("Mark Louis Watson" "Mark Louis Watson" 0.9023427))))

The second example uses a Wolfram pre-trained deep learning model for question answering:

 1 CL-USER 22 > (pprint
 2                (wolfram::find-answer-in-text "International Business Machines Corpor\
 3 ation (IBM) is an American multinational technology company headquartered in Armonk,\
 4  New York, with operations in over 170 countries. The company began in 1911, founded\
 5  in Endicott, New York, as the Computing-Tabulating-Recording Company (CTR) and was \
 6 renamed \"International Business Machines\" in 1924. IBM is incorporated in New York\
 7 ."
 8                             "where is IBM is headquartered?"))
 9 
10 "Armonk, New York"

If you use Wolfram/One then these examples should get you started wrapping other Wolfram Language functionality for use in your Common Lisp applications.

Using the Lem Editor

The Lem editor is written 100% in Common Lisp and is tailored specifically for Common Lisp development. The source code and documentation can be found at https://github.com/lem-project/lem.

Book Wrapup

Congratulations on finishing this book!

I love programming in Lisp languages, with their concise code and bottom-up approach to development. I hope that you now share this enthusiasm with me.

Common Lisp is sometimes criticized for not having as many useful libraries as newer languages like Python and Java, and this is a valid criticism. That said, I hope the wide variety of examples in this book will convince you that Common Lisp is a good choice for many types of programming projects.

I would like to thank you for reading my book and I hope that you enjoyed it. As I mentioned in the Introduction I have been using Common Lisp since the mid-1980s, and other Lisp dialects for longer than that. I have always found something almost magical developing in Lisp. Being able to extend the language with macros and using the development technique of building a mini-language in Lisp customized for an application enables programmers to be very efficient in their work. I have usually found that this bottom-up development style helps me deal with software complexity because the lower level functions tend to get well tested while the overall system being developed is not yet too complex to fully understand. Later in the development process these lower level functions and utilities almost become part of the programming language and the higher level application logic is easier to understand because you have fewer lines of code to fit inside your head during development.

I think that unless a programmer works in very constrained application domains, it often makes sense to be a polyglot programmer. I have tried, especially in the material added in recent editions, to give you confidence that Common Lisp works well both as a general software development language and as “glue” to tie different systems together.

Thank you for buying and reading my book!

Mark Watson