Table of Contents
- Preface
- A Quick Racket Tutorial
- Datastores
- Implementing a Simple RDF Datastore With Partial SPARQL Support in Racket
- Web Scraping
- Using the OpenAI, Anthropic, Mistral, and Local Hugging Face Large Language Model APIs in Racket
- Retrieval Augmented Generation of Text Using Embeddings
- Natural Language Processing
- Knowledge Graph Navigator
- Conclusions
Preface
I have been using Lisp languages since the 1970s. In 1982 my company bought a Lisp Machine for my use. A Lisp Machine provided an “all batteries included” working environment, but now no one seriously uses Lisp Machines. In this book I try to lead you, dear reader, through a process of creating a “batteries included” working environment using Racket Scheme.
The latest edition is always available for purchase at https://leanpub.com/racket-ai. You can also read free online at https://leanpub.com/racket-ai/read. I offer the purchase option for readers who wish to directly support my work.
If you read my eBooks free online then please consider tipping me https://markwatson.com/#tip.
This is a “live book:” there will never be a second edition. As I add material and make corrections, I simply update the book and the free to read online copy and all eBook formats for purchase get updated.
I have been developing commercial Artificial Intelligence (AI) tools and applications since the 1980s and I usually use the Lisp languages Common Lisp, Clojure, Racket Scheme, and Gambit Scheme. Here you will find Racket code that I wrote for my own use and I am wrapping in a book in the hopes that it will also be useful to you, dear reader.
I wrote this book for both professional programmers and home hobbyists who already know how to program in Racket (or another Scheme dialect) and who want to learn practical AI programming and information processing techniques. I have tried to make this an enjoyable book to work through. In the style of a “cook book,” the chapters can be read in any order.
You can find the code examples and the files for the manuscript for this book in the following GitHub repository:
https://github.com/mark-watson/Racket-AI-book
Git pull requests with code improvements for either the source code or manuscript markdown files will be appreciated by me and the readers of this book.
License for Book Manuscript: Creative Commons
Copyright 2022-2024 Mark Watson. All rights reserved.
This book may be shared using the Creative Commons “share and share alike, no modifications, no commercial reuse” license.
Book Example Programs
Here are the projects for this book:
- embeddingsdb - Vector database for embeddings. This project implements a vector database for storing and querying embeddings. It provides functionality for creating, storing, and retrieving vector representations of text or other data, which is crucial for many modern AI applications, especially those involving semantic search or similarity comparisons.
- kgn - Knowledge Graph Navigator. The Knowledge Graph Navigator is a tool for exploring and querying knowledge graphs. It includes utilities querying graph structures, executing queries, and visualizing relationships between entities. This can be particularly useful for working with semantic web technologies or large-scale knowledge bases.
- llmapis - Interfaces to various LLM APIs. This module provides interfaces to various Large Language Model APIs, including OpenAI, Anthropic, Mistral, and others. It offers functions for interacting with different LLM providers, allowing users to easily switch between models or compare outputs from different services. This can be valuable for developers looking to integrate state-of-the-art language models into their applications.
- misc_code - Miscellaneous utility code. This directory contains various utility functions and examples for common programming tasks in Racket. It includes code for working with hash tables, parsing HTML, handling HTTP requests, and interacting with SQLite databases. These utilities can serve as building blocks for larger AI projects or as reference implementations for common tasks.
- nlp - Natural Language Processing utilities. The NLP module provides tools for natural language processing tasks. It includes implementations for part-of-speech tagging and named entity recognition. These fundamental NLP tasks are essential for many text analysis and understanding applications, making this module a valuable resource for developers working with textual data.
- pdf_chat - PDF text extraction. This project focuses on extracting text from PDF documents. It provides utilities for parsing PDF files and converting their content into plain text format. This can be particularly useful for applications that need to process or analyze information contained in PDF documents, such as document summarization or information retrieval systems.
- search_brave - Brave Search API wrapper. The Brave Search API wrapper provides an interface for interacting with the Brave search engine programmatically. It offers functions for sending queries and processing search results, making it easier to integrate Brave’s search capabilities into Racket applications.
- sparql - SPARQL querying utilities for DBpedia. This module focuses on SPARQL querying, particularly for interacting with DBpedia. It includes utilities for constructing SPARQL queries, executing them against DBpedia’s endpoint, and processing the results. This can be valuable for applications that need to extract structured information from large knowledge bases.
- webscrape - Web scraping utilities. The web scraping module provides tools for extracting information from websites. It includes functions for fetching web pages, parsing HTML content, and extracting specific data elements. These utilities can be useful for a wide range of applications, from data collection to automated information gathering and analysis.
The following diagram shows the Racket software examples configured for your local laptop. Several combined examples build both a Racket package that gets installed locally and command line programs that are built and deployed to ~/bin. Other examples are either a command line tool or a Racket package.
Racket, Scheme, and Common Lisp
I like Common Lisp slightly more than Racket and other Scheme dialects, even though Common Lisp is ancient and has defects. Then why do I use Racket? Racket is a family of languages, a very good IDE, and a rich ecosystem supported by many core Racket developers and Racket library authors. Choosing Racket Scheme was an easy decision, but there are also other Scheme dialects that I enjoy using:
- Gambit/C Scheme
- Gerbil Scheme (based on Gambit/C)
- Chez Scheme
Personal Artificial Intelligence Journey: or, Life as a Lisp Developer
I have been interested in AI since reading Bertram Raphael’s excellent book Thinking Computer: Mind Inside Matter in the early 1980s. I have also had the good fortune to work on many interesting AI projects including the development of commercial expert system tools for the Xerox LISP machines and the Apple Macintosh, development of commercial neural network tools, application of natural language and expert systems technology, medical information systems, application of AI technologies to Nintendo and PC video games, and the application of AI technologies to the financial markets. I have also applied statistical natural language processing techniques to analyzing social media data from Twitter and Facebook. I worked at Google on their Knowledge Graph and I managed a deep learning team at Capital One where I was awarded 55 US patents.
I enjoy AI programming, and hopefully this enthusiasm will also infect you.
Acknowledgements
I produced the manuscript for this book using the leanpub.com publishing system and I recommend leanpub.com to other authors.
Editor: Carol Watson
Thanks to the following people who found typos in this and earlier book editions: David Rupp.
A Quick Racket Tutorial
If you are an experienced Racket developer then feel free to skip this chapter! I wrote this tutorial to cover just the aspects of using Racket that you, dear reader, will need in the book example programs.
I assume that you have read the section Racket Essentials in The Racket Guide written by Matthew Flatt, Robert Bruce Findler, and the PLT group. Here I just cover some basics of getting started so you can enjoy the later code examples without encountering “road blocks.”
Installing Packages
The DrRacket IDE lets you interactively install packages. I prefer using the command line so, for example, I would install SQlite support using:
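For example, to add the SQLite support used later in this book (the sqlite-table package name is taken from the Datastores chapter; the exact command I use may differ slightly):

```
raco pkg install sqlite-table
```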
We can then require the code in this package in our Racket programs:
Note that when the argument to require is a symbol (not a string) then modules are searched and loaded from your system. When the argument is a string like “utils.rkt” then a module is loaded from a file in the current directory.
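For example (a small sketch; db and utils.rkt are just illustrative names from this chapter):

```racket
(require db)            ; a symbol: the collection is found on your system
(require "utils.rkt")   ; a string: the module is loaded from the current directory
```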
Installing Local Packages In Place
In a later chapter Natural Language Processing (NLP) we define a fairly complicated local package. This package has one unusual requirement that you may or may not need in your own projects: My NLP library requires static linguistic data files that are stored in the directory Racket-AI-book-code/nlp/data. If I am in the directory Racket-AI-book-code/nlp working on the Racket code, it is simple enough to just open the files in ./data/….
The default for installing your own Racket packages is to link to the original source directory on your laptop’s file system. Let’s walk through this. First, I will make sure my library code is compiled and then install the code in the current directory:
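The commands look something like the following (a sketch; main.rkt is an assumed entry file, and the install command mirrors the one shown at the end of the Web Scraping chapter):

```
raco make main.rkt
raco pkg install --scope user
```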
Then I can run the racket REPL (or DrRacket) on my laptop and use my NLP package by requiring the code in this package in our Racket programs (shown in a REPL):
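```racket
> (require nlp)   ; "nlp" is the assumed name of the locally linked package
```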
Mapping Over Lists
We will be using functions that take other functions as arguments:
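For example, map applies a function argument to the elements of one or more lists:

```racket
> (map (lambda (x) (* x 2)) '(1 2 3 4))
'(2 4 6 8)
> (map + '(1 2 3) '(10 20 30))
'(11 22 33)
```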
Hash Tables
The following listing shows the file misc_code/hash_tests.rkt:
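The following sketch is consistent with the h1, h2, and h3 names mentioned below, but the actual file may differ:

```racket
#lang racket

(define h1 (make-hash))                   ; mutable hash table
(hash-set! h1 "dog" "friendly")
(hash-set! h1 "cat" "aloof")

(define h2 (hash "sky" "blue"             ; immutable hash table
                 "grass" "green"))

(define h3 (hash-set h2 "snow" "white"))  ; functional update returns a new immutable hash

(displayln (hash-ref h1 "dog"))
(displayln (hash-ref h2 "sky"))
(displayln (hash-count h3))
```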
Here is a listing of the output window after running this file and then manually evaluating h1, h2, and h3 in the REPL (like all listings in this book, I manually edit the output to fit page width):
Racket Structure Types
A structure type is like a list that has named list elements. When you define a structure the Racket system generates getter and setter functions to access and change structure attribute values. Racket also generates a constructor function with the structure name. Let’s look at a simple example in a Racket REPL of creating a structure with mutable elements:
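Here is a minimal sketch (the structure and field names are illustrative):

```racket
> (struct person (name age) #:transparent #:mutable)
> (define p (person "Bob" 27))
> (person-name p)
"Bob"
> (set-person-age! p 28)
> (person-age p)
28
```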
If you don’t add #:mutable to a struct definition, then no set-NAME-ATTRIBUTE! methods are generated.
Racket also supports object oriented programming style classes with methods. I don’t use classes in the book examples so you, dear reader, can read the official Racket documentation on classes if you want to use Racket in a non-functional way.
Simple HTTP GET and POST Operations
We will be using HTTP GET and POST instructions in later chapters for web scraping and accessing remote APIs, such as those for OpenAI GPT-4, Hugging Face, etc. We will see more detail later but for now, you can try a simple example:
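Here is a minimal sketch using the net/http-easy library that is used throughout this book (the URL is just an example):

```racket
#lang racket
(require net/http-easy)

(define response (get "https://markwatson.com"))
(response-status-code response)  ; HTTP status, e.g. 200
(response-body response)         ; the returned HTML as bytes
```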
The output is:
Using Racket ~/.racketrc Initialization File
In my Racket workflow I don’t usually use ~/.racketrc to define initial forms that are automatically loaded when starting the racket command line tool or the DrRacket application. That said I do like to use ~/.racketrc for temporary initialization forms when working on a specific project to increase the velocity of interactive development.
Here is an example use:
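A hypothetical ~/.racketrc might look like this; the file simply contains forms that are evaluated at startup:

```racket
(require racket/pretty)
;; a throw-away helper for the project currently being worked on:
(define (reload) (load "work-in-progress.rkt"))
```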
If you have local and public libraries you frequently load you can permanently keep require forms for them in ~/.racketrc but that will slightly slow down the startup times of racket and DrRacket.
Tutorial Wrap Up
The rest of this book is comprised of example Racket programs that I have written for my own enjoyment that I hope will also be useful to you, dear reader. Please refer to the https://docs.racket-lang.org/guide/ for more technical detail on effectively using the Racket language and ecosystem.
Datastores
For my personal research projects the only datastores that I often use are the embedded relational database and Resource Description Framework (RDF) datastores that might be local to my laptop or public Knowledge Graphs like DBPedia and WikiData. The use of RDF data and the SPARQL query language is part of the fields of the semantic web and linked data.
Accessing Public RDF Knowledge Graphs - a DBPedia Example
I will not cover RDF data and the SPARQL query language in great detail here. Rather, please reference the following link to read the RDF and SPARQL tutorial data in my Common Lisp book: Loving Common Lisp, or the Savvy Programmer’s Secret Weapon.
In the following Racket code example for accessing data on DBPedia using SPARQL, the primary objective is to interact with DBpedia’s SPARQL endpoint to query information regarding a person based on their name or URI. The code is structured into several functions, each encapsulating a specific aspect of the querying process, thereby promoting modular design and ease of maintenance.
Function Definitions:

- `sparql-dbpedia-for-person`: This function takes a `person-uri` as an argument and constructs a SPARQL query to retrieve the comment and website associated with the person. The `@string-append` macro helps in constructing the SPARQL query string by concatenating the literals and the `person-uri` argument.
- `sparql-dbpedia-person-uri`: Similar to the above function, this function accepts a `person-name` argument and constructs a SPARQL query to fetch the URI and comment of the person from DBpedia.
- `sparql-query->hash`: This function encapsulates the logic for sending the constructed SPARQL query to the DBpedia endpoint. It takes a `query` argument, encodes it into a URL format, and sends an HTTP request to the DBpedia SPARQL endpoint. The response, expected in JSON format, is then converted to a Racket expression using `string->jsexpr`. A hedged sketch of this function appears after this list.
- `json->listvals`: This function is designed to transform the JSON expression obtained from the SPARQL endpoint into a more manageable list of values. It processes the hash data structure, extracting the relevant bindings and converting them into a list format.
- `gd` (Data Processing Function): This function processes the data structure obtained from `json->listvals`. It defines four inner functions `gg1`, `gg2`, `gg3`, and `gg4`, each designed to handle a specific number of variables returned in the SPARQL query result. It uses a `case` statement to determine which inner function to call based on the length of the data.
- `sparql-dbpedia`: This is the entry function which accepts a `sparql` argument, invokes `sparql-query->hash` to execute the SPARQL query, and then calls `gd` to process the resulting data structure.
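As a hedged sketch, `sparql-query->hash` might look something like the following, assuming the public DBPedia endpoint and a JSON results format (the version in Racket-AI-book-code/sparql may differ in details):

```racket
#lang racket
(require net/url net/uri-codec json racket/port)

(define (sparql-query->hash query)
  ;; URL-encode the query, request JSON results from DBPedia, and parse the response
  (call/input-url
   (string->url
    (string-append "https://dbpedia.org/sparql?query="
                   (uri-encode query)
                   "&format=application%2Fsparql-results%2Bjson"))
   get-pure-port
   (lambda (port) (string->jsexpr (port->string port)))))
```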
Usage Flow:
The typical flow would be to call sparql-dbpedia-person-uri with a person’s name to obtain the person’s URI and comment from DBpedia. Following that, sparql-dbpedia-for-person can be invoked with the obtained URI to fetch more detailed information like websites associated with the person. The results from these queries are then processed through sparql-query->hash, json->listvals, and gd to transform the raw JSON response into a structured list format, making it easier to work with within the Racket environment.
Let’s try an example in a Racket REPL:
In practice, I start exploring data on DBPedia using the SPARQL query web app https://dbpedia.org/sparql. I experiment with different SPARQL queries for whatever application I am working on and then embed those queries in my Racket, Common Lisp, Clojure (link to read my Clojure AI book free online), and other programming languages I use.
In addition to using DBPedia I often also use the WikiData public Knowledge Graph and local RDF data stores hosted on my laptop with Apache Jena. I might add examples for these two use cases in future versions of this live eBook.
Sqlite
Using SQLite in Racket is straightforward, so we will just look at a short example. We will be using the Racket source file sqlite.rkt in the directory Racket-AI-book-code/misc_code for the code snippets in this REPL:
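The snippets look approximately like this (a sketch based on the description that follows; the actual sqlite.rkt file may differ):

```racket
#lang racket
(require db)

(define db-file "test.db")
(define db (sqlite3-connect #:database db-file #:mode 'create))

;; create a permanent table and insert one row
(query-exec db
  "create table person (name varchar(30), age integer, email varchar(20))")
(query-exec db
  "insert into person values ('Mary', 34, 'mary@test.com')")

;; query-rows returns just the result rows, with no column type information
(query-rows db "select * from person")
```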
Here we see how to interact with a SQLite database using the db and sqlite-table libraries in Racket. The sqlite3-connect function is used to connect to the SQLite database specified by the string value of db-file. The #:mode ‘create keyword argument indicates that a new database should be created if it doesn’t already exist. The database connection object is bound to the identifier db.
The query-exec function call is made to create a permanent table named person with three columns: name of type varchar(30), age of type integer, and email of type varchar(20). The next query-exec function call is made to insert a new row into the person table with the values ‘Mary’, 34, and ‘mary@test.com’. There is a function query that we don’t use here that returns the types of the columns returned by a query. We use the alternative function query-rows that only returns the query results with no type information.
Implementing a Simple RDF Datastore With Partial SPARQL Support in Racket
This chapter explains a Racket implementation of a simple RDF (Resource Description Framework) datastore with partial SPARQL (SPARQL Protocol and RDF Query Language) support. We’ll cover the core RDF data structures, query parsing and execution, helper functions, and the main function with example queries. The file rdf_sparql.rkt can be found online at https://github.com/mark-watson/Racket-AI-book/source-code/simple_RDF_SPARQL.
Before looking at the code we look at sample use and output. The function test demonstrates the usage of the RDF datastore and SPARQL query execution:
The test function:
- Initializes the RDF store with sample data.
- Prints all triples in the datastore.
- Defines a print-query-results function to execute and display query results.
- Executes three example SPARQL queries:
  - Query all name-age-food combinations.
  - Query all subject-object pairs for the “likes” predicate.
  - Query all people who like pizza and their ages.
Function test generates this output:
1. Core RDF Data Structures and Basic Operations
There are two parts to this example in file rdf_sparql.rkt: a simple unindexed RDF datastore and a partial SPARQL query implementation that supports compound where clause matches like: select * where { ?name age ?age . ?name likes pizza }.
1.1 RDF Triple Structure
The foundation of our RDF datastore is the `triple` structure:
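The definition is most likely just a transparent struct with three fields (a sketch based on the description that follows):

```racket
(struct triple (subject predicate object) #:transparent)
```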
This structure represents an RDF triple, consisting of a subject, predicate, and object. The `#:transparent` keyword makes the structure’s fields accessible for easier debugging and printing.
1.2 RDF Datastore
The RDF datastore is implemented as a simple list:
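A sketch of this (the variable name used in rdf_sparql.rkt may differ):

```racket
;; the datastore is just a top-level list of triple structs
(define rdf-store '())
```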
1.3 Basic Operations
Two fundamental operations are defined for the datastore (both sketched after this list):
- Adding a triple:
- Removing a triple:
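Hedged sketches of both operations, assuming the rdf-store list shown above (the function names are assumptions based on the text):

```racket
(define (add-triple! subject predicate object)
  (set! rdf-store (cons (triple subject predicate object) rdf-store)))

(define (remove-triple! subject predicate object)
  (set! rdf-store
        (filter (lambda (t)
                  (not (and (equal? (triple-subject t) subject)
                            (equal? (triple-predicate t) predicate)
                            (equal? (triple-object t) object))))
                rdf-store)))
```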
2. Query Parsing and Execution
2.1 SPARQL Query Structure
A simple SPARQL query is represented by the `sparql-query` structure:
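A likely shape for this structure, based on how it is used below (the field names are assumptions):

```racket
(struct sparql-query (select-vars where-patterns) #:transparent)
```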
2.2 Query Parsing
The `parse-sparql-query` function takes a query string and converts it into a `sparql-query` structure:
2.3 Query Execution
The main query execution function is `execute-sparql-query`:
This function parses the query, executes the WHERE patterns, and projects the results based on the SELECT variables.
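A high-level sketch of that flow (execute-where-patterns is a stand-in name for the pattern-matching step, and the struct accessors follow the assumed field names above; project-results is described in the next section):

```racket
(define (execute-sparql-query query-string)
  (let* ([query    (parse-sparql-query query-string)]
         [bindings (execute-where-patterns (sparql-query-where-patterns query))])
    (project-results bindings (sparql-query-select-vars query))))
```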
3. Helper Functions and Utilities
Several helper functions are implemented to support query execution:

- `variable?`: Checks if a string is a SPARQL variable (starts with ‘?’).
- `triple-to-binding`: Converts a triple to a binding based on a pattern.
- `query-triples`: Filters triples based on a given pattern.
- `apply-bindings`: Applies bindings to a pattern.
- `merge-bindings`: Merges two sets of bindings.
- `project-results`: Projects the final results based on the SELECT variables.
Conclusion
This implementation provides a basic framework for an RDF datastore with partial SPARQL support in Racket. While it lacks many features of a full-fledged RDF database and SPARQL engine, it demonstrates the core concepts and can serve as a starting point for more complex implementations. The code is simple and can be fun to experiment with.
Web Scraping
I often write software to automatically collect and use data from the web and other sources. As a practical matter, much of the data that many people use for machine learning comes from either the web or from internal data sources. This section provides some guidance and examples for getting text data from the web.
Before we start a technical discussion about web scraping I want to point out that much of the information on the web is copyrighted, so you should first read the terms of service for web sites to ensure that your use of “scraped” or “spidered” data conforms with the wishes of the persons or organizations who own the content and pay to run the scraped web sites.
We start with low-level Racket code examples in the GitHub repository for this book in the directory Racket-AI-book-code/misc_code. We will then implement a standalone library in the directory Racket-AI-book-code/webscrape.
Getting Started Web Scraping
All of the examples in the section can be found in the Racket code snippet files in the directory Racket-AI-book-code/misc_code.
I have edited the output for brevity in the following REPL output:
Different element types are html, head, p, h1, h2, etc. If you are familiar with XPath operations for XML data, then the function se-path*/list will make more sense to you. The function se-path*/list takes a list of element types and recursively searches an input s-expression for lists starting with one of the target element types. In the following example we extract all elements of type p:
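A hedged sketch of this kind of snippet (the files in misc_code may differ slightly, and the URL is just an example):

```racket
#lang racket
(require net/http-easy html-parsing xml/path)

(define a-response (get "https://markwatson.com" #:stream? #t))
(define a-xexp (html->xexp (response-output a-response)))
(response-close! a-response)

;; all content found under p elements:
(se-path*/list '(p) a-xexp)
```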
Now we will extract HTML anchor links:
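The same se-path*/list function is used in this book's code to pull out href values (a hedged sketch, reusing a-xexp from above):

```racket
(filter (lambda (s) (and (string? s) (string-prefix? s "http")))
        (se-path*/list '(href) a-xexp))
```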
Implementation of a Racket Web Scraping Library
The web scraping library listed below can be found in the directory Racket-AI-book-code/webscrape. The following listing of webscrape.rkt should look familiar after reading the code snippets in the last section.
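Here is a hedged reconstruction of webscrape.rkt based on the description below; the actual file may differ, for example by using srfi/13 string functions instead of the racket/string equivalents used here:

```racket
#lang racket
(require net/http-easy html-parsing xml/path racket/string)

(provide web-uri->xexp web-uri->text web-uri->links)

(define (web-uri->xexp a-uri)
  (let* ([a-stream (get a-uri #:stream? #t)]
         [a-xexp (html->xexp (response-output a-stream))])
    (response-close! a-stream)
    a-xexp))

(define (web-uri->text a-uri)
  (let* ([a-xexp (web-uri->xexp a-uri)]
         [elements (se-path*/list '(p) a-xexp)]
         [strings (filter string? elements)])   ; keep only plain strings
    (string-normalize-spaces (string-join strings "\n"))))

(define (web-uri->links a-uri)
  (let ([hrefs (se-path*/list '(href) (web-uri->xexp a-uri))])
    (filter (lambda (s) (and (string? s) (string-prefix? s "http")))
            hrefs)))
```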
The provided Racket Scheme code defines three functions to interact with and process web resources: `web-uri->xexp`, `web-uri->text`, and `web-uri->links`.

`web-uri->xexp`:
- Requires three libraries: `net/http-easy`, `html-parsing`, and `net/url xml xml/path`.
- Given a URI (`a-uri`), it creates a stream (`a-stream`) using the `get` function from the `net/http-easy` library to fetch the contents of the URI.
- Converts the HTML content of the URI to an S-expression (`xexp`) using the `html->xexp` function from the `html-parsing` library.
- Closes the response stream using `response-close!` and returns the `xexp`.

`web-uri->text`:
- Calls `web-uri->xexp` to convert the URI content to an `xexp`.
- Utilizes `se-path*/list` from the `xml/path` library to extract all paragraph elements (`p`) from the `xexp`.
- Filters the paragraph elements to retain only strings (excluding nested tags or other structures).
- Joins these strings with a newline separator, normalizing spaces using `string-normalize-spaces` from the `srfi/13` library.

`web-uri->links`:
- Similar to `web-uri->text`, it starts by converting URI content to an `xexp`.
- Utilizes `se-path*/list` to extract all `href` attributes from the `xexp`.
- Filters these `href` attributes to retain only those that are external links (those beginning with “http”).
In summary, these functions collectively enable the extraction and processing of HTML content from a specified URI, converting HTML to a more manageable S-expression format, and then extracting text and links as required.
Here are a few examples in a Racket REPL (most output omitted for brevity):
If you want to install this library on your laptop using linking (requiring the library access a link to the source code in the directory Racket-AI-book-code/webscrape) run the following in the library source directory Racket-AI-book-code/webscrape:
raco pkg install --scope user
Using the OpenAI, Anthropic, Mistral, and Local Hugging Face Large Language Model APIs in Racket
Note: November 21, 2024 change: added examples using William J. Bowman’s Racket language llm to the end of this chapter.
As I write the first version of this chapter in October 2023, Peter Norvig and Blaise Agüera y Arcas just wrote an article Artificial General Intelligence Is Already Here making the case that we might already have Artificial General Intelligence (AGI) because of the capabilities of Large Language Models (LLMs) to solve new tasks.
In the development of practical AI systems, LLMs like those provided by OpenAI, Anthropic, and Hugging Face have emerged as pivotal tools for numerous applications including natural language processing, generation, and understanding. These models, powered by deep learning architectures, encapsulate a wealth of knowledge and computational capabilities. As a Racket Scheme enthusiast embarking on the journey of intertwining the elegance of Racket with the power of these modern language models, you are opening a gateway to a realm of possibilities that we begin to explore here.
The OpenAI and Anthropic APIs serve as gateways to some of the most advanced language models available today. By accessing these APIs, developers can harness the power of these models for a variety of applications. Here, we delve deeper into the distinctive features and capabilities that these APIs offer, which could be harnessed through a Racket interface.
OpenAI provides an API for developers to access models like GPT-4. The OpenAI API is designed with simplicity and ease of use in mind, making it a favorable choice for developers. It provides endpoints for different types of interactions, be it text completion, translation, or semantic search among others. We will use the completion API in this chapter. The robustness and versatility of the OpenAI API make it a valuable asset for anyone looking to integrate advanced language understanding and generation capabilities into their applications.
On the other hand, Anthropic is a newer entrant in the field but with a strong emphasis on building models that are not only powerful but also understandable and steerable. The Anthropic API serves as a portal to access their language models. While the detailed offerings and capabilities might evolve, the core ethos of Anthropic is to provide models that developers can interact with in a more intuitive and controlled manner. This aligns with a growing desire within the AI community for models that are not black boxes, but instead, offer a level of interpretability and control that makes them safer and more reliable to use in different contexts. We will use the Anthropic completion API.
What if you want the total control of running open LLMs on your own computers? The company Hugging Face maintains a huge repository of pre-trained models. Some of these models are licensed for research only but many are licensed (e.g., using Apache 2) for any commercial use. Many of the Hugging Face models are derived from Meta and other companies. We will use the llama.cpp server at the end of this chapter to run our own LLM on a laptop and access it via Racket code.
Lastly, this chapter will delve into practical examples showing the synergy between systems developed in Racket and the LLMs. Whether it’s automating creative writing, conducting semantic analysis, or building intelligent chatbots, the fusion of Racket with OpenAI, Anthropic, and Hugging Face’s LLMs provides many opportunities for you, dear reader, to write innovative software that utilizes the power of LLMs.
Introduction to Large Language Models
Large Language Models (LLMs) represent a huge advance in the evolution of artificial intelligence, particularly in the domain of natural language processing (NLP). They are trained on vast corpora of text data, learning to predict subsequent words in a sequence, which imbues them with the ability to generate human-like text, comprehend the semantics of language, and perform a variety of language-related tasks. The architecture of these models, typically based on deep learning paradigms such as Transformer, empowers them to encapsulate intricate patterns and relationships within language. These models are trained utilizing substantial computational resources.
The utility of LLMs extends across a broad spectrum of applications including but not limited to text generation, translation, summarization, question answering, and sentiment analysis. Their ability to understand and process natural language makes them indispensable tools in modern AI-driven solutions. However, with great power comes great responsibility. The deployment of LLMs raises imperative considerations regarding ethics, bias, and the potential for misuse. Moreover, the black-box nature of these models presents challenges in interpretability and control, which are active areas of research in the quest to make LLMs more understandable and safe. The advent of LLMs has undeniably propelled the field of NLP to new heights, yet the journey towards fully responsible and transparent utilization of these powerful models is an ongoing endeavor. I recommend reading material at Center for Humane Technology for issues of the safe use of AI. You might also be interested in a book I wrote in April 2023 Safe For Humans AI: A “humans-first” approach to designing and building AI systems (link for reading my book free online).
Using the OpenAI APIs in Racket
We will now have some fun using Racket Scheme and OpenAI’s APIs. The combination of Racket’s language features and programming environment with OpenAI’s linguistic models opens up many possibilities for developing sophisticated AI-driven applications.
Our goal is straightforward interaction with OpenAI’s APIs. The communication between your Racket code and OpenAI’s models is orchestrated through well-defined API requests and responses, allowing for a seamless exchange of data. The following sections will show the technical aspects of interfacing Racket with OpenAI’s APIs, showcasing how requests are formulated, transmitted, and how the JSON responses are handled. Whether your goal is to automate content generation, perform semantic analysis on text data, or build intelligent systems capable of engaging in natural language interactions, the code snippets and explanations provided will serve as a valuable resource in understanding and leveraging the power of AI through Racket and OpenAI’s APIs.
The Racket code listed below defines two functions, question and completion, aimed at interacting with the OpenAI API to leverage the GPT-3.5 Turbo model for text generation. The function question accepts a prompt argument and constructs a JSON payload following the OpenAI’s chat models schema. It constructs a value for prompt-data string containing a user message that instructs the model to “Answer the question” followed by the provided prompt. The auth lambda function within question is utilized to set necessary headers for the HTTP request, including the authorization header populated with the OpenAI API key obtained from the environment variable OPENAI_API_KEY. The function post from the net/http-easy library is employed to issue a POST request to the OpenAI API endpoint “https://api.openai.com/v1/chat/completions” with the crafted JSON payload and authentication headers. The response from the API is then parsed as JSON, and the content of the message from the first choice is extracted and returned.
The function completion, on the other hand, serves a specific use case of continuing text from a given prompt. It reformats the prompt to prepend the phrase “Continue writing from the following text: “ to the provided text, and then calls the function question with this modified prompt. This setup encapsulates the task of text continuation in a separate function, making it straightforward for developers to request text extensions from the OpenAI API by merely providing the initial text to the function completion. Through these functions, the code provides a structured mechanism to generate responses or text continuations.
This example was updated May 13, 2024 when OpenAI released the new GPT-4o model.
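Here is a hedged reconstruction of the two functions based on the description above; the actual llmapis code may differ in details such as the exact payload fields and the model name:

```racket
#lang racket
(require net/http-easy json)

(define (question prompt)
  (let* ([prompt-data
          (jsexpr->string
           (hasheq 'model "gpt-4o"
                   'messages (list (hasheq 'role "user"
                                           'content (string-append
                                                     "Answer the question: " prompt)))))]
         [auth (lambda (uri headers params)  ; add authorization and content-type headers
                 (values (hash-set* headers
                                    'authorization
                                    (string-append "Bearer " (getenv "OPENAI_API_KEY"))
                                    'content-type "application/json")
                         params))]
         [response (post "https://api.openai.com/v1/chat/completions"
                         #:auth auth #:data prompt-data)]
         [json-response (response-json response)])
    (hash-ref (hash-ref (first (hash-ref json-response 'choices)) 'message) 'content)))

(define (completion text)
  (question (string-append "Continue writing from the following text: " text)))
```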
The output looks like (output from the second example shortened for brevity):
Using the Anthropic APIs in Racket
The Racket code listed below defines two functions, question and completion, which facilitate interaction with the Anthropic API to access a language model named claude-instant-1 for text generation purposes. The function question takes two arguments: a prompt and a max-tokens value, which are used to construct a JSON payload that will be sent to the Anthropic API. Inside the function, several Racket libraries are utilized for handling HTTP requests and processing data. A POST request is initiated to the Anthropic API endpoint “https://api.anthropic.com/v1/complete” with the crafted JSON payload. This payload includes the prompt text, maximum tokens to sample, and specifies the model to be used. The auth lambda function is used to inject necessary headers for authentication and specifying the API version. Upon receiving the response from the API, it extracts the completion field from the JSON response, trims any leading or trailing whitespace, and returns it.
The function completion is defined to provide a more specific use-case scenario, where it is intended to continue text from a given prompt. It also accepts a max-tokens argument to limit the length of the generated text. This function internally calls the function question with a modified prompt that instructs the model to continue writing from the provided text. By doing so, it encapsulates the common task of text continuation, making it easy to request text extensions by simply providing the initial text and desired maximum token count. Through these defined functions, the code offers a structured way to interact with the Anthropic API for generating text responses or completions in a Racket Scheme environment.
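A hedged sketch of these two functions follows; the original code targets the older /v1/complete endpoint and the claude-instant-1 model, and the exact header and payload details here are assumptions:

```racket
#lang racket
(require net/http-easy json)

(define (question prompt max-tokens)
  (let* ([body (jsexpr->string
                (hasheq 'model "claude-instant-1"
                        'prompt (string-append "\n\nHuman: " prompt "\n\nAssistant:")
                        'max_tokens_to_sample max-tokens))]
         [auth (lambda (uri headers params)  ; API key and API version headers
                 (values (hash-set* headers
                                    'x-api-key (getenv "ANTHROPIC_API_KEY")
                                    'anthropic-version "2023-06-01"
                                    'content-type "application/json")
                         params))]
         [response (post "https://api.anthropic.com/v1/complete"
                         #:auth auth #:data body)])
    (string-trim (hash-ref (response-json response) 'completion))))

(define (completion text max-tokens)
  (question (string-append "Continue writing from the following text: " text)
            max-tokens))
```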
We will try the same examples we used with OpenAI APIs in the previous section:
While I usually use the OpenAI APIs, I always like to have alternatives when I am using 3rd party infrastructure, even for personal research projects. The Anthropic LLMs definitely have a different “feel” than the OpenAI APIs, and I enjoy using both.
Using a Local Hugging Face Llama2-13b-orca Model with Llama.cpp Server
Now we look at an approach to run LLMs locally on your own computers.
Diving into AI unveils many ways where modern language models play a pivotal role in bridging the gap between machines and human language. Among the many open and public models, I chose Hugging Face’s Llama2-13b-orca model because of its support for natural language processing tasks. To truly harness the potential of Llama2-13b-orca, an interface to Racket code is essential. This is where we use the Llama.cpp Server as a conduit between the local instance of the Hugging Face model and the applications that seek to utilize it. The combination of Llama2-13b-orca with the llama.cpp server code will meet our requirements for local deployment and ease of installation and use.
Installing and Running Llama.cpp server with a Llama2-13b-orca Model
The llama.cpp server acts as a conduit for translating REST API requests to the respective language model APIs. By setting up and running the llama.cpp server, a channel of communication is established, allowing Racket code to interact with these language models in a seamless manner. There is also a Python library to encapsulate running models inside a Python program (a subject I leave to my Python AI books).
I run the llama.cpp service easily on a M2 Mac with 16G of memory. Start by cloning the llama.cpp project and building it:
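At the time this chapter was written a plain make built the project; check the llama.cpp README for current build instructions:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```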
Then get a model file from https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGUF and copy to ./models directory:
Note that there are many different variations of this model that trade off quality for memory use. I am using one of the larger models. If you only have 8G of memory try a smaller model.
Run the REST server:
We can test the REST server using the curl utility:
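For example, something along the following lines, using the /completion endpoint and the prompt and n_predict request fields described in the next section:

```
curl --request POST --url http://localhost:8080/completion \
     --header "Content-Type: application/json" \
     --data '{"prompt": "Answer: why is the sky blue?", "n_predict": 64}'
```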
The important part of the output is:
In the next section we will write a simple library to extract data from Llama.cpp server responses.
A Racket Library for Using a Local Llama.cpp server with a Llama2-13b-orca Model
The following Racket code is designed to interface with a local instance of a Llama.cpp server to interact with a language model for generating text completions. This setup is particularly beneficial when there’s a requirement to have a local language model server, reducing latency and ensuring data privacy. We start by requiring libraries for handling HTTP requests and responses. The functionality of this code is encapsulated in three functions: helper, question, and completion, each serving a unique purpose in the interaction with the Llama.cpp server.
The helper function provides common functionality, handling the core logic of constructing the HTTP request, sending it to the Llama.cpp server, and processing the response. It accepts a prompt argument which forms the basis of the request payload. A JSON string is constructed with three key fields: prompt, n_predict, and top_k, which respectively contain the text prompt, the number of tokens to generate, and a parameter to control the diversity of the generated text. A debug line with `displayln` is used to output the constructed JSON payload to the console, aiding in troubleshooting. The function post is employed to send a POST request to the Llama.cpp server hosted locally on port 8080 at the /completion endpoint, with the constructed JSON payload as the request body. Upon receiving the response, it’s parsed into a Racket hash data structure, and the content field, which contains the generated text, is extracted and returned.
The question and completion functions serve as specialized interfaces to the helper function, crafting specific prompts aimed at answering a question and continuing a text, respectively. The question function prefixes the provided question text with “Answer: “ to guide the model’s response, while the completion function prefixes the provided text with a phrase instructing the model to continue from the given text. Both functions then pass these crafted prompts to the helper function, which in turn handles the interaction with the Llama.cpp server and extracts the generated text from the response.
The following code is in the file llama_local.rkt:
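Here is a hedged reconstruction of llama_local.rkt from the description above; the n_predict and top_k values are illustrative and the original file may differ:

```racket
#lang racket
(require net/http-easy json)

(define (helper prompt)
  (let ([json-payload (jsexpr->string
                       (hasheq 'prompt prompt 'n_predict 256 'top_k 20))])
    (displayln json-payload)  ; debug output of the request payload
    (let* ([response (post "http://localhost:8080/completion" #:data json-payload)]
           [json-response (response-json response)])
      (hash-ref json-response 'content))))

(define (question q)
  (helper (string-append "Answer: " q)))

(define (completion text)
  (helper (string-append "Continue writing from the following text: " text)))
```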
We can try this in a Racket REPL (output of the second example is edited for brevity):
Using a Local Mistral-7B Model with Ollama.ai
Now we look at another approach to run LLMs locally on your own computers. The Ollama.ai project supplies a simple-to-install application for macOS and Linux (Windows support expected soon). When you download and run the application, it will install a command line tool ollama that we use here.
Installing and Running Ollama.ai server with a Mistral-7B Model
The Mistral model is the best 7B LLM that I have used (as I write this chapter in October 2023). When you run the ollama command line tool it will download the requested model and cache it for future use.
For example, the first time we run ollama requesting the mistral LLM, you see that it is downloading the model:
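The command looks like this (the download progress output is not reproduced here):

```
ollama run mistral
```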
When you run the ollama command line tool, it also runs a REST API server which we use later. The next time you run the mistral model, there is no download delay:
While we use the mistral LLM here, there are many more available models listed in the GitHub repository for Ollama.ai: https://github.com/jmorganca/ollama.
A Racket Library for Using a Local Ollama.ai REST Server with a Mistral-7B Model
The example code in the file ollama_ai_local.rkt is very similar to the example code in the last section. The main changes are a different REST service URI and the format of the returned JSON response:
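A hedged sketch of ollama_ai_local.rkt, assuming Ollama's default local REST endpoint on port 11434 and its /api/generate and /api/embeddings routes; the original file may differ in detail:

```racket
#lang racket
(require net/http-easy json)

(define (helper prompt)
  (let* ([payload (jsexpr->string
                   (hasheq 'model "mistral" 'prompt prompt 'stream #f))]
         [response (post "http://localhost:11434/api/generate" #:data payload)])
    (hash-ref (response-json response) 'response)))

(define (question q)
  (helper (string-append "Answer: " q)))

(define (completion text)
  (helper (string-append "Continue writing from the following text: " text)))

;; create an embedding vector (a list of floats) for a text input
(define (embeddings-ollama text)
  (let* ([payload (jsexpr->string (hasheq 'model "mistral" 'prompt text))]
         [response (post "http://localhost:11434/api/embeddings" #:data payload)])
    (hash-ref (response-json response) 'embedding)))
```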
The function embeddings-ollama can be used to create embedding vectors from text input. Embeddings are used for chat with local documents, web sites, etc. We will run the same examples we used in the last section for comparison:
While I often use larger and more capable proprietary LLMs like Claude 2.1 and GPT-4, smaller open models from Mistral are very capable and sufficient for most of my experiments embedding LLMs in application code. As I write this, you can run Mistral models locally and through commercially hosted APIs.
Examples Using William J. Bowman’s Racket Language LLM
I implemented the code in this chapter using REST API interfaces for LLM providers like OpenAI and Anthropic and also for running local models using Ollama.
Since I wrote my LLM client libraries, William J. Bowman wrote a very interesting new Racket language for LLMs that can be used with DrRacket’s language support for interactively experimenting with LLMs and alternatively used in Racket programs using the standard Racket language. I added three examples to the directory Racket-AI-book-code/racket_llm_language:
- test_lang_mode_llm_openai.rkt - uses #lang llm
- test_llm_openai.rkt - uses #lang racket
- test_llm_ollama.rkt - uses #lang racket
For the Ollama example, make sure you have Ollama installed and the phi3:latest model downloaded.
The documentation for the LLM language can be found here: https://docs.racket-lang.org/llm/index.html and the GitHub repository for the project can be found here: https://github.com/wilbowma/llm-lang.
LLM Language Example
In the listing of file test_lang_mode_llm_openai.rkt notice that Racket statements are escaped using @ and plain text is treated as a prompt to send to a LLM:
Evaluating this in a DrRacket buffer produces output like this:
This makes a DrRacket edit buffer a convenient way to experiment with models. Also, once the example buffer is loaded, the DrRacket REPL can be used to enter LLM prompts since the REPL is also using #lang llm.
Using the LLM Language as a Library Using the Standard Racket Language Mode
Here we look at examples for accessing the OpenAI gpt-4o-mini model and the phi3 model running locally on your laptop using Ollama.
Install the llm package:
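The install command is something like the following; take the exact package name from the documentation linked above (it may be published as llm or llm-lang):

```
raco pkg install llm
```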
Here is the example file test_llm_openai.rkt:
The output looks like this:
This is a simple way to use the OpenAI gpt-4o-mini model in your Racket programs. A similar example supports local models running on Ollama; here is the example file test_llm_ollama.rkt:
That generates the output text:
Here I entered more examples in the DrRacket REPL.
For general work and experimentation with LLMs I like the flexibility of using my own Racket LLM client code, but the llm package makes it simple to experiment with prompts. If you only need to generate text from a prompt, the llm package lets you generate text using just two lines of Racket code while using the standard #lang racket language.
Retrieval Augmented Generation of Text Using Embeddings
Retrieval-Augmented Generation (RAG) is a framework that combines the strengths of pre-trained language models (LLMs) with retrievers. Retrievers are system components for accessing knowledge from external sources of text data. In RAG a retriever selects relevant documents or passages from a corpus, and a generator produces a response based on both the retrieved information and the input query. The process typically follows these steps that we will use in the example Racket code:
- Query Encoding: The input query is encoded into a vector representation.
- Document Retrieval: A retriever system uses the query representation to fetch relevant documents or passages from an external corpus.
- Document Encoding: The retrieved documents are encoded into vector representations.
- Joint Encoding: The query and document representations are combined, often concatenated or mixed via attention mechanisms.
- Generation: A generator, usually an LLM, is used to produce a response based on the joint representation.
RAG enables the LLM to access and leverage external text data sources, which is crucial for tasks that require information beyond what the LLM has been trained on. It’s a blend of retrieval-based and generation-based approaches, aimed at boosting the factual accuracy and informativeness of generated responses.
Example Implementation
In the following short Racket example program (file Racket-AI-book-code/embeddingsdb/embeddingsdb.rkt) I implement some ideas of a RAG architecture. At file load time the text files in the subdirectory data are read, split into “chunks”, and each chunk along with its parent file name and OpenAI text embedding is stored in a local SQLite database. When a user enters a query, the OpenAI embedding is calculated, and this embedding is matched against the embeddings of all chunks using the dot product of two 1536 element embedding vectors. The “best” chunks are concatenated together and this “context” text is passed to GPT-4 along with the user’s original query. Here I describe the code in more detail:
The provided Racket code uses a local SQLite database and OpenAI’s APIs for calculating text embeddings and for text completions.
Utility Functions:

- `floats->string` and `string->floats` are utility functions for converting between a list of floats and its string representation.
- `read-file` reads a file’s content.
- `join-strings` joins a list of strings with a specified separator.
- `truncate-string` truncates a string to a specified length.
- `interleave` merges two lists by interleaving their elements.
- `break-into-chunks` breaks a text into chunks of a specified size (a sketch of this function appears after this list).
- `string-to-list` and `decode-row` are utility functions for parsing and processing database rows.
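Hedged sketches of two of these building blocks follow (dot-product is described with the semantic matching functions below); the versions in embeddingsdb.rkt may differ:

```racket
(define (dot-product a b)
  ;; a and b are equal-length lists of floats (e.g., 1536-element embedding vectors)
  (for/sum ([x (in-list a)] [y (in-list b)])
    (* x y)))

(define (break-into-chunks text chunk-size)
  ;; split text into consecutive substrings of at most chunk-size characters
  (let loop ([start 0] [acc '()])
    (if (>= start (string-length text))
        (reverse acc)
        (loop (+ start chunk-size)
              (cons (substring text start
                               (min (string-length text) (+ start chunk-size)))
                    acc)))))
```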
Database Setup:
- Database connection is established to “test.db” and a table named “documents” is created with columns for document_path, content, and embedding.
Document Management:

- `insert-document` inserts a document and its associated information into the database.
- `get-document-by-document-path` and `all-documents` are utility functions for querying documents from the database.
- `create-document` reads a document from a file path, breaks it into chunks, computes embeddings for each chunk via a function `embeddings-openai`, and inserts these into the database.
Semantic Matching and Interaction:

- `execute-to-list` and `dot-product` are utility functions for database queries and vector operations.
- `semantic-match` performs a semantic search by calculating the dot product of embeddings of the query and documents in the database. It then aggregates contexts of documents with a similarity score above a certain threshold, and sends a new query constructed with these contexts to OpenAI for further processing.
- `QA` is a wrapper around `semantic-match` for querying.
- `CHAT` initiates a loop for user interaction where each user input is processed through `semantic-match` to generate a response, maintaining a context of the previous chat.
Test Code:

- The `test` function creates documents by reading from specified file paths, and performs some queries using the `QA` function.
The code uses a local SQLite database to store and manage document embeddings and the OpenAI API for generating embeddings and performing semantic searches based on user queries. Two functions are exported in case you want to use this example as a library: create-document and QA. Note: in the test code at the bottom of the listing, change the absolute path to reflect where you cloned the GitHub repository for this book.
Let’s look at a few examples from a Racket REPL:
This output is the combination of data found in the text files in the directory Racket-AI-book-code/embeddingsdb/data and the data that OpenAI GPT-4 was trained on. Since the local “document” file chemistry.txt is very short, most of this output is derived from the innate knowledge GPT-4 has from its training data.
In order to show that this example is also using data in the local “document” text files, I manually edited the file data/chemistry.txt adding the following made-up organic compound:
GPT-4 was never trained on my made-up data so it has no idea what the non-existent compound ZorroOnian Alcohol is. The following answer is retrieved via RAG from the local document data (for brevity, most of the output for adding the local document files to the embedding index is not shown):
There is also a chat interface:
Retrieval Augmented Generation Wrap Up
Retrieval Augmented Generation (RAG) is one of the best use cases for semantic search. Another way to write RAG applications is to use a web search API to get context text for a query, and add this context data to whatever context data you have in a local embeddings data store.
Natural Language Processing
I have a Natural Language Processing (NLP) library that I wrote in Common Lisp. Here we will use code that I wrote in pure Scheme and converted to Racket.
The NLP library is still a work in progress so please check for future updates to this live eBook.
Since we will use the example code in this chapter as a library we start by defining a main.rkt file:
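A hedged sketch of such a main.rkt (the actual file may also define package metadata):

```racket
#lang racket
(require "fasttag.rkt" "names.rkt")
(provide (all-from-out "fasttag.rkt")
         (all-from-out "names.rkt"))
```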
There are two main source files for the NLP library: fasttag.rkt and names.rkt.
The following listing of fasttag.rkt is a conversion of original code I wrote in Java and later translated to Common Lisp. The provided Racket Scheme code is designed to perform part-of-speech tagging for a given list of words. The code begins by loading a hash table (`lex-hash`) from a data file (“data/tag.dat”), where each key-value pair maps a word to its possible part of speech. Then it defines several helper functions and transformation rules for categorizing words based on various syntactic and morphological criteria.
The core function, `parts-of-speech`, takes a vector of words and returns a vector of corresponding parts of speech. Inside this function, a number of rules are applied to each word in the list to refine its part of speech based on both its individual characteristics and its context within the list. For instance, Rule 1 changes the part of speech to “NN” (noun) if the previous word is “DT” (determiner) and the current word is categorized as a verb form (“VBD”, “VBP”, or “VB”). Rule 2 changes a word to a cardinal number (“CD”) if it contains a period, and so on. The function applies these rules in sequence, updating the part of speech for each word accordingly.
The `parts-of-speech` function iterates over each word in the input vector, checks it against `lex-hash`, and then applies the predefined rules. The result is a new vector of tags, one for each input word, where each tag represents the most likely part of speech for that word, based on the rules and the original lexicon.
The following listing of file names.rkt identifies human and place names in text. The Racket Scheme code is a script for Named Entity Recognition (NER). It is specifically designed to recognize human names and place names in given text:
- It provides two main functions: `find-human-names` and `find-place-names`.
- Uses two kinds of data: human names and place names, loaded from text files.
- Employs Part-of-Speech tagging through an external `fasttag.rkt` module.
- Uses hash tables and lists for efficient look-up.
- Handles names with various components (prefixes, first name, last name, etc.)
The function `process-one-word-per-line` reads each line of a file and applies a given function `func` on it.
Initial data preparation consists of populating the hash tables `*last-name-hash*`, `*first-name-hash*`, and `*place-name-hash*` with last names, first names, and place names, respectively, from specified data files.
We define two Named Entity Recognition (NER) functions:

- `find-human-names`: Takes a word vector and an exclusion list.
  - Utilizes parts-of-speech tags.
  - Checks for names that have 1 to 4 words.
  - Adds names to the `ret` list if conditions are met, considering the exclusion list.
  - Returns processed names (`ret2`).
- `find-place-names`: Similar to `find-human-names`, but specifically for place names.
  - Works on 1 to 3 word place names.
  - Returns processed place names.
We define one helper function `not-in-list-find-names-helper` to ensure that an identified name does not overlap with another name or entry in the exclusion list.
Overall, the code is fairly optimized for its purpose, utilizing hash tables for constant-time look-up and lists to store identified entities.
Let’s try some examples in a Racket REPL:
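The calls look something like the following (results omitted; the argument shapes follow the descriptions above and the package name is assumed):

```racket
> (require nlp)
> (parts-of-speech (vector "The" "cat" "ran" "to" "Seattle"))
> (find-human-names (vector "President" "George" "Bush" "went" "to" "Texas") '())
> (find-place-names (vector "He" "went" "to" "San" "Francisco") '())
```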
NLP Wrap Up
The NLP library is still a work in progress so please check for updates to this live eBook and the GitHub repository for this book:
https://github.com/mark-watson/Racket-AI-book-code
Knowledge Graph Navigator
The Knowledge Graph Navigator (which I will often refer to as KGN) is a tool for processing a set of entity names and automatically exploring the public Knowledge Graph DBPedia using SPARQL queries. I started to write KGN for my own use to automate some things I used to do manually when exploring Knowledge Graphs, and later thought that KGN might be also useful for educational purposes. KGN shows the user the auto-generated SPARQL queries so hopefully the user will learn SPARQL by seeing examples, and then uses these queries to fetch information from DBPedia.
I cover SPARQL and linked data/Knowledge Graphs in previous books I have written and while I give you a brief background here, I ask interested readers to look at either of the following for more details:
- The chapter Knowledge Graph Navigator in my book Loving Common Lisp, or the Savvy Programmer’s Secret Weapon
- The chapters Background Material for the Semantic Web and Knowledge Graphs, Knowledge Graph Navigator in my book Practical Artificial Intelligence Programming With Clojure
We use the Natural Language Processing (NLP) library from the last chapter to find human and place names in input text and then construct SPARQL queries to access data from DBPedia.
The KGN application is still a work in progress so please check for updates to this live eBook. The following screenshots show the current version of the application:
I have implemented parts of KGN in several languages: Common Lisp, Java, Clojure, Racket Scheme, Swift, Python, and Hy. The most full-featured version of KGN, including a full user interface, is featured in my book Loving Common Lisp, or the Savvy Programmer’s Secret Weapon that you can read free online. That version performs more speculative SPARQL queries to find information compared to the example here that I designed for ease of understanding and modification. I am not covering the basics of RDF data and SPARQL queries here. While I provide sufficient background material to understand the code, please read the relevant chapters in my Common Lisp book for more background material.
We will be running an example using data containing three person entities, one company entity, and one place entity. The following figure shows a very small part of the DBPedia Knowledge Graph that is centered around these entities. The data for this figure was collected by an example Knowledge Graph Creator from my Common Lisp book:
I chose to use DBPedia instead of WikiData for this example because DBPedia URIs are human readable. The following URIs represent the concept of a person. The semantic meanings of DBPedia and FOAF (friend of a friend) URIs are self-evident to a human reader while the WikiData URI is not:
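For example (illustrative URIs; the original listing may show a slightly different set):

```
http://dbpedia.org/ontology/Person
http://xmlns.com/foaf/0.1/Person
https://www.wikidata.org/wiki/Q5
```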
I frequently use WikiData in my work and WikiData is one of the most useful public knowledge bases. I have both DBPedia and WikiData SPARQL endpoints in the example code that we will look at later, with the WikiData endpoint commented out. You can try manually querying WikiData at the WikiData SPARQL endpoint. For example, you might explore the WikiData URI for the person concept using:
For the rest of this chapter we will just use DBPedia or data copied from DBPedia.
After looking at an interactive session using the example program for this chapter we will look at the implementation.
Entity Types Handled by KGN
To keep this example simple we handle just two entity types:
- People
- Places
The Common Lisp version of KGN also searches for relationships between entities. This search process consists of generating a series of SPARQL queries and calling the DBPedia SPARQL endpoint. I may add this feature to the Racket version of KGN in the future.
KGN Implementation
The example application works by processing a list of Person, Place, and Organization names. We generate SPARQL queries to DBPedia to find information about the entities and relationships between them.
We are using two libraries developed for this book that can be found in the directories Racket-AI-book-code/sparql and Racket-AI-book-code/nlp to supply support for SPARQL queries and natural language processing.
SPARQL Client Library
We already looked at code examples for making simple SPARQL queries in the chapter Datastores and here we continue with more examples that we need for the KGN application.
The following listing shows Racket-AI-book-code/sparql/sparql.rkt where we implement several functions for interacting with DBPedia’s SPARQL endpoint. There are two functions sparql-dbpedia-for-person and sparql-dbpedia-person-uri crafted for constructing SPARQL queries. The function sparql-dbpedia-for-person takes a person URI and formulates a query to fetch associated website links and comments, limiting the results to four. On the other hand, the function sparql-dbpedia-person-uri takes a person name and builds a query to obtain the person’s URI and comments from DBpedia. Both functions utilize string manipulation to embed the input parameters into the SPARQL query strings. There are similar functions for places.
Another function sparql-query->hash executes SPARQL queries against the DBPedia endpoint. It takes a SPARQL query string as an argument, sends an HTTP request to the DBpedia SPARQL endpoint, and expects a JSON response. The call/input-url function is used to send the request, with uri-encode ensuring the query string is URL-encoded. The response is read from the port, converted to a JSON expression using the function string->jsexpr, and is expected to be in a hash form which is returned by this function.
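A minimal sketch of this pattern, assuming the standard DBPedia endpoint URL and a JSON Accept header:

```racket
#lang racket

(require net/url        ; call/input-url, string->url, get-pure-port
         net/uri-codec  ; uri-encode
         json)          ; string->jsexpr

;; Sketch of a sparql-query->hash style function: URL-encode the query,
;; send it to the DBPedia SPARQL endpoint, and parse the JSON response
;; into a Racket hash. The endpoint URL and header are assumptions.
(define (sparql-query->hash query)
  (call/input-url
   (string->url
    (string-append "https://dbpedia.org/sparql?query=" (uri-encode query)))
   get-pure-port
   (lambda (port) (string->jsexpr (port->string port)))
   '("Accept: application/sparql-results+json")))
```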
Lastly, there are two functions json->listvals and gd for processing the JSON response from DBPedia. The function json->listvals extracts the variable bindings from the SPARQL result and organizes them into lists. The function gd further processes these lists based on the number of variables in the query result, creating lists of lists which represent the variable bindings in a structured way. The sparql-dbpedia function serves as an interface to these functionalities, taking a SPARQL query string, executing the query via sparql-query->hash, and processing the results through gd to provide a structured output. This arrangement encapsulates the process of querying DBPedia and formatting the results, making it convenient for further use within a Racket program.
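The SPARQL JSON result format nests each row’s values under results → bindings. Continuing in the same module as the sketch above, a simplified stand-in for json->listvals and gd (the real functions organize results per variable, for up to four variables) might look like this:

```racket
;; Simplified sketch: turn a SPARQL JSON result hash (from string->jsexpr)
;; into one list of string values per result row, in the order of the
;; query's variables. Unbound OPTIONAL variables become empty strings.
(define (result-hash->rows result-hash)
  (define vars (hash-ref (hash-ref result-hash 'head) 'vars))
  (define bindings (hash-ref (hash-ref result-hash 'results) 'bindings))
  (for/list ([row bindings])
    (for/list ([v vars])
      (let ([b (hash-ref row (string->symbol v) #f)])
        (if b (hash-ref b 'value) "")))))
```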
We already saw most of the following code listing in the previous chapter Datastores. The following listings in this chapter will be updated in future versions of this live eBook when I finish writing the KGN application.
Part of solving this problem is constructing SPARQL queries as strings. We will look in some detail at one utility function, `sparql-dbpedia-for-person`, that constructs a SPARQL query string for fetching data from DBpedia about a specific person. The function takes one parameter, `person-uri`, which is expected to be the URI of a person in the DBpedia dataset. The query string is built by appending strings, including the dynamic insertion of the `person-uri` parameter value. Here’s a breakdown of how the code works:
- Function Definition: The function `sparql-dbpedia-for-person` is defined with one parameter, `person-uri`. This parameter is used to dynamically insert the person’s URI into the SPARQL query.
- String Appending (`@string-append`): The `@string-append` form is the at-exp reader’s syntax for `string-append` (enabled by `#lang at-exp racket`), not a custom function. It concatenates multiple strings to form the complete SPARQL query, combining the static parts of the query with the dynamic parts where `person-uri` is inserted.
- SPARQL Query Construction: The function constructs a SPARQL query with the following key components:
  - SELECT Clause: This part of the query specifies what information to return. It uses `GROUP_CONCAT` to aggregate multiple `?website` values into a single string, separated by `" | "`, and also selects the `?comment` variable.
  - OPTIONAL Clauses: Two OPTIONAL blocks are included:
    - The first block attempts to fetch English comments (`?comment`) associated with the person, filtering to ensure the language of the comment is English (`lang(?comment) = 'en'`).
    - The second block fetches external links (`?website`) associated with the person but filters out any URLs containing “dbpedia” (case-insensitive), likely to avoid self-references within DBpedia.
  - Dynamic URI Insertion: The `@person-uri` placeholder is replaced with the actual `person-uri` value passed to the function. This dynamically targets the SPARQL query at a specific DBpedia resource.
  - LIMIT Clause: The query is limited to return at most 4 results with `LIMIT 4`.
- Usage of the `@person-uri` Placeholder: The `@person-uri` placeholders within the query string mark where the `person-uri` parameter’s value is inserted. The at-exp reader performs this substitution when the string is built, so the final query string includes the specific URI of the person of interest.
In summary, the `sparql-dbpedia-for-person` function dynamically constructs a SPARQL query to fetch English comments and external links (excluding DBpedia links) for a given person from DBpedia, with the results limited to a maximum of 4 entries. The at-exp `@string-append` syntax allows for the dynamic insertion of the person’s URI into the query.
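To make the discussion above concrete, here is a minimal sketch of how such a function can be written with the at-exp reader. This is not the book’s exact listing: the RDF predicates used for comments and external links, and the overall query formatting, are assumptions based on common DBPedia usage.

```racket
#lang at-exp racket

;; Sketch of a sparql-dbpedia-for-person style query builder.
;; @string-append is the at-exp form of string-append; @person-uri inside
;; the braces splices in the argument's value. The predicates below
;; (rdfs:comment, dbo:wikiPageExternalLink) are assumptions, not the
;; book's exact listing.
(define (sparql-dbpedia-for-person person-uri)
  @string-append{
    SELECT
      (GROUP_CONCAT(?website ; SEPARATOR=" | ") AS ?websites)
      ?comment {
      OPTIONAL {
        @person-uri <http://www.w3.org/2000/01/rdf-schema#comment> ?comment .
        FILTER (lang(?comment) = 'en')
      } .
      OPTIONAL {
        @person-uri <http://dbpedia.org/ontology/wikiPageExternalLink> ?website .
        FILTER (!regex(str(?website), "dbpedia", "i"))
      }
    } LIMIT 4})

;; Example call; person-uri is a full URI wrapped in angle brackets:
;; (sparql-dbpedia-for-person "<http://dbpedia.org/resource/Steve_Jobs>")
```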
The function gd converts JSON data to Scheme nested lists and then extracts the values for up to four variables.
NLP Library
We implemented a library in the chapter Natural Language Processing that we use here.
Please make sure you have read that chapter before the following sections.
Implementation of KGN Application Code
The file Racket-AI-book-code/kgn/main.rkt contains library boilerplate, and the file Racket-AI-book-code/kgn/kgn.rkt contains the application code. The Racket code is structured for interacting with the DBPedia SPARQL endpoint to retrieve information about persons or places based on a user’s string query. The code is organized into several functions, each handling a different step of the process:
Query Parsing and Entity Recognition:
The parse-query function takes a string query-str and tokenizes it into a list of words after replacing certain characters (like “.” and “?”). It then checks for keywords like “who” or “where” to infer the type of query: person or place. Using find-human-names and find-place-names (from the NLP library described earlier), it extracts entity names from the tokens. Depending on the type of query and the entities found, it returns a list indicating the type and name of the entity, or unknown if no relevant entities are identified.
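A rough sketch of this dispatch logic follows; the require path and the one-argument signatures for find-human-names and find-place-names are assumptions for the sketch, so see the NLP chapter and kgn.rkt for the real versions.

```racket
#lang racket

;; The entity finders come from this book's NLP library; the module path
;; below is illustrative.
(require "../nlp/nlp.rkt")

;; Rough sketch of parse-query: clean and tokenize the query string, use
;; "who"/"where" to guess the entity type, then hand the tokens to the
;; NLP entity finders (one-argument calls are an assumed signature here).
(define (parse-query query-str)
  (define tokens
    (string-split
     (string-replace (string-replace query-str "." " ") "?" " ")))
  (define downcased (map string-downcase tokens))
  (cond
    [(member "who" downcased)
     (let ([names (find-human-names tokens)])
       (if (null? names) '(unknown) (list 'person (first names))))]
    [(member "where" downcased)
     (let ([places (find-place-names tokens)])
       (if (null? places) '(unknown) (list 'place (first places))))]
    [else '(unknown)]))
```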
SPARQL Query Construction and Execution:
The functions get-person-results and get-place-results take a name string, construct a SPARQL query to get information about the entity from DBPedia, execute the query, and process the results. They utilize the sparql-dbpedia-person-uri, sparql-query->hash, and json->listvals functions that we listed previously to construct the query, execute it, and convert the returned JSON data to a list, respectively.
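Condensed to its essentials, the person case chains the functions from the SPARQL client library; a hedged sketch, building on the sketches above:

```racket
;; Condensed sketch of get-person-results: build the URI/comment query for
;; a person name, run it against DBPedia, and convert the JSON result to
;; nested lists. The real function in kgn.rkt does more result shaping.
(define (get-person-results name)
  (let* ([query   (sparql-dbpedia-person-uri name)]
         [results (sparql-query->hash query)])
    (json->listvals results)))
```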
Query Interface:
The ui-query-helper function acts as the top-level utility for processing a string query to generate a SPARQL query, execute it, and return the results. It first calls parse-query to understand the type of query and the entity in question. Depending on whether the query is about a person or a place, it invokes get-person-results or get-place-results, respectively, to get the relevant information from DBPedia. It then returns a list containing the SPARQL query and the results, or #f if the query type is unknown.
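The overall control flow can be sketched as follows, building on the sketches above (only the person branch is shown; the place branch and the exact return shape in kgn.rkt may differ):

```racket
;; Sketch of ui-query-helper: parse the natural language query, dispatch on
;; the detected entity type, and return the generated SPARQL query together
;; with the results, or #f when the query type is unknown.
(define (ui-query-helper query-str)
  (match (parse-query query-str)
    [(list 'person name)
     (let ([query (sparql-dbpedia-person-uri name)])
       (list query (get-person-results name)))]
    ;; a (list 'place name) clause is handled analogously in kgn.rkt
    [_ #f]))
```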
This code structure facilitates the breakdown of a user’s natural language query into actionable SPARQL queries to retrieve and present information about identified entities from a structured data source like DBPedia.
The file Racket-AI-book-code/kgn/dialog-utils.rkt contains the user interface specific code for implementing a dialog box.
The local file sparql-utils.rkt contains additional utility functions for accessing information in DBPedia.
The local file kgn.rkt is the main program for this application.
The two screenshots seen earlier show the GUI application running.
Knowledge Graph Navigator Wrap Up
This KGN example was hopefully both interesting to you and simple enough in its implementation to use as a jumping-off point for your own projects.
I had the idea for the KGN application because I was spending quite a bit of time manually setting up SPARQL queries for DBPedia (and other public sources like WikiData) and I wanted to experiment with partially automating this process. I have experimented with versions of KGN written in Java, the Hy language (a Lisp running on Python, about which I wrote a short book), Swift, and Common Lisp; all four implementations take different approaches as I experimented with different ideas.
Conclusions
The material in this book was informed by my own work interests and experiences. If you enjoyed reading it and you make practical use of at least some of the material I covered, then I consider my effort to be worthwhile.
Racket is a language that many people use for both fun personal projects and for professional development. I have tried, dear reader, to make the case here that Racket is a practical language that integrates well with my work flows on both Linux and macOS.
Writing software is a combination of a business activity, promoting good for society, and an exploration of new ideas for self-improvement. I believe that there is sometimes a fine line between spending too many resources tracking many new technologies and getting stuck using old technologies at the expense of lost opportunities. My hope is that reading this book was an efficient and pleasurable use of your time, letting you try some new techniques and technologies that you had not considered before.
If we never get to meet in person or talk on the telephone, then I would like to thank you now for taking the time to read my book.