Table of Contents
- Cover Material, Copyright, and License
- Preface
- Part 1: Introduction and Short Examples
- Setting Up Swift for Command Line Development
- Background Information for Writing Swift Command Line Utilities
- Web Scraping
- Part 2: Large Language Models
- Using the OpenAI LLM APIs
- Using APIs for Anthropic Claude LLMs
- Using Groq APIs to Access Open Weight LLM Models
- Using the xAI Grok LLM
- Using Ollama to Run Local LLMs
- Using Apple’s MLX Framework to Run Local LLMs
- Part 3: Apple’s CoreML and NLP Libraries
- Deep Learning Introduction
- Natural Language Processing Using Apple’s Natural Language Framework
- Documents Question Answering Using OpenAI GPT4 APIs and a Local Embeddings Vector Database
- Extending the String Class
- Implementing a Local Vector Database for Document Embeddings
- Create Local Embeddings Vectors From Local Text Files With OpenAI GPT APIs
- Using Local Embeddings Vector Database With OpenAI GPT APIs
- Wrap Up for Using Local Embeddings Vector Database to Enhance the Use of GPT3 APIs With Local Documents
- Part 4: Knowledge Representation and Data Acquisition
- Linked Data and the Semantic Web
- Example Application: iOS and macOS Versions of my KnowledgeBookNavigator
- Book Wrap Up
Cover Material, Copyright, and License
Copyright 2022-2024 Mark Watson. All rights reserved. This book may be shared using the Creative Commons “share and share alike, no modifications, no commercial reuse” license.
This eBook will be updated occasionally so please periodically check the leanpub.com web page for this book for updates.
The first edition was released in the spring of 2022. The second edition was released in December 2024.
If you would like to support my work please consider purchasing my books on Leanpub and starring the git repositories that you find useful on GitHub. You can also interact with me on social media on Mastodon and Twitter.
Preface
Why use Swift for hacking AI? Common Lisp has been my go-to language for artificial intelligence development and research since 1982, and my move to Swift was a slow one. During this transition I prototyped a new project in parallel in both Swift and Common Lisp, weighing the advantages of each for my current requirements. The Swift version of this project, included in this book, runs on macOS, iOS, and iPadOS. The macOS version is available on Apple’s App Store. Several of the utilities developed in this book were used in this project.
Notes on the Second Edition
The second edition of this book deletes some of the old material and adds two new themes:
- A new Part II of the book that covers Large Language Models (LLMs). We use both commercial LLM APIs and local LLMs run with Ollama and Apple’s MLX framework.
- Several examples from the first edition are augmented using LLMs.
- As much as possible, I support some of the book examples as Swift Playgrounds, usable on iPads and Macs.
Book Structure
This book starts out slowly in Part I with simple examples showing how to access Swift library packages on GitHub and tips on writing Swift command line apps.
Part II will show you how to effectively integrate LLMs into your own applications.
Part III starts with a simple example using web scraping and commercial web search APIs. We then work through examples integrating web search with LLMs, and then show how we can modify web scraping applications to process specific topics and have better control over outputting structured data.
We then proceed to using Apple’s CoreML for Natural Language Processing (NLP), training and using your own CoreML models, using OpenAI’s GPT-4 APIs, and finally several semantic web/linked data examples. The book ends with the example macOS application Knowledge Graph Navigator. It is not my intention to cover in detail the use of SwiftUI for building iOS/iPadOS/macOS applications but I thought my readers might enjoy seeing several of the techniques covered in the book integrated into an example app.
I have used Common Lisp for AI research projects and for AI product development and delivery since 1982. There is something special about using a language for almost forty years. I now find Swift a compelling choice for several reasons:
- A flexible language with many features I rely on, such as closures and support for an interactive, functional programming style.
- Built-in support for deep learning neural network models for natural language processing, predictive models, etc.
- First class support for iOS and macOS development.
- Good support for server side applications hosted on Linux.
Swift is a programmer-efficient language: code is concise and easy to read, and high quality libraries from Apple and third parties mean that there is often less code to write. I will share with you my Swift development workflow, which combines interactive development of code in playgrounds, development of higher level libraries in text-only or command line applications, and my general strategy for writing iOS and macOS applications after low level and intermediate code is written and debugged.
Parts of this Book are Specific for macOS and iOS, with Some Support for Linux
Swift is a general purpose language that is well supported in macOS, iOS, and Linux, with some support in Windows. Here, we cover the use of Swift on macOS and iOS. Some of the examples in this book rely on libraries that are specifically available on macOS and iOS like CoreML and the NLP libraries. Several book examples also work on Linux, such as the examples using SQLite, the Microsoft Azure search APIs, web scraping, and semantic web/linked data.
Code for this Book
Because of the way the Swift Package Manager works, I organized all book examples that build libraries as separate GitHub repos so the libraries can be easily used in other book examples as well as your own software projects. The separate library GitHub repositories are:
- https://github.com/mark-watson/SparqlQuery_swift - SPARQL Swift library for my Swift AI book.
- https://github.com/mark-watson/QuestionAnswering_BERT_swift - modification of Apple’s question answering demo to use DBPedia.
- https://github.com/mark-watson/swift-coreml-wisconsin_data_create_model - create CoreML models from training data files of Wisconsin Cancer data.
- https://github.com/mark-watson/swift-coreml-wisconsin_data_predict_with_model - use the pretrained Wisconsin Cancer data model.
- https://github.com/mark-watson/ShellProcess_swift - library for spawning shell processes and capturing output to stdout.
- https://github.com/mark-watson/WebScraping_swift - library for scraping websites.
- https://github.com/mark-watson/OpenAI_swift - library for using OpenAI’s GPT3 APIs.
- https://github.com/mark-watson/Nlp_swift - library that uses pretrained CoreML NLP models.
- https://github.com/mark-watson/KGN - SwiftUI based application supporting macOS, iPadOS, and iOS. The macOS version is in Apple’s app store.
I suggest cloning all of these GitHub repositories right now so you can have the example source code at hand while reading this book.
All of the code examples are licensed using the Apache 2 license. You are free to reuse the book example code in your own projects (open source, commercial), with attribution of my copyright and the Apache 2 license.
Except for the last SwiftUI example application, all sample programs are written as command line utilities. I considered using Swift playgrounds for some of the examples but decided that packaging as a combination of libraries and command line utilities would tend to make the example code more useful for your own projects.
http://www.knowledgegraphnavigator.com/
Author’s Background
I live in Flagstaff, Arizona with my wife and pet parrot. Our children and grandchildren live in California, Rhode Island, and Arizona.
I have written 20+ books, mostly about artificial intelligence. I have over 50 US patents.
I write about technologies that I have used throughout my career: knowledge representation using semantic web and linked data, machine learning and deep learning, and natural language processing. I am grateful for the companies where I have worked (SAIC, Google, Capital One, Olive AI, Babylist, etc.) that have supported this work since 1982.
As an author, I hope that the material in this book entertains you and will be useful in your work.
A Request from the Author
I spent time writing this book to help you, dear reader. I release this book under the Creative Commons license and set the minimum purchase price to Free in order to reach the most readers. If you found this book on the web (or it was given to you) and if it provides value to you then please consider doing the following to support my future writing efforts and also to support future updates to this book:
- Purchase a copy of this book or any other of my leanpub books at https://leanpub.com/u/markwatson
I enjoy writing and your support helps me write new editions and updates for my books and to develop new book projects. Thank you!
Cover Art
The cover picture was taken by WikiMedia Commons user Keta and is available for use under the Creative Commons License CC BY-SA 2.5.
CoreML Libraries Used in this Book
- CoreML general overview: https://developer.apple.com/documentation/coreml
- MLClassifier https://developer.apple.com/documentation/createml/mlclassifier
- MLTextClassifier https://developer.apple.com/documentation/createml/mltextclassifier
- NLModel https://developer.apple.com/documentation/naturallanguage/nlmodel
- Natural Language Framework https://developer.apple.com/documentation/naturallanguage
- MLCustomLayer https://developer.apple.com/documentation/coreml/mlcustomlayer
Swift 3rd Party Libraries
We use the following 3rd party libraries:
Acknowledgements
I thank my wife Carol for editing this manuscript, finding typos, and suggesting improvements.
Part 1: Introduction and Short Examples
We begin with just enough of an introduction to Swift to understand the programming examples. After introducing the language we will look at a few short examples that provide code and techniques we use later in the book:
- Creating Swift projects
- Writing command line utilities
- Web scraping
Setting Up Swift for Command Line Development
Except for the last chapter in this book, which uses Xcode for developing a complete macOS/iOS/iPadOS example application, I assume that you will work through the book examples using the command line and your favorite editor. If you want to use Xcode for the command line examples, you can open an example’s Package.swift file from the command line, which launches Xcode, for example with the command open Package.swift.
You will notice that most of the examples are command line apps or libraries with command line test programs; the README.md files in the example directories provide instructions for building and running on the command line.
You can also run Xcode and from the File Menu open an example Package.swift file. You can then use the Product / Test menu to run the test code for the example. You might need to use the View / Debug Area / Active Console menu to show the output area.
I assume that you are familiar with the Swift programming language and Xcode.
Swift is a general purpose language that is well supported in macOS and iOS, with good support for Linux, and with some support in Windows. For the purposes of this book, we are only considering the use of Swift on macOS and iOS. Most of the examples in this book rely on libraries that are specifically available on macOS and iOS like CoreML and the NLP libraries.
There are great free resources for the Swift language on the web, in other commercial books, and in Apple’s free Swift books. Here I provide just enough material on the Swift language for you to understand and work with the book examples. After working through this book’s material you will be able to add machine learning, natural language processing, and knowledge representation to your applications. There are parts of the Swift language that we don’t need for the material here, and we won’t cover them.
Installing Swift Packages
We will use the Swift Package Manager. You should pause reading now and install the Swift Package Manager if you have not already done so.
I occasionally use https://vapor.codes web framework library (although not in this book). We use this 3rd party library as an example for building a library locally from source code. Start by cloning the git repository https://github.com/vapor/vapor. Then:
I don’t usually install libraries locally from source code unless I am curious about the implementation and want to read through the source code. Later we will see how to reference Swift libraries hosted on GitHub in a project’s Package.swift file.
Creating Swift Packages
We will cover using the Swift Package Manager to create new packages using the command line here. Later we will create projects using Apple’s Xcode IDE when we develop the example application Knowledge Graph Navigator.
You will want to use the Swift Package Manager documentation for reference.
We will be generating executable projects and library projects (with a sample main program). The command for generating the stub of an executable application project is, for example, swift package init --type executable:
and the stub of a library with a demo main program is built with, for example, swift package init --type library:
Accessing Libraries that You Write in Other Projects
You can reference Swift libraries using the Package.swift file for each of your projects. We will look at parts of two Package.swift files here. The first is for my SPARQL query client library that we will develop in a later chapter. This library, SparqlQuery_swift, is used in both the Knowledge Graph Navigator (KGN) macOS/iOS/iPadOS example application and the text-only version KnowledgeGraphNavigator_swift.
This Swift package file is used to declare a Swift package named “SparqlQuery_swift”. The package contains one library target named “SparqlQuery_swift” and one test target named “SparqlQuery_swiftTests”. The library target depends on the “SwiftyJSON” package, which is specified as a dependency in the “dependencies” section of the package.
The “products” section defines the products that this package provides. In this case, the package provides a library product named “SparqlQuery_swift”. The library is built from the source code in the “SparqlQuery_swift” target.
The “dependencies” section lists the packages that this package depends on. In this case, it depends on the “SwiftyJSON” package, which is specified as a Git repository URL.
The “targets” section lists the targets that are part of the package. In this case, there are two targets: “SparqlQuery_swift” and “SparqlQuery_swiftTests”. The “SparqlQuery_swift” target depends on “SwiftyJSON”. The “SparqlQuery_swiftTests” target depends on both “SparqlQuery_swift” and “SwiftyJSON”.
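To make this concrete, here is a rough sketch of what such a Package.swift manifest looks like; the SwiftyJSON repository URL, version, and tools version are my assumptions, so check the actual file in the SparqlQuery_swift repository:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "SparqlQuery_swift",
    products: [
        // The library product that other packages can depend on.
        .library(name: "SparqlQuery_swift", targets: ["SparqlQuery_swift"]),
    ],
    dependencies: [
        // Assumed URL and version for SwiftyJSON.
        .package(url: "https://github.com/SwiftyJSON/SwiftyJSON.git", from: "5.0.0"),
    ],
    targets: [
        .target(name: "SparqlQuery_swift", dependencies: ["SwiftyJSON"]),
        .testTarget(name: "SparqlQuery_swiftTests",
                    dependencies: ["SparqlQuery_swift", "SwiftyJSON"]),
    ]
)
```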
The Package.swift file for the text-only version KnowledgeGraphNavigator_swift is shown here:
This Swift package file is used to declare a Swift package named “KnowledgeGraphNavigator_swift”. The package contains one target named “KnowledgeGraphNavigator_swift”. The target depends on the “SparqlQuery_swift”, “Nlp_swift”, “SwiftyJSON”, and “SwiftSoup” packages, which are specified as dependencies in the “dependencies” section of the package.
The “platforms” section specifies the minimum platform version that the package supports. In this case, the package supports macOS version 10.15 and later.
The “dependencies” section lists the packages that this package depends on. In this case, it depends on four packages:
- SwiftyJSON: a Swift library for working with JSON data.
- SwiftSoup: a Swift library for parsing HTML and XML documents.
- SparqlQuery_swift: a Swift library for querying RDF data using the SPARQL query language.
- Nlp_swift: a Swift library for natural language processing.
The “targets” section lists the targets that are part of the package. In this case, there is one target named KnowledgeGraphNavigator_swift. The target depends on SparqlQuery_swift, Nlp_swift, SwiftyJSON, and SwiftSoup.
Hopefully you have cloned the git repositories for each book example and understand how I have configured the examples for your use.
For the rest of this book, you can read chapters in any order. In some cases, earlier chapters will contain implementations of libraries used in later chapters.
Background Information for Writing Swift Command Line Utilities
This short chapter contains example code and utilities for writing command line programs, using external shell processes, and performing simple file I/O.
Using Shell Processes
The library for using shell processes is one of my GitHub projects so you can include it in other projects using:
You can clone this repository if you want to have the source code at hand:
The following listing shows the library implementation. In line 5 we use the Process constructor from the Apple Foundation library to get a new process object and set its executableURL and arguments fields. In lines 8 and 9 we create a new Unix style pipe to capture the output from the shell process we are starting and attach it to the process. After we run the task, we capture the output and return it as the value of the function run_in_shell.
The function named run_in_shell takes two parameters: commandPath (a string representing the path to the executable command to be run) and argList (an array of strings representing the arguments to be passed to the command). The function returns a string that represents the output of the command.
Function run_in_shell first creates an instance of the Process class, which is used to run the command. It sets the executableURL property of the task instance to the commandPath value and sets the arguments property to the argList value. This function then creates a Pipe instance, which is used to capture the output of the command. It sets the standardOutput property of the task instance to the Pipe instance.
The function then runs the command using the run() method of the task instance. If the command runs successfully, the function reads the output of the command from the Pipe instance using the readDataToEndOfFile() method of the fileHandleForReading property. It then converts the output data to a string using the String(data:encoding:) initializer.
If the output string is not empty, this function trims leading and trailing whitespace and returns the resulting string. Otherwise, the function returns an empty string.
Overall, this function provides a simple way to run a shell command and capture its output in a Swift program.
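The following is a minimal sketch of a run_in_shell function along these lines; it is not the exact listing from the ShellProcess_swift repository, but it shows the Process and Pipe calls described above:

```swift
import Foundation

// Run a command in a shell process and return its standard output as a String.
// Assumes the package's minimum deployment target is macOS 10.13 or later
// (required for Process.executableURL and Process.run()).
public func run_in_shell(commandPath: String, argList: [String]) -> String {
    let task = Process()
    task.executableURL = URL(fileURLWithPath: commandPath)
    task.arguments = argList

    // Capture standard output through a pipe attached to the process.
    let pipe = Pipe()
    task.standardOutput = pipe

    do {
        try task.run()
    } catch {
        return ""
    }

    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    if let output = String(data: data, encoding: .utf8), !output.isEmpty {
        return output.trimmingCharacters(in: .whitespacesAndNewlines)
    }
    return ""
}
```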
As in most examples in this book, we use the XCTest testing framework to run the example code at the command line using swift test. Running swift test does an implicit swift build.
This Swift unit test function is part of the test suite for the ShellProcess_swift package. The function is named testExample, and the test file uses an @testable import statement so that it can exercise internal implementation details of the ShellProcess_swift package.
The function uses the run_in_shell function to run three shell commands: ps a, ls ., and sleep 2. It prints the output of each command to the console.
This test function is an example of a functional test case. It doesn’t actually verify that the functions being tested produce the correct results. Instead, it’s a simple way to visually inspect the output of the commands and ensure that they are working as expected.
The allTests variable is an array of tuples that map the test function names to the corresponding function references. This variable is used by the XCTest framework to discover and run the test functions.
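A sketch of what such a test file can look like follows; the exact command paths and any assertions used in the repository may differ:

```swift
import XCTest
@testable import ShellProcess_swift

final class ShellProcess_swiftTests: XCTestCase {
    func testExample() {
        // Visually inspect the output of a few shell commands.
        print(run_in_shell(commandPath: "/bin/ps", argList: ["a"]))
        print(run_in_shell(commandPath: "/bin/ls", argList: ["."]))
        print(run_in_shell(commandPath: "/bin/sleep", argList: ["2"]))
    }

    // Used by XCTest on Linux to discover the test functions.
    static var allTests = [
        ("testExample", testExample),
    ]
}
```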
The test output (with some text removed for brevity) is:
FileIO Examples
This file I/O example uses the ShellProcess_swift library we saw in the last section, so if you create your own Swift project with the following code listing, you will have to add this dependency in the Package.swift file.
When writing command line Swift programs you will often need to do simple file IO so let’s look at some examples here:
The OS version checks in this Swift code use the #available availability condition.
The #available condition is used to run code only when the required APIs or features are available in the running operating system version. In this case, the code inside the #available(OSX 10.13, *) block will only be executed if the running operating system is macOS 10.13 or later.
If the running operating system version is earlier than 10.13, the code inside the #available block will be skipped and the program will exit without running the test_files_demo() function.
These operating system version checks are done to ensure that the program only runs code on operating systems that support the APIs and features used by that code. This helps to prevent runtime errors and crashes on older operating system versions that may not support the required features.
This function demonstrates how to write to and read from files using the write(toFile:atomically:encoding:) and String(contentsOfFile:) methods, how to list files in the current directory using the ls shell command, and how to remove files using the rm shell command.
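Here is a rough sketch of a test_files_demo function along these lines; the file name and exact commands are my own choices, and the listing in the book repository may differ:

```swift
import Foundation
// Assumes a Package.swift dependency on ShellProcess_swift for run_in_shell.
import ShellProcess_swift

func test_files_demo() {
    let path = "test_file.txt"

    // Write a string to a file, then read it back.
    try? "Hello from Swift file I/O".write(toFile: path, atomically: true, encoding: .utf8)
    if let contents = try? String(contentsOfFile: path) {
        print("File contents:", contents)
    }

    // List files in the current directory using a shell command.
    print(run_in_shell(commandPath: "/bin/ls", argList: ["."]))

    // Remove the temporary file using a shell command.
    print(run_in_shell(commandPath: "/bin/rm", argList: [path]))
}

if #available(OSX 10.13, *) {
    test_files_demo()
} else {
    print("This demo requires macOS 10.13 or later.")
}
```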
I created a temporary Swift project with the previous code listing and a Package.swift file. I built and ran this example using the swift command line tool.
Unlike the example in the last section where we built a reusable library with a test program, here we have a standalone program contained in a single file so we will use swift run to build and run this example:
Swift REPL
There is an example of using the Swift REPL at the end of the next chapter on web scraping. For reference, you can start a REPL from the command line by running swift repl:
You can import packages and interactively enter Swift expressions, including defining functions.
In the next chapter we will look at one more simple example, building a web scraping library, before getting to the machine learning and NLP part of the book.
Web Scraping
It is important to respect the property rights of web site owners and abide by their terms and conditions for use. This Wikipedia article on Fair Use provides a good overview of using copyright material.
The web scraping code we develop here uses the Swift library SwiftSoup that is loosely based on the BeautifulSoup libraries available in other programming languages.
For my work and research I have been most interested in using web scraping to collect text data for natural language processing, but other common applications include writing AI news collection and summarization assistants, and trying to predict stock prices based on comments in social media (which is what we did at Webmind Corporation in 2000 and 2001).
I wrote a simple web scraping library that is available at https://github.com/mark-watson/WebScraping_swift and that you can use in your projects by putting the following dependency in your Package.swift file:
Here is the main implementation file for the library:
This Swift code defines several functions that can be used to scrape information from a web page located at a given URI.
The webPageText function takes a URI as input and returns the plain text content of the web page located at that URI. It first checks if the URI is valid and then reads the content of the web page using the String(contentsOf:) initializer. It then uses the parse method of the SwiftSoup library to parse the HTML content of the page and extract the plain text.
The webPageH1Headers and webPageH2Headers functions use the webPageHeadersHelper function to extract the H1 and H2 header texts respectively from the web page located at a given URI. The webPageHeadersHelper function uses the same technique as the webPageText function to read and parse the HTML content of the page. It then selects the headers using the specified headerName parameter and extracts the text of the headers.
The webPageAnchors function extracts all the anchor tags <a> from the web page located at a given URI, along with their corresponding text and URI. It also uses the webPageHeadersHelper function to read and parse the HTML content of the page, selects the anchor tags using the “a” selector, and extracts their text and href attributes.
Overall, these functions provide a simple way to scrape information from a web page and extract specific information such as plain text, header texts, and anchor tags.
I wrote these utility functions to get the plain text from a web site, HTML header text, and anchors. You can clone this library and extend it for other types of HTML elements you may need to process.
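As a concrete illustration, here is a condensed sketch of two of these functions showing the SwiftSoup calls involved; the actual file in the WebScraping_swift repository implements the full set of functions with more complete error handling:

```swift
import Foundation
import SwiftSoup

// Return the plain text of the web page at a URI, or "" if anything fails.
public func webPageText(uri: String) -> String {
    guard let url = URL(string: uri),
          let html = try? String(contentsOf: url, encoding: .utf8),
          let doc = try? SwiftSoup.parse(html),
          let text = try? doc.text() else { return "" }
    return text
}

// Helper: return the text of all headers of a given type, e.g. "h1" or "h2".
public func webPageHeadersHelper(uri: String, headerName: String) -> [String] {
    guard let url = URL(string: uri),
          let html = try? String(contentsOf: url, encoding: .utf8),
          let doc = try? SwiftSoup.parse(html),
          let headers = try? doc.select(headerName) else { return [] }
    return headers.array().compactMap { try? $0.text() }
}

public func webPageH1Headers(uri: String) -> [String] {
    webPageHeadersHelper(uri: uri, headerName: "h1")
}
```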
The test program shows how to call the APIs in the library:
This Swift test program tests the functionality of the WebScraping_swift library. It defines two test functions: testGetWebPage and testToShowSwiftSoupExamples.
The testGetWebPage function uses the webPageText function to retrieve the plain text content of my website located at “https://markwatson.com”. It then prints the retrieved text to the console.
The testToShowSwiftSoupExamples function demonstrates the use of webPageH1Headers, webPageH2Headers, and webPageAnchors functions on the same website. It extracts and prints the H1 and H2 header texts and anchor tags of the same website.
The allTests variable is an array of tuples that map the test function names to the corresponding function references. This variable is used by the XCTest framework to discover and run the test functions.
Overall, this Swift test program demonstrates how to use the functions defined in the WebScraping_swift library to extract specific information from a web page.
Here we run the unit tests (with much of the output not shown for brevity):
Running in the Swift REPL
This chapter finishes a quick introduction to using Swift and Swift packages for command line utilities. The remainder of this book comprises machine learning, natural language processing, and semantic web/linked data examples.
Part 2: Large Language Models
In this part we cover:
- Commercial OpenAI LLM APIs
- Commercial Anthropic LLM APIs
- Accessing open weight models using the commercial Groq service
- Accessing xAI’s Grok model via an API
- Accessing local LLMs using Ollama
- Using Local LLMs with Apple’s MLX Framework
Using the OpenAI LLM APIs
I have been working as an artificial intelligence practitioner since 1982 and the capability of Large Language Models (LLMs) is unlike anything I have seen before. I managed a deep learning team at Capital One in 2017-2019 where we used precursors of Transformer-based models like OpenAI’s ChatGPT and Anthropic’s Claude.
You will need to apply to OpenAI for an access key at:
The GitHub repository for this example is:
I recommend reading the online documentation for the APIs to see all of the capabilities of the OpenAI APIs. Let’s start by jumping into the example code, which is in the GitHub repository https://github.com/mark-watson/OpenAI_swift that you can use in your projects.
The library that I wrote for this chapter supports four functions: completing text, summarizing text, answering general questions, and getting embeddings for text. The gpt-4o-mini model that we will use here is very inexpensive and capable.
You need to request an API key (I had to wait a few weeks to receive my key) and set the value of the environment variable OPENAI_KEY to your key. You can add a statement like:
to your .profile or other shell resource file that contains your key value (the above key value is made-up and invalid).
The file Sources/OpenAI_swift/OpenAI_swift.swift contains the source code (code description follows the listing):
This Swift implementation provides a streamlined interface to OpenAI’s API services, focusing primarily on chat completions and text embeddings functionality. The code is structured around a central OpenAI struct that encapsulates all API interactions and provides a clean, type-safe interface for making requests.
Core Architecture
The implementation follows a modular design pattern, separating concerns between network communication, request/response handling, and utility functions. It utilizes Swift’s strong type system through dedicated request models and leverages environment variables for secure API key management.
Key Features
Authentication and Configuration
The client automatically retrieves the OpenAI API key from environment variables, providing a secure way to handle authentication credentials. The base URL is configured as a constant, making it easy to modify for different environments or API versions.
Chat Completions
The chat completion functionality supports the GPT-4 model family, allowing for structured conversations through an array of messages. Each message contains a role (system, user, or assistant) and content. The implementation provides fine-grained control over:
- Maximum token output
- Temperature settings for response randomness
- Message context management
Text Embeddings
The embeddings feature implements OpenAI’s text-embedding-ada-002 model, converting text inputs into high-dimensional vector representations. These embeddings can be used for:
- Semantic search
- Text similarity comparisons
- Document classification
- Other natural language processing tasks
Utility Functions
The implementation includes pre-built utility functions for common use cases:
- Text summarization with customizable length
- Question-answering with concise responses
- General text completions
Technical Implementation Details
Network Communication
The networking layer uses URLSession with a synchronous approach via DispatchSemaphore. While this ensures straightforward usage, it’s worth noting that this approach should be carefully considered for production environments where asynchronous communication might be more appropriate.
Error Handling
The implementation includes basic error handling through Swift’s optional binding and guard statements, providing graceful fallbacks for common failure scenarios. The embedding function, for instance, returns a default value rather than throwing an error when processing fails.
Data Parsing
JSON parsing is handled through a combination of JSONEncoder for requests and JSONSerialization for responses, with careful optional chaining to safely handle malformed or unexpected responses.
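To make the description concrete, here is a compressed sketch of the chat completion path; the actual file in the OpenAI_swift repository differs in details such as helper functions, default parameter values, and the embeddings code:

```swift
import Foundation

struct OpenAI {
    // The API key is read from the environment, as described above.
    static let apiKey = ProcessInfo.processInfo.environment["OPENAI_KEY"] ?? ""
    static let baseURL = "https://api.openai.com/v1/"

    struct Message: Codable {
        let role: String      // "system", "user", or "assistant"
        let content: String
    }

    private struct ChatRequest: Codable {
        let model: String
        let messages: [Message]
        let max_tokens: Int
        let temperature: Double
    }

    // Synchronous POST helper using URLSession and a DispatchSemaphore.
    private static func makeRequest(endpoint: String, body: Data) -> Data? {
        guard let url = URL(string: baseURL + endpoint) else { return nil }
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.addValue("application/json", forHTTPHeaderField: "Content-Type")
        request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.httpBody = body

        var result: Data?
        let semaphore = DispatchSemaphore(value: 0)
        URLSession.shared.dataTask(with: request) { data, _, _ in
            result = data
            semaphore.signal()
        }.resume()
        semaphore.wait()
        return result
    }

    // Send a chat request and return the first choice's message content.
    static func chat(messages: [Message], model: String = "gpt-4o-mini",
                     maxTokens: Int = 200, temperature: Double = 0.7) -> String {
        let body = ChatRequest(model: model, messages: messages,
                               max_tokens: maxTokens, temperature: temperature)
        guard let data = try? JSONEncoder().encode(body),
              let response = makeRequest(endpoint: "chat/completions", body: data),
              let json = try? JSONSerialization.jsonObject(with: response) as? [String: Any],
              let choices = json["choices"] as? [[String: Any]],
              let message = choices.first?["message"] as? [String: Any],
              let content = message["content"] as? String else { return "" }
        return content
    }
}
```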
Running Tests
The file SWIFT_BOOK/OpenAI_swift/Tests/OpenAI_swiftTests/OpenAI_swiftTests.swift contains test code:
Output from this test code is:
Using APIs for Anthropic Claude LLMs
Here I decided not to write a new client library for the Anthropic APIs since there are several existing high quality libraries for accessing the Anthropic Claude APIs.
This is not a strong recommendation of one Anthropic client library over another, but I very much enjoy using the following project because of the simplicity of its API:
My examples using this library to access the Anthropic Claude APIs can be found here:
You need to set the following environment variable to your personal Anthropic API key:
that you can get by creating an account:
Note that there is no library implemented in this chapter.
Running the examples
All of the examples are packaged as Swift tests so git clone my examples repository https://github.com/mark-watson/Anthropic_swift_examples and run:
The test Swift source file defines a test class (just the first few lines shown here):
If you print the value of response you see:
If you print the value of response.content you see:
For normal use you want just the string contents of the model’s response to your prompt, so use:
That outputs:
In the general case of the Claude model returning images, tools used, and tool results, use code like this:
Using Groq APIs to Access Open Weight LLM Models
Groq develops custom silicon for fast LLM inference.
Groq’s API service supports a variety of openly available models, including:
- Llama 3.1 Series: Models like llama-3.1-70b-versatile, llama-3.1-8b-instant, and others, offering up to 128K context windows.
- Llama 3.2 Vision Series: Multimodal models such as llama-3.2-90b-vision-preview and llama-3.2-11b-vision-preview, capable of processing both text and image inputs.
- Llama 3 Groq Tool Use Models: Specialized for function calling, including llama3-Groq-70b-8192-tool-use-preview and llama3-Groq-8b-8192-tool-use-preview.
- Mixtral 8x7b: A model with a 32,768-token context window, suitable for extensive context applications.
- Gemma Series: Models like gemma2-9b-it and gemma-7b-it, each with an 8,192-token context window.
- Whisper Series: Models such as whisper-large-v3 and whisper-large-v3-turbo, designed for audio transcription and translation tasks.
To obtain an API key, visit Groq’s API keys management page:
The code for this chapter can be found here:
Implementation of a Client Library for the Groq APIs
Groq supports the OpenAI APIs so the following client library for Groq is similar to what I wrote previously for OpenAI:
Explanation of the Swift Groq API Code
1. Setting Up the API
- The Groq struct is designed to interact with the Groq API, mimicking OpenAI’s API.
- API key: retrieved from the environment variable GROQ_API_KEY.
- Base URL: the API’s base endpoint is https://api.groq.com/openai/v1/.
- Model: the model being used is predefined as llama3-8b-8192.
2. Structure of a Chat Request
- A private struct, ChatRequest, defines the JSON payload for requests:
- model: the model name.
- messages: a history of the conversation.
- max_tokens: limits the number of tokens in the response.
- temperature: controls randomness in the responses.
3. Making an HTTP POST Request
- The makeRequest function handles API communication:
- Constructs the full URL by appending the endpoint to the base URL.
- Sets up a POST request with a JSON content type and an Authorization header using the API key.
- Encodes the request body into JSON using JSONEncoder.
- Sends the request asynchronously but waits for the response using a semaphore.
- Parses the response data into a string.
4. Chat Functionality
- The chat function simplifies sending messages to the API:
- Constructs a ChatRequest object with the given parameters.
- Sends the request to the /chat/completions endpoint.
- Processes the JSON response to extract the model’s reply from the choices array.
5. Usage Functions
- summarize: summarizes a text. Sends a conversation history where the system is described as “a helpful assistant that summarizes text concisely.”
- questionAnswering: answers a user-provided question directly. Sends a conversation history where the system is described as “a helpful assistant that answers questions directly and concisely.”
- completions: generates continuations for a given user prompt.
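As a compact sketch, the Groq-specific configuration and request payload look roughly like the following; the networking and JSON handling mirror the OpenAI client sketched earlier, and the exact property and function names in the repository may differ:

```swift
import Foundation

struct Groq {
    // Groq exposes an OpenAI-compatible API.
    static let apiKey = ProcessInfo.processInfo.environment["GROQ_API_KEY"] ?? ""
    static let baseURL = "https://api.groq.com/openai/v1/"
    static let model = "llama3-8b-8192"

    private struct ChatRequest: Codable {
        let model: String
        let messages: [[String: String]]   // conversation history: role + content
        let max_tokens: Int
        let temperature: Double
    }

    // chat(...) posts a ChatRequest to the "chat/completions" endpoint and
    // extracts the reply from the "choices" array, as in the OpenAI client.
}
```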
Running the Tests
Here is the test/example code for this library:
Here is sample output from this example use of the library:
Using the xAI Grok LLM
xAI’s Grok is a large language model (LLM) developed by Elon Musk’s AI startup, xAI, to compete with leading AI systems like OpenAI’s GPT-4. Launched in 2023, Grok is designed to handle a variety of tasks, including answering questions, assisting with writing, and solving coding problems. It is integrated with the social media platform X (formerly Twitter), providing users with real-time information and a conversational AI experience.
Grok has undergone several iterations, with Grok-2 being released in August 2024. This version introduced image generation capabilities, enhancing its versatility. xAI has also made Grok-1 open-source, allowing developers to access its weights and architecture for further research and application development.
To support Grok’s development, xAI has invested in substantial computational resources, including the Colossus supercomputer, which utilizes 100,000 Nvidia H100 GPUs, positioning it as one of the most powerful AI training systems globally.
Implementation of a Grok API Client Library
The code for my xAI Grok client code can be found here:
The Grok API is similar to the OpenAI APIs so I copied the code we saw earlier that I wrote for OpenAI and made the simple modifications required to access the Grok APIs (code discussion appears after this listing):
This Swift code defines a client library, X_GROK, to interact with the xAI Grok Large Language Model (LLM) API. It leverages Swift’s Foundation framework to handle HTTP requests and JSON encoding/decoding. The library retrieves the API key from the environment variable X_GROK_API_KEY and sets the base URL for the API. It specifies a default model, grok-beta, for generating responses.
The core functionality is encapsulated in the makeRequest function, which constructs and sends HTTP POST requests to the API. It accepts an endpoint and a request body conforming to the Encodable protocol. The function sets the necessary HTTP headers, including Content-Type and Authorization, and encodes the request body into JSON. To handle the asynchronous nature of network calls synchronously, it employs a semaphore, ensuring the function waits for the response before proceeding. The response is then returned as a string.
The chat function utilizes makeRequest to send chat messages to the API. It constructs a ChatRequest struct with parameters like the model, messages, maximum tokens, and temperature. After receiving the response, it parses the JSON to extract the generated content. Additionally, the code provides utility functions summarize, questionAnswering, and completions which use the chat function to perform specific tasks such as text summarization, question answering, and text completion, respectively.
Here is the test/example code for this library:
Here is the example code output:
Using Ollama to Run Local LLMs
Ollama is a program and framework written in Go that allows you to download models, run them on the command line, and call them using a REST-style interface. You need to download the Ollama executable for your operating system at https://ollama.com.
Similarly to our use of a third party library for accessing the Anthropic Claude models, here we will not write a wrapper library. The example code for this chapter is in the test code for the Swift project in the GitHub repository https://github.com/mark-watson/Ollama_swift_examples.
We use the library in the GitHub repository https://github.com/mattt/ollama-swift.
Running the Ollama Service
Assuming you have Ollama installed, download the following model, which requires about two gigabytes of disk space:
When the model is downloaded it is also cached for future use on your laptop.
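The ollama-swift library wraps Ollama’s local REST service. As background, here is a minimal sketch of calling that REST endpoint directly with URLSession; it assumes Ollama is running on its default port 11434 and that a model (the name llama3.2 below is just a placeholder) has already been pulled:

```swift
import Foundation

// One-shot, non-streaming generation against the local Ollama REST API.
func ollamaGenerate(model: String, prompt: String) -> String {
    guard let url = URL(string: "http://localhost:11434/api/generate") else { return "" }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    let payload: [String: Any] = ["model": model, "prompt": prompt, "stream": false]
    request.httpBody = try? JSONSerialization.data(withJSONObject: payload)

    var answer = ""
    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        if let data = data,
           let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
           let response = json["response"] as? String {
            answer = response
        }
        semaphore.signal()
    }.resume()
    semaphore.wait()
    return answer
}

// Example usage (the model name is a placeholder):
// print(ollamaGenerate(model: "llama3.2", prompt: "Why is the sky blue?"))
```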
Here is the test/example code we will run:
The output looks like:
The ollama-swift library also supports text generation. You can do single shot text generation using the code in the previous example, but with only one user call, for example:
The output looks like:
Ollama Wrap Up
This is a short chapter but an important one. I do over half my work with LLMs running locally on my laptop using Ollama, with the rest of my work using OpenAI, Anthropic, and Groq commercial APIs.
Using Apple’s MLX Framework to Run Local LLMs
Apple’s MLX framework is an efficient way to use LLMs embedded in applications written in Swift using the SwiftUI user interface library for macOS, iOS, and iPadOS.
It is difficult to create simple command line Swift apps using MLX but there are several complete MLX, Swift, and SwiftUI demo applications that you can use to start your own projects. Here we will use the LLMEval application from the GitHub repository https://github.com/ml-explore/mlx-swift-examples.
MLX Framework History
Apple’s MLX framework, introduced in December 2023, is a key part of Apple’s strategy to support AI on its hardware platforms by leveraging the unique capabilities of Apple Silicon, including the M1, M2, M3, and M4 series. Designed as an open-source, NumPy-like array framework, MLX optimizes machine learning workloads, particularly large language models (LLMs), by utilizing Apple Silicon’s unified memory architecture that integrates the CPU, GPU, Neural Engine, and shared memory. This architecture eliminates data transfer bottlenecks between the CPU and GPU, enabling faster and more efficient ML tasks, such as training and deploying LLMs directly on devices like MacBooks and iPhones, especially with large datasets. MLX aligns with Apple’s privacy-focused approach by supporting on-device processing, enhancing performance for applications like natural language processing, speech recognition, and content generation, while offering a familiar environment for Python or Swift ML engineers who know frameworks like NumPy and PyTorch.
MLX Resources on GitHub
In this chapter we will look at an example application that is part of the Swift MLX Examples project. After working through this example, the following resources on GitHub are worth looking at:
- https://github.com/ml-explore/mlx-swift: The Swift API for MLX, enabling integration with Swift-based projects.
- https://github.com/ml-explore/mlx-swift-examples: Examples showcasing the use of MLX with Swift.
You can find the documentation here:
https://swiftpackageindex.com/ml-explore/mlx-swift/0.18.0/documentation/mlx.
These repositories provide a comprehensive set of tools and examples to effectively utilize MLX for machine learning tasks on Apple silicon. There are many other repositories for MLX and Python; if you need to perform tasks like fine-tuning an MLX model, that task should probably be done using Python.
Example Application for MLX Swift Examples Repository
You will want to download the complete MLX Swift examples repository:
Open the top level Xcode project by:
Here is the file browser view of this project:
Running the LLMEval project:
Initially the model is downloaded and cached on your laptop for future use. Here is the app used to solve a simple word problem:
Analysis of Swift and SwiftUI Code in the LLMEval Application
This example is part of the Swift MLX Examples project that currently has twenty contributors and a thousand stars on GitHub https://github.com/ml-explore/mlx-swift-examples.
Unfortunately the SwiftUI user interface code is mixed in with the code that uses MLX. Let’s walk through the code:
Here is a walk-through of a Swift-based program using Apple’s frameworks for Machine Learning and Language Models, with the code interspersed with explanations.
Imports
These imports bring in essential libraries:
- LLM and MLX for working with language models.
- MarkdownUI for rendering Markdown content.
- SwiftUI for creating the user interface.
- Tokenizers for tokenizing text.
The ContentView Struct
The ContentView struct defines the main interface of the app.
State Variables
In this code snippet:
- @State allows the view to track changes in the prompt and llm instances.
- @Environment fetches device statistics, such as GPU memory usage.
Display Style Enum
In this code snippet:
- displayStyle defines whether the output is plain text or Markdown.
- A segmented picker toggles between the two styles.
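A small standalone sketch of this idea (not the actual LLMEval source; the type and property names are mine) looks like this:

```swift
import SwiftUI

// Whether generated output is rendered as plain text or as Markdown.
enum DisplayStyle: String, CaseIterable, Identifiable {
    case plain, markdown
    var id: Self { self }
}

struct OutputStylePicker: View {
    @State private var displayStyle: DisplayStyle = .markdown

    var body: some View {
        // A segmented picker toggles between the two styles.
        Picker("Display", selection: $displayStyle) {
            ForEach(DisplayStyle.allCases) { style in
                Text(style.rawValue.capitalized).tag(style)
            }
        }
        .pickerStyle(.segmented)
    }
}
```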
UI Layout
Input Section
This code displays model information and statistics.
Output Section
The ScrollView shows the model’s output, which updates dynamically as the model generates text.
Toolbar
The toolbar includes:
- GPU memory usage information.
- A “Copy Output” button to copy the generated text.
The LLMEvaluator Class
This class handles the logic for loading and generating text with the language model.
Core Properties
This code snippet:
- Tracks the model state and output.
- Configures the model (phi3_5_4bit).
Loading the Model (if required)
This code snippet:
- Downloads and caches the model.
- Updates modelInfo during the download.
Generating Output
This code snippet:
- Prepares the prompt for the model.
- Generates tokens and dynamically updates the view.
This program demonstrates how to integrate ML and UI components for interactive LLM-based applications in Swift.
This code example uses the MIT License so you can modify the example code if you need to write a combined SwiftUI GUI app that uses LLM-based text generation.
Part 3: Apple’s CoreML and NLP Libraries
In this part we cover:
- Short introduction to the ideas behind Deep Learning
- Introduction of CoreML
- Examples using CoreML
- Introduction of NLP
- Examples using NLP libraries
This section used to contain Apple CoreML examples that trained a back-propagation model from the University of Wisconsin cancer data set. As of April 2022, these examples do not work because of a problem with the latest CreateML library so this material has been removed from this book.
Deep Learning Introduction
Apple’s work in smoothly integrating deep learning into their developer tools for macOS, iOS, and iPadOS applications is in my opinion nothing short of brilliant. We will finish this book with an application that uses two deep learning models that provide almost all of the functionality of the application.
Before diving into Apple’s CoreML libraries in later chapters we will take a shallow dive into the principles of deep learning and take a lay-of-the-land look at the most commonly used types of models. This chapter has no example programs and is intended as background material.
Most of my professional career since 2014 has involved Deep Learning, mostly with TensorFlow using the Keras APIs. In the late 1980s I was on a DARPA neural network technology advisory panel for a year, I wrote the first prototype of the SAIC ANSim neural network library commercial product, and I wrote the neural network prediction code for a bomb detector my company designed and built for the FAA for deployment in airports. More recently I have used GAN (generative adversarial networks) models for synthesizing numeric spreadsheet data and LSTM (long short term memory) models to synthesize highly structured text data like nested JSON and for NLP (natural language processing). I have also written a product recommendation model for an online store using TensorFlow Recommenders. I have several USA and European patents using neural network and Deep Learning technology.
Here we will learn a vocabulary for discussing Deep Learning neural network models and look at possible architectures.
If you want to use Deep Learning professionally, there are two specific online resources that I recommend: Andrew Ng leads the efforts at deeplearning.ai and Jeremy Howard leads the efforts at fast.ai.
There are many Deep Learning neural architectures in current practical use; a few types that I use are:
- Multi-layer perceptron networks with many fully connected layers. An input layer contains placeholders for input data. Each element in the input layer is connected by a two-dimensional weight matrix to each element in the first hidden layer. We can use any number of fully connected hidden layers, with the last hidden layer connected to an output layer.
- Convolutional networks for image processing and text classification. Convolutions, or filters, are small windows that can process input images (filters are two-dimensional) or sequences like text (filters are one-dimensional). Each filter uses a single set of learned weights independent of where the filter is applied in an input image or input sequence.
- Autoencoders have the same number of input layer and output layer elements with one or more hidden fully connected layers. Autoencoders are trained to produce the same output as training input values using a relatively small number of hidden layer elements. Autoencoders are capable of removing noise in input data.
- LSTM (long short term memory) networks process elements in a sequence in order and are capable of remembering patterns that they have seen earlier in the sequence.
- GAN (generative adversarial networks) models comprise two different and competing neural models, the generator and the discriminator. GANs are often trained on input images (although in my work I have applied GANs to two-dimensional numeric spreadsheet data). The generator model takes as input a “latent input vector” (this is just a vector of specific size with random values) and generates a random output image. The weights of the generator model are trained to produce random images that are similar to how training images look. The discriminator model is trained to recognize if an arbitrary output image is original training data or an image created by the generator model. The generator and discriminator models are trained together.
The core functionality of libraries like TensorFlow are written in C++ and take advantage of special hardware like GPUs, custom ASICs, and devices like Google’s TPUs. Most people who work with Deep Learning models don’t need to even be aware of the low level optimizations used to make training and using Deep Learning models more efficient. That said, in the following section I am going to show you how simple neural networks are trained and used.
Simple Multi-layer Perceptron Neural Networks
I use the terms multi-layer perceptron neural networks, backpropagation neural networks, and delta-rule networks interchangeably. Backpropagation refers to the model training process of calculating the output errors when training inputs are passed in the forward direction from the input layer, to hidden layers, and then to the output layer. There will be an error, which is the difference between the calculated outputs and the training outputs. This error can be used to adjust the weights from the last hidden layer to the output layer to reduce the error. The error is then backpropagated through the hidden layers, updating all weights in the model. I have detailed example code in several of my older artificial intelligence books. Here I am satisfied to give you an intuition of how simple neural networks are trained.
The basic idea is that we start with a network initialized with random weights and for each training case we propagate the inputs through the network towards the output neurons, calculate the output errors, and back-up the errors from the output neurons back towards the input neurons in order to make small changes to the weights to lower the error for the current training example. We repeat this process by cycling through the training examples many times.
The following figure shows a simple backpropagation network with one hidden layer. Neurons in adjacent layers are connected by floating point connection strength weights. These weights start out as small random values that change as the network is trained. Weights are represented in the following figure by arrows; in the code the weights connecting the input to the output neurons are represented as a two-dimensional array.
Each non-input neuron has an activation value that is calculated from the activation values of connected neurons feeding into it, gated (adjusted) by the connection weights. For example, in the above figure, the value of Output 1 neuron is calculated by summing the activation of Input 1 times weight W1,1 and Input 2 activation times weight W2,1 and applying a “squashing function” like Sigmoid or Relu (see figures below) to this sum to get the final value for Output 1’s activation value. We want to flatten activation values to a relatively small range but still maintain relative values. To do this flattening we use the Sigmoid function that is seen in the next figure, along with the derivative of the Sigmoid function which we will use in the code for training a network by adjusting the weights.
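As a concrete illustration of that calculation, here is the arithmetic written out in Swift (the input and weight values are made up for the example):

```swift
import Foundation

// The Sigmoid squashing function and its derivative (the derivative is used
// when adjusting weights during training).
func sigmoid(_ x: Double) -> Double { 1.0 / (1.0 + exp(-x)) }
func sigmoidDerivative(_ x: Double) -> Double {
    let s = sigmoid(x)
    return s * (1.0 - s)
}

// Activation of Output 1 from the figure: sum the weighted inputs, then squash.
let input1 = 0.5, input2 = 0.9       // example activation values
let w11 = 0.2, w21 = -0.4            // weights connecting the inputs to Output 1
let output1 = sigmoid(input1 * w11 + input2 * w21)
print(output1)
```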
Simple neural network architectures with just one or two hidden layers are easy to train using backpropagation and I have from-scratch code (using no libraries) for this in several of my previous books. However, frameworks like TensorFlow have the huge advantage that small models you experiment with on your laptop can be scaled to more parameters (usually this means more neurons in hidden layers, which increases the number of weights in a model) and run in the cloud using multiple GPUs.
Except for pedantic purposes, I now never write neural network code from scratch. I take instead advantage of the many person-years of engineering work put into the development of frameworks like TensorFlow, PyTorch, mxnet, etc. We now move on to two examples built with TensorFlow.
Deep Learning
Deep Learning models are generally understood to have many more hidden layers than simple multi-layer perceptron neural networks and often comprise multiple simple models combined together in series or in parallel. Complex architectures can be iteratively developed by manually adjusting the size of model components, changing the components, etc. Alternatively, model architecture search can be automated. At Capital One I used Google’s AdaNet project that efficiently searches for effective model architectures inside a single TensorFlow session. Now all major cloud compute providers support some form of AutoML. You need to decide for yourself how much effort you want to put into deeply understanding the technology, or simply learn how to use pre-trained models.
Natural Language Processing Using Apple’s Natural Language Framework
I have been working in the field of Natural Language Processing (NLP) since 1985 so I ‘lived through’ the revolutionary change in NLP that has occurred since 2014, when Deep Learning results started to out-class results from previous symbolic methods.
https://developer.apple.com/documentation/naturallanguage
I will not cover older symbolic methods of NLP here, rather I refer you to my previous books Practical Artificial Intelligence Programming With Java, Loving Common Lisp, or the Savvy Programmer’s Secret Weapon, and Haskell Tutorial and Cookbook for examples. We get better results using Deep Learning (DL) for NLP and the libraries that Apple provides.
You will learn how to apply both DL and NLP by using the state-of-the-art full-feature libraries that Apple provides in their iOS and macOS development tools.
Using Apple’s NaturalLanguage Swift Library
We will use one of Apple’s NLP libraries consisting of pre-built models in the last chapter of this book. In order to fully understand the example in the last chapter you will need to read Apple’s high-level discussion of using CoreML https://developer.apple.com/documentation/coreml and their specific support for NLP https://developer.apple.com/documentation/naturallanguage/.
There are many pre-trained CoreML compatible models on the web, both from Apple and from third parties (e.g., https://github.com/likedan/Awesome-CoreML-Models).
Apple also provides tools for converting TensorFlow and PyTorch models to be compatible with CoreML https://coremltools.readme.io/docs.
A simple Wrapper Library for Apple’s NLP Models
I will not go into too much detail here but I created a small wrapper library for Apple’s NLP models that will make it easier for you to jump in and have fun experimenting with them: https://github.com/mark-watson/Nlp_swift.
The main library implementation file uses the @available(OSX 10.13, *) attribute to indicate that the following function is available on macOS 10.13 and later versions.
The public function getEntities takes a String parameter called text and returns an array of tuples containing (String, String). Here’s a breakdown of what this function does:
- The function initializes an empty array called words to store the extracted entities.
- The line tagger.string = text sets the input text for a tagger object. The tagger is an instance of NSLinguisticTagger, which is a natural language processing class provided by Apple’s Foundation framework.
- The next line creates an NSRange object called range that represents the entire length of the input text.
- The tagger.enumerateTags(in:range, unit:.word, scheme:.nameType, options:options) method is called to iterate over the words in the input text and extract their associated tags. The in: parameter specifies the range of the text to process. The unit: parameter specifies that the enumeration should be done on a word-by-word basis. The scheme: parameter specifies the linguistic scheme to use, in this case, the .nameType scheme, which is used to identify named entities. The options: parameter specifies additional options or settings for the tagger.
- Inside the enumeration block, the code retrieves the current word and its associated tag using the tokenRange and tag parameters.
- The line let word = (text as NSString).substring(with: tokenRange) extracts the substring corresponding to the current word using tokenRange.
- The line words.append((word, tag?.rawValue ?? “unknown”)) appends a tuple containing the extracted word and its associated tag to the words array. If the tag is nil, it uses the default value of “unknown”.
- Finally, the words array is returned, which contains all the extracted entities (words and their associated tags) from the input text.
The public function getLemmas also takes a String parameter called text and returns an array of (String, String) tuples. The function getLemmas is very similar to getEntities and does the following:
- The function initializes an empty array called words to store the extracted lemmas.
- The line tagger.string = text sets the input text for a tagger object.
- The next line creates an NSRange object called range that represents the entire length of the input text.
- The tagger.enumerateTags(in:range, unit:.word, scheme:.lemma, options: options) method is called to iterate over the words in the input text and extract their corresponding lemmas.
- Inside the enumeration block, the code retrieves the current word and its associated lemma using the tokenRange and tag parameters.
- The line let word = (text as NSString).substring(with: tokenRange) extracts the substring corresponding to the current word using tokenRange.
- Finally, the words array is returned, which contains all the extracted lemmas (words and their associated base forms) from the input text.
In summary, function getLemmas uses the NSLinguisticTagger to perform linguistic analysis on a given text and extract the base forms (lemmas) of words. The lemmas are then stored in an array of tuples and returned as the result of the function.
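A minimal sketch of getLemmas along the same lines (again my reconstruction for illustration; the repository code may differ slightly):

```swift
import Foundation

@available(OSX 10.13, *)
public func getLemmas(text: String) -> [(String, String)] {
    var words: [(String, String)] = []
    let tagger = NSLinguisticTagger(tagSchemes: [.lemma], options: 0)
    let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lemma,
                         options: options) { tag, tokenRange, _ in
        // Extract the surface form of the current word
        let word = (text as NSString).substring(with: tokenRange)
        // Append the word and its lemma (base form), or "unknown" when none is found
        words.append((word, tag?.rawValue ?? "unknown"))
    }
    return words
}
```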
Here is some test code:
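(The exact test code is in the repository; the following is a minimal usage sketch with a made-up sample sentence.)

```swift
let sample = "President George Bush went to Mexico with IBM representatives."

if #available(OSX 10.13, *) {
    print("Entities:", getEntities(text: sample))
    print("Lemmas:", getLemmas(text: sample))
}
```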
Here is an edited listing of the output with most of the output removed for brevity:
Documents Question Answering Using OpenAI GPT4 APIs and a Local Embeddings Vector Database
The examples in this chapter are inspired by the Python LangChain and LlamaIndex projects, with just the parts I need for my projects written from scratch in Swift. I wrote a Python book, “LangChain and LlamaIndex Projects Lab Book: Hooking Large Language Models Up to the Real World Using GPT-3, ChatGPT, and Hugging Face Models in Applications”, in March 2023 (https://leanpub.com/langchain) that you might also be interested in.
The GitHub repository for this example can be found here: https://github.com/mark-watson/Docs_QA_Swift.
The entire example is in the single Swift source file main.swift, and all of the program listings in this chapter come from that file.
We use two models in this example: a vector embedding model and the gpt-4o-mini chat model (see the bottom of main.swift). The embedding model converts text into a vector that we use to compare the semantic similarity of two pieces of text, and the gpt-4o-mini model generates a response to a prompt.
Extending the String Class
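As a hypothetical example of the kind of helper such an extension can provide (the actual extension in main.swift may differ), here is a method that splits a long document into fixed-size word chunks so each chunk can be embedded separately:

```swift
import Foundation

extension String {
    // Hypothetical helper: split text into chunks of roughly `chunkSize` words.
    func chunked(intoWords chunkSize: Int) -> [String] {
        let words = self.split(separator: " ").map(String.init)
        var chunks: [String] = []
        var start = 0
        while start < words.count {
            let end = min(start + chunkSize, words.count)
            chunks.append(words[start..<end].joined(separator: " "))
            start = end
        }
        return chunks
    }
}
```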
Implementing a Local Vector Database for Document Embeddings
The source file contains example code for creating embeddings and using the dot product to find semantic similarity:
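A minimal sketch of the dot product calculation (the listing in main.swift may differ in details):

```swift
// Dot product of two embedding vectors of equal length; for normalized
// embeddings, a larger value means greater semantic similarity.
func dotProduct(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "embedding vectors must have the same length")
    var sum: Float = 0
    for i in 0..<a.count {
        sum += a[i] * b[i]
    }
    return sum
}
```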
The output is:
For this example, we use an in-memory store of embedding vectors and chunk text. A text document is broken into smaller chunks of text; each chunk is embedded, the embedding is stored in embeddingsStore, and the chunk text is stored in the parallel chunks array. Together these two arrays are used to find the chunk most similar to a prompt, and that most similar chunk is used as context when generating a response to the prompt.
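Here is a rough sketch of that in-memory store. The names embeddingsStore and chunks come from the description above; the search function is my own illustration, not the exact code from main.swift:

```swift
var embeddingsStore: [[Float]] = []   // one embedding vector per chunk
var chunks: [String] = []             // chunk text, parallel to embeddingsStore

func addChunk(text: String, embedding: [Float]) {
    chunks.append(text)
    embeddingsStore.append(embedding)
}

// Return the text of the stored chunk whose embedding has the largest
// dot product with the prompt embedding (uses dotProduct sketched earlier).
func mostSimilarChunk(to promptEmbedding: [Float]) -> String? {
    var bestScore = -Float.greatestFiniteMagnitude
    var bestIndex: Int? = nil
    for (index, embedding) in embeddingsStore.enumerated() {
        let score = dotProduct(embedding, promptEmbedding)
        if score > bestScore {
            bestScore = score
            bestIndex = index
        }
    }
    return bestIndex.map { chunks[$0] }
}
```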
Create Local Embeddings Vectors From Local Text Files With OpenAI GPT APIs
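The listing in main.swift handles this step; a hedged sketch of the idea follows. The model name text-embedding-3-small, the synchronous style, and the helper names are my assumptions for illustration:

```swift
import Foundation

// Call the OpenAI embeddings endpoint for one chunk of text and return
// the embedding vector. Error handling is intentionally minimal.
func embedText(_ text: String, apiKey: String) -> [Float] {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/embeddings")!)
    request.httpMethod = "POST"
    request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = ["model": "text-embedding-3-small", "input": text]
    request.httpBody = try! JSONSerialization.data(withJSONObject: body)

    var embedding: [Float] = []
    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        defer { semaphore.signal() }
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let dataArray = json["data"] as? [[String: Any]],
              let values = dataArray.first?["embedding"] as? [Double] else { return }
        embedding = values.map { Float($0) }
    }.resume()
    semaphore.wait()
    return embedding
}

// Index a local text file: chunk it, embed each chunk, and store the results
// (chunked(intoWords:) and addChunk are sketched earlier in this chapter).
func indexTextFile(atPath path: String, apiKey: String) throws {
    let fileText = try String(contentsOfFile: path, encoding: .utf8)
    for chunk in fileText.chunked(intoWords: 200) {
        addChunk(text: chunk, embedding: embedText(chunk, apiKey: apiKey))
    }
}
```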
Using Local Embeddings Vector Database With OpenAI GPT APIs
We use the OpenAI chat completion API with the gpt-4o-mini model (reformatted to fit the page width):
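A hedged sketch of this step follows; the prompt wording and helper names are illustrative and the actual prompt in main.swift may differ:

```swift
import Foundation

// Answer a question using the most similar local chunk as context.
// Uses embedText and mostSimilarChunk sketched earlier in this chapter.
func answerQuestion(_ question: String, apiKey: String) -> String {
    let context = mostSimilarChunk(to: embedText(question, apiKey: apiKey)) ?? ""
    let messages: [[String: String]] = [
        ["role": "system",
         "content": "Answer the question using only the provided context."],
        ["role": "user",
         "content": "Context:\n\(context)\n\nQuestion: \(question)"]
    ]
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.addValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = ["model": "gpt-4o-mini", "messages": messages]
    request.httpBody = try! JSONSerialization.data(withJSONObject: body)

    var answer = ""
    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        defer { semaphore.signal() }
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let choices = json["choices"] as? [[String: Any]],
              let message = choices.first?["message"] as? [String: Any],
              let content = message["content"] as? String else { return }
        answer = content
    }.resume()
    semaphore.wait()
    return answer
}
```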
The output for these three questions looks like:
Wrap Up for Using Local Embeddings Vector Database to Enhance the Use of GPT3 APIs With Local Documents
As I write this in early April 2023, I have been working almost exclusively with OpenAI APIs for the last year and using the Python libraries for LangChain and LlamaIndex for the last three months.
I started writing the examples in this chapter for my own use, implementing a tiny subset of the LangChain and LlamaIndex libraries in Swift in order to write efficient command line utilities for creating local embedding vector data stores and for interactive chat using my own data.
By writing about my “scratching my own itch” command line experiments here, I hope to get pull requests for https://github.com/mark-watson/Docs_QA_Swift from readers who are interested in helping to extend this code with new functionality.
Part 4: Knowledge Representation and Data Acquisition
In this part we cover:
- An introduction to the semantic web and linked data
- A general discussion of Knowledge Representation
- Creating Knowledge Graphs from text input
- The Knowledge Graph Explorer application
Linked Data and the Semantic Web
In 2001 Tim Berners-Lee, James Hendler, and Ora Lassila wrote an article for Scientific American in which they introduced the term Semantic Web. Here I do not capitalize semantic web, and I use the similar term linked data somewhat interchangeably with semantic web.
In the same way that the web allows links between related web pages, linked data supports linking associated data on the web together. I view linked data as a relatively simple way to specify relationships between data sources on the web, while the semantic web has a much larger vision: the potential to represent the entirety of human knowledge as data on the web in a form that software agents can work with to answer questions, perform research, and infer new data from existing data.
While the “web” describes information for human readers, the semantic web is meant to provide structured data for ingestion by software agents. This distinction will become clear as we compare WikiPedia, made for human readers, with DBPedia, which uses the info boxes on WikiPedia topics to automatically extract RDF data describing those topics. Let’s look at the WikiPedia topic for the town I live in, Sedona, Arizona, and show how the info box on the English version of the WikiPedia topic page for Sedona https://en.wikipedia.org/wiki/Sedona,_Arizona maps to the DBPedia page http://dbpedia.org/page/Sedona,_Arizona. Please open both of these WikiPedia and DBPedia URIs in two browser tabs and keep them open for reference.
I assume that the format of the WikiPedia page is familiar, so let’s look at the DBPedia page for Sedona, which in human readable form shows the RDF statements with Sedona, Arizona as the subject. RDF is used to model and represent data. An RDF statement is built from three values, so an instance of an RDF statement is called a triple with three parts:
- subject: a URI (also referred to as a “Resource”)
- property: a URI (also referred to as a “Resource”)
- value: a URI (also referred to as a “Resource”) or a literal value (like a string or a number with optional units)
The subject for each Sedona-related triple is the above URI for the DBPedia human readable page. The subject and property references in an RDF triple will almost always be URIs that ground an entity to information on the web. The human readable page for Sedona lists several properties and the values of these properties. One of the properties is “dbo:areaCode” where “dbo” is a namespace reference (in this case for a DatatypeProperty).
The following two figures show an abstract representation of linked data and then a sample of linked data with actual web URIs for resources and properties:
We will use the SPARQL query language (SPARQL for RDF data is similar to SQL for relational database queries). Let’s look at an example using the RDF in the last figure:
This query should return the result “Sun ONE Services - J2EE”. If you wanted to query for all URI resources that are books with the literal value of their titles, then you can use:
Note that ?s and ?v are arbitrary query variable names, here standing for “subject” and “value”. You can use more descriptive variable names like:
We will be diving a little deeper into RDF examples in the next chapter when we write a tool for using RDF data from DBPedia to find information about entities (e.g., people, places, organizations) and the relationships between entities. For now I want you to understand the idea of RDF statements represented as triples, that web URIs represent things, properties, and sometimes values, and that URIs can be followed manually (often called “dereferencing”) to see what they reference in human readable form.
Understanding the Resource Description Framework (RDF)
Text data on the web has some structure in the form of HTML elements like headers, page titles, anchor links, etc. but this structure is too imprecise for general use by software agents. RDF is a method for encoding structured data in a more precise way.
RDF specifies graph structures and can be serialized for storage or for service calls in XML, Turtle, N3, and other formats. I like the Turtle format and suggest that you pause reading this book for a few minutes and look at this World Wide Web Consortium Turtle RDF primer at https://www.w3.org/2007/02/turtle/primer/.
Frequently Used Resource Namespaces
The following standard namespaces are frequently used:
- RDF https://www.w3.org/TR/rdf-syntax-grammar/
- RDFS https://www.w3.org/TR/rdf-schema/
- OWL http://www.w3.org/2002/07/owl#
- XSD http://www.w3.org/2001/XMLSchema#
- FOAF http://xmlns.com/foaf/0.1/
- SKOS http://www.w3.org/2004/02/skos/core#
- DOAP http://usefulinc.com/ns/doap#
- DC http://purl.org/dc/elements/1.1/
- DCTERMS http://purl.org/dc/terms/
- VOID http://rdfs.org/ns/void#
Let’s look into the Friend of a Friend (FOAF) namespace. Click on the above link for FOAF http://xmlns.com/foaf/0.1/ and find the definitions for the FOAF Core:
and for the Social Web:
You have now seen a few common schemas for RDF data. Another schema that is widely used for annotating web sites, but that we won’t need for our examples here, is schema.org.
Understanding the SPARQL Query Language
For the purposes of the material in this book, the two sample SPARQL queries here are sufficient for you to get started using my SPARQL library https://github.com/mark-watson/SparqlQuery_swift with arbitrary RDF data sources and simple queries.
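As a minimal, self-contained sketch (independent of the SparqlQuery_swift API, whose listing appears in the next chapter), the following sends a SPARQL query to the public DBpedia endpoint with Foundation’s URLSession and prints the raw JSON results:

```swift
import Foundation

// Send a SPARQL query to the public DBpedia endpoint and print the raw
// JSON results. Error handling is intentionally minimal for this sketch.
func queryDBpedia(sparql: String) {
    var components = URLComponents(string: "https://dbpedia.org/sparql")!
    components.queryItems = [
        URLQueryItem(name: "query", value: sparql),
        URLQueryItem(name: "format", value: "application/sparql-results+json")
    ]
    var request = URLRequest(url: components.url!)
    request.httpMethod = "GET"

    let semaphore = DispatchSemaphore(value: 0)
    URLSession.shared.dataTask(with: request) { data, _, error in
        defer { semaphore.signal() }
        if let data = data, let text = String(data: data, encoding: .utf8) {
            print(text)
        } else if let error = error {
            print("Error:", error)
        }
    }.resume()
    semaphore.wait()
}

// Example: list a few properties and values for the Sedona, Arizona entity.
queryDBpedia(sparql: """
    SELECT ?property ?value WHERE {
      <http://dbpedia.org/resource/Sedona,_Arizona> ?property ?value .
    } LIMIT 10
    """)
```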
The Apache Foundation has a good introduction to SPARQL that I refer you to for more information.
Semantic Web and Linked Data Wrap Up
In the next chapter we will use natural language processing to extract structured information from raw text and SPARQL queries to fetch linked data. We will be using my Swift SPARQL library https://github.com/mark-watson/SparqlQuery_swift as well as two pre-trained CoreML deep learning models.
Example Application: iOS and macOS Versions of my KnowledgeBookNavigator
I used many of the techniques discussed in this book, the Swift language, and the SwiftUI user interface framework to develop a Swift version of my Knowledge Graph Navigator application for macOS. I originally wrote this as an example program in Common Lisp for another book project.
The GitHub repository for the KGN example is https://github.com/mark-watson/KGN. I copied the code from my stand-alone Swift libraries into this example to make it self-contained. The easiest way to browse the source code is to open this project in Xcode.
I submitted the KGN app that we discuss in this chapter to Apple’s App Store, and it is available as a macOS app. If you load this project into Xcode, you can also build and run the iOS and iPadOS targets.
You will need to have read through the last chapter on semantic web and linked data technologies to understand this example because quite a lot of the code has embedded SPARQL queries to get information from DBPedia.org.
The other major part of this app is a slightly modified version of Apple’s question answering (QA) example using the BERT model in CoreML. Apple’s code is in the subdirectory AppleBERT. Please read the README file for this project and follow the directions for downloading and using Apple’s model and vocabulary file.
Screen Shots of macOS Application
In the first screenshot, seen below, I entered query text that included “Steve Jobs”, and the popup list selector lets the user select which “Steve Jobs” entity from DBPedia they want to use.
The previous screenshot shows the results of the query displayed as English text.
Notice the app prompt “Behind the scenes SPARQL queries” near the bottom of the app window. If you click on this field, the SPARQL queries used to answer the question are shown, as in the next screenshot:
Application Code Listings
I will list some of the code for this example application and I suggest that you, dear reader, also open this project in Xcode in order to navigate the sample code and more carefully read through it.
SPARQL
I introduced you to the use of SPARQL in the last chapter. This library can be used by adding a reference to the Project.swift file for this project. You can also clone the GitHub repository https://github.com/mark-watson/SparqlQuery_swift to have the source code for local viewing and modification; I have also copied this code into the KGN project.
The file SparqlQuery.swift is shown here:
The file QueryCache.swift contains code written by Khoa Pham (MIT License) that can be found in the GitHub repository https://github.com/onmyway133/EasyStash. This file is used to cache SPARQL queries and the results. In testing this application I noticed that there were many repeated queries to DBPedia so I decided to cache results. Here is the simple API I added on top of Khoa Pham’s code:
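(The actual QueryCache.swift code wraps EasyStash; the following is only a rough in-memory illustration of the caching idea.)

```swift
// Rough illustration: cache SPARQL results keyed by the query string so that
// repeated queries to DBpedia do not trigger repeated network calls.
var sparqlResultCache: [String: String] = [:]

func cachedSparqlResult(for query: String, fetch: (String) -> String) -> String {
    if let cached = sparqlResultCache[query] {
        return cached              // reuse a previously fetched result
    }
    let result = fetch(query)      // run the query against DBpedia
    sparqlResultCache[query] = result
    return result
}
```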
The code in the file GenerateSparql.swift is used to generate queries for DBPedia. The line-wrapping for embedded SPARQL queries in the next code section is difficult to read, so you may want to open the source file in Xcode. Please note that the KGN application prints out the SPARQL queries used to fetch information from DBPedia. The embedded SPARQL query templates used here have variable slots that are filled in at runtime to customize the queries.
The file AppSparql contains more utility functions for getting entity and relationship data from DBPedia:
AppleBERT
The files in the directory AppleBERT were copied from Apple’s example https://developer.apple.com/documentation/coreml/model_integration_samples/finding_answers_to_questions_in_a_text_document with a few changes to get returned results in a convenient format for this application. Apple’s BERT documentation is excellent and you should review it.
Relationships
The file Relationships.swift fetches relationship data for pairs of DBPedia entities. Note that the first SPARQL template has variable slots <e1> and <e2> that are replaced at runtime with the URIs of the two entities we are searching for relationships between:
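As a minimal sketch of this slot-filling idea (not the code from Relationships.swift; the template and helper names below are illustrative):

```swift
import Foundation

// Illustrative template: find properties linking two DBpedia entities.
let relationshipTemplate = """
    SELECT ?property WHERE {
      <e1> ?property <e2> .
    } LIMIT 20
    """

// Replace the <e1> and <e2> slots with the URIs of the two entities.
func relationshipQuery(entity1Uri: String, entity2Uri: String) -> String {
    return relationshipTemplate
        .replacingOccurrences(of: "<e1>", with: "<\(entity1Uri)>")
        .replacingOccurrences(of: "<e2>", with: "<\(entity2Uri)>")
}

// Example usage:
print(relationshipQuery(
    entity1Uri: "http://dbpedia.org/resource/Steve_Jobs",
    entity2Uri: "http://dbpedia.org/resource/Apple_Inc."))
```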
NLP
The file NlpWhiteboard provides high-level NLP utility functions for the application:
The file NLPutils.swift provides lower-level NLP utilities:
Views
This is not a book about SwiftUI programming, and indeed I expect many of you dear readers know much more about UI development with SwiftUI than I do. I am not going to list the four view files:
- MainView.swift
- QueryView.swift
- AboutView.swift
- InfoView.swift
Main KGN
The top-level app code in the file KGNApp.swift is fairly simple. I hardcoded the window size for macOS; the window sizes for running this example on iPadOS or iOS are commented out:
I was impressed by the SwiftUI framework. Applications are fairly portable across macOS, iOS, and iPadOS. I am not a UI developer by profession (as this application shows) but I enjoyed learning just enough about SwiftUI to write this example application.
Book Wrap Up
I hope that you, dear reader, enjoyed this short book. While I enjoy programming in Swift and appreciate how well Apple has integrated machine learning capabilities into their iOS/iPadOS/macOS ecosystems, I still find myself writing most of my experimental code in Lisp languages and using Python for deep learning experiments and projects. That said, I am very happy that I have done the work to add Swift, CoreML, and SwiftUI to my personal programming tool belt.
I usually update my eBooks so if there is some topic or application domain that you would like added to future versions of this book, then please let me know. My email address is markw <at> markwatson <dot> com.