Artificial Intelligence Using Swift
Mark Watson

Copyright 2022 Mark Watson. All rights reserved. This book may be shared using the Creative Commons “share and share alike, no modifications, no commercial reuse” license.

This eBook will be updated occasionally so please periodically check the leanpub.com web page for this book for updates.

This is the first edition released spring of 2022.

If you would like to support my work please consider purchasing my books on Leanpub and starring my GitHub repositories that you find useful. You can also interact with me on social media on Mastodon and Twitter.

Preface

Why use Swift for hacking AI? Common Lisp has been my go-to language for artificial intelligence development and research since 1982, and my transition to Swift was a slow one. During this transition I prototyped a new project in parallel in both Swift and Common Lisp, weighing the advantages of each for my current requirements. The Swift version of this project, included in this book, runs on macOS, iOS, and iPadOS; the macOS version is available on Apple's App Store. Several of the utilities developed in this book were used in this project.

This book starts out slowly with simple examples that show how to access Swift library packages on GitHub, tips on writing Swift command line apps, and web scraping. We then proceed to using Apple's CoreML for Natural Language Processing (NLP), training and using your own CoreML models, using OpenAI's GPT-3 APIs, and finally several semantic web/linked data examples. The book ends with the example application KGN (Knowledge Graph Navigator) that is available on the App Store. It is not my intention to cover in detail the use of SwiftUI for building iOS/iPadOS/macOS applications but I thought my readers might enjoy seeing several of the techniques covered in the book integrated into an example app.

I have used Common Lisp for AI research projects and for AI product development and delivery since 1982. There is something special about using a language for almost forty years. All that said, I find Swift a compelling choice now for several reasons:

  • A flexible language with features I rely on, such as closures and support for an interactive, functional programming style.
  • Built-in support for deep learning neural network models for natural language processing, predictive models, etc.
  • First class support for iOS and macOS development.
  • Good support for server-side applications hosted on Linux.

Swift is a programmer-efficient language: code is concise and easy to read, and high quality libraries from Apple and third parties mean that often there is less code to write. I will share with you my Swift development workflow that combines interactive development of code in playgrounds, development of higher level libraries in text-only command line applications, and my general strategy for writing iOS and macOS applications after low level and intermediate code is written and debugged.

Parts of this Book are Specific to macOS and iOS, with Some Support for Linux

Swift is a general purpose language that is well supported in macOS, iOS, and Linux, with some support in Windows. Here, we cover the use of Swift on macOS and iOS. Some of the examples in this book rely on libraries that are specifically available on macOS and iOS like CoreML and the NLP libraries. Several book examples also work on Linux, such as the examples using SQLite, the Microsoft Azure search APIs, web scraping, and semantic web/linked data.

Code for this Book

Because of the way the Swift Package Manager works, I organized all book examples that build libraries as separate GitHub repos so the libraries can be easily used in other book examples as well as in your own software projects. The separate library GitHub repositories are:

  • https://github.com/mark-watson/SparqlQuery_swift
  • https://github.com/mark-watson/Nlp_swift
  • https://github.com/mark-watson/ShellProcess_swift
  • https://github.com/mark-watson/WebScraping_swift
  • https://github.com/mark-watson/OpenAI_swift

I suggest cloning all of these GitHub repositories right now so you can have the example source code at hand while reading this book.

All of the code examples are licensed using the Apache 2 license. You are free to reuse the book example code in your own projects (open source, commercial), with attribution of my copyright and the Apache 2 license.

Except for the last SwiftUI example application, all sample programs are written as command line utilities. I considered using Swift playgrounds for some of the examples but decided that packaging as a combination of libraries and command line utilities would tend to make the example code more useful for your own projects.

The KGN example application has its own web site: http://www.knowledgegraphnavigator.com/

Author’s Background

I live in Sedona, Arizona with my wife and pet parrot. Our children and grandchildren live in California, Rhode Island, and the state of Washington.

I have written 20+ books, mostly about artificial intelligence. I have over 50 US patents.

I write about technologies that I have used throughout my career: knowledge representation using semantic web and linked data, machine learning and deep learning, and natural language processing. I am grateful for the companies where I have worked (SAIC, Google, Capital One, Olive AI, Babylist, etc.) that have supported this work since 1982.

As an author, I hope that the material in this book entertains you and will be useful in your work.

A Request from the Author

I spent time writing this book to help you, dear reader. I release this book under the Creative Commons license and set the minimum purchase price to Free in order to reach the most readers. If you found this book on the web (or it was given to you) and if it provides value to you, then please consider buying a copy on Leanpub and starring my GitHub repositories that you find useful, to support my future writing efforts and future updates to this book.

I enjoy writing and your support helps me write new editions and updates for my books and to develop new book projects. Thank you!

Cover Art

The cover picture was taken by WikiMedia Commons user Keta and is available for use under the Creative Commons License CC BY-SA 2.5.

CoreML Libraries Used in this Book

  • CoreML general overview: https://developer.apple.com/documentation/coreml
  • MLClassifier: https://developer.apple.com/documentation/createml/mlclassifier
  • MLTextClassifier: https://developer.apple.com/documentation/createml/mltextclassifier
  • NLModel: https://developer.apple.com/documentation/naturallanguage/nlmodel
  • Natural Language Framework: https://developer.apple.com/documentation/naturallanguage
  • MLCustomLayer: https://developer.apple.com/documentation/coreml/mlcustomlayer

Swift 3rd Party Libraries

We use the following 3rd party libraries:

  • SwiftyJSON for processing JSON data: https://github.com/SwiftyJSON/SwiftyJSON
  • SwiftSoup for parsing HTML: https://github.com/scinfu/SwiftSoup

Acknowledgements

I thank my wife Carol for editing this manuscript, finding typos, and suggesting improvements.

Part 1: Introduction and Short Examples

We begin with a brief introduction to Swift, sufficient to understand the programming examples. After introducing the language we will look at a few short examples that provide code and techniques we use later in the book:

  • Creating Swift projects
  • Writing command line utilities
  • Web scraping

Setting Up Swift for Command Line Development

Except for the last chapter in this book that uses Xcode for developing a complete macOS/iOS/iPadOS example application, I assume that you will work through the book examples using the command line and your favorite editor. If you want to use Xcode for the command line examples, you can open an example's Package.swift file from the command line, which launches Xcode, for example:

cd SparqlQuery_swift
open Package.swift

Notice that most of the examples are command line apps or libraries with command line test programs, and the README.md files in the example directories provide instructions for building and running on the command line.

You can also run Xcode and from the File Menu open an example Package.swift file. You can then use the Product / Test menu to run the test code for the example. You might need to use the View / Debug Area / Active Console menu to show the output area.

I assume that you are familiar with the Swift programming language and Xcode.

Swift is a general purpose language that is well supported in macOS and iOS, with good support for Linux, and with some support in Windows. For the purposes of this book, we are only considering the use of Swift on macOS and iOS. Most of the examples in this book rely on libraries that are specifically available on macOS and iOS like CoreML and the NLP libraries.

There are great free resources for the Swift language on the web, in other commercial books, and in Apple's free Swift books. Here I provide just enough material on the Swift language for you to understand and work with the book examples. After working through this book's material you will be able to add machine learning, natural language processing, and knowledge representation to your applications. We won't cover the parts of the Swift language that we don't need for the material here.

Installing Swift Packages

We will use the Swift Package Manager. You should pause reading now and install the Swift Package Manager if you have not already done so.
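The Swift Package Manager ships with the Swift toolchain (on macOS it is included with Xcode and the command line tools), so you can verify that it is available from a terminal:

swift --version
swift package --help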

I occasionally use the Vapor web framework (https://vapor.codes), although not in this book. We use this 3rd party library as an example of building a library locally from source code. Start by cloning the git repository and building it:

git clone https://github.com/vapor/vapor.git
cd vapor
swift build

I don’t usually install libraries locally from source code unless I am curious about the implementation and want to read through the source code. Later we will see how to reference Swift libraries hosted on GitHub in a project’s Package.swift file.

Creating Swift Packages

We will cover using the Swift Package Manager to create new packages from the command line here. Later we will create projects using Apple's Xcode IDE when we develop the example application Knowledge Graph Navigator.

You will want to use the Swift Package Manager documentation for reference.

We will be generating executable projects and library (with a sample main program) projects. The commands for generating the stub for an executable application project are:

mkdir BingSearch
cd BingSearch
swift package init --type executable

and the commands for generating the stub of a library with a demo main program are:

mkdir SparqlQuery
cd SparqlQuery
swift package init --type library
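After generating the stub for an executable you can build and run it right away; the template's generated main source simply prints “Hello, world!”:

cd BingSearch
swift build
swift run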

Accessing Libraries that You Write in Other Projects

You can reference Swift libraries using the Package.swift file for each of your projects. We will look at parts of two Package.swift files here. The first is for my SPARQL query client library that we will develop in a later chapter. This library, SparqlQuery_swift, is used both in the Knowledge Graph Navigator (KGN) macOS/iOS/iPadOS example application and in the text-only version KnowledgeGraphNavigator_swift.

 1 import PackageDescription
 2 
 3 let package = Package(
 4     name: "SparqlQuery_swift",
 5     products: [
 6         .library(
 7             name: "SparqlQuery_swift",
 8             targets: ["SparqlQuery_swift"]),
 9     ],
10     dependencies: [
11       .package(url: "https://github.com/SwiftyJSON/SwiftyJSON.git",
12           .branch("master")),
13    ],
14     targets: [
15         .target(
16             name: "SparqlQuery_swift",
17             dependencies: ["SwiftyJSON"]),
18         .testTarget(
19             name: "SparqlQuery_swiftTests",
20             dependencies: ["SparqlQuery_swift", "SwiftyJSON"]),
21     ]
22 )

The Package.swift file for the text-only version KnowledgeGraphNavigator_swift is shown here:

 1 import PackageDescription
 2 
 3 let package = Package(
 4     name: "KnowledgeGraphNavigator_swift",
 5     platforms: [
 6         .macOS(.v10_15),
 7     ],
 8     dependencies: [
 9         .package(url: "https://github.com/SwiftyJSON/SwiftyJSON.git",
10             .branch("master")),
11         .package(url: "https://github.com/scinfu/SwiftSoup.git", from: "1.7.4"),
12         .package(url: "git@github.com:mark-watson/SparqlQuery_swift.git",
13             .branch("main")),
14         .package(url: "git@github.com:mark-watson/Nlp_swift.git", .branch("main")),
15     ],
16     targets: [
17         // Targets are the basic building blocks of a package.
18         // A target can define a module or a test suite.
19         // Targets can depend on other targets in this package,
20         // and on products in packages this package depends on.
21         .target(
22             name: "KnowledgeGraphNavigator_swift",
23             dependencies: ["SparqlQuery_swift", "Nlp_swift",
24               "SwiftyJSON", "SwiftSoup"]),
25     ]
26 )

Hopefully you have cloned the git repositories for each book example and understand how I have configured the examples for your use.

For the rest of this book, you can read chapters in any order. In some cases, earlier chapters will contain implementations of libraries used in later chapters.

Background Information for Writing Swift Command Line Utilities

This short chapter contains example code and utilities for writing command line programs, using external shell processes, and performing file I/O.

Using Shell Processes

The library for using shell processes is one of my GitHub projects so you can include it in other projects using:

1  dependencies: [
2    .package(url: "git@github.com:mark-watson/ShellProcess_swift.git",
3             .branch("main")),
4  ],

You can clone this repository if you want to have the source code at hand:

1 git clone https://github.com/mark-watson/ShellProcess_swift.git

The following listing shows the library implementation. In line 5 we use the constructor Process from the Apple Foundation library to get a new process object, on which we set the executableURL and arguments fields. In lines 8 and 9 we create a new Unix style pipe to capture the output from the shell process we are starting and attach it to the process. After we run the task, we capture the output and return it as the value of the function run_in_shell.

 1 import Foundation
 2 
 3 @available(OSX 10.13, *)
 4 public func run_in_shell(commandPath: String, argList: [String] = []) -> String {
 5     let task = Process()
 6     task.executableURL = URL(fileURLWithPath: commandPath)
 7     task.arguments = argList
 8     let pipe = Pipe()
 9     task.standardOutput = pipe
10     do {
11         try task.run()
12         let data = pipe.fileHandleForReading.readDataToEndOfFile()
13         let output: String? = String(data: data, encoding: String.Encoding.utf8)
14         if let output = output {
15           if !output.isEmpty {
16             return output.trimmingCharacters(in: .whitespacesAndNewlines)
17           }
18         }
19     } catch {
20         // on failure, fall through and return an empty string
21     }
22     return ""
23 }

As in most examples in this book we use the Swift testing framework to run the example code at the command line using swift test. Running swift test does an implicit swift build.

 1 import XCTest
 2 @testable import ShellProcess_swift
 3 
 4 final class ShellProcessTests: XCTestCase {
 5     func testExample() {
 6         // This is an example of a functional test case.
 7         // Use XCTAssert and related functions to verify your tests produce the
 8         // correct results.
 9         print("** s1:")
10         let s1 = run_in_shell(commandPath: "/bin/ps", argList: ["a"])
11         print(s1)
12         let s2 = run_in_shell(commandPath: "/bin/ls", argList: ["."])
13         print("** s2:")
14         print(s2)
15         let s3 = run_in_shell(commandPath: "/bin/sleep", argList: ["2"])
16         print("** s3:")
17         print(s3)
18 
19     }
20 
21     static var allTests = [
22         ("testExample", testExample),
23     ]
24 }

The test output (with some text removed for brevity) is:

 1 $ swift test
 2 Test Suite 'All tests' started at 2021-08-06 16:36:21.447
 3 ** s1:
 4 PID   TT  STAT      TIME COMMAND
 5  3898 s000  Ss     0:00.01 login -pf markw8
 6  3899 s000  S+     0:00.18 -zsh
 7  3999 s001  Ss     0:00.02 login -pfl markw8 /bin/bash -c exec -la zsh /bin/zsh
 8  4000 s001  S+     0:00.38 -zsh
 9  5760 s002  Ss     0:00.02 login -pfl markw8 /bin/bash -c exec -la zsh /bin/zsh
10  5761 s002  S      0:00.14 -zsh
11  8654 s002  S+     0:00.06 /Applications/Xcode.app/Contents/Developer/Toolchains/Xco\
12 deDefault.xctoolchain/usr/bin/swift-test
13  8665 s002  S      0:00.03 /Applications/Xcode.app/Contents/Developer/usr/bin/xctest\
14  /Users/markw_1/GIT_swift_book/ShellProcess_swift/.build/arm64-apple-macosx/debug/Sh\
15 ellProcess_swiftPackageTests.xctest
16  8666 s002  R      0:00.00 /bin/ps a
17 ** s2:
18 Package.swift
19 README.md
20 Sources
21 Tests
22 ** s3:
23 
24 Test Suite 'All tests' passed at 2021-08-06 16:36:23.468.
25 	 Executed 1 test, with 0 failures (0 unexpected) in 2.019 (2.021) seconds

FileIO Examples

This file I/O example uses the ShellProcess_swift library we saw in the last section, so if you were to create your own Swift project with the following code listing, you would have to add this dependency in the Package.swift file.

When writing command line Swift programs you will often need to do simple file IO so let’s look at some examples here:

 1 import Foundation
 2 import ShellProcess_swift // my library
 3 
 4 @available(OSX 10.13, *)
 5 func test_files_demo() -> Void {
 6     // In order to append to an existing file, you need to get a file handle
 7     // and seek to the end of a file. The following will not work:
 8     let s = "the dog chased the cat\n"
 9     try! s.write(toFile: "out.txt", atomically: true,
10                  encoding: String.Encoding.ascii)
11     let s2 = "a second string\n"
12     try! s2.write(toFile: "out.txt", atomically: true,
13                   encoding: String.Encoding.ascii)
14     let aString = try! String(contentsOfFile: "out.txt")
15     print(aString)
16 
17     // For simple use cases, simply appending strings, then writing
18     // the result atomically works fine:
19     var s3 = "the dog chased the cat\n"
20     s3 += "a second string\n"
21     try! s3.write(toFile: "out2.txt", atomically: true,
22                   encoding: String.Encoding.ascii)
23     let aString2 = try! String(contentsOfFile: "out2.txt")
24     print(aString2)
25 
26     // list files in current directory:
27     let ls = run_in_shell(commandPath: "/bin/ls", argList: ["."])
28     print(ls)
29 
30     // remove two temporary files:
31     let shellOutput = run_in_shell(commandPath: "/bin/rm",
32                                    argList: ["out.txt", "out2.txt"])
33     print(shellOutput)
34 }
35 
36 if #available(OSX 10.13, *) {
37     test_files_demo()
38 }
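The comment at the top of the listing mentions appending to an existing file by getting a file handle and seeking to the end of the file. For reference, here is a minimal sketch of that approach using Foundation's FileHandle (my own illustration, not part of the example project):

import Foundation

// Append to an existing file by seeking to its end first
// (the approach mentioned in the comment in the listing above):
if let handle = FileHandle(forWritingAtPath: "out.txt") {
    handle.seekToEndOfFile()
    handle.write("appended line\n".data(using: .utf8)!)
    handle.closeFile()
}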

I created a temporary Swift project with the previous code listing and a Package.swift file. I built and ran this example using the swift command line tool.

Unlike the example in the last section where we built a reusable library with a test program, here we have a standalone program contained in a single file so we will use swift run to build and run this example:

 1 $ swift run
 2 Fetching git@github.com:mark-watson/ShellProcess_swift.git from cache
 3 Cloning git@github.com:mark-watson/ShellProcess_swift.git
 4 Resolving git@github.com:mark-watson/ShellProcess_swift.git at main
 5 [5/5] Build complete!
 6 a second string
 7 
 8 the dog chased the cat
 9 a second string
10 
11 Package.resolved
12 Package.swift
13 README.md
14 Sources
15 out.txt
16 out2.txt

Swift REPL

There is an example of using the Swift REPL at the end of the next chapter on web scraping. For reference, you can start a REPL with:

1 $ swift run --repl
2 Type :help for assistance.
3 1> import WebScraping_swift
4 2> webPageText(uri: "https://markwatson.com")
5 $R0: String = "Mark Watson: AI Practitioner and Polyglot Programmer"...
6 3> public func foo(s: String) -> String { return s } 
7 4> foo(s: "cat") 
8 $R1: String = "cat"
9 5> 

You can import packages and interactively enter Swift expressions, including defining functions.

In the next chapter we will look at one more simple example, building a web scraping library, before getting to the machine learning and NLP parts of the book.

Web Scraping

It is important to respect the property rights of web site owners and abide by their terms and conditions for use. This Wikipedia article on Fair Use provides a good overview of using copyright material.

The web scraping code we develop here uses the Swift library SwiftSoup that is loosely based on the BeautifulSoup libraries available in other programming languages.

For my work and research, I have been most interested in using web scraping to collect text data for natural language processing, but other common applications include writing AI news collection and summarization assistants, trying to predict stock prices based on comments in social media (which is what we did at Webmind Corporation in 2000 and 2001), and so on.

I wrote a simple web scraping library that is available at https://github.com/mark-watson/WebScraping_swift that you can use in your projects by putting the following dependency in your Package.swift file:

1     dependencies: [
2          .package(url: "git@github.com:mark-watson/WebScraping_swift.git",
3              .branch("main")),
4     ],

Here is the main implementation file for the library:

 1 import Foundation
 2 import SwiftSoup
 3 
 4 public func webPageText(uri: String) -> String {
 5     guard let myURL = URL(string: uri) else {
 6         print("Error: \(uri) doesn't seem to be a valid URL")
 7         fatalError("invalid URI")
 8     }
 9     let html = try! String(contentsOf: myURL, encoding: .utf8)
10     let doc: Document = try! SwiftSoup.parse(html)
11     let plain_text = try! doc.text()
12     return plain_text
13 }
14 
15 func webPageHeadersHelper(uri: String, headerName: String) -> [String] {
16     var ret: [String] = []
17     guard let myURL = URL(string: uri) else {
18         print("Error: \(uri) doesn't seem to be a valid URL")
19         fatalError("invalid URI")
20     }
21     do {
22         let html = try String(contentsOf: myURL, encoding: .utf8)
23         let doc: Document = try SwiftSoup.parse(html)
24         let h1_headers = try doc.select(headerName)
25         for el in h1_headers {
26             let h1 = try el.text()
27             ret.append(h1)
28         }
29     } catch {
30         print("Error")
31     }
32     return ret
33 }
34 
35 public func webPageH1Headers(uri: String) -> [String] {
36     return webPageHeadersHelper(uri: uri, headerName: "h1")
37 }
38     
39 public func webPageH2Headers(uri: String) -> [String] {
40     return webPageHeadersHelper(uri: uri, headerName: "h2")
41 }
42 
43 public func webPageAnchors(uri: String) -> [[String]] {
44     var ret: [[String]] = []
45     guard let myURL = URL(string: uri) else {
46         print("Error: \(uri) doesn't seem to be a valid URL")
47         fatalError("invalid URI")
48     }
49     do {
50         let html = try String(contentsOf: myURL, encoding: .utf8)
51         let doc: Document = try SwiftSoup.parse(html)
52         let anchors = try doc.select("a")
53         for a in anchors {
54             let text = try a.text()
55             let a_uri = try a.attr("href")
56             if a_uri.hasPrefix("#") {
57                 ret.append([text, uri + a_uri])
58             } else {
59                 ret.append([text, a_uri])
60             }
61         }
62     } catch {
63         print("Error")
64     }
65     return ret
66 }

Here I wrote utility functions to get the plain text of a web page, HTML header text, and anchors. You can clone this library and extend it for other types of HTML elements you may need to process, as in the sketch below.
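As an example of such an extension, here is a short sketch (my own illustration, not part of the published library) that collects image URLs from a page using the same SwiftSoup calls used above:

import Foundation
import SwiftSoup

// Return the src attribute of every img element on a page,
// following the same conventions as the library functions above.
public func webPageImageUrls(uri: String) -> [String] {
    var ret: [String] = []
    guard let myURL = URL(string: uri) else {
        print("Error: \(uri) doesn't seem to be a valid URL")
        return ret
    }
    do {
        let html = try String(contentsOf: myURL, encoding: .utf8)
        let doc: Document = try SwiftSoup.parse(html)
        let images = try doc.select("img")
        for img in images {
            ret.append(try img.attr("src"))
        }
    } catch {
        print("Error")
    }
    return ret
}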

The test program shows how to call the APIs in the library:

 1 import XCTest
 2 import Foundation
 3 import SwiftSoup
 4 
 5 @testable import WebScraping_swift
 6 
 7 final class WebScrapingTests: XCTestCase {
 8     func testGetWebPage() {
 9         let text = webPageText(uri: "https://markwatson.com")
10         print("\n\n\tTEXT FROM MARK's WEB SITE:\n\n", text)
11     }
12 
13     func testToShowSwiftSoupExamples() {
14         let myURLString = "https://markwatson.com"
15         let h1_headers = webPageH1Headers(uri: myURLString)
16         print("\n\n++ h1_headers:", h1_headers)
17         let h2_headers = webPageH2Headers(uri: myURLString)
18         print("\n\n++ h2_headers:", h2_headers)
19         let anchors = webPageAnchors(uri: myURLString)
20         print("\n\n++ anchors:", anchors)
21     }
22 
23     static var allTests = [("testGetWebPage", testGetWebPage),
24                            ("testToShowSwiftSoupExamples",
25                             testToShowSwiftSoupExamples)]
26 }

Here we run the unit tests (with much of the output not shown for brevity):

 1 $ swift test
 2 
 3 	TEXT FROM MARK's WEB SITE:
 4 
 5  Mark Watson: AI Practitioner and Polyglot Programmer | Mark Watson    Read my Blog \
 6    Fun stuff    My Books    My Open Source Projects    Hire Me    Free Mentoring    \
 7 Privacy Policy Mark Watson: AI Practitioner and Polyglot Programmer I am the author \
 8 of 20+ books on Artificial Intelligence, Common Lisp, Deep Learning, Haskell, Clojur\
 9 e, Java, Ruby, Hy language, and the Semantic Web. I have 55 US Patents. My customer \
10 list includes: Google, Capital One, Olive AI, CompassLabs, Disney, SAIC, Americast, \
11 PacBell, CastTV, Lutris Technology, Arctan Group, Sitescout.com, Embed.ly, and Webmi\
12 nd Corporation.
13 
14 ++ h1_headers: ["Mark Watson: AI Practitioner and Polyglot Programmer", "The books t\
15 hat I have written", "Fun stuff", "Open Source", "Hire Me", "Free Mentoring", "Priva\
16 cy Policy"]
17 
18 ++ h2_headers: ["I am the author of 20+ books on Artificial Intelligence, Common Lis\
19 p, Deep Learning, Haskell, Clojure, Java, Ruby, Hy language, and the Semantic Web. I\
20  have 55 US Patents.", "Other published books:"]
21 
22 ++ anchors: [["Read my Blog", "https://mark-watson.blogspot.com"], ["Fun stuff", "ht\
23 tps://markwatson.com#fun"], ["My Books", "https://markwatson.com#books"], ["My Open \
24 Source Projects", "https://markwatson.com#opensource"], ["Hire Me", "https://markwat\
25 son.com#consulting"], ["Free Mentoring", "https://markwatson.com#mentoring"], ["Priv\
26 acy Policy", "https://markwatson.com/privacy.html"], ["leanpub", "https://leanpub.co\
27 m/u/markwatson"], ["GitHub", "https://github.com/mark-watson"], ["LinkedIn", "https:\
28 //www.linkedin.com/in/marklwatson/"], ["Twitter", "https://twitter.com/mark_l_watson\
29 "], ["leanpub", "https://leanpub.com/lovinglisp"], ["leanpub", "https://leanpub.com/\
30 haskell-cookbook/"], ["leanpub", "https://leanpub.com/javaai"], 
31 ]
32 Test Suite 'All tests' passed at 2021-08-06 17:37:11.062.
33 	 Executed 2 tests, with 0 failures (0 unexpected) in 0.471 (0.472) seconds

Running in the Swift REPL

 1 $ swift run --repl
 2 [1/1] Build complete!
 3 Launching Swift REPL with arguments: -I/Users/markw_1/GIT_swift_book/WebScraping_swi\
 4 ft/.build/arm64-apple-macosx/debug -L/Users/markw_1/GIT_swift_book/WebScraping_swift\
 5 /.build/arm64-apple-macosx/debug -lWebScraping_swift__REPL
 6 Welcome to Apple Swift version 5.5 (swiftlang-1300.0.29.102 clang-1300.0.28.1).
 7 Type :help for assistance.
 8   1> import WebScraping_swift
 9   2> webPageText(uri: "https://markwatson.com")
10 $R0: String = "Mark Watson: AI Practitioner and Polyglot Programmer | Mark Watson   \
11  Read my Blog    Fun stuff    My Books    My Open Source Projects    Privacy Policy \
12 Mark Watson: AI Practitioner and Polyglot Programmer I am the author of 20+ books on\
13  Artificial Intelligence, Common Lisp, Deep Learning, Haskell, Clojure, Java, Ruby, \
14 Hy language, and the Semantic Web. I have 55 US Patents. My customer list includes: \
15 Google, Capital One, Babylist, Olive AI, CompassLabs, Disney, SAIC, Americast, PacBe\
16 ll, CastTV, Lutris Technology, Arctan Group, Sitescout.com, Embed.ly, and Webmind Co\
17 rporation"...
18   3>  

This chapter finishes a quick introduction to using Swift and Swift packages for command line utilities. The remainder of this book comprises machine learning, natural language processing, and semantic web/linked data examples.

Part 2: Apple’s CoreML and NLP Libraries

In this part we cover:

  • A short introduction to the ideas behind Deep Learning
  • Introduction to CoreML
  • Examples using CoreML
  • Introduction to NLP
  • Examples using NLP libraries

Deep Learning Introduction

Apple’s work in smoothly integrating deep learning into their developer tools for macOS, iOS, and iPadOS applications is in my opinion nothing short of brilliant. We will finish this book with an application that uses two deep learning models that provide almost all of the functionality of the application.

Before diving into Apple's CoreML libraries in later chapters we will take a shallow dive into the principles of deep learning and take a lay-of-the-land look at the most commonly used types of models. This chapter has no example programs and is intended as background material.

Most of my professional career since 2014 has involved Deep Learning, mostly with TensorFlow using the Keras APIs. In the late 1980s I was on a DARPA neural network technology advisory panel for a year, I wrote the first prototype of the SAIC ANSim neural network library commercial product, and I wrote the neural network prediction code for a bomb detector my company designed and built for the FAA for deployment in airports. More recently I have used GAN (generative adversarial networks) models for synthesizing numeric spreadsheet data and LSTM (long short term memory) models to synthesize highly structured text data like nested JSON and for NLP (natural language processing). I have also written a product recommendation model for an online store using TensorFlow Recommenders. I have several USA and European patents using neural network and Deep Learning technology.

Here we will learn a vocabulary for discussing Deep Learning neural network models and look at possible architectures.

If you want to use Deep Learning professionally, there are two specific online resources that I recommend: Andrew Ng leads the efforts at deeplearning.ai and Jeremy Howard leads the efforts at fast.ai.

There are many Deep Learning neural architectures in current practical use; a few types that I use are:

  • Multi-layer perceptron networks with many fully connected layers. An input layer contains placeholders for input data. Each element in the input layer is connected by a two-dimensional weight matrix to each element in the first hidden layer. We can use any number of fully connected hidden layers, with the last hidden layer connected to an output layer.
  • Convolutional networks for image processing and text classification. Convolutions, or filters, are small windows that can process input images (filters are two-dimensional) or sequences like text (filters are one-dimensional). Each filter uses a single set of learned weights independent of where the filter is applied in an input image or input sequence.
  • Autoencoders have the same number of input layer and output layer elements with one or more hidden fully connected layers. Autoencoders are trained to produce the same output as training input values using a relatively small number of hidden layer elements. Autoencoders are capable of removing noise in input data.
  • LSTM (long short term memory) process elements in a sequence in order and are capable of remembering patterns that they have seen earlier in the sequence.
  • GAN (generative adversarial networks) models comprise two different and competing neural models, the generator and the discriminator. GANs are often trained on input images (although in my work I have applied GANs to two-dimensional numeric spreadsheet data). The generator model takes as input a “latent input vector” (this is just a vector of specific size with random values) and generates a random output image. The weights of the generator model are trained to produce random images that are similar to how training images look. The discriminator model is trained to recognize if an arbitrary output image is original training data or an image created by the generator model. The generator and discriminator models are trained together.

The core functionality of libraries like TensorFlow is written in C++ and takes advantage of special hardware like GPUs, custom ASICs, and devices like Google's TPUs. Most people who work with Deep Learning models don't even need to be aware of the low level optimizations used to make training and using Deep Learning models more efficient. That said, in the following section I am going to show you how simple neural networks are trained and used.

Simple Multi-layer Perceptron Neural Networks

I use the terms multi-layer perceptron networks, backpropagation networks, and delta-rule networks interchangeably. Backpropagation refers to the model training process of calculating the output errors when training inputs are passed in the forward direction from the input layer, to hidden layers, and then to the output layer. The error, which is the difference between the calculated outputs and the training outputs, can be used to adjust the weights from the last hidden layer to the output layer to reduce the error. The error is then backpropagated through the hidden layers, updating all weights in the model. I have detailed example code in several of my older artificial intelligence books. Here I am satisfied to give you an intuition of how simple neural networks are trained.

The basic idea is that we start with a network initialized with random weights. For each training case we propagate the inputs through the network towards the output neurons, calculate the output errors, and propagate the errors from the output neurons back towards the input neurons in order to make small changes to the weights that lower the error for the current training example. We repeat this process by cycling through the training examples many times.
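To make this intuition concrete, here is a tiny Swift sketch of a single delta-rule weight update for one output neuron (illustrative only; the numbers and names are made up):

// One weight update step for a single output neuron.
// error = (target - actual); each weight moves a small step in the
// direction that reduces the error for this training example.
let learningRate = 0.1
var weights = [0.05, -0.2]     // weights from two inputs to the output neuron
let inputs = [0.4, 0.9]        // input activations
let target = 1.0               // training output for this example
let actual = 0.62              // network output for this example
let error = target - actual
for i in 0..<weights.count {
    weights[i] += learningRate * error * inputs[i]
}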

The following figure shows a simple backpropagation network with one hidden layer. Neurons in adjacent layers are connected by floating point connection strength weights. These weights start out as small random values that change as the network is trained. Weights are represented in the following figure by arrows; in the code the weights connecting the input to the output neurons are represented as a two-dimensional array.

Example Backpropagation network with One Hidden Layer

Each non-input neuron has an activation value that is calculated from the activation values of connected neurons feeding into it, gated (adjusted) by the connection weights. For example, in the above figure, the value of the Output 1 neuron is calculated by summing the activation of Input 1 times weight W1,1 and the Input 2 activation times weight W2,1, and then applying a “squashing function” like Sigmoid or ReLU to this sum to get the final activation value for Output 1. We want to flatten activation values to a relatively small range but still maintain relative values. To do this flattening we use the Sigmoid function that is seen in the next figure, along with the derivative of the Sigmoid function which we would use in code for training a network by adjusting the weights.

Sigmoid Function and Derivative of Sigmoid Function (SigmoidP)
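To make the squashing function concrete, here is a short Swift sketch (my own illustration, not code from the book's repositories) of the Sigmoid function, its derivative, and the activation calculation for Output 1 described above:

import Foundation

// Sigmoid squashing function: maps any real value into the range (0, 1)
func sigmoid(_ x: Double) -> Double {
    return 1.0 / (1.0 + exp(-x))
}

// Derivative of Sigmoid (SigmoidP), used when backpropagating errors
func sigmoidP(_ x: Double) -> Double {
    let s = sigmoid(x)
    return s * (1.0 - s)
}

// Activation of Output 1 for the two-input network in the figure:
// sum each input activation times its connection weight, then squash.
let input1 = 0.4, input2 = 0.9   // example input activations
let w11 = 0.1, w21 = -0.3        // example weights W1,1 and W2,1
let output1 = sigmoid(input1 * w11 + input2 * w21)
print(output1)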

Simple neural network architectures with just one or two hidden layers are easy to train using backpropagation and I have from-scratch code (using no libraries) for this in several of my previous books. However, frameworks like TensorFlow have the huge advantage that small models you experiment with on your laptop can be scaled to more parameters (usually this means more neurons in hidden layers, which increases the number of weights in a model) and run in the cloud using multiple GPUs.

Except for pedagogical purposes, I now never write neural network code from scratch. I instead take advantage of the many person-years of engineering work put into the development of frameworks like TensorFlow, PyTorch, mxnet, etc. In this book we rely on Apple's CoreML libraries, which package this kind of engineering work for macOS and iOS developers.

Deep Learning

Deep Learning models are generally understood to have many more hidden layers than simple multi-layer perceptron neural networks and often comprise multiple simple models combined together in series or in parallel. Complex architectures can be iteratively developed by manually adjusting the size of model components, changing the components, etc. Alternatively, model architecture search can be automated. At Capital One I used Google's AdaNet project that efficiently searches for effective model architectures inside a single TensorFlow session. Now all major cloud compute providers support some form of AutoML. You need to decide for yourself how much effort you want to put into deeply understanding the technology versus simply learning how to use pre-trained models.

Using Apple’s Core ML Machine Learning and Deep Learning Libraries

NOTE: as of April 2022, this example does not work - problem with latest CreateML library.

Please note that this chapter is specific to Apple’s libraries using pre-trained deep learning models.

I assume that you are generally familiar with Apple's CoreML documentation.

There are two example GitHub repositories for this chapter: the model training project wisconsin_data_create_model and the model prediction project swift-coreml-wisconsin_data_predict_with_model (https://github.com/mark-watson/swift-coreml-wisconsin_data_predict_with_model).

In the last chapter we will use two deep learning models in a macOS application that is available on Apple's App Store.

If you have taken a class in Machine Learning or Deep Learning, you learned how to divide a training data set into separate training, development (often referred to as “dev”), and test data sets. This process is handled internally by the CoreML libraries we use here so we will only be using a single training data file. The CoreML APIs we use here perform a type of AutoML (automatic machine learning) by trying to train a model using several model types and choosing the model type with the best accuracy. This is convenient and saves engineering time. A trained model imported into Xcode automatically generates Swift APIs for using the model. You can also take a trained CoreML model and use it in Python programs (documentation for Python use cases).

Training a Classification Model For the University of Wisconsin Cancer Data

When building the example model (data in files wisconsin.mlmodel*), a Swift file wisconsin.swift is auto-generated. In the project Makefile, notice that the make target clean removes these files:

1 build_model: clean
2 	swift build
3 	swift run
4 
5 clean:
6 	rm -f Sources/wisconsin_data/wisconsin.mlmodel*
7 	rm -f Sources/wisconsin_data/wisconsin.swift

The file Sources/wisconsin_data/main.swift reads a training file in CSV format and uses the CoreML libraries to train a prediction model. You might want to uncomment the print statement in line 10 to see the contents of the CSV formatted (i.e., a spreadsheet file) training data file. In lines 11-15 we define which columns in the input training CSV file we will use to build our model (in this case we use all the data features).

In this example we use Apple’s APIs for MLClassifier that trains the following learning algorithms and keeps the best for the saved model:

  • Boosted trees classifier
  • Random forest classifier
  • Decision tree classifier
  • SVM
  • Logistic regression

There is optional material at the end of this chapter with background for these five types of models.

 1 import Foundation
 2 import CoreML
 3 import CreateML
 4 
 5 func create_model() {
 6     if #available(macOS 10.14, *) {
 7         let fileUrl = URL(fileURLWithPath: "labeled_cancer_data.csv")
 8         print(fileUrl)
 9         if let dataTable = try? MLDataTable(contentsOf: fileUrl) {
10             //print(dataTable)
11             let regressorColumns = ["Cl.thickness", "Cell.size",
12                                     "Cell.shape", "Marg.adhesion",
13                                     "Epith.c.size", "Bare.nuclei",
14                                     "Bl.cromatin", "Normal.nucleoli",
15                                     "Mitoses", "Class"]
16             
17             // Classifier:
18             let classifierTable = dataTable[regressorColumns]
19             let (classifierEvaluationTable, classifierTrainingTable) =
20               classifierTable.randomSplit(by: 0.20, seed: 5)
21             let classifier = try! MLClassifier(trainingData: classifierTrainingTable,
22                                               targetColumn: "Class")
23             print("++ classifier.description:", classifier)
24             /// Classifier training accuracy as a percentage
25             let trainingError = classifier.trainingMetrics.classificationError
26             let trainingAccuracy = (1.0 - trainingError) * 100
27             print("trainingAccuracy:", trainingAccuracy)
28             
29             /// Classifier validation accuracy as a percentage
30             let validationError = classifier.validationMetrics.classificationError
31             print("validationError:", validationError)
32             let validationAccuracy = (1.0 - validationError) * 100
33             print("validationAccuracy:", validationAccuracy)
34             /// Evaluate the classifier
35             let classifierEvaluation =
36               classifier.evaluation(on: classifierEvaluationTable)
37             
38             /// Classifier evaluation accuracy as a percentage
39             let evaluationError = classifierEvaluation.classificationError
40             print("evaluationError:", evaluationError)
41             let evaluationAccuracy = (1.0 - evaluationError) * 100
42             print("evaluationAccuracy:", evaluationAccuracy)
43             
44             let classifierMetadata =
45               MLModelMetadata(author: "Mark Watson",
46                               shortDescription: "Wisconsin Cancer Dataset",
47                               version: "1.0")
48             
49             /// Save the trained classifier model to the Desktop.
50             let _ =
51               try? classifier.write(to: URL(fileURLWithPath:
52                                            "Sources/wisconsin_data/wisconsin.mlmodel\
53 "),
54                                            metadata: classifierMetadata)
55         }
56     }
57 }
58 
59 create_model()

Building the model with make produces the following output:
  1 $ make
  2 rm -f Sources/wisconsin_data/wisconsin.mlmodel*
  3 rm -f Sources/wisconsin_data/wisconsin.swift
  4 swift build
  5 [0/0] Build complete!
  6 swift run
  7 [0/0] Build complete!
  8 column_type_hints = {}
  9 Finished parsing file /Users/markw_1/GITHUB/wisconsin_data_create_model/labeled_canc\
 10 er_data.csv
 11 Parsing completed. Parsed 100 lines in 0.01006 secs.
 12 Finished parsing file /Users/markw_1/GITHUB/wisconsin_data_create_model/labeled_canc\
 13 er_data.csv
 14 Parsing completed. Parsed 683 lines in 0.003458 secs.
 15 Using 9 features to train a model to predict Class.
 16 
 17 Automatically generating validation set from 5% of the data.
 18 Boosted trees classifier:
 19 --------------------------------------------------------
 20 Number of examples          : 522
 21 Number of classes           : 2
 22 Number of feature columns   : 9
 23 Number of unpacked features : 9
 24 +-----------+--------------+-------------------+---------------------+--------------\
 25 -----+---------------------+
 26 | Iteration | Elapsed Time | Training Accuracy | Validation Accuracy | Training Log \
 27 Loss | Validation Log Loss |
 28 +-----------+--------------+-------------------+---------------------+--------------\
 29 -----+---------------------+
 30 | 1         | 0.006108     | 0.988506          | 0.904762            | 0.459892     \
 31      | 0.520190            |
 32 | 2         | 0.010718     | 0.984674          | 0.857143            | 0.329561     \
 33      | 0.412062            |
 34 | 3         | 0.015658     | 0.984674          | 0.857143            | 0.245602     \
 35      | 0.337748            |
 36 | 4         | 0.020130     | 0.986590          | 0.857143            | 0.186529     \
 37      | 0.291379            |
 38 | 5         | 0.024706     | 0.990421          | 0.857143            | 0.144306     \
 39      | 0.262312            |
 40 | 10        | 0.043619     | 0.996169          | 0.904762            | 0.049835     \
 41      | 0.180445            |
 42 +-----------+--------------+-------------------+---------------------+--------------\
 43 -----+---------------------+
 44 Random forest classifier:
 45 --------------------------------------------------------
 46 Number of examples          : 522
 47 Number of classes           : 2
 48 Number of feature columns   : 9
 49 Number of unpacked features : 9
 50 +-----------+--------------+-------------------+---------------------+--------------\
 51 -----+---------------------+
 52 | Iteration | Elapsed Time | Training Accuracy | Validation Accuracy | Training Log \
 53 Loss | Validation Log Loss |
 54 +-----------+--------------+-------------------+---------------------+--------------\
 55 -----+---------------------+
 56 | 1         | 0.002102     | 0.984674          | 0.904762            | 0.173523     \
 57      | 0.305533            |
 58 | 2         | 0.003890     | 0.986590          | 0.904762            | 0.171982     \
 59      | 0.306030            |
 60 | 3         | 0.005461     | 0.984674          | 0.904762            | 0.173111     \
 61      | 0.276622            |
 62 | 4         | 0.006758     | 0.984674          | 0.904762            | 0.171693     \
 63      | 0.285118            |
 64 | 5         | 0.007481     | 0.982759          | 0.952381            | 0.172563     \
 65      | 0.273630            |
 66 | 10        | 0.011962     | 0.984674          | 0.952381            | 0.171195     \
 67      | 0.261603            |
 68 +-----------+--------------+-------------------+---------------------+--------------\
 69 -----+---------------------+
 70 Decision tree classifier:
 71 --------------------------------------------------------
 72 Number of examples          : 522
 73 Number of classes           : 2
 74 Number of feature columns   : 9
 75 Number of unpacked features : 9
 76 +-----------+--------------+-------------------+---------------------+--------------\
 77 -----+---------------------+
 78 | Iteration | Elapsed Time | Training Accuracy | Validation Accuracy | Training Log \
 79 Loss | Validation Log Loss |
 80 +-----------+--------------+-------------------+---------------------+--------------\
 81 -----+---------------------+
 82 | 1         | 0.002216     | 0.988506          | 0.904762            | 0.170105     \
 83      | 0.352356            |
 84 +-----------+--------------+-------------------+---------------------+--------------\
 85 -----+---------------------+
 86 SVM:
 87 --------------------------------------------------------
 88 Number of examples          : 522
 89 Number of classes           : 2
 90 Number of feature columns   : 9
 91 Number of unpacked features : 9
 92 Number of coefficients    : 10
 93 Starting L-BFGS 
 94 --------------------------------------------------------
 95 +-----------+----------+-----------+--------------+-------------------+-------------\
 96 --------+
 97 | Iteration | Passes   | Step size | Elapsed Time | Training Accuracy | Validation A\
 98 ccuracy |
 99 +-----------+----------+-----------+--------------+-------------------+-------------\
100 --------+
101 | 0         | 2        | 1.000000  | 0.000629     | 0.350575          | 0.285714    \
102         |
103 | 1         | 6        | 3.000000  | 0.001610     | 0.908046          | 0.857143    \
104         |
105 | 2         | 7        | 3.000000  | 0.002089     | 0.840996          | 0.809524    \
106         |
107 | 3         | 12       | 1.053671  | 0.004093     | 0.961686          | 0.952381    \
108         |
109 | 4         | 13       | 1.053671  | 0.004610     | 0.959770          | 0.904762    \
110         |
111 | 9         | 20       | 1.053671  | 0.007046     | 0.971264          | 0.904762    \
112         |
113 +-----------+----------+-----------+--------------+-------------------+-------------\
114 --------+
115 Logistic regression:
116 --------------------------------------------------------
117 Number of examples          : 522
118 Number of classes           : 2
119 Number of feature columns   : 9
120 Number of unpacked features : 9
121 Number of coefficients      : 10
122 Starting Newton Method 
123 --------------------------------------------------------
124 +-----------+----------+--------------+-------------------+---------------------+
125 | Iteration | Passes   | Elapsed Time | Training Accuracy | Validation Accuracy |
126 +-----------+----------+--------------+-------------------+---------------------+
127 | 1         | 2        | 0.000373     | 0.967433          | 0.904762            |
128 | 2         | 3        | 0.000724     | 0.969349          | 0.904762            |
129 | 3         | 4        | 0.001080     | 0.975096          | 0.904762            |
130 | 4         | 5        | 0.001427     | 0.978927          | 0.904762            |
131 | 5         | 6        | 0.001796     | 0.978927          | 0.904762            |
132 | 7         | 8        | 0.002388     | 0.978927          | 0.904762            |
133 +-----------+----------+--------------+-------------------+---------------------+
134 SUCCESS: Optimal solution found.
135 
136 ++ classifier.description: RandomForestClassifier
137 
138 Parameters
139 Max Depth: 6
140 Max Iterations: 10
141 Min Loss Reduction: 0.0
142 Min Child Weight: 0.0
143 Random Seed: 42
144 Row Subsample: 0.8
145 Column Subsample: 0.8
146 
147 Performance on Training Data
148 Number of examples: 522
149 Number of classes: 2
150 Accuracy: 98.47%
151 
152 Performance on Validation Data
153 Number of examples: 21
154 Number of classes: 2
155 Accuracy: 95.24%
156 
157 trainingAccuracy: 98.46743295019157
158 validationError: 0.04761904761904767
159 validationAccuracy: 95.23809523809523
160 evaluationError: 0.050000000000000044
161 evaluationAccuracy: 95.0
162 Trained model successfully saved at /Users/markw_1/GITHUB/SwiftAI-book-code/wisconsi\
163 n_data_create_model/Sources/wisconsin_data/wisconsin.mlmodel.

Using the Classification Model for the University of Wisconsin Cancer Data

The GitHub repo https://github.com/mark-watson/swift-coreml-wisconsin_data_predict_with_model contains a Makefile with a target for building the prediction code:

 1 build_predictor: clean
 2 	cp ../wisconsin_data_create_model/Sources/wisconsin_data/wisconsin.mlmodel \
 3 	   Sources/wisconsin_data/
 4 	cd Sources/wisconsin_data; \
 5 	   xcrun coremlcompiler generate wisconsin.mlmodel --language Swift .
 6 	cd Sources/wisconsin_data; xcrun coremlcompiler compile wisconsin.mlmodel .
 7 	swift build
 8 	swift run
 9 
10 clean:
11 	rm -rf Sources/wisconsin_data/wisconsin.mlmodel*
12 	rm -rf Sources/wisconsin_data/wisconsin.swift

The file swift-coreml-wisconsin_data_predict_with_model/Sources/wisconsin_data/main.swift contains the prediction code:

 1 import Foundation
 2 import CoreML
 3 import CreateML
 4 
 5 func predict() {
 6     if #available(macOS 10.14, *) {
 7         
 8         let modelUrl = URL(fileURLWithPath:
 9             "Sources/wisconsin_data/wisconsin.mlmodelc")
10         let pretrained_model = try! wisconsin(contentsOf: modelUrl,
11              configuration: MLModelConfiguration())
12         
13         let sampleInput = wisconsinInput(Cl_thickness: 3, Cell_size: 2,
14             Cell_shape: 5, Marg_adhesion: 8, Epith_c_size: 8, Bare_nuclei: 2, Bl_cro\
15 matin: 3,
16             Normal_nucleoli: 7, Mitoses: 4)
17         let a_prediction = try! pretrained_model.prediction(input: sampleInput)
18         print(a_prediction.featureNames)
19         print("Class:", a_prediction.featureValue(for: "Class")!)
20         print("ClassProbability:",
21               a_prediction.featureValue(for: "ClassProbability")!)
22     }
23 }
24 
25 predict()

We can run the prediction example on the command line:

 1 $ make
 2 rm -rf Sources/wisconsin_data/wisconsin.mlmodel*
 3 rm -rf Sources/wisconsin_data/wisconsin.swift
 4 cp ../wisconsin_data_create_model/Sources/wisconsin_data/wisconsin.mlmodel \
 5     Sources/wisconsin_data/
 6 cd Sources/wisconsin_data; \
 7     xcrun coremlcompiler generate wisconsin.mlmodel --language Swift .
 8 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
 9 nsin.swift
10 cd Sources/wisconsin_data; xcrun coremlcompiler compile wisconsin.mlmodel .
11 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
12 nsin.mlmodelc/coremldata.bin
13 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
14 nsin.mlmodelc/analytics/coremldata.bin
15 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
16 nsin.mlmodelc/model0/coremldata.bin
17 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
18 nsin.mlmodelc/model1/coremldata.bin
19 /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/wisco\
20 nsin.mlmodelc/model1/_B0000.DAT
21 swift build
22 'wisconsin_data' /Users/markw_1/GITHUB/wisconsin_data_predict_with_model: warning: f\
23 ound 1 file(s) which are unhandled; explicitly declare them as resources or exclude \
24 from the target
25     /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/w\
26 isconsin.mlmodel
27 
28 [4/4] Build complete!
29 swift run
30 'wisconsin_data' /Users/markw_1/GITHUB/wisconsin_data_predict_with_model: warning: f\
31 ound 1 file(s) which are unhandled; explicitly declare them as resources or exclude \
32 from the target
33     /Users/markw_1/GITHUB/wisconsin_data_predict_with_model/Sources/wisconsin_data/w\
34 isconsin.mlmodel
35 
36 [0/0] Build complete!
37 ["Class", "ClassProbability"]
38 Class: Int : 0
39 ClassProbability: Dictionary : {
40     0 = "0.7969631955468645";
41     1 = "0.2030368044531356";
42 }

I recommend that you read through Apple's documentation and bookmark the page for the CoreML classification models.

Boosted Trees Classifiers comprise individual models summed together, where the simpler models are learned decision trees (a type of ensemble model).

Random Forest Classifiers are similar to Boosted Trees Classifiers except that the ensemble sub-classifiers comprising a Random Forest Classifier are each trained with a subset of the data.

You might also want to review Apple’s documentation for the following conventional Machine Learning algorithms: Decision tree classifier, SVM, and Logistic regression.

Natural Language Processing Using Apple’s Natural Language Framework

I have been working in the field of Natural Language Processing (NLP) since 1985, so I “lived through” the revolutionary change in NLP that has occurred since 2014: Deep Learning results outclassed results from previous symbolic methods.

Apple's Natural Language framework is documented at https://developer.apple.com/documentation/naturallanguage.

I will not cover older symbolic methods of NLP here, rather I refer you to my previous books Practical Artificial Intelligence Programming With Java, Loving Common Lisp, or the Savvy Programmer’s Secret Weapon, and Haskell Tutorial and Cookbook for examples. We get better results using Deep Learning (DL) for NLP and the libraries that Apple provides.

You will learn how to apply both DL and NLP by using the state-of-the-art full-feature libraries that Apple provides in their iOS and macOS development tools.

Using Apple’s NaturalLanguage Swift Library

We will use one of Apple’s NLP libraries consisting of pre-built models in the last chapter of this book. In order to fully understand the example in the last chapter you will need to read Apple’s high-level discussion of using CoreML https://developer.apple.com/documentation/coreml and their specific support for NLP https://developer.apple.com/documentation/naturallanguage/.

There are many pre-trained CoreML compatible models on the web, both from Apple and from third parties (e.g., https://github.com/likedan/Awesome-CoreML-Models).

Apple also provides tools for converting TensorFlow and PyTorch models to be compatible with CoreML https://coremltools.readme.io/docs.

A Simple Wrapper Library for Apple's NLP Models

I will not go into too much detail here but I created a small wrapper library for Apple’s NLP models that will make it easier for you to jump in and have fun experimenting with them: https://github.com/mark-watson/Nlp_swift.

The main library implementation file is:

import Foundation
import NaturalLanguage

let tagger = NSLinguisticTagger(tagSchemes:[.tokenType, .language, .lexicalClass,
    .nameType, .lemma], options: 0)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace,
    .joinNames]

@available(OSX 10.13, *)
public func getEntities(for text: String) -> [(String, String)] {
    var words: [(String, String)] = []
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .nameType,
    options: options) { tag, tokenRange, stop in
        let word = (text as NSString).substring(with: tokenRange)
        words.append((word, tag?.rawValue ?? "unknown"))
    }
    return words
}

@available(OSX 10.13, *)
public func getLemmas(for text: String) -> [(String, String)] {
    var words: [(String, String)] = []
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word, scheme: .lemma,
            options: options) { tag, tokenRange, stop in
        let word = (text as NSString).substring(with: tokenRange)
        words.append((word, tag?.rawValue ?? "unknown"))
    }
    return words
}

Here is some test code:

let quote = "President George Bush went to Mexico with IBM representatives. Here's to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They're not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can't do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do. - Steve Jobs (Founder of Apple Inc.)"
if #available(OSX 10.13, *) {
    print("\nEntities:\n")
    print(getEntities(for: quote))
    print("\nLemmas:\n")
    print(getLemmas(for: quote))
}

Using the OpenAI APIs

I have been working as an artificial intelligence practitioner since 1982 and the capability of the beta OpenAI APIs is the most impressive thing that I have seen (so far!) in my career. These APIs use the GPT-3 model. You will need to apply to OpenAI for a free API access key. I use their APIs frequently enough in my projects that I am on their paid plan.

I recommend reading the online documentation for the APIs to see all of the capabilities of the beta OpenAI APIs. Let’s start by jumping into the example code in the GitHub repository https://github.com/mark-watson/OpenAI_swift that you can use in your projects.

The library that I wrote for this chapter supports three functions: completing text, summarizing text, and answering general questions. The single OpenAI model that the beta OpenAI APIs use is fairly general purpose: it can generate cooking directions when given an ingredient list, correct grammar, write an advertisement from a product description, generate spreadsheet data from data descriptions in English text, etc.

Given the examples from https://beta.openai.com and the Swift examples here, you should be able to modify my example code to use any of the functionality that OpenAI documents.

We will look closely at the function completions and then just look at the small differences in the other two example functions. The definitions for all three public functions are kept in the file Sources/OpenAI_swift/OpenAI_swift.swift. You need to request an API key (I had to wait a few weeks to receive my key) and set the value of the environment variable OPENAI_KEY to your key. You can add a statement like:

export OPENAI_KEY=sa-hdedds7&dhdhsdffd

to your .profile or other shell resource file (the above key value is made up and invalid).
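
At runtime the library needs to read this key. Here is a minimal sketch (my own illustration, not necessarily how the repository does it) of how the openai_key value used in the code below can be initialized with Foundation:

import Foundation

// Read the OpenAI API key from the environment variable set in your shell profile.
let openai_key = ProcessInfo.processInfo.environment["OPENAI_KEY"] ?? ""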

While I sometimes use Swift libraries to make HTTP requests, I prefer using the curl utility to experiment with API calls from the command line before starting to write any code.

An example curl command line call to the beta OpenAI APIs is:

1 curl \
2   https://api.openai.com/v1/engines/davinci/completions \
3    -H "Content-Type: application/json" \
4    -H "Authorization: Bearer sa-hdffds7&dhdhsdgffd" \
5    -d '{"prompt": "The President went to Congress", \
6         "max_tokens": 22}'

Here the API token “sa-hdffds7&dhdhsdgffd” on line 4 is made up: it is not my API token. All of the OpenAI APIs expect JSON data with the query parameters. To use the completion API, we set values for prompt and max_tokens. The value of max_tokens is the requested number of returned words, or tokens. We will look at several examples later.

In the file Sources/OpenAI_swift/OpenAI_swift.swift we start with a helper function openAiHelper that takes a string with the OpenAI API call arguments then extracts the results from the returned JSON data:

func openAiHelper(body: String)  -> String {
    var ret = ""
    var content = "{}"
    let requestUrl = URL(string: openAiHost)!
    var request = URLRequest(url: requestUrl)
    request.httpMethod = "POST"
    request.httpBody = body.data(using: String.Encoding.utf8);
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.setValue("Bearer " + openai_key, forHTTPHeaderField: "Authorization")
    let task = URLSession.shared.dataTask(with: request) { (data, response, error) in
        if let error = error {
            print("-->> Error accessing OpenAI servers: \(error)")
            return
        }
        if let data = data, let s = String(data: data, encoding: .utf8) {
            content = s
            CFRunLoopStop(CFRunLoopGetMain())
        }
    }
    task.resume()
    CFRunLoopRun()
    let c = String(content)
    let i1 = c.range(of: "\"text\": ")
    if let r1 = i1 {
        let i2 = c.range(of: "\"index\":")
        if let r2 = i2 {
            ret = String(String(String(c[r1.lowerBound..<r2.lowerBound])
             .dropFirst(9)).dropLast(2))
        }
    }
    return ret
}

I convert the returned JSON data to a string result by searching for the constants “text”: and “index”: instead of using a JSON parser like I do in the later KGN example.
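
If you prefer proper JSON parsing, here is a minimal alternative sketch (my code, not the library’s) using JSONSerialization; it assumes the standard completion response layout with a top-level choices array containing text values:

import Foundation

// Parse the OpenAI completion response instead of searching for substrings.
func extractCompletionText(fromJson json: String) -> String {
    guard
        let data = json.data(using: .utf8),
        let obj = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
        let choices = obj["choices"] as? [[String: Any]],
        let text = choices.first?["text"] as? String
    else { return "" }
    return text
}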

The three example functions all use this openAiHelper function. The first example function completions sets the parameters to complete a text fragment. You have probably seen examples of the OpenAI GPT-3 model writing stories, given a starting sentence. We are using the same model and functionality here:

public func completions(promptText: String, maxTokens: Int = 25) -> String {
    let body: String = "{\"prompt\": \"" + promptText +
        "\", \"max_tokens\": \(maxTokens)}"
    return openAiHelper(body: body)
}

Note that the OpenAI models are stochastic. When generating output words (or tokens), the model assigns probabilities to possible words to generate and samples a word using these probabilities. As a simple example, suppose that given the prompt text “it fell and” the model could only generate the following three words, with these probabilities (a small sampling sketch follows the list):

  • the 0.8
  • that 0.1
  • a 0.1
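
Here is a minimal sketch of that sampling step (the words and probabilities are the toy values above, not real model output):

import Foundation

// Sample one word from a categorical distribution over next-word candidates.
let candidates: [(word: String, p: Double)] = [("the", 0.8), ("that", 0.1), ("a", 0.1)]

func sampleNextWord() -> String {
    var r = Double.random(in: 0..<1)
    for (word, p) in candidates {
        r -= p
        if r < 0 { return word }
    }
    return candidates.last!.word
}

print((0..<10).map { _ in sampleNextWord() })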

The model would emit the word the 80% of the time, the word that 10% of the time, or the word a 10% of the time. As a result, the model can generate different completion text for the same text prompt. Let’s look at some examples using the same prompt text. Notice the stochastic nature of the returned results with the prompt text “He walked to the river” passed twice to the OpenAI GPT-3 model:

First example:

and sat down thinking, the warm evening clotted with insects. The river lapping the bank in the long grass. He

Another example of text completion:

, the beast running slowly behind him. He looked away from the cave now, using rain and clouds as his curtain to hide

The function summarize is very similar to the function completions except that the JSON data passed to the API has a few additional parameters that let the API know that we want a text summary:

  • presence_penalty - penalize words found in the original text (we set this to zero)
  • temperature - higher values increase the randomness used to select output tokens. If you set this to zero, then the same prompt text will always yield the same results (I never use a zero value).
  • top_p - also affects randomness. All examples I have seen use a value of 1.
  • frequency_penalty - penalize using the same words repeatedly (I usually set this to zero, but you should experiment with different values)

When summarizing text, try varying the number of generated tokens (the maxTokens parameter) to get shorter or longer summaries:

public func summarize(text: String, maxTokens: Int = 40) -> String {
    let body: String = "{\"prompt\": \"" + text +
        "\", \"max_tokens\": \(maxTokens), \"presence_penalty\": 0.0, " +
        "\"temperature\": 0.3, \"top_p\": 1.0, \"frequency_penalty\": 0.0}"
    return openAiHelper(body: body)
}

Notice the stochastic nature of the returned summarization results with prompt text “Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter.[19] When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows,[20] and is on average the third-brightest natural object in the night sky after the Moon and Venus.”:

First summarization example:

Jupiter is a gas giant because it is predominantly composed of hydrogen and
helium; it has a solid core, but it has no surface. Jupiter is a gas giant
because it is predominantly composed

Another summarization example:

The planet is usually the fourth-brightest in the night sky, after the Sun,
Venus and the Moon.

Jupiter is a gas giant because it is predominantly composed of hydrogen

The function questionAnswering is very similar to the function summarize except that the JSON data passed to the API has one additional parameter that lets the API know that we want a question answered:

  • stop - The OpenAI API examples use the value: [\n], which is what I use here.

We also need to prepend the string “nQ: “ to the prompt text.

Additionally, the model returns a series of answers with the string “nQ:” acting as a delimiter between the answers.

public func questionAnswering(question: String) -> String {
    let body: String = "{\"prompt\": \"nQ: " + question + " nA:\", " +
        "\"max_tokens\": 25, \"presence_penalty\": 0.0, \"temperature\": 0.3, " +
        "\"top_p\": 1.0, \"frequency_penalty\": 0.0, \"stop\": [\"\\n\"]}"
    let answer = openAiHelper(body: body)
    if let i1 = answer.range(of: "nQ:") {
        return String(answer[answer.startIndex..<i1.lowerBound])
    }
    return answer
}

I strongly urge you to add a debug printout to the question answering code to print the full answer before we check for the delimiter string. For some questions, the OpenAI APIs generate a series of answers that increase in generality. In the example code we just take the most specific answer.
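
For example, a sketch of the suggested change inside questionAnswering, just after the helper call:

let answer = openAiHelper(body: body)
print("DEBUG full OpenAI response: |\(answer)|")  // temporary debug printout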

Let’s look at question answering examples and discuss possible problems and workarounds. The two examples below ask the same question and get back different, but reasonable, answers. The GPT-3 model is trained using a massive amount of text from the web, which is why it can generate reasonable answers. Here are two generated answers to the question “Where was Leonardo da Vinci born?”:

In Vinci, Italy.

And another generated output for the same question:

In Italy.

In addition to reading the beta OpenAI API documentation you might want to read general material on the use of OpenAI’s GPT-3 model. Since the APIs we are using are beta they may change. I will update this chapter and the source code on GitHub if the APIs change.

Part 3: Knowledge Representation and Data Acquisition

In this part we cover:

  • Introduction to the semantic web and linked data
  • A general discussion of Knowledge Representation
  • Create Knowledge Graphs from text input
  • Knowledge Graph Explorer application

Linked Data and the Semantic Web

Tim Berners-Lee, James Hendler, and Ora Lassila introduced the term Semantic Web in a 2001 article for Scientific American. Here I do not capitalize semantic web and use the similar term linked data somewhat interchangeably with semantic web.

In the same way that the web allows links between related web pages, linked data supports linking associated data on the web together. I view linked data as a relatively simple way to specify relationships between data sources on the web while the semantic web has a much larger vision: the semantic web has the potential to be the entirety of human knowledge represented as data on the web in a form that software agents can work with to answer questions, perform research, and to infer new data from existing data.

While the “web” describes information for human readers, the semantic web is meant to provide structured data for ingestion by software agents. This distinction will become clear as we compare WikiPedia, made for human readers, with DBPedia, which uses the info boxes on WikiPedia topics to automatically extract RDF data describing those topics. Let’s look at the WikiPedia topic for the town I live in, Sedona, Arizona, and show how the info box on the English version of the WikiPedia topic page for Sedona https://en.wikipedia.org/wiki/Sedona,_Arizona maps to the DBPedia page http://dbpedia.org/page/Sedona,_Arizona. Please open both of these WikiPedia and DBPedia URIs in two browser tabs and keep them open for reference.

I assume that the format of the WikiPedia page is familiar, so let’s look at the DBPedia page for Sedona, which shows, in human readable form, the RDF statements that have Sedona, Arizona as their subject. RDF is used to model and represent data. An RDF statement is defined by three values, so an instance of an RDF statement is called a triple, with three parts:

  • subject: a URI (also referred to as a “Resource”)
  • property: a URI (also referred to as a “Resource”)
  • value: a URI (also referred to as a “Resource”) or a literal value (like a string or a number with optional units)

The subject for each Sedona related triple is the above URI for the DBPedia human readable page. The subject and property in an RDF triple are almost always URIs that ground an entity to information on the web. The human readable page for Sedona lists several properties and the values of those properties. One of the properties is “dbo:areaCode” where “dbo” is a namespace reference (in this case for a DatatypeProperty).
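
Here is a hand-written illustration in Turtle notation (not copied from DBPedia) of triples using the dbo:areaCode property and an rdfs:label:

@prefix dbo:  <http://dbpedia.org/ontology/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://dbpedia.org/resource/Sedona,_Arizona>
    dbo:areaCode "928" ;
    rdfs:label   "Sedona, Arizona"@en .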

The following two figures show an abstract representation of linked data and then a sample of linked data with actual web URIs for resources and properties:

Abstract RDF representation with 2 Resources, 2 literal values, and 3 Properties
Abstract RDF representation with 2 Resources, 2 literal values, and 3 Properties
Concrete example using RDF seen in last chapter showing the RDF representation with 2 Resources, 2 literal values, and 3 Properties
Concrete example using RDF seen in last chapter showing the RDF representation with 2 Resources, 2 literal values, and 3 Properties

We will use the SPARQL query language (SPARQL for RDF data is similar to SQL for relational database queries). Let’s look at an example using the RDF in the last figure:

1     "select ?v where { <http://markwatson.com/index.rdf#Sun_ONE>
2                        <http://www.ontoweb.org/ontology/1#booktitle>
3                        ?v }

This query should return the result “Sun ONE Services - J2EE”. If you want to query for all URI resources that are books, along with the literal values of their titles, then you can use:

1     "select ?s ?v where { ?s
2                           <http://www.ontoweb.org/ontology/1#booktitle>
3                           ?v }

Note that ?s and ?v are arbitrary query variable names, here standing for “subject” and “value”. You can use more descriptive variable names like:

1     "select ?bookURI ?bookTitle where 
2         { ?bookURI
3           <http://www.ontoweb.org/ontology/1#booktitle>
4           ?bookTitle }

We will be diving a little deeper into RDF examples in the next chapter when we write a tool for using RDF data from DBPedia to find information about entities (e.g., people, places, organizations) and the relationships between entities. For now I want you to understand the idea of RDF statements represented as triples, that web URIs represent things, properties, and sometimes values, and that URIs can be followed manually (often called “dereferencing”) to see what they reference in human readable form.

Understanding the Resource Description Framework (RDF)

Text data on the web has some structure in the form of HTML elements like headers, page titles, anchor links, etc. but this structure is too imprecise for general use by software agents. RDF is a method for encoding structured data in a more precise way.

RDF specifies graph structures and can be serialized for storage or for service calls in XML, Turtle, N3, and other formats. I like the Turtle format and suggest that you pause reading this book for a few minutes and look at this World Wide Web Consortium Turtle RDF primer at https://www.w3.org/2007/02/turtle/primer/.

Frequently Used Resource Namespaces

The following standard namespaces are frequently used:

  • rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns#
  • rdfs: http://www.w3.org/2000/01/rdf-schema#
  • owl: http://www.w3.org/2002/07/owl#
  • xsd: http://www.w3.org/2001/XMLSchema#
  • foaf: http://xmlns.com/foaf/0.1/
  • skos: http://www.w3.org/2004/02/skos/core#
  • dc: http://purl.org/dc/elements/1.1/

Let’s look into the Friend of a Friend (FOAF) namespace. Click on the above link for FOAF http://xmlns.com/foaf/0.1/ and find the definitions for the FOAF Core:

    Agent
    Person
    name
    title
    img
    depiction (depicts)
    familyName
    givenName
    knows
    based_near
    age
    made (maker)
    primaryTopic (primaryTopicOf)
    Project
    Organization
    Group
    member
    Document
    Image
and for the Social Web:

mbox
homepage
weblog
openid
jabberID
mbox_sha1sum
interest
topic_interest
topic (page)
workplaceHomepage
workInfoHomepage
schoolHomepage
publications
currentProject
pastProject
account
OnlineAccount
accountName
accountServiceHomepage
PersonalProfileDocument
tipjar
sha1
thumbnail
logo

You have now seen a few common schemas for RDF data. Another schema that is widely used for annotating web sites, but which we won’t need for our examples here, is schema.org.

Understanding the SPARQL Query Language

For the purposes of the material in this book, the two sample SPARQL queries here are sufficient for you to get started using my SPARQL library https://github.com/mark-watson/SparqlQuery_swift with arbitrary RDF data sources and simple queries.
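
Here is a minimal usage sketch for this library, calling the sparqlDbPedia function that is listed later in this book (the query and entity URI are illustrative):

import Foundation

// Ask DBPedia for an English comment describing Sedona, Arizona.
let query = """
SELECT ?comment WHERE {
  <http://dbpedia.org/resource/Sedona,_Arizona>
    <http://www.w3.org/2000/01/rdf-schema#comment> ?comment .
  FILTER (lang(?comment) = 'en')
} LIMIT 1
"""
let results = sparqlDbPedia(query: query)  // Array<Dictionary<String,String>>
print(results.first?["comment"] ?? "no result")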

My Swift SPARQL library open in Xcode

The Apache Foundation has a good introduction to SPARQL that I refer you to for more information.

Semantic Web and Linked Data Wrap Up

In the next chapter we will use natural language processing to extract entities from raw text and then use SPARQL queries to fetch structured information about those entities. We will be using my Swift SPARQL library https://github.com/mark-watson/SparqlQuery_swift as well as two pre-trained CoreML deep learning models.

Example Application: iOS and macOS Versions of my KnowledgeBookNavigator

I used many of the techniques discussed in this book, the Swift language, and the SwiftUI user interface framework to develop the Swift version of my Knowledge Graph Navigator application for macOS. I originally wrote this as an example program in Common Lisp for another book project.

The GitHub repository for the KGN example is https://github.com/mark-watson/KGN. I copied the code from my stand-alone Swift libraries to this example to make it self contained. The easiest way to browse the source code is to open this project in Xcode.

I submitted the KGN app that we discuss in this chapter to Apple’s store and it is available as a macOS app. If you load this project into Xcode, you can also build and run the iOS and iPadOS targets.

You will need to have read through the last chapter on semantic web and linked data technologies to understand this example because quite a lot of the code has embedded SPARQL queries to get information from DBPedia.org.

The other major part of this app is a slightly modified version of Apple’s question answering (QA) example using the BERT model in CoreML. Apple’s code is in the subdirectory AppleBERT. Please read the README file for this project and follow the directions for downloading and using Apple’s model and vocabulary file.

Screen Shots of macOS Application

In the first screenshot seen below, I entered query text that included “Steve Jobs” and the popup list selector lets the user select which “Steve Jobs” entity from DBPedia they want to use.

Entered query and KGN is asking user to disambiguate which “Steve Jobs” they want information for
Showing results

The previous screenshot shows the results of the query displayed as English text.

Notice the app prompt “Behind the scenes SPARQL queries” near the bottom of the app window. If you click on this field then the SPARQL queries used to answer the question are shown, as in the next screenshot:

Showing SPARQL queries used to gather data

Application Code Listings

I will list some of the code for this example application and I suggest that you, dear reader, also open this project in Xcode in order to navigate the sample code and more carefully read through it.

SPARQL

I introduced you to the use of SPARQL in the last chapter. This library can be used by adding it as a package dependency to this project. You can also clone the GitHub repository https://github.com/mark-watson/SparqlQuery_swift to have the source code for local viewing and modification; I have also copied the code into the KGN project to make it self contained.

The file SparqlQuery.swift is shown here:

import Foundation

public func sparqlDbPedia(query: String) -> Array<Dictionary<String,String>> {
    return SparqlEndpointHelpter(query: query,
        endPointUri: "https://dbpedia.org/sparql?query=") }

public func sparqlWikidata(query: String) -> Array<Dictionary<String,String>> {
    return SparqlEndpointHelpter(query: query,
        endPointUri:
          "https://query.wikidata.org/bigdata/namespace/wdq/sparql?query=") }

public func SparqlEndpointHelpter(query: String,
                                  endPointUri: String) ->
                            Array<Dictionary<String,String>> {
    var ret = Set<Dictionary<String,String>>();
    var content = "{}"

    let maybeString = cacheLookupQuery7(key: query)
    if maybeString?.count ?? 0 > 0 {
        content = maybeString ?? ""
    } else {
        let requestUrl = URL(string: String(endPointUri +
          query.addingPercentEncoding(withAllowedCharacters: .urlHostAllowed)!) +
          "&format=json")!
        do { content = try String(contentsOf: requestUrl) }
          catch let error { print(error) }
    }
    let json = try? JSONSerialization.jsonObject(with: Data(content.utf8),
                                                 options: [])
    if let json2 = json as! Optional<Dictionary<String, Any?>> {
        if let head = json2["head"] as? Dictionary<String, Any> {
            if let xvars = head["vars"] as! NSArray? {
                if let results = json2["results"] as? Dictionary<String, Any> {
                    if let bindings = results["bindings"] as! NSArray? {
                        if bindings.count > 0 {
                            for i in 0...(bindings.count-1) {
                                if let first_binding =
                                bindings[i] as? Dictionary<String,
                                Dictionary<String,String>> {
                                    var ret2 = Dictionary<String,String>();
                                    for key in xvars {
                                        let key2 : String = key as! String
                                        if let vals = (first_binding[key2]) {
                                            let vv : String = vals["value"] ?? "err2"
                                            ret2[key2] = vv } }
                                    if ret2.count > 0 {
                                        ret.insert(ret2)
                                    }}}}}}}}}
    return Array(ret) }

The file QueryCache.swift contains code written by Khoa Pham (MIT License) that can be found in the GitHub repository https://github.com/onmyway133/EasyStash. This file is used to cache SPARQL queries and the results. In testing this application I noticed that there were many repeated queries to DBPedia so I decided to cache results. Here is the simple API I added on top of Khoa Pham’s code:

//  Created by khoa on 27/05/2019.
//  Copyright © 2019 Khoa Pham. All rights reserved. MIT License.
//  https://github.com/onmyway133/EasyStash
//

import Foundation

//      Mark's simple wrapper:

var storage: Storage? = nil

public func cacheStoreQuery(key: String, value: String) {
    do { try storage?.save(object: value, forKey: key) } catch {}
}
public func cacheLookupQuery7(key: String) -> String? {
    // optional DEBUG code: clear cache
    //do { try storage?.removeAll() } catch { print("ERROR CLEARING CACHE") }
    do {
        return try storage?.load(forKey: key, as: String.self)
    } catch { return "" }
}

// remaining code not shown for brevity.
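
A short usage sketch for this wrapper (assuming the omitted setup code has initialized the storage variable):

// Store a query result, then read it back; a cache miss returns nil or "".
cacheStoreQuery(key: "some SPARQL query string", value: "{ ... JSON results ... }")
if let cached = cacheLookupQuery7(key: "some SPARQL query string"),
   cached.count > 0 {
    print("cache hit:", cached)
}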

The code in the file GenerateSparql.swift is used to generate queries for DBPedia. The line-wrapping for embedded SPARQL queries in the next code section is difficult to read so you may want to open the source file in Xcode. Please note that the KGN application prints out the SPARQL queries used to fetch information from DBPedia. The embedded SPARQL query templates used here have variable slots that are filled in at runtime to customize the queries.

//
//  GenerateSparql.swift
//  KGNbeta1
//
//  Created by Mark Watson on 2/28/20.
//  Copyright © 2021 Mark Watson. All rights reserved.
//

import Foundation

public func uri_to_display_text(uri: String) -> String {
    return uri.replacingOccurrences(of: "http://dbpedia.org/resource/Category/",
        with: "").
      replacingOccurrences(of: "http://dbpedia.org/resource/",
        with: "").
         replacingOccurrences(of: "_", with: " ")
}

public func get_SPARQL_for_finding_URIs_for_PERSON_NAME(nameString: String)
                                              -> String {
    return
        "# SPARQL to find all URIs for name: " +
        nameString + "\nSELECT DISTINCT ?person_uri ?comment {\n" +
        "  ?person_uri <http://xmlns.com/foaf/0.1/name> \"" +
        nameString + "\"@en .\n" +
        "  OPTIONAL { ?person_uri <http://www.w3.org/2000/01/rdf-schema#comment>\n" +
        "     ?comment . FILTER (lang(?comment) = 'en') } .\n" +
        "} LIMIT 10\n"
}

public func get_SPARQL_for_PERSON_URI(aURI: String) -> String {
    return
        "# <" + aURI + ">\nSELECT DISTINCT ?comment (GROUP_CONCAT(DISTINCT ?birthplace; SEPARATOR=' | ') AS ?birthplace)\n  (GROUP_CONCAT(DISTINCT ?almamater; SEPARATOR=' | ') AS ?almamater) (GROUP_CONCAT(DISTINCT ?spouse; SEPARATOR=' | ') AS ?spouse) {\n" +
        "  <" + aURI + "> <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment . FILTER  (lang(?comment) = 'en') .\n" +
        "  OPTIONAL { <" + aURI + "> <http://dbpedia.org/ontology/birthPlace> ?birthplace } .\n" +
        "  OPTIONAL { <" + aURI + "> <http://dbpedia.org/ontology/almaMater> ?almamater } .\n" +
        "  OPTIONAL { <" + aURI + "> <http://dbpedia.org/ontology/spouse> ?spouse } .\n" +
        "} LIMIT 5\n"
}

public func get_display_text_for_PERSON_URI(personURI: String) -> [String] {
    var ret: String = "\(uri_to_display_text(uri: personURI))\n\n"
    let person_details_sparql = get_SPARQL_for_PERSON_URI(aURI: personURI)
    let person_details = sparqlDbPedia(query: person_details_sparql)

    for pd in person_details {
        ret.append("\(pd["comment"] ?? "")\n\n")
        let subject_uris = pd["subject_uris"]
        let uri_list: [String] = subject_uris?.components(separatedBy: " | ") ?? []
        for u in uri_list {
            let subject = uri_to_display_text(uri: u)
            ret.append("\(subject)\n") }
        if let spouse = pd["spouse"] {
            if spouse.count > 0 {
                ret.append("Spouse: \(uri_to_display_text(uri: spouse))\n") } }
        if let almamater = pd["almamater"] {
            if almamater.count > 0 {
                ret.append("Almamater: \(uri_to_display_text(uri: almamater))\n") } }
        if let birthplace = pd["birthplace"] {
            if birthplace.count > 0 {
                ret.append("Birthplace: \(uri_to_display_text(uri: birthplace))\n") } }
    }
    return ["# SPARQL for a specific person:\n" + person_details_sparql, ret]
}

public func get_SPARQL_for_finding_URIs_for_PLACE_NAME(placeString: String)
                                               -> String {
    return
        "# " + placeString + "\nSELECT DISTINCT ?place_uri ?comment {\n" +
        "  ?place_uri rdfs:label \"" + placeString + "\"@en .\n" +
        "  ?place_uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Place> .\n" +
        "  OPTIONAL { ?place_uri <http://www.w3.org/2000/01/rdf-schema#comment>\n" +
        "     ?comment . FILTER (lang(?comment) = 'en') } .\n" +
        "} LIMIT 10\n"
}

public func get_SPARQL_for_PLACE_URI(aURI: String) -> String {
    return
        "# <" + aURI + ">\nSELECT DISTINCT ?comment (GROUP_CONCAT(DISTINCT ?subject_uris; SEPARATOR=' | ') AS ?subject_uris) {\n" +
        "  <" + aURI + "> <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment . FILTER  (lang(?comment) = 'en') .\n" +
        "  OPTIONAL { <" + aURI + "> <http://purl.org/dc/terms/subject> ?subject_uris } .\n" +
        "} LIMIT 5\n"
}

public func get_HTML_for_place_URI(placeURI: String) -> String {
    var ret: String = "<h2>" + placeURI + "</h2>\n"
    let place_details_sparql = get_SPARQL_for_PLACE_URI(aURI: placeURI)
    let place_details = sparqlDbPedia(query: place_details_sparql)

    for pd in place_details {
        ret.append("<p><strong>\(pd["comment"] ?? "")</strong></p>\n")
        let subject_uris = pd["subject_uris"]
        let uri_list: [String] = subject_uris?.components(separatedBy: " | ") ?? []
        ret.append("<ul>\n")
        for u in uri_list {
            let subject = u.replacingOccurrences(of: "http://dbpedia.org/resource/Category:",
              with: "").replacingOccurrences(of: "_", with: " ")
              .replacingOccurrences(of: "-", with: " ")
            ret.append("  <li>\(subject)</li>\n")
        }
        ret.append("</ul>\n")
    }
    return ret
}

public func get_SPARQL_for_finding_URIs_for_ORGANIZATION_NAME(orgString: String) -> String {
    return
        "# " + orgString + "\nSELECT DISTINCT ?org_uri ?comment {\n" +
        "  ?org_uri rdfs:label \"" + orgString + "\"@en .\n" +
        "  ?org_uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Organization> .\n" +
        "  OPTIONAL { ?org_uri <http://www.w3.org/2000/01/rdf-schema#comment>\n" +
        "     ?comment . FILTER (lang(?comment) = 'en') } .\n" +
        "} LIMIT 2\n"
}

The file AppSparql.swift contains more utility functions for getting entity and relationship data from DBPedia:

//  AppSparql.swift
//  Created by ML Watson on 7/18/21.

import Foundation

let detailSparql = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
select ?entity ?label ?description ?comment where {
    ?entity rdfs:label "<name>"@en .
    ?entity schema:description ?description . filter (lang(?description) = 'en') . filter(!regex(?description,"Wikimedia disambiguation page")) .
 } limit 5000
"""

let personSparql = """
  select ?uri ?comment {
      ?uri <http://xmlns.com/foaf/0.1/name> "<name>"@en .
      ?uri <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .
          FILTER  (lang(?comment) = 'en') .
  }
"""

let personDetailSparql = """
SELECT DISTINCT ?label ?comment
     (GROUP_CONCAT (DISTINCT ?birthplace; SEPARATOR=' | ') AS ?birthplace)
     (GROUP_CONCAT (DISTINCT ?almamater; SEPARATOR=' | ') AS ?almamater)
     (GROUP_CONCAT (DISTINCT ?spouse; SEPARATOR=' | ') AS ?spouse) {
       <name> <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .
       FILTER  (lang(?comment) = 'en') .
     OPTIONAL { <name> <http://dbpedia.org/ontology/birthPlace> ?birthplace } .
     OPTIONAL { <name> <http://dbpedia.org/ontology/almaMater> ?almamater } .
     OPTIONAL { <name> <http://dbpedia.org/ontology/spouse> ?spouse } .
     OPTIONAL { <name>  <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        FILTER  (lang(?label) = 'en') }
} LIMIT 10
"""

let placeSparql = """
SELECT DISTINCT ?uri ?comment WHERE {
   ?uri rdfs:label "<name>"@en .
   ?uri <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .
   FILTER (lang(?comment) = 'en') .
   ?place <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Place> .
} LIMIT 80
"""

let organizationSparql = """
SELECT DISTINCT ?uri ?comment WHERE {
   ?uri rdfs:label "<name>"@en .
   ?uri <http://www.w3.org/2000/01/rdf-schema#comment>  ?comment .
   FILTER (lang(?comment) = 'en') .
   ?uri <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Organization> .
} LIMIT 80
"""

func entityDetail(name: String) -> [Dictionary<String,String>] {
    var ret: [Dictionary<String,String>] = []
    let sparql = detailSparql.replacingOccurrences(of: "<name>", with: name)
    print(sparql)
    let r = sparqlDbPedia(query: sparql)
    r.forEach { result in
        print(result)
        ret.append(result)
    }
    return ret
}

func personDetail(name: String) -> [Dictionary<String,String>] {
    var ret: [Dictionary<String,String>] = []
    let sparql = personSparql.replacingOccurrences(of: "<name>", with: name)
    print(sparql)
    let r = sparqlDbPedia(query: sparql)
    r.forEach { result in
        print(result)
        ret.append(result)
    }
    return ret
}

func placeDetail(name: String) -> [Dictionary<String,String>] {
    var ret: [Dictionary<String,String>] = []
    let sparql = placeSparql.replacingOccurrences(of: "<name>", with: name)
    print(sparql)
    let r = sparqlDbPedia(query: sparql)
    r.forEach { result in
        print(result)
        ret.append(result)
    }
    return ret
}

func organizationDetail(name: String) -> [Dictionary<String,String>] {
    var ret: [Dictionary<String,String>] = []
    let sparql = organizationSparql.replacingOccurrences(of: "<name>", with: name)
    print(sparql)
    let r = sparqlDbPedia(query: sparql)
    r.forEach { result in
        print(result)
        ret.append(result)
    }
    return ret
}

public func processEntities(inputString: String) -> [(name: String, type: String, uri: String, comment: String)] {
    let entities = getEntities(text: inputString)
    var augmentedEntities: [(name: String, type: String, uri: String, comment: String)] = []
    for (entityName, entityType) in entities {
        print("** entityName:", entityName, "entityType:", entityType)
        if entityType == "PersonalName" {
            let data = personDetail(name: entityName)
            for d in data {
                augmentedEntities.append((name: entityName, type: entityType,
                    uri: "<" + d["uri"]! + ">", comment: "<" + d["comment"]! + ">"))
            }
        }
        if entityType == "OrganizationName" {
            let data = organizationDetail(name: entityName)
            for d in data {
                augmentedEntities.append((name: entityName, type: entityType,
                    uri: "<" + d["uri"]! + ">", comment: "<" + d["comment"]! + ">"))
            }
        }
        if entityType == "PlaceName" {
            let data = placeDetail(name: entityName)
            for d in data {
                augmentedEntities.append((name: entityName, type: entityType,
                    uri: "<" + d["uri"]! + ">", comment: "<" + d["comment"]! + ">"))
            }
        }
    }
    return augmentedEntities
}

extension Array where Element: Hashable {
    func uniqueValuesHelper() -> [Element] {
        var addedDict = [Element: Bool]()
        return filter { addedDict.updateValue(true, forKey: $0) == nil }
    }
    mutating func uniqueValues() {
        self = self.uniqueValuesHelper()
    }
}

func getAllRelationships(inputString: String) -> [String] {
    let augmentedEntities = processEntities(inputString: inputString)
    var relationshipTriples: [String] = []
    for ae1 in augmentedEntities {
        for ae2 in augmentedEntities {
            if ae1 != ae2 {
                let er1 = dbpediaGetRelationships(entity1Uri: ae1.uri,
                                                  entity2Uri: ae2.uri)
                relationshipTriples.append(contentsOf: er1)
                let er2 = dbpediaGetRelationships(entity1Uri: ae2.uri,
                                                  entity2Uri: ae1.uri)
                relationshipTriples.append(contentsOf: er2)
            }
        }
    }
    relationshipTriples.uniqueValues()
    relationshipTriples.sort()
    return relationshipTriples
}
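
A usage sketch for this pipeline (the input sentence is illustrative; the output depends on live DBPedia data):

// Find entities in the text, then query DBPedia for relationships
// between every pair of entity URIs that were found.
let triples = getAllRelationships(inputString:
    "Steve Jobs was a founder of Apple Inc. in California")
for triple in triples { print(triple) }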

AppleBERT

The files in the directory AppleBERT were copied from Apple’s example https://developer.apple.com/documentation/coreml/model_integration_samples/finding_answers_to_questions_in_a_text_document with a few changes to get returned results in a convenient format for this application. Apple’s BERT documentation is excellent and you should review it.

Relationships

The file Relationships.swift fetches relationship data for pairs of DBPedia entities. Note that the first SPARQL template has variable slots <e1> and <e2> that are replaced at runtime with the URIs of the two entities we are searching for relationships between:

// relationships between DBPedia entities

let relSparql =  """
SELECT DISTINCT ?p {<e1> ?p <e2> .FILTER (!regex(str(?p), 'wikiPage', 'i'))} LIMIT 5
"""

public func dbpediaGetRelationships(entity1Uri: String, entity2Uri: String)
                                      -> [String] {
    var ret: [String] = []
    let sparql1 = relSparql.replacingOccurrences(of: "<e1>",
      with: entity1Uri).replacingOccurrences(of: "<e2>",
        with: entity2Uri)
    let r1 = sparqlDbPedia(query: sparql1)
    r1.forEach { result in
        if let relName = result["p"] {
            let rdfStatement = entity1Uri + " <" + relName + "> " + entity2Uri + " ."
            print(rdfStatement)
            ret.append(rdfStatement)
        }
    }
    let sparql2 = relSparql.replacingOccurrences(of: "<e1>",
        with: entity2Uri).replacingOccurrences(of: "<e2>",
            with: entity1Uri)
    let r2 = sparqlDbPedia(query: sparql2)
    r2.forEach { result in
        if let relName = result["p"] {
            let rdfStatement = entity2Uri + " <" + relName + "> " + entity1Uri + " ."
            print(rdfStatement)
            ret.append(rdfStatement)
        }
    }
    return Array(Set(ret))
}

public func uriToPrintName(_ uri: String) -> String {
    let slashIndex = uri.lastIndex(of: "/")
    if slashIndex == nil { return uri }
    var s = uri[slashIndex!...]
    s = s.dropFirst()
    if s.count > 0 { s.removeLast() }  // drop the trailing ">" from the "<uri>" form
    return String(s).replacingOccurrences(of: "_", with: " ")
}

public func relationshipsToEnglish(rs: [String]) -> String {
    var lines: [String] = []
    for r in rs {
        let triples = r.split(separator: " ", maxSplits: 3,
            omittingEmptySubsequences: true)
        if triples.count > 2 {
            lines.append(uriToPrintName(String(triples[0])) + " " +
              uriToPrintName(String(triples[1])) + " " +
                uriToPrintName(String(triples[2])))
        } else {
            lines.append(r)
        }
    }
    let linesNoDuplicates = Set(lines)
    return linesNoDuplicates.joined(separator: "\n")
}

NLP

The file NlpWhiteboard.swift provides high-level NLP utility functions for the application:

//
//  NlpWhiteboard.swift
//  KGN
//
//  Copyright © 2021 Mark Watson. All rights reserved.
//

public struct NlpWhiteboard {

    var originalText: String = ""
    var people: [String] = []
    var places: [String] = []
    var organizations: [String] = []
    var sparql: String = ""

    init() { }

    mutating func set_text(originalText: String) {
        self.originalText = originalText
        let (people, places, organizations) = getAllEntities(text: originalText)
        self.people = people; self.places = places; self.organizations = organizations
    }

    mutating func query_to_choices(behindTheScenesSparqlText: inout String)
          -> [[[String]]] { // return inner: [comment, uri]
        var ret: Set<[[String]]> = []
        if people.count > 0 {
            for i in 0...(people.count - 1) {
                self.sparql =
                  get_SPARQL_for_finding_URIs_for_PERSON_NAME(nameString: people[i])
                behindTheScenesSparqlText += self.sparql
                let results = sparqlDbPedia(query: self.sparql)
                if results.count > 0 {
                    ret.insert( results.map { [($0["comment"] ?? ""),
                                               ($0["person_uri"] ?? "")] })
                }
            }
        }
        if organizations.count > 0 {
            for i in 0...(organizations.count - 1) {
                self.sparql = get_SPARQL_for_finding_URIs_for_ORGANIZATION_NAME(
                    orgString: organizations[i])
                behindTheScenesSparqlText += self.sparql
                let results = sparqlDbPedia(query: self.sparql)
                if results.count > 0 {
                    ret.insert(results.map { [($0["comment"] ?? ""),
                                              ($0["org_uri"] ?? "")] })
                }
            }
        }
        if places.count > 0 {
            for i in 0...(places.count - 1) {
                self.sparql = get_SPARQL_for_finding_URIs_for_PLACE_NAME(
                    placeString: places[i])
                behindTheScenesSparqlText += self.sparql
                let results = sparqlDbPedia(query: self.sparql)
                if results.count > 0 {
                    ret.insert( results.map { [($0["comment"] ?? ""),
                                               ($0["place_uri"] ?? "")] })
                }
            }
        }
        return Array(ret)
    }
}
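
A usage sketch (illustrative input; results come from live DBPedia queries):

var whiteboard = NlpWhiteboard()
whiteboard.set_text(originalText: "Steve Jobs worked at Apple Computer")
var sparqlLog = ""
// Each inner element is a [comment, uri] pair for the user to choose from:
let choices = whiteboard.query_to_choices(behindTheScenesSparqlText: &sparqlLog)
print(choices)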

The file NLPutils.swift provides lower level NLP utilities:

//  NLPutils.swift
//  KGN
//
//  Copyright © 2021 Mark Watson. All rights reserved.
//

import Foundation
import NaturalLanguage

public func getPersonDescription(personName: String) -> [String] {
    let sparql = get_SPARQL_for_finding_URIs_for_PERSON_NAME(nameString: personName)
    let results = sparqlDbPedia(query: sparql)
    return [sparql, results.map {
      ($0["comment"] ?? $0["abstract"] ?? "") }.joined(separator: " . ")]
}

public func getPlaceDescription(placeName: String) -> [String] {
    let sparql = get_SPARQL_for_finding_URIs_for_PLACE_NAME(placeString: placeName)
    let results = sparqlDbPedia(query: sparql)
    return [sparql, results.map { ($0["comment"] ??
        $0["abstract"] ?? "") }.joined(separator: " . ")]
}

public func getOrganizationDescription(organizationName: String) -> [String] {
    let sparql = get_SPARQL_for_finding_URIs_for_ORGANIZATION_NAME(
        orgString: organizationName)
    let results = sparqlDbPedia(query: sparql)
    print("=== getOrganizationDescription results =\n", results)
    return [sparql, results.map { ($0["comment"] ?? $0["abstract"] ?? "") }
        .joined(separator: " . ")]
}

let tokenizer = NLTokenizer(unit: .word)
let tagger = NSLinguisticTagger(tagSchemes:[.tokenType, .language, .lexicalClass,
  .nameType, .lemma], options: 0)
let options: NSLinguisticTagger.Options =
    [.omitPunctuation, .omitWhitespace, .joinNames]

let tokenizerOptions: NSLinguisticTagger.Options =
    [.omitPunctuation, .omitWhitespace, .joinNames]

public func getEntities(text: String) -> [(String, String)] {
    var words: [(String, String)] = []
    tagger.string = text
    let range = NSRange(location: 0, length: text.utf16.count)
    tagger.enumerateTags(in: range, unit: .word,
        scheme: .nameType, options: options) { tag, tokenRange, stop in
        let word = (text as NSString).substring(with: tokenRange)
        let tagType = tag?.rawValue ?? "unknown"
        if tagType != "unknown" && tagType != "OtherWord" {
            words.append((word, tagType))
        }
    }
    return words
}

public func tokenizeText(text: String) -> [String] {
    var tokens: [String] = []
    tokenizer.string = text
    tokenizer.enumerateTokens(in: text.startIndex..<text.endIndex) { tokenRange, _ in
        tokens.append(String(text[tokenRange]))
        return true
    }
    return tokens
}

let entityTagger = NLTagger(tagSchemes: [.nameType])
let entityOptions: NLTagger.Options = [.omitPunctuation, .omitWhitespace, .joinNames]
let entityTagTypes: [NLTag] = [.personalName, .placeName, .organizationName]

public func getAllEntities(text: String) -> ([String],[String],[String]) {
    var words: [(String, String)] = []
    var people: [String] = []
    var places: [String] = []
    var organizations: [String] = []
    entityTagger.string = text
    entityTagger.enumerateTags(in: text.startIndex..<text.endIndex, unit: .word,
        scheme: .nameType, options: entityOptions) { tag, tokenRange in
        if let tag = tag, entityTagTypes.contains(tag) {
            let word = String(text[tokenRange])
            if tag.rawValue == "PersonalName" {
                people.append(word)
            } else if tag.rawValue == "PlaceName" {
                places.append(word)
            } else if tag.rawValue == "OrganizationName" {
                organizations.append(word)
            } else {
                print("\nERROR: unknown entity type: |\(tag.rawValue)|")
            }
            words.append((word, tag.rawValue))
        }
        return true
    }
    return (people, places, organizations)
}

func splitLongStrings(_ s: String, limit: Int) -> String {
    var ret: [String] = []
    let tokens = s.split(separator: " ")
    var subLine = ""
    for token in tokens {
        if subLine.count > limit {
            ret.append(subLine)
            subLine = ""
        } else {
            subLine = subLine + " " + token
        }
    }
    if subLine.count > 0 {
        ret.append(subLine)
    }
    return ret.joined(separator: "\n")
}
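
A usage sketch for the entity extraction utilities (output depends on Apple’s pre-trained tagger models; the input sentence is illustrative):

let (people, places, organizations) =
    getAllEntities(text: "Bill Gates visited IBM in San Francisco")
print("people:", people)
print("places:", places)
print("organizations:", organizations)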

Views

This is not a book about SwiftUI programming, and indeed I expect many of you dear readers know much more about UI development with SwiftUI than I do. I am not going to list the four view files:

  • MainView.swift
  • QueryView.swift
  • AboutView.swift
  • InfoView.swift

Main KGN

The top level app code in the file KGNApp.swift is fairly simple. I hardcoded the window size for macOS; the window sizes for running this example on iPadOS or iOS are commented out:

import SwiftUI

@main
struct KGNApp: App {
    var body: some Scene {
        WindowGroup {
          MainView()
            .frame(width: 1200, height: 770)    // macOS window size
            //.frame(width: 660, height: 770)   // iPadOS window size
            //.frame(width: 500, height: 800)   // iOS window size
        }
    }
}

I was impressed by the SwiftUI framework. Applications are fairly portable across macOS, iOS, and iPadOS. I am not a UI developer by profession (as this application shows) but I enjoyed learning just enough about SwiftUI to write this example application.

Book Wrap Up

I hope that you, dear reader, enjoyed this short book. While I enjoy programming in Swift and appreciate how well Apple has integrated machine learning capabilities into their iOS/iPadOS/macOS ecosystems, I still find myself writing most of my experimental code in Lisp languages and using Python for deep learning experiments and projects. That said, I am very happy that I have done the work to add Swift, CoreML, and SwiftUI to my personal programming tool belt.

I usually update my eBooks so if there is some topic or application domain that you would like added to future versions of this book, then please let me know. My email address is markw <at> markwatson <dot> com.