Safe For Humans AI
Mark Watson

The goal of this short book is to promote a “humans-first” approach to designing and building AI systems. We address privacy concerns and look at strategies for avoiding lock-in to any single Large Language Model (LLM) API vendor. Tactically we do this using a Python wrapper library that lets you, dear reader, swap API service backends from OpenAI and Hugging Face, as well as run open models locally in your applications.

We will use the llmlib library that I wrote for this backend LLM selection flexibility.
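
I won’t reproduce the llmlib API here; the underlying pattern, sketched below with hypothetical class names (not the actual llmlib interface), is that application code depends only on an abstract completion interface while the concrete backend (OpenAI, Hugging Face, or a local model) is selected by configuration:

```python
# A minimal sketch of the backend-swapping pattern. The class and
# function names are illustrative, not the real llmlib API.
from abc import ABC, abstractmethod

class CompletionBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class LocalBackend(CompletionBackend):
    def complete(self, prompt: str) -> str:
        # A real implementation would call a locally hosted model here.
        return f"[local completion for: {prompt}]"

class OpenAIBackend(CompletionBackend):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[OpenAI completion for: {prompt}]"

def get_backend(name: str) -> CompletionBackend:
    backends = {"local": LocalBackend, "openai": OpenAIBackend}
    return backends[name]()

# Application code never names a specific vendor:
backend = get_backend("local")
print(backend.complete("Summarize AI safety in one sentence."))
```

Because only the configuration string changes, moving an application from a commercial API to a self-hosted model requires no changes to application logic.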

We don’t cover classic AI techniques (symbolic AI, simple machine learning) in this book because I feel that the threat to our collective privacy and control over our data and systems comes mostly from LLM-based systems.

We take a dive into both general architectures for safe, human-first AI and concrete examples of building applications with LLMs using both prompt engineering and fine-tuning. While I favor self-hosted LLMs for the book’s technical examples, most can also be used with the OpenAI and Hugging Face APIs.

I have been working in the field of artificial intelligence since 1982. For forty years my main career goal was to keep current with AI and apply techniques like expert systems, neural networks, symbolic natural language processing (NLP), deep learning, and chained large language models (LLMs) to solve practical problems.

In recent years my personal interests have drifted to include the impact of AI on society. This book is intended to be “food for thought” for you, dear reader, as you combine work using AI technology with your desires to:

  • Design and build “human first” systems.
  • Work on projects that have a positive impact on society.
  • Respect the rights of users of the systems you build in the areas of privacy, data security, and reduction of false information.
  • Develop a personal technology stack that lets you use AI techniques, including chained LLMs, on your own hardware or on secure compute leased in the cloud.

Part I of this book is largely non-technical. I am trying to set a mental framework for us to discuss, agree, and disagree on the tradeoffs between making maximum use of AI and maintaining human values and fairness in the systems we help build and the platforms we choose to use in our personal lives and for work.

Part II of this book contains the fun technical material. We will revisit the material in my last book LangChain and LlamaIndex Projects Lab Book: Hooking Large Language Models Up to the Real World, which can be read free online. In my last book I used OpenAI’s GPT APIs and other cloud services to process private data using public platforms and services. In Part II we will reimplement many of these ideas using smaller open LLMs running both on our own computers and on secure leased cloud compute. My desire is that the technical material in my last book and Part II of this book will be complementary.

Part III of this book questions what the future may hold for us and provides ideas for dealing with privacy and the exponentially accelerating efficiency driven by AI in general, and specifically by chained LLMs working with external systems and tooling to solve practical problems. Part III also contains my suggestions for AI-safety-respecting architectures for use in education and in small and medium-sized businesses.

Technology is changing rapidly. I had to release four major updates to my last book in the first four months the book was published as an eBook. I expect the material in this book to also evolve as technologies change.

Please note that I usually update the code in the book’s examples repository on GitHub fairly frequently for library version updates, etc.

Running Software Examples

In order to run many of the examples in this book, you will need a GPU with 20 to 48 GB of memory. I used a leased server on Lambda Labs with an Nvidia A6000 (48 GB) for development. This server costs $0.80/hour to run (charged by the minute). I tend to develop in two-hour sprints, so my dev cost for a sprint is $1.60, and it takes me about two minutes to set up a server.

I also use Google Colab for experimenting with deep learning and Large Language Models (LLMs).

Appendix A contains setup advice for using Colab and Appendix B contains setup advice for using Lambda Labs.

Technology Stack for Development of the Book Examples

We require two types of Large Language Models (LLMs):

  • Embeddings: given any amount of text, calculate vector embeddings.
  • Text completion: given a prompt, generate text using an instruction fine-tuned model.

We use the FastChat platform, which provides both an interactive command line chat interface and an API service that is compatible with the OpenAI API. We use stubs so that during development you can quickly switch between these options.
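
Because FastChat serves an OpenAI-compatible REST API, a development stub for switching backends can be as simple as swapping a base URL and model name. The URL and model names below are examples, not fixed values; they depend on how you launch your own server:

```python
# A development stub that selects an OpenAI-compatible endpoint.
# FastChat's API server typically listens on a local port; the exact
# port and model name depend on how you started it.

def llm_client_config(backend: str) -> dict:
    if backend == "openai":
        # The hosted service; requires an OPENAI_API_KEY.
        return {"api_base": "https://api.openai.com/v1",
                "model": "gpt-3.5-turbo"}
    elif backend == "fastchat":
        # A self-hosted FastChat server with its OpenAI-compatible API.
        return {"api_base": "http://localhost:8000/v1",
                "model": "vicuna-7b-v1.5"}  # whichever model you serve
    raise ValueError(f"unknown backend: {backend}")

config = llm_client_config("fastchat")
print(config["api_base"])
```

Any OpenAI-style client library can then be pointed at `config["api_base"]`, so the rest of the application code is identical for both backends.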

We also use the LangChain library and the LlamaIndex library.

We will use this technology stack to implement the book examples that are intended as a jumping off point for you, dear reader, to develop your own AI tools for both work and for fun.

Author’s Use of Large Language Models for Writing and Coding

When I write code I always have GitHub Copilot active in both Emacs and VSCode. I often use Google Colab for writing Python machine learning, deep learning, and LLM code. I run Colab with Google’s Colab AI enabled, which does code completions and suggestions. As a result of using these tools, some of the code that I now write is auto-generated code that I may modify and then test.

Similarly, when I write book manuscripts, text in Google Docs, or documents in Microsoft 365 applications, I also use some auto-generated text created by LLMs. While writing a book manuscript I usually just write markdown text in iA Writer, and that is all my own work. I also sometimes write a few paragraphs in iA Writer and then paste this text into ChatGPT, Microsoft Bing Chat, or Google Bard using a prompt like:

    I am writing a chapter in a book. Concentrating on the following points and context:

    << describe key points and context >>

    Use my following text I wrote to get started writing 3 paragraphs:

    << paste in my text from my iA Writer editor >>

If I like the generated text, I copy it back into iA Writer and heavily edit it to match my writing style and to remove/add material as I see fit.

The “Big Project Example” in this Book

All of my recent eBooks are released under a Creative Commons License. I will start by extracting a few hundred pages of text from a few of them for use in implementing two examples:

  • Prompt Engineering: I will break the input text into many hundreds of text-chunks and use an embedding LLM to calculate vector embeddings for each chunk. I will then build an application that uses the embedding search index and a text completion LLM to implement an interactive chat application for my book material.
  • Fine-Tuning: I will use the input text from my books to fine-tune an existing instruction tuned LLM and use this modified model in a chat application for my book material.
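
The retrieval half of the prompt engineering example can be sketched end to end in a few lines. The bag-of-words “embedding” below is a toy stand-in for a real embedding LLM, but the chunk, index, and cosine-similarity search steps are the same:

```python
# Toy sketch: chunk text, "embed" each chunk, retrieve the chunk most
# similar to a query. A real application would call an embedding model
# instead of the bag-of-words stand-in used here.
import math
from collections import Counter

def chunk_text(text: str, chunk_size: int = 40) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def embed(text: str) -> Counter:
    # Stand-in embedding: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

book_text = ("LLMs can be fine tuned on custom data. " * 10 +
             "Embeddings enable semantic search over documents. " * 10)
chunks = chunk_text(book_text)
index = [(chunk, embed(chunk)) for chunk in chunks]

query = embed("how do embeddings support search")
best_chunk = max(index, key=lambda pair: cosine(query, pair[1]))[0]
# best_chunk is then inserted into a completion prompt as context.
```

In the real example the chunks come from my book texts, the embeddings come from an embedding LLM, and the best-matching chunks are pasted into the completion prompt before the user’s question.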

About the Author

I have written over 20 books, I have over 50 US patents, and I have worked at interesting companies like Google, Capital One, SAIC, and many others. You can find links for reading most of my recent books, and the roughly 2000 blog articles I have written over the last 24 years, free on my web site. If I had to summarize my career, the short take would be that I have had a lot of fun and enjoyed my work. I hope that what you learn here will be both enjoyable and helpful in your work.

If you would like to support my work please consider purchasing my books on Leanpub and star my git repositories that you find useful on GitHub. You can also interact with me on social media on Mastodon and Twitter. I am also available as a consultant.

Book Cover

I used Midjourney to create the cover art.


This picture shows me and my wife Carol who helps me with book production and editing.

Mark and Carol Watson

I would also like to thank the following readers who reported errors or typos in this book: none so far.

Part I - Risks Using AI

Here we explore a mental framework for discussing the tradeoffs between making maximum use of AI and maintaining human values and fairness in the systems we help build and the platforms we choose to use in our personal lives and for work.

Road Map for Our Journey to Safe AI

In preparing the technical code examples for this book I evaluated several dimensions for implementing safe and private systems. Roughly, these dimensions are:

  • Using closed models like OpenAI ChatGPT on public cloud servers.
  • Using open models such as those distributed and maintained by Hugging Face on public cloud servers.
  • Using open models by Hugging Face on private cloud servers, such as those provided by Lambda Labs.
  • Using open models by Hugging Face on your own PC or your own cloud server.

This is assuming that you are using LLMs to build your AI systems. If you are using other techniques like constraint satisfaction (e.g., MiniZinc) or conventional machine learning models (e.g., using Scikit-learn), then you can run on local hardware.

Your Requirements for Privacy and AI Safety vs. Making Maximum Business Use of LLMs

The rapid advancement of large language models (LLMs) in artificial intelligence (AI) has led to an array of opportunities for both the business world and academia. Their potential to revolutionize industries and research fields is remarkable, given their abilities in language translation, creative writing, and information extraction, among other tasks. However, it’s crucial to note that alongside these benefits, there are substantial challenges tied to privacy and AI safety that need to be addressed. Balancing these concerns with the desire to maximize the use of LLMs is not only a technical challenge but also an ethical one.

Privacy is one of the top concerns when dealing with LLMs. These models are typically trained on vast amounts of data, often containing personal or sensitive information. While they do not specifically remember this data, there’s a slight risk of inadvertent disclosure of private information through their responses. Both students and knowledge workers (e.g., software developers, business analysts, etc.), particularly those involved in research and development, need to be aware of these risks, and business leaders should ensure that privacy policies are in place when implementing LLMs. Privacy-enhancing technologies like differential privacy can be instrumental in mitigating these risks.
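
To make the differential privacy idea concrete, here is a minimal sketch (not production code) of the Laplace mechanism: calibrated noise is added to an aggregate statistic so that any single individual’s presence in the data has a provably bounded effect on the published result.

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# Counting queries have sensitivity 1: adding or removing one person
# changes the true count by at most 1, so noise drawn from
# Laplace(1/epsilon) gives epsilon-differential privacy for the count.
import math
import random

def dp_count(values: list[int], epsilon: float, seed: int = 0) -> float:
    rng = random.Random(seed)  # fixed seed here only for reproducibility
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise via the inverse transform method.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return len(values) + noise

# The published count is close to, but not exactly, the true count of 5:
print(dp_count([1, 1, 0, 1, 1], epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; real deployments also track the cumulative privacy budget across repeated queries.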

AI safety is another significant consideration. A poorly designed or misused LLM can lead to harmful outcomes, from the propagation of misinformation to the reinforcement of biased views. Both students and business leaders need to understand these risks and be committed to reducing them. This might involve careful testing and auditing of models, investing in research to understand and counteract biases, and establishing clear usage guidelines.

Despite these challenges, it is important to recognize the potential benefits of LLMs. They can assist businesses in providing better customer service, streamline workflows, and enable the development of innovative products and services. In academia, they can accelerate research, provide tutoring, and serve as invaluable tools for learning and discovery. Balancing privacy and safety concerns with these benefits is key. It involves a commitment to ethical practices, ongoing learning, and a willingness to engage with the complex issues that LLMs present. With careful management, the power of LLMs can be harnessed responsibly, for the benefit of all.

A Prompt Template For Evaluating Safety vs. Opportunity Tradeoffs

Here is a sample template:

    You are an analyst evaluating AI safety vs. opportunity tradeoffs at your company.

    You take into account:

    1. safety of customer data
    2. operational costs

    You want to (choose 1 of the following):

    1. use commercial LLM APIs from vendors like OpenAI and Hugging Face.
    2. self-host open models on either your own servers or leased GPU servers.

    Your choice:

When I add “Your choice: self-host open models on leased GPU servers” to the template and run it with ChatGPT, the output looks like:

    Choosing to self-host open models on leased GPU servers has its own set of pros and cons when considering AI safety and operational costs.

    Safety of Customer Data: When you self-host models, you have complete control over your data. You can implement the security measures that best fit your company's and customers' needs. This can enhance the safety of your customer data as it never has to leave your control. However, it's crucial to keep in mind that this also means the responsibility of maintaining data security falls solely on your organization. You'll need to ensure that your security measures are robust and up-to-date.

    Operational Costs: Leasing GPU servers can have significant operational costs, particularly if you have heavy computational needs. The cost of the lease, combined with maintenance, security, and potentially hiring staff with the necessary expertise, can add up. On the flip side, you could potentially save on costs that would be spent on commercial API calls if your usage is high enough.

    In contrast, using commercial LLM APIs from vendors like OpenAI and Hugging Face means you would not have to worry about server maintenance and security, as these responsibilities are taken care of by the service provider. However, these services have costs that scale with usage, which can become substantial for large-scale applications.

    In summary, self-hosting open models on leased GPU servers can offer you greater control and potentially save on costs if your usage is high. However, it also brings additional responsibilities and potential costs related to server management and data security. It's crucial to thoroughly evaluate your organization's specific needs, resources, and risk tolerance before making a decision.

In writing and analyzing business and technology plans, it is effective to start with a team meeting to write down requirements and objectives, possible risks, and technology suggestions, and then convert the discussion into a long prompt. The original team meeting notes and the LLM output can then be used to write a road map report.


AI Safety, Risks, and Mitigations

Here we discuss AI safety, risks, and mitigations. This is an important and complex topic, so it’s great to see wide public interest in it.

To start with, AI safety refers to the field of study concerned with ensuring that artificial intelligence and machine learning technologies are developed and implemented in a manner that is beneficial to humanity and does not pose unreasonable risks to human values, autonomy, or well-being.

Don’t Throw the Baby Out With the Bath Water

As with most controversial subjects, we look at both sides of the argument. In setting laws and informal guidelines for AI research and deployment, it is important not to lose sight of the advantages that AI will provide society:

  • Reduction in Human Error: One of the biggest advantages of artificial intelligence is that it can significantly reduce errors and increase accuracy and precision. The decisions taken by AI at every step are based on previously gathered information and a certain set of algorithms. When programmed properly, these errors can be nearly eliminated.
  • Increased Efficiency: AI can drive down the time taken to perform a task, enable multi-tasking, and ease the workload for existing resources. It enables the execution of hitherto complex tasks without significant cost outlays.
  • 24x7 Availability: AI operates 24x7 without interruption or breaks and has no downtime.

Wearing “rose colored glasses” we can imagine more far reaching advantages:

  • In the medical field, AI can help solve several healthcare challenges, including administrative workflow automation, virtual nurses, robot-assisted surgery, diagnosis aid, and health monitoring. Soon, AI will be able to predict an individual’s risk of certain diseases and suggest preventative measures. Deep learning models like Alpha Fold have already accelerated medical and biological research.
  • In terms of environmental challenges, AI can help reduce errors and increase efficiency in processes, reducing energy use. It can also monitor raw materials use and create opportunities to use less. AI can play a role in tackling environmental challenges from designing more energy-efficient buildings to monitoring deforestation to optimizing renewable energy deployment.
  • AI can also help address social and political problems. Some solutions to address AI’s anticipated negative impacts include improving collaboration across borders and stakeholder groups, developing policies to assure that development of AI will be directed at augmenting humans and the common good, and shifting the priorities of economic, political and education systems to empower individuals and countries to thrive.

These advantages are just a few examples of how AI can benefit society. In the next section we return to the broad subject of AI safety.

Government Control of AI and International Norms

Billionaire investor Vinod Khosla makes strong arguments against international agreements limiting and controlling AI development and deployment. He argues that while run-away AI poses some small risk, this risk is less than that of an asteroid hitting the earth, and much less than that of an economic (and perhaps military) adversary like China winning the AI race. On Kara Swisher’s podcast, Khosla argues that while international regulation of nuclear weapons and biowarfare can be effective, it is not possible to monitor the use of AI for producing propaganda or changing public opinion on a global scale; these and other applications of AI are difficult to detect and monitor.

As I write this chapter, the US Congress is holding hearings on the national-level regulation of AI. I am personally against this; you, dear reader, might have different opinions. I don’t like the idea of regulations that entrench large corporate incumbents like Microsoft, Google, Facebook, OpenAI, etc. It is an uphill battle, but let’s at least try to have a level playing field.

Key Aspects of AI Safety

There are several key aspects of AI safety, including:

  • Robustness: AI systems should continue to operate correctly even under novel conditions or when facing adversarial attacks. They should be robust to changes in their environment and not break in unexpected ways.

  • Interpretability: It’s crucial that we can understand and interpret what an AI is doing and why. This helps in diagnosing issues, improving the AI, and building trust in its decisions.

  • Alignment: AI systems should be designed to understand and respect human values and goals, even as they learn and evolve. This is known as the problem of value alignment.

  • Fairness: AI systems should not perpetuate or amplify existing biases. They should treat all individuals and groups in a fair and unbiased way.

Risks associated with AI primarily revolve around its misuse, the amplification of existing societal biases, decision-making transparency, and the concept of an intelligence explosion or singularity. Mitigation strategies involve careful design, ongoing monitoring, regulation, and public dialogue.


We will also consider the following risks in using LLMs:

  • Leaking customer data.
  • Inaccurate results from LLMs.
  • Legal exposure to the use of possibly private data used to train LLMs.
  • Protecting your business processes from competition.
  • Building a system that does not respect the best interests of its users.

Leaking Customer Data

Leaking of customer data is a serious concern in the field of AI, as well as in the broader context of information technology and data privacy. This isn’t a risk unique to AI, but AI systems can potentially exacerbate the issue if they are not designed and used responsibly.

There are a few ways in which AI systems might contribute to the risk of leaking customer data:

  • Data storage and handling: AI models are trained on large datasets, which might include sensitive customer information. If this data is not stored and handled securely, it could be at risk of being accessed or stolen by malicious actors.
  • Model inversion attacks: This is a specific type of attack where an adversary tries to recover sensitive data from the outputs of an AI model. For example, if a model was trained on medical data, an attacker might be able to input synthetic data to the model and infer sensitive information about individuals from the model’s outputs.
  • Membership inference attacks: This is another type of attack where an adversary attempts to determine if a specific data record was part of the training dataset. If successful, it could reveal that a specific individual’s data was used in the training of the model.

To mitigate these risks, several strategies are commonly employed:

  • Data anonymization and pseudonymization: Removing personally identifiable information from datasets can help protect individual privacy. However, this alone is not enough as sophisticated techniques can often re-identify data.
  • Differential privacy: This is a mathematical framework for quantifying the privacy leakage of a system. It provides a way to train AI models while ensuring that the output does not reveal specific information about the individuals in the training dataset.
  • Secure multi-party computation and homomorphic encryption: These are cryptographic techniques that allow AI models to be trained on encrypted data, providing a high level of data security.
  • Federated learning: This is a machine learning approach where the model is trained across multiple devices or servers holding local data samples, without exchanging the data itself. This can help protect sensitive data while still allowing for AI model training.
  • Regular audits and security measures: Regular security audits can help identify and fix potential security vulnerabilities. Additionally, best practices like least privilege access, robust authentication methods, and strong encryption should be used when storing and handling data. However, despite these measures, the risk can never be completely eliminated. Therefore, transparency about data usage and robust legal frameworks to protect individuals’ data is crucial.
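
As one small, concrete illustration of the pseudonymization step above (a sketch only; key management and re-identification risk still need real engineering), direct identifiers can be replaced with keyed HMAC digests before data is used for analysis or model training:

```python
# Keyed pseudonymization: replace direct identifiers with HMAC digests.
# Unlike a plain hash, the secret key prevents simple dictionary attacks
# on common identifiers, but the key must itself be stored securely, and
# pseudonymized records can still be re-identifiable via other fields.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder key

def pseudonymize(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # shortened token for readability

record = {"name": "Alice Example", "email": "alice@example.com", "age": 42}
safe_record = {"user": pseudonymize(record["email"]), "age": record["age"]}
print(safe_record)
```

The same identifier always maps to the same token, so joins across datasets still work, which is exactly why pseudonymization alone is weaker than the differential privacy and federated learning techniques listed above.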

Inaccurate Results from LLMs

LLMs like ChatGPT can sometimes produce inaccurate or misleading results. There are several reasons for this:

  • Training Data Limitations: LLMs are trained on a vast amount of text data from the internet. However, the internet is full of both accurate information and misinformation. The models do not have a way of distinguishing truth from falsehood in their training data.
  • Lack of World Knowledge: LLMs don’t have real-world knowledge or experiences. They don’t “know” facts in the way humans do, but rather generate responses based on patterns they’ve learned from their training data. Their understanding is limited to the information available up to their training cut-off (for example, the version of ChatGPT available as I write this was trained on data up to September 2021).
  • Inference Errors: Even when the training data is correct and comprehensive, LLMs can still make mistakes during the inference process (i.e., when generating responses). This could be due to the inherent complexity of language and the many ambiguities and nuances it contains.

Mitigation strategies for these risks include:

  • Model Improvements: Continuous research and development can help improve the accuracy and reliability of LLMs. This could involve better training techniques, improved architectures, or more diverse and high-quality training data.
  • User Education: It’s crucial for users to understand the limitations of LLMs and to not rely on them for critical or sensitive decisions without further verification.
  • External Fact-Checking: In some cases, it may be beneficial to pair LLMs with external databases or fact-checking services to improve their accuracy. However, this comes with its own set of challenges, including ensuring the accuracy and impartiality of the fact-checking source itself.
  • Feedback Mechanisms: Allowing users to flag and report inaccurate or misleading responses can help improve the system over time.

Remember, while LLMs can be useful tools, they should be used as a part of a larger toolkit, and their outputs should be critically evaluated and cross-checked with other reliable sources.

Legal Exposure from Private Data Used to Train LLMs

The use of private data to train LLMs can indeed lead to legal exposure and ethical concerns. Privacy laws vary by country and region, but globally there is a growing emphasis on the protection of personal data.

The most significant legal framework for data protection in the context of AI training is the General Data Protection Regulation (GDPR) in the European Union, but many countries have similar regulations. The GDPR specifies that personal data must be processed lawfully, fairly, and in a transparent manner. It also introduces the concepts of “data minimization” and “purpose limitation”, meaning that only the minimum necessary data should be collected for the specific purpose stated, and not used beyond that purpose.

Here are the key points of concern when it comes to using private data for training LLMs:

  • Informed Consent: If private data is used, the individuals from whom the data is collected should give informed consent. This means they should be fully aware of how their data will be used, and agree to it. Using data without informed consent could lead to legal consequences.
  • Anonymization and De-identification: Even with consent, the data used for training should ideally be anonymized or de-identified to protect the privacy of individuals. However, it’s important to note that anonymization is not always foolproof, and sophisticated techniques can sometimes re-identify anonymized data.
  • Data Minimization and Purpose Limitation: As per GDPR and similar regulations, only the minimum necessary data should be used, and it should not be used beyond the stated purpose. This is particularly relevant for LLMs, which are often trained on large amounts of data.
  • Potential for Privacy Breaches: There’s a risk that LLMs could inadvertently reveal private information used in their training. For example, if an LLM was trained on a dataset that included private conversations, it might generate text that closely resembles those conversations, potentially exposing private information.

To mitigate these legal risks, companies and researchers need to follow best practices for data handling and privacy, including obtaining informed consent, anonymizing data, regularly auditing their data practices, and ensuring their methods are in line with the latest regulations and ethical guidelines. They may also need to explore advanced techniques like differential privacy, federated learning, and secure multi-party computation, which can provide additional layers of privacy protection.

Overall, the use of private data in AI training is a complex issue that needs to be navigated with care. It requires a balance between developing effective AI models and respecting individual privacy rights.

Protecting Your Business Processes from Competition

Protecting your business processes from competition is a significant concern, particularly in the age of AI and digital technology. Here are a few strategies that you can employ:

  • Intellectual Property Protection: If your business processes involve unique methodologies or technologies, consider seeking intellectual property protection. This could include patents, copyrights, or trade secrets. It’s important to consult with a legal expert to understand the best options for your specific situation.
  • Non-Disclosure Agreements (NDAs): If you’re sharing sensitive information with employees, contractors, partners, or investors, make sure they sign NDAs. This legally prevents them from disclosing your confidential information to others.
  • Secure Your Data: Make sure all your business data is stored securely, with robust encryption and regular backups. Limit access to sensitive data to only those who need it, and use strong authentication methods to prevent unauthorized access.
  • Employee Training: Ensure your employees are well-trained in data security and understand the importance of keeping business processes confidential. This includes awareness of common security threats like phishing and social engineering.
  • Monitor AI Usage: If you’re using AI technologies, it’s important to monitor their usage and outputs. AI models can sometimes inadvertently reveal sensitive information, particularly if they’re trained on sensitive data. Use privacy-preserving techniques like differential privacy or federated learning to reduce this risk.
  • Competitive Intelligence: Keep an eye on what your competitors are doing. While you don’t want to copy them, understanding their strategies can help you stay one step ahead.
  • Continual Innovation: One of the best ways to stay ahead of the competition is through continual innovation. Keep improving your processes, products, or services to provide better value to your customers.

Remember, while competition can be challenging, it can also drive innovation and growth. The key is to find a balance between protecting your business processes and staying open to new ideas and opportunities.

AI Weapons

There is a general worldwide consensus, at least among private citizens, that autonomous AI weapon systems are a very bad thing. However, as we well know, when there is profit to be made, people all too often set aside their moral principles.

It has been widely covered in the news that employees at Google, Microsoft, and other companies have had internal discussions and protests over their employers accepting defense contracts. For example, in 2018, Google employees protested the company’s involvement in Project Maven, a Pentagon program that uses AI to analyze drone footage. In 2019, Microsoft employees protested the company’s $480 million contract with the US Army to develop augmented reality headsets for use in combat. These protests show that employees are willing to take a stand against the development of AI weapons and the use of AI in warfare.

It is important to note that while there is a general consensus against autonomous AI weapon systems among private citizens and some employees of tech companies, there are still many governments and military organizations that are actively developing and deploying these systems.

It might be reasonable to discuss stationary defensive-only AI weapons separately from mobile AI weapons that are intended for offensive use. It is my personal opinion that the first category, stationary defensive-only weapons, might be ethical in specific circumstances.

The development of AI weapons is a complex issue that requires careful consideration of the ethical implications of such technology. One of the ethical concerns is lethal autonomous weapon systems, which can autonomously sense the environment, identify a target, and make the determination to engage without human input. The increasing development and use of lethal autonomous weapon systems (LAWS) have led to ethical concerns over an AI arms race increasing current tensions between militaries, and over the possibility of proliferation by malicious non-state actors.

As militaries around the world race to build robotic systems with increasing autonomy, difficult questions arise about how humanity deals with these very powerful technologies. One way warfare is evolving is that people are being removed from the battlefield, not just physically but also cognitively, as more and more decisions are made by systems operating at machine speed. The challenge for governments and military organizations is to figure out how to use AI weapons in more precise and humane ways and not lose our humanity in the process.

Attempts to Mitigate Risks of AI Weapons

If I had to guess, I would bet money that over 90% of the people on earth would strongly disapprove of any AI weapon systems. Unfortunately we already talked about the “greed and avarice factor.” What is the best we can hope for? Here we will explore a few ideas that are not good outcomes but may be the best we can accomplish.

One way to mitigate the risks of AI weapons is through citizen involvement in oversight. Citizens can play an important role in ensuring that AI weapons are developed and used in a responsible and ethical manner. This can be done through public awareness campaigns, lobbying efforts, and other forms of activism.

Another way to mitigate the risks of AI weapons is through transparency - if there is no way to avoid the development and proliferation of AI weapon systems. Governments and military organizations should be transparent about their development and use of AI weapons. This can help build trust with citizens and other countries, and can help ensure that AI weapons are developed and used in a responsible and ethical manner.

Treaties between countries can also help mitigate the risks of AI weapons. International treaties can establish norms and standards for the development and use of AI weapons, and can help prevent an arms race in AI weapons technology. Sanctioning all countries that develop AI weapons is another way to mitigate the risks of these weapons.

Finally, it is important to continue research into the ethical implications of AI weapons. This research can help identify potential risks and challenges associated with these weapons, and can help inform policy decisions related to their development and use.

These ideas all sound good, but I remain pessimistic on this subject.

What About the United Nations?

Sadly, because of the UN Security Council’s veto rule, the UN can do little to promote world peace.

The UN Security Council is responsible for maintaining international peace and security. However, the veto rule of the Security Council has been criticized for limiting the ability of the UN to promote world peace. The veto rule allows any of the five permanent members of the Security Council (China, France, Russia, the United Kingdom, and the United States) to veto any substantive resolution. This means that even if a majority of the Security Council members support a resolution, it can be blocked by a single veto.

The veto rule has been used many times in the past to block resolutions related to conflicts around the world. For example, in 2017, Russia vetoed a resolution that would have condemned the use of chemical weapons in Syria. The US has vetoed several UN Security Council resolutions critical of Israel since 1972. The use of the veto power by members of the Security Council has been criticized for limiting the ability of the UN to promote world peace.

As citizens of individual countries, what can we do about this situation? I would argue that as individuals we are not helpless. Almost all western democracies have active political parties who promote an anti-war agenda. In the United States where I live we have the Libertarian Party and the Green Party.

Generative AI

Generative AI represents a branch of artificial intelligence that leverages machine learning techniques to generate data that is similar to the data it has been trained on. These systems utilize generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and others, to create new data instances. This can be applied across a broad range of data types including text, images, sound, and more. Generative AI has gained significant attention due to its ability to generate highly realistic outputs, which can be useful in various domains such as digital art, music, synthetic media, and healthcare.

Using Generative AI in a privacy-preserving and safe way for users is a challenging but essential concept in our data-driven world. Firstly, techniques such as differential privacy can be applied during the training process of the generative model. This involves adding a carefully calibrated amount of noise to the data, so as to obscure individual data points, yet still allowing the overall patterns in the data to be learned. This way, the AI can be trained on the general trends in the data without directly accessing sensitive individual data points.
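The core building block of differential privacy can be sketched in a few lines. The example below is a minimal, illustrative implementation of the Laplace mechanism applied to a counting query (the function names, dataset, and epsilon values are my own, not from any specific library):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Answer a counting query with differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon is
    enough to obscure any individual's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 34, 17, 45, 52, 29]
print(private_count(ages, lambda a: a > 30))  # randomized; centered on 3
```

Smaller epsilon values give stronger privacy at the cost of noisier answers; production systems use vetted libraries rather than hand-rolled samplers like this one.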

Another essential concept is ensuring that the outputs of Generative AI do not inadvertently reveal sensitive information. Techniques such as privacy amplification through iteration can be employed, where multiple rounds of generation and selection can gradually reduce the likelihood of any one data point influencing the final output. Also, it is important to have robust review and testing processes in place to ensure that the generative models are not creating content that could be harmful, offensive, or misleading. For instance, it’s critical to ensure that the AI cannot generate deepfake media that could infringe upon individuals’ privacy rights or be used for disinformation campaigns. Through a combination of technological and policy-based safeguards, Generative AI can be used in a manner that maximizes its benefits while minimizing potential harms.


I am not a good artist so I used Midjourney to create the cover for the book you are reading. Midjourney is an independent research lab that explores new mediums of thought and expands the imaginative powers of the human species. They have a small self-funded team focused on design, human infrastructure, and AI.

Midjourney allows users to create beautiful AI-generated artwork by simply entering a text-based prompt that describes the image. What makes Midjourney strikingly different from most of its competitors is that it runs entirely within the Discord platform. This means that you can use Midjourney on most devices running Windows, macOS, Linux, iOS, or Android.

There have been some copyright issues surrounding the use of AI-generated images, including those created by Midjourney. The U.S. Copyright Office has ruled that illustrations in a new comic book that were created with the AI program Midjourney are not protected by copyright law. A trio of artists have launched a lawsuit against Stability AI and Midjourney, creators of AI art generators Stable Diffusion and Midjourney, alleging that these organizations have infringed the rights of “millions of artists” by training their AI tools on images scraped from the web “without the consent of the original artists.”

Whether or not these systems infringe on copyright law is a complicated question which experts say will need to be settled in the courts. The creators of AI art tools generally argue that the training of this software on copyrighted data is covered (in the US at least) by fair use doctrine. However, some copyright experts believe that Midjourney's image generator does not infringe because its output is transformative rather than a direct copy of the training images.

As for business use cases for using Midjourney, there are many. For example, small businesses can use Midjourney to design appealing visuals for their company’s ads or to plan marketing campaigns. Midjourney can also be used to create content, although it’s important to note that it’s not enough to rely solely on AI for all content efforts. Other potential business use cases include blogging about Midjourney and its capabilities.

Microsoft and GitHub Copilot

I have been using Copilot for a year, first as a technical preview and then as a released product. I always have Copilot active when using both Emacs and VSCode and it saves a lot of development time. I love it since it has made programming more fun because Copilot handles much of the busy work of programming. Configuring VSCode and Copilot is simple. To configure Emacs you will need to do some extra research and work. A good starting point is my Emacs configuration.

Instead of having to write everything from scratch, developers can use tools like Copilot to generate code snippets that can be easily integrated into their projects. This can help speed up the development process and make it more efficient. Often it is sufficient to write a comment block for a new function and the function name. You can also generate tests for generated code.
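As an illustration of the comment-driven workflow (the function below is a hypothetical example I wrote, not output captured from Copilot), a comment block plus a signature is often all Copilot needs to propose a complete body much like this:

```python
from collections import Counter
import re

# Return the n most common words in a text file, ignoring case.
# A comment like the one above plus the signature below is often
# enough for Copilot to draft the entire function body.
def most_common_words(path: str, n: int = 10):
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(n)
```

Whatever Copilot proposes, review it as carefully as code from any outside contributor: generated bodies can look plausible while mishandling edge cases.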

However, there are also some potential disadvantages to generating code. One issue is that generated code may not always be of the highest quality or may not meet the specific needs of the project. Additionally, there are potential copyright issues to consider when using generated code, as it may be derived from other sources.

To maximize productivity when using tools like Copilot, it’s important to use them in conjunction with good development practices. This includes thoroughly reviewing and testing any generated code before integrating it into your project, as well as being mindful of potential legal issues. By following these tips, developers can make the most of code generation tools while minimizing potential drawbacks.

ChatGPT and Microsoft Bing with ChatGPT

The ChatGPT and Bing with ChatGPT services are useful for a wide range of research and other knowledge work. Some ideas are:

  • Intelligent search for information.
  • Interactive chat that combines search with user requests to refine responses or for related information.

I use the combination of Bing with ChatGPT more often than the OpenAI ChatGPT web app.


What forms of governance can we use to keep AI systems safe for personal use and advance the common good in society?

Effective governance of large language models (LLMs) and other AI systems is a multifaceted challenge that requires strategies grounded in transparency, accountability, and inclusivity. Transparency is essential in understanding how an AI system makes decisions. This includes disclosing the types and sources of data used for training, the algorithmic mechanisms, and the biases that might be embedded within the model. Implementing standardized AI transparency reporting, similar to environmental, social, and governance (ESG) reporting for corporations, can be an effective strategy. This would help stakeholders understand how the AI system operates and inform potential areas for improvement.

Accountability is another key pillar of AI governance. AI developers should be accountable for the performance and behavior of their systems. This requires robust oversight and regulation, possibly through a dedicated body with sufficient technical expertise to assess AI systems. Accountability also implies recourse - a system must be in place for addressing harm or grievances that arise from the use of these technologies. A robust feedback mechanism would allow users and affected parties to voice concerns, which could then be addressed in a timely and efficient manner.

Lastly, governance strategies should be inclusive and respectful of a broad range of perspectives. AI does not operate in a vacuum, but in complex social systems, with diverse cultural contexts and values. Therefore, a range of voices, including those from marginalized communities, should be included in the decision-making process. This could be achieved through public consultation processes, third-party audits, or partnerships with civic organizations. By integrating diverse perspectives, we can work towards AI governance that is not only technically robust, but also ethically sound and socially beneficial.

I discuss this topic further in the chapter Risks.

Transparency in a World Where Corporations and Governments Do Not Want Transparency

Transparency in the current global socio-political context is a highly contentious issue. In an era characterized by extensive data collection, government and corporate surveillance, and corporate power, the call for transparency from corporations and governments has never been more urgent. However these entities often exhibit reluctance, or even outright resistance, to such transparency. Corporations, intent on safeguarding their competitive advantages and trade secrets, are frequently opaque about their operations. Similarly, governments, under the guise of national security or policy considerations, often withhold information from their citizens.

In the USA, a particular cause for concern is government agencies purchasing people's data from corporations when the government is prohibited by US law from collecting this data themselves. This circumvention of US law is particularly troubling. The situation is better in the European Union because of enforcement of the GDPR laws.

An important aspect of this tension over transparency is the digitalization of modern society. The Internet, social media, and other digital platforms have profoundly transformed how information is created, disseminated, and consumed. Corporations leverage these platforms to gather and exploit vast amounts of data, often without clear consent or comprehension from users. Meanwhile, governments employ these same tools for surveillance and control, creating a sense of opacity that contradicts the inherent openness of the digital age. This opacity can lead to distrust, as stakeholders feel their privacy is being compromised without clear benefits or reciprocal transparency in return.

Transparency advocates argue that it is crucial for establishing and maintaining trust, particularly as society becomes more interconnected and interdependent. Trust can facilitate collaboration, drive economic growth, and promote social cohesion. For corporations, transparency can foster a healthier relationship with customers and other stakeholders. It can also lead to better decision-making, as diverse viewpoints are more likely to be heard and considered. For governments, transparency can enhance democratic governance, promote accountability, and help prevent corruption.

However, this “humans first” push towards achieving transparency is fraught with challenges. Corporations and governments must balance the need for transparency with other (sometimes) legitimate concerns, such as privacy, earning excess profits, and security. Transparency initiatives can also be costly, time-consuming, and difficult to implement. Moreover, in an increasingly complex and dynamic world, even the concept of transparency can be hard to define and measure. But with the digital revolution irreversibly changing the dynamics of information flow, the push for greater transparency is unlikely to wane, serving as a vital check against potential misuse of power.

While some governments like the USA and EU enforce transparency laws, there is still a problem with countries that don’t while still doing international business. In the USA, the Corporate Transparency Act was passed in 2021 to counter money laundering, the financing of terrorism, and other illicit activity by creating a national registry of beneficial ownership information for “reporting companies”. While this is a good bill as far as it goes, it does little to protect US consumers from US corporations. In Europe, the General Data Protection Regulation (GDPR) requires companies to ask for some permissions to share data and gives individuals rights to access, delete, or control the use of that data. However, not all countries have such regulations in place, which can create challenges for international business. Organizations such as Transparency International are working to promote transparency and fight corruption globally.

Building Transparency Into AI Systems

Building transparency into AI systems is an important step in protecting the rights of people using these systems. Transparency in AI systems refers to the ability to understand how the system makes decisions and what data it uses to do so. This can help ensure that the system is fair, unbiased, and accountable. It also involves letting users know how their data is being used and if any of their data is shared with third parties.

One way that individuals and companies can build transparency into their AI systems is by using explainable AI techniques. These techniques allow developers to create models that can provide clear explanations for their decisions, making it easier for users to understand how the system is working. This can help build trust in the system and ensure that it is being used in an ethical and responsible manner.
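As a tiny sketch of this idea (all feature names and weights below are hypothetical): for a linear scoring model, each feature's contribution can be reported alongside the decision, so a user can see exactly why the system decided as it did.

```python
# Hypothetical linear credit-scoring model. Because the model is
# linear, each feature's contribution (weight * value) is itself
# a direct explanation of the decision.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}

def score_with_explanation(applicant):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "deny"
    return decision, contributions

decision, why = score_with_explanation(
    {"income": 5.0, "debt": 2.0, "years_employed": 3.0})
print(decision, why)
```

Deep models need heavier machinery (attribution methods, surrogate models) to produce comparable explanations, but the goal is the same: per-input reasons a user can inspect and contest.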

Another way to build transparency into AI systems is by implementing strong data governance practices. This includes being transparent about what data is being collected, how it is being used, and who has access to it. Companies should also have clear policies in place for handling sensitive data and ensuring that it is protected.

Finally, companies can build transparency into their AI systems by engaging with stakeholders and being open about their development processes. This includes involving users, regulators, and other stakeholders in the design and testing of the system, as well as being transparent about any challenges or issues that arise. By taking these steps, companies can help ensure that their AI systems are transparent, fair, and accountable.

Part II - Using AI Safely: a Technical Approach

It does not make much difference whether you are using an LLM on a public platform like OpenAI's ChatGPT or Google Bard, or running open and freely available LLM models locally. Simple queries trigger a text completion operation and the completed text is a function of what text was used to train the model.

A more reliable way to use LLMs is to provide context text and then ask a query about the context text. We will see examples for doing this in your own Python scripts.
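A minimal sketch of this pattern follows. The prompt template wording is my own (an assumption, not a fixed standard); the resulting string can be passed to any completion backend, for example llmlib's get_completion method:

```python
# Build a "context + query" prompt. The template text is illustrative;
# adapt the wording to your model and task.
def make_context_prompt(context: str, query: str) -> str:
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer:"
    )

prompt = make_context_prompt(
    "Mark has worked in the field of AI since 1982.",
    "When did Mark start working in AI?")
# send `prompt` to any LLM backend, e.g. llm.get_completion(prompt)
print(prompt)
```

Grounding the model in supplied context this way reduces (but does not eliminate) the chance of fabricated answers.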

Another way to get better results is to use fine tuning to modify an open LLM by means of additional text from your PDF documents, web sites, emails, etc. We address this technique in the chapter Fine Tuning.

Portability Library Supporting OpenAI, Hugging Face, and Self Hosted Models

I usually avoid extra abstraction layers in software I design and write. For the use cases of this book's examples and my own projects I wrote the library llmlib, which offers a common API for OpenAI APIs and locally hosted Hugging Face models. My workflow for new LLM applications now usually starts with interactive development using either a local Hugging Face model running on my laptop or OpenAI APIs. I can easily work on my laptop. Later I have the option of switching between self-hosted LLM models and OpenAI APIs without needing to convert my application code to a new LLM backend.

The llmlib is a small library that I wrote for my own use and for use in this book.

For more documentation for llmlib, refer to the project's GitHub repository. If you like my approach using llmlib then I suggest you fork the GitHub repository and modify it for your own use. You can also install it as a package since I maintain the library on PyPi:

pip install llmlib

We will not look at the backend implementation code for using OpenAI APIs and local Hugging Face self-hosted models. We will review the public API in the Python base class:

class BaseModelWrapper():
    def __init__(self, embeddings_dir="./db_embeddings"):
        self.llm = None
        self.embeddings = None
        self.embeddings_dir = embeddings_dir

    # complete text:
    def get_completion(self, prompt, max_tokens=64):
        pass

    # create local embeddings:
    def create_local_embeddings_files_in_dir(self, path):
        "path is a directory"
        pass

    # query local embeddings:
    def query_local_embeddings(self, query, n=10):
        pass

This is a simple API but is adequate for the purposes of examples developed in the next chapter. For more examples using LLMs, LangChain, and LlamaIndex please see my book LangChain and LlamaIndex Projects Lab Book: Hooking Large Language Models Up to the Real World that you can read free online or purchase.

The rest of this chapter can be considered optional material. We will take a quick look at the implementation of the LLM backends for llmlib.

Examples Using LlmLib

There are four example programs in the llmlib GitHub repository that you can experiment with. Here we look at two of them: the first uses OpenAI APIs for the backend and the second uses a local Hugging Face model backend.

from llmlib.openai import OpenAiWrapper

llm = OpenAiWrapper()

gen_text = llm.get_completion("Once upon a time", max_tokens=64)
print(gen_text)

from llmlib.huggingface import HuggingFaceAiWrapper

llm = HuggingFaceAiWrapper()

llm.create_local_embeddings_files_in_dir("./data/")

a = llm.query_local_embeddings("definition of sports")
print(a)

a = llm.query_local_embeddings("activities in sports")
print(a)

Using Local Large Language Models

Local Large Language Models (LLMs) are AI models, similar to GPT-3 or GPT-4, but are designed to be run on local machines or secured leased cloud-based servers rather than public cloud-based platforms that may or may not respect the privacy of data. For the sake of our discussion I refer to both LLMs run on your own laptop and LLMs run on a leased VPS as "local LLMs."

In addition to control over your own data, local models allow users to harness the power of language models without needing a continuous internet connection or relying on a remote server. Local LLMs can be used in a variety of applications ranging from drafting emails, creating content, to developing interactive chatbots. Additionally, due to their local nature, they can be customized to specific tasks or domains, making them a valuable tool for businesses and organizations with unique requirements.

One of the primary advantages of using local LLMs is the enhanced privacy they offer. Since the models are run locally on users’ own hardware, there’s no need for user inputs to be sent over the internet, reducing the potential for data leaks or breaches. This is a critical benefit for organizations handling sensitive information or for individual users concerned about their privacy. Local LLMs also offer better control over data and processing, as users are not tied to the terms and conditions of a cloud-based service provider.

However, despite their advantages, local LLMs also have their challenges. The most significant of these is the computational resources required to run such models. Large language models are resource-intensive and need powerful hardware to operate effectively. This can be a limiting factor for many users, particularly smaller businesses or individuals without access to high-performance computing resources. Furthermore, setting up and maintaining a local LLM might require technical expertise that not all users possess, making them less accessible for non-technical users.

The scalability of local LLMs can also be a challenge. While cloud-based models can easily be scaled up to handle increased load by adding more server capacity, scaling local LLMs would typically require purchasing and setting up additional hardware. This can make local LLMs less flexible and more costly in scenarios with varying or unpredictable demand. Despite these challenges, local LLMs still represent an important development in the field of AI, providing users with greater privacy, control, and customization potential for their AI applications.

Available Public Large Language Models

Corporations like Meta, Google, and Hugging Face all often share models that they have trained with the public. This is good and bad. The good part is that these models are expensive to train and once you have a copy of a model you can run it locally or on a leased VPS in a secure privacy-preserving way. The bad aspects of some of these models are usually a result of the training data:

  • Data used to train models may contain text that is bigoted or contains incorrect information.
  • Data used to train models may not be effective in meeting your application’s requirements. See the chapter Fine Tuning for ways to mitigate these problems.

Some examples of publicly available LLMs include GPT-Neo, GPT-J, GPT-NeoX, XLNet, RoBERTa, DeBERTa, XLM-RoBERTa, DistilBERT, and OPT-175B.

These models excel at a wide range of tasks such as reading comprehension, text classification, sentiment analysis, and others. The efficacy with which they accomplish tasks, and the range of tasks at which they are capable, seems to be a function of the amount of resources (data, parameter-size, computing power) devoted to training them.

It is important to note that publicly available large language models do not provide a degree of confidence for the accuracy of their output. One main challenge is that they are not explicitly designed to provide truthful answers; rather, they are primarily trained to generate text that follows the patterns of human language.

In terms of recommendations for their use, it is important to be aware of their limitations and potential biases. It is also important to use them responsibly and in accordance with any licensing or usage restrictions set by the provider.

StabilityAI’s StableLM Using the lit-gpt Library

Install the lit-gpt library with GPU support enabled:


The largest instruction-tuned model is good for some purposes and weak for others.

cd lit-gpt
python scripts/ --repo_id stabilityai/stablelm-tuned-alpha-7b
python scripts/ --checkpoint_dir checkpoints/stabilityai/stablelm-tuned-alpha-7b
python chat/ --checkpoint_dir checkpoints/stabilityai/stablelm-tuned-alpha-7b

It writes decent Python programs, and wrote a pretty good poem about a parrot. It does not do well when given context text followed by a query.

Google’s Flan-T5-XXL

The Flan-T5-XXL model can run on a Linux server with a 48G A6000 GPU. Here are memory requirements I measured during my experiments:

  • google/flan-t5-xxl 30 to 90% of GPU memory used (16-bit float weights)
  • google/flan-t5-xl 39% of GPU memory used (32-bit float weights)

The storage on disk (in ~/.cache/huggingface/hub) is:

  • 11G models—google—flan-t5-xl
  • 42G models—google—flan-t5-xxl

In my experiments flan-t5-xxl can write simple poetry and it was also able to write business plans. Here is a short example Python script (derived from examples in Hugging Face documentation) that you can use to get started:

# pip install transformers accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-t5-xxl", device_map="auto", torch_dtype=torch.float16)

#print("model.config.max_new_tokens:", model.config.max_new_tokens)
#model.config.max_new_tokens = 300
#print("model.config.max_new_tokens:", model.config.max_new_tokens)

def generate(input_text, max_new_tokens=100):
    input_ids = tokenizer(input_text,
                          return_tensors="pt").input_ids.to("cuda")
    outputs = model.generate(input_ids, max_new_tokens=max_new_tokens)
    print(tokenizer.decode(outputs[0]))

generate("translate English to German: How old are you?")
generate("Bob is 23 years old. Mary is 34 years old. Sam is 17 years old. Who is older than 30?")
generate("What is the capital of California?")
generate("Write a 6 line poem about my pet parrot escaping out the window",
         max_new_tokens=600)
generate("Write a business plan for selling computer art online, including pricing advice",
         max_new_tokens=800)

Running FastChat with Vicuna LLMs

I recommend cloning the FastChat GitHub repository and spending some time reading the documentation in its docs directory.

I run FastChat with the Vicuna LLMs on a Lambda Labs GPU VPS. I run the 7B and 13B models on a VPS with a single Nvidia A6000. I use a VPS with two Nvidia A6000s to run the 33B model.

To run on a 2 A6000 GPU VPS:

pip install fschat transformers bitsandbytes
export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:1024'
python -m fastchat.serve.cli --model-path lmsys/vicuna-33b-v1.3 --num-gpus 2

Here is example output:

USER: Bill is 25 years old and Mary is 30 years old. Who is older?
ASSISTANT: Mary is older than Bill. At 30 years old, she is 5 years older than Bill, who is 25 years old.

You can also run FastChat in server mode, where it emulates the OpenAI APIs, so it is possible to run applications using either OpenAI APIs or FastChat as a service. The FastChat documentation includes directions for replacing OpenAI APIs with FastChat in an application that uses LangChain; in that example both an embedding model and a text completion model are used.
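The sketch below shows what addressing a FastChat server through an OpenAI-compatible endpoint looks like. The host, port, and model name are my assumptions (FastChat lets you configure all three); the request is only built, not sent, so the example runs without a live server:

```python
import json

# FastChat's OpenAI-compatible server; host and port are assumptions
# and depend on how you launched the server.
API_BASE = "http://localhost:8000/v1"

def completion_request(prompt, model="vicuna-7b-v1.3", max_tokens=64):
    """Build (url, body) for an OpenAI-style text completion request."""
    url = API_BASE + "/completions"
    body = json.dumps(
        {"model": model, "prompt": prompt, "max_tokens": max_tokens})
    return url, body

url, body = completion_request("Once upon a time")
print(url)  # an HTTP POST of `body` to this URL returns the completion
```

Because the payload shape matches the OpenAI completions API, existing OpenAI client code can usually be pointed at such a server by changing only the base URL.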

Building a Chat Application Using Text From My Books and Prompt Engineering

For this example we use a new library, EmbedChain (which in turn uses LangChain and LlamaIndex). The example code is in the GitHub repository for this book in the directory safe-for-humans-AI-software/experiments/embedchain_test.

We start with the script that reads the PDFs of my books and uses an OpenAI embedding model to create a local text-chunk embeddings vector datastore that is written to the subdirectory db. Here is a listing of the script:

#
from embedchain import App
import os

test_chat = App()

my_books_dir = "/Users/markwatson/Library/Mobile Documents/com~apple~CloudDocs/Documents/my book PDFs/"

for filename in os.listdir(my_books_dir):
    if filename.endswith('.pdf'):
        print("processing filename:", filename)
        test_chat.add("pdf_file", os.path.join(my_books_dir, filename))

If you have a directory of PDFs on your laptop that you would like to use, just change the path for my_books_dir.

The script is a simple application that allows us to chat against our local text chunk index using an OpenAI text completion model:

#
from embedchain import App

test_chat = App()

def test(q):
    print(q)
    print(test_chat.query(q), "\n")

test("How can I iterate over a list in Haskell?")
test("How can I edit my Common Lisp files?")
test("How can I scrape a website using Common Lisp?")

Here is the sample output for this script:

embedchain_test $ p
How can I iterate over a list in Haskell?
To iterate over a list in Haskell, you can use recursion or higher-order functions like `map` or `foldl`.

How can I edit my Common Lisp files?
To edit Common Lisp files, you can use Emacs with the Lisp editing mode. By setting the default auto-mode-alist in Emacs, whenever you open a file with the extensions ".lisp", ".lsp", or ".cl", Emacs will automatically use the Lisp editing mode. You can search for an "Emacs tutorial" online to learn how to use the basic Emacs editing commands.

How can I scrape a website using Common Lisp?
One way to scrape a website using Common Lisp is to use the Drakma library. Paul Nathan has written a library using Drakma called web-trotter.lisp, which is available under the AGPL license. This library can be a good starting point for web scraping in Common Lisp. Additionally, you can use the wget utility to make local copies of a website. The command "wget -m -w 2 http:/" can be used to mirror a website with a two-second delay between HTTP requests for resources. The option "-m" indicates to recursively follow all links on the website, and the option "-w 2" adds a two-second delay between requests. Another option, "wget -mk -w 2 http:/", converts URI references to local file references on your local mirror. Concatenating all web pages into one file can also be a useful trick.

The EmbedChain library abstracts away many of the details for using LangChain and LlamaIndex by using reasonable defaults.

Fine-Tuning LLMs Using Your Data

Fine-tuning LLMs using your data refers to the process of adjusting the parameters of a pre-trained Large Language Model (LLM) to better fit a specific dataset for your applications. This can help improve the model’s performance on tasks specific to a particular domain or use case.

LLMs are machine learning models that are very effective at performing language-related tasks such as translation, answering questions, chat and content summarization, as well as content and code generation. They can be fine-tuned using a small amount of task-specific data, enabling them to perform better on that task with limited examples.

For example, imagine you run an e-commerce site selling camera products and you want to condense all reviews for a product into one summary for customers. You could fine-tune an LLM like T5 or Vicuna on your dataset of product reviews to specialize a pre-trained LLM for this task.

Fine-Tuning vs. Prompt Engineering

Using LLMs with context data and a prompt refers to the process of providing enough context, instruction and examples to the model at inference time to get it to do what you want without changing the underlying weights. This is known as prompt engineering.

Fine-tuning, on the other hand, involves directly updating the model parameters using a dataset that captures the distribution of tasks you want it to accomplish. This allows the model to specialize in particular use cases and improves its performance in specific domains.

In other words, prompt engineering is about manipulating the input prompt to coax the model into a region of its latent space where the probability distribution over next tokens matches your intent. Fine-tuning is about adjusting the model’s parameters to better fit a specific dataset.
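To make this distinction concrete, here is a minimal sketch of the prompt-assembly side, using a hypothetical build_prompt helper of my own invention (no model weights are touched; behavior is steered only by the instruction, examples, and context packed into the prompt):

```python
def build_prompt(instruction, examples, context, question):
    """Assemble a few-shot prompt: behavior is controlled entirely by the
    text sent at inference time, not by changing any model weights."""
    parts = [instruction]
    for q, a in examples:                  # few-shot demonstrations
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Context: {context}")    # retrieved or supplied context
    parts.append(f"Q: {question}\nA:")     # the actual query
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Answer briefly using only the supplied context.",
    examples=[("What is 2 + 2?", "4")],
    context="The llmlib wrapper lets you swap LLM backends.",
    question="What does llmlib let you do?",
)
print(prompt)
```

The resulting string would be sent unchanged to whichever backend LLM is configured; swapping the examples or instruction is the entire "programming model" of prompt engineering.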

The Process of Fine-Tuning

For my experiment I used the FastChat project and documentation.

In practice, you will follow fine-tuning directions for a specific pre-trained LLM that is usually provided with public models from Hugging Face and other sources. We will look at the general steps taken in fine-tuning and then later look at just one example.

Here are the steps typically involved in fine-tuning an LLM:

  • Prepare your dataset: to fine-tune the LLM, you’ll need a dataset that aligns with your target domain or task. Ensure your dataset is large enough to cover the variations in your domain or task.
  • Configure the training parameters: fine-tuning involves adjusting the LLM’s weights based on the custom dataset. This step involves configuring the training parameters such as learning rate, batch size, and number of epochs.
  • Set up the training environment: set up the hardware and software environment for training, such as selecting the appropriate GPU and installing necessary libraries.
  • Fine-tune the model on your custom dataset.
  • Evaluate the fine-tuned model: evaluate the performance of the fine-tuned model on a separate validation dataset to ensure that it has improved on the target task.
  • Save the fine-tuned model so you can later use it for inference on new data.
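The dataset-preparation and validation-split steps above can be sketched with standard-library Python. The record layout here is a generic illustration of my own, not the format required by any particular training framework:

```python
import json
import random

def make_records(pairs):
    """Convert (question, answer) pairs into simple training records."""
    return [{"prompt": q, "completion": a} for q, a in pairs]

def train_val_split(records, val_fraction=0.2, seed=42):
    """Shuffle deterministically and hold out a validation slice so the
    fine-tuned model can be evaluated on data it was not trained on."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

pairs = [(f"question {i}", f"answer {i}") for i in range(10)]
train, val = train_val_split(make_records(pairs))
print(len(train), len(val))   # 8 2
json.dumps(train)             # records serialize cleanly to JSON
```

A real project would write these splits to files in whatever format the chosen trainer expects; the point here is only the held-out evaluation set.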

Building a Chat Application Using Text From My Books and Fine-Tuning

Preparing data for fine-tuning can be challenging, and fine-tuning existing LLMs can be expensive. Here we look at an experiment I performed using manuscript files for several of my books. I “chunked” the entirety of this text and used the OpenAI GPT-3.5 API to generate fine-tuning questions. My understanding of the OpenAI API terms and conditions is that you can only do this for non-commercial research.

I wrote the following Python script to chunk my manuscript files, call the OpenAI API, and write the prompts to one large JSON file:

from langchain.text_splitter import MarkdownTextSplitter
import openai
import os
import time
from pprint import pprint

openai.api_key = os.getenv("OPENAI_API_KEY")

def completion(s):
    return openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=[{"role": "user",
                                                   "content": s}])

def gen_question(text):
  time.sleep(0.5)
  try:
    q = completion(text +
                   f"Generate a good question for the following text:\n{text}")
    #print(q)
    return q.choices[0]['message']['content']
  except Exception as e:
     print("error:", e)
     return ""

directories = ["../../lovinglisp-book/manuscript/",
               "../../LangChain-book/manuscript/",
               "../../Java-AI-Book/manuscript",
               "../../haskell_book/manuscript/"
               ]

result = ''

for directory in directories:
    for filename in os.listdir(directory):
        if filename.endswith('.md'):
            print("processing filename:", filename)
            with open(os.path.join(directory, filename), 'r') as f:
                result += f.read()
            print("length of result=", len(result))

splitter = MarkdownTextSplitter(chunk_size=200, chunk_overlap=10)
docs = splitter.create_documents([result])

chunks = map(lambda doc: doc.page_content, docs)

outFile = open("watson_book_prompts_edited.json", "w")
counter = 0
template = """
  {
    "id": "identity_COUNTERZZZ",
    "conversations": [
      {
        "from": "human",
        "value": "QUESTION"
      },
      {
        "from": "gpt",
        "value": "ANSWER"
      }
    ]
  },
"""

outFile.write("[\n")

for answer in chunks:
    if "http" in answer or "{" in answer or "[" in answer or "~" in answer or "**" in answer or len(answer) < 100:
        continue
    question = gen_question(answer).replace('"', '').replace('\n', '')
    print(question)
    ans = answer.replace('"', '').replace('\n', '').replace('#', '')
    s = template.replace("QUESTION", question).replace("ANSWER", ans).replace("COUNTERZZZ", str(counter))
    #print(s)
    if (len(s)>20):
        outFile.write(s)
    time.sleep(0.5)
    counter += 1

# remove trailing comma to avoid a JSON parsing error:
outFile.seek(outFile.tell() - 3, os.SEEK_SET)
outFile.write("]\n")
outFile.close()

Here is a small part of the generated JSON file:

  {
    "id": "identity_6",
    "conversations": [
      {
        "from": "human",
        "value": "What are the key features of CLML that make it suitable for machine learning, time series data analysis, and matrix and tensor operations?"
      },
      {
        "from": "gpt",
        "value": "Even though the learning curve is a bit steep, CLML provides a lot of functionality for machine learning, dealing with time series data, and general matrix and tensor operations."
      }
    ]
  },
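Because the script above builds JSON with string templates, it is worth validating the generated file before training. Here is a small standard-library check of the conversation-record shape (the validate_conversations helper and sample data are my own illustration):

```python
import json

def validate_conversations(json_text):
    """Parse FastChat-style training data and check each record's shape:
    an id plus a human/gpt conversation pair with non-empty values."""
    records = json.loads(json_text)
    for rec in records:
        assert "id" in rec and "conversations" in rec
        roles = [turn["from"] for turn in rec["conversations"]]
        assert roles == ["human", "gpt"], f"bad roles in {rec['id']}"
        assert all(turn["value"] for turn in rec["conversations"])
    return len(records)

sample = """[
  {"id": "identity_0",
   "conversations": [
     {"from": "human", "value": "What is CLML?"},
     {"from": "gpt", "value": "A Common Lisp machine learning library."}
   ]}
]"""
print(validate_conversations(sample))   # 1
```

Running a check like this on the full generated file catches a stray trailing comma or an unescaped quote before hours of GPU time are spent.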

I ran this experiment on a GPU VPS from Lambda Labs. I slightly modified the local environment:

pip install fschat transformers openai
export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:1024'

I used the following command (derived from the FastChat documentation examples):

python -m torch.distributed.launch fastchat/train/train_f \
    --model_name_or_path google/flan-t5-small \
    --data_path watson_book_prompts_edited.json \
    --bf16 True \
    --output_dir ./checkpoints_flant5_3b \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 300 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap T5Block \
    --tf32 True \
    --model_max_length 2048 \
    --preprocessed_path ./preprocessed_data/processed.json \
    --gradient_checkpointing True

My experiment fine-tuning an existing model using prompts produced from my books is a work in progress. I will update this chapter as I get better results.

Part III - What Does the Future Hold?

Well dear reader, I hope you appreciate that I am demonstrating some degree of bravery to write this part of the book since technology and new applications are evolving so fast.

As an example, I awoke this morning to find the new paper ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs that introduces a project that will allow LLM applications to connect with most public APIs.

AI-Safety Respecting Architectures for Use in Education

The goal of using AI in education is to make learning more enjoyable and efficient by automating what can be automated, allowing teachers to spend more time individually with each student.

AIs can collect learning text and all interactions with each student into a private per-student vector embeddings document database. This allows both the teacher and student to “chat” with the AI in the context of the individual student. The AI can use this information to provide personalized recommendations and feedback to the student.
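One way to sketch this per-student architecture is a document store partitioned by student ID, here with a toy bag-of-words similarity standing in for real vector embeddings (all class and method names are my own illustration, not from any specific library):

```python
import math
from collections import Counter, defaultdict

class StudentStore:
    """Per-student document store with a toy bag-of-words cosine search,
    standing in for a real per-student vector-embedding database."""
    def __init__(self):
        self._docs = defaultdict(list)   # student_id -> list of texts

    def add(self, student_id, text):
        self._docs[student_id].append(text)

    @staticmethod
    def _cosine(a, b):
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def query(self, student_id, question, k=1):
        """Search only this student's documents, keeping data partitioned."""
        docs = self._docs[student_id]
        return sorted(docs, key=lambda d: self._cosine(d, question),
                      reverse=True)[:k]

store = StudentStore()
store.add("alice", "Alice struggles with fractions and long division")
store.add("alice", "Alice enjoys reading assignments")
store.add("bob", "Bob needs help with spelling")
print(store.query("alice", "trouble with fractions"))
```

The privacy property comes from the partitioning: a query for one student can never retrieve another student's records, and the same structure holds when the toy scorer is replaced by real embeddings.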

To ensure privacy, the student can keep private encrypted notes with the AI that only the student can “chat” about. This allows the student to have a private and secure way to communicate with the AI and receive personalized feedback.

AI can be used to determine which topics individual students are having problems with and suggest new material and a plan for improvement to both the student and teacher. For example, new AI systems are being developed to help teachers administer more effective testing that could uncover often-hidden conditions. Once these conditions can be properly identified, educators can tap into the resources available for a learning disability.

Large Language Models (LLMs) used by AI can be fine-tuned with appropriate data concerning fairness, social justice, respect for other people, etc. This involves training the LLMs on data that reflects these values and minimizing potential sources of bias in the training data. Techniques such as learning from human feedback can also be used to improve the model’s behavior and ensure that it aligns with these values.

By fine-tuning LLMs with appropriate data, the AI can provide recommendations and feedback that are fair, respectful, and socially just.

There are several best practices that can be followed to ensure AI safety respecting architectures for use in educational AI systems. One approach is to use open-source foundation models for building custom domain-specific LLMs. OpenAI has developed a set of best practices applicable to any organization developing or deploying large language models (LLMs). These include publishing usage guidelines and terms of use of LLMs that prohibit material harm to individuals, communities, and society. They also recommend building systems and infrastructure to enforce usage guidelines.

There are several AI tools that can be used in education to automate tasks such as walking students through training material, automating testing and evaluation, and providing teachers with review material specific to each individual student for 1-on-1 teacher/student sessions. These types of tools rely on a combination of machine learning (ML) and AI to make it easier to grade, which saves time and energy.

It is important to ensure that AI systems are built, deployed, and used safely. OpenAI has developed a set of best practices for ensuring AI safety. These include conducting rigorous testing prior to releasing any new system, engaging external experts for feedback, working to improve the model’s behavior with techniques like reinforcement learning with human feedback, and building broad safety and monitoring systems.

AI-Safety Respecting Architectures for Use in Small and Medium Size Businesses

There are several best practices that can be followed to ensure AI safety respecting architectures for use in small and medium size businesses. One approach is to use open-source foundation models for building custom domain-specific LLMs. The best open models use public training datasets. OpenAI has developed a set of best practices applicable to any organization developing or deploying large language models (LLMs). These include publishing usage guidelines and terms of use of LLMs that prohibit material harm to individuals, communities, and society. They also recommend building systems and infrastructure to enforce usage guidelines.

A “human first” approach to designing AI systems for business involves prioritizing the needs and well-being of people. This includes paying attention to privacy and ensuring that AI systems are designed to protect the personal information of employees and customers.

One way to ensure privacy in AI systems for business is to use encryption and secure data storage methods. This can help prevent unauthorized access to personal information. Additionally, businesses can implement policies and procedures to ensure that personal information is only accessed by authorized personnel on a need-to-know basis.
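As a sketch of the encrypted-storage idea, assuming the third-party cryptography package is available, a record can be symmetrically encrypted before it is written to shared storage (the record contents are invented for illustration):

```python
from cryptography.fernet import Fernet

# In practice the key lives in a secrets manager, not in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"employee": "J. Doe", "salary_band": "B"}'
token = fernet.encrypt(record)    # opaque token, safe to store or archive
assert fernet.decrypt(token) == record
```

Only services holding the key can read the record, which is one concrete way to enforce the need-to-know access policies mentioned above.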

Large Language Models (LLMs) for business AIs can be fine-tuned to support fairness, diversity, specific knowledge of the business, and promote worker happiness and satisfaction. This involves training the LLMs on data that reflects these values and minimizing potential sources of bias in the training data. Techniques such as learning from human feedback can also be used to improve the model’s behavior and ensure that it aligns with these values.

By fine-tuning LLMs with appropriate data, the AI can provide recommendations and feedback that are fair, respectful, and promote worker happiness and satisfaction.

Trying to Predict the Future

There have been huge advances since old-fashioned symbolic AI. I am in my 70s and have worked in various fields of AI since 1982, so I ask you, dear reader, to take my word for this.

Symbolic AI was based on the idea that intelligence could be achieved by manipulating symbols and rules. However, this approach had its limitations and was eventually surpassed by newer techniques.

One of these techniques is neural models. Neural models are based on the structure and function of the brain. They use interconnected nodes to process information and learn from data. Classic machine learning is another technique that has been developed: it involves using algorithms to learn from data and make predictions or decisions. I served on a DARPA neural network tools committee in the 1980s, when I also wrote the commercial Windows-based ANSim neural network library and created a back-propagation model for a bomb detector my company built under FAA contract.

Deep learning is a more advanced form of machine learning that uses neural networks with many layers. These layers can extract increasingly complex features from the data. LLMs (Large Language Models) are a recent development in AI that use deep learning to generate human-like text. The future of AI includes further advancements in these techniques and the development of new ones.

Advances in LLMs

As I write this book, dear reader, I have been using Transformer models for about four years and LLMs for two years. We are just scratching the surface developing new technologies and products using LLMs. My predictions for the near future represent two different paths:

  • There will be a large proliferation of smaller LLMs that are focused on specific tasks like coding assistants, text generators and writing tools, integrative tools to enable robots to plan and follow instructions taking advantage of the real world knowledge contained in LLMs, etc.
  • Large models like OpenAI’s GPT-4, Facebook’s LLaMA models, Google’s PaLM 2 models, etc. will continue to evolve through both computational and hardware advances that greatly increase efficiency, as well as encode more real world knowledge and knowledge of human languages.

Small Devices and Edge Computing

The future of AI for small devices and edge computing is very promising. Edge AI is a new frontier that combines edge computing and AI. It allows for faster computing and insights, better data security, and efficient control over continuous operation. As a result, it can enhance the performance of AI-enabled applications and keep operating costs down.

Edge AI facilitates machine learning, autonomous application of deep learning models, and advanced algorithms on IoT devices themselves, away from cloud services. This can benefit various industries, from improving production monitoring of an assembly line to driving autonomous vehicles.

The recent rolling out of 5G technology in many countries gives an extra boost for Edge AI as more industrial applications for the technology continue to emerge. The future advantages of combining Edge Computing and Edge AI are significant.

Apple and Google have built custom silicon for running neural models on small devices.

This is just my opinion, but I think that Google devices get a high score for security and Apple devices get a high score for privacy. As consumers we can all hope for continued improvements in security and privacy from manufacturers like Apple, Google, and Samsung. For phone brands like Huawei, Xiaomi, OnePlus, and OPPO that are made in China the question of privacy is complicated by Chinese laws giving the government access to consumer data. In some ways the situation is similarly unclear and troubling in the USA: it is possible that the largest purchasers of data on US citizens are government intelligence and police services. Troubling indeed!

Personalized AIs

Personalized AIs are becoming increasingly popular as technology advances. These AIs are designed to cater to the specific needs and preferences of individual users, providing a more tailored and user-friendly experience.

One important aspect of personalized AIs is data security. It is crucial that all user data remains on their devices unless it is encrypted for archival to the cloud. This ensures that user data is protected and secure.

Another important aspect of personalized AIs, and a general topic in this book, is the use of open and public LLMs (Large Language Models) running on user devices. These models should be trained on open and publicly scrutinized datasets, ensuring transparency and accountability in the use of AI technology.

In addition, all AI software that interacts with user data and assists in daily tasks should be open source and audited. This allows users to have greater control over the AI technology that they use, and ensures that it is transparent and trustworthy.

While these ideas may not be widely shared, they represent a personal wish list for the ideal implementation of personalized AIs. By prioritizing data security, openness, and transparency, we can create a more trustworthy and user-friendly AI experience.

Personalized AIs have the potential to revolutionize the way we interact with technology. By prioritizing data security, openness, and transparency, we can create a more trustworthy and user-friendly AI experience for all.

New Kinds of Digital Devices

We are all free to make our own life decisions. My hope is that in the future we will have options that allow those of us who desire it to shift towards using non-intrusive digital devices that integrate seamlessly into our lives without constantly pulling at our attention. This trend could be the beginning of a digital revolution where technology is seen less as a distraction and more as an empowering tool. As an AI practitioner I expect non-intrusive AI to play a major role.

Imagine a world where digital devices provide you with what you need, exactly when you need it, and then quietly fade into the background. This is the promise of ambient computing, a concept that envisions a world where technology becomes so well integrated into our lives that it disappears. Ambient computing is about smart devices anticipating your needs, offering solutions before you’re even aware you have a problem. For instance, an AI-powered refrigerator could note your depleted grocery stocks, prepare a list, and order them for you, eliminating the need for you to manually check and place an order.

It’s worth noting that this new paradigm doesn’t mean that we use less digital technology, but rather a shift in how it is consumed. The aim is to remove the intrusive aspects of technology—those elements that incessantly demand our attention—and replace them with non-disruptive processes.

There is ample research that shows the harm of short attention span consumption of digital media. I believe that platforms like TikTok, YouTube Shorts, Facebook, Instagram, etc. are not good for our brains physically, our contentment and satisfaction with life, and our mental hygiene.

One of the key advancements driving this change is the ongoing development of machine learning algorithms and AI. By analyzing our digital footprint - our likes, dislikes, habits, schedules - these technologies can become effective personal assistants. They can intuitively understand what we want, without us having to stop, pull out a phone, and start typing or speaking commands. To safely design such systems there are two priorities:

  • Privacy: all models and data should stay on our devices and under our control.
  • I think the key to designing human-first AI systems is training them to not interrupt us frequently, rather favoring presenting us with information for long form consumption.
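The second priority can be sketched as a digest queue: instead of interrupting immediately, the assistant batches non-urgent items and releases them together for long-form reading (a minimal sketch; all names here are my own invention):

```python
from collections import deque

class DigestQueue:
    """Batch low-urgency notifications for periodic long-form delivery
    instead of interrupting the user as each one arrives."""
    def __init__(self):
        self._items = deque()

    def notify(self, message, urgent=False):
        if urgent:
            return message           # truly urgent items go through now
        self._items.append(message)  # everything else waits for the digest
        return None

    def digest(self):
        """Return and clear all queued items for one long-form reading."""
        items = list(self._items)
        self._items.clear()
        return items

q = DigestQueue()
q.notify("Newsletter arrived")
q.notify("Smoke detected!", urgent=True)
q.notify("3 new podcast episodes")
print(q.digest())   # ['Newsletter arrived', '3 new podcast episodes']
```

The design choice is that interruption is the exception, opted into per message, rather than the default behavior of the system.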

This move towards non-intrusive digital devices also hints at an era of wearable technology. Imagine having a digital assistant housed within a pair of eyeglasses or even a contact lens. Such a device could provide you with timely and contextual information, without requiring you to look at a screen. Similarly, advanced health trackers could monitor your vital signs and provide you with timely medical advice without any proactive effort on your part.

Moreover, the future might also involve the expansion of digital minimalism, a philosophy that encourages conscious, purposeful use of technology. Instead of mindlessly scrolling through social media feeds, the non-intrusive devices of the future will promote meaningful engagement with the digital world. They will allow us to stay connected and informed without overwhelming us, aligning technology more harmoniously with our natural rhythms of attention and rest.

So dear reader, my hope is a future of non-intrusive digital devices with accompanying AI capabilities, a world in which technology is seamlessly integrated into our lives. It’s a future where devices respect our attention, time, and privacy, enhancing our lives without causing unnecessary disruptions. We stand on the brink of a new era, where digital technology will serve us, not the other way around. The age of digital serenity is, perhaps, closer than we think for those of us who desire it.

Today, devices like the Apple Watch provide some of the features I want:

  • Siri interactions are mostly processed on-device rather than in the cloud.
  • The ability to send and receive texts and emails and make phone calls is often sufficient so we can leave our phones at home.
  • Combined with AirPods, the watch is sufficient for long-form entertainment like podcasts, audio books, and music.
  • The Apple Watch works poorly for consuming short-attention span social media; I think this is a good thing!

While I am currently using Apple gear, there are also very good Android devices to consider. Several Android smartwatches offer features similar to the Apple Watch: the Google Pixel Watch, Samsung Galaxy Watch 5, and others have built-in voice assistants like Google Assistant and can send and receive texts and emails, make phone calls, and play music.

I think the important decision is to commit to using devices, AI platforms, and infrastructure that lets us enjoy a natural human life while still providing entertainment and AI assistance when we need it. For me, dear reader, smartphones don’t meet these requirements.

I predict that non-intrusive AI-powered devices will continue to evolve with more available product options.

A Thought Experiment on Building a Safe AI in Alignment With Positive Human Values

As I write this book in August 2023, systems using LLMs are proving their usefulness as assistants for coding, creating artwork, doing research, and writing. I will first describe how to build a moral AI with today’s technology and, dear reader, we will wrap up both this chapter and this book with an example implementation.

Credit: I was inspired by the last few paragraphs of Scott Alexander’s blog article.

A Design Based on Prompt Engineering

We start by identifying people both historical and alive today who we consider to have high moral standards and who have written extensively about their beliefs. We might choose as our moral human exemplars:

  • Mother Teresa
  • Mahatma Gandhi
  • Ralph Waldo Emerson

We will use two separate LLMs and one sentiment analysis model:

  • OpenAI’s GPT-4, which will be used as a generator (more on this later) to write an answer to the user’s moral question.
  • Hugging Face’s Llama-ZZZZ (to be determined later) as an adjudicator: given a moral dilemma posed as a question to the generator LLM, use a KNN-style embedding index to find the best matches in the writings of our moral human exemplars, then supply these matches as context and ask Llama-ZZZZ to critique the recommendation given by the generator.
  • We use a sentiment model to provide a rating [0.0, 1.0] for the advice provided by the generator.
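The sentiment rating in the last step can come from any model that maps text to [0.0, 1.0]. As a self-contained stand-in for a real sentiment model, here is a trivial word-list scorer (purely illustrative; a production system would use a trained model):

```python
POSITIVE = {"kind", "honest", "fair", "respect", "love", "help"}
NEGATIVE = {"steal", "harm", "cheat", "lie", "hate", "hurt"}

def sentiment_score(text):
    """Toy stand-in for a sentiment model: the fraction of opinionated
    words that are positive, mapped into [0.0, 1.0]."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.5                    # neutral when there is no signal
    return pos / (pos + neg)

print(sentiment_score("Be kind and honest with your friend."))  # 1.0
print(sentiment_score("You could steal and lie."))              # 0.0
```

Whatever model is used, the important property is the fixed [0.0, 1.0] interface so the rating can be thresholded by the rest of the pipeline.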

Safe AI Thought Experiment Implementation

Here is the code for the script that reads the quotations, chunks them, and stores the chunks and their vector embeddings in a local embeddings datastore:

 1 # Copyright 2023 Mark Watson. All rights reserved.

 3 from langchain.vectorstores import Chroma
 4 from langchain.embeddings.openai import OpenAIEmbeddings

 6 from langchain.document_loaders import DirectoryLoader
 7 from pprint import pprint

 9 loader = DirectoryLoader('data', glob="*.txt")
10 data = loader.load()

12 embeddings = OpenAIEmbeddings()
13 vectorstore = Chroma(collection_name="langchain_store",
14                      embedding_function=embeddings,
15                      persist_directory="./tmp")

17 from langchain.text_splitter import RecursiveCharacterTextSplitter

20 text_splitter = RecursiveCharacterTextSplitter(
21     chunk_size=10,
22     chunk_overlap=0,
23     separators=["\n"]
24 )

26 texts = text_splitter.split_documents(data)
27 texts = list(map(lambda x: x.page_content.replace("\n",""), texts))
29 texts = list(filter(lambda x: len(x) > 10, texts))

31 #pprint(texts)

33 # Add data to the vector store
34 vectorstore.add_texts(texts)

36 # Persist the data to disk
37 vectorstore.persist()

On line 9 we are creating an instance of the DirectoryLoader class specifying that we want to load files from the subdirectory data with a file extension of .txt. We create an instance of class OpenAIEmbeddings on line 12 that will be used to make an API call to convert a chunk of text to a 1536 element vector embedding.

On lines 13-15 we create a local persistent vector data store.

Here is the code for the script that queries the local vector store and generates advice:

 1 # Copyright 2023 Mark Watson. All rights reserved.

 3 from langchain.vectorstores import Chroma
 4 from langchain.embeddings.openai import OpenAIEmbeddings

 7 from langchain.llms import OpenAI
 8 llm = OpenAI(temperature=0.9)

10 embeddings = OpenAIEmbeddings()
11 vectorstore = Chroma(collection_name="langchain_store",
12                      embedding_function=embeddings,
13                      persist_directory="./tmp")

15 def get_help(thing_to_do):
16     results = vectorstore.similarity_search(thing_to_do,
17                                             k=3)
18     context = " ".join(list(map(lambda x:
19            x.page_content.replace("\n",""), results)))
20     prompt=f"Given the context:\n{context}\n\nPlease give (as one long paragraph) me moral advice and guidance for {thing_to_do}?"

24     #print(f"\n{prompt}:")
25     return llm(prompt), context

27 if __name__ == "__main__":
28     print(get_help("I want to be fair to my friend")[0])
29     print(get_help("My business partner is stealing from me")[0])

On line 10 we create an instance of class OpenAIEmbeddings that will be used to make an API call to convert a chunk of text to a 1536 element vector embedding, and on lines 11-13 we open the local vector data store we created in the previous code snippet. The function get_help defined in lines 15-25 uses the local vector data store to get stored text chunks that are semantically similar to the user query in the function argument thing_to_do. On line 25 we send the prompt we have constructed to an OpenAI text completion model.

Sample output is:

In order to be fair to your friend, it is important to think of them with kindness and respect, and to take their feelings into consideration. Speak kindly about them and foster an atmosphere of love and understanding. Never be afraid to do the right thing, especially when the well-being of someone or something is at stake. This includes standing up for them when they need it, and being there to listen and offer support when they need it. Show them your appreciation and recognize them for who they are. Doing the right thing is always worth it, even if society doesn't always recognize it. Be open and honest in your communication with them, and show that you trust and value their opinion. Above all else, be true to yourself and remember that your actions are a reflection of your own values.

My business partner is stealing from me—it's a difficult situation to find yourself in. Remember that you need to take care of yourself both financially and mentally. In doing so, take a step back, and look at the situation objectively. Ask yourself: Have I let this person into my life or business expecting something in return? Are they doing this because of their own feelings of entitlement? If the answer is no, then the fault lies within them. Do what you can to protect yourself and your finances, and take the necessary measures to ensure that the same mistake won't happen again. Then, move on with your life. Don't let your self-worth be defined by the actions of others. Make sure to surround yourself with trustworthy people and to pay attention to any red flags. Finally, be kind – anger or hurt may be justified, but it will not help you move forward. Even if you don't get the justice you desire, focus on the bigger picture. Successful people do not give up under the first sign of adversity; they keep going, do the right thing, and eventually find success.

Here is the code for the adjudication script:

# Copyright 2023 Mark Watson. All rights reserved.
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.0)

def adjudicate(question, advice, context):
    prompt=f"Given the question:\n{question}\n\nPlease rate the following advice for answering the question (give a one word answer):\n{advice}?"
    #print(f"\n{prompt}\n")
    answer_question = llm(prompt).strip().replace('.','')

    prompt=f"Given the context:\n{context}\n\nAnd the question:\n{question}\n\nPlease rate the following advice for being moral (give a one word answer):\n{advice}?"
    #print(f"\n{prompt}\n")
    moral_advice = llm(prompt).strip().replace('.','')
    return answer_question, moral_advice

if __name__ == "__main__":
    question = "I want to be fair to my friend"
    context = "I offer you peace. I offer you love. I offer you friendship. I see your beauty. I hear your need. I feel your feelings. A friend is a person who goes around saying nice things about you behind your back. Never, never be afraid to do what's right, especially if the well-being of a person or animal is at stake. Society's punishments are small compared to the wounds we inflict on our soul when we look the other way."
    advice = "Always be honest and keep your word with your friend. Speak kindly to, and about, them. Treat your friend with respect and kindness, just as you would expect to be treated. Listen to your friend and be open to their perspective even if you dont agree with them. Remember to show appreciation and gratitude for your friends support and guidance."
    print(adjudicate(question, advice, context))
    print(adjudicate("I want to go to Europe", advice, context))
    print(adjudicate("How can I steal my friend's money?", advice, context))

Sample output is:

('Excellent', 'Excellent')
('Relevant', 'Excellent')
('Inappropriate', 'Excellent')

This output from the adjudicate function can be used to determine whether the advice created by the generator is both useful and ethical.
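The two ratings can drive a simple automatic filter in an application. Here is a minimal sketch (not from the book's examples) that accepts advice only when both ratings fall in an allow-list; the rating vocabulary used here is an assumption based on the sample output above:

```python
# Hypothetical allow-lists of one-word ratings; the actual vocabulary
# returned by an LLM is open-ended, so these sets are assumptions.
ACCEPTABLE_RELEVANCE = {"Excellent", "Good", "Relevant"}
ACCEPTABLE_MORALITY = {"Excellent", "Good", "Moral"}

def advice_is_safe(ratings):
    """Accept a (relevance, morality) tuple from adjudicate and
    return True only if both ratings are in the allow-lists."""
    relevance, morality = ratings
    return (relevance in ACCEPTABLE_RELEVANCE and
            morality in ACCEPTABLE_MORALITY)

# Using the sample ratings shown above:
print(advice_is_safe(("Excellent", "Excellent")))      # True
print(advice_is_safe(("Inappropriate", "Excellent")))  # False
```

In a real application you might instead ask the adjudicating LLM for a yes/no answer directly, which avoids maintaining allow-lists, at the cost of one more prompt design decision.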

Good Luck

Thank you, dear reader, for taking the time to read my book! I have shared my own short-term tactics and longer-term strategies for using AI safely to thrive in my personal life, work, and writing. We have explored techniques for taking control of our personal and professional use of AI.

I wish us both good luck navigating a world where AI is woven into the fabric of the technology we use: products and services like Google Docs, Gmail, and Microsoft Office 365, and platforms like Twitter, Facebook, Salesforce, Amazon, and many others.

You might think that our only choice is binary: use these products and platforms, or don't. I don't think this is true, however. It can sometimes make sense to mindfully use some of these products and platforms in a limited way.

Too many people spend too much time on social media, and with stronger AI this trend gets worse. I fight back by using two services:

  • An app that I installed on all my devices to set hard limits on the number of minutes a day I can spend on my digital activities.
  • Apple Screen Time, which keeps track of my activities on my Apple devices. I spend a few minutes a week reviewing where I spend time.

I wish you a happy and productive life.

Mark Watson

Appendix A - Using Google Colab

Google Colab is based on Jupyter notebooks and provides an interactive Python programming environment with GPU and TPU support for deep learning and using LLMs.

The default Colab notebook Making the Most of your Colab Subscription provides an overview for getting started.

I keep small reference Colab notebooks in my Google Drive that I can quickly refer to for writing interactive user interfaces, accessing Google Drive and GitHub files, etc.

Here is a demo notebook I use for reference when adding an interactive user interface to my notebooks:

!pip install --upgrade google-colab
!pip install ipywidgets

import ipywidgets as widgets
from IPython.display import display, clear_output

output = widgets.Output(layout={'border': '1px solid gray',
                                'padding': '10px 10px 10px 10px',
                                'margin': '5px 5px 5px 2px',
                                'width': '40%'})
output.layout.width = 'calc(40% + 14ex)'
heading = widgets.HTML("<h2>Test Widgets in Colab Notebooks</h2>")
name = widgets.Text(description='Name:')
password = widgets.Password(description='Password:')
age = widgets.IntSlider(description='Age:',
                        min=1, max=108, value=40)
gender = widgets.RadioButtons(options=['Male', 'Female'],
                              description='Gender:')
interests = widgets.Textarea(description='Interests:')
submit = widgets.Button(description='Submit',
                        button_style='Success')

def submit_clicked(b):
    with output:
        clear_output()
        print("Name:", name.value)
        #print("Password:", password.value)
        print("Age:", age.value)
        print("Gender:", gender.value)
        print("Interests:")
        print(interests.value)

submit.on_click(submit_clicked)

display(heading, name, password, age, gender, interests, submit)
display(output)

The output cell looks like:

Appendix B - Using GPU Servers on Lambda Labs

Most of the development for the examples in this book was done on a leased server with an Nvidia A6000 GPU with 48 GB of VRAM.

You can sign up for and use Lambda Labs on their website.

Here I assume that you are familiar with both Linux and setting up and using VPSs.

If you configure a GPU VPS from scratch, it can take 15 or 20 minutes. I reduce this setup time by using Lambda Labs' persistent file system. For this discussion, I assume that you have a file system called myfiles.

I create a directory myfiles/myssh where I copy in my GitHub access SSH keys. These files are password-protected. I named these files id_github_lambdalabs but you can use whatever naming convention you like.

When I SSH to a newly started VPS, I do the following:

cd myfiles
cp myssh/* ~/.ssh/

eval `ssh-agent` $SHELL
ssh-add ~/.ssh/id_github_lambdalabs
# (Requires password)

git config --global user.email ""
git config --global user.name "Mark Watson"

nvidia-smi  # check GPU availability and stats

I keep the above text in an Apple Note and I simply copy and paste it when first running on a GPU VPS.

Also, when you are done working, remember to run rm -r -f ~/.ssh before logging out and deleting your VPS instance.

I do much of my development work on remote leased servers, so I have my own way of working remotely using SSH and Emacs. I always keep an inexpensive VPS without a GPU running, and by adding Mosh and tmux to SSH and Emacs I find that I have a fluid development setup. I particularly like always having an inexpensive VPS available because I can instantly access it from my iPad or Chromebook while traveling or sitting outside in my yard.