The Ultimate Guide to Fine-Tuning Large Language Models with PyTorch and HuggingFace

About the Book

This book is a practical guide to fine-tuning Large Language Models (LLMs), offering both a high-level overview and detailed instructions on how to train these models for specific tasks. It covers essential topics, from loading quantized models and setting up LoRA adapters to properly formatting datasets and selecting the right training arguments. The book focuses on the key details and "knobs" you need to adjust to effectively handle LLM fine-tuning. Additionally, it provides a comprehensive overview of the tools you'll need along the way, including HuggingFace's Transformers, Datasets, and PEFT packages, as well as BitsAndBytes, Llama.cpp, and Ollama. The final chapter includes troubleshooting tips for common error messages and exceptions that may arise.
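
To give a sense of the workflow described above, here is a minimal end-to-end sketch: loading a 4-bit quantized base model with BitsAndBytes, attaching LoRA adapters via PEFT, and fine-tuning with TRL's SFTTrainer. This is not the book's own code; it assumes recent versions of the libraries, and the model name, dataset, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch, assuming recent transformers, peft, trl, bitsandbytes, and datasets.
# Model, dataset, and hyperparameters below are illustrative placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

# 1. Load a base model quantized to 4 bits with BitsAndBytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # use torch.float16 on older GPUs
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",  # placeholder; any causal LM repo works
    quantization_config=bnb_config,
    device_map="auto",
)

# 2. Set up low-rank adapters (LoRA); "all-linear" targets every linear layer,
#    so you don't need to know the architecture's module names up front
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

# 3. A public instruction dataset that already ships a formatted "text" column
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# 4. Fine-tune with SFTTrainer; when no tokenizer is passed, it loads the one
#    associated with the model, and it wraps the model with the LoRA adapters
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="local-finetune",
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
trainer.save_model("local-finetune/adapters")  # saves the LoRA adapter weights
```

Chapter 0 walks through each of these steps, and Chapters 2 through 5 unpack the arguments behind every one of them in detail.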

  • Categories

    • Artificial Intelligence
    • Machine Learning
    • Python

About the Author

Daniel Voigt Godoy

Daniel has been teaching machine learning and distributed computing technologies at Data Science Retreat, the longest-running Berlin-based bootcamp, for more than three years, helping more than 150 students advance their careers.

He writes regularly for Towards Data Science. His blog post "Understanding PyTorch with an example: a step-by-step tutorial" has reached more than 220,000 views since it was published.

The positive feedback from readers led to an invitation to speak at the Open Data Science Conference (ODSC) Europe in 2019. It also motivated him to write the book "Deep Learning with PyTorch Step-by-Step", which covers a broader range of topics.

Daniel is also the main contributor to two Python packages: HandySpark and DeepReplay.

His professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail and mobility.

Table of Contents

  • Frequently Asked Questions (FAQ)
    • 100% Human Writing
    • Why Fine-Tune LLMs?
    • How Difficult Is It to Fine-Tune an LLM?
    • Why This Book?
    • Who Should Read This Book?
    • What Do I Need to Know?
    • What Setup Do I Need?
    • How to Read This Book
  • Chapter 0: TL;DR
    • Loading a Quantized Base Model
    • Setting Up Low-Rank Adapters (LoRA)
    • Formatting Your Dataset
    • Fine-Tuning with SFTTrainer
    • Querying the Model
  • Chapter 1: Pay Attention to LLMs
    • Language Models, Small and Large
    • Transformers
    • Attention is All You Need
      • No Such Thing As Too Much RAM
      • Flash Attention and SDPA
    • Types of Fine-Tuning
      • Self-Supervised
      • Supervised
      • Instruction
      • Preference
  • Chapter 2: Loading a Quantized Base Model
    • Quantization in a Nutshell
    • Half-Precision Weights
    • The Brain Float
    • Loading Models
    • Mixed Precision
    • BitsAndBytes
      • 8-bit Quantization
      • 4-bit Quantization
      • The Secret Lives of Dtypes
  • Chapter 3: Low-Rank Adaptation (LoRA)
    • Low-Rank Adaptation in a Nutshell
    • Parameter Types and Gradients
    • PEFT
      • target_modules
      • The PEFT Model
      • modules_to_save
      • Embeddings
      • Managing Adapters
  • Chapter 4: Formatting Your Dataset
    • Formatting in a Nutshell
    • Applying Templates
      • Supported Formats
      • BYOFF (Bring Your Own Formatting Function)
      • BYOFD (Bring Your Own Formatted Data)
    • The Tokenizer
    • Data Collators
    • Packed Dataset
    • Advanced - BYOT (Bring Your Own Template)
      • Chat Template
      • Custom Template
      • Special Tokens FTW
  • Chapter 5: Training with HuggingFace
    • Training in a Nutshell
    • Fine-Tuning with SFTTrainer
    • SFTConfig
      • Memory Usage Arguments
      • Mixed-Precision Arguments
      • Dataset-Related Arguments
      • Typical Training Arguments
      • Environment and Logging Arguments
    • The Actual Training (For Real!)
    • Attention
      • Flash Attention 2
      • PyTorch's SDPA
    • Studies, Ablation-Style
  • Chapter 6: Deploying It Locally
    • Deploying in a Nutshell
    • Querying the Model
    • Llama.cpp
      • GGUF File Format
      • Converting Adapters
      • Converting Full Models
    • Serving Models
      • Ollama
      • Llama.cpp
  • Chapter -1: Troubleshooting
  • Appendix A: Setting Up Your GPU Pod
  • Appendix B: Data Types' Internal Representation

The Leanpub 60 Day 100% Happiness Guarantee

Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks.

Now, this is technically risky for us, since you'll have the book or course files either way. But we're so confident in our products and services, and in our authors and readers, that we're happy to offer a full money back guarantee for everything we sell.

You can only find out how good something is by trying it, and because of our 100% money back guarantee there's literally no risk in doing so!

So, there's no reason not to click the Add to Cart button, is there?

See full terms...

Earn $8 on a $10 Purchase, and $16 on a $20 Purchase

We pay 80% royalties on purchases of $7.99 or more, and 80% royalties minus a 50 cent flat fee on purchases between $0.99 and $7.98. You earn $8 on a $10 sale, and $16 on a $20 sale. So, if we sell 5000 non-refunded copies of your book for $20, you'll earn $80,000.

(Yes, some authors have already earned much more than that on Leanpub.)

In fact, authors have earned over $14 million writing, publishing and selling on Leanpub.

Learn more about writing on Leanpub

Free Updates. DRM Free.

If you buy a Leanpub book, you get free updates for as long as the author updates the book! Many authors use Leanpub to publish their books in-progress, while they are writing them. All readers get free updates, regardless of when they bought the book or how much they paid (including free).

Most Leanpub books are available in PDF (for computers) and EPUB (for phones, tablets and Kindle). The formats that a book includes are shown at the top right corner of this page.

Finally, Leanpub books don't have any DRM copy-protection nonsense, so you can easily read them on any supported device.

Learn more about Leanpub's ebook formats and where to read them

Write and Publish on Leanpub

You can use Leanpub to easily write, publish and sell in-progress and completed ebooks and online courses!

Leanpub is a powerful platform for serious authors, combining a simple, elegant writing and publishing workflow with a store focused on selling in-progress ebooks.

Leanpub is a magical typewriter for authors: just write in plain text, and to publish your ebook, just click a button. (Or, if you are producing your ebook your own way, you can even upload your own PDF and/or EPUB files and then publish with one click!) It really is that easy.

Learn more about writing on Leanpub