A Hands-On Guide to Fine-Tuning Large Language Models with PyTorch and Hugging Face
About the Book
Are you ready to fine-tune your own LLMs?
This book is a practical guide to fine-tuning Large Language Models (LLMs), combining high-level concepts with step-by-step instructions to train these powerful models for your specific use cases.
Who Is This Book For?
This is an intermediate-level resource—positioned between building a large language model from scratch and deploying an LLM in production—designed for practitioners with some prior experience in deep learning.
If terms like Transformers, attention mechanisms, Adam optimizer, tokens, embeddings, or GPUs sound familiar, you’re in the right place. Familiarity with Hugging Face and PyTorch is assumed. If you're new to these concepts, consider starting with a beginner-friendly introduction to deep learning with PyTorch before diving in.
What You’ll Learn:
- Load quantized models using BitsAndBytes (see the minimal sketch after this list).
- Configure Low-Rank Adapters (LoRA) using Hugging Face's PEFT.
- Format datasets effectively using chat templates and formatting functions.
- Fine-tune LLMs on consumer-grade GPUs using techniques such as gradient checkpointing and accumulation.
- Deploy LLMs locally in the GGUF format using Llama.cpp and Ollama.
- Troubleshoot common error messages and exceptions to keep your fine-tuning process on track.
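To give you a taste of the workflow, here is a minimal end-to-end sketch of the approach the book walks through in Chapters 2 to 5: loading a 4-bit quantized base model with BitsAndBytes, attaching LoRA adapters through PEFT, and fine-tuning with TRL's SFTTrainer. The model, dataset, and hyperparameter choices below are illustrative assumptions rather than the book's exact configuration, and the snippet assumes recent versions of transformers, peft, trl, bitsandbytes, and datasets on a CUDA-capable GPU.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Illustrative choices; swap in the base model and dataset you actually want.
repo_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Load the base model in 4-bit precision (NF4) using BitsAndBytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # use torch.float16 on older GPUs
)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, quantization_config=bnb_config, device_map="auto"
)

# Configure Low-Rank Adapters (LoRA) so only a small fraction of weights is trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Fine-tune with SFTTrainer; gradient checkpointing and accumulation keep
# memory usage within reach of a consumer-grade GPU.
sft_config = SFTConfig(
    output_dir="tinyllama-guanaco-qlora",
    dataset_text_field="text",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=50,
)
trainer = SFTTrainer(
    model=model,
    args=sft_config,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
trainer.save_model()  # saves the LoRA adapter, not the full base model
```

Adapters saved this way are the starting point for the deployment path covered later in the book: converting to the GGUF format and serving locally with Llama.cpp or Ollama.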
This book doesn’t just skim the surface; it zooms in on the critical adjustments and configuration—those all-important "knobs"—that make or break the fine-tuning process.
By the end, you’ll have the skills and confidence to fine-tune LLMs for your own real-world applications. Whether you’re looking to enhance existing models or tailor them to niche tasks, this book is your essential companion.
Table of Contents
- Frequently Asked Questions (FAQ)
  - Who Should Read This Book?
  - What Do I Need to Know?
  - Why Fine-Tune LLMs?
  - How Difficult Is It to Fine-Tune an LLM?
  - What About RAG?
  - Why This Book?
  - What Setup Do I Need?
  - How to Read This Book
- Chapter 0: TL;DR
  - Loading a Quantized Base Model
  - Setting Up Low-Rank Adapters (LoRA)
  - Formatting Your Dataset
  - Fine-Tuning with SFTTrainer
  - Querying the Model
- Chapter 1: Pay Attention to LLMs
  - Language Models, Small and Large
  - Transformers
  - Attention Is All You Need
  - No Such Thing As Too Much RAM
  - Flash Attention and SDPA
  - Types of Fine-Tuning
  - Self-Supervised
  - Supervised
  - Instruction
  - Preference
- Chapter 2: Loading a Quantized Base Model
  - Quantization in a Nutshell
  - Half-Precision Weights
  - The Brain Float
  - Loading Models
  - Mixed Precision
  - BitsAndBytes
  - 8-bit Quantization
  - 4-bit Quantization
  - The Secret Lives of Dtypes
- Chapter 3: Low-Rank Adaptation (LoRA)
  - Low-Rank Adaptation in a Nutshell
  - Parameter Types and Gradients
  - PEFT
  - target_modules
  - The PEFT Model
  - modules_to_save
  - Embeddings
  - Managing Adapters
- Chapter 4: Formatting Your Dataset
  - Formatting in a Nutshell
  - Applying Templates
  - Supported Formats
  - BYOFF (Bring Your Own Formatting Function)
  - BYOFD (Bring Your Own Formatted Data)
  - The Tokenizer
  - Data Collators
  - Packed Dataset
  - Advanced - BYOT (Bring Your Own Template)
  - Chat Template
  - Custom Template
  - Special Tokens FTW
- Chapter 5: Fine-Tuning with SFTTrainer
  - Training in a Nutshell
  - Fine-Tuning with SFTTrainer
  - SFTConfig
  - Memory Usage Arguments
  - Mixed-Precision Arguments
  - Dataset-Related Arguments
  - Typical Training Arguments
  - Environment and Logging Arguments
  - The Actual Training (For Real!)
  - Attention
  - Flash Attention 2
  - PyTorch's SDPA
  - Studies, Ablation-Style
- Chapter 6: Deploying It Locally
  - Deploying in a Nutshell
  - Querying the Model
  - Llama.cpp
  - GGUF File Format
  - Converting Adapters
  - Converting Full Models
  - Serving Models
  - Ollama
  - Llama.cpp
- Chapter -1: Troubleshooting
- Appendix A: Setting Up Your GPU Pod
- Appendix B: Data Types' Internal Representation