A Short Guide to Fine-Tuning Large Language Models with PyTorch and HuggingFace

About the Book

This book is a practical guide to fine-tuning Large Language Models (LLMs), offering both a high-level overview and detailed instructions on how to train these models for specific tasks. It covers essential topics, from loading quantized models and setting up LoRA adapters to properly formatting datasets and selecting the right training arguments. The book focuses on the key details and "knobs" you need to adjust to effectively handle LLM fine-tuning. Additionally, it provides a comprehensive overview of the tools you'll need along the way, including HuggingFace's Transformers, Datasets, and PEFT packages, as well as BitsAndBytes, Llama.cpp, and Ollama. The final chapter includes troubleshooting tips for common error messages and exceptions that may arise.
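The workflow described above can be sketched, at a very high level, with the HuggingFace APIs the book covers. The snippet below is an illustrative configuration sketch only, not code from the book: the model name, LoRA rank, and training hyperparameters are placeholder assumptions, and a real run needs a GPU and a formatted dataset.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer

# Quantization: load the base model in 4-bit precision via BitsAndBytes (Chapter 2)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for computation
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",                      # placeholder; any causal LM works
    quantization_config=bnb_config,
)

# Adaptation: attach low-rank (LoRA) adapters with PEFT (Chapter 3)
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Training: hand the adapted model to TRL's SFTTrainer (Chapters 4 and 5)
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,                  # placeholder: a formatted HF Dataset
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
)
trainer.train()
```

The book's chapters fill in the "knobs" this sketch glosses over: which quantization data type to pick, which modules to target with LoRA, and how to format and tokenize the dataset before training.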


About the Author

Daniel Voigt Godoy

Daniel has been teaching machine learning and distributed computing technologies at Data Science Retreat, the longest-running Berlin-based bootcamp, for more than three years, helping more than 150 students advance their careers.

He writes regularly for Towards Data Science. His blog post "Understanding PyTorch with an example: a step-by-step tutorial" has reached more than 220,000 views since it was published.

The positive feedback from readers led to an invitation to speak at the Open Data Science Conference (ODSC) Europe in 2019. It also motivated him to write the book "Deep Learning with PyTorch Step-by-Step", which covers a broader range of topics.

Daniel is also the main contributor to two Python packages: HandySpark and DeepReplay.

His professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail, and mobility.

Table of Contents

  • Frequently Asked Questions (FAQ)
    • Why Fine-Tune LLMs?
    • Why This Book?
    • Who Should Read This Book?
    • What Do I Need to Know?
    • How to Read This Book
  • Chapter 0: TL;DR
    • Loading a Quantized Model
    • Setting Up Low-Rank Adapters
    • Formatting Your Dataset
    • Training with HuggingFace
    • Querying the Model
  • Chapter 1: Pay Attention to LLMs
    • Language Models, Small and Large
    • Attention is All You Need
    • No Such Thing As Too Much RAM
    • Flash Attention
    • Types of Fine-Tuning
      • Self-Supervised
      • Supervised
      • Instruction
  • Chapter 2: Quantizing Models
    • Overview of Data Types
    • BitsAndBytes
    • 8-bit Quantization
    • 4-bit Quantization
    • Loading a Quantized Model
  • Chapter 3: Low-Rank Adaptation (LoRA)
    • Fine-Tuning as Adaptation
    • Going Low-Rank
    • PEFT
    • Setting Up Low-Rank Adapters
  • Chapter 4: Formatting Your Dataset
    • Tokenization
    • Special Tokens Galore
      • Padding or Packing
      • Chatterbox Models and the EOS Token
    • Chat Templates
      • ChatML
    • Formatting Functions
    • Text Fields
  • Chapter 5: Training with HuggingFace
    • SFTTrainer
    • Trainer
    • TrainingArguments
      • Optimizers
      • Gradient Checkpointing
    • Training Models
      • Self-Supervised (#1)
      • Instruction (#2)
      • Supervised (#3)
  • Chapter 6: Deploying It Locally
    • Querying Your Model
    • Llama.cpp
    • The GGUF Format
      • Conversion
      • Quantization
    • Ollama
      • Custom Model
  • Chapter -1: Troubleshooting
  • Appendix: Setting Up Your GPU Pod

The Leanpub 60 Day 100% Happiness Guarantee

Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks.

Now, this is technically risky for us, since you'll have the book or course files either way. But we're so confident in our products and services, and in our authors and readers, that we're happy to offer a full money back guarantee for everything we sell.

You can only find out how good something is by trying it, and thanks to our 100% money back guarantee there's literally no risk in doing so!

So, there's no reason not to click the Add to Cart button, is there?

See full terms...

Earn $8 on a $10 Purchase, and $16 on a $20 Purchase

We pay 80% royalties on purchases of $7.99 or more, and 80% royalties minus a 50 cent flat fee on purchases between $0.99 and $7.98. You earn $8 on a $10 sale, and $16 on a $20 sale. So, if we sell 5000 non-refunded copies of your book for $20, you'll earn $80,000.

(Yes, some authors have already earned much more than that on Leanpub.)

In fact, authors have earned over $13 million writing, publishing and selling on Leanpub.

Learn more about writing on Leanpub

Free Updates. DRM Free.

If you buy a Leanpub book, you get free updates for as long as the author updates the book! Many authors use Leanpub to publish their books in-progress, while they are writing them. All readers get free updates, regardless of when they bought the book or how much they paid (including free).

Most Leanpub books are available in PDF (for computers) and EPUB (for phones, tablets and Kindle). The formats that a book includes are shown at the top right corner of this page.

Finally, Leanpub books don't have any DRM copy-protection nonsense, so you can easily read them on any supported device.

Learn more about Leanpub's ebook formats and where to read them

Write and Publish on Leanpub

You can use Leanpub to easily write, publish and sell in-progress and completed ebooks and online courses!

Leanpub is a powerful platform for serious authors, combining a simple, elegant writing and publishing workflow with a store focused on selling in-progress ebooks.

Leanpub is a magical typewriter for authors: just write in plain text, and to publish your ebook, just click a button. (Or, if you are producing your ebook your own way, you can even upload your own PDF and/or EPUB files and then publish with one click!) It really is that easy.

Learn more about writing on Leanpub