Deep Learning with PyTorch Step-by-Step
A Beginner's Guide
About the Book
UPDATE (February 23rd, 2022): The paperback edition is available now (the book had to be split into 3 volumes for printing). For more details, please check pytorchstepbystep.com.
UPDATE (February 13th, 2022): The latest revised edition (v1.1.1) was published today to address small changes to Chapters 9 and 10 that weren't included in the previous revision.
UPDATE (January 23rd, 2022): The revised edition (v1.1) was published today - better graphics, improved formatting, larger page size (thus reducing page count from 1187 to 1045 pages - no content was removed!). If you already bought the book, you can download the new version at any time!
If you're looking for a book where you can learn about Deep Learning and PyTorch without having to spend hours deciphering cryptic text and code, and that's easy and enjoyable to read, this is it :-)
The book covers everything from the basics of gradient descent all the way up to fine-tuning large NLP models (BERT and GPT-2) using HuggingFace. It is divided into four parts (a quick, illustrative sketch of that starting point follows the list below):
- Part I: Fundamentals (gradient descent, training linear and logistic regressions in PyTorch)
- Part II: Computer Vision (deeper models and activation functions, convolutions, transfer learning, initialization schemes)
- Part III: Sequences (RNN, GRU, LSTM, seq2seq models, attention, self-attention, transformers)
- Part IV: Natural Language Processing (tokenization, embeddings, contextual word embeddings, ELMo, BERT, GPT-2)
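To give you a taste of that starting point, here is a minimal sketch (an illustration of the approach, not code excerpted from the book) of gradient descent for a simple linear regression done by hand in NumPy, following the same steps listed under Chapter 0; the synthetic data, true parameters, and learning rate are assumptions made for this example.

```python
# Minimal, illustrative sketch: a linear regression fitted with plain
# gradient descent in NumPy (data, true parameters, and learning rate are
# assumptions for this example, not values from the book).
import numpy as np

np.random.seed(42)
true_b, true_w = 1.0, 2.0
x = np.random.rand(100, 1)
y = true_b + true_w * x + 0.1 * np.random.randn(100, 1)

b, w = np.random.randn(1), np.random.randn(1)   # Step 0: random initialization
lr = 0.1                                        # learning rate

for epoch in range(1000):
    yhat = b + w * x                            # Step 1: compute the model's predictions
    error = yhat - y
    loss = (error ** 2).mean()                  # Step 2: compute the loss (MSE)
    b_grad = 2 * error.mean()                   # Step 3: compute the gradients
    w_grad = 2 * (x * error).mean()
    b = b - lr * b_grad                         # Step 4: update the parameters
    w = w - lr * w_grad                         # Step 5: rinse and repeat!

print(b, w)  # both should end up close to true_b and true_w
```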
This is not a typical book: most tutorials start with some nice and pretty image classification problem to illustrate how to use PyTorch. It may seem cool, but I believe it distracts you from the main goal: learning how PyTorch works. In this book, I present a structured, incremental, and from-first-principles approach to learning PyTorch (and we get to the pretty image classification problem in due time).
Moreover, this is not a formal book in any way: I am writing this book as if I were having a conversation with you, the reader. I will ask you questions (and give you answers shortly afterward) and I will also make (silly) jokes.
My job here is to make you understand the topic, so I will avoid fancy mathematical notation as much as possible and spell it out in plain English.
In this book, I will guide you through the development of many models in PyTorch, showing you why PyTorch makes it much easier and more intuitive to build models in Python: autograd, dynamic computation graph, model classes and much, much more.
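To make those features a bit more concrete, here is a minimal sketch (my own illustration, not code from the book) of the same linear regression rewritten using PyTorch's autograd, an nn.Module model class, an optimizer, and a loss function.

```python
# Minimal, illustrative sketch: the same linear regression, now using PyTorch's
# autograd, a model class, an optimizer, and a loss function (the values here
# are assumptions for the example, not the book's code).
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(42)
x = torch.rand(100, 1)
y = 1.0 + 2.0 * x + 0.1 * torch.randn(100, 1)

class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)   # learns both weight and bias for us

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()
optimizer = optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(1000):
    model.train()
    yhat = model(x)
    loss = loss_fn(yhat, y)
    loss.backward()        # autograd walks the dynamic computation graph
    optimizer.step()       # gradient descent step, handled by the optimizer
    optimizer.zero_grad()

print(model.state_dict())  # the learned weight and bias
```

Notice that there is no manual gradient formula anywhere: the call to loss.backward() traverses the computation graph that PyTorch builds dynamically during the forward pass.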
We will build, step-by-step, not only the models themselves but also your understanding as I show you both the reasoning behind the code and how to avoid some common pitfalls and errors along the way.
I wrote this book for beginners in general - not only PyTorch beginners. Every now and then I will spend some time explaining some fundamental concepts which I believe are key to having a proper understanding of what's going on in the code.
Maybe you already know some of those concepts well: if that's the case, you can simply skip them, since I've made those explanations as independent as possible from the rest of the content.
Reader Testimonials

Mahmud Hasan
Machine Learning Engineer at Micron Technology, Smart Manufacturing and AI
I am usually really picky in choosing books about ML/DL but I have to tell you, this book was one of the best books I have ever invested in. I cannot thank you enough for writing a book that gives so much clarity on the explanations of the inner workings of many DL techniques. Thank you so much and I hope you come up with even better books on other ML topics in the future.

Nipun Nayan Sadvilkar
Lead Data Scientist & Author, DL & NLP Workshop
As an author myself who's co-authored two books in the Deep Learning & NLP space, I'm extremely impressed by Daniel's step-by-step pedagogical approach. Starting with a toy problem and gradually building abstractions on top of each other massively helps beginners understand the nuts and bolts of each model and neural architecture, be it basic or advanced! Daniel has justified the "step-by-step" part of the title in a true sense. Highly recommended! 💯
Table of Contents
- Preface
- About the Author
- Frequently Asked Questions (FAQ)
- Why PyTorch?
- Why this book?
- Who should read this book?
- What do I need to know?
- How to read this book?
- What’s Next?
- Setup Guide
- Official Repository
- Environment
- Google Colab
- Binder
- Local Installation
- Moving On
- Part I: Fundamentals
- Chapter 0: Visualizing Gradient Descent
- Visualizing Gradient Descent
- Model
- Data Generation
- Step 0: Random Initialization
- Step 1: Compute Model’s Predictions
- Step 2: Compute the Loss
- Step 3: Compute the Gradients
- Step 4: Update the Parameters
- Step 5: Rinse and Repeat!
- Chapter 1: A Simple Regression Problem
- A Simple Regression Problem
- Data Generation
- Gradient Descent
- Linear Regression in Numpy
- PyTorch
- Autograd
- Dynamic Computation Graph
- Optimizer
- Loss
- Model
- Chapter 2: Rethinking the Training Loop
- Rethinking the Training Loop
- Dataset
- DataLoader
- Evaluation
- TensorBoard
- Saving and Loading Models
- Chapter 2.1: Going Classy
- Going Classy
- The Class
- The Constructor
- Training Methods
- Saving and Loading Methods
- Visualization Methods
- The Full Code
- Classy Pipeline
- Model Training
- Making Predictions
- Checkpointing
- Resuming Training
- Going Classy
- Chapter 3: A Simple Classification Problem
- A Simple Classification Problem
- Data Generation
- Data Preparation
- Model
- Loss
- BCELoss
- BCEWithLogitsLoss
- Imbalanced Dataset
- Model Configuration
- Model Training
- Decision Boundary
- Classification Threshold
- Confusion Matrix
- Metrics
- Trade-offs and Curves
- Part II: Computer Vision
- Chapter 4: Classifying Images
- Classifying Images
- Torchvision
- Data Preparation
- Dataset Transforms
- SubsetRandomSampler
- Data Augmentation Transforms
- WeightedRandomSampler
- Seeds and more (seeds)
- Putting It Together
- Pixels as Features
- Shallow Model
- Deep-ish Model
- Activation Functions
- Deep Model
- Bonus Chapter: Feature Space
- Two-Dimensional Feature Space
- Transformations
- A Two-Dimensional Model
- Decision Boundary, Activation Style!
- More Functions, More Boundaries
- More Layers, More Boundaries
- More Dimensions, More Boundaries
- Chapter 5: Convolutions
- Spoilers
- Jupyter Notebook
- Convolutions
- Filter/Kernel
- Convolving
- Moving Around
- Shape
- Convolving in PyTorch
- Striding
- Padding
- A REAL Filter
- Pooling
- Flattening
- Dimensions
- Typical Architecture
- A Multiclass Classification Problem
- Data Generation
- Data Preparation
- Loss
- Classification Losses Showdown!
- Model Configuration
- Model Training
- Visualizing Filters and More!
- Static Method
- Visualizing Filters
- Hooks
- Visualizing Feature Maps
- Visualizing Classifier Layers
- Accuracy
- Loader Apply
- Putting It All Together
- Recap
- Chapter 6: Rock, Paper, Scissors
- Rock, Paper, Scissors...
- Data Preparation
- ImageFolder
- Standardization
- The Real Datasets
- Three-Channel Convolutions
- Fancier Model
- Dropout
- Model Configuration
- Model Training
- Learning Rates
- Finding LR
- Adaptive Learning Rate
- Stochastic Gradient Descent (SGD)
- Momentum
- Nesterov
- Flavors of SGD
- Learning Rate Schedulers
- Adaptive vs Cycling
- Chapter 7: Transfer Learning
- Transfer Learning
- ImageNet
- ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
- Transfer Learning in Practice
- Pre-Trained Model
- Model Configuration
- Data Preparation
- Model Training
- Generating a Dataset of Features
- Top Model
- Auxiliary Classifiers (Side-Heads)
- 1x1 Convolutions
- Inception Modules
- Batch Normalization
- Running Statistics
- Evaluation Phase
- Momentum
- BatchNorm2d
- Other Normalizations
- Small Summary
- Residual Connections
- Learning the Identity
- The Power of Shortcuts
- Residual Blocks
- Putting It All Together
- Fine-Tuning
- Feature Extraction
- Extra Chapter: Vanishing and Exploding Gradients
- Vanishing Gradients
- Initialization Schemes
- Batch Normalization
- Exploding Gradients
- Gradient Clipping
- Value Clipping
- Norm Clipping
- Clipping with Hooks
- Part III: Sequences
- Chapter 8: Sequences
- Sequences
- Data Generation
- Recurrent Neural Networks (RNNs)
- RNN Cell
- RNN Layer
- Shapes
- Stacked RNN
- Bidirectional RNN
- Square Model
- Visualizing the Model
- Transformed Inputs
- Hidden States
- The Journey of a Hidden State
- Gated Recurrent Units (GRUs)
- Long Short-Term Memory (LSTM)
- Variable-Length Sequences
- Padding
- Packing
- Unpacking (to padded)
- Packing (from padded)
- Variable-Length Dataset
- Collate Function
- 1D Convolutions
- Shapes
- Multiple Features or Channels
- Dilation
- Chapter 9: Sequence-to-Sequence
- Sequence-to-Sequence
- Encoder-Decoder Architecture
- Teacher Forcing
- Attention
- "Values"
- "Keys" and "Queries"
- Computing the Context Vector
- Scoring Method
- Attention Scores
- Scaled Dot Product
- Attention Mechanism
- Source Mask
- Decoder
- Encoder + Decoder + Attention
- Multi-Headed Attention
- Self-Attention
- Encoder
- Cross-Attention
- Decoder
- Subsequent Inputs and Teacher Forcing
- Target Mask
- Positional Encoding (PE)
- Chapter 10: Transform and Roll Out
- Transform and Roll Out
- Narrow Attention
- Chunking
- Multi-Headed Attention
- Stacking Encoders and Decoders
- Wrapping "Sub-Layers"
- Transformer Encoder
- Transformer Decoder
- Layer Normalization
- Batch vs Layer
- Projections or Embeddings
- The Transformer
- The PyTorch Transformer
- Vision Transformer
- Patches
- Special Classifier Token
- Part IV: Natural Language Processing
- Chapter 11: Down the Yellow Brick Rabbit Hole
- Down the Yellow Brick Rabbit Hole
- Building a Dataset
- Sentence Tokenization
- HuggingFace's Dataset
- Word Tokenization
- Vocabulary
- HuggingFace's Tokenizer
- Before Word Embeddings
- One-Hot Encoding (OHE)
- Bag-of-Words (BoW)
- Language Models
- N-grams
- Continuous Bag-of-Words (CBoW)
- Word Embeddings
- Word2Vec
- Global Vectors (GloVe)
- Using Word Embeddings
- Model I - GloVe + Classifier
- Model II - GloVe + Transformer
- Contextual Word Embeddings
- ELMo
- BERT
- Document Embeddings
- Model III - Preprocessed Embeddings
- BERT
- Tokenization
- Input Embeddings
- Pretraining Tasks
- Model IV - Classifying using BERT
- Fine-Tuning with HuggingFace
- Sequence Classification (or Regression)
- Trainer
- Pipelines
- GPT-2
The Leanpub 60-day 100% Happiness Guarantee
Within 60 days of purchase you can get a 100% refund on any Leanpub purchase, in two clicks.
See full terms
Free Updates. DRM Free.
If you buy a Leanpub book, you get free updates for as long as the author updates the book! Many authors use Leanpub to publish their books in-progress, while they are writing them. All readers get free updates, regardless of when they bought the book or how much they paid (including free).
Most Leanpub books are available in PDF (for computers), EPUB (for phones and tablets) and MOBI (for Kindle). The formats that a book includes are shown at the top right corner of this page.
Finally, Leanpub books don't have any DRM copy-protection nonsense, so you can easily read them on any supported device.
Learn more about Leanpub's ebook formats and where to read them