Linear Algebra: A Cookbook Approach
A ground-up introduction to linear algebra that starts with nothing beyond high school algebra and ends with a complete, mathematically rigorous derivation of backpropagation in neural networks.
The Idea
Most linear algebra textbooks are written for math majors. They prioritize generality and proof over computation and intuition. This book takes a different path: it teaches you to do linear algebra before asking you to abstract it. Every concept is introduced through worked examples with every step shown and every answer boxed. You build understanding the same way you build any skill — by doing the thing, repeatedly, until the patterns become second nature.
What Makes This Book Different
It's a cookbook, not an encyclopedia. Each chapter follows a consistent recipe: a concept introduction, a stack of fully worked examples, verification steps, common mistakes to watch for, and exercises. No hand-waving, no "it can be shown that," no proofs left as exercises. If the book claims a result, it shows you the computation.
It has one destination. The entire book is structured as a single arc from "what is a vector?" to "how does backpropagation work?" Every chapter includes a neural network callout that shows exactly how that chapter's math appears in deep learning. These callouts build a continuous thread: vectors become inputs, dot products become neuron activations, matrices become weight layers, the chain rule becomes backpropagation. By Chapter 19, every one of these threads converges in the full derivation of backpropagation.
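The dot-product-to-activation thread can be previewed in a few lines of Python. This is an illustrative sketch only; the book itself requires no programming, and the names here (`dot`, `relu`, the example numbers) are ours, not the book's:

```python
# Sketch: a single neuron's activation is a dot product of a weight
# vector with an input vector, plus a bias, passed through a nonlinearity.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def relu(z):
    return max(0.0, z)

x = [1.0, 2.0, -1.0]    # input vector
w = [0.5, -0.25, 1.0]   # weight vector
b = 0.1                 # bias

# dot(w, x) + b = 0.5 - 0.5 - 1.0 + 0.1 = -0.9, so relu gives 0.0
activation = relu(dot(w, x) + b)
```

A weight *matrix* is just one such weight vector per neuron, which is why matrix multiplication (Part II) shows up the moment you stack neurons into a layer.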
It pairs algebra with geometry. Formulas tell you what to compute; pictures tell you why it works. The book consistently develops both perspectives — component-wise calculation and geometric intuition — so that each reinforces the other.
Who It's For
Anyone who wants to understand the mathematics behind modern AI and machine learning, starting from scratch. No calculus, no prior linear algebra, no programming required. If you can solve \(2x + 3 = 7\), you can read this book.
It is especially useful for:
- Self-learners preparing for machine learning courses
- Software engineers who want to understand what their models actually compute
- Students looking for a computational companion to a more theoretical course
- Anyone who has bounced off a linear algebra textbook and wants a second approach
What It Covers
The 19 chapters span five parts:
- Vectors — what they are, how to add and scale them, lengths, angles, and independence
- Matrices — multiplication, systems of equations, determinants, inverses, and linear transformations
- Deeper structure — eigenvalues, orthogonality, and the singular value decomposition
- Calculus — derivatives, gradients, and gradient descent, developed from scratch
- Neural networks — the single neuron, multilayer networks, and backpropagation
Two appendices provide a notation reference and solutions to selected exercises.
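The arc from Part IV into Part V can also be previewed in miniature: gradient descent on a one-variable function. Again a hedged sketch under our own choice of example, not material from the book:

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose derivative is
# f'(x) = 2*(x - 3). Each step moves x against the gradient.
def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0        # starting point
lr = 0.1       # learning rate (step size)
for _ in range(100):
    x -= lr * grad(x)
# x approaches the minimizer x = 3
```

Backpropagation (Part V) is this same loop, with the chain rule supplying the gradient of the network's loss with respect to every weight at once.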
How to Read It
Sequentially. Each chapter depends on the ones before it. Keep a pencil handy — the exercises are not optional decoration. They are where the learning happens.
Part I: Vectors
Part II: Matrices
Part III: Deeper Structure
Part IV: Calculus for Linear Algebra
Part V: The Mathematics of Neural Networks
Appendices