Unlock the Black Box of Artificial Intelligence.
Are you ready to move beyond simply calling API functions? Neural Networks & Deep Learning with Python Programming is the definitive guide for developers who want to understand the "why" and "how" behind the most powerful AI architectures in the world.
This is not just another theoretical textbook. This is a code-first masterclass. Volume 11 takes you on a journey from the mathematical roots of the artificial neuron to the cutting-edge of Generative AI. You won't just learn how to use PyTorch and TensorFlow; you will learn how to build neural networks from scratch using nothing but NumPy, ensuring you master the calculus of backpropagation and the mechanics of optimization.
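To give a flavor of that from-scratch style (this sketch is illustrative, not taken from the book), here is a single artificial neuron trained with plain NumPy and a hand-derived gradient; the toy dataset, learning rate, and iteration count are arbitrary choices:

```python
import numpy as np

# Toy, linearly separable dataset: the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    p = sigmoid(X @ w + b)       # forward pass
    grad_z = (p - y) / len(y)    # dLoss/dz for cross-entropy + sigmoid
    w -= lr * (X.T @ grad_z)     # chain rule: dLoss/dw = X^T dLoss/dz
    b -= lr * grad_z.sum()       # dLoss/db = sum of dLoss/dz

print(np.round(sigmoid(X @ w + b)))  # approximates [0, 0, 0, 1]
```

The same pattern — forward pass, loss gradient, chain rule, parameter update — is what frameworks like PyTorch automate with autograd.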
What’s Inside This Volume?
Through rigorous theoretical explanations, detailed "from-scratch" implementations, and advanced application scripts, you will master:
- Foundations of Intelligence: Build Perceptrons and Multi-Layer Networks manually to internalize weights, biases, and the Chain Rule.
- Computer Vision Mastery: Architect Convolutional Neural Networks (CNNs) to mimic the visual cortex, mastering padding, pooling, and transfer learning with ResNet and VGG.
- Sequence Modeling: Conquer time-series data with RNNs, LSTMs, and GRUs, solving the vanishing gradient problem.
- The Transformer Revolution: Dissect the "Attention Is All You Need" architecture, building Encoders, Decoders, and Multi-Head Attention mechanisms from the ground up.
- Generative AI & Art: Train Generative Adversarial Networks (GANs) and dive deep into the math behind Diffusion Models to understand how engines like Stable Diffusion function.
- Production Deployment: Learn how to bridge the gap between research and production by exporting models to ONNX and serving them with TorchServe.
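As a small taste of the "from the ground up" builds listed above, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer; the token count, dimension, and random inputs are purely illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted mix of values

# Illustrative shapes: 4 tokens, embedding dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attended vector per token
```

Multi-Head Attention simply runs several projected copies of this operation in parallel and concatenates the results.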
Perfect For: Python developers, data scientists, and ML engineers who are tired of "black box" tutorials and want to possess the deep architectural knowledge required to innovate, debug, and deploy state-of-the-art AI systems.
This book can be read as a standalone volume.
All of the source code is available on GitHub.
Master the math. Write the code. Build the future.
Table of contents
Chapter 1: The Artificial Neuron - Weights, Biases, and Perceptrons
Chapter 2: The Learning Process - Loss Functions and Gradient Descent
Chapter 3: Backpropagation Explained - The Chain Rule in Action
Chapter 4: Building a Neural Net from Scratch (No Frameworks)
Chapter 5: Introduction to PyTorch - Tensors and Autograd
Chapter 6: The Visual Cortex - Convolutional Neural Networks (CNNs)
Chapter 7: Pooling and Padding - Architecture of Modern CNNs
Chapter 8: Image Classification - Building a Classifier for CIFAR-10
Chapter 9: Data Augmentation - Expanding Datasets Artificially
Chapter 10: Transfer Learning - Using Pre-trained Models (ResNet/VGG)
Chapter 11: Sequence Data - Recurrent Neural Networks (RNNs)
Chapter 12: The Memory Problem - LSTMs and GRUs
Chapter 13: The Attention Mechanism - 'Attention Is All You Need'
Chapter 14: The Transformer Architecture - Encoders and Decoders
Chapter 15: Tokenization and Embeddings - Representing Words as Math
Chapter 16: Autoencoders - Compressing and Regenerating Data
Chapter 17: Generative Adversarial Networks (GANs) - Creating Art
Chapter 18: Diffusion Models - Understanding How Stable Diffusion Works
Chapter 19: Model Serving - Exporting to ONNX and Serving with TorchServe
Chapter 20: Deep Learning Mastery - Building a Custom Image Captioning AI
If printed, this ebook would span over 400 pages. Each chapter comprises theoretical foundations, an annotated basic example, an annotated advanced example, and five coding exercises based on real-world scenarios, with complete solutions.
Check out the other books in this series as well.