A clear, illustrated guide to large language models, covering key concepts and practical applications. Ideal for projects, interviews, or personal learning.
It's never been easier to build an AI agent—and never been harder to make one that actually works. This book takes you from language model foundations to production-ready multi-agent systems, with the depth to understand what you're building and why it fails.
Master language models through mathematics, illustrations, and code―and build your own from scratch!
Build GPT-2, Llama 3, and DeepSeek from scratch in PyTorch. Every chapter has runnable end-to-end code and loads real pretrained weights. Goes well past where most LLM tutorials stop.
Most books about ChatGPT explain the magic. This one shows you the math. Inside Large Language Models, Volume I takes a curious beginner from "what is an LLM" to a complete, trained GPT, with nothing more than high-school algebra, a working laptop, and a willingness to read carefully. Every formula is walked through by hand. Every line of code comes with a plain-English explanation. By the end you will have built, trained, and run your own transformer from scratch, and you will know exactly what is happening inside. No PhD or data science background required. No prior machine learning needed. Just curiosity and a calculator.
A clear, illustrated guide covering the key concepts and practical applications of large language models. Ideal for projects, interviews, and personal learning.
I wanted to understand how ChatGPT and other large language models (LLMs) really work, so I read a lot of books, watched YouTube videos, asked hundreds of questions, and wrote it all down. This book is the result. If you want to understand how large language models like ChatGPT actually work, from tokens and vectors to transformers and training, this book will explain it in a clear, approachable way.
This book is an illustrated, essential guide for anyone who wants to understand the internal structure and workings of large language models, whether for interview preparation, projects, or pure intellectual curiosity.
Revised for PyTorch 2.x! In 2019, I published a PyTorch tutorial on Towards Data Science and was amazed by the reaction from readers. Their feedback motivated me to write this book to help beginners start their journey into deep learning and PyTorch. I hope you enjoy reading this book as much as I enjoyed writing it.
Learn how to build your own AI application step by step. A hands-on guide to AI development with local LLM inference.
This book is a quick foray into the world of deep learning-based computer vision and abnormal equipment sound detection. Readers are introduced to how easily powerful equipment and product-quality monitoring solutions can be built using sound and visual data.
From quantum and molecules to cells, individuals, organizations, nations, and civilizations—why do clearly defined hierarchical structures emerge? Why does evolution often manifest as a repeating rhythm of "oscillation—stability—aggregation—re-stability"? Why do many systems fail not because of insufficient power or lack of information, but because the "pace of doing things" is wrong?
Master the future of technology with this definitive guide to Modern Data Science. Unlock actionable insights through Analytics, Machine Learning, and Big Data strategies. Perfect for beginners and pros wanting a logic-first approach to data-driven decision making.
AI engines are booming, and the more we work with agentic systems, the clearer it becomes that we need something to make them work at the enterprise level. We are actively exploring ideas around context graphs, decision traces, and explainability: giving agents the ability to make more aware, company-aligned decisions. This matters not only for enterprises, but also for individuals building personal agents. Unfortunately, there is little to no guidance on how to actually build a context graph. I'll explain how to build something like a context graph, and then go beyond it. I deeply believe that to make this work, we need dedicated agentic memory and a set of cognitive processes that genuinely help agents use that memory and learn from experience and data. That's why this book is called Beyond Context Graphs, with a focus on real-life enterprise tasks: helping agents make better decisions and, let's say, hallucinate less.