Introduction
This book uses Artificial Intelligence as a training ground for you, the reader. Why?
How we form judgement when results are irreversible is a skill being lost to time. It persists in elite military environments, but has faded from software engineering and computer science education. We assume decisions are reversible. Rollbacks and do-overs are feasible. Decisions are abstracted away. “Best practices” replace reasoning from first principles. This book is for people who know their decisions matter.
This book creates the conditions of an elite environment for you to experience. Not read about, experience. What this book does is so unusual that you will feel uncomfortable from the start. Mastery comes from deliberate practice. Mastery is never easy. Mastery, here, means responsibility and judgement. This path has been traveled before; it awaits your decision to step onto it.
My colleagues at Cray Research, who created unsurpassed, world-class computing systems, would feel right at home here. The difference is that you have something world class at your fingertips: Large Language Models. What is missing is your understanding of LLMs as systems to observe. Your AI skills will sharply improve, but only as a side effect. You will need to think for yourself, and even think about your thinking.
The Wizard’s Lens is your training ground. You will make decisions without knowing the answers. You will practice thinking about systems whose internals are unknown, but whose behavior must be observed and changed.
The post-Soviet Russian scientific tradition has a stronger grounding here; it calls this “formation of thinking” (формирование мышления). This way of thinking was common during the Cold War, including within Cray Research, but was never passed on to younger generations.
If you are looking for affirmations, founder mythology, tips, bullet points, or quotable takeaways, this book is not for you. But if you relate to any of these questions:
- Why do I understand systems but not how experts think?
- Why do I only see successful results and not how engineering decisions were actually made?
- Why does AI suddenly make sense when framed as a system with constraints and observable behavior?
You are in the right place.