AI coding tools can write most of the code now. What they can't do is tell you whether the code is actually right. For a lot of engineers, the first reaction is anxiety: if the code writes itself, what exactly is my job now?
I decided to write the book I'd been looking for and couldn't find.
I'm an engineer who builds systems where reliability isn't optional. Over the past year I've integrated AI agents into my daily work: running parallel sessions, reviewing output, catching the patterns they get wrong, and figuring out where I still make the difference. I've also worked alongside colleagues using AI and seen some of the pitfalls firsthand. This book is everything I learned, written for engineers who want to use these tools seriously without losing the skills that make them valuable.
What you'll learn:
- The technical stack behind AI coding agents. Context windows, MCP, skills, model selection, and how to set up a workspace that actually works. I walk through my own setup as a starting point, so you can build one that fits your preferences.
- How to direct agents effectively. The verification loop that turns a one-shot code generator into a self-correcting system (a minimal sketch of that loop follows this list). How corrections compound into your instruction files so the same mistake never happens twice. And when to take back the controls and dive deeper into the details (not necessarily by writing code!).
- Why fundamentals matter more now, not less. AI gets the syntax right most of the time, but it doesn't understand the concepts underneath. Memory management, network behavior, system design: understanding the layer beneath your daily work is the real differentiator.
- Novel workflows most engineers haven't tried. Automated code review in your CI pipeline (a sketch of that, too, after this list). Using your agent as a codebase explorer. Browser MCPs that give your agent eyes on the actual UI. Learning with your agent, not just building.
- The economics nobody talks about. Current pricing is heavily discounted. The productivity data is more mixed than the marketing suggests. What companies are actually doing, and how to think about the cost honestly.
- How to hire and get hired in this new world. I think take-home exercises should be 10x harder than three years ago, and candidates should use whatever AI tools they want. What matters is whether the result is clean and well-architected, and whether the candidate can walk you through the tradeoffs. I cover both sides of the table: how to assess whether someone can direct these tools well, and how to build a portfolio that shows engineering judgment instead of prompt output.
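Here's the shape of that verification loop as a minimal Python sketch. Everything in it is a stand-in: `pytest` for your test command, `claude -p` for whatever agent CLI you drive, and the function names are mine, invented for illustration. The point is the structure, not the specific tools.

```python
import subprocess

def run_tests() -> str:
    """Run the test suite; return failure output, or '' if everything passes."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return "" if result.returncode == 0 else result.stdout + result.stderr

def ask_agent(prompt: str) -> None:
    """Hand a prompt to the agent in non-interactive mode.
    `claude -p` is one CLI that works this way; substitute your own."""
    subprocess.run(["claude", "-p", prompt], check=True)

def verification_loop(task: str, max_rounds: int = 3) -> bool:
    """Generate, verify, feed the failures back. The loop is what turns
    one-shot generation into a self-correcting system."""
    ask_agent(task)
    for _ in range(max_rounds):
        failures = run_tests()
        if not failures:
            return True  # verified: the tests pass
        # Feed the concrete failure back instead of re-prompting from scratch.
        ask_agent(f"The tests failed with:\n{failures}\nFix the code.")
    return False  # time to take back the controls

if __name__ == "__main__":
    ok = verification_loop("Implement the rate limiter described in TODO.md")
    print("verified" if ok else "needs human review")
```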
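The CI review workflow is similarly small at its core. This sketch, again with `claude -p` standing in for any agent you can call non-interactively, collects the diff a branch introduces and asks for a review; in CI, you'd post the output as a PR comment.

```python
import subprocess

def pr_diff(base: str = "origin/main") -> str:
    """Collect the diff this branch introduces relative to the base branch."""
    return subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def review(diff: str) -> str:
    """Ask the agent for a focused review of the diff."""
    prompt = (
        "Review this diff as a senior engineer. Flag bugs, race conditions, "
        "and missing tests. Be specific: cite file and line.\n\n" + diff
    )
    result = subprocess.run(
        ["claude", "-p", prompt], capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    print(review(pr_diff()))  # in CI, post this as a PR comment instead
```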
I also wrote about what doesn't get talked about enough. Knowledge debt: the understanding that never gets built when engineers ship code they haven't fully read. The version lock-in problem, where AI models trained on old frameworks steer you away from newer, better patterns. And the open questions nobody has answers to yet, like who owns AI-generated bugs when they hit production (that's likely you!).
None of this exists in one place anywhere else. It's first-hand, from daily practice. The tools change fast; the engineering practices, the verification habits, and the way you think about your career are what last.
The engineers who will do best are the ones who stay curious about how things work, even when the AI gives them a working answer. This book is for them.