Email the Author
You can use this page to email Paul LaPosta about The Illegibility Crisis.
About the Book
You are responsible for systems you cannot see. Dashboards are green. AI initiatives sound impressive. Vendors talk about "responsible AI." Then something breaks and you discover: the people who shipped it cannot explain it, no one can trace who decided what, and your only record is a prompt someone typed six months ago.
You do not just have a bug. You have an illegibility problem.
Who this is for
CTOs, VPs, and directors responsible for systems that touch money, care, safety, or legal status. Heads of risk, compliance, and security who show up after incidents. Senior engineers and architects who carry the pager and the guilt. Anyone who knows their title will be in the email if a regulator calls.
If you are responsible for fraud engines, underwriting models, health scheduling, identity verification, or content moderation systems in AI-heavy organizations, you are in scope.
What you get
This book gives you five fracture names (Synthetic Competence, Decision Fog, Ghost Apprenticeship, Knowledge Drift, Promotion Blindness), diagnostic protocols you can run in 30-60 days, and field kits for mapping crown jewel systems, distinguishing real understanding from narrative skill, tracing decisions, and building apprenticeship that survives AI adoption.
You get the Pager Test, crown jewel worksheets, decision tracing protocols, and a 90-day implementation guide. Not philosophy. Instruments.
What this is not
You will not get another "responsible AI" framework, a maturity model, or a productivity fantasy about AI making everyone "10x." You will not get generic principles you cannot test or checklists that change nothing.
The core claim
You cannot lead what you cannot see. In AI-heavy organizations, illegibility is now the default. This book shows you how to build instruments that let you see clearly enough to lead responsibly, starting with the systems where failure would put you on the front page.
If you wake up at 3 a.m. thinking "If this thing misbehaves, will I be able to explain why?" this book is the call you have been avoiding.
About the Author
Paul LaPosta works with VPs of Engineering, CTOs, and senior technical leaders navigating the organizational fractures that AI adoption creates at scale.
He brings 15 years of experience leading cloud infrastructure, platform modernization, and SRE in regulated environments where illegibility carries material consequences. His approach treats conflict and opacity as diagnostic signals rather than problems to smooth over.
This book emerged from watching competent organizations lose the ability to explain their own critical systems as AI tooling accelerated output without preserving understanding. The instrumentation protocols here are field-tested, not academic exercises.
Paul's other work includes Crafting Conflict, a tactical manual for engineering managers, and the Forged Culture Protocol Deck. His leadership writing appears at ForgedCulture.com.
For workshops, speaking, or organizational engagements: info@theherongroupllc.com
All views and content are independent work through Heron Group LLC and do not represent any employer, past or present.
Per Ignem, Veritas (Through Fire, Truth)