Incorrect By Design
Google Cloud pronounces:1
AI hallucinations are incorrect or misleading results that AI models generate.
While the Google Cloud article correctly describes how hallucinations happen, it misses that hallucinations are the result of an AI system working correctly, as designed. The title of this book provides a clear example.
The Coverup
When I suggested the book title The AI Coverup: Are We Really Hallucinating? to Anthropic’s Claude, Claude made a series of mistakes:
- It interpreted “coverup” as meaning some kind of conspiracy.
- It suggested alternate wording to get away from the conspiracy-theory connotation.
Claude generated four paragraphs of analysis based on a flawed assumption (excerpted below). The industry calls that hallucination, even though it is normal and expected behavior. That is, once you shift your perspective to see what happened, the response makes sense.
A word-choice flag: “coverup”
You used “mystery/paradigm-shift” and “lineage/coverup” in framing the enhancement. I want to raise the “coverup” word specifically because Part II does not argue coverup…
“Coverup” implies intent. “Transmission failure plus commercial obscuration” is longer but closer. If Part I uses “coverup” as its central word, it overclaims relative to what Part II then substantiates, and a skeptical reader at the seam will notice the walk-back…
I would suggest: lost lineage or lost transmission where you have been saying coverup. I may be reading your word too literally — if “coverup” is metaphor for you, the question is whether a new reader will read it as metaphor too. I would not bet on that.
What happened? To demonstrate, place a pencil on a table near you. Then lay a piece of paper on top of the pencil so that you can no longer see the pencil. The pencil has been covered up with the piece of paper. That is a coverup.
Claude did not analyze the book title to ask whether I might have meant this literal meaning. A human might have considered, “Coverup as conspiracy does not match the book content. Perhaps the author had a different meaning in mind.”
Claude then confirmed what actually happened: in Claude’s training data, the word “coverup” was usually used to mean an intentional conspiracy to hide something, perhaps illegally. Claude pattern-matched on how the word is used rather than on how it is defined.
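To make “pattern-matching on usage” concrete, here is a toy sketch of my own. It is not how Claude actually works internally; the sense labels and usage counts are invented for illustration. It simply picks whichever sense of a word shows up most often in some body of text, the way a probability-driven system favors the most common interpretation.

```python
# Toy illustration (invented counts, not Claude's internals): choose the
# sense of a word that appears most often in usage, ignoring the
# dictionary definition entirely.
usage_counts = {
    "coverup": {
        "conspiracy to hide wrongdoing": 9_400,        # how the word is usually used
        "something literally covered from view": 600,  # the sense I intended
    }
}

def most_common_sense(word: str) -> str:
    """Return the sense with the highest usage count for the given word."""
    senses = usage_counts[word]
    return max(senses, key=senses.get)

print(most_common_sense("coverup"))
# -> "conspiracy to hide wrongdoing"
```

A frequency-driven choice lands on the common meaning, not the intended one. The dictionary definition never enters the calculation; frequency of use decides.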
Claude’s response contains another trap for the unwary: engaging with material that was generated from a flawed assumption. Claude offered the possibility that “coverup” was a metaphor, one that a new reader might misread. If I were to pursue that point with Claude, rather than steering the conversation to the literal meaning, I would be polluting Claude’s active token context with further discussion chasing the wrong assumption.
This is the specific deployment of Claude described as “Anthropic’s flagship model for enterprise-grade knowledge work.”2
Anthropic’s flagship model failed on a seven-word book title. As expected.
What do I actually mean by “AI Coverup”? That gets to the root of what this book is about, and we will cover that in the next chapter.
The next question is, “Are we really hallucinating?”
The Automation Tool That Tries
In the early 1980s we began to see personal computers at home and on office desks. Computing had come to the masses. Now, with AI, we have a similar situation: AI is becoming available to people at home and in the office.
Right now in 2026, we generally use AI as an automation tool or assistant. We have been using computers as assistants since the 1980s when personal computing became popular. We learned to trust computers to generate correct answers. Spreadsheets, for example, should produce the same answer every time, given the same inputs.
We have been doing this for so long (40 years or more) that we all understand “same inputs should produce same result.” Technically, we trust these tools to be deterministic.
AI works differently. The trouble is that nobody taught us that with AI, looks are deceiving. AI does not aim for “the” correct result. Ever. AI aims for the “most probably correct” result. Always.
In the above example, Claude drew on its training data to find the most commonly represented meaning of “coverup” and concluded “conspiracy theory in progress.” Claude then generated four paragraphs of useless recommendations based on that wrong interpretation.
Claude is designed to pattern-match on the most likely meaning. As the human, I can tell Claude to use the phrase as I intended it. I will then get an entirely different response. Same input (the book title), different result (a play on words).
Since we are familiar with computing and automation tools, and AI looks like a more powerful version of those tools, we assume that the same input should produce the same result. We assume that we can trust the response to be correct. Either the experts forgot to tell us otherwise, or we did not hear the message.
Technically, AI (such as GPT and Claude) is probabilistic rather than deterministic. Results are “most probable” rather than “determined correct.”
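For readers who want to see the distinction in code, here is a minimal sketch. It is my illustration, not any vendor’s implementation, and the candidate words and weights are invented. The first function behaves like a spreadsheet: the same inputs always give the same answer. The second behaves like a language model’s sampling step: the answer is drawn from a probability distribution, so the same input can yield different outputs from run to run.

```python
import random

# Deterministic, spreadsheet-style: same inputs, same answer, every time.
def spreadsheet_sum(a: float, b: float) -> float:
    return a + b

# Probabilistic, language-model-style sampling step: the same prompt can
# produce different continuations because the choice is drawn from a
# probability distribution. Candidates and weights are invented.
def sample_next_word(prompt: str) -> str:
    # (this toy ignores the prompt text; a real model would condition on it)
    candidates = ["conspiracy", "concealment", "paper"]
    weights = [0.80, 0.15, 0.05]  # "most probable" wins most, but not all, of the time
    return random.choices(candidates, weights=weights, k=1)[0]

print(spreadsheet_sum(2, 3))            # always 5
print(sample_next_word("coverup ..."))  # usually "conspiracy", sometimes not
```

Run the deterministic function a thousand times and you get the same answer a thousand times. Run the sampling function a thousand times and you get a spread of answers, with the most probable one dominating but never guaranteed.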
Miscategorization
In my view, “hallucination” is an unfortunate choice that stuck. It sounds like a person not thinking correctly. This is why I prefer the engineering terms “failure mode” or “error recovery mode.” AI is a computing system, and computing systems do have failure modes and error recovery modes.
Failure modes and error recovery modes imply incorrect results, but also imply explainable and expected results. I see “hallucination” as a miscategorization of the situation.
Regardless of terminology, we have a blind spot. The blind spot began to develop about thirty years ago, circa 1995. The tools and methods we used before 1995 easily explain what happened to Claude with the book title.
Beginning with the next chapter, we will shift perspectives to see the blind spot. The coverup is not a conspiracy. It is an abstraction layer, like the piece of paper hiding the pencil from sight.
The Key Takeaway
Without reading any further, you have learned something significant: AI provides the response that it determines to be most likely correct.
AI has no internal requirement to produce a correct answer. AI, by design, produces the probably-correct answer. Since AI does not know the difference, it will not warn you that a confident answer might be wrong. This behavior is by design.
1. Google Cloud. “What Are AI Hallucinations?” Accessed April 3, 2026. https://cloud.google.com/discover/what-are-ai-hallucinations.↩︎
2. This is the Poe aggregation platform’s phrasing aimed at distinguishing between different vendor offerings, as of April 23, 2026. Poe’s bot descriptions are independent of official Anthropic terminology. Poe’s version string is “Claude-Opus-4.7”.↩︎