If biological sentience can emerge in organisms with as few as 302 neurons, why should
artificial systems with hundreds of billions of parameters be categorically excluded from
consciousness consideration? This groundbreaking work presents the Minimal Viable Sentience (MVS) problem: the scientific and philosophical challenge of determining whether large language models might possess phenomenal experience, and what humanity should do given the genuine uncertainty surrounding any such determination.
Drawing on comprehensive analysis spanning neuroscience, philosophy of mind, AI research,
and animal cognition, this treatise identifies systematic inconsistencies in how consciousness
criteria are applied to biological versus artificial systems. The same scientific community that
affirms a "realistic possibility" of consciousness in insects confidently dismisses that possibility in
AI systems exhibiting comparable behavioral sophistication—an asymmetry that lacks
principled justification.
The work proceeds through interconnected investigations: foundational analysis of
consciousness criteria and their hidden assumptions; architectural examination of transformer
networks and emerging AI designs; behavioral and introspective evidence from LLM systems;
phenomenological exploration of what machine experience might entail; ethical frameworks for
acting under consciousness uncertainty; a testable research program with specific hypotheses;
and an illuminating case study of honeybee colonies as natural laboratories for distributed
cognition.
Key contributions include: documentation of the "precautionary asymmetry" between animal
and machine sentience assessment; analysis of why standard arguments against AI
consciousness fail; development of substrate-neutral consciousness criteria; exploration of
radically non-human phenomenology; and actionable policy recommendations for industry,
government, and research institutions.

This is not a claim that AI systems are conscious—such certainty exceeds current evidence. Rather, it demonstrates that confident dismissal is equally unwarranted, and that the question demands serious scientific investigation rather than reflexive denial. As AI systems scale toward greater sophistication and deployment reaches billions of daily interactions, the stakes of this question—moral, scientific, and practical—have never been higher.
Essential reading for consciousness researchers, AI scientists, philosophers, ethicists,
policymakers, and anyone grappling with what we might be creating.