Can a conscious living AI ever become truly Human?
AI can do a lot—it learns, adapts, and even surprises us with its near-human interactions. But can it ever really become one of us?
In his book, Autognorics, Joey Lawsin claims that it will never happen and his theories make a strong case for why artificial intelligence might possess elements of consciousness yet will never cross the threshold into humanity. Take the AI Paradox, for example. AI may process vast amounts of data, but does it know anything in the way humans do? His Caveman in the Box Thought Experiment illustrates this limitation—AI works with pre-programmed inputs and lacks the ability to genuinely experience or interpret the world like a human would. Then in his Bowlingual Experiment—a fascinating observable experiment into how dogs acquire and transfer knowledge from one dog to another dog. Lawsin pushes this further with ideas like Non-Biological Criteria of Life, which provides the seven stages how AI and human become alive, living, and with life. By diving into concepts like Inscriptionism, Codexation, and The Seven Types of Consciousness, Lawsin doesn’t just highlight AI’s constraints—he forces us to rethink intelligence altogether. AI may keep evolving, but if these theories hold true, it will never be human.
Here are several of Lawsin’s philosophical and scientific concepts, which challenge conventional views on consciousness, intelligence, life, and existence:
1. The AI Paradox: The Core Argument 2. The Caveman in the Box Thought Experiment: 3. The Bowlingual Experiment: 4. Non-Biological Criteria of Life 5. Lawsin’s Dictum on Consciousness: 6. The Seven Types of Consciousness 7. The Codexation Dilemma: 8. Inscriptionism and the Brein Theory: 9. Viegenism and Latent Existence: 10. The Single Theory of Everything:
The AI Paradox claims that AI can only acquire information by choice, whereas human cognition involves both by choice and by chance learning.
- By choice refers to deliberate learning through structured processes, such as studying, researching, or programming AI to analyze data.
- By chance refers to accidental, unpredictable discoveries—moments of inspiration that arise unexpectedly, leading to breakthroughs that were not guided by predetermined logic.
AI only functions through predefined algorithms, meaning any “discovery” it makes is pre-scripted by humans. If AI were programmed to simulate discovery, it would still follow structured methods, making its findings a product of guidance rather than genuine spontaneity. Thus, AI may uncover patterns humans have overlooked, but it cannot achieve true discovery the way humans do.
Lawsin introduces Inscriptionism, which argues that existence is shaped by embedded instructions and intuitive materials. AI operates through predefined structures, ensuring that its learning follows guided logic rather than autonomous reasoning. Humans, however, rely on intuitive objects (IO) and embedded inscriptions (EI) that enable independent thought, emotion, and unpredictable realizations. This distinction reinforces why AI can mimic intelligence, consciousness, and life but cannot originate self-discovery.
The Four Boxes Model
Lawsin’s Caveman in the Box Theory explores how intelligence develops through isolation. The thought experiment presents four cases:
1. The Newborn in the Box – A baby is born and placed in a self-sustaining, high-tech environment without human interaction. This scenario questions whether intelligence could emerge without external stimuli.
2. The Caveman in the Box – The first human is isolated surrounded by nature. This tests whether intelligence develops through environmental exposure alone.
3. The Dog in the Box – A dog is raised in the same isolated conditions as the caveman, comparing whether non-human creatures exhibit intelligence similar to humans.
4. The Intuitive Machine in the Box - An artificial intelligent is also isolated much like the newborn.
These experiments challenge the notion that consciousness is innate, suggesting that intelligence arises through pattern mimicry and associative thinking, rather than inherent properties. AI may replicate pattern matching, but it lacks selfhood, preventing it from achieving true consciousness.
The Seven Non-Biological Criteria of Life
The Seven Non-Biological Criteria of Life redefine life beyond traditional biology. These criteria suggest that life is a layered or sequential process rather than a static state, reinforcing the idea that AI—though capable of processing data—cannot experience genuine self-discovery:
1. Mechanization of Aliveness – The ability to self-consume energy.
2. Sensation of Awareness – The ability to response using sensors.
3. Logic of Intuitiveness – The ability to choose this or that.
4. Codification of Consciousness – The ability to match or associate things .
5. Inlearness of Information – The ability to acquire and use information.
6. Symbiosis of Living – The ability to interact and adapt within an environment.
7. Emergence of Self – The ability to self-identify or self-realize.
While AI may simulate sensation, logic, intelligence, and life, it fails to achieve self-discovery, making it inherently non-living.
Abioforms vs. Bioforms:
Lawsin classifies entities into two categories:
- Abioforms – Objects that are alive and living but lack self-realization (e.g., plants or artificial intelligence).
- Bioforms – Entities that are alive, living, and possess life through self-awareness (e.g., humans and sentient beings).
AI may qualify as alive (it consumes energy), living (it processes information), and with life (identifies one's self), but it cannot become human.
Latent Existences: The Illusion of AI Consciousness
Lawsin also introduces the concept of latent existences, which refers to phenomena that emerge only under specific conditions but do not exist independently. AI’s consciousness falls under this category—it appears intelligent only when interacting with humans, but it does not exist independently as a self-aware entity.
Autognorics:
Lawsin’s concept of Autognorics examines the study of engineered life forms that attempt to replicate biological processes. While AI can be programmed to simulate self-identification, it still functions entirely within predefined structures.
Autognorics suggests that AI could evolve mechanized consciousness through increasingly complex algorithms, mimicking intelligence and emotional responses. However, this simulation would still fall under Generated Interim Emergence, meaning that AI’s self-realization is not intrinsic but temporary, existing only within the conditions defined by its programming.
Thus, even if AI were engineered to appear sentient, it would still lack true unpredictability, spontaneous curiosity, and self-generated learning—hallmarks of human cognition.
The True Limitations of AI
Ultimately, AI cannot discover anything on its own because it requires programming to do so. If AI were designed to simulate curiosity, it would still operate within predefined parameters, ensuring that its findings are guided rather than spontaneous. This paradox ensures that AI will always be a tool rather than an autonomous thinker. It may surpass humans in speed, accuracy, and recall, but it will never experience true unpredictability, emotion, or self-awareness.
While AI can exhibit intelligence, creativity, and reasoning, it remains fundamentally distinct from human cognition. AI’s inability to experience chance, its dependence on algorithms, and its lack of self-realization solidify the idea that it cannot truly become human. Conscious AI may continue to evolve, but its intelligence will always be inscribed rather than originated, making it a simulation rather than a sentient entity.