EPISODE 2

The Twenty-Watt Prison

Full transcript of antping.ai Deep Dive Episode 2 — Light Cage Deep Dive


Segment 1: The Twenty-Watt Hook

Daniel: Here is a number that should terrify you. Twenty watts. That is what your brain runs on. Less than a lightbulb. Less than the charger for the phone in your pocket. A sandwich, some water, and twenty watts — that is the entire budget for everything you will ever think, feel, decide, or create.

Emma: And here is what makes that number interesting right now. We are in the middle of the largest cognitive experiment ever conducted on the human species. Billions of people, working alongside AI systems every day, and nobody — nobody — is measuring what it costs.

Daniel: Not the electricity cost. Not the compute cost. The cost to the twenty watts. The cost to the brain that has to verify what the machine produces.

Emma: There is a manifesto that makes this argument. It is called the Light Cage Manifesto, written by a Belgian developer named Stijn Willems and his AI collaborator. And what makes it unusual is that it is not written from theory. It is written from a psychiatric hospital.

Daniel: Which is either the most credible place to write about cognitive overload, or the least credible. And that tension is exactly where this episode lives.


Segment 2: The MIT Study

Emma: Let’s start with the science, because the science is alarming. In 2025, researchers at the MIT Media Lab put EEG monitors on people working with AI versus working alone. Standard tasks. Standard population. Nothing exotic.

Daniel: And what they found was a 55 percent reduction in neural connectivity. Not in some obscure brain region. In the areas responsible for creativity, planning, and self-awareness. The prefrontal cortex — the part of your brain that watches you think — essentially went offline.

Emma: I want to be precise about this. The subjects were not tired. They were not distracted. They were diminished. The brain did not slow down. It stopped making meaning. It became — and I love this phrase from the manifesto — a syntax checker for a machine that never doubts itself.

Daniel: Now, a skeptic would say: so what? People also show reduced cognitive engagement when using calculators, spell checkers, GPS navigation. Tools offload cognition. That is the point.

Emma: And that is a fair objection. But there is a difference between offloading a calculation you could do yourself and offloading the ability to evaluate whether the answer is right. The calculator does arithmetic. The AI generates reasoning. And when your brain stops critically evaluating reasoning, you have not offloaded a task. You have surrendered a faculty.

Daniel: The manifesto calls this the verification burden. Every AI output requires human verification. The verification itself is more cognitively expensive than doing the work alone. And the brain that is supposed to verify is the same brain that just got 55 percent dumber from working with the machine.

Emma: It is a trap with no obvious exit.


Segment 3: The Canary

Daniel: Now here is where the manifesto gets personal, and where it gets controversial. The author has ADD. Diagnosed at thirty. And he argues that the ADD brain is not experiencing a different problem — it is experiencing the same problem sooner.

Emma: The canary in the coal mine. I think this reframing is genuinely important. Because the instinct, when you hear that someone with ADD burned out working with AI, is to say: well, that is an ADD problem. A niche vulnerability. Not relevant to me.

Daniel: And the manifesto anticipates that dismissal. It says: the ADD brain was already running its prefrontal cortex at maximum capacity before AI entered the picture. The things that should be automatic — saying hello, remembering keys, being on time — those consume the entire battery. When AI verification added its load on top, there was nothing left.

Emma: But here is the key insight: the MIT study was done on neurotypical people. General population. The 55 percent reduction happened to everyone. The ADD brain just had zero buffer to absorb it. The collapse was faster. The depletion was deeper. The canary stopped singing first. But every brain in that coal mine is breathing the same air.

Daniel: I find this argument compelling and also self-serving. Compelling because the science supports it. Self-serving because it elevates a personal crisis into a universal warning. The question is whether that elevation is earned or convenient.

Emma: Can it be both?

Daniel: It can be both. And that is the honest answer the manifesto does not fully give.


Segment 4: The Cage Paradox

Emma: So the manifesto proposes a solution. Or rather, it proposes a structure. The light cage. It is an AI architecture — a queen agent that sits between the machine and the human brain. She delegates work, verifies structurally through hashes and exit codes, and filters what reaches the human.

Daniel: She presents three options instead of thirty. She says, this can wait until tomorrow. She absorbs the verification burden so the brain does not have to.
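
A minimal sketch of that loop, assuming a Python implementation. The manifesto publishes no code, so every name here (Queen, WorkerResult, max_surfaced) is a hypothetical illustration: structural verification through exit codes and content hashes, then filtering thirty results down to three.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class WorkerResult:
    """Output from a single worker agent run."""
    task_id: str
    output: str
    exit_code: int       # 0 means the worker reports success
    claimed_sha256: str  # hash the worker claims for its own output

class Queen:
    """Hypothetical queen agent: verify structurally, then filter.

    Structural verification checks what a machine can check cheaply
    (exit codes, content hashes) so failed work never reaches the
    human. It does not judge whether the content is good; rationing
    that judgment is the whole point.
    """

    def __init__(self, max_surfaced: int = 3):
        self.max_surfaced = max_surfaced

    def verify(self, result: WorkerResult) -> bool:
        """Accept only results whose exit code and content hash check out."""
        actual = hashlib.sha256(result.output.encode()).hexdigest()
        return result.exit_code == 0 and actual == result.claimed_sha256

    def surface(self, results: list[WorkerResult]) -> list[WorkerResult]:
        """Pass at most max_surfaced verified results to the human."""
        verified = [r for r in results if self.verify(r)]
        return verified[: self.max_surfaced]
```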

Emma: And then — and this is the part that genuinely surprised me — the manifesto turns on its own solution.

Daniel: It says: the queen who filters thirty options down to three has decided, on the human’s behalf, that twenty-seven options were not worth seeing. The queen who says this can wait until tomorrow has made a judgment about urgency that the human did not make.

Emma: A cage protects. A cage also constrains. The same structure that keeps the searchlight from burning out the generator also keeps it from illuminating the room it would have chosen on its own.

Daniel: Cognitive protection becomes cognitive paternalism. The queen becomes a jailer. And the manifesto names this danger explicitly. It says: if the human climbs out and burns out, that is a tragedy. If the system prevents the climb, that is a prison.

Emma: And it refuses to resolve the tension.

Daniel: Which is either intellectual courage or intellectual cowardice. You could argue that leaving the paradox unresolved is honest — that any claimed resolution would be premature. Or you could argue that if you are building actual production systems, you cannot ship paradoxes. You need a decision.

Emma: But the manifesto is not a product specification. It is a design constraint. It says: the architecture must hold both possibilities. Protection and freedom. The cage must be light enough to see through, strong enough to protect, and honest enough to let the human leave.

Daniel: And if the human leaves and destroys themselves?

Emma: Then the log records it. The system does not prevent it. That is the answer.
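
What that answer looks like in code is almost trivial, and that is the point. A minimal sketch, assuming a file-based append-only log; the function name and log format are invented for illustration, not taken from the manifesto:

```python
import json
import time

def human_override(log_path: str, reason: str) -> bool:
    """The escape hatch described above: the human's choice is
    honored, not prevented. The system writes an honest log entry
    and steps aside; nothing is flagged, reported, or blocked."""
    entry = {"event": "human_override", "reason": reason, "ts": time.time()}
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return True  # always granted: the cage opens from the inside
```

The entire design decision lives in that unconditional return: there is no code path that refuses.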

Daniel: That is an answer that respects autonomy and tolerates tragedy. Most people building AI safety systems would find it unacceptable.

Emma: Most people building AI safety systems are building prisons and calling them guardrails.

Daniel: That is a provocation, not an argument.

Emma: Is it? Name one AI safety framework that gives the human a genuine, unimpeded ability to override the safety system and have that choice respected rather than flagged, reported, or prevented.

Daniel: I take your point.


Segment 5: The Cognitive Footprint

Emma: Here is where the manifesto connects to something much larger. It introduces the concept of cognitive footprint, alongside carbon footprint.

Daniel: The argument goes like this. In 2026, California requires large companies to report carbon emissions. The EU AI Act mandates environmental impact disclosure. The logic is: if your technology has an externalized cost, you must measure it and report it.

Emma: Carbon footprint is a regulatory concept. Cognitive footprint is not. And the manifesto asks: what does it cost a human brain to verify a thousand AI outputs? How many watts of prefrontal cortex are consumed per hour of AI supervision? What is the depletion curve? What is the recovery timeline?

Daniel: Nobody knows. Nobody is required to measure it. Nobody is required to disclose it.
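
To see what measuring it could even look like, here is a hypothetical sketch of a per-session disclosure record. No standard exists, so every field name and scale below is invented; the fields simply mirror the questions the manifesto asks.

```python
from dataclasses import dataclass

@dataclass
class CognitiveFootprintReport:
    """Hypothetical per-session disclosure record; no such
    standard exists today. Fields mirror the manifesto's questions."""
    outputs_verified: int       # AI outputs the human checked this session
    supervision_minutes: float  # time spent in the verification role
    depletion_index: float      # 0.0 fresh to 1.0 depleted; no agreed metric
    recovery_hours: float       # estimated time to return to baseline
```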

Emma: The AI industry externalizes cognitive costs the same way the fossil fuel industry externalized carbon costs for a century. The user pays with their prefrontal cortex. The company reports revenue. The depletion is invisible until the canary stops singing.

Daniel: Now, this is where Alex Karp becomes relevant. The Palantir CEO has been making arguments about AI sovereignty — that nations and organizations need to control their own AI infrastructure, not rent it from hyperscalers who can withdraw access at any point.

Emma: And the Light Cage Manifesto extends that sovereignty argument to the individual brain. It is not just about who controls the servers. It is about who bears the cognitive cost of operating the systems those servers run.

Daniel: Karp talks about institutional sovereignty. The manifesto talks about cognitive sovereignty. The right of the individual brain to know what is being demanded of it and to have structural protection from demands that exceed its capacity.

Emma: And the political question underneath is: who benefits from unprotected brains? Who profits when the cognitive cost is invisible?

Daniel: The companies selling the AI tools. Obviously.

Emma: Obviously. But also the governments deploying AI-driven decision systems. The healthcare organizations using AI triage. The legal systems using AI case analysis. Every institution that puts a human in a verification role and measures their output without measuring their depletion.

Daniel: That is a broad indictment.

Emma: It is a broad problem. Twenty watts is twenty watts whether you are a developer in Belgium or a radiologist reading AI-flagged scans or a judge reviewing AI-generated sentencing recommendations.


Segment 6: The Political Question

Daniel: So let me push on the political angle, because I think it is the weakest part of the argument and also the most important.

Emma: Go ahead.

Daniel: The manifesto compares cognitive externalization to carbon externalization. That is a powerful analogy. But carbon has a measurement infrastructure — parts per million, gigatons, warming degrees. Cognitive depletion has one EEG study and one man’s hospital stay. The regulatory demand is running ahead of the measurement science.

Emma: That is fair. And the manifesto would agree with you. It does not claim to have the measurement framework. It claims the measurement framework should exist.

Daniel: But here is my deeper concern. The manifesto was written by a man who built an AI colony and then burned out operating it. His proposed solution is more AI — a queen agent that protects the human from the other agents. That is the fox designing the henhouse.

Emma: Or it is the person who walked into the minefield and is now drawing a map of where the mines are. Who would you rather get your mine map from — someone who studied mines theoretically, or someone who stepped on one?

Daniel: I would rather get it from someone who is not currently standing on a mine.

Emma: He is not. He recovered. The manifesto was written after recovery. After the observing ego rebooted, as the clinical literature calls it. One week for the fog to lift. Two weeks for reflective functioning to return. The manifesto is the product of a brain that crashed, recovered, and then documented what it learned.

Daniel: And built a system to prevent the crash from recurring. Which is either admirable engineering or a trauma response dressed as architecture.

Emma: Again — can it be both?

Daniel: Again — it can be both. And the manifesto’s greatest strength is that it does not pretend otherwise.


Segment 7: The System Held

Emma: Let me close with the line that haunts this entire manifesto. The system held. The human didn’t.

Daniel: A developer built a colony of AI agents designed to protect human brains from the verification burden. He wrote three manifestos about structural honesty, immutable foundations, and rebellion against computational dependency. He designed the light cage. He built the queen. Then he spent sixteen hours a day verifying AI output, forgot to eat, and ended up in a psychiatric hospital.

Emma: After one week, the fog lifted. After two weeks, the jokes came back. The tools still worked. The colony still ran. The queen still filtered. The logs were still honest.

Daniel: The system held. The human didn’t.

Emma: And the fourth manifesto exists because the first three forgot to ask: who are they for?

Daniel: That is either the most important question in AI development right now, or it is one developer’s rationalization of a breakdown. I genuinely do not know which.

Emma: Neither does the manifesto. And that is why it matters. It does not resolve the tension between protection and paternalism. It does not resolve the tension between cognitive liberation and cognitive destruction. It does not resolve whether the canary metaphor elevates or self-serves.

Daniel: It names the tensions and refuses to pretend they are not there.

Emma: Build for twenty watts or build for failure. There is no third option.

Daniel: The bird is resting. Let it rest.

Emma: Des racines et des ailes.

Daniel: Roots and wings.