Chapter 2 — The 20-Watt Miracle
Your brain runs civilization on a lightbulb. Six architectural features that make twenty watts enough.
"The All is Mind. The universe is mental."
— Hermetic principle, The Kybalion (Three Initiates, 1908, summarizing the seven Hermetic principles)
"The laws of physics do not merely permit — they require — the universe to contain things that compute, evolve, and understand."
— David Deutsch, The Fabric of Reality
"Where you place your attention is where you place your energy."
— Joe Dispenza, Becoming Supernatural
A footnote in a chip designer's deck
In 2025, IBM's NorthPole chip designers published a slide that quietly upended a decade of AI-infrastructure assumptions. The chip, a neuromorphic architecture inspired explicitly by the brain's structure, showed a seventy-two-point-seven-fold energy-efficiency advantage over high-end GPUs for large-language-model inference, and twenty-five-fold for image recognition.1 Intel's Loihi 3 went further: roughly one-thousandfold more energy-efficient for real-time robotics, at a peak load of one-point-two watts.
The slide had a footnote.
The footnote read, in effect: the brain still does this better.
I have been in a lot of Oracle architecture meetings. I have watched proud engineers present systems they spent a year building. I have never seen a roomful of senior engineers as quiet as the moment a brain-inspired chip team admits, in their own slide deck, that the thing they spent five years optimizing toward is still operating in the background, on every desk in the room, at twenty watts per skull.
That is the full story of this chapter.
The brain is the better device. It has always been the better device. We are only now building instruments capable of measuring how much better.
The energy story
Let's get the numbers in front of us cleanly, because the asymmetry is the entire point.
The human brain consumes about twenty watts. That figure has been stable across twenty-some years of careful neuroscience.2 Roughly two percent of body mass, twenty percent of metabolic energy. The energy of a small lightbulb.
A single Nvidia H100 GPU consumes about seven hundred watts. That is one chip in one server in one rack. To train a frontier-class model — GPT-4-tier, Claude-tier, Gemini-Ultra-tier — you need megawatt-hours. To run inference on those models at scale, you need data centers. By late-decade projections, US AI data centers will draw roughly seventeen thousand megawatts from the grid — about one-and-a-half percent of total US electricity demand, dedicated specifically to running artificial intelligence.3
The ratio is not subtle. If we set the brain at one and run the math straight, the energy gap between a single human brain doing what humans do all day — language, vision, planning, social reasoning, motor control, memory consolidation, emotional regulation, dream production — and the engineered systems we use to imitate fragments of it is somewhere between hundreds of thousands and millions of times.
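Running the math straight, using only the figures already quoted in this chapter (twenty watts per brain, seven hundred watts per H100, seventeen thousand megawatts of projected data-center load), takes a few lines:

```python
# Energy comparison using the figures quoted in this chapter.
BRAIN_W = 20                # human brain, watts
H100_W = 700                # single Nvidia H100 GPU, watts
DATACENTER_W = 17_000e6     # projected US AI data-center load, watts

print(H100_W / BRAIN_W)         # one GPU vs one brain: 35.0
print(DATACENTER_W / BRAIN_W)   # the whole fleet vs one brain: 850,000,000
```

One chip already costs thirty-five brains; the projected fleet costs the equivalent of eight hundred fifty million brains, and it still only imitates fragments of what one brain does.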
I want to sit on that for one paragraph.
The most expensive AI infrastructure ever built, designed by the smartest engineers humanity has ever produced, consumes seventeen thousand megawatts to do an impressive subset of what every reader of this book does, in real time, on twenty watts, while breathing and digesting and regulating their internal temperature simultaneously.
This is the device you have been ignoring.
What the ancients meant
The Hermetic tradition, dating back to roughly the first to third centuries of the common era and likely encoding much earlier Egyptian thought, opens with a statement that has been quoted, dismissed, mocked, and rediscovered for two thousand years:
The All is Mind.
In the Vedic tradition, the same insight is encoded more carefully: Atman is Brahman. The individual self and the universal substrate are not, at the deepest level, two different things. The Upanishads — composed roughly between 800 and 200 BCE — treat consciousness not as something the brain produces, but as the substrate in which everything else, including brains, appears.
For most of the modern era, this was treated as pre-scientific mysticism by serious thinkers, and as marketing by less serious ones. The materialist model — consciousness as an emergent epiphenomenon of brain chemistry — was the assumed default in any educated discussion. To take the ancients seriously was to mark yourself as someone who didn't understand the latest neuroscience.
That position is harder to hold in 2026.
Here is what changed. As we have built increasingly sophisticated models of intelligence in silicon, two things have become clear that were not clear before.
The first is that the brain's architecture is enormously more efficient than anything we can engineer, and the gap is not closing — it is, by every honest accounting, widening relative to the scale of intelligence we are now trying to deploy. The materialist would say: the brain is just better-engineered through evolution, and we will eventually catch up. Maybe. But the gap is not the gap of a slightly better algorithm. It is the gap of a fundamentally different substrate, and the engineers building neuromorphic chips know it.
The second is that consciousness — the simple fact that there is something it is like to be the thing reading this sentence — has not yielded to materialist explanation despite a hundred years of trying. It is not for lack of effort. The Hard Problem of consciousness, named by David Chalmers in 1995, is called hard because every plausible reduction to neural activity collapses on inspection.4 We can correlate brain states with conscious experience exquisitely. We cannot derive the experience from the physics.
The ancients, looking at this from the inside, named the substrate first and worked outward. The materialists, looking at it from the outside, started with the physics and ran into a wall they cannot get around. As of 2026, neither side has finished the argument. But the ancient position has stopped being embarrassing for serious thinkers to hold.
This matters for the practical work of this book because the implication is direct: if consciousness is more fundamental than its materialist epiphenomenal reduction allows, then the device inside your skull is doing something that is qualitatively, not just quantitatively, beyond what silicon will ever do.
I do not need you to believe this on faith. The book does not depend on you accepting any particular metaphysics. It just asks you to hold the question open, and to operate the device as if it might be something more than a meat-computer running biological code. Because everyone who has ever operated it that way has reported the same thing: it responds.
The architecture
The brain's twenty-watt efficiency is not accidental. It is structural. Six features, each the result of evolutionary optimization on time-scales that humble the engineering profession.
1. Sparse coding
At any given moment, only one to five percent of your neurons are actively firing.5 The rest are quiet. This is the opposite of how most engineered neural networks work, where every parameter is engaged on every inference. Sparse activation reduces interference between memories, dramatically cuts power consumption, and allows the system to hold an enormous number of distinct patterns without confusion.
This is also why neuromorphic chips — Loihi 3, NorthPole, BrainScaleS-2 — explicitly imitate sparse activation. The thousand-fold efficiency gain over GPUs comes overwhelmingly from this single design choice.
The brain figured this out roughly five hundred million years ago.
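A toy sketch makes the arithmetic of sparsity concrete. This is an illustration of the principle, not a model of any chip or brain: with two percent of units active, an event-driven system does roughly fifty times less work per step than a dense one that touches every unit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100_000

# ~2% of units firing at this instant, per the 1-5% figure above.
activity = rng.random(n_neurons) < 0.02

# A dense network engages every parameter on every inference;
# an event-driven sparse system only propagates the active fraction.
dense_ops = n_neurons
sparse_ops = int(activity.sum())
print(sparse_ops / dense_ops)   # roughly 0.02, i.e. ~50x fewer operations
```

The real savings in neuromorphic hardware also come from skipping memory traffic for silent units, but the headline ratio falls out of this one number.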
2. Predictive processing
The brain is not a passive receiver of sensory data. It is constantly running forward models — predictions about what should be happening next — and only the prediction errors propagate up for full processing. This is why you don't notice the weight of your shirt against your skin right now, why you can drive home without remembering most of the trip, and why a single unexpected sound in a familiar environment grabs your full attention immediately.
Karl Friston's free-energy principle formalizes this in mathematical detail.6 The frontier of AI alignment research is currently importing predictive-processing frameworks wholesale. The brain has been operating this way since at least the appearance of vertebrate cortex.
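The core move, stripped of Friston's mathematics, can be shown in a few lines. This is a deliberately minimal caricature: a running prediction absorbs the expected signal, and only errors above a threshold get flagged for full processing.

```python
# Toy predictive-processing loop: a running forward model absorbs
# expected input, and only large prediction errors propagate "up".
def surprising(samples, prediction=0.0, lr=0.1, threshold=0.5):
    flagged = []
    for x in samples:
        error = x - prediction
        if abs(error) > threshold:   # only big surprises get full attention
            flagged.append(x)
        prediction += lr * error     # the forward model updates either way
    return flagged

# A steady hum is predicted away; the one unexpected spike is flagged.
print(surprising([0.1, 0.1, 0.1, 3.0, 0.1]))   # → [3.0]
```

The shirt against your skin is the stream of 0.1s; the unexpected sound is the 3.0.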
3. Modularity with global broadcast
The brain has highly specialized regions — visual cortex, motor cortex, language areas, the hippocampus for memory — that operate semi-independently. But it also has a global workspace, the mechanism that Global Workspace Theory uses to model awareness itself, in which information becomes available to the entire system.7
The mixture-of-experts approach in modern AI — used openly in models like Mixtral, and reportedly inside several frontier models — is a coarse-grained imitation. The brain's version is, by every measure, more elegant.
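The engineering analogue is easy to sketch. This toy router is an illustration of top-k expert gating in general, not the internals of any named model: each input engages only its top two of eight specialists, and only their output is broadcast onward.

```python
import numpy as np

# Toy mixture-of-experts gating: each input activates only its top-k
# experts, a coarse analogue of specialized regions plus selective
# global broadcast.
rng = np.random.default_rng(1)
n_experts, k = 8, 2

gate_scores = rng.random(n_experts)     # the router's affinity per expert
active = np.argsort(gate_scores)[-k:]   # only the top-k experts run

print(sorted(active.tolist()))          # indices of the 2 experts engaged
```

Six of the eight experts burn no compute on this input, which is the whole point.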
4. Plasticity
The brain rewires itself constantly. Hebbian plasticity — neurons that fire together wire together — has been observed at every level from individual synapses to cortical reorganization. AI models, by contrast, are trained once and then frozen; "continual learning" without catastrophic forgetting remains an unsolved problem at scale.
The brain doesn't have this problem. It learns continuously across an entire lifetime without losing what it knew before. Joe Dispenza's body of work on neuroplasticity — "the moment you decide your future, the future does not exist as it once did"8 — is the experiential side of what neuroscience labs measure as long-term potentiation.
You are not stuck with the brain you woke up with.
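Hebb's rule itself is almost embarrassingly simple to write down. This is the textbook form of the update, shown as a toy, not a biophysical model: a connection strengthens in proportion to coincident pre- and post-synaptic activity.

```python
# Minimal Hebbian update: neurons that fire together wire together.
def hebbian_step(w, pre, post, lr=0.25):
    # The weight grows only when pre- and post-synaptic activity coincide.
    return w + lr * pre * post

w = 0.0
for _ in range(4):
    w = hebbian_step(w, pre=1.0, post=1.0)   # four coincident firings
print(w)   # → 1.0

print(hebbian_step(0.0, pre=1.0, post=0.0))  # → 0.0, no coincidence, no wiring
```

Everything from habit formation to long-term potentiation is, at this level of abstraction, repeated applications of that one line.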
5. Sleep replay and memory consolidation
During non-REM sleep, the hippocampus produces sharp-wave ripples — fifty-to-hundred-millisecond bursts that replay the day's experiences at roughly twenty times normal speed, transferring fragile short-term memories into the more stable cortical storage.9 REM sleep handles emotional and procedural integration. The architecture is so important that AI labs are now copying it; "experience replay" is what they call it, and it is currently the most promising approach to continual learning.
The implication for daily life is unromantic and direct: the night is when you become. The working hours are when you collect material. The integration happens while you sleep. We will return to this in Chapter 7.
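The AI version of this architecture is a replay buffer, and a minimal sketch shows the shape of the idea (the class and names here are illustrative, not from any particular library): the day's episodes go into a store, and training interleaves random old and new samples so that new learning does not overwrite old.

```python
import random

# Toy experience replay: episodes accumulate during the "day", and
# interleaved random replay consolidates them without catastrophic
# forgetting, loosely analogous to hippocampal replay during sleep.
class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.episodes = []

    def store(self, episode):
        self.episodes.append(episode)
        if len(self.episodes) > self.capacity:
            self.episodes.pop(0)   # oldest memories fade first

    def replay_batch(self, k=4):
        # A random mix of old and new is what protects prior learning.
        return random.sample(self.episodes, min(k, len(self.episodes)))

buf = ReplayBuffer()
for event in ["breakfast", "meeting", "argument", "insight", "walk"]:
    buf.store(event)
print(buf.replay_batch(3))   # three of the day's episodes, shuffled
```

The hippocampus does this at twenty times real speed, unprompted, every night.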
6. The vagus nerve and the gut
The brain in your skull is not the only neural tissue in your body. Your gut contains roughly five hundred million neurons — the enteric nervous system — and the vagus nerve carries roughly eighty to ninety percent of its signal up from gut to brain, not the other direction.10 What you have been calling "intuition" or "gut feeling" is, mechanically, real signal traveling up the vagus from a second neural network that has been processing in parallel.
This was named in pre-modern traditions long before it was measured. The Greek thumos, the Vedic manipura (the solar-plexus chakra), the simple Western expression "trust your gut" — all of these were pointing at signal that we now know is literal.
These are six of the architectural features that make twenty watts enough. There are more. But this is enough to make the point: the device you are running is not approximately as good as the silicon. It is qualitatively different and, on most dimensions, qualitatively better.
The catch is that almost nobody operates it deliberately.
What "operate it deliberately" actually means
Here is what makes this chapter practical instead of poetic.
The twenty-watt brain runs on a metabolic budget. Every cognitive activity — focusing, deciding, suppressing impulses, processing emotion, even just resisting a distraction — draws on the same shared pool of neural resources. This is why decision fatigue is real, why a long day of meetings leaves you feeling physically drained even if you sat in a chair the whole time, and why high-leverage thinkers protect their morning hours obsessively.
You can measure this. Roy Baumeister's classic ego-depletion research has been re-litigated, but the mechanism — that prefrontal regulatory work has a metabolic cost — has held up.11 What has been refined is the understanding that belief about depletion modulates the actual depletion. If you treat your cognitive budget as infinite, you crash. If you treat it as a precious limited resource and protect it accordingly, you can run the device near its peak for decades.
This sounds like productivity advice. It is not.
It is the foundational competence of the Golden Age operator, and the entire ancient contemplative tradition is, viewed from this angle, a five-thousand-year-long study in how to manage the device's energy budget so that it has cycles available for the things that actually matter.
Marcus Aurelius writing notes to himself before dawn was managing his energy budget. The Buddhist morning meditation is energy-budget management. The Stoic evening review — what did I do today, what did I leave undone, what shall I do tomorrow — is energy-budget management. The Vedic four ashramas, the four life stages, are a multi-decade strategy for budgeting cognitive energy across an entire human life.
Once you see this, the practices stop looking spiritual and start looking like operations. They are the disciplines of someone who understands that the most precious thing they have is twenty watts of well-directed attention, and that twenty watts of well-directed attention can do things ten million watts of badly-directed attention cannot.
What this chapter is asking of you
For the rest of today, do one thing: track your brain's energy budget.
Not formally. Not with an app. Just notice.
Notice when your attention sharpens. Notice when it dulls. Notice what you ate before each. Notice what conversation, what notification, what thought drained you. Notice what charged you.
Most people have never done this even once. They have been spending their twenty watts the way a teenager spends their first credit card — with no awareness of the underlying budget, no idea where it goes, and chronic surprise when it runs out.
You are now aware that there is a budget. That awareness, on its own, will start to change how you operate before the day is over.
This is the foundation of every chapter that follows. The book makes one assumption — that you are willing to operate the device deliberately. Today is the first day of doing that.
Practice
Tonight, before sleep, write three lines:
- The single moment today when the device was at its sharpest.
- The single moment when it was at its dullest.
- One small change I will make tomorrow to charge it earlier and drain it less.
That is the entire practice. Three lines. Two minutes. Ninety days from now you will have ninety entries, and the pattern they reveal will be more useful than any productivity book you have read.
Hand-off
The ancients didn't have an MRI. They didn't have Karl Friston's free-energy principle. They didn't have the sharp-wave-ripple data or the neuromorphic chip benchmarks.
They had something better.
They had three thousand years of empirical experimentation on the device, with no other technology powerful enough to compete for their attention. They figured out, by trial and error, by transmission lineage, by sustained practice — what the device can do.
In Chapter 3 we go to school with them.
Footnotes
1. IBM NorthPole performance figures from IBM Research disclosures (2024–2025), summarized in research/ai-neuroscience-2026/KEY_CONCEPTS.md. Specific numbers: 256 cores with co-located memory, 72.7x energy efficiency for LLM inference, 25x for image recognition. ↩
2. Brain energy consumption of approximately 20 W is established neuroscience and is the figure used by every credible source from the textbooks of David Eagleman to MIT's neuroscience curriculum. ↩
3. US AI data center load projections vary by source, but the ~17,000 MW figure for late-decade projections is a conservative midpoint as of 2026. ↩
4. David Chalmers, "Facing Up to the Problem of Consciousness," Journal of Consciousness Studies (1995). The Hard Problem framing has been the dominant philosophy-of-mind reference point for thirty years. ↩
5. Sparse coding — the brain has 1–5% of neurons active at any moment — is documented across the neuroscience literature and summarized in research/ai-neuroscience-2026/KEY_CONCEPTS.md, "Adaptive Intelligence." ↩
6. Karl Friston, "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience (2010). The mathematical formalization of predictive processing. ↩
7. Bernard Baars, A Cognitive Theory of Consciousness (1988), and the subsequent Global Workspace Theory literature. Used as one of the four primary theoretical frameworks for assessing AI consciousness in current research. ↩
8. Joe Dispenza, Becoming Supernatural, verified entry in data/book-reviews.ts. ↩
9. Sharp-wave ripples in the hippocampus during NREM sleep, with replay at ~20x normal speed: documented across the neuroscience literature; summarized in research/ai-neuroscience-2026/KEY_CONCEPTS.md, "Memory Consolidation." ↩
10. The enteric nervous system contains approximately 500 million neurons; the vagus nerve carries roughly 80–90% of its signal afferently (gut to brain). See Michael Gershon, The Second Brain (1998) for the foundational popular treatment. ↩
11. Roy Baumeister, Willpower (with John Tierney, 2011), is in data/book-reviews.ts as a related-reading entry under Atomic Habits. The ego-depletion mechanism has been refined but not falsified. ↩