Human vs. AI. Who's winning?
Your brain runs on 20 watts. The AI chatbot answering you needs gigawatts.

EDITOR’S NOTE
Dear Nanobit Readers,
One fine Wednesday evening, a neuroscientist told a room full of people that we still don't truly understand how brains or AI work. Not as a disclaimer. As a starting point. Prof. Upinder Bhalla, one of India's leading computational neuroscientists, has spent 30 years studying the brain mathematically, and that was his opening position. I found that oddly reassuring.
This edition is my attempt to think through what he said, starting with the brain, landing on AI, and sitting with the questions neither field has answered yet. Here's what we'll cover:
Why Turing's 70-year-old argument about AI and human intelligence still holds
How AI got unreasonably good, then hit a ceiling
What neuroscience and AI are still teaching each other
Before that, how does the brain work?
Here is one fact worth sitting with. The protein molecules in your synapses0, the physical structures that store your memories, last only a few days. The memories themselves can last a lifetime. Nobody has fully explained how a system built from such temporary parts holds onto something so permanent. That is the kind of problem researchers, like the professor, have spent years on. Hence, we will approach both the brain and AI with the same posture: deep respect for what we don't yet know.
0 Synapses are the specialized junctions in the brain where neurons (nerve cells) communicate with each other by transmitting chemical or electrical signals. They act as the fundamental functional unit of the nervous system, enabling information processing, learning, and memory by connecting billions of neurons into complex circuits.
The uncomfortable logic of Turing
I will start with the question: What does it actually mean for something to be intelligent?
Alan Turing answered this in 1950 with uncomfortable simplicity. Have a conversation with “something”. If you can't tell whether it's human or machine, the thing at the other end is intelligent. No qualifications. That's exactly how you and I decide whether another person is worth listening to. We talk to them. We lead them down twisting lines of logic and see if they can follow. Turing just formalized what we were already doing.
His second contribution was the Turing Machine, a proof that any computation can be replicated by a simple rule-following system. And here is the claim: the brain performs computation, so in principle, a computer can do what a brain does.
Descartes had a different answer. The mind, he said, is a thinking substance with no fixed location or size. The body is just matter. They are separate things.
Let’s counter this with a simple thought experiment: if you replace one neuron at a time with a silicon equivalent, at what point does the mind detach? The professor has never found a satisfying answer to that in dualism, and neither has anyone else. John Searle, however, spent 50 years arguing otherwise.
Picture this: you're locked in a room. People outside slide pieces of paper with Chinese writing through a slot. You don't understand Chinese, but you have a thick rulebook that tells you exactly which symbols to write back. To everyone outside, you appear fluent. Searle's point was that you're manipulating symbols without understanding any of them, and that a computer does exactly the same thing. His conclusion: syntax is not the same as meaning.
I have to disagree with Searle here, though: synapses do computation too, and the whole system can be intelligent even when no single part is. A computer on its own isn't intelligent. A program on its own isn't either. But when the program runs on the computer, the whole system might be.
I think Searle's argument is already dead. What replaced it is what we're here to talk about.
How AI got unreasonably good
So if the brain computes, and computers can in principle do what brains do, why did it take us so long to get here?
The first artificial neurons were embarrassingly simple. A perceptron1, invented in the 1950s, was just an input layer and an output layer with weights2 in between. It could play noughts and crosses. That was about it.
1 A computer model or computerized machine devised to represent or simulate the ability of the brain to recognize and discriminate.
2 Weights are just numbers that tell the network (a chain of artificial neurons passing information layer by layer) how much attention to pay to each input, and the network adjusts them as it learns.

Think of it like estimating a house price. Size and location are your inputs, each carrying a weight that reflects how much it matters. The network combines them, applies one final weight, and arrives at a predicted price. Change the weights, change the answer.
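If you want to see how little machinery is involved, here is a minimal sketch of that house-price perceptron in Python. The inputs, weights, and bias are invented for illustration; a real network would learn them from data.

```python
# A minimal sketch of the house-price perceptron described above.
# Inputs, weights, and bias are made up for illustration.

def perceptron(inputs, weights, bias):
    """A perceptron is just a weighted sum of its inputs plus a bias."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

house = [120, 0.8]          # inputs: size in square metres, location score (0 to 1)
weights = [900, 250_000]    # how much each input matters
bias = 50_000               # the final nudge applied at the end

print(perceptron(house, weights, bias))  # a predicted price: 358000.0
# Change the weights, change the answer.
```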
Then in the 1980s, backpropagation3 came along, a training algorithm that let networks learn from their mistakes across multiple layers. Things got interesting. Networks could read English text and produce reasonable phonetic approximations. Not because anyone told them the rules of pronunciation. Because they learned from examples.
3 A key algorithm used to train neural networks by minimizing the difference between predicted and actual outputs.
This means you don't program the computer with rules. You show it examples, and it figures the rules out itself. That felt almost human, didn’t it?
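To make "learning from mistakes" concrete, here is a toy sketch of backpropagation: a tiny two-layer network, written with numpy, that learns the XOR pattern from four examples without ever being given the rule. The layer sizes, learning rate, and iteration count are arbitrary choices of mine, not anything from the lecture.

```python
# A toy backpropagation sketch: a two-layer network learns XOR from examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1: 2 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2: 8 hidden units -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Forward pass: the inputs flow through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge every weight against its share of the blame.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```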
But then things stalled. Support vector machines4 arrived in the 1990s and did most of what early neural networks could do, more cleanly. The excitement faded.
4 A support vector machine is a simpler mathematical method that could sort and classify data by drawing the clearest possible boundary between categories, without needing multiple layers of neurons to do it.
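For contrast, this is roughly what that 1990s alternative looks like when run today: a support vector machine drawing one clean boundary through made-up data with scikit-learn. The dataset and parameters below are placeholders, not anything from the original work.

```python
# A support vector machine: one boundary, no layers (toy data via scikit-learn).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # two synthetic clusters
clf = SVC(kernel="linear").fit(X, y)       # find the clearest separating boundary
print(clf.score(X, y))                      # near-perfect accuracy on separable blobs
```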
What changed everything was data. Google and companies like it had accumulated more text, images, and human behavior than anyone had ever seen. Pair that with serious computing power, and you can train networks with dozens, then hundreds, of layers. Deep networks. And then strange things started happening.
Google trained a network on retinal scans, originally to detect diabetic retinopathy5. But since it was Google, they fed it everything they could find. The network learned to predict age, gender, smoking status, blood pressure, and whether someone had suffered a heart attack, all from a photograph of the back of the eye. Nobody told it to look for any of that. It found those signals on its own.
5 Diabetic retinopathy is a condition where high blood sugar damages the tiny blood vessels at the back of the eye and can eventually cause blindness.
Then came style transfer. Take a photograph. Take a painting by Van Gogh. A network can now apply Van Gogh's exact visual style onto your photograph, not as a filter, but by analyzing the correlations between layers of the painting and replicating them onto your image. The content stays yours. The style becomes his.

Image style transfer. The style image from Van Gogh’s Starry Night (b) was transferred to the content image of Golden Gate (a), and the generated image is (c)
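The trick underneath is worth seeing once. In the style-transfer method the figure comes from (Gatys et al.), "style" is measured as the correlations between a layer's feature maps, a Gram matrix, and the generated image is nudged until its correlations match the painting's. The sketch below shows only that core computation; the shapes and function names are mine, not the real network's.

```python
# The core of neural style transfer: style as correlations between feature maps.
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one network layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)     # channel-to-channel correlations

def style_loss(generated_feats, painting_feats):
    """How far the generated image's correlations are from the painting's."""
    diff = gram_matrix(generated_feats) - gram_matrix(painting_feats)
    return float((diff ** 2).mean())

# In the full method this loss is summed over several layers of a pretrained
# network and minimized by gradient descent on the generated image's pixels,
# alongside a content loss that keeps the photograph's structure intact.
```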
These are examples of "unreasonable effectiveness." The phrase is borrowed from a lecture by physicist Eugene Wigner, who once marvelled at how well mathematics describes the physical world. The same logic applies here. These networks were not designed to do what they ended up doing. Nobody built the retinal scan network to detect heart attacks. Nobody taught the style transfer network what makes Van Gogh's brushwork distinct from Klimt's. They found those patterns on their own, buried inside the data.
The military ran into this, too. A neural network was trained to identify tanks in photographs. It worked perfectly in testing. Then someone noticed it was actually classifying sunny-day photos as tanks and cloudy-day photos as non-tanks, because the soldier taking the training pictures had gone out on a sunny day for tanks and a cloudy day for everything else. The network had learned the right answer for entirely the wrong reason. It was unreasonably effective at finding a pattern, just not the one anyone intended. That gap between what we ask these networks to do and what they actually learn to do is both the most exciting and the most unsettling thing about them.
And then, in 2017, everything changed again. The transformer architecture, introduced in a paper that has since gathered nearly a quarter million citations, pushed this further than anyone expected. It gave networks the ability to hold long contexts, focus attention on the relevant parts, and generate language, code, images, and more with a fluency that continues to unsettle people who study this for a living.
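The mechanism at the heart of that paper is small enough to write down. Here is a minimal numpy sketch of scaled dot-product attention, the operation that lets every token score every other token for relevance and re-read itself in context; the token count and dimensions below are arbitrary.

```python
# Scaled dot-product attention, the transformer's central operation.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (tokens, dim) query, key, and value matrices."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # relevance of every token pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: focus the attention
    return weights @ V                               # a context-weighted mixture

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))            # 4 tokens, 8-dimensional representations
print(attention(x, x, x).shape)        # (4, 8): each token, re-read in context
```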
We are in a very exciting era right now! The question is how much longer that excitement holds before we hit the ceiling.
Twenty Watts vs. a billion dollars
Here is the number that should bother you. A modern AI data center burns gigawatts of power. Your brain runs on 20 watts. That is roughly the same as a dim bedside lamp. And yet your brain has more synapses than the best AI systems have memory units, learns a new skill from a handful of examples, and has never once needed to ingest the entire internet to hold a conversation.
So what is actually going on?
The brain is not faster than a computer. It is dramatically slower. Electrical signals in neurons fire in milliseconds. Modern chips operate in nanoseconds. The brain loses that race by a factor of a million. And yet it wins almost everything else. It recognizes a face in a crowd, catches a cricket ball mid-flight, recalls a half-forgotten memory from twenty years ago, and does all of this simultaneously, on 20 watts, without crashing.
The difference is not raw speed. It is how the brain uses time. Where computers fight against delay, the brain builds delay into its computation. Timing is not a bug. It is part of the calculation.
But the more noteworthy gap is this one. AI systems have now consumed every book, article, forum post, and webpage that humans have ever put online. The training cost for a single large model has crossed the billion-dollar mark. And still, a five-year-old can learn what a chair is from seeing three examples. An AI needs thousands of labeled images to get close.
And even when we do learn, we don't decide rationally. Man is a rational animal who is always annoyed when called upon to act according to his reason. We decide based on emotion, pressure, and whatever is in front of us. Rational thinking is just one tool, and apparently an optional one.
Yann LeCun, one of the architects of modern deep learning, has been making this point loudly. We do not need more data. We need smarter ways of learning from less. Humans do not become intelligent by reading the entire internet. We read a handful of textbooks, talk to a few people, make mistakes, and adjust. The internet is a tiny fraction of what humans have ever experienced, and experience itself (touch, smell, movement, and failure) is barely represented in any training set at all.
To put it simply, the next frontier is not more data. It is better architecture: systems that can learn the way a child does, quickly, from sparse input, and carry that forward.
AI has eaten everything on the table. The question now is what comes next.
Where the wheels come off
AI has eaten everything on the table, I mean, the internet. So what happens when it starts eating itself?
Researchers have already looked at this. When AI models train on content generated by other AI models, the quality collapses. Not gradually. The models degrade fast, losing the texture and variation that came from genuine human experience. Researchers call it model collapse, a kind of death spiral. The internet is already filling with AI-written text, AI-generated images, and AI-summarized articles. The next generation of models will train on that. Then the generation after will train on the generation before. At some point, you are no longer learning from the world. You are learning from a photocopy of a photocopy of a photocopy.
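You can watch the photocopy effect in a toy simulation: fit a simple distribution to some data, sample new data from the fit, and repeat. The setup below is a deliberately crude illustration of the idea, not a model of any real training pipeline.

```python
# A photocopy-of-a-photocopy simulation: each generation trains only on the last.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)   # the original "human" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()    # "train" on whatever data exists now
    data = rng.normal(mu, sigma, size=20)  # a small synthetic corpus replaces it
    if generation % 50 == 0:
        print(f"generation {generation}: spread = {data.std():.4f}")
# The spread collapses toward zero: every copy keeps a little less of the
# variation the previous one had.
```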
The scaling problem compounds this. Sam Altman and others built their entire bet on a simple idea: more data, more compute, more layers, better results. That held for years. It is holding less now. The latest models are improving, but the returns are shrinking relative to the resources being thrown at them. I believe that we are approaching saturation, not there yet, but close enough to feel it.
So where does that leave us?
The honest answer is that nobody fully knows. But there are directions worth watching. Spiking neural networks6 are one. Where standard networks pass continuous numbers between neurons, spiking networks communicate the way biological neurons do, in occasional discrete pulses. They are theoretically far more efficient. On smaller scales, they have already shown stronger recall and higher precision than standard networks for certain tasks. The problem is that they have never been scaled up anywhere near the levels that transformers have. The engineering gap is enormous.
6 Standard neural networks pass continuous streams of numbers between neurons, like water flowing through a pipe. Spiking neural networks work differently. They communicate the way biological neurons do, firing occasional sharp pulses and staying quiet the rest of the time, which makes them far more efficient and much closer to how your brain actually operates.
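To get a feel for the difference, here is a minimal leaky integrate-and-fire neuron, the usual building block of a spiking network, in plain Python. The constants are made up, chosen only to make the behaviour visible.

```python
# A leaky integrate-and-fire neuron: silent most of the time, firing in pulses.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """inputs: a list of input currents, one per time step."""
    voltage, spikes = 0.0, []
    for current in inputs:
        voltage = voltage * leak + current   # integrate the input, but leak charge
        if voltage >= threshold:             # cross the threshold...
            spikes.append(1)                 # ...fire a single discrete pulse
            voltage = 0.0                    # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.0, 0.6, 0.6, 0.0, 0.0]))
# [0, 0, 0, 0, 1, 0, 0, 0]: information carried in the timing of rare pulses
```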
The more immediate bet, and the one I find more promising, is algorithms. Silicon has fifty years of engineering behind it. It is not going anywhere. What needs to change is how we think about learning itself. Not systems that inhale the internet, but systems that read a textbook, ask a question, get something wrong, and adjust. Systems that treat sparse data as a feature, not a limitation.
That is a hard problem. It is also a wide-open one.
The brain and AI spent decades fighting. Now they need each other more than ever.
Here is the strange thing about this whole conversation. Neuroscience and AI have spent decades talking past each other. In the early days, neural network researchers and AI experts were openly at war. Rosenblatt, who built the perceptron, and Minsky, who tore it down, were not colleagues with a disagreement. They were rivals with a grudge. Rosenblatt took it hard enough that some believe his death in a boating accident a few years later was no accident.
And yet, somewhere along the way, the two fields stopped fighting and started borrowing from each other. What we now call AI is, almost entirely, neural networks. The expert systems and symbolic logic that defined early AI are mostly gone. The thing that won was inspired by the brain.
The borrowing goes both ways, and that is what makes this interesting.
AI gave neuroscientists a new language. The concept of attention, developed for transformers, turned out to map surprisingly well onto how the brain retrieves relevant information from deep memory. The idea that you can build a working model of the world by accumulating enough weighted connections, something neural networks demonstrated clearly, gave neuroscientists a plausible mechanism for how the brain generalizes from experience to things it has never encountered before. Higher mathematics, for instance, is not something the brain evolved to do. And yet it manages. Weighted connections across neurons might be part of why.
The brain does not establish causality the way science does. It builds predictive models of the world from everything it senses. It works from correlations, shaped by billions of years of evolution into something that roughly reflects reality. That works well enough most of the time. It also explains why humans believe in astrology. LLMs approximate this through statistical patterns in language alone. Language turns out to cover a surprisingly wide range of human experience, wide enough to build a rough world model from. But touch, smell, movement, and failure are barely in there at all.
Neuroscience, in turn, is pointing AI toward something it badly needs. The brain does not wire every neuron to every other neuron. It uses sparse, structured connections. It builds in timing. It hardwires certain things, like recognizing facial expressions, through millions of years of evolution, and learns everything else quickly from very little data. AI systems that try to replicate this, spiking neural networks, architectures that learn from sparse input, and models that separate hardwired priors from learned experience are still in their early stages. But the direction is clear.
Neither field has the full picture. But the fact that each field keeps finding useful things in the other is itself a signal. The brain and AI are not the same thing. They are more like signposts pointing at each other, each one saying, "Look over here, this might be how it works."
End Note
Professor Bhalla ended the evening with the thought: We are at the point where AI can already beat you on a narrow Turing test on a specific topic. Its grasp of facts is unbelievable. And yet you can still do a bit better than the AIs. Use it while you've got it.
I'm not sure whether that was reassuring or a countdown.
If that made you uncomfortable, here is something about free will. The brain is not deterministic. At the molecular level, individual chemical events are governed by probability, not fixed outcomes. But that does not prove free will exists either. Daniel Dennett [a prominent American philosopher and cognitive scientist known for his materialist views on consciousness, free will, and evolution] calls it an illusion, a story we construct after the fact to convince ourselves that what we did was our own idea. I do not disagree.
But the question I keep turning over is this: if the brain is just computation, and computation can run on silicon, at what point does the difference between the two stop mattering? And if it never stops mattering, what exactly is the thing that makes it matter?
The professor didn't answer that. I don't think anyone can yet. But I suspect the people who sit with that question longest will be the ones who figure out what comes next.
If you liked our newsletter, share this link with your friends and request them to subscribe too.
Check out our website to get the latest updates in AI