I met Karen Hao, author of Empire of AI, and here's what happened next...

Why AI’s current path is NOT inevitable, and who benefits from believing it is.

EDITOR’S NOTE

Two rooms, one question

Dear future-proof humans,

Two weekends ago, I walked into the Bangalore Literary Festival expecting two unrelated conversations about AI. One was a panel called Aye, Aye, AI. The other was a discussion of Karen Hao’s book Empire of AI. Different rooms. Different tones.

By the end of the weekend, it was clear they were asking the same question, from opposite ends.

The first room stayed close to ideas. What do we mean when we say artificial intelligence? Why does AGI keep changing its definition? Why do we mistake scale for intelligence? Why do narrow systems quietly work while sweeping promises dominate headlines?

It was technical and reflective, the kind of discussion that makes you realize we may not even agree on what problem we are trying to solve.

Then I walked into the second room.

Karen Hao was not talking about what AI might become. She was talking about what it already is. Who funds it? Who builds it? Who absorbs the costs? Where does the data come from? Where does the labor sit? Where does power concentrate?

The language shifted from models and benchmarks to empires, extraction, and belief systems.

One session showed how the idea of AI gets stretched, blurred, and marketed. The other showed what happens when that blur is weaponized by capital, geopolitics, and myth-making.

We spend a lot of time asking whether AI is intelligent. We spend far less time asking who decides what gets built, at what scale, and for whose benefit.

This edition of Nanobits slows the conversation down just enough to ask one uncomfortable question. Not to argue that AI is good or bad. Not to predict doomsday futures.

What are we actually building, and when did we stop choosing?

TL;DR: What’s in it for you

  • The two sessions at the Bangalore Literary Festival converged on the same point: AI looks very different when you stop listening to promises and start tracing impact.

  • Much of today’s AI progress comes from scaling known techniques, not from a breakthrough in understanding human intelligence.

  • AGI remains deliberately vague, shifting meaning across research, business, and marketing to justify continued expansion.

  • The modern AI industry increasingly behaves like an empire, extracting data, labor, energy, and legitimacy at scale.

  • India is not just a market for AI, but a stress test where the environmental and civic costs of infrastructure become visible.

  • The idea that AI’s current trajectory is inevitable is a narrative, reinforced by capital, not a law of technology.

  • The real question is not what AI can do next, but who gets to decide what should be built and at what cost.

What AI claims to be

The Aye, Aye, AI panel began where most public conversations about AI eventually land, with a definition that refuses to stay put.

As Anil Ananthaswamy explained, AI has never been a single idea. Historically, there were two broad approaches. One tried to encode human knowledge into machines using symbols, rules, and logic. It worked in controlled settings and failed once the world became messy. The other approach, machine learning, flipped the problem. Instead of telling machines how the world works, it asked them to learn patterns directly from data.

Most of what we rely on today comes from this second path. Systems that recognize faces, transcribe speech, or recommend routes are extremely good at one task and poor outside it. This is narrow intelligence.

The recent surge in attention comes from generative AI. Once systems learn statistical patterns at scale, they can produce text, images, or audio that resemble what they have seen before. The behavior feels new, but the underlying logic has not changed as much as the scale has.

This is where AGI enters, and where clarity dissolves.

AGI is usually described as a system that can match human capability across many tasks. The problem is that we do not agree on what human intelligence actually is. Depending on whether we emphasize reasoning, learning, creativity, or adaptability, AGI can feel either close or impossibly distant.

That ambiguity makes AGI a powerful narrative. It shifts meaning depending on the context: research, business, or marketing. It becomes a moving horizon that justifies constant expansion.

The panel did not try to resolve this tension. It simply named it. When definitions stretch, accountability does too.

What AI is actually becoming

If the panel discussion stayed with ideas, Karen Hao’s book discussion shifted the focus to incentives.

Her account of the AI boom begins well before ChatGPT, in a period when ambition ran far ahead of clarity. OpenAI positioned itself as a moral alternative to Big Tech. Open research. Shared benefits. Humanity first.

What Hao observed early on was a familiar pattern. Grand claims paired with vague answers to basic questions. Why this technology. Why this scale. Why now. When pressed on the risks they were racing to prevent, executives gestured toward abstract futures rather than concrete harms.

That gap shaped everything that followed.

Once scale came to be seen as destiny, OpenAI stopped functioning like a research lab and started operating as something else. A capital aggregation engine. Success became less about scientific breakthroughs and more about raising unprecedented amounts of money to fund unprecedented compute.

This is where Hao’s central frame comes into focus. Empire is not a metaphor. It is a description of behavior.

AI companies extract resources they do not own, drawing training data from public and private work without consent or compensation. They rely on invisible labor, with large numbers of workers performing difficult moderation and annotation tasks so systems appear seamless. They also concentrate knowledge production, with most advanced research now housed inside or funded by the same firms building these models.

Workers in Kenya earned starvation wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT.

Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (Penguin Publishing Group, 2025)

Like all empires, they justify expansion through moral narratives. We must build it first. We must move fast. Humanity depends on it.

What makes this unsettling is not malice. Many involved genuinely believe they are doing good. But belief combined with unchecked power carries its own risks.

In this room, AI stopped feeling abstract. It became physical and political.

And the earlier question grew heavier. It is one thing to debate what AI might become. It is another to confront what it is already becoming.

India as the quiet test case

The point where both sessions quietly converged was geography.

India appeared only briefly in the panel discussion, almost in passing. In Karen Hao’s session, it became impossible to ignore. Not because India dominates AI headlines, but because it sits beneath the infrastructure that makes large-scale AI possible.

As AI systems scale, they demand three things relentlessly. Compute. Energy. Water. In the United States, those limits are already visible. Land is constrained. Power grids are stretched. Water tables are under stress. Expansion, inevitably, looks outward.

One cost that rarely gets discussed is water. Large data centers overheat at a scale far beyond personal devices, so they rely on intensive cooling systems. Most of that cooling uses freshwater, often drinking quality water, because anything less pure risks corrosion or bacterial growth in sensitive equipment.

Karen Hao, Author of Empire of AI

India enters this story less as a design partner and more as a hosting ground.

This is not a future risk. It is already playing out.

These facilities are increasingly built in regions that are already water-stressed. India, for instance, has nearly 18% of the world’s population but only about 4% of its freshwater resources. In cities like Mumbai, rising data center demand has even delayed the retirement of coal plants (two plants in Maharashtra, one owned by Tata and the other by Adani), worsening air pollution in communities already facing severe environmental strain.

Karen Hao, Author of Empire of AI

When hyperscale data centers arrive, they compete directly with local needs. The costs rarely appear on corporate balance sheets. They surface instead in water access, air quality, and public health.

What makes this uncomfortable is how absent this context is from celebratory narratives about AI investment. Jobs and growth are highlighted. Tradeoffs remain offstage.

Here, AI stops being theoretical. It becomes civic.

India is not just a market or a talent pool in this story. It is a stress test. And how we respond to that role will say far more about the future of AI than any model benchmark ever could.

The illusion of inevitability

One of the most persuasive stories in the AI moment is that what is happening cannot be stopped.

If systems keep improving, if compute gets cheaper, if competition intensifies, then surely someone will build artificial general intelligence. If not one company, then another. If not one country, then a rival. The conclusion feels obvious, so obvious that it rarely gets questioned.

Both sessions challenged that assumption.

From the panel came a technical reminder. Recent breakthroughs are not the result of a sudden leap in understanding intelligence. They come from scaling known techniques. Bigger models, more data, more compute. The results look dramatic, but scale is a strategy, not proof of destiny.

Karen Hao added the political and economic lens. Scale did not win because it was the only path forward. It won because capital made it the easiest one.

The contrast with China makes this clear. Faced with limits on advanced chips, Chinese firms were forced to optimize. Models like DeepSeek reached comparable performance using far fewer resources, not through magic, but through efficiency techniques that already existed.

The implication is unsettling. The waste was optional.

US companies could have pursued efficiency earlier. They did not, because abundance rewards size and speed. Restraint looks weak when money is easy and narratives celebrate dominance.

This reframes the AGI debate. If most researchers do not believe current systems are on a path to general intelligence, inevitability begins to look less like fact and more like a story we tell to avoid responsibility.

Inevitability is not physics. It is choice.

The question we keep avoiding

Across both sessions, one question kept surfacing without ever being asked directly.

Do we actually need this?

Not in the abstract. Not as a thought experiment. But in the concrete sense of what we are choosing to build, fund, and normalize.

During the panel, a tension lingered. On one hand, these systems are described as brittle, narrow, and far from human intelligence. On the other, they are framed as forces powerful enough to reshape labor, culture, and geopolitics. We move between dismissal and awe, rarely stopping to ask why both narratives coexist.

Karen Hao’s work sharpens this contradiction. When companies market systems as universal problem solvers, they invite misuse they cannot govern. When they frame themselves as humanity’s last defense, they demand trust without accountability. And when they insist that progress requires ever more scale, they quietly move the costs onto communities that never agreed to carry them.

The question, then, is not whether AI can do more. It is whether this particular version of AI, built around concentration and opacity, aligns with what we actually want.

Both sessions pointed toward a quieter alternative. Narrow systems. Clear boundaries. Accountability by design. Tools meant to assist, not replace.

This is not a rejection of ambition. It is a demand for choice.

Because when we avoid the question, someone else answers it for us. And by the time the answer becomes visible, it may already be embedded in infrastructure, contracts, and habits that are hard to undo.

END NOTE

Walking out of the event that weekend, I did not feel alarmed. I felt recalibrated.

Both sessions stripped away a convenient illusion. That AI is something happening to us, rather than something being built through a series of very human choices. Choices about money. About speed. About who gets heard and who absorbs the cost.

What stayed with me was not a fear of machines becoming too intelligent. It was a discomfort with how quickly we outsource judgment. When we accept narratives of inevitability, we stop asking who benefits, who decides, and who pays. We stop noticing when abstraction hides impact.

Neither session argued for rejecting AI. In fact, both suggested the opposite. That care, constraint, and specificity are signs of seriousness, not resistance. That tools designed with limits can be more powerful than systems designed to do everything. That progress does not always look like acceleration.

I keep coming back to a simple idea from the panel discussion. Intelligence is not just about doing more tasks. It is about knowing which ones matter. That applies to humans as much as it does to machines.

If there is a takeaway from holding these two rooms together, it is this. The future of AI will not be decided by benchmarks or roadmaps alone. It will be shaped by what we normalize, what we question, and what we quietly allow to pass as inevitable.

Slowing down to ask better questions is not anti-technology. It is how we stay involved in the story rather than becoming footnotes to it.

That, to me, feels like the real work ahead.

Share the love ❤️ Tell your friends!

If you liked our newsletter, share this link with your friends and ask them to subscribe too.

Check out our website to get the latest updates in AI.
