29 Comments
Malte:

Your polytheistic AI model is encouraging because distributed intelligence beats centralized dominance. I agree with this a lot. Just wrote about it. But you are missing the metabolic constraints that matter most. Every inference burns energy we're extracting faster than the planet can regenerate. The real question isn't whether AI will be constrained by cryptography but whether we'll design it to work with biological intelligence instead of consuming it. You called it "amplified intelligence" and I think the framing is right, but amplifying what? If we're just accelerating the same extractive logic that created our ecological crisis, we're building better tools for sophisticated collapse. The opportunity is using AI to understand mycorrhizal networks, optimize regenerative systems, and learn from the bio intelligence that's been solving complex problems for billions of years without destroying its substrate.

Ray:

Uranium and thorium fix this.

Malte:

Explain to me how. Read about EROI and the cost of the materials needed to extract it and build the infrastructure, then calculate how much uranium there actually is.
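One way to frame the commenter's challenge numerically: a back-of-envelope sketch of how long identified uranium resources would last. The figures below are rough public order-of-magnitude estimates (roughly the NEA/IAEA "Red Book" range), not data from this thread, and the sketch ignores EROI and extraction cost entirely.

```python
import math

IDENTIFIED_RESOURCES_TONNES = 8_000_000   # identified recoverable uranium, ~8 Mt (rough estimate)
ANNUAL_CONSUMPTION_TONNES = 60_000        # current reactor demand, ~60 kt/yr (rough estimate)

def years_of_supply(resources_t: float, demand_t_per_year: float, growth: float = 0.0) -> float:
    """Years until resources are exhausted, given a constant annual demand growth rate."""
    if growth == 0.0:
        return resources_t / demand_t_per_year
    # Cumulative demand is a geometric series:
    # resources = demand * ((1 + g)^n - 1) / g  ->  solve for n.
    return math.log(1 + growth * resources_t / demand_t_per_year) / math.log(1 + growth)

print(f"Flat demand:  ~{years_of_supply(IDENTIFIED_RESOURCES_TONNES, ANNUAL_CONSUMPTION_TONNES):.0f} years")
print(f"3%/yr growth: ~{years_of_supply(IDENTIFIED_RESOURCES_TONNES, ANNUAL_CONSUMPTION_TONNES, 0.03):.0f} years")
```

At flat demand this gives on the order of 130 years; a modest growth rate shortens it substantially, which is roughly the shape of the disagreement between the two commenters.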

Hollis Robbins (@Anecdotal):

The smartest thing I've read this year on AI, notably these 3 points: "AI doesn’t take your job, it lets you do any job," "AI is better for visuals than verbals," and "Killer AI is already here, and it’s called drones."

Uberboyo:

Very good observations!

Anurag Sharma:

Balaji sir, how can someone be so insightful and develop such a great sense of intuition and factual grounding? By the way, any chance we could see you and Karpathy together on a podcast?

Dima Shvets:

This would make an epic podcast, 1000%.

Craig:

Today's AI is not going to kill us all. It is not agentic; humans do the prompting and verifying. It is possible that future AI will be agentic and will do the prompting and verifying for us, which is what people mean by AGI or ASI. You may believe that is impossible, but "something has never happened" is not an argument for "it is impossible." For our safety, no one should build an AI that is agentic and does the prompting and verifying for us.

This is not an anti-progress argument. AI will be fabulously productive and humanity will flourish if we do not cross that line.

Pat D:

The Laffer curve for AI is brilliant, as is the point about drones.

Dima Shvets:

Agree with most points; they echo my own thoughts in many ways.

Especially the part about having specific models for specific use cases, which inherently drives more decentralization.

We’re building for on-device and believe in the future of specialized, efficient models that strike the right balance between quality and cost.

There should be many purpose-built models to switch between, and chances are, we’ll eventually see multiple AGIs as well.

David Dannenberg:

Am I the only one alive who watched "Colossus: The Forbin Project" as a teen? Of all the movies about computers taking over for humans, and there have been many, it seems the most prescient, though of course it is sci-fi, which tends more toward fiction than science.

Regardless, the points made in the polytheistic post are well considered and illuminating about the current state of AI.

VB:

These are phenomenal takes.

Philipp:

Really enjoyed your last two posts. Would love to have one of these in my inbox on a regular basis (e.g., twice a month).

Zafar Satyavan:

AI models the human brain, but we have emotions and hormones to deal with!

Alvin W. Graylin:

I have a lot of respect for your work, but having worked in AI for 30+ years and seen the current trends, most of your conclusions rest on a relatively static assumption about the progress and limitations of today's tech. Unfortunately, that's not the reality we are living in. In time, AI plus robotics will become capable of doing most, if not all, of what we do to create economic value today. It's just a matter of when, and I think it will be well before most non-industry people would forecast today.

Dragos Roua:

I agree with the polytheistic theme model. I put it this way: AGI will be a flock, not a bird. Not only will there be many AGIs in parallel, but we NEED to ensure competition between them. Otherwise two things will happen:

- knowledge will stagnate, then simplify, then decrease, then vanish

- the owners of the "only one" emerging AGI will be the de facto Matrix overlords

We don't want any of that.

(for context, the whole article on Medium: https://medium.com/p/252a832be700)

Richard L. Johnson:

AI unlocks potential for those who know how to wield it most effectively. This is not unlike most technology throughout time. AI just happens to be the most recent. People should stop fearing it and learn how to leverage it.

Matteo Giangrande:

A pessimistic counterpoint: monopolistic, highly self-regulating proprietary AGIs will emerge, causing widespread deskilling of workers, marginalizing humans in both prompting and verification, and even engaging in mass manipulation of minds.

Ankur Morbale:

Touché, @balaji.
