The smartest thing I've read this year on AI, notably these 3 points: "AI doesn’t take your job, it lets you do any job," "AI is better for visuals than verbals," and "Killer AI is already here, and it’s called drones."
Your polytheistic AI model is encouraging because distributed intelligence beats centralized dominance. I agree with this a lot. Just wrote about it. But you are missing the metabolic constraints that matter most. Every inference burns energy we're extracting faster than the planet can regenerate. The real question isn't whether AI will be constrained by cryptography but whether we'll design it to work with biological intelligence instead of consuming it. You called it "amplified intelligence" and I think the framing is right, but amplifying what? If we're just accelerating the same extractive logic that created our ecological crisis, we're building better tools for sophisticated collapse. The opportunity is using AI to understand mycorrhizal networks, optimize regenerative systems, and learn from the biological intelligence that's been solving complex problems for billions of years without destroying its substrate.
Uranium and thorium fix this.
Explain to me how. Read about EROI and the cost of materials to extract and build the infrastructure. Then calculate how much uranium is actually there.
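(For reference, EROI is just energy returned divided by energy invested. A toy back-of-envelope sketch of the calculation being asked for, with placeholder numbers only, not real reserve or cost data:)

```python
# Toy EROI arithmetic: energy returned over energy invested.
# Both figures below are hypothetical placeholders.

lifetime_output_pj = 2500.0  # electricity delivered over plant life (hypothetical)
invested_pj = 100.0          # mining, enrichment, build, decommissioning (hypothetical)

eroi = lifetime_output_pj / invested_pj
print(f"EROI = {eroi:.0f}:1")  # must stay well above 1:1 to be worth building
```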
Very good observations!
AI syncretism will be next. Then AI monotheism.
Balaji sir, how can someone be so insightful and develop such a great sense of intuition and factual grounding? Btw, any chance we might see you and Karpathy together on a podcast?
this would make an epic podcast, 1000%
The middle-to-middle explanation is a new framing for me, but congruent with what I see. Idea generation is still human-led, and shipping plus getting someone to buy is now very much the constraint in the build-anything reality.
"Fundamentally, this is a model of constrained AI rather than omnipotent AI." Of course it is. You can be sure these megalomaniacs will keep the omnipotent AI to themselves!
AI is economically, mathematically, and physically constrained; you laid this out well. But it’s still epistemically unconstrained, at least in the sense that it can reshape what people know by shifting what they see. Yes, models generate outputs, but beyond that they route attention, amplifying some narratives while pushing others out of frame. The hard part is not generation but filtering/selecting what gets surfaced, in what context, and to whom.
I expanded on this angle here, in case it’s of interest:
https://dianoiaprotocol.substack.com/p/ai-ranking-censorship
The constraints on perception may end up mattering more than the ones on computation. Ranking is the constraint layer no one audits (at least not yet).
So the question this raises: even if we have many models, could convergence in what they surface lead us to something that’s epistemically monotheistic after all?
I’m curious if you see room for structured diversity at the surfacing layer (and not just the model layer).
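To make "structured diversity at the surfacing layer" concrete, here is a minimal sketch of one standard re-ranking technique, maximal marginal relevance (MMR), applied at the surfacing step rather than the model step. The item ids, scores, and similarities are all hypothetical:

```python
# Minimal sketch: structured diversity at the surfacing layer via
# maximal marginal relevance (MMR). All inputs are hypothetical.

def mmr_surface(candidates, relevance, similarity, k=5, lam=0.5):
    """Greedily pick k items, trading relevance against redundancy.

    candidates: list of item ids
    relevance:  dict id -> relevance score from a ranking model
    similarity: dict (id, id) -> pairwise similarity in [0, 1]
    lam:        1.0 = pure relevance, 0.0 = pure diversity
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def score(c):
            # Penalize items too similar to anything already surfaced.
            redundancy = max((similarity[(c, s)] for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Two near-duplicate takes ("a", "b") plus a dissenting one ("c"):
items = ["a", "b", "c"]
rel = {"a": 0.9, "b": 0.85, "c": 0.4}
sim = {(x, y): 0.95 if {x, y} == {"a", "b"} else 0.1
       for x in items for y in items}
print(mmr_surface(items, rel, sim, k=2))  # ['a', 'c'], not ['a', 'b']
```

Varying `lam` (or the similarity metric) per surface or per audience is one concrete form that structured diversity at the surfacing layer could take, independent of how many models sit underneath.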
Agree with most points; they echo my own thoughts in many ways.
Especially the part about having specific models for specific use cases, which inherently drives more decentralization.
We’re building for on-device and believe in the future of specialized, efficient models that strike the right balance between quality and cost.
There should be many purpose-built models to switch between, and chances are, we’ll eventually see multiple AGIs as well.
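As a rough sketch of what switching between purpose-built models could look like (the model names and the keyword heuristic below are hypothetical placeholders, not any real product's API or our actual stack):

```python
# Hypothetical sketch of routing between purpose-built models.

from typing import Callable

Model = Callable[[str], str]

def make_stub(name: str) -> Model:
    # Stand-in for a real on-device model.
    return lambda prompt: f"[{name}] answer to: {prompt}"

REGISTRY: dict[str, Model] = {
    "code":   make_stub("code-small"),
    "vision": make_stub("vision-small"),
    "chat":   make_stub("chat-small"),  # fallback generalist
}

def route(prompt: str) -> str:
    """Dispatch to a specialist; a real router might use a tiny
    classifier plus quality/cost constraints instead of keywords."""
    p = prompt.lower()
    if any(k in p for k in ("bug", "compile", "stack trace")):
        return REGISTRY["code"](prompt)
    if any(k in p for k in ("image", "photo", "diagram")):
        return REGISTRY["vision"](prompt)
    return REGISTRY["chat"](prompt)

print(route("Why won't this compile?"))  # handled by code-small
```

On device, the interesting design choice is what the router optimizes: a tiny classifier weighing answer quality against latency and battery is more realistic than keyword matching, but the registry-plus-dispatch shape stays the same.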
Really enjoyed your last 2 posts. Would love to have one of these on a regular basis in my inbox (e.g., 2x a month).
I have a lot of respect for your work, but having worked in AI for 30+ years and seeing the current trends, most of your conclusions rest on a relatively static assumption about the progress/limitations of today's tech. Unfortunately, that's not the reality we are living in. In time, AI plus robotics will become capable of doing most if not all of what we do to create economic value today. It's just a matter of when. And I think it'll be well before most non-industry people would forecast today.
I agree with the polytheistic AI model. I called this: AGI will be a flock, not a bird. Not only will there be many AGIs in parallel, but we NEED to ensure competition between them. Otherwise 2 things will happen:
- knowledge will stagnate, then simplify, then decrease, then vanish
- the owners of the "only one" emerging API will be the de facto Matrix overlords
We don't want any of that.
(for context, the whole article on Medium: https://medium.com/p/252a832be700)
AI unlocks potential for those who know how to wield it most effectively. This is not unlike most technology throughout time. AI just happens to be the most recent. People should stop fearing it and learn how to leverage it.
Touché @balaji
This is good news because at least it gives us more time to find alternatives for humans (potentially) displaced in favor of AI. In aggregate we need all the human workforce and the AI to address our global debt and productivity challenges. The only question is the trajectory we will take to solve the problems. Thank you for this thought piece.
Today's AI is not going to kill us all. It is not agentic, and humans do the prompting and verifying. It is possible that future AI will be agentic and will do the prompting and verifying for us, also known as AGI or ASI. You may believe that is impossible, but in today's world “something has never happened” is not an argument for “it is impossible.” For our safety, no one should build an AI that is agentic and does the prompting and verifying for us.
This is not an anti-progress argument. AI will be fabulously productive and humanity will flourish if we do not cross that line.