34 Comments

Hollis Robbins (@Anecdotal):

The smartest thing I've read this year on AI, notably these 3 points: "AI doesn’t take your job, it lets you do any job," "AI is better for visuals than verbals," and "Killer AI is already here, and it’s called drones."

Malte:

Your polytheistic AI model is encouraging because distributed intelligence beats centralized dominance. I agree with this a lot; I just wrote about it. But you are missing the metabolic constraints that matter most. Every inference burns energy we're extracting faster than the planet can regenerate. The real question isn't whether AI will be constrained by cryptography but whether we'll design it to work with biological intelligence instead of consuming it. You called it "amplified intelligence," and I think the framing is right, but amplifying what? If we're just accelerating the same extractive logic that created our ecological crisis, we're building better tools for sophisticated collapse. The opportunity is using AI to understand mycorrhizal networks, optimize regenerative systems, and learn from the biological intelligence that's been solving complex problems for billions of years without destroying its substrate.

Ray:

Uranium and thorium fix this.

Malte:

Explain to me how. Read about EROI and the cost of the materials needed to extract it and build the infrastructure. Then calculate how much uranium there actually is.
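
For anyone who wants to actually run the exercise this comment points at: a minimal back-of-envelope sketch in Python. EROI (energy return on investment) is simply energy delivered divided by energy invested, and every figure below is a rough, illustrative assumption rather than sourced data.

```python
# Back-of-envelope sketch of the uranium-supply and EROI exercise.
# All numbers are rough, illustrative assumptions, not authoritative data.

identified_uranium_tonnes = 8_000_000  # assumed identified resources (order of magnitude)
annual_demand_tonnes = 65_000          # assumed current global reactor consumption

# Years of supply if demand stays flat (ignores breeders, seawater extraction, etc.)
years_of_supply = identified_uranium_tonnes / annual_demand_tonnes

# EROI = lifetime energy delivered / energy invested in mining,
# enrichment, construction, and decommissioning.
energy_delivered_pj = 100.0  # hypothetical lifetime plant output, in petajoules
energy_invested_pj = 10.0    # hypothetical lifetime energy cost, in petajoules
eroi = energy_delivered_pj / energy_invested_pj

print(f"Years of supply at flat demand: {years_of_supply:.0f}")  # ~123
print(f"EROI: {eroi:.1f}")                                       # 10.0
```

The conclusion swings entirely on the inputs, which is presumably the commenter's point: plug in your own resource and demand estimates before deciding whether uranium and thorium "fix this."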

Uberboyo:

Very good observations!

Anurag Sharma:

Balaji Sir, how can someone be so insightful and develop such a great sense of intuition and factualness? By the way, any chance we can see you and Karpathy together on a podcast?

Dima Shvets:

This would make an epic podcast, 1000%.

Craig:

Today's AI is not going to kill us all. It is not agentic; humans do the prompting and verifying. It is possible that future AI will be agentic and will do the prompting and verifying for us, also known as AGI or ASI. You may believe that is impossible, but in today's world “something has never happened” is not an argument for “it is impossible.” For our safety, no one should build an AI that is agentic and does the prompting and verifying for us.

This is not an anti-progress argument. AI will be fabulously productive and humanity will flourish if we do not cross that line.

Elena Luneva:

The middle-to-middle explanation is a new framing for me, but congruent with what I see. Idea generation is still human-led, and shipping, plus getting someone to buy, is now very much the constraint in a build-anything reality.

Prakruti:

Loved the clarity of this piece, and I have so many thoughts.

For one, I think instead of prompting-to-verifying there's a prompting-to-packaging model, because how you send it out will still remain a challenge.

Okay, I have too many; I will be writing a post.

Thanks for sparking my brain chemistry.

Nick Rizk:

AI syncretism will be next. Then AI monotheism.

Running Elk:

"Fundamentally, this is a model of constrained AI rather than omnipotent AI." Of course it is. You can be sure these megalomaniacs will keep the omnipotent AI to themselves!

Rickie Elizabeth:

AI is economically, mathematically, and physically constrained; you laid this out well. But it's still epistemically unconstrained, at least in the sense that it can reshape what people know by shifting what they see. Yes, models generate outputs, but beyond that they route attention by amplifying some narratives while pushing others out of the frame. The hard part is not generation but filtering: selecting what gets surfaced, in what context, and to whom.

I expanded on this angle here, in case it’s of interest:

https://dianoiaprotocol.substack.com/p/ai-ranking-censorship

The constraints on perception may end up mattering more than the ones on computation. Ranking is the constraint layer no one audits (at least not yet).

So the question this raises: even if we have many models, could convergence in what they surface lead us to something that's epistemically monotheistic after all?

I’m curious if you see room for structured diversity at the surfacing layer (and not just the model layer).

Pat D:

The Laffer curve for AI is brilliant, as is the point about drones.

Dima Shvets:

Agree with most points, they echo my own thoughts in many ways.

Especially the part about having specific models for specific use cases, which inherently drives more decentralization.

We’re building for on-device and believe in the future of specialized, efficient models that strike the right balance between quality and cost.

There should be many purpose-built models to switch between, and chances are, we’ll eventually see multiple AGIs as well.

David Dannenberg:

Am I the only one alive who watched “Colossus: The Forbin Project” as a teen? Of all the movies about computers taking over for humans (and there have been many), it seems the most prescient, though of course it is sci-fi, which tends more toward fiction than science.

Regardless, the points made in the polytheistic post are well considered and illuminating about the current state of AI.

VB:

These are phenomenal takes.

Philipp:

Really enjoyed your last two posts. Would love to have one of these in my inbox on a regular basis (e.g., twice a month).

Zafar Satyavan:

AI models the human brain, but we have emotions and hormones to deal with!
