Yael on AI

The arrogance of scale: when bigger isn't better in AI

The AI scaling showdown: is bigger truly better?
Yael Rozencwajg
Dec 21, 2024

The world of AI is a battlefield of ideas, where competing visions for the future of intelligent machines clash.

In one corner, we have the proponents of "bigger is better," those who believe that scaling up language models with ever-increasing data and compute power is the path to artificial general intelligence (AGI).

In the other corner, a growing chorus of dissenters argues that this brute-force approach is a dead end, a costly distraction from the true pursuit of cognitive machines.

This debate recently came to a head when Ilya Sutskever, the former chief scientist of OpenAI, acknowledged at NeurIPS that the era of "scaling laws" (the assumption that larger models automatically deliver better performance) is over.

This admission, echoed by Scale AI's Alexandr Wang, has sent shockwaves through the AI community, forcing a reassessment of the dominant paradigm.

My view:

This post is for paid subscribers.