The end of AI scaling may not be nigh: Here’s what’s next

As AI systems achieve superhuman performance in increasingly complex tasks, the industry is grappling with whether bigger models are even possible — or if innovation must take a different path.

The prevailing approach to large language model (LLM) development has been that bigger is better, and that performance scales with more data and more computing power. Recently, however, media discussion has focused on whether LLMs are approaching their limits. “Is AI hitting a wall?” The Verge asked, while Reuters reported that “OpenAI and others seek new path to smarter AI as current methods hit limitations.”

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI and Bloomberg covered similar news at Google and Anthropic.

This issue has led to concerns that these systems may be subject to the law of diminishing returns — where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of getting high-quality training data and scaling infrastructure increase exponentially, reducing the returns on performance improvement in new models. Compounding this challenge is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets.
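The dynamic described above can be sketched with a toy power-law loss curve. This is purely illustrative — the `loss` function and its constants are hypothetical, not fitted to any real model — but it shows why each added unit of compute buys a smaller improvement than the last:

```python
# Toy illustration of diminishing returns: a hypothetical power-law
# "loss" curve L(C) = a * C**(-b), with made-up constants a and b.
def loss(compute: float, a: float = 10.0, b: float = 0.1) -> float:
    """Hypothetical training loss as a function of compute (illustrative only)."""
    return a * compute ** -b

# Measure the absolute improvement from each successive 10x jump in compute.
gains = []
prev = loss(1)
for c in (10, 100, 1_000, 10_000):
    cur = loss(c)
    gains.append(prev - cur)
    prev = cur

# Each 10x step in compute yields a smaller loss reduction than the one
# before it -- the gains list is strictly decreasing.
print([round(g, 3) for g in gains])
```

Under a curve like this, holding everything else fixed, multiplying compute no longer multiplies capability — which is exactly why attention shifts to architecture, optimization and data quality.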

This does not mean the end of performance gains for AI. It does mean that sustaining progress will require further engineering innovation in model architecture, optimization techniques and data use.

Learning from Moore’s Law

A similar pattern of diminishing returns appeared in the semiconductor industry. For decades, the industry had benefited from Moore’s Law, which predicted that the number of transistors on a chip would double every 18 to 24 months, driving dramatic performance improvements through smaller and more efficient designs. That trend eventually hit diminishing returns of its own, beginning somewhere between 2005 and 2007, when Dennard scaling — the principle that shrinking transistors also reduces their power consumption — reached its limits, fueling predictions of the death of Moore’s Law.
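The compounding behind Moore’s Law is easy to check with a back-of-the-envelope calculation. Taking the Intel 4004’s roughly 2,300 transistors in 1971 as a baseline, and assuming a clean two-year doubling period (a simplification of the 18-to-24-month range), projected counts reach the hundreds of millions by the mid-2000s — roughly when Dennard scaling faltered:

```python
# Back-of-the-envelope Moore's Law projection. Baseline: the Intel 4004
# (~2,300 transistors, 1971). The clean 2-year doubling period is a
# simplifying assumption within the 18-to-24-month range.
def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300,
                          doubling_years: float = 2.0) -> float:
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

# 17 doublings between 1971 and 2005 gives ~300 million transistors,
# broadly in line with mid-2000s desktop CPUs.
print(f"{projected_transistors(2005):,.0f}")
```

The point of the exercise is the exponent: by the time the doubling cadence slows, each further doubling is enormous in absolute terms, which is why the industry pivoted to chiplets, memory and architecture rather than abandoning progress.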

I had a close-up view of this issue when I worked with AMD from 2012 to 2022. The problem did not mean that semiconductors — and by extension computer processors — stopped achieving performance improvements from one generation to the next. It did mean that those improvements came more from chiplet designs, high-bandwidth memory, optical switches, more cache memory and accelerated computing architectures than from the scaling down of transistors.

New paths to progress

Similar phenomena are already being observed with current LLMs. Multimodal AI models like GPT-4o, Claude 3.5 and Gemini 1.5 have proven the power of integrating text and image understanding, enabling advancements in complex tasks like video analysis and contextual image captioning. More tuning of algorithms for both training and inference will lead to further performance gains. Agent technologies, which enable LLMs to perform tasks autonomously and coordinate seamlessly with other systems, will soon significantly expand their practical applications.

Future model breakthroughs might arise from one or more hybrid AI architecture designs combining symbolic reasoning with neural networks. Already, the o1 reasoning model from OpenAI shows the potential for model integration and performance extension. While only now emerging from its early stage of development, quantum computing holds promise for accelerating AI training and inference by addressing current computational bottlenecks.

The perceived scaling wall is unlikely to end future gains, as the AI research community has consistently proven its ingenuity in overcoming challenges and unlocking new capabilities and performance advances.

In fact, not everyone agrees that there even is a scaling wall. OpenAI CEO Sam Altman was succinct in his views: “There is no wall.”

Source: X https://x.com/sama/status/1856941766915641580 

Speaking on the “Diary of a CEO” podcast, ex-Google CEO and co-author of Genesis Eric Schmidt essentially agreed with Altman, saying he does not believe there is a scaling wall — at least there won’t be one over the next five years. “In five years, you’ll have two or three more turns of the crank of these LLMs. Each one of these cranks looks like it’s a factor of two, factor of three, factor of four of capability, so let’s just say turning the crank on all these systems will get 50 times or 100 times more powerful,” he said.

Leading AI innovators remain optimistic about the pace of progress, as well as the potential for new methodologies. This optimism is evident in a recent conversation on “Lenny’s Podcast” with OpenAI CPO Kevin Weil and Anthropic CPO Mike Krieger.

In that discussion, Krieger said that what OpenAI and Anthropic are working on today “feels like magic,” but acknowledged that in just 12 months, “we’ll look back and say, can you believe we used that garbage? … That’s how fast [AI development] is moving.”

It’s true — it does feel like magic, as I recently experienced when using OpenAI’s Advanced Voice Mode. Speaking with ‘Juniper’ felt entirely natural and seamless, showcasing how AI is evolving to understand and respond with emotion and nuance in real-time conversations.

Krieger also discussed the recent o1 model, referring to it as “a new way to scale intelligence, and we feel like we’re just at the very beginning.” He added: “The models are going to get smarter at an accelerating rate.”

These expected advancements suggest that while traditional scaling approaches may or may not face diminishing returns in the near-term, the AI field is poised for continued breakthroughs through new methodologies and creative engineering.

Does scaling even matter?

While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising the provocative question of whether more scaling even matters.

A recent study explored whether ChatGPT could help doctors diagnose complicated patient cases. Conducted with an early version of GPT-4, the study compared ChatGPT’s diagnostic capabilities against those of doctors working with and without AI assistance. The surprising outcome was that ChatGPT alone substantially outperformed both groups, including the doctors using AI assistance. There are several possible reasons for this, from doctors not understanding how best to use the bot to their belief that their own knowledge, experience and intuition were inherently superior.

This is not the first study to show bots achieving superior results compared to professionals. VentureBeat reported on a study earlier this year which showed that LLMs can conduct financial statement analysis with accuracy rivaling — and even surpassing — that of professional analysts. That study, also based on GPT-4, set out to predict the direction of future earnings growth. GPT-4 achieved 60% accuracy, notably higher than the 53% to 57% range of human analyst forecasts.

Notably, both these examples are based on models that are already out of date. These outcomes underscore that even without new scaling breakthroughs, existing LLMs are already capable of outperforming experts in complex tasks, challenging assumptions about the necessity of further scaling to achieve impactful results.

Scaling, skilling or both

These examples show that current LLMs are already highly capable, even if scaling is not the sole path forward for future innovation. With more scaling still possible and other emerging techniques promising further performance gains, Schmidt’s optimism reflects the rapid pace of AI advancement, suggesting that in just five years, models could evolve into polymaths, seamlessly answering complex questions across multiple fields.

Whether through scaling, skilling or entirely new methodologies, the next frontier of AI promises to transform not just the technology itself, but its role in our lives. The challenge ahead is ensuring that progress remains responsible, equitable and impactful for everyone.