OpenAI’s GPT-5 reportedly falling short of expectations

OpenAI’s efforts to develop its next major model, GPT-5, are running behind schedule, with results that don’t yet justify the enormous costs, according to a new report in The Wall Street Journal.

This echoes an earlier report in The Information suggesting that OpenAI is exploring new strategies because GPT-5 might not represent as big a leap forward as earlier generational upgrades did. But the WSJ story adds details about the 18-month development of GPT-5, code-named Orion.

OpenAI has reportedly completed at least two large training runs, which aim to improve a model by training it on enormous quantities of data. An initial training run went slower than expected, hinting that a larger run would be both time-consuming and costly. And while GPT-5 can reportedly perform better than its predecessors, it hasn’t yet advanced enough to justify the cost of keeping the model running.

The WSJ also reports that rather than relying solely on publicly available data and licensing deals, OpenAI has hired people to create fresh data by writing code or solving math problems. It is also using synthetic data generated by another of its models, o1.

OpenAI did not immediately respond to a request for comment. The company previously said it would not be releasing a model code-named Orion this year.