OpenAI’s next large language model (LLM) won’t make as big a leap forward as its predecessors did, online sources say. In response, OpenAI has formed a group of developers tasked with finding ways to further develop and improve the neural network.

Image source: OpenAI

The company’s new LLM is codenamed Orion. Staff testing the model found that its performance was better than that of existing LLMs, but the improvements were not as dramatic as those seen in the move from GPT-3 to GPT-4. The pace of LLM improvement appears to be slowing, with Orion only marginally superior to the company’s previous language models in some areas, such as coding.

To change this, OpenAI created a team of developers whose task is to find strategies that could allow the company to keep improving its LLMs in the face of a shrinking supply of training data. According to the source, the new strategies include training Orion on synthetic data generated by other neural networks, as well as more extensive refinement of the model after the initial training stage is complete. OpenAI officials declined to comment on the matter.
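To illustrate the idea of training on synthetic data, here is a minimal sketch of such a pipeline. This is not OpenAI’s actual method; the `teacher_generate` and `quality_filter` functions below are hypothetical stand-ins for a real generator model and a real filtering step, reduced to toy logic so the example is self-contained.

```python
# Illustrative sketch of a synthetic-data pipeline (hypothetical, not
# OpenAI's pipeline): a "teacher" model generates completions for a set
# of prompts, a filter discards low-quality examples, and the survivors
# are collected into a dataset for further training of a "student" model.

def teacher_generate(prompt: str) -> str:
    # Stand-in for a call to a real generator model; returns a canned
    # completion instead of an actual model output.
    return f"Synthetic answer to: {prompt}"

def quality_filter(example: dict) -> bool:
    # Toy quality check: keep only completions above a minimum length.
    # A real pipeline would apply much stronger filtering or scoring.
    return len(example["completion"]) > 10

def build_synthetic_dataset(prompts: list[str]) -> list[dict]:
    dataset = []
    for p in prompts:
        example = {"prompt": p, "completion": teacher_generate(p)}
        if quality_filter(example):
            dataset.append(example)
    return dataset

prompts = ["Explain recursion.", "What is a mutex?"]
dataset = build_synthetic_dataset(prompts)
```

The resulting `dataset` of prompt/completion pairs would then feed a subsequent fine-tuning stage, which is where the "refinement after initial training" mentioned above would come in.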
