OpenAI’s next large language model (LLM) reportedly won’t deliver as big a leap forward as earlier generations did, according to online sources. Against this background, OpenAI has formed a group of developers tasked with working out ways to further develop and improve the neural network.
The company’s new LLM is codenamed Orion. Staff testing the model found that it performs better than the company’s existing LLMs, but the improvement is not as dramatic as the jump from GPT-3 to GPT-4. The pace of LLM improvement appears to be slowing, with Orion only marginally superior to the company’s previous language models in some areas, such as coding.
To change this, OpenAI created a team of developers whose task is to find strategies that would allow the company to keep improving its LLMs as the supply of fresh training data shrinks. According to the source, the new strategies include training Orion on synthetic data generated by other neural networks, as well as more intensive refinement of models after the initial pre-training stage is complete. OpenAI officials declined to comment on the matter.
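To make the synthetic-data idea concrete, here is a minimal, hypothetical sketch of what generating such data with an existing model could look like: an available model is prompted to produce question/answer pairs, which are saved in a chat-style JSONL file that could later feed a fine-tuning run. The model name, prompts, and file layout are illustrative assumptions, not a description of OpenAI’s actual Orion pipeline.

```python
# Hypothetical sketch: use an existing model to generate synthetic
# training examples for later fine-tuning. Model name, prompts, and
# output format are assumptions made for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_TOPICS = ["binary search", "HTTP caching", "SQL joins"]

def generate_example(topic: str) -> dict:
    """Ask the model for one question/answer pair on the given topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, chosen only as an example
        messages=[
            {"role": "system",
             "content": ("Write one programming question and a correct answer "
                         'as JSON with keys "question" and "answer".')},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

with open("synthetic_train.jsonl", "w", encoding="utf-8") as f:
    for topic in SEED_TOPICS:
        pair = generate_example(topic)
        # Store each pair in a simple chat-style record, a common
        # layout for fine-tuning datasets.
        record = {"messages": [
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

In a real setup, the generated records would typically be filtered for quality and deduplicated before being mixed into a training corpus.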