OpenAI’s next large language model (LLM) won’t deliver as big a leap forward as its predecessors did, according to online sources. In response, OpenAI has formed a group of developers tasked with finding ways to further develop and improve the neural network.
Image source: OpenAI
The company’s new LLM is codenamed Orion. Staff who tested the model found that it outperforms the company’s existing LLMs, but the improvement is not as dramatic as the jump from GPT-3 to GPT-4. The pace of LLM progress appears to be slowing: in some areas, such as coding, Orion is only marginally better than the company’s previous language models.
To change this, OpenAI has created a team of developers tasked with finding strategies that would let the company keep improving its LLMs despite a shrinking supply of new training data. According to the source, these strategies include training Orion on synthetic data generated by neural networks, as well as more intensive refinement of the model after the initial training stage is complete. OpenAI representatives declined to comment on the matter.
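For readers unfamiliar with the idea, a synthetic-data pipeline generally means using an existing model to generate training examples, filtering them, and then fine-tuning on the result. The sketch below is a minimal, hypothetical illustration of that loop; the functions `teacher_generate`, `quality_filter`, and `fine_tune` are stand-ins invented for this example and say nothing about OpenAI’s actual internal tooling.

```python
# Hypothetical sketch of a synthetic-data training loop (not OpenAI's pipeline).
import random

def teacher_generate(prompt: str) -> str:
    """Stand-in for sampling a completion from an existing 'teacher' model."""
    return prompt + " ... (model-generated continuation)"

def quality_filter(sample: str) -> bool:
    """Stand-in for discarding low-quality or duplicated synthetic samples."""
    return len(sample) > 20 and random.random() > 0.1

def fine_tune(model_state: dict, batch: list[str]) -> dict:
    """Stand-in for a post-training / fine-tuning step on the kept batch."""
    model_state["seen_samples"] += len(batch)
    return model_state

prompts = ["Explain binary search.", "Summarize the causes of inflation."]
model_state = {"seen_samples": 0}

for round_idx in range(3):
    synthetic = [teacher_generate(p) for p in prompts]   # generate candidates
    kept = [s for s in synthetic if quality_filter(s)]   # keep the usable ones
    model_state = fine_tune(model_state, kept)           # train on them
    print(f"round {round_idx}: kept {len(kept)} samples, "
          f"total seen {model_state['seen_samples']}")
```

The appeal of this approach, as the report suggests, is that it sidesteps the shortage of fresh human-written text by letting existing models supply additional training material.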