OpenAI’s next large language model (LLM) won’t make as big a leap forward as its predecessors did, online sources say. Against this background, OpenAI has formed a group of developers tasked with finding ways to further develop and improve the neural network.
Image source: OpenAI
The company’s new LLM is codenamed Orion. Staff testing the model found that its performance exceeds that of existing LLMs, but the improvement is not as dramatic as the jump from GPT-3 to GPT-4. The pace of LLM progress appears to be slowing: in some areas, such as coding, Orion is only marginally superior to the company’s previous language models.
To change this, OpenAI created a team of developers tasked with finding strategies that would allow the company to keep improving its LLMs as the supply of fresh training data shrinks. According to the source, the new strategies include training Orion on synthetic data generated by neural networks, as well as more intensive refinement of the model after the initial training stage is complete. OpenAI officials declined to comment on the matter.