OpenAI, the company behind the AI chatbot ChatGPT, has been working with Broadcom for several months to create its first AI accelerator, Reuters reports, citing its own sources. According to those sources, OpenAI has assembled a chip-design team of about 20 people, including leading specialists who previously worked on tensor processing units (TPUs) at Google, among them Thomas Norrie and Richard Ho.
Particular attention is being paid to the accelerator's ability to handle inference, that is, running previously trained neural networks, as analysts predict that demand for inference chips could outpace demand for training accelerators as more AI applications are deployed. Production of the new chip at the facilities of the Taiwanese manufacturer TSMC is expected to begin in 2026.
The agency's sources also learned of OpenAI's plans to begin using AMD AI chips alongside Nvidia accelerators through the Microsoft Azure cloud platform to meet its growing AI infrastructure needs. Specifically, these are AMD Instinct MI300 accelerators.
Nvidia accelerators currently account for over 80% of the AI accelerator market. But shortages and rising costs are forcing large customers such as Microsoft, Meta, and now OpenAI to look for alternatives, both in-house and from other vendors. For the foreseeable future, however, OpenAI will continue to rely primarily on Nvidia solutions for both model training and inference.