OpenAI, best known for its AI chatbot ChatGPT, has been working with Broadcom for several months to create its first AI accelerator, Reuters reports, citing its own sources. According to them, OpenAI has assembled a chip-design team of about 20 people, including leading specialists who previously worked on tensor processing units (TPUs) at Google, among them Thomas Norrie and Richard Ho.
Particular attention is being paid to the accelerator’s ability to run inference, that is, to execute previously trained neural networks, as analysts predict that demand for inference chips could outpace demand for training accelerators as more AI applications are deployed. Production of the new chip at the facilities of the Taiwanese manufacturer TSMC is expected to begin in 2026.
Reuters’ sources also learned of OpenAI’s plans to begin using AMD AI chips, specifically the AMD Instinct MI300 accelerators, alongside Nvidia accelerators via the Microsoft Azure cloud platform to meet its growing AI infrastructure needs.
Nvidia accelerators currently account for more than 80% of the AI accelerator market. But shortages and rising costs are forcing large customers such as Microsoft, Meta, and now OpenAI to look for alternatives, both in-house and from other vendors. For the foreseeable future, however, OpenAI will continue to rely primarily on Nvidia solutions for both model training and inference.