OpenAI, a recognized leader in AI, is making serious efforts to reduce its dependence on Nvidia’s AI accelerators. In the coming months, the company plans to complete the design of its own chip and begin manufacturing it at TSMC’s fabs using the foundry’s most advanced process technology.

Image source: Samsung

According to analysts, “OpenAI is on track to achieve its ambitious goal of mass production at TSMC in 2026.” The most critical step on the path from design to production is tape-out, the transfer of a chip’s digital design onto photomasks for manufacturing. This step typically costs tens of millions of dollars, and it takes up to six months before the first chips come back from the fab. If the chip fails, the team must diagnose the problem and repeat the entire process.

OpenAI sees its upcoming AI accelerator as a strategic tool to strengthen its negotiating position with other chip suppliers. If the initial rollout goes well, OpenAI could begin offering an alternative later this year to Nvidia’s chips, which currently account for more than 80% of the AI accelerator market.

If the first chip is successful, OpenAI’s engineers plan to develop more capable processors with each new iteration. The company has already signed on to the $500 billion Stargate infrastructure program announced by US President Donald Trump last month.

The chip is being developed by an internal OpenAI team led by Richard Ho in collaboration with Broadcom. Ho joined OpenAI more than a year ago from Google, where he led a program to build specialized AI chips. While Ho’s team has grown to 40 engineers in recent months, it is still an order of magnitude smaller than the teams behind large-scale chip projects at tech giants such as Google or Amazon.

Analysts believe that OpenAI’s new AI accelerator will initially play a limited role in the company’s infrastructure. To build a chip-design program as comprehensive as Google’s or Amazon’s, OpenAI would need to hire hundreds of engineers.

According to industry sources, a single new chip design for such an ambitious, large-scale program could cost around $500 million, and that figure could double once the necessary software and peripheral hardware are factored in. By comparison, Meta plans to spend $60 billion on AI infrastructure in 2025, while Microsoft’s annual investment in the space is $80 billion.
