Broadcom has figured out how to speed up next-generation AI chips

Broadcom says its custom chip division, which designs artificial intelligence processors for cloud providers, has developed new technology to speed up its semiconductors, a timely advance given the high demand for AI infrastructure.

Image source: broadcom.com

Broadcom is one of the biggest beneficiaries of strong demand for artificial intelligence hardware as so-called hyperscalers, especially large cloud providers, turn to its custom chips to build out their AI infrastructure.

Broadcom has now introduced its 3.5D eXtreme Dimension System in Package (XDSiP) technology, which enables the creation of next-generation compute accelerators. 3.5D XDSiP makes it possible to assemble packages with a total die area of more than 6,000 mm² and up to 12 HBM memory stacks on a single substrate, allowing accelerators that are more complex and higher-performing while also being more energy-efficient. Broadcom notes that its technology is the first in the industry to enable face-to-face (F2F) die stacking, in which dies are joined front side to front side, whereas previously only face-to-back (F2B) stacking was available.
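To put the 12-stack figure in perspective, here is a minimal back-of-envelope sketch of the aggregate HBM capacity and bandwidth such a package could offer. The per-stack numbers (36 GB and roughly 1.2 TB/s, typical of HBM3E-class memory) are assumptions chosen for illustration and do not come from Broadcom's announcement.

```python
# Back-of-envelope estimate of aggregate HBM resources in a 3.5D XDSiP package.
# The per-stack figures below are illustrative assumptions (roughly HBM3E-class),
# not values published by Broadcom.

HBM_STACKS = 12                  # maximum stacks per package, per the announcement
CAPACITY_PER_STACK_GB = 36       # assumed capacity of one HBM stack, GB
BANDWIDTH_PER_STACK_TBPS = 1.2   # assumed peak bandwidth of one HBM stack, TB/s

total_capacity_gb = HBM_STACKS * CAPACITY_PER_STACK_GB
total_bandwidth_tbps = HBM_STACKS * BANDWIDTH_PER_STACK_TBPS

print(f"Aggregate HBM capacity:  {total_capacity_gb} GB")           # 432 GB
print(f"Aggregate HBM bandwidth: {total_bandwidth_tbps:.1f} TB/s")  # 14.4 TB/s
```

Under these assumed figures, a fully populated package would pair hundreds of gigabytes of HBM with well over ten terabytes per second of memory bandwidth, which is the scale Broadcom is targeting with the denser F2F interconnect.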

The Broadcom 3.5D XDSiP platform delivers significant improvements in interconnect density and power efficiency compared with the F2B approach. The F2F stack-up directly connects the top metal layers of the upper and lower dies, providing a dense, reliable connection with minimal electrical interference and exceptional mechanical strength. The platform includes both ready-made solutions for use in chips and systems and tools for designing custom solutions.

3.5D XDSiP will allow Broadcom's chip customers to increase the amount of memory inside each packaged chip and to speed the chips up by directly connecting critical components. TSMC will manufacture the chips built with the 3.5D XDSiP layout. Five products based on the new Broadcom technology are currently in development, with deliveries set to begin in February 2026.

The company did not specify which cloud providers it develops custom chips for, but analysts indicate that Google and Meta are among its clients. “Our hyperscaler customers continue to scale their AI clusters,” Broadcom CEO Hock Tan said in September, when the company raised its fiscal 2024 AI revenue forecast from $11 billion to $12 billion. The company has three major clients for custom chip development, he added.

Broadcom’s biggest competitor in this area is Marvell, which also offers advanced chip interconnect solutions. The market for custom chips could grow to $45 billion by 2028, and the two companies will divide it between them, Marvell executive Chris Koopmans recently said.
