Broadcom has figured out how to speed up next-generation AI chips

Broadcom says its custom chip division, which designs artificial intelligence silicon for cloud providers, has developed new technology that improves semiconductor performance, an announcement that comes amid exceptionally high demand for AI infrastructure.

Image source: broadcom.com

Broadcom is one of the biggest beneficiaries of strong demand for artificial intelligence hardware, as so-called hyperscalers, above all the large cloud providers, turn to its custom chips to build out their AI infrastructure.

Now Broadcom has introduced its 3.5D eXtreme Dimension System in Package (XDSiP) technology, which enables next-generation computing accelerators. 3.5D XDSiP allows chips with a total silicon area of more than 6000 mm² and up to 12 HBM memory stacks to be assembled on a single substrate, making it possible to build even more complex, higher-performance and at the same time more energy-efficient accelerators. Broadcom notes that its technology is the first in the industry to support face-to-face (F2F) die stacking, i.e. front side to front side, whereas previously only face-to-back (F2B) stacking was available.
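To put the quoted figures into perspective, here is a minimal back-of-the-envelope sketch in Python. The only numbers taken from the announcement are the package area (more than 6000 mm²) and the count of 12 HBM stacks; the per-stack HBM footprint is an assumed, typical value, not a Broadcom figure.

# Illustrative sketch (assumptions marked): how the quoted >6000 mm^2 package
# budget might split between stacked compute dies and 12 HBM stacks.
TOTAL_PACKAGE_AREA_MM2 = 6000      # figure quoted in the announcement ("more than 6000 mm^2")
HBM_STACKS = 12                    # maximum HBM stack count quoted in the announcement
ASSUMED_HBM_FOOTPRINT_MM2 = 110    # assumption: rough footprint of one HBM stack (~11 x 10 mm)

hbm_area = HBM_STACKS * ASSUMED_HBM_FOOTPRINT_MM2
remaining_for_logic = TOTAL_PACKAGE_AREA_MM2 - hbm_area

print(f"Area taken by {HBM_STACKS} HBM stacks: {hbm_area} mm^2")
print(f"Area left for stacked compute dies:   {remaining_for_logic} mm^2")

Under these assumed values, well over half of the package area would remain available for the stacked compute dies, which is what makes the 12-stack memory configuration notable.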

The Broadcom 3.5D XDSiP platform delivers significant improvements in interconnect density and power efficiency compared with the F2B approach. The F2F stacking directly connects the top metal layers of the upper and lower dies, providing a dense, reliable connection with minimal electrical interference and exceptional mechanical strength. The platform includes both ready-made building blocks for use in chips and tools for designing custom solutions.

The technology will allow Broadcom's chip customers to increase the amount of memory inside each packaged chip and to speed it up by directly connecting critical components. TSMC will manufacture chips built on the 3.5D XDSiP layout. Five products using the new Broadcom technology are currently in development, with deliveries set to begin in February 2026.

The company did not specify which cloud providers it develops custom chips for, but analysts indicate that Google and Meta are among its clients. "Our hyperscaler customers continue to scale their AI clusters," Broadcom CEO Hock Tan said in September, when the company raised its fiscal 2024 AI revenue forecast from $11 billion to $12 billion. The company has three major clients for custom chip development, he added.

Broadcom's biggest competitor in this area is Marvell, which also offers advanced chip interconnect solutions. The market for custom chips could grow to $45 billion by 2028, and the two companies will divide it between them, Marvell's Chris Koopmans recently said.
