Broadcom has figured out how to speed up next-generation AI chips

Broadcom said its custom chip division, which builds artificial intelligence silicon for cloud providers, has developed a new technology that speeds up its semiconductors, a timely advance given the high demand for AI infrastructure.

Image source: broadcom.com

Broadcom is one of the biggest beneficiaries of strong demand for artificial intelligence hardware, as hyperscalers, above all the large cloud providers, turn to its custom chips to build out their AI infrastructure.

Now Broadcom has introduced 3.5D eXtreme Dimension System in Package (XDSiP) technology, which enables next-generation computing accelerators. 3.5D XDSiP makes it possible to assemble packages with a total die area of more than 6,000 mm² and up to 12 HBM memory stacks on a single substrate, allowing even more complex, higher-performance and at the same time more energy-efficient accelerators. Broadcom notes that its technology is the first in the industry to offer face-to-face (F2F) die stacking, in which dies are joined front side to front side, whereas previously only face-to-back (F2B) stacking was available.

The Broadcom 3.5D XDSiP platform delivers significant gains in interconnect density and power efficiency compared with the F2B approach. The F2F stack directly connects the top metal layers of the upper and lower dies, providing a dense, reliable connection with minimal electrical interference and exceptional mechanical strength. The platform includes both ready-made solutions for integration into chips and systems and tools for designing custom solutions.

3.5D XDSiP will let Broadcom’s chip customers increase the amount of memory inside each packaged chip and speed it up by directly connecting critical components. TSMC will manufacture chips built with the 3.5D XDSiP layout. Five products based on the new Broadcom technology are currently in development, with deliveries set to begin in February 2026.

The company did not specify which cloud providers it develops custom chips for, but analysts indicate that Google and Meta are among its clients. “Our hyperscaler customers continue to scale their AI clusters,” Broadcom CEO Hock Tan said in September, when the company raised its fiscal 2024 AI revenue forecast from $11 billion to $12 billion. The company has three major clients for custom chip development, he added.

Broadcom’s biggest competitor in this area is Marvell, which also offers advanced chip interconnect solutions. The market for custom chips could grow to $45 billion by 2028 and will be divided between the two companies, Marvell COO Chris Koopmans recently said.
