Broadcom has figured out how to speed up next-generation AI chips

Broadcom says its custom chip division, which builds artificial intelligence processors for cloud providers, has developed a new packaging technology that makes its semiconductors faster, a timely advance given the strong demand for AI infrastructure.

Image source: broadcom.com

Broadcom is one of the biggest beneficiaries of strong demand for artificial intelligence hardware as so-called hyperscalers, especially large cloud providers, turn to its custom chips to build out their AI infrastructure.

Broadcom has now introduced its 3.5D eXtreme Dimension System in Package (XDSiP) technology, which enables next-generation computing accelerators. 3.5D XDSiP makes it possible to assemble packages with a total silicon area of more than 6000 mm² and up to 12 HBM memory stacks on a single substrate, allowing for accelerators that are more complex, more powerful and at the same time more energy efficient. Broadcom notes that its technology is the first in the industry to allow face-to-face (F2F) stacking of dies, i.e. front side to front side, whereas previously only face-to-back (F2B) stacking was available.
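For a rough sense of scale, here is a back-of-the-envelope capacity estimate assuming commodity 36 GB HBM3E stacks; Broadcom has not disclosed which HBM generation or stack density its customers will actually use, so the per-stack figure is purely an assumption:

```latex
% Hypothetical estimate only: the 36 GB-per-stack figure is an assumption,
% not a number disclosed by Broadcom.
\[
  12~\text{stacks} \times 36~\tfrac{\text{GB}}{\text{stack}} = 432~\text{GB of HBM per package}
\]
```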

The Broadcom 3.5D XDSiP platform delivers significant improvements in interconnect density and power efficiency compared with the F2B approach. The F2F stack directly connects the top metal layers of the upper and lower dies, providing a dense, reliable connection with minimal electrical interference and exceptional mechanical strength. The platform includes both ready-made components for use in chips and tools for designing custom solutions.

The 3.5D XDSiP technology will allow Broadcom’s chip customers to increase the amount of memory inside each packaged chip and to speed it up by directly connecting critical components. TSMC will manufacture chips that use the 3.5D XDSiP packaging. Five products based on the new Broadcom technology are currently in development, with shipments due to begin in February 2026.

The company did not specify which cloud providers it is developing custom chips for, but analysts indicate that Google and Meta are among its clients. “Our hyperscaler customers continue to scale their AI clusters,” Broadcom CEO Hock Tan said in September, when the company raised its fiscal 2024 AI revenue forecast from $11 billion to $12 billion. Broadcom has three major clients for custom chip development, he added.

Broadcom’s biggest competitor in this area is Marvell, which also offers advanced chip interconnect solutions. The market for custom chips could grow to $45 billion by 2028, and the two companies will divide it between them, Marvell COO Chris Koopmans said recently.
