JEDEC has published a preliminary specification for fourth-generation HBM4 memory, which promises significant increases in capacity and throughput for artificial intelligence and high-performance computing systems.


The new HBM4 standard will support 24 Gb and 32 Gb DRAM dies, with configurations for 4-, 8-, 12-, and 16-high stacks connected by vertical TSV interconnects. The JEDEC committee has tentatively agreed on speeds of up to 6.4 GT/s, though discussions about even higher data rates are ongoing.

A 16-high stack of 32 Gb dies provides 64 GB of capacity, meaning that a processor with four such stacks could support 256 GB of memory with a peak bandwidth of about 6.56 TB/s over an 8192-bit interface.
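
To see where those figures come from, here is a minimal back-of-the-envelope sketch in Python. The 2048-bit per-stack interface width is an assumption inferred from the 8192-bit total for four stacks quoted above; it is not taken verbatim from the JEDEC draft.

```python
# Back-of-the-envelope check of the figures above.
# Assumption (not from the JEDEC draft text): 2048-bit interface per HBM4 stack.

DIE_DENSITY_GBIT = 32      # 32 Gb dies
DIES_PER_STACK = 16        # 16-high stack
STACKS = 4                 # stacks attached to one processor
BUS_WIDTH_BITS = 2048      # assumed per-stack interface width
DATA_RATE_GTPS = 6.4       # transfers per second per pin, in billions

capacity_per_stack_gb = DIE_DENSITY_GBIT * DIES_PER_STACK / 8          # 64 GB
total_capacity_gb = capacity_per_stack_gb * STACKS                     # 256 GB

bandwidth_per_stack_tbps = BUS_WIDTH_BITS * DATA_RATE_GTPS / 8 / 1000  # ~1.64 TB/s
total_bandwidth_tbps = bandwidth_per_stack_tbps * STACKS               # ~6.55 TB/s
total_bus_width_bits = BUS_WIDTH_BITS * STACKS                         # 8192 bits

print(f"Per stack: {capacity_per_stack_gb:.0f} GB, {bandwidth_per_stack_tbps:.2f} TB/s")
print(f"Four stacks: {total_capacity_gb:.0f} GB, {total_bandwidth_tbps:.2f} TB/s "
      f"over a {total_bus_width_bits}-bit interface")
```

The computed total of roughly 6.55 TB/s lines up with the ~6.56 TB/s figure cited above, the small difference being rounding.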

Although HBM4 will double the number of channels per stack compared to HBM3 and occupy a larger physical footprint, the standard is designed so that a single controller can handle both HBM3 and HBM4. Different substrates will still be required to accommodate the different form factors. Interestingly, JEDEC did not mention the possibility of integrating HBM4 memory directly into processors, even though that is perhaps the most intriguing prospect for the new memory type.
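
For context on the doubled channel count, the short sketch below compares commonly cited per-stack interface parameters. The HBM3 figures (16 channels of 64 bits at up to 6.4 GT/s) come from the existing HBM3 standard; keeping the HBM4 channel width at 64 bits is an assumption, since the preliminary spec only states that the channel count doubles.

```python
# Rough per-stack comparison of HBM3 vs. HBM4.
# Assumption: HBM4 keeps 64-bit channels, so doubling channels doubles bus width.

def stack_bandwidth_gbps(channels: int, channel_width_bits: int, rate_gtps: float) -> float:
    """Peak per-stack bandwidth in GB/s: total bus width x data rate / 8 bits per byte."""
    return channels * channel_width_bits * rate_gtps / 8

hbm3 = stack_bandwidth_gbps(channels=16, channel_width_bits=64, rate_gtps=6.4)  # ~819 GB/s
hbm4 = stack_bandwidth_gbps(channels=32, channel_width_bits=64, rate_gtps=6.4)  # ~1638 GB/s

print(f"HBM3: 1024-bit bus, ~{hbm3:.0f} GB/s per stack")
print(f"HBM4: 2048-bit bus, ~{hbm4:.0f} GB/s per stack")
```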

Previously, SK hynix and TSMC announced a collaboration on the development of HBM4 base dies, and TSMC later confirmed at its European Technology Symposium 2024 that it will use its 12FFC+ (12 nm-class) and N5 (5 nm-class) processes to manufacture these dies.

TSMC’s N5 process allows more logic and functions to be integrated, with interconnect pitches ranging from 9 µm down to 6 µm, which is critical for on-die integration. The 12FFC+ process, derived from TSMC’s 16 nm FinFET technology, will be used to produce cost-effective base dies that connect the memory to host processors via silicon interposers.

Note that HBM4 is aimed primarily at generative artificial intelligence and high-performance computing, which require processing very large amounts of data and performing complex calculations, so it is unlikely to appear in client products such as consumer GPUs. SK hynix expects to begin production of HBM4 in 2026.
