At the North American Technology Symposium held by TSMC, SK hynix unveiled samples of its next-generation HBM4 memory and new products based on the current HBM3E.
Image source: skhynix.com
With 12-layer HBM4 memory stacks, the manufacturer achieved a bandwidth of 2 TB/s per stack, significantly higher than the 1.2 TB/s of HBM3E. SK hynix also showed samples of 16-layer HBM3E. In March, the Korean company announced that it was the first in the world to start shipping HBM4 samples to major customers, and that preparations for mass production of memory based on the new technology would be completed in the second half of 2025. Those major customers are most likely Nvidia and AMD.
To put this in gaming-hardware terms, just one HBM4 memory stack can deliver more bandwidth than the entire memory subsystem of an Nvidia GeForce RTX 5090 graphics card, which carries 32 GB of GDDR7 at 1,792 GB/s, or almost 1.8 TB/s. A single HBM4 stack offers 2 TB/s and a capacity of up to 48 GB. AI accelerators, moreover, are usually connected to multiple HBM stacks at once, letting them reach bandwidth levels that gaming graphics cards cannot. For example, a single Nvidia B200 chip connects to eight HBM3E stacks.
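As a rough back-of-the-envelope sketch, the Python snippet below multiplies out the per-stack figures quoted above. The constant names, the eight-stack count, and the assumption that every stack runs at its peak rate are illustrative simplifications based on this article's numbers, not official product specifications.

```python
# Back-of-the-envelope comparison of HBM bandwidth vs. a GDDR7 gaming card,
# using only the figures quoted in the article. Stack counts and per-stack
# speeds are illustrative assumptions, not official specifications.

GDDR7_RTX_5090_TBPS = 1.792   # 32 GB of GDDR7, ~1.8 TB/s total
HBM3E_STACK_TBPS = 1.2        # per HBM3E stack
HBM4_STACK_TBPS = 2.0         # per 12-layer HBM4 stack (SK hynix sample)

def aggregate_bandwidth(stacks: int, per_stack_tbps: float) -> float:
    """Peak bandwidth when an accelerator connects several HBM stacks in parallel."""
    return stacks * per_stack_tbps

# A single HBM4 stack already exceeds the RTX 5090's whole memory subsystem.
print(f"1x HBM4 stack:   {HBM4_STACK_TBPS:.1f} TB/s")
print(f"RTX 5090 GDDR7:  {GDDR7_RTX_5090_TBPS:.1f} TB/s")

# Eight HBM3E stacks (as the article describes for the B200) would peak at
# roughly 8 x 1.2 = 9.6 TB/s by this simple estimate.
print(f"8x HBM3E stacks: {aggregate_bandwidth(8, HBM3E_STACK_TBPS):.1f} TB/s")
```

Real accelerators may clock their stacks below these peak figures, so the totals should be read as an upper bound rather than a measured number.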