Nvidia and partners to create ultra-fast SOCAMM memory module for AI PC

Nvidia, together with leading manufacturers SK Hynix, Samsung and Micron, is developing a new memory standard aimed at high performance and compact dimensions. The standard is called System On Chip Advanced Memory Module (SOCAMM) and is already undergoing testing.

Image source: NVIDIA

Firstly, SOCAMM is expected to be more cost-effective than DRAM in the SO-DIMM format, since the technology allows LPDDR5X memory to be placed directly on the substrate. Secondly, SOCAMM will offer more input/output interfaces – up to 694 ports, versus 644 for LPCAMM and 260 for traditional DRAM – which should significantly increase throughput and data transfer rates.

In addition, the module is removable, which will make future hardware upgrades easier. Its compact size should also help increase the overall memory capacity of a system.

Technical details of SOCAMM are being kept under wraps for now, as the standard is being developed outside of JEDEC. According to available information, Nvidia and its partners are currently exchanging module prototypes for performance testing.
