Innodisk has announced a CXL (Compute Express Link) memory module aimed at AI systems and cloud data centers. Volume shipments are planned for the first quarter of 2025.
Demand for AI servers is growing rapidly: according to TrendForce forecasts, in 2024 such systems will account for approximately 65% of the global server market by revenue. Innodisk notes that AI systems now require at least 1.2 TB of RAM to operate effectively. Traditional DDR products cannot always meet these demands, leading to problems such as underutilized CPU resources and increased latency, the company says. CXL modules are designed to remove these limitations.
Recall that CXL is a high-speed interconnect that links the host processor with accelerators, memory buffers, I/O devices, and other components. The Innodisk module uses a PCIe 5.0 x8 interface, offers a capacity of 64 GB, and delivers throughput of up to 32 GB/s. Compatibility with CXL 1.1 and 2.0 is claimed. The device comes in the E3.S 2T form factor with an EDSFF 2C connector, and its operating temperature range spans 0 to +70 °C.
According to the company, installing four 64 GB Innodisk CXL modules in a server already carrying eight 128 GB DIMMs increases total memory capacity by a quarter and total bandwidth by around 40%. In addition, CXL supports memory pooling, which optimizes resource sharing and improves overall system efficiency.
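The capacity and bandwidth figures above can be sanity-checked with simple arithmetic. The sketch below does so in Python; the per-DIMM bandwidth is an assumption (DDR5-4800, roughly 38.4 GB/s per module), since the article does not specify the baseline memory speed:

```python
# Back-of-the-envelope check of the article's claims.
# Assumption (not stated in the article): baseline DIMMs are DDR5-4800,
# i.e. about 38.4 GB/s per module.

DIMM_COUNT = 8
DIMM_CAPACITY_GB = 128
DIMM_BW_GBPS = 38.4          # assumed DDR5-4800 per-module bandwidth

CXL_COUNT = 4
CXL_CAPACITY_GB = 64
CXL_BW_GBPS = 32.0           # PCIe 5.0 x8 figure from the article

base_capacity = DIMM_COUNT * DIMM_CAPACITY_GB    # 1024 GB
added_capacity = CXL_COUNT * CXL_CAPACITY_GB     # 256 GB
capacity_gain = added_capacity / base_capacity   # 0.25 -> "a quarter"

base_bw = DIMM_COUNT * DIMM_BW_GBPS              # 307.2 GB/s
added_bw = CXL_COUNT * CXL_BW_GBPS               # 128 GB/s
bw_gain = added_bw / base_bw                     # ~0.42

print(f"capacity gain: {capacity_gain:.0%}")     # capacity gain: 25%
print(f"bandwidth gain: {bw_gain:.0%}")          # bandwidth gain: 42%
```

Under this assumed baseline the capacity gain comes out to exactly 25%, and the bandwidth gain to roughly 40%, consistent with the company's figures.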