Micron Technology today announced the availability of the Micron 9550 series of NVMe drives, which it positions as the world’s fastest PCIe 5.0 data center SSDs, offering industry-leading energy efficiency and performance for demanding workloads such as AI. The new drives use 232-layer TLC NAND flash, come in U.2 (15 mm), E3.S (1T) and E1.S (15 mm) form factors, and offer capacities from 3.2 TB to 30.72 TB.

The Micron 9550 delivers sequential read speeds of up to 14.0 GB/s and sequential write speeds of up to 10.0 GB/s, which the company says is up to 67% faster than comparable SSDs from competitors such as Kioxia and Samsung. Random read and write performance reaches 3.3 million IOPS and 400,000 IOPS, respectively, up to 35% and up to 33% higher than competing offerings.


The Micron 9550 supports TCG Opal 2.01, SPDM 1.2, OCP 2.5 telemetry, self-encrypting drive (SED) functionality, end-to-end data integrity, signed firmware, the Micron Secure Execution Environment and power loss protection, and is compliant with NVMe 2.0b and OCP 2.0 (r21). FIPS 140-3 Level 2 and TAA-compliant options are also available.

The company offers two versions of the Micron 9550: the 9550 PRO for read-intensive workloads, rated at 1 DWPD (one full drive write per day) over the five-year warranty period, and the 9550 MAX for mixed workloads, rated at 3 DWPD. The E1.S variant is available only for the 9550 PRO. The 9550 PRO comes in capacities of 3.84/7.68/15.36/30.72 TB, while the 9550 MAX offers smaller capacities of 3.2/6.4/12.8/25.6 TB.
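For context, a DWPD rating can be translated into an approximate total-bytes-written figure. The sketch below is a back-of-the-envelope calculation based only on the capacities and DWPD ratings quoted above, not on Micron’s published TBW specifications.

```python
# Rough endurance estimate from a DWPD rating: a sketch, not Micron's official
# TBW figures. Assumes TBW ≈ capacity × DWPD × warranty days.

WARRANTY_YEARS = 5
DAYS_PER_YEAR = 365

def estimated_tbw(capacity_tb: float, dwpd: float) -> float:
    """Approximate total terabytes written over the warranty period."""
    return capacity_tb * dwpd * WARRANTY_YEARS * DAYS_PER_YEAR

# 9550 PRO (1 DWPD) and 9550 MAX (3 DWPD) capacity points from the article.
for model, dwpd, capacities in [
    ("9550 PRO", 1, [3.84, 7.68, 15.36, 30.72]),
    ("9550 MAX", 3, [3.2, 6.4, 12.8, 25.6]),
]:
    for cap in capacities:
        print(f"{model} {cap:>5} TB: ~{estimated_tbw(cap, dwpd):,.0f} TBW")
```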

As Micron notes, AI workloads demand high-performance storage, and the sequential and random read/write performance of the 9550 makes it well suited to exactly these scenarios. The drives support the Big Accelerator Memory (BaM) and GPU-Initiated Direct Storage (GIDS) architectures.

For example, large language models (LLMs) require high sequential read performance, while graph neural networks (GNNs) depend on high random read performance. The company says the Micron 9550 outperforms competitors’ offerings on AI workloads, with up to 33% faster execution times, up to 60% faster BaM aggregation, and up to 34% higher throughput when using NVIDIA Magnum IO GPUDirect Storage (GDS).
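For readers unfamiliar with GPUDirect Storage, the sketch below shows what a GDS-style read can look like using the open-source kvikio library (a RAPIDS wrapper around NVIDIA’s cuFile API). The file path and buffer size are placeholders; this is a generic illustration, not code from Micron’s benchmarks.

```python
# Illustrative GPUDirect Storage read using kvikio (RAPIDS wrapper around
# NVIDIA's cuFile API). Path and size are placeholders, not Micron test code.
import cupy
import kvikio

N = 1 << 20  # number of float32 elements to read (placeholder size)
buf = cupy.empty(N, dtype=cupy.float32)  # destination buffer in GPU memory

# Read file data on the NVMe drive directly into GPU memory, avoiding a
# bounce buffer in host RAM when GDS is available on the system.
f = kvikio.CuFile("/mnt/nvme/embeddings.bin", "r")
nbytes = f.read(buf)
f.close()

print(f"Read {nbytes} bytes directly into GPU memory")
```

In BaM- and GIDS-style pipelines the same idea is taken further, with the GPU itself issuing the I/O requests, which is where the random read IOPS figures quoted above matter most.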

According to the press release, the Micron 9550 delivers industry-leading power efficiency to support a variety of AI workloads, including:

  • Graph neural network training with BaM: up to 43% lower average SSD power consumption and up to 29% lower overall server power consumption.
  • NVIDIA Magnum IO GPUDirect Storage: up to 81% less energy consumed per terabyte transferred.
  • MLPerf: up to 35% less drive energy consumption and up to 13% less system energy use.
  • Fine-tuning a Llama LLM with Microsoft DeepSpeed: up to 21% lower SSD power consumption (a configuration sketch follows this list).
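
As a hedged illustration of the last item, DeepSpeed’s ZeRO-Infinity can spill optimizer state (and optionally parameters) to a local NVMe drive, which is what makes SSD throughput and efficiency relevant to LLM fine-tuning. The configuration below is a generic sketch assuming an SSD mounted at /local_nvme; it is not taken from Micron’s test setup.

```python
# Minimal DeepSpeed ZeRO-Infinity configuration sketch with NVMe offload.
# /local_nvme is an assumed mount point for the SSD; tune batch, bucket and
# aio settings for a real deployment. Pass this dict to deepspeed.initialize().
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # Spill optimizer state to the NVMe drive instead of host RAM.
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
            "pin_memory": True,
        },
        # Optionally spill parameters to NVMe as well.
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
        },
    },
    # aio settings control how DeepSpeed issues asynchronous reads/writes
    # to the SSD; the values here are illustrative defaults.
    "aio": {
        "block_size": 1048576,
        "queue_depth": 8,
        "single_submit": False,
        "overlap_events": True,
    },
}
```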

The Micron 9550 is built on a vertically integrated architecture using Micron-developed technologies, providing design flexibility and advanced security capabilities, the company says. Micron also partners with NVIDIA and open-source developers to ensure its solutions meet the needs of the most demanding AI workloads.
