NEO Semiconductor unveils 3D X-AI memory that can handle AI workloads without a GPU

NEO Semiconductor has introduced 3D X-AI chips designed to replace the HBM memory used in modern GPU-based accelerators. This 3D DRAM has a built-in AI processing engine that handles data streams in place, so the host processor does not need to perform the mathematical calculations itself. That sidesteps the bandwidth bottleneck on the bus between processor and memory, improving the performance and efficiency of AI systems.

Image source: NEO Semiconductor

At the heart of the 3D X-AI chip is a layer of neural circuitry that processes data from 300 layers of memory on the same die. The memory density is eight times that of current HBM, and 8,000 neural circuits deliver a claimed 100-fold performance increase by processing data directly in memory. Because far less data has to leave the chip, the accelerator’s energy consumption drops by a claimed 99%.

“Existing AI chips waste significant performance and power resources due to architectural and technological inefficiencies. Today’s AI chip architecture stores data in the HBM and delegates all computation to the GPU. This architecture of separated data storage and processing inevitably turns the bus into a performance bottleneck. Transferring huge amounts of data through the bus reduces performance and greatly increases energy consumption. 3D X-AI can perform AI processing in each HBM chip. This can significantly reduce the amount of data transferred between the HBM and the GPU to improve performance and significantly reduce power consumption,” said NEO Semiconductor founder and CEO Andy Hsu.

A single 3D X-AI die has a capacity of 128 GB and offers AI processing speeds of 10 TB/s. Twelve dies in one HBM package can provide 1.5 TB of memory with a bandwidth of 120 TB/s. Developers of AI hardware are exploring ways to improve throughput: semiconductors keep getting faster and more efficient, but the bus between components remains a bottleneck. Intel, Kioxia and TSMC, for example, are working on optical technologies to speed up communication between components on a motherboard. By moving some AI workload processing from the GPU into the HBM itself, NEO Semiconductor’s solution could make accelerators much more efficient than they are today.
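The per-package figures follow from simple scaling of the per-die numbers. A quick sanity check, assuming capacity and bandwidth scale linearly with die count (and taking 1 TB = 1024 GB):

```python
# Illustrative arithmetic only: scale the quoted per-die figures
# up to one 12-die HBM package.
DIE_CAPACITY_GB = 128    # capacity of a single 3D X-AI die
DIE_BANDWIDTH_TBS = 10   # AI processing throughput of a single die, TB/s
DIES_PER_PACKAGE = 12    # dies stacked in one HBM package

package_capacity_tb = DIE_CAPACITY_GB * DIES_PER_PACKAGE / 1024
package_bandwidth_tbs = DIE_BANDWIDTH_TBS * DIES_PER_PACKAGE

print(f"Package capacity:  {package_capacity_tb:.1f} TB")   # → 1.5 TB
print(f"Package bandwidth: {package_bandwidth_tbs} TB/s")   # → 120 TB/s
```

The result matches the quoted 1.5 TB and 120 TB/s, confirming the article's numbers are straight linear scaling of the single-die specs.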
