The modern memory market follows a familiar pattern: over time, Chinese companies begin producing chips built to established standards, after which prices fall to levels at which production is no longer profitable for the major players. Under these conditions, Samsung has chosen to wind down production of LPDDR4 memory.
Image Source: Samsung Electronics
This was reported by Commercial Times, citing a notice sent to Samsung Electronics customers. Starting this month, the company is winding down deliveries of 8 Gbit LPDDR4 chips manufactured on its 1z-class process technology. Orders will be accepted until June of this year, and the last batch will ship in October. Producing such memory is simply no longer profitable for Samsung, since Chinese competitors such as CXMT offer it in substantial volumes at lower prices. Samsung prefers to focus on more profitable products such as LPDDR5 and HBM.
Taiwanese memory makers such as Winbond Electronics and Nanya Semiconductor are currently focused on DDR4, so Samsung’s withdrawal from this market segment will benefit them. U.S. tariffs may also have influenced Samsung’s decision. Although semiconductors themselves are currently exempt from the increased tariffs, demand for finished electronic devices in the U.S. is expected to fall as prices rise under the new tariffs, making that market less attractive for Samsung in the absence of localized production.
According to some estimates, demand for memory chips this year will grow not by 12.8%, as forecast before Trump introduced the higher tariffs, but by 4.8% at most; the pessimistic scenario puts growth at only 3.5%. In the NAND segment, selling prices are already close to cost, so manufacturers will try to cut supply by 10–20%, which should lend some support to prices.
For Samsung, as Chinese sources note, the HBM segment will not remain a “safe haven” forever: Chinese manufacturers hope to master HBM3 production by 2026 and move on to HBM3E by 2027. This memory is needed by local developers of accelerators for artificial intelligence systems.