Samsung secures NVIDIA's backing and moves forward with crucial memory for AI accelerators
Samsung has reached a decisive turning point in the AI market by securing NVIDIA's approval for its next-generation HBM4 memory. The green light paves the way for its use in upcoming AI accelerators, a segment currently driven by record demand and intense competition.
According to reports from South Korea, the tech giant plans to begin mass production of HBM4 starting in February 2026. This strategy aims to avoid the supply bottlenecks faced with HBM3E, which is currently in high demand among major AI clients.
HBM4 plays a central role in advanced computing systems, allowing for the handling of massive data volumes while reducing latency between the processor and memory. Passing NVIDIA’s quality tests indicates sufficient technical maturity for integration into commercial platforms.
Production is expected to be concentrated at the Pyeongtaek complex, one of Samsung's primary semiconductor hubs. There, the company intends to adopt a 10-nanometer process for the “base die,” a choice that may offer a competitive edge over its direct rival, SK Hynix, which is focused on 12-nanometer technology.
Internal tests indicate per-pin data rates of up to 11.7 gigabits per second, meeting the requirements of generative AI workloads and high-performance computing. These figures reinforce HBM4's appeal for data centers and supercomputers.
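To put the 11.7 Gb/s figure in context, a quick back-of-the-envelope calculation gives the theoretical bandwidth of a single memory stack. The 2048-bit per-stack interface width is an assumption taken from the JEDEC HBM4 standard, not from the article itself:

```python
# Rough HBM4 per-stack bandwidth estimate.
# Assumption: 2048-bit interface width per stack, per the JEDEC HBM4 standard.
PIN_RATE_GBPS = 11.7          # gigabits per second, per pin (figure from the article)
INTERFACE_WIDTH_BITS = 2048   # pins per HBM4 stack (assumed, per JEDEC HBM4)

aggregate_gbit_s = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS  # total gigabits/s per stack
aggregate_gbyte_s = aggregate_gbit_s / 8                 # convert bits to bytes

print(f"Per-stack bandwidth: {aggregate_gbyte_s:,.0f} GB/s")
# → Per-stack bandwidth: 2,995 GB/s
```

That is roughly 3 TB/s per stack; an accelerator package carrying several stacks would multiply this figure accordingly.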
The primary initial destination for the new memory is expected to be NVIDIA’s next AI accelerator platform, codenamed “Rubin,” scheduled for release in the second half of 2026. This architecture promises further gains in performance and energy efficiency.
Part of the production is also expected to supply Google, which is interested in using HBM4 for its seventh-generation Tensor Processing Units (TPUs)—proprietary chips designed for large-scale AI model training and execution.
In parallel, Samsung recently introduced SOCAMM2, the second generation of its SOCAMM modules and an alternative to traditional HBM for AI infrastructure and data centers. This solution focuses on greater flexibility, high speeds, and up to 45% lower power consumption.
Even as the company considers delaying DDR4's end of life to mitigate the ongoing component shortage, it is clear that Samsung intends to keep expanding its position in the AI sector.