Nvidia to Celebrate: Samsung's Rival Begins Production of HBM3E for Blackwell Ultra GPUs
SK Hynix has announced that it will begin volume shipments by the end of 2024.
The South Korean memory giant SK Hynix has begun mass production of the world's first 12-layer HBM3E, which offers a total capacity of 36GB per stack, up from 24GB in the previous 8-layer configuration. The new design was achieved by making each DRAM die 40% thinner, allowing more layers to be stacked within the same overall package height. The company plans to start volume shipments by the end of 2024.
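The capacity figures follow directly from the per-die density; a minimal sketch, assuming all dies in a stack are equal (the 3GB-per-die figure is inferred from the stated 8-layer total, not stated in the article):

```python
# Per-die capacity implied by the stated 8-layer total (assumption: identical dies).
GB_PER_DIE = 24 / 8  # 24GB across 8 layers -> 3GB (24Gb) per DRAM die

old_stack_gb = 8 * GB_PER_DIE    # previous 8-layer HBM3E stack
new_stack_gb = 12 * GB_PER_DIE   # new 12-layer HBM3E stack

print(old_stack_gb)  # 24.0
print(new_stack_gb)  # 36.0
```

Stacking four more of the same dies is exactly what yields the 50% capacity jump from 24GB to 36GB.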
HBM3E memory operates at 9,600 MT/s per pin, which across a stack's wide interface translates into an effective bandwidth of about 1.22 TB/s. That combination makes it an ideal option for Large Language Models (LLMs) and other artificial intelligence (AI) workloads that demand both speed and high capacity: the faster data can be streamed to the processor, the more efficiently AI models can operate.
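The headline bandwidth figure is just the per-pin rate multiplied across the interface; a quick sketch, assuming the standard 1,024-bit bus that HBM generations have used per stack:

```python
# Effective per-stack bandwidth from the per-pin data rate.
DATA_RATE_MTS = 9600    # mega-transfers per second per pin (9.6 GT/s)
BUS_WIDTH_BITS = 1024   # assumed HBM interface width per stack

# MT/s * bits / 8 bits-per-byte -> MB/s; divide by 1000 for GB/s.
bandwidth_gbs = DATA_RATE_MTS * BUS_WIDTH_BITS / 8 / 1000
print(bandwidth_gbs)  # 1228.8 GB/s, i.e. roughly the 1.22 TB/s cited
```

The slight gap between the raw 1.23 TB/s and the quoted 1.22 TB/s is rounding; real-world sustained bandwidth is lower still once protocol overhead is accounted for.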
To achieve advanced memory stacking, SK Hynix utilizes innovative packaging technologies such as Through-Silicon Via (TSV) stacking and the Mass Reflow Molded Underfill (MR-MUF) process. These methods are essential for maintaining the structural integrity and heat dissipation necessary for stable, high-performance operation of the new HBM3E. Improvements in heat dissipation are particularly crucial for maintaining reliability during intensive AI processing tasks.
In addition to its increased speed and capacity, the HBM3E is designed to offer enhanced stability, with SK Hynix's patented packaging processes ensuring minimal deformation during stacking. The company's MR-MUF technology allows for better management of internal pressure, reducing the likelihood of mechanical failures and ensuring long-term durability.
SK Hynix began sampling the 12-layer HBM3E in March 2024, and Nvidia's Blackwell Ultra GPUs and AMD's Instinct MI325X accelerators are expected to be among the first products to leverage the enhanced memory, benefiting from up to 288GB of HBM3E to support complex AI computations. SK Hynix also recently turned down an advance payment of $374 million from an undisclosed company so that it could reserve enough HBM capacity for Nvidia's in-demand AI hardware.
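The 288GB figure lines up with eight stacks of the new memory per GPU; a brief sketch (the eight-stack count is an assumption typical of large AI accelerators, not stated in the article):

```python
# Per-GPU HBM capacity implied by the new stack size.
STACKS_PER_GPU = 8   # assumption: eight HBM sites, common on large AI accelerators
GB_PER_STACK = 36    # new 12-layer HBM3E stack capacity

print(STACKS_PER_GPU * GB_PER_STACK)  # 288
```

Under that assumption, the same board layout that previously topped out at 8 x 24GB = 192GB reaches 288GB simply by swapping in the taller stacks.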
Justin Kim, President of AI Infrastructure at SK Hynix, stated that "SK Hynix has once again pushed technological boundaries, demonstrating our leadership in memory for AI. We will continue our position as the global number one provider of AI memory while preparing next-generation memory products to meet the challenges of the AI era."