Friday, April 21, 2023

SK hynix Now Sampling 24GB HBM3 Stacks, Preparing for Mass Production

When SK hynix initially announced its HBM3 memory portfolio in late 2021, the company said it was developing both 8-Hi 16GB memory stacks as well as even more technically complex 12-Hi 24GB memory stacks. Now, almost 18 months after that initial announcement, SK hynix has finally begun sampling its 24GB HBM3 stacks to multiple customers, with the aim of reaching mass production and market availability in the second half of the year. All of which should be a very welcome development for SK hynix's downstream customers, many of whom are champing at the bit for additional memory capacity to meet the needs of large language models and other high-end computing uses.

Based on the same technology as SK hynix's existing 16GB HBM3 memory modules, the 24GB stacks are designed to further improve on the density of the overall HBM3 memory module by increasing the number of DRAM layers from 8 to 12 – adding 50% more layers for 50% more capacity. This is something that's been in the HBM specification for quite some time, but it's proven difficult to pull off as it requires making the extremely thin DRAM dies in a stack even thinner in order to squeeze more in.

Standard HBM DRAM packages are typically 700–800 microns high (Samsung claims its 8-Hi and 12-Hi HBM2E are 720 microns high), and, ideally, that height needs to be maintained in order for these denser stacks to be physically compatible with existing product designs, and to a lesser extent to avoid towering over the processors they're paired with. As a result, to pack 12 memory devices into a standard KGSD (known good stacked die), memory producers must either shrink the thickness of each DRAM layer without compromising performance or yield, reduce the space between layers, minimize the base layer, or employ a combination of all three measures.

While SK hynix's latest press release offers limited details, the company has apparently gone for thinning out the DRAM dies and the space between them with an improved underfill material. For the DRAM dies themselves, SK hynix has previously stated that they've been able to shave their die thickness down to 30 microns. Meanwhile, the improved underfill material on their 12-Hi stacks comes as part of the company's new Mass Reflow Molded Underfill (MR-MUF) packaging technology. This technique involves bonding the DRAM dies together all at once via the reflow process, while simultaneously filling the gaps between the dies with the underfill material.
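For a rough sense of why both the dies and the gaps between them need thinning, here's a back-of-the-envelope z-height budget. The 720 micron package height and 30 micron die thickness come from the figures above; the base die thickness and top-side allowance are illustrative assumptions, not SK hynix figures.

```python
# Back-of-the-envelope z-height budget for a 12-Hi HBM stack.
# The 720 um package height and 30 um DRAM die thickness are from the
# figures quoted above; the base die and top allowance are assumptions.
PACKAGE_HEIGHT_UM = 720   # Samsung's quoted 8-Hi/12-Hi HBM2E package height
DRAM_DIE_UM = 30          # SK hynix's quoted thinned DRAM die thickness
DRAM_LAYERS = 12
BASE_DIE_UM = 100         # assumption: base/logic die thickness
TOP_ALLOWANCE_UM = 100    # assumption: mold cap and top-side allowance

silicon_total = DRAM_LAYERS * DRAM_DIE_UM                  # 360 um of DRAM silicon
remaining = PACKAGE_HEIGHT_UM - silicon_total - BASE_DIE_UM - TOP_ALLOWANCE_UM
gap_budget = remaining / (DRAM_LAYERS - 1)                 # underfill per interface

print(f"DRAM silicon: {silicon_total} um")
print(f"Left for {DRAM_LAYERS - 1} underfill gaps: {remaining} um "
      f"(~{gap_budget:.0f} um per gap)")
```

Under these assumptions only a dozen or so microns remain per gap, which is why the underfill material itself becomes a limiting factor.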

SK hynix calls their improved underfill material "liquid Epoxy Molding Compound", or "liquid EMC", which replaces the non-conductive film (NCF) used in older generations of HBM. Of particular interest here, besides the thinner layers this allows, is that according to SK hynix, liquid EMC offers twice the thermal conductivity of NCF. Keeping the lower layers of stacked chips reasonably cool has been one of the biggest challenges with chip stacking technology of all varieties, so doubling the thermal conductivity of their fill material marks a significant improvement for SK hynix. It should go a long way towards making 12-Hi stacks more viable by better dissipating heat from the well-buried lowest-level dies.
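To illustrate why that matters for the buried dies, here's a simplified one-dimensional series-conduction sketch. Only the 2x ratio between liquid EMC and NCF comes from SK hynix; the absolute conductivity values, gap thickness, and die area are illustrative assumptions.

```python
# Simplified 1D series-conduction model: heat from the bottom DRAM die must
# cross every underfill gap above it. Only the 2x conductivity ratio comes
# from SK hynix; the k values, gap thickness, and area are assumptions.
GAP_THICKNESS_M = 15e-6     # assumed underfill gap, per the budget sketch above
DIE_AREA_M2 = 1e-4          # assumption: roughly 10 mm x 10 mm die
K_NCF = 0.3                 # W/(m*K), assumed conductivity of the older NCF
K_LIQUID_EMC = 2 * K_NCF    # 2x NCF, per SK hynix's claim
GAPS_ABOVE_BOTTOM = 11      # interfaces above the lowest die in a 12-Hi stack

def underfill_resistance(k: float) -> float:
    """Series thermal resistance of the underfill gaps: R = n * t / (k * A)."""
    return GAPS_ABOVE_BOTTOM * GAP_THICKNESS_M / (k * DIE_AREA_M2)

for name, k in (("NCF", K_NCF), ("liquid EMC", K_LIQUID_EMC)):
    print(f"{name}: {underfill_resistance(k):.2f} K/W through the underfill")
# Doubling k halves the underfill's contribution to the conductive path,
# so the buried bottom die runs cooler for the same power.
```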

Assembly aside, the performance specifications for SK hynix's 24GB HBM3 stacks are identical to their existing 16GB stacks. That means a maximum data transfer speed of 6.4Gbps/pin running over a 1024-bit interface, providing a total bandwidth of 819.2 GB/s per stack.
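Those figures follow directly from the interface arithmetic, as a quick check shows:

```python
# Per-stack HBM3 bandwidth from the figures quoted above.
DATA_RATE_GBPS = 6.4      # Gbps per pin
INTERFACE_BITS = 1024     # interface width in bits

bandwidth_gb_s = DATA_RATE_GBPS * INTERFACE_BITS / 8  # bits -> bytes
print(f"{bandwidth_gb_s:.1f} GB/s per stack")         # 819.2 GB/s
```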

Ultimately, all the assembly difficulties with 12-Hi HBM3 stacks should be more than justified by the benefits that the additional memory capacity brings. SK hynix's major customers are already employing 6+ HBM3 stacks on a single product in order to deliver the total bandwidth and memory capacities they deem necessary. A 50% boost in memory capacity, in turn, will be a significant boon to products such as GPUs and other forms of AI accelerators, especially as the current era of large language models has seen memory capacity become a bottleneck in model training. NVIDIA is already pushing the envelope on memory capacity with their H100 NVL – a specialized, 96GB H100 SKU that enables the normally-reserved sixth stack of memory – so it's easy to see how they would be eager to offer 120GB/144GB H100 parts using 24GB HBM3 stacks.
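As a quick illustration of where those capacity figures come from, here's a tally over a six-stack accelerator layout. The five-active (standard) versus six-active (NVL) split is an inference from the 96GB/120GB/144GB numbers above, not an NVIDIA spec sheet.

```python
# Capacity tally for an accelerator with six HBM3 sites, as implied by the
# 96GB/120GB/144GB H100 figures above. The five-active (standard) versus
# six-active (NVL) split is an inference from those numbers, not a spec sheet.
for stack_gb in (16, 24):
    for active_stacks in (5, 6):
        print(f"{active_stacks} x {stack_gb}GB stacks = "
              f"{active_stacks * stack_gb} GB")
# 5 x 16GB =  80 GB  (standard H100)
# 6 x 16GB =  96 GB  (H100 NVL)
# 5 x 24GB = 120 GB
# 6 x 24GB = 144 GB
```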

Source: SK hynix


