Micron, SK Hynix Shipping Bandwidth-Boosting LPDDR5 For On-Device AI

Memory vendors Micron and SK Hynix this week began shipping their first LPDDR5 memory modules capable of achieving speeds up to 9,600 MT/s.

For reference, that's roughly 12 percent faster than the 8,533 MT/s ceiling of the standard LPDDR5X spec, and between 30 and 50 percent faster than the memory found in most thin-and-light notebooks.
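The arithmetic behind those percentages is easy to check. A minimal sketch, assuming the JEDEC LPDDR5X ceiling of 8,533 MT/s and typical laptop parts in the 6,400-7,500 MT/s range:

```python
def pct_faster(new_mts: int, old_mts: int) -> float:
    """Percentage speed-up of one transfer rate over another."""
    return (new_mts / old_mts - 1) * 100

# New 9,600 MT/s modules versus the spec ceiling and typical laptop memory
print(f"vs LPDDR5X spec (8,533 MT/s): {pct_faster(9600, 8533):.1f}%")  # ~12.5%
print(f"vs LPDDR5X-7500:              {pct_faster(9600, 7500):.1f}%")  # 28.0%
print(f"vs LPDDR5-6400:               {pct_faster(9600, 6400):.1f}%")  # 50.0%
```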

That speed translates into higher memory bandwidth, something that's become increasingly important as chipmakers have boosted core counts and embedded ever faster GPUs, neural processing units, and other co-processors into their system on chips (SoCs).

For instance, with this week's announcement of Qualcomm's Snapdragon 8 Gen 3, the silicon slinger is betting on a future where customers run generative AI models, such as Meta's Llama 2 large language model or the Stable Diffusion image generator, entirely on their personal devices.

Most GPUs and accelerators used to run AI workloads rely on speedy GDDR or high-bandwidth memory (HBM) modules. However, in a slim laptop, tablet, or smartphone this isn't always practical, and the CPU, GPU, and other co-processors must often share a common pool of DDR5.

One technique for preventing bandwidth from becoming a bottleneck is co-packaging the memory alongside the compute dies. Apple's M-series processors are a prime example of this approach, with the memory modules mounted on the same package as the CPU and co-processors.

Apple's M2 Max, for the moment its most powerful notebook SoC, can deliver 400 GB/s of memory bandwidth to the CPU and GPU. To put that in perspective, that's just shy of the 460 GB/s of bandwidth AMD's fourth-gen Epyc datacenter CPUs can manage when all 12 of their memory channels are fully populated.

If Apple were to move to Micron or SK Hynix's latest 9,600 MT/s memory, it might just be able to eke out another 200 GB/s of bandwidth.
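As a rough sanity check on that figure: peak bandwidth is simply bus width times transfer rate. The sketch below assumes the M2 Max's 512-bit memory bus and its current 6,400 MT/s LPDDR5 parts:

```python
def bandwidth_gbps(bus_bits: int, mts: int) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) * (megatransfers/s) / 1000."""
    return (bus_bits / 8) * mts / 1000

current = bandwidth_gbps(512, 6400)  # M2 Max today: 409.6 GB/s
faster = bandwidth_gbps(512, 9600)   # with 9,600 MT/s parts: 614.4 GB/s
print(f"{current:.1f} GB/s -> {faster:.1f} GB/s (+{faster - current:.1f} GB/s)")
```

That works out to a gain of roughly 205 GB/s, consistent with the estimate above.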

Intel is also rumored to be working on a version of its Meteor Lake processors with on-package LPDDR memory. However, it's not only in space-constrained mobile devices that we're seeing chipmakers go this route: Nvidia's 144-core Grace CPU Superchip uses LPDDR5X memory to keep the processors fed with 1 TB/s of bandwidth.

One of the downsides of LPDDR memory is that you can't upgrade the device by tossing in a higher-capacity SODIMM. LPDDR modules are designed to be soldered to the motherboard or co-packaged alongside the SoC, so taking advantage of LPDDR5's higher operating frequencies means forgoing upgradability. That's not really a problem for smartphones and tablets, but it may be a turn-off for prospective laptop buyers.

Having said that, we won't have to wait long for SK Hynix and Micron's latest memory modules to hit the market. The companies claim that Qualcomm's Snapdragon 8 Gen 3 will be among the first chips to support their 9,600 MT/s memory modules. ®
