IBM Describes Analog AI Chip That Might Displace Power-hungry GPUs
IBM Research has developed a mixed-signal analog chip for AI inferencing that it claims may be able to match the performance of digital counterparts such as GPUs, while consuming considerably less power.
The chip, which is understood to be a research project at present, is detailed in a paper published last week in Nature Electronics. It uses a combination of phase-change memory and digital circuits to perform matrix–vector multiplications directly on network weights stored on the chip.
This isn’t the first such chip IBM has developed as part of its HERMES project, but the latest incarnation comprises 64 tiles, or compute cores, up from the 34-tile chip it presented at the IEEE VLSI Symposium in 2021. It also demonstrates many of the building blocks needed to deliver a viable low-power analog AI inference accelerator, IBM claims.
For example, the 64 cores are interconnected via an on-chip communication network, and the chip also implements additional functions necessary for processing convolutional layers.
Deep neural networks (DNNs) have driven many of the recent advances in AI, such as foundation models and generative AI, but in current architectures the memory and processing units are separate.
This means that computational tasks involve constantly shuffling data between the memory and processing units, which slows processing and is a key source of energy inefficiency, according to IBM.
IBM’s chip follows an approach called analog in-memory computing (AIMC), using phase-change memory (PCM) cells to store the weights as an analog value and also perform computations.
Each of the 64 cores of the chip contains a PCM crossbar array capable of storing a 256×256 weight matrix and performing an analog matrix–vector multiplication using input activations provided from outside the core.
This means that each core can perform the computations associated with a layer of a DNN model, with the weights encoded as analog conductance values of the PCM devices.
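To make that idea concrete, here is a minimal Python sketch of analog in-memory matrix–vector multiplication, an illustration rather than IBM's actual circuit design. The differential conductance-pair encoding and the noise level are assumptions for the sketch; note that with 64 such cores, a chip of this design holds roughly 64 × 256 × 256, or about 4.2 million, weights on-chip.

```python
# Minimal sketch (not IBM's design): one 256x256 PCM crossbar core,
# with the analog multiply-accumulate approximated as a noisy matmul.
import numpy as np

rng = np.random.default_rng(0)

CORE_DIM = 256  # each core stores a 256x256 weight matrix
weights = rng.standard_normal((CORE_DIM, CORE_DIM))

# Assumption: signed weights are encoded as differential conductance
# pairs (G+ minus G-), so each PCM device holds a non-negative value.
g_max = 1.0
scale = np.abs(weights).max()
g_pos = np.clip(weights, 0, None) / scale * g_max
g_neg = np.clip(-weights, 0, None) / scale * g_max

def aimc_mvm(x, read_noise=0.02):
    """Ohm's law does the multiplies, Kirchhoff's current law the adds;
    additive noise stands in for PCM device variation and drift."""
    ideal = (g_pos - g_neg) @ x
    return ideal + read_noise * np.abs(ideal).max() * rng.standard_normal(ideal.shape)

x = rng.standard_normal(CORE_DIM)  # input activations fed into the core
y = aimc_mvm(x)                    # one DNN layer's worth of MACs, in place
```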
The digital components are made up of a row of eight global digital processing units (GDPUs) that provide additional digital post-processing capabilities needed when processing networks with convolutional and long short-term memory (LSTM) layers.
The paper highlights how the PCM cells are programmed using digital-to-analog converters that generate programming pulses with variable current amplitudes and durations. After this, the core can perform matrix–vector multiplications by applying pulse-width-modulated (PWM) read voltage pulses to the PCM array, the output of which is digitized by an array of 256 time-based analog-to-digital converters.
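A hedged sketch of how that read scheme might look in code follows; the pulse and ADC resolutions here are assumptions, chosen only to illustrate the PWM-in, time-based-ADC-out flow, not the chip's actual bit widths.

```python
# Illustrative model of the read path: activations become pulse
# durations (PWM), column charge accumulates in the analog domain,
# and a time-based ADC converts it back to digital codes.
import numpy as np

PWM_LEVELS = 256  # assumed input pulse-duration resolution
ADC_LEVELS = 256  # assumed output code resolution

def pwm_encode(activations):
    """Quantize activations in [0, 1] to integer pulse durations."""
    return np.round(np.clip(activations, 0.0, 1.0) * (PWM_LEVELS - 1)).astype(int)

def read_mvm(conductances, activations):
    """Charge on each column ~ sum_j G_ij * duration_j; the ADC then
    maps accumulated charge to an integer code."""
    durations = pwm_encode(activations)
    charge = conductances @ durations  # analog accumulation
    scale = (ADC_LEVELS - 1) / max(charge.max(), 1e-9)
    return np.round(charge * scale).astype(int)

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(256, 256))  # non-negative conductances
codes = read_mvm(G, rng.uniform(0.0, 1.0, size=256))
```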
This is an oversimplification, of course, as the IBM paper published in Nature Electronics goes into exhaustive detail on how the circuitry within each AIMC core operates to process the weights of a deep learning model.
The paper also demonstrates that the chip achieves near-software-equivalent inference accuracy, said to be 92.81 percent on the CIFAR-10 image dataset.
IBM also claims that the measured matrix–vector multiplication throughput per unit area of 400 giga-operations per second per square millimeter (400 GOPS/mm²) is more than 15 times higher than that of previous multicore chips based on resistive memory, while achieving comparable energy efficiency.
IBM does not appear to provide a useful energy efficiency comparison with other AI processing systems such as GPUs, but does mention that during tests, a single input to ResNet-9 was processed in 1.52 μs and consumed 1.51 μJ of energy.
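Taking those two quoted figures at face value, a quick back-of-the-envelope calculation pins down what they imply for power draw and per-chip throughput:

```python
# Derived purely from the figures quoted above: 1.52 microseconds and
# 1.51 microjoules per ResNet-9 input.
latency_s = 1.52e-6
energy_j = 1.51e-6

power_w = energy_j / latency_s       # ~0.99 W while the chip is busy
inferences_per_s = 1.0 / latency_s   # ~658,000 inputs per second
print(f"power ~ {power_w:.2f} W, throughput ~ {inferences_per_s:,.0f} inferences/s")
```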
IBM’s paper claims that with additional digital circuitry to enable the layer-to-layer activation transfers and intermediate activation storage in local memory, it should be possible to run fully pipelined end-to-end inference workloads on chips such as this.
The authors said that further improvements in weight density would also be required for AIMC accelerators to become a strong competitor to existing digital solutions such as GPUs.
The chips used in testing were fabricated using a 14nm process at IBM’s Albany Nanotech Center in New York, and run at a maximum matrix–vector multiplication clock frequency of 1GHz.
IBM isn’t the only company working on analog chips for AI. Last year, another research paper published in Nature described an experimental chip that stored weights in resistive RAM (RRAM). It was estimated that the chip in question would consume less than 2 microwatts of power to run a typical real-time keyword spotting task.
In contrast, the typical compute infrastructure used for AI tasks using GPUs is getting ever more power hungry. It was reported this month that some datacenter operators are now supporting up to 70 kilowatts per rack for infrastructure intended for AI processing, while traditional workloads typically require no more than 10 kilowatts per rack. ®