Researchers at the University of Minnesota Twin Cities have unveiled a hardware device that could revolutionize artificial intelligence (AI) computing.
The researchers claim that their device, called Computational Random Access Memory (CRAM), will solve one of the field’s most pressing challenges by reducing the energy consumption of AI applications by a factor of at least 1,000.
The International Energy Agency (IEA) recently projected that AI energy consumption will more than double, from 460 terawatt-hours (TWh) in 2022 to a staggering 1,000 TWh by 2026, roughly equivalent to Japan’s entire electricity use.
“This work is the first experimental demonstration of a CRAM that can process data entirely within the memory array, without the data ever leaving the grid where the computer stores the information,” explained Yang Lv, a postdoctoral researcher in the university’s Department of Electrical and Computer Engineering and lead author of the study.
Traditional AI techniques consume significant power to transfer data between logic units (where information is processed) and memory (where it is stored), but CRAM eliminates the need for these energy-intensive transfers by keeping data in memory.
The custom hardware device could help make artificial intelligence more energy efficient. (Source: University of Minnesota Twin Cities)
Two Decades in the Making
The researchers estimate that their CRAM-based machine learning accelerator could deliver energy savings of up to 2,500 times compared with traditional methods.
This major breakthrough didn’t happen overnight; it is the result of more than 20 years of research spearheaded by Jian-Ping Wang, Distinguished McKnight Professor and Robert F. Hartmann Chair of the Department of Electrical and Computer Engineering.
“Twenty years ago, the original concept of using memory cells directly for computing seemed far-fetched,” Wang recalled in a press release. “Thanks to an ever-evolving group of students since 2003 and the truly interdisciplinary faculty we’ve built at the University of Minnesota, from physics, materials science and engineering, computer science and engineering, to modeling and benchmarking, to hardware creation, we’ve seen positive results that have now demonstrated that this kind of technique is feasible and ready to be integrated into technology.”
The CRAM architecture builds on the team’s previous work on magnetic tunnel junctions (MTJs) and nanostructured devices, which already have applications in hard drives, sensors and other microelectronic systems. MTJs form the basis of magnetic random access memory (MRAM), which is implemented in microcontrollers and smartwatches.
Rethinking Computer Architecture for AI
CRAM represents a radical departure from the traditional von Neumann architecture that underpins most modern computers. By performing computations directly within the memory cells, it eliminates a long-standing problem in computer design: the bottleneck created by shuttling data between processor and memory.
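In rough terms, the savings come from removing per-operation data movement. A minimal back-of-the-envelope sketch makes this concrete; the energy figures below are hypothetical placeholders for illustration, not measurements from the study:

```python
# Toy energy model illustrating the von Neumann memory bottleneck.
# The per-operation energy numbers are hypothetical, chosen only to
# show that data movement, not computation, dominates the budget.

E_TRANSFER_PJ = 100.0  # energy to move one operand between memory and logic (pJ)
E_COMPUTE_PJ = 1.0     # energy to perform one logic operation (pJ)

def von_neumann_energy(n_ops: int) -> float:
    """Each operation fetches two operands and writes one result back."""
    return n_ops * (3 * E_TRANSFER_PJ + E_COMPUTE_PJ)

def in_memory_energy(n_ops: int) -> float:
    """Data never leaves the memory array; only the compute cost remains."""
    return n_ops * E_COMPUTE_PJ

n = 1_000_000
ratio = von_neumann_energy(n) / in_memory_energy(n)
print(f"energy ratio: {ratio:.0f}x")  # prints "energy ratio: 301x"
```

With these placeholder numbers, eliminating the three transfers per operation alone yields a ~300-fold saving; the point is that once moving a bit costs far more than flipping it, keeping computation inside the memory array is where the orders-of-magnitude gains come from.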
“As an extremely energy-efficient digital-based in-memory computing substrate, CRAM is extremely flexible in that it allows computations to be performed anywhere within the memory array,” emphasized Ulya Karpuzcu, associate professor in the Department of Electrical and Computer Engineering and co-author of the paper.
“Thus, CRAM can be reconfigured to best suit the performance needs of diverse AI algorithms.”
The technology makes use of spintronic devices, which use the spin of electrons rather than their charge to store data. This approach offers significant advantages over traditional transistor-based chips, including higher speed, lower power consumption and resistance to harsh environments.
“This is more energy efficient than today’s traditional building blocks for AI systems,” Karpuzcu added. “CRAM performs computations directly within the memory cells and efficiently utilizes array structures, eliminating the need for slow, energy-intensive data transfers.”
The research team, which has already received multiple patents, is now working with semiconductor industry leaders, including in Minnesota, to expand the demonstration and produce hardware that can improve AI capabilities.
Details of the team’s research were published in the peer-reviewed journal npj Unconventional Computing.