Machine Learning Innovation Significantly Reduces Computer Power Consumption
A framework that uses machine learning to make power-management decisions can reduce energy use by up to 60% without affecting computing performance in the multi-core processors that run large servers around the world.
The innovation could lead to more efficient computing, especially in large data centers, which produce about 1% of global energy-related greenhouse gas emissions, according to the International Energy Agency. The Washington State University and Intel researchers received a best paper award for the work at the 2023 ACM/IEEE International Symposium on Low Power Electronics and Design.
In their work, the researchers used machine learning algorithms to manage power usage, selecting voltage and frequency levels for different clusters of a large, 64-core computer processor. These multi-core processors are typically used in servers or supercomputers. The researchers’ framework learned highly optimized ways to manage power and is scalable, so it could be used to improve the energy efficiency of even larger multi-core processors. The algorithm also didn’t reduce the performance of the multi-core processor.
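As a rough illustration only, and not the researchers’ implementation, the Python sketch below shows what a per-cluster voltage/frequency (V/F) decision might look like. The operating points, the feature set, and the simple rule standing in for the learned policy are all hypothetical.

    from dataclasses import dataclass
    import random

    # Discrete V/F operating points as (volts, GHz); values are illustrative.
    VF_LEVELS = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0), (1.1, 2.5)]

    @dataclass
    class ClusterState:
        utilization: float      # fraction of cycles the cores are busy
        mem_stall_ratio: float  # fraction of cycles stalled on memory

    def policy(state: ClusterState) -> tuple[float, float]:
        """Toy stand-in for a learned policy: memory-bound clusters get a
        low V/F level (little performance is lost), compute-bound clusters
        get a high one, and everything else sits in between."""
        if state.mem_stall_ratio > 0.5:
            return VF_LEVELS[0]
        if state.utilization > 0.8:
            return VF_LEVELS[-1]
        return VF_LEVELS[1]

    # One decision epoch: pick a V/F level independently for each of 8
    # clusters of an assumed 64-core chip (8 cores per cluster).
    clusters = [ClusterState(random.random(), random.random()) for _ in range(8)]
    for i, c in enumerate(clusters):
        v, f = policy(c)
        print(f"cluster {i}: {v:.1f} V @ {f:.1f} GHz")

Deciding per cluster rather than per core keeps the number of decisions manageable as the chip grows, which is one way a scheme like this can remain scalable.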
“We were able to come up with better decision making to determine the voltage and frequency level, so that we had significant energy savings without sacrificing performance,” said Partha Pande, a corresponding author on the paper and WSU Boeing Centennial Chair in Computer Engineering.
While researchers have used other techniques to control the voltage and frequency of processors, this team took its novel machine learning approach a step further, showing that it can fine-tune voltage and frequency to deliver significant power savings, said Pande.
“If you have a decision-making policy that tunes the voltage and frequency, you need to know when you are confident that your decision is good in terms of saving energy without loss of performance,” said Jana Doppa, a corresponding author on the paper and WSU Huie-Rogers Endowed Chair in Computer Science. “So when you think your decision is good, and you are pretty certain in a formal sense, then you don’t want to do any additional learning in such situations.”
If the machine learning algorithm can’t resolve the ambiguity about the correct decision in a particular scenario, it can efficiently search for an alternative, using, for instance, power and performance models to figure out the best option within a small set of ambiguous possibilities, he said.
“So that’s a useful training example, and then we can actually improve the decision-making policy for the future,” said Doppa.
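To make that loop concrete, here is a minimal, self-contained Python sketch. Everything in it is an assumption made for illustration: the table-based policy, its crude confidence estimate, and the toy power and performance models are not from the paper. But it captures the gating idea the researchers describe: act immediately when the policy is confident, and only for ambiguous cases consult the models over a small candidate set, act on the winner, and keep it as a training example.

    VF_LEVELS = [1.0, 1.5, 2.0, 2.5]  # candidate frequencies in GHz, illustrative

    def energy_model(freq, utilization):
        # Toy model: dynamic power grows roughly with f^3 (voltage tracks f).
        return utilization * freq ** 3

    def perf_model(freq, utilization):
        # Toy model: delivered throughput scales with frequency and utilization.
        return utilization * freq

    class TablePolicy:
        """Toy stand-in for a learned policy: a lookup table over coarse
        utilization buckets, with confidence growing as a bucket is trained."""
        def __init__(self):
            self.table = {}  # bucket -> (chosen frequency, times trained)

        def _bucket(self, utilization):
            return int(utilization * 10)

        def predict(self, utilization):
            freq, count = self.table.get(self._bucket(utilization), (VF_LEVELS[0], 0))
            confidence = min(count / 5.0, 1.0)  # crude confidence estimate
            return freq, confidence

        def add_training_example(self, utilization, freq):
            bucket = self._bucket(utilization)
            _, count = self.table.get(bucket, (freq, 0))
            self.table[bucket] = (freq, count + 1)

    def decide_and_learn(policy, utilization, perf_target, threshold=0.8):
        freq, confidence = policy.predict(utilization)
        if confidence >= threshold:
            return freq  # confident: act, skip any further learning
        # Ambiguous: rank a small candidate set with the models and keep the
        # cheapest option that still meets the performance target.
        feasible = [f for f in VF_LEVELS if perf_model(f, utilization) >= perf_target]
        best = min(feasible or VF_LEVELS, key=lambda f: energy_model(f, utilization))
        policy.add_training_example(utilization, best)  # the "useful training example"
        return best

    policy = TablePolicy()
    for util in [0.30, 0.31, 0.90, 0.88, 0.30]:
        freq = decide_and_learn(policy, util, perf_target=0.5)
        print(f"utilization {util:.2f} -> run at {freq} GHz")

The gate reflects the quote above: when a decision is good “in a formal sense,” no additional learning is done, and only the ambiguous cases trigger the model-guided search that produces new training examples.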
The methodology is fundamental and is intended both for future, larger computing systems with 500 or even 1,000 cores and for very small embedded systems, he said. The researchers hope the work will someday lead to further improvements in energy efficiency.
“Currently we can’t have a real data center in my office or on my phone, but that is the vision – getting server-scale performance from handheld devices,” said Pande. “This is pushing the boundary of the state-of-the-art.”
In addition to Doppa and Pande, the work was led by graduate student Gaurav Narang. The research team also included Raid Ayoub and Michael Kishinevsky of Intel Corporation. The work was funded by the Semiconductor Research Corporation.