University of Copenhagen: Neural networks behind social media can consume an infinite amount of energy

Artificial neural networks are deployed intensively by social media platforms like Twitter and Facebook to recommend content that matches user preferences. The process is energy intensive and generates heavy carbon emissions. In fact, the world’s entire energy supply could be consumed training a single neural network without the training ever being complete. Therefore, researchers behind a new study recommend that the technology be used where it benefits the public interest most.

Artificial neural networks are brain-inspired computing systems that can be trained to solve complex tasks better than humans.

These networks are frequently used in social media, streaming, online gaming and other areas where users are targeted with posts, movies, fun games or other content that matches their individual preferences. Elsewhere, neural networks are used in health care to recognize tumors on CT scans, among other things.

While the technology is incredibly effective, a Danish researcher behind a new study believes that it should not be misused. The study’s authors have demonstrated that all of the world’s energy could be used to train a single neural network without ever achieving perfection.

“The problem is that an infinite amount of energy can be used to, for example, train these neural networks just to target advertisements at us. The network would never stop training and improving. It’s like a black hole that swallows up whatever energy you throw at it, which is by no means sustainable,” explains Mikkel Abrahamsen, an assistant professor at the University of Copenhagen’s Department of Computer Science.

Therefore, this technology should be deployed wisely and carefully considered before every use, as simpler, more energy-efficient solutions may suffice. Abrahamsen elaborates:

“It’s important for us to consider where to use neural networks, so as to provide the greatest value for us humans. Some will see neural networks as better suited for scanning medical imagery for tumors than for targeting us with advertising and products on our social media and streaming platforms. In some cases, one might be able to make do with less resource-intensive techniques, such as regression models or random decision forests.”

A theoretical study
Together with researchers from Germany and the Netherlands, Mikkel Abrahamsen provides a theoretical explanation for why the neural networks used by social media platforms, among others, consume enormous amounts of energy: they can never be trained to perfection.

The researchers proved that neural network training belongs to a heavier complexity class than previously thought: the training problem sits in the complexity class of the Existential Theory of the Reals (∃ℝ) rather than in the lighter class known as NP, where it had been assumed to belong.

The ∃ℝ class contains problems that amount to solving large systems of quadratic equations with many simultaneous unknowns, which is intractable in practice.
The study was presented at the NeurIPS conference last December.
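Schematically, a decision problem in ∃ℝ asks whether a system of polynomial equations, for instance quadratic ones, has any solution in real numbers at all. The formula below is a generic illustration of this form, not one taken from the study:

    \exists\, x_1, \dots, x_n \in \mathbb{R} \;:\; p_1(x_1, \dots, x_n) = 0 \;\wedge\; \dots \;\wedge\; p_m(x_1, \dots, x_n) = 0

Here each p_i is a polynomial of degree at most two. Deciding such questions is believed to be strictly harder than the problems in NP, and exact algorithms scale very poorly as the number of unknowns grows.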

[Photo: hands painted like the Earth. Our present use of neural networks is not sustainable, says the UCPH researcher behind the new study. Photo: Getty Images]
Endless training
Neural networks are trained by feeding them data. This could be scanned images of tumors, from which a neural network learns to spot cancer in a patient.

In principle, such training can continue indefinitely. In their new study, the researchers demonstrate that this is a bottomless pit, because the process becomes like solving highly advanced equations with many unknowns.

“Today’s best algorithms can only manage up to eight unknowns, while neural networks can be set up to consider several billion parameters. Therefore, an optimal solution might never be found while training a network, even if the world’s entire energy supply were to be used,” explains Mikkel Abrahamsen.
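To give a sense of scale, here is a small, hypothetical Python sketch (the layer sizes are made up for illustration and do not come from the study) that counts the trainable parameters, i.e. the unknowns of the training problem, in a modest fully connected network:

    # Hypothetical example: count the trainable parameters ("unknowns") of a small
    # fully connected network with layer sizes 784 -> 256 -> 64 -> 10.
    layer_sizes = [784, 256, 64, 10]

    def count_parameters(sizes):
        # Each layer contributes a weight matrix (inputs x outputs) plus one bias per output.
        total = 0
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            total += n_in * n_out + n_out
        return total

    print(count_parameters(layer_sizes))  # 218,058 parameters for this tiny network

Even this toy network has hundreds of thousands of unknowns, and modern networks for recommendation or language have billions, far beyond the handful of variables that exact equation solvers can handle.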

Neural networks become progressively worse at using the energy provided to them.

“Things get slower and slower as we train neural networks. For example, they can attain 80 percent accuracy after one day, but need an entire month more to reach 85 percent. So, one gets less and less out of the energy used in the training, while never achieving perfection,” he says.
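Assuming a roughly constant power draw during training, which is an assumption made here for illustration rather than a figure from the study, the numbers in the quote imply a steep drop in what each unit of energy buys:

    \frac{80 \text{ percentage points}}{1 \text{ day}} = 80 \text{ points/day}
    \qquad\text{versus}\qquad
    \frac{5 \text{ percentage points}}{30 \text{ days}} \approx 0.17 \text{ points/day}

That is roughly 500 times less accuracy gained per day, and hence per unit of energy, in the second phase than in the first.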

Many people don’t realize that networks can be trained indefinitely, which is why Abrahamsen thinks that we need to focus on their heavy appetite for power.

“We don’t appreciate our contribution towards this enormous use of energy when we log on to Facebook or Twitter, when compared, for example, to our awareness about the impacts of intercontinental flights or clothing purchases. So, we should open our eyes to the degree to which this technology pollutes and affects our climate,” Abrahamsen concludes.

What is a neural network?
A neural network is a machine learning model, inspired by the activity of neurons in the human brain, that can be trained to perform complex tasks, in some cases better than humans can.

Neural networks have lots of parameters that need to be adjusted for them to provide meaningful output – a process called training.

Neural networks are typically trained using an algorithm known as backpropagation, which gradually adjusts parameters in the right direction.
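As a minimal sketch of what such training looks like in code (a toy example written for this article, not the study’s code, with made-up data and network sizes), the following Python program trains a tiny network on the XOR problem by backpropagation and gradient descent:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: the XOR function, a classic task a single neuron cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer with 4 units; the weights and biases are the trainable parameters.
    W1 = rng.normal(scale=0.5, size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 1.0

    for step in range(5000):
        # Forward pass: compute the network's current predictions.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backpropagation: push the prediction error backwards to obtain gradients.
        d_output = (output - y) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

        # Gradient descent: nudge every parameter a small step in the right direction.
        W2 -= learning_rate * hidden.T @ d_output
        b2 -= learning_rate * d_output.sum(axis=0)
        W1 -= learning_rate * X.T @ d_hidden
        b1 -= learning_rate * d_hidden.sum(axis=0)

    # The predictions typically approach [0, 1, 1, 0] but never reach them exactly:
    print(np.round(output, 3))

Each pass through the loop makes the parameters slightly better, and the loop could in principle run forever; this is the never-ending training the study analyses, only at a scale of billions of parameters instead of the seventeen used here.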