
Researchers Develop New Energy-Efficient AI System

Image credit: Gerd Altmann


Last year, researchers at OpenAI in San Francisco unveiled an algorithm that could manipulate the pieces of a Rubik’s cube using a robotic hand. The AI system learned to perform this task through trial and error.

While this feat was remarkable, it was also extremely power-intensive. The researchers had to use over 1,000 desktop computers, along with additional machines running specialized graphics chips.

Since the intensive calculations ran for several months, the project may have consumed up to 2.8 gigawatt-hours of electricity. That's roughly the output of three nuclear power plants running for an hour.

Artificial intelligence's accomplishments are often astonishing, whether it's beating humans at poker or generating images. However, these feats require a staggering amount of computing power, and with it, electricity.

One way to solve this problem is to consider the functioning of the human brain.

A Power-Efficient Supercomputer in our Body

Our brain has the computing power of a supercomputer, yet it needs only about 20 watts of electricity to function, roughly a millionth of the energy a supercomputer consumes.

One reason for the low power requirement is the efficient transfer of information between neurons in the brain.

Neurons communicate by sending short electrical impulses called spikes to each other. To save energy, however, they only do this as often as necessary.
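This fire-only-when-necessary behavior can be sketched with a simple leaky integrate-and-fire neuron model. This is an illustrative toy, not the researchers' code; the leak factor, threshold, and input values are assumptions chosen for the example.

```python
# A minimal leaky integrate-and-fire neuron (illustrative sketch).
# The neuron accumulates input over time, but only emits a costly
# spike when its membrane potential crosses a threshold; most of
# the time it stays silent, which is what saves energy.

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance the neuron one time step; return (new_potential, spiked)."""
    v = leak * v + input_current   # leaky integration of incoming current
    if v >= threshold:             # spike only when necessary
        return 0.0, True           # reset the potential after the spike
    return v, False

# Six time steps of input, but only one is strong enough (combined
# with accumulated potential) to trigger a spike.
v, spikes = 0.0, 0
for current in [0.2, 0.1, 0.0, 0.8, 0.3, 0.0]:
    v, spiked = lif_step(v, current)
    spikes += spiked
```

Despite receiving input on five of the six steps, the neuron spikes only once, when the accumulated potential finally crosses the threshold.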

Using this simple principle, researchers at the Graz University of Technology decided to develop a new machine-learning algorithm. They're calling it e-propagation (or e-prop for short).

Developing an Energy-Efficient AI System

The team, led by Wolfgang Maass and Robert Legenstein, uses spikes in the model for communication between neurons in an artificial neural network.

As in the brain, the spikes only become active when they are necessary for information processing in the network. What's more, each neuron keeps a record, called an eligibility trace, of when its connections were used.
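The idea of an eligibility trace can be sketched as a small, local piece of memory attached to each connection. The names, decay dynamics, and learning rate below are simplified assumptions for illustration, not the paper's exact equations.

```python
# Hedged sketch of an eligibility trace: a fading local record of
# when a connection was recently used.

def update_trace(trace, presyn_spike, decay=0.8):
    # The trace decays over time and is refreshed whenever the
    # presynaptic neuron spikes, i.e. whenever the connection is used.
    return decay * trace + (1.0 if presyn_spike else 0.0)

def local_weight_update(weight, trace, learning_signal, lr=0.01):
    # The local trace is combined with a top-down learning signal,
    # so no central history of past network activity is required.
    return weight + lr * learning_signal * trace

trace = update_trace(0.0, True)              # connection just used: trace = 1.0
trace = update_trace(trace, False)           # one silent step: trace fades to 0.8
weight = local_weight_update(0.5, trace, 2.0)
```

Because each connection only needs its own trace, the learning rule stays decentralized: every weight can be updated from information available right where it is stored.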

Thanks to this decentralized approach, the new learning method doesn't require enormous storage space. Yet, it's roughly as powerful as today's best and most elaborate methods.

Previous AI implementations store all network activity centrally and offline. So, data is constantly transferred between the memory and the processors, which requires enormous energy.

E-prop, on the other hand, works entirely online. As such, it doesn't require a separate memory during operation, making learning far more energy-efficient.
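The memory contrast between the two approaches can be made concrete with a toy comparison. This is an assumed, simplified illustration: offline learning must keep the whole activity history, while an online rule like e-prop keeps only a fixed-size state per connection, no matter how long training runs.

```python
# Illustrative contrast (simplified assumption, not either method's real code):
# offline learning stores every time step, online learning keeps one number.

def offline_memory(history, activity):
    history.append(activity)         # memory grows with every time step
    return history

def online_memory(trace, activity, decay=0.8):
    return decay * trace + activity  # memory stays a single fixed-size value

history = []
trace = 0.0
for t in range(1000):
    history = offline_memory(history, 1.0)
    trace = online_memory(trace, 1.0)

# After 1,000 steps, the offline store holds 1,000 entries,
# while the online trace is still just one float.
```

Since no growing history ever has to shuttle between memory and processors, the online approach sidesteps the costly data transfer described above.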

Using an Energy-Efficient AI System to Drive Neuromorphic Hardware

The TU Graz team is already working on integrating the learning ability with hardware components. For example, the researchers have collaborated with colleagues at the University of Manchester in the Human Brain Project to integrate e-prop into the neuromorphic SpiNNaker system.

Likewise, TU Graz is working with Intel researchers to integrate the algorithm into the next version of Intel’s neuromorphic chip Loihi.





Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
