Although a deep neural network may sound like a vague “techy” thing, we already use it in our daily lives.
It’s why your Gmail spam filter is so effective, and it’s the same kind of artificial intelligence Spotify uses to choose songs for your “Discover Weekly” playlist.
A deep neural network finds the mathematical manipulation needed to turn an input into the correct output, whether the relationship between the two is linear or nonlinear. In other words, it’s good at recognizing and classifying data.
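To make that concrete, here is a minimal sketch of the idea, not anything from the research itself: a tiny two-layer network that composes linear transforms with a nonlinearity to map an input vector to output scores. All sizes and weights are made up for illustration.

```python
import numpy as np

# Illustrative only: random, untrained weights for a toy two-layer network.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer: 3 inputs -> 4 hidden units
W2 = rng.normal(size=(2, 4))   # second layer: 4 hidden units -> 2 outputs

def forward(x):
    h = np.maximum(0, W1 @ x)  # ReLU supplies the nonlinearity
    return W2 @ h              # linear readout of the hidden features

x = np.array([1.0, -0.5, 2.0])  # a made-up 3-number input
y = forward(x)
print(y.shape)  # (2,) -- two output scores, e.g. "spam" vs. "not spam"
```

In a real system, training adjusts `W1` and `W2` so those output scores match labeled examples; the forward pass shown here is the part a phone would need to run at inference time.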
There’s just one thing.
A DNN usually requires a lot of computing power and memory to run. While that may not be a problem for a high-end laptop, an average smartphone could never handle the task.
But that’s about to change. A team of researchers at Northeastern has demonstrated a way to run deep neural networks on a smartphone or a similar device.
In a statement about the project, Yanzhi Wang, an assistant professor of electrical and computer engineering at Northeastern, said:
“It is difficult for people to achieve the real-time execution of neural networks on a smartphone or these kinds of mobile devices. But we can make the most deep learning applications work in real-time.”
Wang and his colleagues have described how they accomplished the feat.
Getting Deep Neural Networks to Run on Mobile Devices
Currently, mobile devices need to be connected to the internet to access a deep neural network. The phone collects data and sends it to a remote server for processing.
That explains why Siri only responds when you’re connected to the internet. Wang and his colleagues have a solution.
In addition to generating code that runs the neural network model efficiently, the Northeastern team has devised a way to reduce the model’s size.
That way, the network can execute tasks 56 times faster than past demonstrations while maintaining accuracy. This could put DNNs in off-the-shelf devices that aren’t connected to the internet.
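The article doesn’t detail the team’s compression technique, but one widely used way to shrink a model is weight quantization: storing 8-bit integers plus a scale factor instead of 32-bit floats. The sketch below is a generic illustration of that idea, not the Northeastern method.

```python
import numpy as np

def quantize(w, num_bits=8):
    """Map float32 weights to int8 values plus one float scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = float(np.abs(w).max()) / qmax   # largest weight maps to +/-127
    if scale == 0.0:
        scale = 1.0                         # avoid division by zero
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one layer of a network.
w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize(w)
print(w.nbytes // q.nbytes)  # 4 -- int8 storage is 4x smaller than float32
```

Smaller weights mean less memory traffic, which is often the bottleneck on a phone; the cost is a small rounding error (at most half a quantization step per weight).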
As you might expect, the technology’s potential applications extend beyond an offline Siri.
Wang noted:
“There are so many things that need intelligence. Medical devices, wearable devices, sensors, smart cameras. All of these, they need something enhancing recognition, segmentation, tracking, surveillance, and so many things, but currently, they’re limited.”
The researchers also point out that deep neural networks raise privacy concerns.
Currently, devices collect personal information and send it to the cloud for processing. The Northeastern team’s method, by contrast, would enable users to process their data locally.
“Previously, people believed that deep learning needed dedicated chips, or could only be run on servers over the cloud,” Wang says. “This kind of assumption of knowledge limits the application of deep learning. We cannot always rely on the cloud. We need to make local, smart decisions.”