Research in the
area of deep learning is advancing so quickly that neural networks
are now
able to dream and can even communicate with
each other in cryptographic languages of their own, indecipherable
to humans and other computers.

The only drawback
to the technology is that the networks require a lot of memory and
power to operate, but MIT associate professor of electrical
engineering and computer science Vivienne Sze and her
colleagues have been working on a solution that could enable the
powerful software to operate
on cell phones.

Sze and her team
made a breakthrough last year in designing an energy-efficient
computer chip that could allow mobile devices to run powerful
artificial intelligence systems.

The researchers
have since taken a different approach,
designing an array of new techniques to make neural nets more
energy-efficient.

"First, they
developed an analytic method that can determine how much power a
neural network will consume when run on a particular type of
hardware.

"Then they used
the method to evaluate new techniques for paring down neural
networks so that they'll run more efficiently on handheld
devices," MIT
News reports.
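The analytic method described above can be pictured as a simple layer-wise energy model: a network's cost is roughly its arithmetic (multiply-accumulate operations) plus the data it moves to and from memory. The sketch below is purely illustrative; the function names and the relative energy constants are assumptions, not the researchers' actual hardware model, where off-chip memory access is known to dominate.

```python
# Illustrative layer-wise energy model. The constants are stand-ins chosen to
# reflect the common observation that a DRAM access costs orders of magnitude
# more energy than a multiply-accumulate (MAC); they are not measured values.

E_MAC = 1.0      # relative energy per multiply-accumulate operation
E_DRAM = 200.0   # relative energy per off-chip weight/activation access

def layer_energy(num_macs, num_weights, num_activations):
    """Estimate one layer's energy as compute cost plus data-movement cost."""
    compute = num_macs * E_MAC
    data_movement = (num_weights + num_activations) * E_DRAM
    return compute + data_movement

def network_energy(layers):
    """Sum per-layer estimates; `layers` is a list of (macs, weights, acts)."""
    return sum(layer_energy(m, w, a) for m, w, a in layers)

# A toy two-layer network: (MACs, weights, activations) for each layer.
net = [(1_000_000, 30_000, 50_000), (500_000, 120_000, 10_000)]
print(network_energy(net))
```

A model of this shape makes the trade-off concrete: shrinking a layer's weight count can save far more energy than cutting its arithmetic, which is why a purely computation-focused metric can mislead.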

In a forthcoming paper, they
describe their methods for reducing neural networks' power
consumption by as much as 43 percent over the best previous method,
and 73 percent over the standard implementation, through
"energy-aware pruning."
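To make the idea of "energy-aware pruning" concrete, here is a minimal sketch: rather than pruning layers in an arbitrary order, prune the most energy-hungry layer first, zeroing out its smallest-magnitude weights. Everything here is a simplification for illustration; in particular, the energy proxy (nonzero weight count) stands in for the researchers' far more detailed hardware energy model.

```python
import numpy as np

def prune_energy_aware(weights, target_sparsity):
    """Zero out each layer's smallest-magnitude weights, visiting layers in
    order of estimated energy (here proxied by nonzero weight count).

    weights: dict mapping layer name -> np.ndarray, modified in place.
    target_sparsity: fraction of each layer's weights to remove (0..1).
    """
    # Energy-aware ordering: most "expensive" layer first.
    order = sorted(weights, key=lambda k: np.count_nonzero(weights[k]),
                   reverse=True)
    for name in order:
        w = weights[name]
        k = int(w.size * target_sparsity)   # how many weights to drop
        if k == 0:
            continue
        # Threshold at the k-th smallest magnitude, then zero everything below.
        thresh = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return weights
```

For example, pruning a ten-weight layer at 30 percent sparsity removes its three smallest-magnitude weights. The ordering step is what makes the pruning "energy-aware": savings are taken first where the energy model says they matter most.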

According to
Hartwig Adam, the team lead for mobile vision at Google:

"Recently, much
activity in the deep-learning community has been directed toward
development of efficient neural-network architectures for
computationally constrained platforms.

However, most
of this research is focused on either reducing model size or
computation, while for smartphones and many other devices energy
consumption is of utmost importance because of battery usage and
heat restrictions."

Adam added:

"This work is
taking an innovative approach to CNN (convolutional neural net)
architecture optimization that is directly guided by
minimization of power consumption using a sophisticated new
energy estimation tool, and it demonstrates large performance
gains over computation-focused methods.

I hope other
researchers in the field will follow suit and adopt this general
methodology to neural-network-model architecture design."