Machine learning is huge, and many big tech companies are getting involved in the sector. Applications range from personal AI assistants to games and everything in between, including under-the-hood optimizations. Most of the time, AI computation is done in the cloud, but Qualcomm wants to change that in the future.
Previously, these types of computations would take up a lot of time, resources and power (or battery on phones), but things will be changing soon.
With the introduction of the Hexagon 682 DSP, Qualcomm wants software developers to be able to offload some of that machine learning code directly onto the hardware. This would make the process faster (since the device doesn't have to send data to a server and then wait for its response), and it also enables the machine learning to be done without a connection to the internet. It has now been announced that the Hexagon 682 DSP inside the Snapdragon 835 is optimized for Google's TensorFlow machine learning technology.
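To make the trade-off concrete, here is a minimal Python sketch of the dispatch logic described above: prefer on-device execution when hardware support exists (no network round trip, works offline), and fall back to the cloud otherwise. All names and numbers here are hypothetical illustrations, not Qualcomm or TensorFlow APIs.

```python
import time

def run_on_device(input_data):
    # Hypothetical on-device path: executes locally (e.g. on a DSP),
    # so it works offline and avoids a server round trip.
    return sum(input_data)  # stand-in for a real model

def run_in_cloud(input_data, network_available):
    # Hypothetical cloud path: requires connectivity and adds
    # round-trip latency before any result comes back.
    if not network_available:
        raise ConnectionError("no internet connection")
    time.sleep(0.05)  # simulated network round trip
    return sum(input_data)

def infer(input_data, has_dsp_support, network_available):
    # Prefer on-device execution when the hardware supports it;
    # fall back to the cloud only when it does not.
    if has_dsp_support:
        return run_on_device(input_data)
    return run_in_cloud(input_data, network_available)

# An offline device with DSP support still gets a result.
print(infer([1, 2, 3], has_dsp_support=True, network_available=False))  # → 6
```

The same call without DSP support and without a network connection raises an error, which is exactly the situation on-device inference avoids.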
The Hexagon DSP (Digital Signal Processor) is described as a world-class processor with both CPU and DSP functionality to support the deeply embedded processing needs of the mobile platform, for both multimedia and modem functions. DSPs are most often used for things like audio and speech signal processing, digital image processing, and signal processing for telecommunications, but Qualcomm is now allowing the Hexagon DSP to be used for other specialized workloads too.
So machine learning computation is generally better suited to the DSP than the CPU, since offloading work from the ARM cores brings benefits in performance, power dissipation, and concurrency. These cores are optimized for both high performance and energy efficiency, but they are most often used for their energy efficiency, since they are designed to achieve high levels of work per cycle rather than higher clock speeds.
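The work-per-cycle point can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative only (not Qualcomm's specifications): a wide vector unit that processes many elements per cycle finishes the same job in far fewer cycles than a scalar core, and under a toy energy model where a higher clock costs more energy per cycle, it wins on energy as well.

```python
# Illustrative, made-up numbers: compare cycle counts and a toy
# energy figure for a scalar core vs a wide vector (DSP-style) core.
ELEMENTS = 1_000_000  # total amount of work to do

def cycles_needed(elements, lanes):
    # A core with more lanes does more work per cycle,
    # so it needs proportionally fewer cycles.
    return elements // lanes

scalar_cycles = cycles_needed(ELEMENTS, lanes=1)    # 1,000,000 cycles
vector_cycles = cycles_needed(ELEMENTS, lanes=128)  # 7,812 cycles

# Toy energy model: assume the high-clock scalar core spends twice
# the energy per cycle of the lower-clock vector core.
scalar_energy = scalar_cycles * 2.0
vector_energy = vector_cycles * 1.0

print(vector_cycles < scalar_cycles)  # True: far fewer cycles
print(vector_energy < scalar_energy)  # True: far less energy
```

Fewer cycles at a lower clock is the combination the article attributes to the Hexagon cores: the same work gets done, but with less energy drawn from the battery.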
from xda-developers http://ift.tt/2j1cIOj
via IFTTT