Google breaks Moore’s law
By Digital News Asia May 31, 2016
- Processor skips three manufacturing generations
- Used only in specific Google servers at present
GOOGLE has announced a ‘revolutionary’ new processing accelerator for machine learning that it says moves a slowing Moore’s Law forward by at least three chip generations, or roughly seven years.
Google says its new Tensor Processing Unit, or TPU, delivers an order of magnitude higher performance-per-watt “than all commercially available GPUs and FPGAs.” The accelerator is custom-designed specifically for machine learning tasks.
In fact, Google engineer Norm Jouppi says in a company blog post that the TPU accelerators have been running in company datacenters for more than a year, with at least one order of magnitude better performance-per-watt for machine learning tasks such as deep learning and object recognition.
The name “Tensor Processing Unit” stems from the accelerator’s original application, TensorFlow, Google’s open-source software library for machine learning, which expresses computations on large multidimensional arrays, or tensors, as dataflow graphs.
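As a rough illustration (not Google’s code), the core workload that TensorFlow expresses, and that a TPU is built to accelerate, is dense tensor arithmetic such as matrix multiplication. A minimal plain-Python sketch, with made-up weight and input values:

```python
# Plain-Python sketch of the dense matrix multiply at the heart of
# most machine-learning workloads. TensorFlow builds graphs of such
# tensor operations; hardware like the TPU executes them efficiently.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

weights = [[1, 2], [3, 4]]      # hypothetical 2x2 weight matrix
inputs = [[5], [6]]             # hypothetical input column vector
print(matmul(weights, inputs))  # [[17], [39]]
```

A neural network layer is essentially this operation repeated at enormous scale, which is why a chip specialised for it can outpace general-purpose processors on these workloads.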
According to company CEO Sundar Pichai, the TPU accelerators will never replace CPUs and GPUs, but they can speed up machine learning processes at a fraction of the power draw of alternative hardware. One drawback, however, is that ASICs such as Google’s TPU are traditionally designed for highly specific workloads.
Google is now claiming that its TPU effectively brings 2023 performance-per-watt levels into the present for machine learning applications, skipping three generations’ worth of Moore’s Law.
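Back-of-the-envelope, the “three generations, seven years” framing lines up if you assume Moore’s Law’s historical cadence of a doubling roughly every two-and-a-bit years (an illustrative assumption, not a figure Google has stated):

```python
# Rough arithmetic behind the "three generations ~ seven years" claim,
# assuming each generation doubles performance-per-watt (an
# illustrative cadence, not an official Google figure).
generations = 3
years_per_generation = 7 / generations  # ~2.33 years per generation
speedup = 2 ** generations              # three doublings = 8x

print(round(years_per_generation, 2))   # 2.33
print(speedup)                          # 8
```

Three doublings yield roughly an 8x gain in performance-per-watt, which is broadly consistent with the “order of magnitude” improvement Google cites.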
Currently, Google uses TPUs to improve web search results via its RankBrain algorithm, and to improve the accuracy and quality of Street View imagery, maps and navigation routes.