Google boosts machine learning with its Tensor Processing Unit

Google has some new hardware out, and no, it's not a Nexus device. The search giant makes extensive use of machine learning to power services like RankBrain and Street View, and it felt it could give those tasks a little more oomph. Enter the Tensor Processing Unit, or TPU for short.

The TPU is a custom-designed ASIC small enough to fit into a hard drive slot in Google's data center racks. Although the TPU has only just been revealed to the world, Google says it has actually been using the hardware in its data centers for over a year as a "stealthy project." Google's engineers say that the TPU offers a 10x performance-per-watt improvement over off-the-shelf solutions when dealing with machine learning tasks. Unsurprisingly, the TPU is optimized for the company's open-source TensorFlow machine intelligence library.

Google's TPU does use a dirty trick of sorts: it works with "reduced precision," meaning roughly that the results of an operation are approximations of the "proper" result. Although the notion may sound counterintuitive at first, it's a perfectly acceptable approach for certain computing tasks. For example, a number of algorithms for calculating a square root work by iteratively refining an approximation until its deviation from the true answer is small enough not to matter. Tailoring the TPU to reduced-precision tasks apparently netted Google big gains in hardware design, letting the company kill off a substantial number of transistors that would otherwise have been necessary for common operations.
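The square-root example above can be sketched in a few lines of Python. This is purely an illustration of the "good enough approximation" idea, not Google's code; the function name and tolerance value are our own choices. It uses Newton's iteration, which repeatedly refines a guess until the error drops below a chosen threshold:

```python
def approx_sqrt(x, tolerance=1e-6):
    """Approximate sqrt(x) by Newton's iteration.

    Illustrative sketch only: keep refining the guess until
    the deviation from the true answer is below `tolerance`.
    """
    # Start with a rough initial guess.
    guess = x / 2.0 if x > 1 else 1.0
    # Refine until guess*guess is close enough to x.
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess
```

Tightening the tolerance buys accuracy at the cost of more iterations; loosening it is the software analogue of what the TPU does in silicon, trading exactness for speed and efficiency.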