Google’s Machine Learning Chips Beat Intel and Nvidia (GOOG, INTC)
The Mountain View, California-based company recently released details of its first machine learning chip, called the Tensor Processing Unit (TPU), which performs “computationally intensive” tasks such as voice search and image processing. Google says the TPU’s performance beats that of chips released by Intel and Nvidia by as much as 15 to 30 times.
According to a post on Google’s site, TPUs power search as well as vision models for several products, such as Image Search and Google Photos. In addition, TPUs “were instrumental in Google DeepMind’s victory over Lee Sedol, the first instance of a computer defeating a world champion in the ancient game of Go.” (See also: Google’s Growing Cloud Ambitions.)
The business implication of this development is that Google can save costs by significantly reducing its data center footprint through the use of the chips in its machines. As artificial intelligence applications become commonplace, this development could also shift business away from manufacturers of chips for data centers and set new benchmarks for performance.
In a paper that will be presented in June, Google outlined benchmarking tests conducted against Intel’s Haswell microprocessor and Nvidia’s K80, which Nvidia bills as the world’s fastest GPU accelerator (GPUs are graphics chips also used in gaming platforms). Google measured the number of operations per second performed by its chip versus those from the other two companies and found that the TPU was 14.5 times as fast as the Intel processor and 13.2 times as fast as Nvidia’s chip.
Norman P. Jouppi, who led more than 70 engineers on the project, said Google wanted to attract the best talent by making its achievements publicly known. Google’s TPU chips also provide it with a leg up in its cloud business, where the company is a distant third behind Amazon.com, Inc. (AMZN) and Microsoft Corporation (MSFT).
While not much is known about the nuts and bolts of Amazon’s data centers, Microsoft uses Field Programmable Gate Array (FPGA) chips, made by Intel’s Altera business, to make its data centers faster. Google’s TPUs could also cut down costs for its data centers. In its paper, the company wrote that its TPU provides 17 to 34 times better performance per watt compared with Intel’s chips and 25 to 29 times better performance per watt compared with Nvidia’s chips. In effect, this means that a TPU chip can do more with less. Potentially, this could reduce the number of chips required in server farms, thereby reducing the total cost of ownership (TCO). (See also: Two Announcements From Google’s Cloud Conference.)
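As a back-of-the-envelope illustration of why performance per watt drives TCO, the sketch below shows how a perf-per-watt multiple can shrink a server farm. All absolute numbers are hypothetical; only the 17x–34x range comes from the figures Google cites, and the sketch assumes an equal power budget per chip, so a perf-per-watt multiple translates directly into a throughput multiple.

```python
# Hypothetical illustration: only the 17x-34x multiples come from the
# article; the throughput and workload figures are invented for the sketch.

def chips_needed(workload_ops, ops_per_chip):
    """Chips required to sustain a fixed workload (ceiling division)."""
    return -(-workload_ops // ops_per_chip)

baseline_ops_per_chip = 1_000   # hypothetical CPU throughput units
workload = 100_000              # hypothetical total data center workload

for multiple in (17, 34):       # perf-per-watt range Google cites vs. Intel
    tpu_ops_per_chip = baseline_ops_per_chip * multiple
    print(f"{multiple}x: {chips_needed(workload, baseline_ops_per_chip)} "
          f"baseline chips -> {chips_needed(workload, tpu_ops_per_chip)} TPUs")
```

Under these assumed numbers, a farm of 100 baseline chips shrinks to a handful of TPUs, which is the mechanism behind the TCO claim in the paragraph above.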