Google has launched its first machine learning chip
Posted Apr 15, 2017 by Aapt Dubey

Last year, Google announced at its developer conference that it had embarked on a project to build its own custom chip to speed up how quickly its machines can run machine learning algorithms. Almost a year after that intriguing announcement, Google has finally broken its silence and started sharing detailed information about this intelligent chip.

The underlying reason Google moved into designing its own specialized chip was a forecast of future demand: Google calculated that if all of its users started using its voice recognition services for just three minutes per day, it would have to double the number of its data centers to power those services. That is why, last year, Google began developing its own chip to speed up the inference stage of neural networks. The chip was optimized around Google's own TensorFlow machine learning framework, and its details have finally been published in a technical paper from Google.

What is the Tensor Processing Unit?

The Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) designed specifically for machine learning. The chip was built to work alongside Google's TensorFlow framework and to deliver a higher volume of computation per watt. By running calculations at 8-bit precision instead of the traditional 32-bit precision, the TPU needs fewer transistors per operation, which lets it fit more operations into the same silicon and power budget.
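To make the 8-bit idea concrete, here is a minimal Python sketch of linear quantization. The function names and the simple single-scale, symmetric scheme are assumptions chosen for illustration; the TPU paper describes the quantization approach Google actually uses.

```python
import numpy as np

def quantize_int8(weights):
    # Map float32 values onto 8-bit integers using one shared scale factor
    # (illustrative scheme only, not Google's exact method).
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float32 values from the 8-bit representation.
    return q.astype(np.float32) * scale

# A small weight matrix survives the round trip with little error,
# which is why 8-bit precision is often good enough for inference.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, s))))
```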

According to Google, the chip has delivered on its goal of accelerating machine learning workloads. Google claims the TPU runs its neural network inference workloads up to 30x faster than contemporary GPUs and CPUs, while offering far better power efficiency: 30x to 80x higher TeraOps per watt than standard GPUs and CPUs.

Most chip architects design their accelerators around convolutional neural networks, but those networks account for only about 5% of Google's data center workload. Most of Google's applications instead rely on multi-layer perceptrons, and that workload keeps growing every day. This is why Google made it a priority to build a custom inference chip with a goal of 10x better cost efficiency than standard GPUs and CPUs; conventional processors were too expensive to keep pace with the growth in computation and would have required far more hardware to do so. The TPU runs inference models in a way that reduces interaction with the host CPU while remaining flexible enough to keep up with evolving neural network needs.
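As a rough illustration of the workload that dominates, here is a minimal NumPy sketch of multi-layer perceptron inference: a couple of dense matrix multiplies plus activations. The layer sizes and function names are made up for illustration; the point is simply that dense matrix multiplication is the operation the TPU's hardware is built to accelerate.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp_inference(x, w1, b1, w2, b2):
    # Two dense layers: the work is dominated by matrix multiplies,
    # the kind of operation a TPU-style matrix unit speeds up.
    hidden = relu(x @ w1 + b1)
    return hidden @ w2 + b2

# Illustrative sizes only: a batch of 8 inputs through a 256-unit hidden layer.
x  = np.random.randn(8, 128).astype(np.float32)
w1 = np.random.randn(128, 256).astype(np.float32)
b1 = np.zeros(256, dtype=np.float32)
w2 = np.random.randn(256, 10).astype(np.float32)
b2 = np.zeros(10, dtype=np.float32)
print(mlp_inference(x, w1, b1, w2, b2).shape)  # (8, 10)
```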

As more and more people use Google applications that depend on multi-layer perceptrons, the need to make its machine learning infrastructure more efficient has never been greater. Google appears to have achieved this by developing its very own specialized chip, although, according to sources, it is highly unlikely that the TPU will ever be available outside Google's own cloud.
