According to Google, its custom machine learning accelerator, the Tensor Processing Unit, or TPU, is faster than contemporary CPUs and GPUs.

Google's Own Machine Learning TPU Chips

It's no secret that Google has crafted its own machine learning chip to speed up its machine learning algorithms. It unveiled the TPU chips back in May 2016 during its I/O developer conference, but it never disclosed enough metrics to make them a real talking point. Now that has changed, with Google finally publishing details and benchmarks about the project.

Just How Fast Are These Chips?

Those better versed in the language of chip design can head over and read Google's official paper on the chips, which goes into finer technical detail. What matters most, however, is that in Google's production workloads running neural network inference, the TPU chips proved 15 to 30 times faster than the contemporary CPU and GPU combination they were tested against: Intel's Haswell-based Xeon E5-2699 v3 processors paired with Nvidia's Tesla K80 GPUs.

These custom chips, which have been powering Google's data centers since 2015, also proved far more energy efficient than standard chips, achieving a 30 to 80 times improvement in tera-operations per Watt of power consumed. This is notable because power consumption and efficiency are key concerns for data centers.
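For context, performance per Watt is simply throughput divided by power draw. The following Python sketch computes the metric with purely illustrative numbers; they are not Google's published figures:

    # Performance-per-Watt comparison with made-up, illustrative numbers.
    # These are NOT Google's published benchmark figures.
    def tops_per_watt(tera_ops_per_second, watts):
        """Throughput in tera-operations per second divided by power draw."""
        return tera_ops_per_second / watts

    accelerator = tops_per_watt(92.0, 75.0)   # hypothetical accelerator
    baseline = tops_per_watt(3.0, 150.0)      # hypothetical CPU/GPU server

    print("accelerator: %.2f TOPS/W" % accelerator)
    print("baseline:    %.2f TOPS/W" % baseline)
    print("improvement: %.0fx" % (accelerator / baseline))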

The workloads in question are of course powered by neural networks, but Google says that running them isn't much of a strain, requiring a surprisingly low 100 to 1,500 lines of code. The code is based on TensorFlow, Google's open-source machine learning framework.
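To give a sense of scale, here is a minimal sketch of neural network inference written against the TensorFlow 1.x API. The model, its shapes, and its names are hypothetical placeholders, not Google's production code; the point is simply that a complete inference graph fits in a couple dozen lines:

    import tensorflow as tf

    # Toy two-layer network over flattened 28x28 images (hypothetical shapes).
    x = tf.placeholder(tf.float32, shape=[None, 784])

    # In a real deployment these weights would be restored from a trained
    # checkpoint rather than initialized randomly.
    w1 = tf.Variable(tf.random_normal([784, 256]))
    b1 = tf.Variable(tf.zeros([256]))
    hidden = tf.nn.relu(tf.matmul(x, w1) + b1)

    w2 = tf.Variable(tf.random_normal([256, 10]))
    b2 = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(hidden, w2) + b2
    predictions = tf.nn.softmax(logits)

    # Inference is a single forward pass through the graph.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = [[0.0] * 784]  # dummy input batch of one
        print(sess.run(predictions, feed_dict={x: batch}))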

While the numbers look stellar, it's worth keeping in mind that they largely refer to running machine learning models in production, known as inference, and not to the training phase of a model, which has somewhat different performance characteristics, according to Google.
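To make that distinction concrete, here is a short TensorFlow 1.x-style sketch of a toy model, with hypothetical loss and optimizer choices rather than Google's actual setup, showing what the training phase adds on top of plain inference:

    import tensorflow as tf

    # Toy linear classifier (hypothetical shapes, not Google's workload).
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, w) + b

    # Inference: just a forward pass.
    predictions = tf.nn.softmax(logits)

    # Training: forward pass plus loss, gradient computation, and weight
    # updates on every step, which gives it a different performance profile.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)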

But the fact remains that Google has embraced artificial neural networks more wholeheartedly than most other major tech companies. By 2013, neural networks had become so popular that their usage threatened to double the computational demands of Google's data centers. Without custom machine learning chips, Google would have had to rely on standard CPUs and GPUs to meet those demands, which would have been not only slower, as it has now shown, but far more expensive at that scale.

Google says that its TPU chips allow it to make predictions rapidly and, thanks to their speed, power products that require responses in a fraction of a second. The chips are behind every search query, and they enable the accurate vision models underpinning products such as Google Image Search, Google Photos, and the Google Cloud Vision API. They also enabled the major improvements to Google Translate rolled out last year.

Google isn't likely to make the TPU chips available outside its cloud platform, although the company notes that others can learn from its results and craft their own chips that may even surpass what Google has achieved, raising the bar higher still.
