Tech giant Google has launched a beta version of its TPU (Tensor Processing Unit) chips in a bid to accelerate machine-learning models for business-critical applications in the cloud. With this launch, Google will be able to scale up specific machine-learning workloads, compressing the time taken to train complex models from days to hours.
According to Google, managing cluster communication patterns is a daunting task that can take days or weeks. The company claims that the newly launched TPU chips will considerably reduce the time taken to process these workloads, accelerating the pace of business-critical machine-learning models.
Commenting on the advantage of the TPUs launched on Google Cloud, Alfred Spector, Chief Technology Officer at the investment firm Two Sigma, said:
We made a decision to focus our deep learning research on the cloud for many reasons, but mostly to gain access to the latest machine learning infrastructure. Using Cloud TPUs instead of clusters of other accelerators has allowed us to focus on building our models without being distracted by the need to manage the complexity of cluster communication patterns.
According to Google, the TPU chips use TensorFlow, an open-source machine-learning framework, to cut workload time. The company also says it will continue to invest in 'innovative, rapidly evolving technology to support deep learning'. Emphasizing the same, John Barrus, Product Manager for Cloud TPUs at Google Cloud, added:
Here at Google Cloud, we want to provide customers with the best cloud for every ML workload and will offer a variety of high-performance CPUs (including Intel Skylake) and GPUs (including NVIDIA’s Tesla V100) alongside Cloud TPUs.