Google Colab is blessed with the V100, and the joy of free compute doubles again

If you're a student without much money, whose school lab doesn't have enough GPUs, but who has already stepped through the door of machine learning, you have surely used one artifact: Google Colab.

Colaboratory is a Google Research project created to help disseminate machine learning education and research. It is a Jupyter notebook environment that requires no setup and runs entirely in the cloud. Colaboratory notebooks are stored on Google Drive and can be shared just like Google Docs or Sheets. Colaboratory is free to use, and it can run Keras, TensorFlow, PyTorch, OpenCV, and other frameworks for deep learning development and deployment.

Recently, someone noticed that the GPU they got when running Colab was a Tesla V100: "V100, not P100. I'm a Colab Pro user, so I'm not sure whether all users get this."

Obviously, he is not alone. Another Pro user also posted a screenshot: "I'm a Google Colab Pro user. Whenever I connect to a runtime, I habitually check which GPU I got. Today, when I saw a V100, I was a little shocked for the first time. I ran the command again to check. Yes, it was a V100."

Over the past two years, Colab's hardware has gone through several upgrades. First, in April last year, Google upgraded Colab's GPU from the antique K80 to the Tesla T4, which is better suited to low-precision inference and trains much faster than the K80. Then, in November last year, Colab rolled out the P100: two hardware upgrades in a single year.

At present, there is no official confirmation that Colab will provide the V100 free of charge. Perhaps it is just a small perk for Pro users. But then again, the P100 is already free; will the V100 be far behind?

As shown in the figure below, the V100 is 2.4 times faster than the P100 on a ResNet-50 deep neural network training task.
At a target latency of 7 ms per image, the V100's inference speed on a ResNet-50 deep neural network is 3.7 times faster than the P100's.

Most importantly, compared with the P100, the Tesla V100 adds Tensor Cores designed specifically for deep learning, which can significantly accelerate deep learning algorithms and frameworks.

Judging by how quickly Colab's hardware changed last year, an official V100 rollout seems close at hand. If you have time now, open Colab and give it a run; you might be in for a surprise.
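The "command" the Pro user mentions is presumably `nvidia-smi`, the standard way to see which GPU a Colab runtime has assigned you. Below is a minimal sketch that queries the GPU name from Python; the helper name `gpu_name` is my own, not from the post, and the fallback string is an assumption for machines with no NVIDIA driver installed.

```python
import subprocess

def gpu_name() -> str:
    """Return the name of the attached NVIDIA GPU, or a fallback string.

    Queries `nvidia-smi` for just the device name in CSV format.
    In a Colab notebook cell you could equivalently run `!nvidia-smi`.
    """
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        # nvidia-smi missing or failed: no NVIDIA GPU visible to this runtime
        return "no NVIDIA GPU visible"

print(gpu_name())
```

On a Colab runtime with a V100 attached, this would print something like "Tesla V100-SXM2-16GB"; on a CPU-only runtime it prints the fallback string instead.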