With an alternative AI chip, this startup wants to go head-to-head with NVIDIA | Fast Company

AI chips are going through a Cambrian explosion, and companies are rushing in: not only technology giants such as Arm, NVIDIA and Huawei, but also a new generation of challengers like Graphcore hoping to break NVIDIA's near-monopoly. On July 15, Graphcore, a British semiconductor company, released its second-generation IPU, the Colossus MK2 GC200. The company claims it is the most “complex” AI chip yet, more so than the NVIDIA A100 GPU, and that it even surpasses the A100 in performance.

Counting the $150 million Series D round announced at the beginning of the year, Graphcore has raised more than $450 million in venture capital over four years, and its valuation is close to $2 billion, making it a genuine unicorn. Sir Hermann Hauser, the “father of British semiconductors” and co-founder of Arm, once said of Graphcore: “There have been only three revolutions in computer history: the CPU in the 1970s, the GPU in the 1990s, and Graphcore is the third.”

Ample capital “ammunition” and this kind of high praise are the foundation on which Graphcore confronts NVIDIA.

Over the past two years the artificial intelligence industry has hit a bottleneck, with slow progress at the algorithm level. More practitioners hope that dedicated AI chips can raise computing power and handle both training and inference, replacing traditional CPUs and GPUs. As a result, many companies have packaged their AI chips under all sorts of labels: DPU, NPU, EPU and so on. Many people therefore assume that “IPU” is just another marketing term. Graphcore, however, says the IPU (Intelligence Processing Unit) is a processor architecture designed specifically for machine-learning workloads and a brand-new massively parallel processor. Compared with traditional CPUs and GPUs, the IPU uses a larger number of parallel MIMD processor cores, a very large amount of distributed on-chip SRAM, and a new processing architecture tailored to machine-intelligence workloads.

According to Lu Tao, senior vice president of Graphcore and general manager for China, AI applications typically require massively parallel, complex computation over large volumes of data. Traditional CPUs and GPUs were built mainly for scientific or high-performance computing, so their handling of AI algorithms is less than ideal. The IPU therefore has clear advantages in machine-intelligence computing and can handle both training and inference on the same device. It has also been specially optimized for low-precision data models, and beyond standard neural networks it is suited to Bayesian networks and Markov networks, which are widely used in the AI field.

In other words, the IPU is a chip built specifically for AI; whether the workload is edge computing or device-side computing, the IPU is meant to be up to the job.

In 2018, Graphcore launched its first AI chip, the Colossus MK1, and put forward the concept of the IPU processor for the first time, drawing wide attention from the semiconductor industry.
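The claim above that one chip handles both training and inference is easiest to picture from the developer's side. The following is a rough, hedged sketch (not Graphcore's own example code) of how a single PyTorch model might be wrapped for both training and inference on an IPU through Graphcore's PopTorch library; the model, its dimensions and the data here are purely illustrative, and while the poptorch calls follow the library's public documentation, exact names and behavior can differ between SDK releases.

```python
# Illustrative sketch only: one PyTorch model, two IPU execution wrappers
# (training and inference) via Graphcore's PopTorch library.
import torch
import poptorch


class ClassifierWithLoss(torch.nn.Module):
    """A hypothetical model that returns its loss in training mode, as PopTorch expects."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(784, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 10),
        )
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        logits = self.net(x)
        if labels is None:                 # inference path: just return predictions
            return logits
        return logits, self.loss_fn(logits, labels)   # training path: also return the loss


model = ClassifierWithLoss()
opts = poptorch.Options()                  # device/batching options; defaults are fine for a sketch

# Same weights, two wrappers: one compiled for training, one for inference, both on the IPU.
train_model = poptorch.trainingModel(
    model, options=opts, optimizer=torch.optim.SGD(model.parameters(), lr=0.01)
)
infer_model = poptorch.inferenceModel(model, options=opts)

x = torch.randn(32, 784)                   # synthetic data just to show the call pattern
y = torch.randint(0, 10, (32,))
_, loss = train_model(x, y)                # one training step executed on the IPU
preds = infer_model(x).argmax(dim=-1)      # inference on the IPU
```

In practice each wrapper compiles its own IPU executable, and weights typically need to be copied back to the host when switching between them; the point of the sketch is simply that training and inference target the same device and the same model definition.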
In the two years since, Graphcore has grown rapidly. Its backers include prominent academic figures in AI, such as Demis Hassabis, co-founder of DeepMind and a father of AlphaGo, Zoubin Ghahramani, professor at the University of Cambridge, and Greg Brockman, co-founder of OpenAI, alongside star institutional investors such as Sequoia Capital and strategic investors including Dell, Samsung and Microsoft.

Now Graphcore has announced the GC200 IPU hardware together with the Poplar software platform, a complete solution aimed at AI algorithms and workloads. For comparison, the NVIDIA A100, the first GPU based on the Ampere architecture, was released in May this year with 54 billion transistors and a claimed 20x leap in AI compute over the previous generation; it has been billed as “the most powerful AI chip in the world.”

Although traditional CPUs and GPUs continue to get steadily faster under Moore's law, future gains in AI computing power may come from dedicated AI chips. Graphcore sees this as the opening for the GC200 IPU. Released on July 15 as the successor to the MK1, the GC200 is built on the same TSMC 7nm process as the NVIDIA A100 but packs 59.4 billion transistors, 5.4 billion more than the A100; Graphcore claims up to 16 times the A100's performance and better results on many mainstream models.

The three core elements of AI are computing power, algorithms and big data. Although the GC200 IPU makes important advances in compute, memory and communication, in some respects surpassing NVIDIA's A100 GPU, Graphcore is still at an early stage on big data and commercialization, and a real gap remains between it and NVIDIA.

On compute, the GC200 raises the number of independent processor cores (IPU tiles) from 1,216 to 1,472, for a total of 8,832 threads executing in parallel (six hardware threads per tile). The benefit is more headroom for scaling up training compute. Compared with the first-generation IPU, Graphcore says real-world performance improves roughly eightfold and training compute doubles; BERT-Large training is 9.3 times faster, 3-layer BERT inference 8.5 times and EfficientNet-B3 7.4 times. According to Lu Tao, this extra compute gives CV and NLP applications more room to grow.

On memory, Graphcore introduces the concept of IPU Exchange Memory, which it says delivers nearly 100 times the bandwidth and roughly 10 times the capacity of the HBM technology NVIDIA currently uses. The stronger memory system gives many complex AI models a hardware-level performance advantage.

On communication, Graphcore designed IPU-Fabric, an AI scale-out interconnect for the GC200 that provides 2.8 Tbps of ultra-low-latency bandwidth and can scale to as many as 64,000 IPUs, connected either directly or through Ethernet switches, greatly speeding up algorithm processing.

On software, Graphcore's Poplar platform integrates seamlessly with TensorFlow and ONNX (Open Neural Network Exchange), creating what the company calls the first graph toolchain designed specifically for machine intelligence. With developers building on it, this forms Graphcore's own “IPU ecosystem.”
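Because the Poplar stack is exposed through standard frameworks, a TensorFlow user targets the IPU mostly through configuration rather than new model code. The sketch below is based on Graphcore's published TensorFlow 2 integration and is illustrative only: module paths such as tensorflow.python.ipu and options like auto_select_ipus are taken from their public documentation and may differ between SDK versions, and the tiny Keras model and synthetic data are assumptions made purely for the example.

```python
# Illustrative sketch: running a small Keras model on an IPU through
# Graphcore's TensorFlow 2 integration (shipped with the Poplar SDK).
import tensorflow as tf
from tensorflow.python import ipu  # IPU support module from Graphcore's TF build

# Configure the IPU system: automatically attach to one available IPU.
cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.configure_ipu_system()

# IPUStrategy places variables and compiled compute on the IPU,
# much as TPUStrategy does for TPUs.
strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        steps_per_execution=8,  # run several steps on-device per host round trip
    )

    # Synthetic data purely to show the call pattern.
    x = tf.random.normal([256, 784])
    y = tf.random.uniform([256], maxval=10, dtype=tf.int32)
    model.fit(x, y, epochs=1, batch_size=32)
```

The pattern mirrors other tf.distribute strategies: the model code itself is unchanged, and the strategy scope plus the IPU system configuration decide where the variables live and where the compiled graph runs.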
With these breakthroughs in compute, data and communication, plus targeted optimization for AI algorithms, Graphcore has built a large, scalable IPU-POD system that lets the GC200 IPU reach its best performance and become a serious competitor to the NVIDIA DGX-A100.

Yet even with a powerful IPU processor, algorithm frameworks and a software stack, big data remains the foundational ingredient of AI.

Lu Tao admitted frankly to Titanium Media that while both NVIDIA and Graphcore work with public datasets, Graphcore does not have the kind of private datasets NVIDIA owns, such as edge-computing scenario data and autonomous-driving datasets, and acquiring them is not a current goal. Graphcore is considering expanding into unsupervised learning, but that will take time.

“NVIDIA is trying to become a solutions company like Mobileye or SenseTime, which requires private datasets. That is not Graphcore's goal at present; we are focused on the data center.” “Graphcore is still a relatively small company. At the moment we focus on application areas related to machine learning. Some users are exploring the IPU for non-AI applications, but that is not a key area of the company's overall development.” “The performance of unsupervised learning on the IPU is much better than on the GPU, with training throughput up to 13 times higher, but there is still a way to go from technology to application,” Lu Tao told Titanium Media.

Whereas the NVIDIA A100 GPU has already been taken up by more than 20 companies such as Google Cloud and Oracle, Graphcore's chips so far reach mainly commercial customers, universities and research institutions, including Microsoft, Baidu and Kingsoft Cloud, with few individual developers among its users. According to Graphcore, the next step is to attract more Chinese customers and open up the Chinese market, building development around the Poplar software and its community.

“Graphcore has found its own track. Globally, our fastest adoption is still in hyperscale data centers, followed by finance and healthcare, where we have made good progress.” “By the size of the AI market, Graphcore believes China is one of the most important markets in the world and one of the fastest to put AI into production.” Lu Tao told Titanium Media that in the long run the Chinese market could account for 40 or even 50 percent of Graphcore's global business.

At this stage the GC200 IPU appears to surpass NVIDIA's A100 GPU on computing power and algorithms, but Graphcore remains at an early stage on data and commercialization, and a gap with NVIDIA persists. Asked how the IPU might replace existing CPUs and GPUs, Lu Tao said the market will see CPUs, GPUs and IPUs coexist for some time; for customers whose workloads suit the IPU, the company is confident they can quickly switch their existing CPU- and GPU-based deployments to the GC200 IPU.
“Whether customers switch depends mainly on the value the IPU provides: whether the migration cost is low, whether the price-performance ratio is high, and whether the ecosystem is good. As the ecosystem gains more users, more people running the IPU and more people who understand the technology, the resistance to replacement will naturally shrink,” Lu Tao told Titanium Media.

As Lu Tao says, Graphcore is still a relatively small company, but one with extraordinary capability in AI chip R&D. With the IPU it is laying the groundwork for general-purpose AI computing; whether the IPU can replace the GPU will take time to prove.

Author: zmhuaxia