A 1,000-fold increase: "accelerating" the future of computing

A cockatiel's brain weighs only 2 grams, consumes only about 50 milliwatts, and lets the bird fly at 22 mph. By contrast, a UAV's onboard processor weighs 20 times as much, consumes 350 times as much energy, and keeps the drone aloft for only 10 to 20 minutes per charge.

Although traditional computing architecture has made enormous progress over the past 70 years, the brain remains an unmatched computing device. How can the gap between today's reality and that future be closed?

"The future is already here; it's just not evenly distributed." Rich Uhlig, director of Intel Labs, quoted science fiction writer William Gibson at the recent Intel Labs Day 2020, held online.

On the same day, the heads of the Labs' frontier fields, including integrated photonics, neuromorphic computing, quantum computing, confidential computing and machine programming, were all present, revealing the disruptive research they are pursuing to open up future computing and deliver a 1,000-fold improvement in performance.

Back in 2004, Intel set itself the ambitious goal of using light as the basis of interconnect technology. At the time, it saw that demand for computing bandwidth was growing while the scaling of electrical input/output could not keep pace, creating an "I/O power wall" that limits the energy left over for actual computation. Intel therefore turned its attention to optical communication and silicon photonics. More than a decade later, James Jaussi, director of the PHY Research Lab at Intel Labs, asserted: "There is a clear inflection point between optical and electrical interconnect." There are two main reasons. First, we are fast approaching the physical limits of electrical performance.
Without fundamental innovation, the design of energy-efficient circuits will face many limitations. Second, the "I/O power wall" poses a growing challenge.

For this reason, Jaussi introduced the concept of "integrated photonics": tightly co-integrating CMOS circuits with photonic technology. He believes this approach "can systematically remove the barriers of cost, energy and size, and bring the transformative capability of optical interconnect to server packaging."

On the open day, Intel showed off one of its key technology building blocks: a newly developed micro-ring modulator that is 1,000 times smaller than conventional components. "Traditional chip modulators take up too much space, and the cost of IC packaging is very high. The micro-ring modulator we have developed shrinks the modulator to 1/1000 of its previous size, removing the main obstacle to integrating silicon photonics into the compute package," Jaussi said.

Over the past year, quantum computing has become one of the industry's focal points. Anne Matsuura, director of Quantum Applications and Architecture at Intel Labs, announced on the open day that the company's second-generation cryogenic control chip, Horse Ridge II, is ready, marking another milestone in Intel's push toward quantum scalability.

Scalability is one of the biggest difficulties in quantum computing. Building on the innovations of the first-generation Horse Ridge controller launched in 2019, Horse Ridge II adds enhanced functions and higher integration to achieve effective control of quantum systems.
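Controlling a quantum system ultimately means applying sequences of gates to qubits, and entangling two qubits is the canonical multi-qubit operation such control electronics must orchestrate. As a minimal state-vector sketch (plain NumPy, unrelated to Intel's hardware), preparing a two-qubit Bell state looks like this:

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I = np.eye(2)

# CNOT on two qubits (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # superposition on qubit 0
state = CNOT @ state                           # entangle -> Bell state

probs = np.abs(state) ** 2
# Only |00> and |11> have nonzero probability (0.5 each):
# measuring one qubit fixes the other, the hallmark of entanglement.
print(probs)
```

A controller such as Horse Ridge II must generate the precisely timed pulses that realize gates like these on physical qubits, which is why multi-gate control matters for entanglement.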
The new functions include the ability to manipulate and read qubit states, and to control the multiple quantum gates needed to entangle several qubits.

"We are gradually realizing the vision of commercial quantum computing," Matsuura said. However, she acknowledged that today's quantum systems, with only about 100 qubits, and even future systems with thousands of qubits, cannot reach commercial viability. "We need to develop a full-stack commercial quantum computing system with millions of qubits to achieve practicality."

To unlock the potential of data, security is a key consideration, and Intel has long been advancing confidential computing. Jason Martin, principal engineer of the security intelligence research group at Intel Labs, introduced the latest Intel Software Guard Extensions (SGX), which combines confidentiality, integrity and attestation to keep data secure while it is in use. This technology, however, protects data only on a single machine. To let multiple organizations collaborate safely on sensitive data, Intel and the Perelman School of Medicine at the University of Pennsylvania are exploring federated learning, which removes the obstacles different data owners face when pooling data and preserves privacy while still yielding insight, in pursuit of the same 1,000-fold improvement in computing performance.

In addition, Intel is studying fully homomorphic encryption, which allows applications to perform computation directly on encrypted data without ever exposing it. The technology still faces challenges: the cost of processing grows with the amount of data, which has so far kept homomorphic encryption from wide adoption. "We hope to popularize this technology in the future," Martin said.
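The core idea of computing on encrypted data can be illustrated with a toy example. Textbook RSA (shown here with tiny primes and no padding, so it is insecure and only multiplicatively homomorphic, nothing like a production scheme) has the property that multiplying two ciphertexts yields the encryption of the product of the plaintexts:

```python
# Toy illustration of a homomorphic property (NOT secure: textbook RSA,
# tiny primes, no padding). The server multiplies ciphertexts without
# ever seeing the plaintexts; only the key holder can decrypt the result.
p, q = 61, 53
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
c = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
print(decrypt(c))                   # -> 42, i.e. a * b
```

Fully homomorphic schemes extend this to arbitrary additions and multiplications, which is what makes general computation on encrypted data possible, at the cost of the processing overhead Martin described.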
"To this end, we are exploring new hardware and software approaches and working with partners and standards bodies," he added.

Looking ahead to the development path of third-generation artificial intelligence, "brain-like" computing is one of the key ideas put forward by academia. In 2017, Intel launched its first neuromorphic research chip, Loihi, an important step in the development of neuromorphic hardware.

A few years on, Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs, announced that the research will soon enter its next stage: exploring practical applications together with ecosystem partners and broadening the technology's reach.

Davies said that through continuous development, prototyping and testing of applications on neuromorphic systems, the company and its partners have accumulated a growing body of results showing that neuromorphic computing delivers consistent performance gains across a variety of workloads. For example, Accenture tested voice-command recognition on the Loihi chip and on a standard graphics processing unit (GPU), and found that Loihi not only achieved accuracy similar to the GPU's but also improved energy efficiency by as much as 1,000 times and responded up to 200 ms faster.

He admitted frankly that in the short term, because of cost, neuromorphic computing is suitable only for small-scale hardware such as edge devices and sensors, or for cost-insensitive applications such as satellites and specialized robots. "Over time, we expect innovation in memory technology to reduce costs and extend neuromorphic solutions to all kinds of intelligent devices that need to process data in real time but are constrained by size, weight and power," Davies said.
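The efficiency of chips like Loihi comes from spiking neurons, which stay silent until their input accumulates past a threshold. A minimal leaky integrate-and-fire neuron (a generic textbook model, not Loihi's exact dynamics) captures the idea:

```python
# Minimal leaky integrate-and-fire neuron (generic model, not Loihi's
# exact dynamics). The membrane potential leaks toward zero, integrates
# input current, and emits a spike when it crosses a threshold.
def simulate_lif(currents, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:        # fire
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A steady weak input makes the neuron fire periodically:
print(simulate_lif([0.3] * 10))   # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because energy is spent mainly when spikes occur, sparse event-driven workloads such as keyword spotting can run far more efficiently than on hardware that recomputes everything on every clock tick.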
Machine programming was another highlight of the open day. Justin Gottschlich, director and founder of machine programming research at Intel Labs, announced ControlFlag, a machine programming research system that can autonomously detect errors in program code. "Although still in its early stages, this novel self-supervised system promises to become a powerful productivity tool that helps software developers with time-consuming, laborious debugging. In preliminary tests, ControlFlag trained on more than a billion unlabeled lines of production-quality code and learned to detect new defects."

Looking ahead, can we imagine a future in which everyone can create high-quality software? Gottschlich's answer is yes. By advancing the three pillars of machine programming, namely intention, invention and adaptation, machines will eventually be able to turn a person's expressed intent into working software.

"The five ongoing disruptive research projects we have just introduced are only the tip of the iceberg. On the road to a 1,000-fold improvement in performance, we will have many more achievements to share," Uhlig concluded.
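As a closing illustration, the intuition behind self-supervised defect detection of the kind ControlFlag performs can be sketched in a few lines: learn which code patterns are common in a large corpus, then flag the rare outliers as likely mistakes. This is a toy frequency-based sketch under that assumption, not Intel's implementation:

```python
from collections import Counter

# Toy sketch of frequency-based anomaly detection, loosely inspired by
# the idea behind ControlFlag (not Intel's implementation): patterns
# that are rare in a large corpus are flagged as likely mistakes.
corpus = (["if (x == 0)"] * 980     # the idiomatic comparison
          + ["if (x = 0)"] * 3      # rare: probably a typo for "=="
          + ["if (x != 0)"] * 17)   # uncommon but legitimate
counts = Counter(corpus)
total = sum(counts.values())

def flag_anomalies(snippets, min_freq=0.01):
    """Return snippets whose relative frequency falls below min_freq."""
    return [s for s in snippets if counts[s] / total < min_freq]

print(flag_anomalies(set(corpus)))  # -> ['if (x = 0)']
```

Real systems learn structured patterns from parsed source rather than raw strings, but the principle, that statistical rarity signals a likely bug, is the same.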