Seventy years of artificial intelligence: the painful lesson researchers least want to face

Since the famous Dartmouth workshop of 1956, artificial intelligence research has travelled a long road, through many peaks and many troughs. Many lessons have had to be learned repeatedly, but the most important one is also the hardest for many researchers to accept.

The biggest lesson of 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The underlying reason is Moore's law, or more precisely the phenomenon it describes: the cost of a unit of computation keeps falling exponentially. In most cases, the only thing that improves AI performance in the long run is bringing more computation to bear, which inevitably demands massive amounts of compute and timescales longer than a typical research project. Seeking progress in the short term, researchers try to leverage existing human knowledge of the domain, but in the long run only computation really pays off.

The two need not be in conflict, yet in practice they often are. Time spent on one is time not spent on the other, a bit like a psychological commitment in investing. Human-knowledge methods also tend to complicate a system in ways that prevent general, computation-based methods from doing their best work. There are many examples of AI researchers learning this painful lesson too late, and reviewing some of the most prominent ones is instructive.

In the 1997 chess match between man and machine, the program that defeated world champion Garry Kasparov was based on massive, deep search. At the time, most computer-chess researchers were dismayed, for they had spent years exploring methods that exploited human understanding of the special structure of chess.
When it was conclusively shown that a relatively simple search-based algorithm, running on specialized hardware and software, was more effective, these knowledge-based chess researchers were not good losers. They argued that "brute-force" search may have won this time, but it was not a general strategy, and in any case it was not how people played chess. They hoped that knowledge-based methods would win in the end, and they were disappointed.

A similar pattern of research progress played out in computer Go, only 20 years later. Early efforts tried to avoid search as much as possible by exploiting human knowledge and the special features of the game, but all of those efforts proved irrelevant once search was applied effectively at scale.

Also important was learning a value function through self-play. Self-play, and learning in general, are like search in that they allow enormous amounts of computation to be brought to bear. Search and learning are the two most important classes of techniques in artificial intelligence. In computer Go, as in computer chess, researchers first tried to reach their goal through human understanding, and only later achieved far greater success by embracing search and learning.

In speech recognition, as early as the 1970s, the U.S. Defense Department's Advanced Research Projects Agency sponsored a speech-recognition competition. Some entries used many special methods built on human knowledge: knowledge of vocabulary, of phonemes, of the human vocal tract, and so on. On the other side were newer methods based on hidden Markov models, which were far more statistical in nature and did far more computation.

Once again the statistical methods won out over the knowledge-based methods, leading to a major shift in natural language processing, where over the following decades statistics and computation gradually came to dominate the field.
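The self-play value-learning idea mentioned above can be shown in a toy sketch: a tabular TD(0) learner for tic-tac-toe, where both sides share one table of state values and each greedy move nudges the current state's value toward the next state's value. This is only an illustration of the general recipe, not a reconstruction of any historical system; all names and hyperparameters here are arbitrary choices.

```python
import random

# Board is a 9-character string; index layout: 0 1 2 / 3 4 5 / 6 7 8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def train(episodes=20000, alpha=0.2, epsilon=0.1, seed=0):
    """Learn V(state) = estimated chance that X wins, purely by self-play."""
    rng = random.Random(seed)
    V = {}  # state string -> value from X's perspective

    def value(state):
        if state not in V:
            w = winner(state)
            if w == 'X':
                V[state] = 1.0
            elif w == 'O':
                V[state] = 0.0
            else:
                V[state] = 0.5  # draw if full, otherwise a neutral guess
        return V[state]

    for _ in range(episodes):
        board, player = ' ' * 9, 'X'
        while winner(board) is None and ' ' in board:
            moves = [i for i, c in enumerate(board) if c == ' ']
            nexts = [board[:i] + player + board[i + 1:] for i in moves]
            if rng.random() < epsilon:
                nxt = rng.choice(nexts)          # exploratory move: no update
            else:
                best = max if player == 'X' else min
                nxt = best(nexts, key=value)     # greedy move for this side
                # TD(0) update: pull V(s) toward V(s') after a greedy move.
                V[board] = value(board) + alpha * (value(nxt) - value(board))
            board = nxt
            player = 'O' if player == 'X' else 'X'
    return V

if __name__ == "__main__":
    V = train(episodes=5000)
    print(f"V(empty board) = {V[' ' * 9]:.2f}")
```

The point of the sketch is that no chess- or Go-style human knowledge is built in: the program only knows the rules (legal moves and terminal outcomes), and everything else is recovered by spending computation on self-play.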
The recent rise of deep learning in speech recognition is the latest step in this direction. Deep learning methods rely even less on human knowledge and use even more computation; together with learning from very large training sets, this has produced speech-recognition systems dramatically better than their predecessors.

As in the games, researchers always tried to make systems work the way the researchers themselves thought they thought. They tried to put that knowledge into their systems, but once massive computation became feasible under Moore's law, and effective ways to use it were found, the effort ultimately proved counterproductive and a great waste of researchers' time.

A similar pattern holds in computer vision. Early methods conceived of vision as a search for edges, generalized cylinders, or SIFT features, but those ideas have since been discarded. Modern deep-learning networks use only notions of convolution and certain kinds of invariance, and perform far better.

This is a big lesson. As a field, we have still not thoroughly learned it, and we keep making the same kind of mistake. To avoid repeating it, we have to understand what causes it. In the long run, building systems in the way we think we think does not work: that is the painful lesson that must be learned.

This painful lesson rests on a historical observation: AI researchers have often tried to build knowledge into their agents, which always helps in the short term and is personally satisfying to the researcher; but in the long run the approach plateaus and may even inhibit further progress. Breakthrough progress eventually arrives by the opposite approach, based on scaling computation through search and learning.

Such success carries a bitter taste and is often hard to fully accept, because it is not a victory for the favored, human-centric approach.
One thing to learn from this painful lesson is the great power of general-purpose methods: methods that continue to scale as the available computation grows, even when that computation becomes very large. Search and learning appear to be the two kinds of methods that scale arbitrarily in this way.

The second thing to learn is that the actual contents of minds are tremendously complex. We should stop trying to capture those contents in simple terms, such as simple ways of thinking about space, objects, multiple agents, or symmetries. All of these belong to an external world that is arbitrary and intrinsically complex. They should not be built in, because their complexity is endless; instead, we should build in only the meta-methods that can find and capture this arbitrary complexity. What is essential in these methods is that they can find good approximations, but the finding should be done by our methods, not by us. What AI should do is discover as humans do, not store what humans have discovered. Building in our own discoveries only makes it harder to see how the discovering process can be accomplished.

More than 70 years of exploration have made these facts ever clearer to researchers. However little they may want to face them, they need to recognize these realities, for the only way to stop repeating the mistakes is to face them.