Imagine shouting at an animal and being able to tell from the echoes alone whether it is a dog or a horse. That may sound far-fetched, but a research team has already achieved the optical equivalent of this effect.

In a new paper recently published in the journal Optica, researchers in the UK, Italy and the Netherlands describe a new way to create moving 3D images by capturing the temporal information of photons rather than their spatial coordinates.

The researchers extract a three-dimensional image of a scene from the times at which reflected light arrives at a detector. This new technique, called temporal imaging, demonstrates an important use of machine learning. A temporal imaging system has several advantages over a conventional one. For example, the new system is very fast, potentially operating at 1,000 frames per second, and this rough-and-fast 3D imaging could have many applications: as a camera for self-driving cars, to improve the accuracy and speed of route finding, or to provide 360-degree awareness for mobile devices and health monitors. Most importantly, the single-point detector used to collect the time data is small, light and cheap.

Photos and videos are usually made by capturing photons with digital sensors: ambient light reflects off an object, and a lens focuses it onto a screen of tiny light-sensitive elements, or pixels. An image is the pattern of bright and dark spots produced by the reflected light.

Take the most common digital camera as an example. It consists of millions of pixels, which form an image by detecting the intensity and color of light at each point in space.

Meanwhile, 3D images can be generated by placing several cameras around an object and photographing it from multiple angles, or by scanning the object with a stream of photons and reconstructing it in three dimensions.
However, whichever method is used, the image is constructed from the spatial information of the scene.

In recent decades, researchers have developed a more ingenious way to capture images using only a single-pixel detector. Instead of exposing objects to uniform light, they expose them to different patterns of illumination, similar to the square bar codes on packaging. Each pattern reflects off different parts of the object, so the light intensity measured by the pixel changes with the pattern. By tracking these changes, researchers can reconstruct an image of the object.

Now, data scientist Alex Turpin and physicist Daniele Faccio of the University of Glasgow and their colleagues have developed a method to generate 3D images with a single pixel and no patterned flash. Using lightning-fast single-photon detectors, they illuminate a scene with a uniform flash of light and simply measure the reflection times.

The detector has a timing precision of a quarter of a nanosecond, and it records the number of photons arriving as a function of time; from this information alone, the researchers can reconstruct the scene.

This is a surprising approach, because in principle there is no one-to-one correspondence between the arrangement of objects in a scene and the timing information. For example, photons reflected from any surface 3 meters from the detector will arrive within 10 nanoseconds, no matter which direction that surface lies in.

By contrast, so-called time-of-flight cameras add depth and produce 3D images by precisely timing how long the light of a flash takes to travel from an object back to each of many pixels.

The new 3D imaging device instead starts with a simple, inexpensive single-point detector that is tuned to act as a stopwatch for photons.
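The relationship between arrival time and distance, and the ambiguity the article describes, can be sketched with a small simulation. This is not the authors' setup: the scene geometry, photon counts and jitter below are invented for illustration. A surface at distance d returns photons after a round trip of t = 2d/c, so range maps cleanly to time, but two surfaces at the same range are indistinguishable in the histogram.

```python
import numpy as np

C = 3e8          # speed of light, m/s
BIN = 0.25e-9    # detector timing resolution: a quarter of a nanosecond
rng = np.random.default_rng(0)

# Hypothetical scene: distances (m) of reflecting surfaces from the
# detector and the number of photons each returns.  Direction is
# deliberately absent -- a single-point detector cannot record it.
surfaces = [(1.6, 400), (2.9, 250), (2.9001, 100), (4.2, 300)]

arrivals = []
for dist, n_photons in surfaces:
    t = 2 * dist / C  # round trip: the flash travels out and back
    arrivals.append(t + rng.normal(0, 0.05e-9, n_photons))  # timing jitter
arrivals = np.concatenate(arrivals)

# The temporal histogram -- photon counts per 0.25 ns bin -- is the
# ONLY data the single-point detector provides.
hist, edges = np.histogram(arrivals, bins=np.arange(0, 40e-9, BIN))

# Distance maps to arrival time (d = c * t / 2), but the two surfaces
# near 2.9 m fall into the same bin: same range, same arrival time.
peak = int(np.argmax(hist))
print(f"strongest return at ~{peak * BIN * 1e9:.2f} ns, "
      f"range ~{C * peak * BIN / 2:.2f} m")
```

The two surfaces near 2.9 m merge into a single peak, which is exactly the degeneracy that makes reconstruction from timing data alone seem impossible.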
Unlike cameras that measure the spatial distribution of color and intensity, the detector records only the time that photons from an instantaneous laser pulse take to bounce off each object in a given scene and reach the sensor: the farther away an object is, the longer each reflected photon takes to arrive.

This timing data is then converted into 3D images with the help of a neural network. The researchers trained the algorithm by showing it thousands of ordinary photos of team members moving and carrying objects around the laboratory, together with the time data captured simultaneously by the single-point detector. They also used a time-of-flight camera to capture real 3D images of the scenes. In the end, the neural network learned the correspondence between the time data and the photos well enough to create remarkably accurate images from the time data alone. Compared with the time-of-flight camera's output, the temporal images are blurry and lack detail, but they clearly reveal a person's form.

“At first glance, this ambiguity seems to make the problem impossible,” says Laura Waller, a computer scientist and electrical engineer at the University of California, Berkeley. “When I first heard of the concept of single-pixel imaging, I thought it should work. But on second thought, it should not work.”

Dr. Alex Turpin, a researcher in data science at the School of Computing Science at the University of Glasgow, said: “If we only consider spatial information, and single-point detectors have no spatial information, then single-pixel imaging is impossible. However, such detectors can still provide valuable time information. Unlike traditional image formation, our method completely decouples the process from the spatial information in the light.”

To achieve this, Alex Turpin and his colleagues used a machine learning program called a neural network; after training it on their data sets, it could automatically image the moving people in the scene.
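The learning step can be sketched with synthetic data. This is not the authors' network: the one-dimensional scene, the geometry, and the use of ordinary least squares as a stand-in for a neural network are all invented for illustration. A single target moves across a fixed room, its position shifts the photon-arrival histogram, and a model trained on (histogram, image) pairs learns to invert that relationship.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 3e8
BIN = 0.25e-9
N_BINS = 80   # 20 ns of histogram
WIDTH = 32    # a 1-D "image": 32 pixels across the room

def simulate(x):
    """Forward model for a hypothetical scene: a single target at
    pixel x.  Its distance from the detector grows with x, so the
    photon-arrival histogram shifts accordingly."""
    dist = 1.0 + 0.05 * x                     # metres, made-up geometry
    t = 2 * dist / C                          # round-trip time
    photons = t + rng.normal(0, 0.1e-9, 500)  # jittered arrivals
    hist, _ = np.histogram(photons, bins=np.arange(0, N_BINS + 1) * BIN)
    image = np.zeros(WIDTH)
    image[x] = 1.0                            # ground-truth image
    return hist, image

# Training set: the target at every known position, measured 20 times.
pairs = [simulate(x) for x in range(WIDTH) for _ in range(20)]
X = np.stack([h for h, _ in pairs])    # histograms (inputs)
Y = np.stack([im for _, im in pairs])  # images (targets)

# Fit a linear map from histogram to image -- the stand-in for the
# neural network trained on time data plus ground-truth images.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reconstruct an unseen measurement of the target at pixel 10.
h, _ = simulate(10)
pred = h @ W
print("recovered position:", int(np.argmax(pred)))
```

As in the paper, the trick only works because the training scenes constrain what the timing data can mean; the histogram alone has no spatial information, and the model supplies the missing prior.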
Unlike traditional cameras, the single-point detectors that collect time data are small, light and cheap, which means they could easily be added to existing systems, for example as cameras for self-driving cars, to improve the accuracy of route finding and the speed of braking reactions.

In addition, they could enhance existing sensors in mobile devices such as the Google Pixel 4, which already has a simple gesture-recognition system based on radar technology. The next generation of the technology could even monitor the rise and fall of a hospital patient's chest and alert staff to changes in breathing, or track movements, in a way consistent with data security.

Dr. Turpin added: “We are very excited about the potential of the system we have developed, and we look forward to continuing to explore that potential. Our next goal is to develop a self-contained, portable, out-of-the-box system, and we are eager to start examining our options for advancing it further with the help of commercial partners.”