Relighting photos with neural networks: research from MIT, Google, and others achieves a "ghost film" effect

Lighting is an important step in image processing, and its quality can make or break the final result. There are many ways to light an image, and new research from MIT, Google, and others offers a novel one: relighting and view synthesis of images via neural light transport (NLT), with quite impressive results.

Image synthesis is not a new topic, but lighting remains a hard problem for any photo. Lighting is complex even for human photographers, so how can it be handled in synthesized images?

Recently, researchers from MIT, Google, and the University of California, San Diego attempted to relight images and synthesize novel views via neural light transport.

How effective is the proposed NLT method? The researchers tested it in several scenarios, including directional lighting, relighting against different image backgrounds, and lighting after view synthesis along different camera paths.

The light transport (LT) of a scene describes how the scene appears under different lighting configurations and viewing angles. A full understanding of a scene's LT makes it possible to synthesize new views under arbitrary lighting conditions.

The paper studies image-based LT acquisition, mainly for human subjects in a light-stage setup. It proposes a semi-parametric approach that learns a neural representation of the LT, embedded in a texture-atlas space with known geometry. All non-diffuse and global LT is modeled as residuals that are added to a physically accurate diffuse base rendering.

Specifically, the study shows how to fuse previously seen light-source and viewpoint observations to synthesize new images of the same scene for a chosen viewpoint and desired lighting conditions.
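The residual decomposition above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `predict_residual` is a hypothetical stand-in for the learned network, which in the paper is a CNN predicting non-diffuse and global-illumination effects in texture space.

```python
import numpy as np

def predict_residual(texel_features):
    # Hypothetical stand-in for the learned network: in the paper this
    # is a CNN that predicts non-diffuse and global LT as residuals.
    return 0.1 * np.tanh(texel_features)

def render(diffuse_base, texel_features):
    """Final texture-space rendering = physically accurate diffuse base
    plus a learned residual, clipped to the valid intensity range."""
    residual = predict_residual(texel_features)
    return np.clip(diffuse_base + residual, 0.0, 1.0)

base = np.full((4, 4, 3), 0.5)   # diffuse base render over a tiny texture atlas
feats = np.zeros((4, 4, 3))      # per-texel input features (toy values)
out = render(base, feats)
```

Because the diffuse base is physically accurate by construction, the network only has to learn the (typically smaller) non-diffuse correction.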
This strategy lets the network learn complex material effects and global illumination while guaranteeing the physical correctness of the diffuse LT. With the learned LT, the scene can be realistically relit with directional lights or HDRI maps, new views with view-dependent effects can be synthesized, or both relighting and view synthesis can be performed in one unified framework from a sparse set of previously observed inputs. Qualitative and quantitative experiments show that NLT outperforms the current best relighting and view-synthesis methods, without having to treat the two problems separately as in prior work.

The paper's main contributions:

- An end-to-end semi-parametric method that uses a convolutional neural network to learn, from real data, to interpolate the 6D light transport function of each subject.
- By embedding the network in a parameterized texture atlas and taking a set of one-light-at-a-time (OLAT) images as input, a unified framework performs relighting and view synthesis simultaneously.
- A set of augmented texture-space inputs and a residual learning scheme built on a physically accurate diffuse base, which make it easy for the network to learn non-diffuse, higher-order light transport effects, subsurface scattering, and global illumination.

The framework is a semi-parametric model with a residual learning mechanism that narrows the realism gap between the diffuse rendering of a geometry proxy and the actual input images, as shown in Figure 2 below. The semi-parametric approach fuses previously recorded observations to generate realistic new images under any desired illumination and viewing angle. It benefits from recent advances in computer vision that enable accurate 3D reconstruction of human subjects.
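As a rough illustration of what a "6D light transport function" means as a query interface: the sketch below assumes the six dimensions decompose into 2D texture coordinates, a 2D light direction, and a 2D view direction (a natural split for a light-stage capture, though the exact parameterization is an assumption here, and `toy_model` is a made-up stand-in for the learned interpolator).

```python
import numpy as np

def query_light_transport(lt_model, uv, light_dir, view_dir):
    """Query a light transport function at a 6D point.

    Assumed parameterization: 2D texture location (uv) + 2D light
    direction + 2D view direction = 6 dimensions.
    """
    q = np.concatenate([uv, light_dir, view_dir])  # 6D query vector
    return lt_model(q)                             # RGB response

# Toy "model": RGB response as a fixed linear map of the 6D query.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 6))
toy_model = lambda q: W @ q

rgb = query_light_transport(toy_model,
                            np.array([0.5, 0.5]),   # texel
                            np.array([0.1, 0.2]),   # light direction
                            np.array([0.0, 0.3]))   # view direction
```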
Figure 2: the realism gap between the earlier Relightables method, the NLT method proposed in this study, and a real image.

The network has two paths: a query path and an observation path. The observation path takes as input K nearby observations, selected according to the target light source and view direction, and encodes them into multi-scale features; these features are then pooled so the result does not depend on their order or number.

The pooled features are concatenated with the feature activations of the query path, which takes the desired light source and view direction, along with the physically accurate diffuse base, as input. The query path predicts a residual map that is added to the diffuse base to produce the texture-space rendering.

Because the entire network is embedded in the texture space of the human subject, the same model can be trained, depending on the inputs and supervision signals, to perform relighting and view synthesis separately or both at once.

As shown in Table 4 below, the researchers quantitatively compared the view-synthesis quality of NLT against baseline relighting methods. NLT outperformed all baselines and was comparable to the method proposed by Thies et al. Finally, they analyzed NLT's performance under varying conditions: as the geometry degrades, the neural rendering approach consistently beats traditional reprojection methods, which depend heavily on geometry quality. For relighting, they also confirmed that NLT still works reasonably when the number of light sources is reduced, suggesting the method may also suit smaller light stages.
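The order- and count-invariant pooling in the observation path can be sketched as a simple max over the K encoded observations. This is a minimal illustration: `encode` below is a hypothetical stand-in for the paper's multi-scale CNN encoder.

```python
import numpy as np

def encode(observation):
    # Hypothetical per-observation encoder; in the paper this is a CNN
    # producing multi-scale features. Here: global-average color features.
    return observation.mean(axis=(0, 1))  # (C,)

def pool_observations(observations):
    """Max-pool K encoded observations so the result depends on neither
    their order nor their count K."""
    feats = np.stack([encode(o) for o in observations])  # (K, C)
    return feats.max(axis=0)                             # (C,)

# Three toy 8x8 RGB "observations" with different brightness.
obs = [np.full((8, 8, 3), v) for v in (0.2, 0.9, 0.5)]
pooled = pool_observations(obs)
```

Symmetric reductions like max or mean are the standard way to make a network invariant to the ordering of a variable-sized input set.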
As shown in Figure 13 below, the researchers ran an ablation study of NLT on the relighting task. The results show that removing different components of the model degrades rendering quality to varying degrees.

Of course, NLT also has failure cases in view synthesis. As shown in Figure 14 below, it may fail to produce realistic views of complex light transport effects, such as those caused by a necklace worn around the neck.