According to foreign media reports, researchers at Stanford University and the University of California, San Diego (UCSD) have developed a camera that couples a monocentric lens to multiple image sensors through microlens arrays, greatly expanding the field of view (FOV) over which a light field (LF) can be captured. The camera generates 4D images and captures information across a 138° field of view.
It is the first single-lens, wide-field-of-view light field camera, producing information-rich images and video frames that can help robots navigate their surroundings and understand environmental details such as object distance and surface texture. According to the researchers, the technology could eventually be used in self-driving vehicles to enhance their perception, and it also has applications in virtual reality (VR). The researchers demonstrated the new technology at the 2017 Conference on Computer Vision and Pattern Recognition (CVPR 2017), held in July of this year.
According to Gordon Wetzstein, a professor of electrical engineering at Stanford University: "Light field capture and processing play an important role in expanding computer vision applications, providing rich texture and depth information while simplifying many tasks. Although light field cameras have been commercialized, current devices cannot deliver wide-field-of-view images because they rely on fisheye lenses, whose entrance pupil diameter is fundamentally limited, which in turn severely limits depth sensitivity."
He added: "In this study, we describe a new compact optical design that couples a monocentric lens to multiple sensors through microlens arrays, raising the camera's field-of-view capture to an unprecedented level while preserving excellent light field performance. The team also proposed a brand-new method to efficiently couple a spherical lens to planar sensors, replacing the expensive and bulky fiber bundles used previously."
The team built a single-sensor light field camera prototype in which a sensor rotates opposite a stationary main lens to emulate a wide-field, multi-sensor configuration. They also described a processing toolchain, including a practical spherical light field parameterization, and demonstrated depth estimation and post-capture refocusing on indoor and outdoor panoramas captured as 4D images of 15 × 15 × 1600 × 200 pixels (72 MPix) with a 138° field of view.
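To make that notation concrete: the four numbers are the two angular and two spatial dimensions of a 4D light field, and they multiply out to the quoted 72 MPix. Below is a minimal sketch assuming a simple NumPy array layout; it is an illustration of the dimensions only, not the team's actual data format.

```python
import numpy as np

# Hypothetical layout of the reported 4D light field dimensions:
# 15 x 15 angular samples (across the lens aperture) by
# 1600 x 200 spatial samples (across the panorama).
ANGULAR = (15, 15)     # (u, v): angular coordinates
SPATIAL = (1600, 200)  # (s, t): spatial coordinates

# A 4D light field is indexed as L(u, v, s, t).
# uint8 keeps this toy array small (~72 MB).
light_field = np.zeros(ANGULAR + SPATIAL, dtype=np.uint8)

# The sample count matches the article's 72 MPix figure:
# 15 * 15 * 1600 * 200 = 72,000,000
print(light_field.size)  # 72000000
```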
Professor Wetzstein led the research project together with Joseph Ford, a professor of electrical engineering at UC San Diego. The UC San Diego researchers designed the spherical lens that gives the camera its extremely wide field of view, covering nearly a third of the circle around the camera.
Prof. Ford's team had previously developed a similar spherical lens under the "SCENICC" program of the US Defense Advanced Research Projects Agency (DARPA), building a compact video camera that captures 360-degree high-resolution panoramic video at 125 million pixels (125 megapixels) per frame. In that project, the camera used fiber optic bundles to couple the spherical image onto traditional flat focal planes, which improved image quality but at a high cost.
The new camera uses a version of the spherical lens that eliminates the fiber bundles through a combination of small lenses (lenslets) and digital signal processing. Marrying the optical design and system integration hardware from Ford's laboratory with the signal processing and algorithmic techniques from Wetzstein's laboratory yielded a digital solution that not only produces extremely wide images but also increases their resolution.
The camera also relies on light field photography, a technology pioneered at Stanford, to produce its 4D images. It captures the two-axis direction of the light entering the lens and combines that information with the 2D image; because the image records both the position and the direction of the incoming light, it can be refocused after capture. Robots could use this capability to see through rain and other visual obstructions.
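To illustrate why recording light direction enables refocusing, here is a minimal sketch of the standard shift-and-sum refocusing technique applied to a 4D light field array. The function, its parameters, and the random test data are illustrative assumptions, not the team's published pipeline.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field L(u, v, s, t).

    alpha selects the synthetic focal plane: each angular view is
    shifted in proportion to its offset from the aperture center,
    then all views are averaged. (Illustrative sketch only.)
    """
    U, V, S, T = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view toward the chosen focal depth.
            du, dv = alpha * (u - uc), alpha * (v - vc)
            out += nd_shift(light_field[u, v], (du, dv),
                            order=1, mode="nearest")
    return out / (U * V)

# Example: refocus a random 15 x 15 x 200 x 200 light field.
lf = np.random.rand(15, 15, 200, 200)
image_near = refocus(lf, alpha=0.5)   # focal plane closer to the camera
image_far = refocus(lf, alpha=-0.5)  # focal plane farther away
```

Varying alpha slides the synthetic focal plane through the scene, which is exactly the post-capture refocusing the article describes; a conventional 2D photo discards the directional information needed for this step.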
Like a conventional camera, it works well at long range, but its design also enhances the quality of close-up images. As part of a virtual reality system, its depth information would allow more seamless rendering of real-world scenes and better integration of those scenes with virtual components.
The camera is currently at the proof-of-concept stage, and the team plans to build a compact prototype for testing on robots. The study was funded by the NSF/Intel Partnership on Visual and Experiential Computing.