AI-designed structured material creates super-resolution images using a low-resolution display

LOS ANGELES - Californer -- One of the promising technologies being developed for next-generation augmented/virtual reality systems is holographic image displays that emulate the 3D optical waves representing the objects within a scene. This technology enables more compact and lightweight designs for wearable displays. However, the capabilities of holographic projection systems are limited, mainly by the small number of independently controllable pixels in existing image projectors and spatial light modulators.

A recent study published in Science Advances reported a deep learning-designed transmissive material that can project super-resolved images using low-resolution image displays. In their paper, UCLA researchers, led by Professor Aydogan Ozcan, used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale, creating a material-based physical image decoder that achieves super-resolution image projection as light is transmitted through its layers.

Imagine a stream of high-resolution images to be visualized on a wearable display. Instead of sending these images directly to the display, this new technology first runs them through a digital neural network that compresses them into lower-resolution images resembling bar codes, not meaningful to the human eye. These compressed images, however, are not decoded or decompressed on a computer. Instead, a transmissive, material-based diffractive decoder decompresses them all-optically, projecting the desired high-resolution images as light from the low-resolution display passes through the thin layers of the diffractive decoder. The image decompression from low to high resolution is therefore completed using only light diffraction through a passive, thin structured material. This process is extremely fast, and the transparent diffractive decoder can be as thin as a stamp. The scheme is also energy-efficient: because the image decompression follows from light diffraction through a passive material, it consumes no power apart from the illumination light.
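As a rough illustration of this encode-then-optically-decode pipeline, the sketch below models both stages as linear operators on the image: a hypothetical encoder matrix compresses a 32x32 image into an 8x8 bar-code-like pattern, and its pseudo-inverse stands in for the fixed, passive diffractive decoder. The dimensions, the random encoder, and the pseudo-inverse decoder are all assumptions for illustration; the actual system uses a jointly trained neural-network encoder and wavelength-scale diffractive layers, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 32x32 high-resolution image is encoded into an
# 8x8 low-resolution pattern (4x compression in each lateral direction).
HI, LO = 32, 8

# Stand-in "trained" encoder: a random linear map (in the paper, a deep
# neural network trained jointly with the diffractive decoder).
W_enc = rng.standard_normal((LO * LO, HI * HI))

# Stand-in passive decoder: the pseudo-inverse of the encoder. Physically,
# this role is played by light diffraction through fixed transmissive layers,
# which also acts as a fixed linear operator on the optical field.
W_dec = np.linalg.pinv(W_enc)

def digital_encoder(img):
    """Compress a high-res image into a bar-code-like low-res pattern."""
    return (W_enc @ img.ravel()).reshape(LO, LO)

def diffractive_decoder(pattern):
    """Decompress all-optically: a fixed, passive linear transformation."""
    return (W_dec @ pattern.ravel()).reshape(HI, HI)

# Build a test image that lies in the encoder's row space, i.e. an image
# that is compressible by this particular encoder, so decoding is exact.
img = (W_enc.T @ rng.random(LO * LO)).reshape(HI, HI)

low_res = digital_encoder(img)        # 8x8 pattern sent to the display
recovered = diffractive_decoder(low_res)  # 32x32 projected image
```

Note that only compressible images are recovered exactly by this toy linear model; the joint training of encoder and decoder in the actual work is what makes natural images compressible in this sense.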

The researchers showed that these diffractive decoders designed by deep learning can achieve a super-resolution factor of ~4 in each lateral direction of an image, corresponding to a ~16-fold increase in the effective number of useful pixels in the projected images. This diffractive image display also significantly reduces data transmission and storage requirements: encoding the high-resolution images into compact optical representations with fewer pixels shrinks the amount of information that must be transmitted to a wearable display.
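The pixel-count arithmetic behind that claim can be checked directly; the 8x8 display size below is a hypothetical example, while the factor of 4 is the reported value:

```python
# Super-resolution factor of ~4 along each lateral direction (reported value).
factor = 4

# Hypothetical low-resolution display: 8x8 = 64 controllable pixels.
display_pixels = 8 * 8

# Upscaling by `factor` in both x and y multiplies the pixel count by factor**2.
effective_pixels = display_pixels * factor ** 2

print(factor ** 2)       # 16-fold increase in effective pixel count
print(effective_pixels)  # 1024 effective pixels from a 64-pixel display
```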

This innovation was demonstrated using 3D-printed diffractive decoders that operate in the terahertz part of the electromagnetic spectrum, which is used, for example, in security image scanners at airports. The researchers also reported that the super-resolution capabilities of these diffractive decoders could be extended to project color images using red, green and blue wavelengths.

Publication: https://www.science.org/doi/10.1126/sciadv.add3433

Source: UCLA ITA
Filed Under: Science
