Optical Computing for Object Classification Through Diffusive Random Media

LOS ANGELES - Californer -- Object recognition through random scattering media is an important but challenging task in many fields, including biomedical imaging, oceanography, security, robotics, and autonomous driving. Numerous computational solutions have been developed to address this problem; however, they all require large-scale digital computing, consume significant amounts of energy, and still fail to generalize to new random diffusers never used in the training phase.

Researchers at UCLA have developed an all-optical method that enables objects to be classified through unknown random diffusers using diffractive deep neural networks (D2NNs). D2NNs compute a given task by modulating the diffraction of light through a series of spatially structured surfaces, collectively forming an all-optical computer that can operate at the speed of light. Such an all-optical computing framework offers high speed, parallelism, and low power consumption, and could be useful in many computing tasks, such as object classification, quantitative phase imaging, microscopy, and universal linear transformations.
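In simulation, such a diffractive network is commonly modeled as a cascade of thin phase-modulating layers separated by stretches of free-space propagation. The following is a minimal NumPy sketch of that forward model, assuming the standard angular spectrum propagation method; the grid size, pixel pitch, layer spacing, and phase-only layers here are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a sampled 2D complex field through free space
    using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    # Squared longitudinal spatial frequency; negative values are evanescent.
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def d2nn_forward(field, phase_layers, wavelength, dx, spacing):
    """Pass a field through a cascade of phase-only diffractive layers,
    with free-space propagation between consecutive layers."""
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, spacing)
        field = field * np.exp(1j * phase)  # thin phase modulation
    # Final propagation from the last layer to the detector plane.
    return angular_spectrum_propagate(field, wavelength, dx, spacing)
```

In an actual D2NN, the per-pixel phase values of each layer are the trainable parameters, optimized by deep learning before the layers are fabricated.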

Published in Light: Science & Applications, the research paper, titled "All-optical image classification through unknown random diffusers using a single-pixel diffractive network," presents a new method that uses broadband diffractive networks to directly classify unknown objects through unknown, random diffusers using a single-pixel spectral detector. This broadband diffractive network architecture uses 20 discrete wavelengths to map a diffuser-distorted object into a spectral signature detected through a single pixel. During training, many randomly generated phase diffusers were used to improve the generalization performance of the diffractive optical network. After the deep learning-based training process, which is a one-time effort, the resulting diffractive layers can be physically fabricated to form a single-pixel network that classifies objects completely hidden by new, unknown random diffusers never seen during training.
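The readout logic can be illustrated with a toy sketch: the single-pixel detector yields one power measurement per wavelength, forming a 20-element spectral signature, and the class decision is made from that vector. The mapping of two spectral bins per digit class below is a hypothetical simplification for illustration, not the paper's exact encoding scheme.

```python
import numpy as np

def classify_from_spectrum(signature, bins_per_class=2):
    """Toy single-pixel readout: group the detected spectral powers into
    per-class scores and pick the class with the largest score."""
    scores = np.asarray(signature).reshape(-1, bins_per_class).sum(axis=1)
    return int(np.argmax(scores))

# Example: a 20-wavelength signature whose power peaks in bins 6 and 7,
# which under this toy two-bins-per-class mapping belong to class 3.
signature = np.zeros(20)
signature[6], signature[7] = 0.9, 0.8
```

Because the decision reduces to comparing detected spectral powers, no digital reconstruction of the distorted image is needed; the diffractive layers perform the computation optically before the light reaches the detector.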

This network was demonstrated to successfully recognize handwritten digits through randomly selected, unknown phase diffusers with a blind testing accuracy of 87.74%. Furthermore, the researchers experimentally demonstrated the feasibility of this single-pixel broadband classifier using a 3D-printed diffractive network and a terahertz time-domain spectroscopy system. This optical computing framework can be scaled with respect to the illumination wavelength to operate at any part of the electromagnetic spectrum without redesigning or retraining its layers.

The research was led by Dr. Aydogan Ozcan, Chancellor's Professor and Volgenau Chair for Engineering Innovation at UCLA and HHMI Professor with the Howard Hughes Medical Institute. The other authors of this work include Bijie Bai, Yuhang Li, Yi Luo, Xurong Li, Ege Çetintaş, and Prof. Mona Jarrahi, all from the Electrical and Computer Engineering department at UCLA. Prof. Ozcan also has UCLA faculty appointments in the bioengineering and surgery departments and is an associate director of the California NanoSystems Institute (CNSI).

Reference Article: https://www.nature.com/articles/s41377-023-01116-3

Source: UCLA ITA
