Advanced optical neural networks that compute at the speed of light using engineered matter

LOS ANGELES - Californer -- A diffractive deep neural network is an optical machine learning framework that uses diffractive surfaces and engineered matter to perform computation all-optically. After the network is designed and trained in a computer using modern deep learning methods, it is physically fabricated, for example by 3D printing or lithography, to encode the trained model into matter. The resulting 3D structure of engineered matter consists of transmissive and/or reflective surfaces that together perform machine learning tasks through light-matter interaction and optical diffraction, at the speed of light and without any power beyond the light that illuminates the input object. This makes it possible to recognize target objects much faster and with significantly less power than standard computer-based machine learning systems, and it could offer major advantages for autonomous vehicles and various defense-related applications, among others. Introduced by UCLA researchers [1], the framework was experimentally validated for object classification and imaging, providing a scalable and energy-efficient optical computation platform.
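To give a rough sense of the underlying computation, the following is a minimal numerical sketch of a few diffractive layers: a thin phase mask (the "engineered matter") modulates the incoming optical field, which then propagates to the next surface by free-space diffraction. The scalar angular-spectrum model and the grid size, wavelength, layer spacing and random masks used here are illustrative assumptions for this sketch, not the parameters or trained models of the published designs.

```python
# Hedged sketch of a diffractive optical network forward pass (scalar model).
# All numerical values are illustrative; real designs learn the phase masks offline.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex optical field through free space (angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))               # drop evanescent components
    transfer = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def diffractive_layer(field, phase_mask, wavelength, dx, distance):
    """Apply a thin phase-only surface, then diffract to the next layer."""
    modulated = field * np.exp(1j * phase_mask)
    return angular_spectrum_propagate(modulated, wavelength, dx, distance)

# Toy forward pass: an input amplitude pattern passes through 5 phase masks
# (random stand-ins for trained surfaces); the detector records an intensity image.
n, wavelength, dx, distance = 200, 0.75e-3, 0.4e-3, 40e-3   # illustrative values (meters)
rng = np.random.default_rng(0)
field = rng.random((n, n)).astype(complex)                  # stand-in for the illuminated object
masks = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(5)]
for mask in masks:
    field = diffractive_layer(field, mask, wavelength, dx, distance)
output_intensity = np.abs(field) ** 2                       # what the output detectors measure
```

In the physical system this entire loop is replaced by light passing through the fabricated surfaces, which is why the inference itself consumes no power beyond the illumination.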

In their latest work [2], the UCLA group has taken full advantage of the inherent parallelism of optics and significantly improved the inference and generalization performance of diffractive optical neural networks, helping to close the gap between all-optical and standard electronic neural networks. The new design strategies achieved record inference accuracy for all-optical neural network-based machine learning, approaching the performance of some earlier generations of all-electronic deep neural networks. At the same time, all-optical neural networks retain important advantages, including inference speed, scalability, parallelism and the low power requirement of passive optical networks that compute through the diffraction of light by engineered matter.
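One of the design strategies reported in [2] is class-specific differential detection, in which each class is assigned a pair of detectors at the output plane and the class score is derived from the difference of the two detector signals. The sketch below illustrates that readout idea under assumed detector placements and a simple normalization; it is not the exact published detector layout or training scheme.

```python
# Hedged sketch of a differential detection readout: each class gets a
# "positive" and a "negative" detector region, and the class score is the
# normalized difference of the optical power collected by the pair.
import numpy as np

def detector_signal(intensity, region):
    """Sum the optical intensity falling on one rectangular detector region."""
    (r0, r1), (c0, c1) = region
    return intensity[r0:r1, c0:c1].sum()

def differential_class_scores(intensity, pos_regions, neg_regions, eps=1e-12):
    """Score each class as (I+ - I-) / (I+ + I- + eps) from its detector pair."""
    scores = []
    for pos, neg in zip(pos_regions, neg_regions):
        i_pos = detector_signal(intensity, pos)
        i_neg = detector_signal(intensity, neg)
        scores.append((i_pos - i_neg) / (i_pos + i_neg + eps))
    return np.array(scores)

# Example with 10 classes on a 200x200 output plane (placements are illustrative).
intensity = np.abs(np.random.randn(200, 200)) ** 2            # stand-in output intensity
pos_regions = [((10 + 18 * c, 22 + 18 * c), (20, 40)) for c in range(10)]
neg_regions = [((10 + 18 * c, 22 + 18 * c), (160, 180)) for c in range(10)]
predicted_class = int(np.argmax(differential_class_scores(intensity, pos_regions, neg_regions)))
```

Because the subtraction is performed on detector signals rather than in the optical path, the diffractive surfaces remain entirely passive while the readout gains the benefit of signed, differential class scores.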

This research was led by Dr. Aydogan Ozcan, a Chancellor's Professor of electrical and computer engineering at UCLA and an associate director of the California NanoSystems Institute (CNSI). The other authors of this work are graduate students Jingxi Li, Deniz Mengu and Yi Luo, as well as Dr. Yair Rivenson, an adjunct professor of electrical and computer engineering at UCLA.

"Our results provide a major advancement to bring optical neural network-based low power and low-latency solutions for various machine learning applications," said Prof. Ozcan. Moreover, these systematic advances in diffractive optical network designs might bring us a step closer to the development of next generation, task-specific and intelligent computational camera systems.

This research was supported by the Koç Group, NSF and HHMI.

References
[1] https://science.sciencemag.org/content/361/6406/1004
[2] https://www.spiedigitallibrary.org/journals/advanced-photonics/volume-1/issue-04/046001/Class-specific-differential-detection-in-diffractive-optical-neural-networks-improves/10.1117/1.AP.1.4.046001.full?SSO=1

Source: UCLA ITA
