New optical generative models, ushering in a new era of sustainable generative AI
The Californer/10324441

LOS ANGELES - Californer -- In a major leap for artificial intelligence (AI) and photonics, researchers at the University of California, Los Angeles (UCLA) have created optical generative models capable of producing novel images using the physics of light rather than conventional electronic computation. Published in Nature, the work presents a new paradigm for generative AI that could dramatically reduce energy use while enabling scalable, high-performance content creation.

Generative models, including diffusion models and large language models, form the backbone of today's AI revolution. Running such models requires massive computational infrastructure, raising concerns about their long-term sustainability.

The UCLA team, led by Professor Aydogan Ozcan, has charted a different course. Instead of relying solely on digital computation, their system performs the generative process optically—harnessing the inherent parallelism and speed of light to produce images in a single pass. By doing so, the team addresses one of AI's greatest bottlenecks: balancing performance with efficiency.

The models integrate a shallow digital encoder with a free-space diffractive optical decoder, trained together as one system. Random noise is first processed into "optical generative seeds," which are projected onto a spatial light modulator and illuminated by laser light. As this light propagates through the static, pre-optimized diffractive decoder, it produces images that statistically follow the target data distribution. Unlike digital diffusion models that require hundreds to thousands of iterative steps, this process achieves image generation in a snapshot, requiring no additional computation beyond the initial encoding through a shallow digital network and light illumination.
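The pipeline above can be sketched in simulation. The following NumPy-only sketch is a hypothetical illustration, not the authors' code: the shallow encoder is stood in by a single random linear layer, the diffractive decoder by a fixed random phase mask, and free-space propagation by the standard angular spectrum method. In the real system the encoder and decoder are trained jointly; here they are untrained placeholders that only show the single-pass data flow from noise to intensity image.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64            # simulation grid (pixels)
wavelength = 520e-9
pixel = 8e-6      # SLM pixel pitch (m)
z = 0.05          # free-space propagation distance (m)

# 1) Shallow digital encoder: map random noise to an "optical generative seed"
#    displayed as a phase pattern on the spatial light modulator.
#    (Illustrative: one random linear layer + sigmoid -> phase in [0, 2*pi].)
W = rng.normal(scale=0.1, size=(N * N, 100))
def shallow_encoder(noise):
    seed = 1.0 / (1.0 + np.exp(-W @ noise))              # values in (0, 1)
    return np.exp(1j * 2 * np.pi * seed.reshape(N, N))   # phase-only SLM field

# 2) Free-space propagation between planes (angular spectrum method).
fx = np.fft.fftfreq(N, d=pixel)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
def propagate(field):
    return np.fft.ifft2(np.fft.fft2(field) * H)

# 3) Static diffractive decoder: a fixed phase mask. (Random here; in the
#    actual system it is pre-optimized jointly with the digital encoder.)
decoder_phase = np.exp(1j * 2 * np.pi * rng.random((N, N)))

def generate(noise):
    field = propagate(shallow_encoder(noise))   # SLM plane -> decoder plane
    field = propagate(field * decoder_phase)    # decoder plane -> sensor plane
    return np.abs(field) ** 2                   # intensity image, one pass

image = generate(rng.normal(size=100))
print(image.shape)  # (64, 64)
```

Note that generation here is a single forward propagation: no iterative denoising loop runs at inference time, only the cheap initial encoding and the passive optics.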

To validate their approach, the team demonstrated both numerical and experimental results across diverse datasets. The models generated new images of handwritten digits, fashion items, butterflies, human faces, and even artworks inspired by Vincent van Gogh. The optically generated outputs were shown to be statistically comparable to those from advanced diffusion models, based on standard image quality metrics. They also produced multi-color images and high-resolution Van Gogh-style artworks, underscoring the creative range of the optical generative AI approach.

The researchers developed two frameworks: snapshot optical generative models, which produce new images in a single optical pass, and iterative optical generative models, which mimic digital diffusion to refine outputs over successive steps. This flexibility allows multiple tasks to be performed on the same optical hardware simply by updating the encoded seeds and the pre-trained diffractive decoder.

The broader implications of this breakthrough are significant. Optical generative models could lower the energy footprint of AI at scale, making sustainable deployment possible while unlocking ultra-fast inference speeds.

The authors of the work include Dr. Shiqi Chen, Yuhang Li, Yuntian Wang, Hanlong Chen, and Dr. Aydogan Ozcan, all from the UCLA Samueli School of Engineering.

Article: https://www.nature.com/articles/s41586-025-09446-5

Source: UCLA ITA
Filed Under: Science


