AI-designed camera only records objects of interest while being blind to others

LOS ANGELES - Californer -- A new research paper published in eLight demonstrates a paradigm for privacy-preserving imaging based on a fundamentally new type of AI-designed imager. In the paper, UCLA researchers present a smart camera design that images only certain types of desired objects while instantaneously erasing other types of objects from its images, without requiring any digital processing.

This new camera design consists of successive transmissive surfaces, each composed of tens of thousands of diffractive features at the scale of the wavelength of light. The structure of these surfaces is optimized using deep learning to modulate the phase of the transmitted optical fields so that the camera images only certain classes of desired objects and erases the others. After this deep learning-based design (training), the resulting layers are fabricated and assembled in 3D. When input objects from the target classes appear in front of the camera, they form high-quality images at its output, as desired. In contrast, when input objects belong to other, undesired classes, they are optically erased, forming non-informative, low-intensity patterns similar to random noise. Since the characteristic information of the undesired classes is all-optically erased at the camera output through light diffraction, this AI-designed camera never records their direct images. The protection of privacy is therefore maximized: an adversarial attack with access to the camera's recorded images cannot bring the erased information back.

This AI-based camera design was also used to build encryption cameras, providing an additional layer of security and privacy protection. Such an encryption camera optically performs a selected linear transformation, exclusively for the target objects of interest. Only those with access to the decryption key (i.e., the inverse of that linear transformation) can recover the original images of the target objects. The information of the undesired objects, by contrast, is irreversibly lost, since the AI-designed camera all-optically erases them at its output. Even if the decryption key is applied to the recorded images, it yields only noise-like, unrecognizable features for the undesired classes of objects.
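In linear-algebra terms, the encryption camera applies an invertible transformation A to target objects, and the decryption key is its inverse. The short numpy sketch below is a stand-in for the optical system, not the fabricated device: the random key matrix and the noise model for an erased object are assumptions. It shows why the key recovers target images exactly but cannot recover objects the camera never recorded.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # size of a flattened toy "image"

# Hypothetical key: a random invertible matrix standing in for the
# camera's learned linear transformation (the optical encryption).
A = rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)  # the decryption key

# A target object reaches the sensor as its linearly transformed
# (encrypted) image; the key holder inverts it exactly.
x_target = rng.standard_normal(n)
recorded = A @ x_target
recovered = A_inv @ recorded
recovery_error = np.linalg.norm(recovered - x_target) / np.linalg.norm(x_target)
print(f"target recovery error: {recovery_error:.2e}")

# A non-target object is all-optically erased before recording: the
# sensor sees only a low-intensity noise-like pattern. Applying the key
# to that pattern yields noise, not the object.
x_other = rng.standard_normal(n)
noise_record = 0.01 * rng.standard_normal(n)  # stand-in for erased output
decrypted = A_inv @ noise_record
similarity = abs(decrypted @ x_other) / (
    np.linalg.norm(decrypted) * np.linalg.norm(x_other))
print(f"similarity to erased object: {similarity:.2f}")
```

The asymmetry is the point: decryption inverts the transformation applied to recorded light, but for erased objects no informative light was ever recorded, so there is nothing for the inverse to act on.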

This research was led by Professor Aydogan Ozcan along with Professor Mona Jarrahi, both from the Electrical and Computer Engineering (ECE) department at UCLA. The other authors include graduate students Bijie Bai, Yi Luo, Tianyi Gan, Yuhang Li, Yifan Zhao, Deniz Mengu and post-doctoral researcher Dr. Jingtian Hu, all with the ECE department at UCLA. The authors report the support of the US Office of Naval Research and the US Department of Energy.


Source: UCLA ITA
