Autonomous detection of AI hallucinations in digital pathology

LOS ANGELES -- Tissue staining is a cornerstone of medical diagnostics: it highlights cellular structures and renders tissue features visible under an optical microscope, which is critical for identifying diseases such as cancer. Traditionally, the process involves applying chemical dyes, such as hematoxylin and eosin (H&E), to thinly sliced tissue samples. While effective, it is time-consuming, destructive, and resource-intensive. Virtual staining, powered by AI, offers a transformative alternative: it digitally generates the equivalent of histochemically stained images from label-free autofluorescence microscopy data. This computational approach enables faster, more cost-effective, and scalable diagnostics without physical dyes, while preserving the tissue sample for further analysis. However, like other generative AI models, virtual staining carries the risk of hallucinations, errors in which the AI adds or alters microscopic tissue features that are not present in the actual specimen. When these hallucinations appear realistic, they can mislead even experienced pathologists and jeopardize diagnostic accuracy.
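
Conceptually, virtual staining is an image-to-image translation problem: a trained network maps label-free autofluorescence channels to an H&E-like RGB image. The sketch below illustrates only the inference step; the two-channel input, the stand-in network, and the function name are illustrative assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

# Stand-in generator: in practice this would be a trained image-to-image
# network; the tiny two-layer net here is only a placeholder so the
# sketch runs end to end.
generator = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1),  # assumed: 2 autofluorescence channels in
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),  # 3 RGB channels out (H&E-like)
    nn.Sigmoid(),                                # constrain output to [0, 1]
)
generator.eval()

def virtually_stain(autofluorescence: torch.Tensor) -> torch.Tensor:
    """Translate a (2, H, W) autofluorescence image into an H&E-like RGB image."""
    with torch.no_grad():
        return generator(autofluorescence.unsqueeze(0)).squeeze(0)  # (3, H, W)

# Example: one 256x256 field of view with two autofluorescence channels.
field = torch.rand(2, 256, 256)
he_like = virtually_stain(field)
print(he_like.shape)  # torch.Size([3, 256, 256])
```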

To address this challenge, a team led by Professor Aydogan Ozcan at the University of California, Los Angeles (UCLA), in collaboration with pathologists from the University of Southern California and Hadassah Hebrew University Medical Center, developed an autonomous image quality assessment tool for detecting hallucinations in virtual staining and digital pathology. Named AQuA (Autonomous Quality Assessment), the AI-powered tool detects subtle hallucinations in digitally stained tissue slides without requiring histochemical ground truth for comparison, and it outperforms human experts in identifying potentially misleading tissue image artifacts.

As reported in Nature Biomedical Engineering, AQuA operates independently of the original AI staining model and does not rely on paired histochemically stained images. It runs iterative image translation cycles between the H&E and autofluorescence domains, which amplify even subtle inconsistencies. These cycles produce sequences of images that are rapidly evaluated by an ensemble of neural networks, effectively a panel of digital judges, to determine image quality and flag hallucinations before the images reach pathologists. This architecture makes AQuA fast, adaptable, and scalable across different tissue types, staining styles, and pathology applications.
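
The paper's exact architecture is not reproduced here, but the core idea, cycling an image between domains and letting an ensemble vote, can be sketched as follows. All networks below are untrained placeholders, and the translator names, judge count, cycle count, and threshold are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Module:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

# Placeholder domain translators: H&E (3 channels) <-> autofluorescence
# (assumed 2 channels). In practice these would be trained networks.
to_autofluorescence = nn.Sequential(conv_block(3, 16), nn.Conv2d(16, 2, 3, padding=1))
to_he = nn.Sequential(conv_block(2, 16), nn.Conv2d(16, 3, 3, padding=1))

# Placeholder ensemble of quality judges: each maps an H&E-like image
# to a hallucination score in [0, 1] (higher = more suspect).
def make_judge() -> nn.Module:
    return nn.Sequential(
        conv_block(3, 8),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 1),
        nn.Sigmoid(),
    )

judges = [make_judge() for _ in range(5)]

def aqua_flag(he_image: torch.Tensor, cycles: int = 3, threshold: float = 0.5) -> bool:
    """Cycle the image between domains and let the ensemble vote.

    Returns True when the mean score over all cycles and judges exceeds
    the threshold, i.e. the image is flagged for pathologist review.
    """
    scores = []
    x = he_image.unsqueeze(0)
    with torch.no_grad():
        for _ in range(cycles):
            x = to_he(to_autofluorescence(x))  # one round trip amplifies inconsistencies
            scores.extend(float(judge(x)) for judge in judges)
    return sum(scores) / len(scores) > threshold

suspect = aqua_flag(torch.rand(3, 256, 256))
print("flag for pathologist review:", suspect)
```

Because the placeholder judges are untrained, the printed flag here is meaningless; the point of the sketch is the control flow: repeated round trips between domains expose inconsistencies, and the ensemble's aggregate score, rather than any single network, decides whether an image is withheld for review.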

In extensive testing on human kidney and lung biopsy samples, AQuA achieved 99.8% and 97.8% accuracy, respectively, in distinguishing high-quality from low-quality virtually stained images—all without access to the original histochemically stained tissue images or the AI model used to generate the virtually stained counterparts. It also showed over 98% agreement with board-certified pathologists and, in some cases, outperformed them—especially in detecting realistic-looking hallucinations that experts missed when ground truth staining was unavailable. Beyond virtual staining, the researchers demonstrated that AQuA could also assess the quality of conventional chemically stained tissue slides, automatically detecting common staining artifacts in clinical workflows.
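
For context on how such numbers are computed, here is a minimal sketch of the two reported metric types, accuracy against a reference label and agreement with an expert panel, using hypothetical binary quality calls; the paper's actual evaluation protocol is more involved.

```python
def accuracy(predicted: list[bool], reference: list[bool]) -> float:
    """Fraction of images where the tool's quality call matches the reference label."""
    matches = sum(p == r for p, r in zip(predicted, reference, strict=True))
    return matches / len(reference)

# Agreement with pathologists is the same computation with the expert
# panel's majority call standing in for the reference label.
tool_calls = [True, True, False, True]    # True = flagged as low quality (hypothetical)
panel_calls = [True, True, False, False]  # hypothetical pathologist majority votes
print(f"agreement: {accuracy(tool_calls, panel_calls):.0%}")  # 75%
```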

Paper: https://www.nature.com/articles/s41551-025-01421-9

Source: UCLA ITA
Filed Under: Science
