"High achiever" neurons carry the brunt of memories

A novel neural model sheds light on how the brain stores and manages information.

MOUNTAIN VIEW, Calif. - Californer -- Within neural networks, diversity is key to handling complex tasks. A 2017 study (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC56...) by Dr. Gabriele Scheler revealed that neurons develop significant variability through the learning process.[1] As networks learn, neuronal properties change: they fire at different rates, form stronger or weaker connections, and vary in how easily they can be activated. Dr. Scheler showed that this heterogeneity follows a predictable pattern across brain regions and neuronal subtypes: while most neurons function at average levels, a select few are highly active. Does this neuronal variability enable networks to process information more efficiently? A new study offers some answers.
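
To picture that skewed profile, here is a minimal Python sketch, with entirely illustrative parameters rather than values from the study, of a heavy-tailed (lognormal-style) firing-rate distribution in which most neurons sit near a modest rate while a small fraction fires far above it:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    # Hypothetical parameters: the exact values below are illustrative only.
    rates = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)  # firing rates in Hz

    top_1pct = np.quantile(rates, 0.99)
    print(f"median rate: {np.median(rates):.1f} Hz")
    print(f"top 1% of neurons fire above: {top_1pct:.1f} Hz")
    print(f"share of total spiking from the top 1%: {rates[rates >= top_1pct].sum() / rates.sum():.1%}")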

In a June 30th preprint on bioRxiv (https://www.biorxiv.org/content/10.1101/658153v...), Dr. Scheler and Dr. Johann Schumann introduced a neuronal network model that mimics the brain's memory storage and recall functions.[2] Central to this model are high "mutual-information" (MI) neurons, the high-functioning neurons identified in the 2017 study. They found that high MI neurons carry the most crucial information within a memory or pattern representation. Remarkably, stimulating only high MI neurons can trigger the recall of entire patterns, although in a compressed form.
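
The preprint's model is more elaborate, but the core ranking idea can be sketched as follows: estimate, for each neuron, the mutual information between its binarized activity and the identity of the stored pattern, then treat the top scorers as candidate high MI neurons. All names, sizes, and data in this Python sketch are invented for illustration:

    import numpy as np

    def mutual_information(activity, labels):
        """Estimate MI (in bits) between one neuron's binary activity and pattern labels."""
        mi = 0.0
        for a in np.unique(activity):
            for l in np.unique(labels):
                p_joint = np.mean((activity == a) & (labels == l))
                if p_joint > 0:
                    mi += p_joint * np.log2(p_joint / (np.mean(activity == a) * np.mean(labels == l)))
        return mi

    rng = np.random.default_rng(seed=1)
    n_neurons, n_trials = 50, 400
    labels = rng.integers(0, 4, size=n_trials)                 # four stored patterns
    activity = rng.integers(0, 2, size=(n_neurons, n_trials))  # mostly uninformative neurons
    activity[:5] = labels % 2                                  # five neurons track the patterns

    scores = [mutual_information(activity[i], labels) for i in range(n_neurons)]
    print("candidate high-MI neurons:", np.argsort(scores)[::-1][:5])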

The finding supports a "hub-and-spoke" model of neural networks, in which a few "hub" neurons represent broad concepts and "spoke" neurons represent the specific details connected to those concepts. Activating just the central hub neurons triggers the connected spoke neurons downstream, recreating the original pattern. These small teams of neurons, or neural ensembles, could be key to recording and recalling complex memories in the brain. "We believe that such structures imply greater advantages for recall," the authors concluded.
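
As a caricature of that recall dynamic, the Python sketch below (with hypothetical wiring, not the authors' model) spreads activation one step from stimulated hub neurons to their connected spokes:

    # A toy hub-and-spoke ensemble: hub neurons index broad concepts,
    # spoke neurons hold the details. All wiring here is hypothetical.
    ensemble = {
        "hub_animal": ["spoke_fur", "spoke_four_legs", "spoke_tail"],
        "hub_vehicle": ["spoke_wheels", "spoke_engine"],
    }

    def recall(stimulated_hubs):
        """Activating hubs triggers their connected spokes, recreating the full pattern."""
        active = set(stimulated_hubs)
        for hub in stimulated_hubs:
            active.update(ensemble.get(hub, []))
        return active

    # Stimulating a single hub recovers the whole pattern it indexes.
    print(sorted(recall(["hub_animal"])))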

This model not only provides insight into human cognition but could also have major implications for building better AI systems. Unlike typical AI models, which rely on vast amounts of data to learn, a neural ensemble-based model could potentially adapt and learn from fewer examples. A traditional AI model, for instance, would need many images to learn to identify shapes. The ensemble-based model, by contrast, could learn the basic identifying properties of each shape (e.g., a square has four equal sides) from a handful of examples and then recognize those shapes in new contexts without extensive training data.
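
The contrast can be caricatured in a few lines of hypothetical Python (not the authors' system): instead of fitting a classifier to thousands of labeled images, each shape is encoded by a defining property that could be learned from a handful of examples:

    from math import isclose

    # Hypothetical property-based recognizer: each shape is defined by a small
    # rule (e.g., a square has four equal sides) rather than by many examples.
    def classify(side_lengths):
        n = len(side_lengths)
        equal = all(isclose(s, side_lengths[0], rel_tol=1e-6) for s in side_lengths)
        if n == 3:
            return "equilateral triangle" if equal else "triangle"
        if n == 4:
            return "square" if equal else "quadrilateral"
        return f"{n}-gon"

    print(classify([2.0, 2.0, 2.0, 2.0]))  # -> square
    print(classify([3.0, 4.0, 5.0]))       # -> triangle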

Next, Dr. Scheler, Dr. Schumann, and collaborator Prof. Röhrbein are focused on deploying models such as this one to help other researchers build better AI systems. They are in the initial stages of launching a startup that offers a platform for developers to build network models rooted in biology. "By providing these ready-made components, users can create models that are more cognitively oriented and computationally efficient than traditional statistical machine learning models," says Dr. Scheler.

Contact
For media inquiries, contact Dr. Gabriele Scheler
***@theoretical-biology.org


Source: Carl Correns Foundation for Mathematical Biology
Filed Under: Science
