Discovering New Powers of AI

In the past few decades, artificial intelligence has proven extremely good at achieving outstanding results in many fields. Chess is one of them: in 1996, the computer Deep Blue defeated the chess champion Garry Kasparov for the first time. A new study shows that the strategy the brain uses to store memories may lead to imperfect recollections, but in exchange it lets the brain store more memories, and with less effort, than AI.

Fremont, CA: Neural networks, real or artificial, learn by adjusting the connections between neurons. As connections are made stronger or weaker, some neurons become more active, some less, until a pattern of activity emerges. This pattern is what we call a "memory". The AI strategy uses complex, lengthy algorithms that iteratively tune and optimize the connections. The brain does it much more simply: each connection between two neurons changes based on how active the two neurons are at the same time. Compared with the AI algorithms, this rule has long been thought to allow the storage of fewer memories. However, this conventional wisdom about storage capacity and retrieval is largely based on analyses of networks that assume a fundamental simplification: that neurons can be treated as binary units.
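The local, coactivity-based rule described above is commonly formalized as Hebbian learning. As a toy illustration (not the study's exact model), here is a minimal Hopfield-style network of binary neurons: weights are set by the outer product of the stored patterns, so each connection grows with how often the two neurons are co-active, and a corrupted cue is cleaned up by iterating the dynamics.

```python
import numpy as np

def hebbian_train(patterns):
    """Store patterns with a Hebbian rule: each weight reflects how
    often the two neurons are co-active across the stored patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n  # outer-product (coactivity) rule
    np.fill_diagonal(W, 0)         # no self-connections
    return W

def recall(W, state, steps=10):
    """Iteratively update all neurons until the activity settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1      # break ties consistently
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))  # 3 random memories, 100 neurons
W = hebbian_train(patterns)

# Corrupt a stored memory, then retrieve it from the noisy cue.
cue = patterns[0].copy()
cue[:10] *= -1                     # flip 10% of the neurons
retrieved = recall(W, cue)
overlap = (retrieved @ patterns[0]) / 100
print("overlap with stored memory:", overlap)
```

With only three memories in a hundred neurons the network is far below its capacity limit, so the retrieved pattern overlaps almost perfectly with the original despite the corrupted cue.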

The new research, however, shows otherwise: the lower memory capacity attributed to the brain's strategy rests on those unrealistic assumptions. When the simple rule the brain uses to change its connections is combined with biologically plausible models of single-neuron responses, the strategy performs as well as, or even better than, AI algorithms. How could this be the case? Paradoxically, the answer lies in introducing errors: an effectively retrieved memory may be identical to the original input-to-be-memorized, or merely correlated with it.

The brain's strategy leads to the retrieval of memories that are not identical to the original input, because it silences the activity of those neurons that are only barely active in each pattern. Those silenced neurons do not play a crucial role in distinguishing among the different memories stored within the same network. By ignoring them, neural resources can be focused on the neurons that do matter in an input-to-be-memorized, enabling a higher capacity.
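One way to picture this silencing (a toy illustration, not the study's model) is a threshold-linear, ReLU-like neuron response: inputs below a firing threshold produce no activity at all, so the weakly driven neurons in a retrieved pattern are zeroed out and only the strongly active, memory-defining neurons survive. The threshold value below is an arbitrary choice for the example.

```python
import numpy as np

def threshold_linear(h, theta):
    """Threshold-linear response: neurons whose input falls below the
    threshold theta are silenced; the rest respond linearly."""
    return np.maximum(h - theta, 0.0)

# An activity pattern with a few strongly active neurons and
# several barely active ones (illustrative values).
pattern = np.array([0.05, 0.9, 0.1, 1.2, 0.02, 0.8])

retrieved = threshold_linear(pattern, theta=0.2)
active = retrieved > 0
print(active)
```

The retrieved pattern is no longer identical to the input, as the article describes, but the neurons that distinguish this memory from others are preserved.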

Overall, this research highlights how biologically plausible self-organized learning procedures can be just as efficient as slow and neurally implausible training algorithms.
