Recent advances in generative artificial intelligence are helping to explain how memories allow us to learn about the world, relive old experiences, and construct entirely new scenarios for imagination and planning, according to a new study by UCL researchers.
The study, published in Nature Human Behaviour and funded by Wellcome, uses an AI computational model, known as a generative neural network, to simulate how neural networks in the brain learn and remember a sequence of events (each represented by a simple scene).
The model featured networks representing the hippocampus and neocortex to explore how they interact. Both parts of the brain are known to work together during memory, imagination and planning.
Lead author PhD student Eleanor Spens (UCL Institute of Cognitive Neuroscience) said: “Recent advances in the generative networks used in artificial intelligence show how information can be extracted from experience so that we can both recall a particular experience and flexibly imagine what new experiences might be like.
“We think of remembering as imagining the past based on concepts, combining some stored details with our expectations of what might have happened.”
Humans need to make predictions to survive (e.g. to avoid danger or find food), and the AI network suggests that replaying memories while we rest helps our brains pick up patterns from past experiences that can be used to make predictions.
The researchers presented the model with 10,000 images of simple scenes. The hippocampal network rapidly encoded each scene as it was experienced. It then replayed the scenes over and over to train the generative neural network in the neocortex.
The neocortical network learned to pass the activity of the thousands of input neurons (neurons that receive visual information) representing each scene through smaller intermediate layers of neurons (the smallest containing only 20 neurons), in order to recreate the scenes as patterns of activity in its thousands of output neurons (neurons that predict visual information).
This caused the neocortical network to learn highly efficient “conceptual” representations of the scenes that capture their meaning (e.g., the layouts of walls and objects), allowing both the re-creation of old scenes and the generation of entirely new ones.
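The bottleneck network described here is, in essence, an autoencoder: a network forced to squeeze its input through a narrow layer and reconstruct it at the output. Below is a minimal NumPy sketch of that idea, in which all sizes, training data, and hyperparameters are invented for illustration (the study's actual model and training procedure differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scenes" built from a few latent factors, so they contain learnable
# structure. Sizes are illustrative; the study used 10,000 scenes.
n_scenes, n_input, n_hidden, n_factors = 200, 100, 20, 5
prototypes = (rng.random((n_factors, n_input)) < 0.5).astype(float)
factors = (rng.random((n_scenes, n_factors)) < 0.5).astype(float)
X = (factors @ prototypes > 0).astype(float)  # each scene = union of prototypes

# Autoencoder weights: input -> 20-unit "conceptual" bottleneck -> reconstruction.
W1 = rng.normal(0.0, 0.1, (n_input, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_input))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

lr, losses = 1.0, []
for epoch in range(500):
    H = sigmoid(X @ W1)                   # compressed "conceptual" code
    Y = sigmoid(H @ W2)                   # reconstructed scene
    losses.append(((Y - X) ** 2).mean())  # track reconstruction error
    # Backpropagate a cross-entropy loss through both layers.
    d_out = (Y - X) / n_scenes
    d_hid = d_out @ W2.T * H * (1 - H)
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid

# "Imagination": decoding a novel bottleneck code yields a scene
# that was never presented during training.
imagined = sigmoid(rng.random(n_hidden) @ W2)
```

Replaying the scenes drives the reconstruction error down, and because the bottleneck code is generative, feeding it an arbitrary code produces a new scene rather than a stored one.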
Consequently, the hippocampus was able to encode the meaning of the new scenes presented to it, rather than having to encode every detail, allowing it to focus resources on encoding unique features that the neocortex could not reproduce — such as new types of objects.
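One way to picture this division of labour: if the neocortex can already reconstruct the familiar parts of a scene, the hippocampus only needs to store what the neocortex fails to predict. A toy sketch, in which the prototype scene, the single novel feature, and the scene encoding are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A familiar scene layout the neocortex can already reconstruct well.
prototype = (rng.random(100) < 0.3).astype(float)

# A new experience: the familiar layout plus one unique feature
# (say, a novel object at position 7).
new_scene = prototype.copy()
new_scene[7] = 1.0 - new_scene[7]  # flip one unit

# Stand-in for the neocortical reconstruction: here simply the prototype
# (a real model would decode through the trained generative network).
prediction = prototype

# The hippocampus need only encode the prediction error,
# i.e. the features the neocortex could not reproduce.
residual = new_scene - prediction
unique_features = np.flatnonzero(residual != 0)
```

Here `unique_features` contains only position 7: everything else in the scene is already captured by the neocortical reconstruction, so encoding the residual is far cheaper than encoding the whole scene.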
The model explains how the neocortex slowly acquires conceptual knowledge and how, together with the hippocampus, this allows us to “relive” events by reconstructing them in our minds.
The model also explains how new events can be generated during imagination and planning for the future, and why existing memories often contain “gist-like” distortions, in which unique features are generalised and remembered as being more like the features of previous events.
Senior author Professor Neil Burgess (UCL Institute of Cognitive Neuroscience and UCL Queen Square Institute of Neurology) explained: “The way memories are reconstructed, rather than being veridical records of the past, shows us how the meaning or gist of an experience is recombined with unique details, and how this can lead to biases in the way we remember things.”