An interdisciplinary team of researchers from the Center for Cognition and Sociality and the Data Science Group of the Institute for Basic Science (IBS) has revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. The finding offers a new perspective on memory consolidation, the process that converts short-term memories into long-term ones, in artificial intelligence systems.
In the race to develop Artificial General Intelligence (AGI), with influential players such as OpenAI and Google DeepMind leading the way, understanding and replicating human intelligence has become a major research interest. Central to these developments is the Transformer model, whose fundamental principles are now being explored in new depth.
The key to building powerful AI systems is understanding how they learn and remember information. The team applied principles of human brain learning, specifically memory consolidation via the NMDA receptor in the hippocampus, to artificial intelligence models.
The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When the brain chemical glutamate is present, the nerve cell is primed for stimulation. But a magnesium ion acts as a small gatekeeper blocking the door, and only when this ionic guard is pushed aside, which happens when the cell is sufficiently electrically active, are substances allowed to flow into the cell. This is the process that allows the brain to create and retain memories, and the gatekeeping role of the magnesium ion makes the whole process highly nonlinear: the door opens only when the chemical signal and the cell's electrical activity coincide.
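To make the gatekeeping picture concrete, here is a minimal Python sketch of the voltage-dependent magnesium block as it is commonly modeled in computational neuroscience (a Jahr-Stevens-style sigmoid). The constants are standard illustrative values from that literature, not parameters taken from this study:

```python
import numpy as np

def nmda_mg_block(v_mV, mg_mM=1.0):
    """Fraction of NMDA channels unblocked by magnesium at voltage v.

    A Jahr-Stevens-style sigmoid: near the resting potential (~ -70 mV)
    the block is almost complete; as the cell depolarizes, the magnesium
    "gatekeeper" is pushed aside and the fraction approaches 1.
    Constants are commonly quoted illustrative values, not parameters
    from the IBS study.
    """
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mV))

# At rest the door is mostly blocked; when the cell is active it opens.
for v in (-70.0, -40.0, 0.0):
    print(f"V = {v:>6.1f} mV -> unblocked fraction = {nmda_mg_block(v):.3f}")
```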
The team made an exciting discovery: the Transformer model appears to use a gatekeeping process similar to the brain's NMDA receptor. This led the researchers to investigate whether the Transformer's memory consolidation might be controlled by a mechanism similar to the NMDA receptor's gating process.
In animal brains, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the Transformer can be improved by mimicking the NMDA receptor. Just as in the brain, where changing magnesium levels affect memory strength, adjusting the Transformer's parameters to reflect the gating action of the NMDA receptor led to improved memory in the AI model. This groundbreaking finding suggests that how AI models learn can be explained by established knowledge in neuroscience.
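The study's exact formulation is not reproduced here, but the idea can be sketched as follows: a SiLU-style activation f(x) = x * sigmoid(alpha * x), in which the coefficient alpha plays the role of the magnesium gatekeeper, dropped into a standard Transformer feed-forward block in place of the usual GELU. The class names (NMDAGate, FeedForward) and the parameter alpha below are illustrative assumptions, not the researchers' code:

```python
import torch
import torch.nn as nn

class NMDAGate(nn.Module):
    """Illustrative NMDA-receptor-inspired activation (a hedged sketch,
    not the study's exact formulation): f(x) = x * sigmoid(alpha * x),
    where alpha stands in for the magnesium "gatekeeper" -- a larger
    alpha gives a sharper gate."""
    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.alpha * x)

class FeedForward(nn.Module):
    """A standard Transformer feed-forward block with the usual GELU
    swapped for the NMDA-style gate, mirroring the idea of adjusting
    the model's nonlinearity to reflect NMDA receptor gating."""
    def __init__(self, d_model: int = 512, d_ff: int = 2048, alpha: float = 1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),
            NMDAGate(alpha),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

x = torch.randn(2, 16, 512)             # (batch, sequence, model dim)
print(FeedForward(alpha=2.0)(x).shape)  # torch.Size([2, 16, 512])
```

In this sketch, tuning alpha is the software analogue of changing magnesium levels: it reshapes the gate and, per the study's claim, influences how well the model retains information over long contexts.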
C. Justin LEE, a neuroscientist and director at the institute, said: “This research takes a critical step in advancing artificial intelligence and neuroscience. It allows us to delve deeper into the principles of how the brain works and develop more advanced artificial intelligence systems based on these ideas.”
CHA Meeyoung, a data scientist on the team who is also at KAIST, notes: “The human brain is remarkable in how it works with minimal energy, unlike large artificial intelligence models that need enormous resources. Our work opens up new possibilities for low-cost, high-performance artificial intelligence systems that learn and remember information like humans.”
What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an artificial intelligence construct, marking a significant advance in simulating human memory consolidation. The convergence of human cognition and AI design not only holds promise for low-cost, high-performance AI systems, but also provides valuable insights into brain function through AI models.