The Brain ‘Rotates’ Memories to Save Them From New Sensations

During every waking moment, we humans and other animals have to balance on the edge of our awareness of past and present. We must absorb new sensory information about the world around us while holding on to short-term memories of earlier observations or events. Our ability to make sense of our surroundings, to learn, to act, and to think all depend on constant, nimble interactions between perception and memory.

But to accomplish this, the brain has to keep the two distinct; otherwise, incoming data streams could interfere with representations of previous stimuli and cause us to overwrite or misinterpret important contextual information. Compounding that challenge, a body of research hints that the brain does not neatly partition short-term memory function exclusively into higher cognitive areas like the prefrontal cortex. Instead, the sensory regions and other lower cortical centers that detect and represent experiences may also encode and store memories of them. And yet those memories can’t be allowed to intrude on our perception of the present, or to be randomly rewritten by new experiences.

A paper published recently in Nature Neuroscience may finally explain how the brain’s protective buffer works. A pair of researchers showed that, to represent current and past stimuli simultaneously without mutual interference, the brain essentially “rotates” sensory information to encode it as a memory. The two orthogonal representations can then draw from overlapping neural activity without intruding on each other. The details of this mechanism may help to resolve several long-standing debates about memory processing.

To figure out how the brain prevents new information and short-term memories from blurring together, Timothy Buschman, a neuroscientist at Princeton University, and Alexandra Libby, a graduate student in his lab, decided to focus on auditory perception in mice. They had the animals passively listen to sequences of four chords over and over again, in what Buschman dubbed “the worst concert ever.”

These sequences allowed the mice to establish associations between certain chords, so that when they heard one initial chord versus another, they could predict what sounds would follow. Meanwhile, the researchers trained machine-learning classifiers to analyze the neural activity recorded from the rodents’ auditory cortex during these listening sessions, to determine how the neurons collectively represented each stimulus in the sequence.
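To make that decoding step concrete, here is a minimal sketch of the general kind of analysis described above, written in Python with simulated data. Nothing in it comes from the paper itself: the neuron count, the noise model, and the use of scikit-learn's logistic regression are assumptions for illustration. The idea is simply that a linear classifier trained on population firing rates can report which chord the population was representing on a given trial.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated example (not the authors' data or code): decode which of four
# chords was heard from the firing rates of a recorded neural population.
rng = np.random.default_rng(0)
n_neurons, n_trials_per_chord, n_chords = 50, 200, 4

# Assume each chord evokes its own mean activity pattern across the population.
chord_patterns = rng.normal(size=(n_chords, n_neurons))

X, y = [], []
for chord in range(n_chords):
    # Trial-to-trial variability around the chord's mean pattern.
    trials = chord_patterns[chord] + rng.normal(scale=1.5,
                                                size=(n_trials_per_chord, n_neurons))
    X.append(trials)
    y.append(np.full(n_trials_per_chord, chord))
X, y = np.vstack(X), np.concatenate(y)

# A linear classifier recovers chord identity from population activity; its
# cross-validated accuracy indicates how well the population represents each stimulus.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```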

Buschman and Libby watched how those patterns changed as the mice built up their associations. They found that over time, the neural representations of associated chords began to resemble each other. But they also observed that new, unexpected sensory inputs, such as unfamiliar sequences of chords, could interfere with a mouse’s representations of what it was hearing—in effect, by overwriting its representation of previous inputs. The neurons retroactively changed their encoding of a past stimulus to match what the animal associated with the later stimulus—even if that was wrong.

The researchers suspected that the brain must somehow be correcting for this retroactive interference to preserve accurate memories, and they wanted to find out how. So they trained another classifier to identify and differentiate neural patterns that represented memories of the chords in the sequences—the way the neurons were firing, for instance, when an unexpected chord evoked a comparison to a more familiar sequence. The classifier did find intact patterns of activity from memories of the actual chords that had been heard—rather than the false “corrections” written retroactively to uphold older associations—but those memory encodings looked very different from the sensory representations.

The memory representations were organized in what neuroscientists describe as an “orthogonal” dimension to the sensory representations, all within the same population of neurons. Buschman likened it to running out of room while taking handwritten notes on a piece of paper. When that happens, “you will rotate your piece of paper 90 degrees and start writing in the margins,” he said. “And that’s basically what the brain is doing. It gets that first sensory input, it writes it down on the piece of paper, and then it rotates that piece of paper 90 degrees so that it can write in a new sensory input without interfering or literally overwriting.”

In other words, sensory data was transformed into a memory through a morphing of the neuronal firing patterns. “The information changes because it needs to be protected,” said Anastasia Kiyonaga, a cognitive neuroscientist at UC San Diego who was not involved in the study.
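To see how such an orthogonal “rotation” lets a single population of neurons carry both a memory and a new input, here is a toy numerical sketch, again in Python. It is not the authors' model, just a geometric illustration under simple assumptions: population activity is treated as a vector, the sensory and memory codes are assigned to two orthogonal axes within that vector space, and reading out along each axis recovers each item without disturbing the other.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50

# Toy illustration (assumed setup, not the paper's model): the population's
# activity is a vector in a 50-dimensional space. "Sensory" and "memory"
# information occupy two orthogonal axes of that space.
sensory_axis = rng.normal(size=n_neurons)
sensory_axis /= np.linalg.norm(sensory_axis)

# Build a memory axis orthogonal to the sensory axis (one Gram-Schmidt step).
memory_axis = rng.normal(size=n_neurons)
memory_axis -= (memory_axis @ sensory_axis) * sensory_axis
memory_axis /= np.linalg.norm(memory_axis)

# "Rotate" the representation of chord A off the sensory axis onto the memory
# axis, then write the new chord B onto the now-free sensory axis.
chord_a_strength, chord_b_strength = 2.0, -1.5
population_activity = chord_a_strength * memory_axis + chord_b_strength * sensory_axis

# Reading out along each axis recovers each item without interference.
print("decoded memory of chord A:", round(population_activity @ memory_axis, 2))   # ~2.0
print("decoded current chord B: ", round(population_activity @ sensory_axis, 2))   # ~-1.5
```

Because the two axes are orthogonal, the projection onto one contributes nothing to the readout along the other, which is the geometric sense in which the memory and the incoming stimulus can share the same neurons without overwriting each other.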
