The Science of How Memory Works
By Maria Popova
“Whatever becomes of [old memories], in the long intervals of consciousness?” Henry James wistfully pondered upon turning fifty. “They are like the lines of a letter written in sympathetic ink; hold the letter to the fire for a while and the grateful warmth brings out the invisible words.” James was not alone in seeking to understand the seemingly mysterious workings of human memory — something all the more urgently fascinating in our age of information overload, where we’re evolving a new kind of “transactive memory.” But like other scientific mysteries of how the brain works — including what actually happens while we sleep and why some people are left-handed — memory continues to give scientists more questions than answers.
In The Guardian of All Things: The Epic Story of Human Memory (public library), technology writer Michael S. Malone takes a 10,000-year journey into humanity’s understanding of our great cognitive record-keeper, exploring both its power and its ongoing perplexity.
One of the most astounding facts Malone points out is that memory — that is, the creation of memories — is the result of a biochemical reaction that takes place inside neurons, one particularly common among the neurons responsible for our senses. Scientists have recently discovered that our short-term memory — also known as “working memory,” the kind responsible for the “chunking” mechanism that powers our pattern-recognition and creativity — is localized to a few specific areas of the brain. The left hemisphere, for instance, is mostly in charge of verbal and object-oriented tasks. Even so, scientists remain mystified by the specific distribution, retrieval, and management of memory. Malone writes:
One popular theory holds that short-term memory consists of four “slave” systems. The first is phonological, for sound and language, and (when its contents begin to fade) it buys extra time through a second slave system. This second operation is a continuous rehearsal system — as when you repeat a phone number you’ve just heard while running to the other room for your phone. The third system is a visuo-spatial sketch pad that, as the name suggests, stores visual information and mental maps. Finally, the fourth (and most recently discovered) slave is an episodic buffer that gathers all of the diverse information from the other slaves, and perhaps other information from elsewhere, and integrates it into what might be described as a multimedia memory.
It’s worth noting that memory and creativity have a great deal in common. The combinatorial process of memory-making that Malone describes is remarkably similar to how creativity works: we gather ideas and information simply by being alive and awake to the world, record some of those impressions in our mental sketch pad, then integrate the various bits into new combinations that we call our “own” ideas — a kind of “multimedia” assemblage of existing pieces.
Malone goes on to explore the inner workings of long-term memory — a substantially different beast, designed to keep our permanent mental record:
Chemically, we have a pretty good idea how memories are encoded and retained in brain neurons. As with short-term memory, the storage of information is made possible by the synthesis of certain proteins in the cell. What differentiates long-term memory in neurons is that frequent repetition of signals causes magnesium to be released — which opens the door for the attachment of calcium, which in turn makes the record stable and permanent. But as we all know from experience, memory can still fade over time. For that, the brain has a chemical process called long-term potentiation that regularly enhances the strength of the connections (synapses) between the neurons and creates an enzyme protein that also strengthens the signal — in other words, the memory — inside the neuron.
From the functional, Malone moves on to the structural organization of memory, where another dichotomy emerges:
Architecturally, the organization of memory in the brain is a lot more slippery to get one’s hands around (so to speak); different perspectives all seem to deliver useful insights. For example, one popular way to look at brain memory is to see it as taking two forms: explicit and implicit. Explicit, or “declarative,” memory is all the information in our brains that we can consciously bring to the surface. Curiously, despite its huge importance in making us human, we don’t really know where this memory is located. Scientists have, however, divided explicit memory into two forms: episodic, or memories that occurred at a specific point in time; and semantic, or understandings (via science, technology, experience, and so on) of how the world works.
Implicit, or “procedural” memory, on the other hand, stores skills and memories of how to physically function in the natural world. Holding a fork, driving a car, getting dressed — and, most famously, riding a bicycle — are all nuanced activities that modern humans do without really giving them much thought; and they are skills, in all their complexity, that we can call up and perform decades after last using them.
One of the most confounding pieces of the cognitive puzzle, however, is a form of memory known as emotional memory — a specialized system for cataloging our memories based on the emotions they evoke. It’s unclear whether it belongs to the explicit or implicit domain, or to both, and scientists are still seeking to understand whether it serves as a special “search function” for the brain. (What we do now know, however, is that sharpening “emotional recall” might be the secret to better memory.)
From all this perplexity emerges Malone’s bigger point, a somewhat reassuring testament to the idea that science, at its best, is always driven by “thoroughly conscious ignorance”:
What we do know is that — a quarter-million years after mankind inherited this remarkable organ called the brain — even with all of the tools available to modern science, human memory remains a stunning enigma.
Published April 8, 2014