The brain contains neural machinery for recognizing errors, correcting them, and optimizing behavior.
The neurotransmitter dopamine plays a major role in our ability to learn from our mistakes. Genetic variants that affect dopamine signaling may partly explain differences between people in the extent to which they learn from errors or negative consequences.
Certain patterns of cerebral activity often foreshadow errors, opening up the possibility of preventing blunders with portable devices that can detect error-prone brain states.
April 26, 1986: During routine testing, reactor number 4 of the Chernobyl nuclear power plant explodes, triggering the worst catastrophe in the history of the civilian use of nuclear energy.
September 22, 2006: On a trial run, experimental maglev train Transrapid 08 plows into a maintenance vehicle at 125 mph near Lathen, Germany, spewing wreckage over hundreds of yards, killing 23 passengers and severely injuring 10 others.
Human error was behind both accidents. Of course, people make mistakes, both large and small, every day, and monitoring and fixing slipups is a regular part of life. Although people understandably would like to avoid serious errors, most goofs have a good side: they give the brain information about how to improve or fine-tune behavior. In fact, learning from mistakes is likely essential to the survival of our species.
In recent years researchers have identified a region of the brain called the medial frontal cortex that plays a central role in detecting mistakes and responding to them. These frontal neurons become active whenever people or monkeys change their behavior after the kind of negative feedback or diminished reward that results from errors.
Much of our ability to learn from flubs, the latest studies show, stems from the actions of the neurotransmitter dopamine. In fact, genetic variations that affect dopamine signaling may help explain differences between people in the extent to which they learn from past goofs. Meanwhile certain patterns of cerebral activity often foreshadow miscues, opening up the possibility of preventing blunders with portable devices that can detect error-prone brain states.
Error Detector
Hints of the brain’s error-detection apparatus emerged serendipitously in the early 1990s. Psychologist Michael Falkenstein of the University of Dortmund in Germany and his colleagues were monitoring subjects’ brains using electroencephalography (EEG) during a psychology experiment and noticed that whenever a subject pressed the wrong button, the electrical potential in the frontal lobe suddenly dropped by about 10 microvolts. Psychologist William J. Gehring of the University of Illinois and his colleagues confirmed this effect, which researchers refer to as error-related negativity, or ERN.
An ERN may appear after various types of errors, unfavorable outcomes or conflict situations. Action errors occur when a person’s behavior produces an unintended result. Time pressure, for example, often leads to misspellings while typing or incorrect addresses on e-mails. An ERN quickly follows such action errors, peaking within 100 milliseconds after the incorrect muscle activity ends.
A slightly more delayed ERN, one that crests 250 to 300 milliseconds after an outcome, occurs in response to unfavorable feedback or monetary losses. This so-called feedback ERN also may appear in situations in which a person faces a difficult choice—known as decision uncertainty—and remains conflicted even after making a choice. For instance, a feedback ERN may occur after a person has picked a checkout line in a supermarket and then realizes that the line is moving slower than the adjacent queue.
Where in the brain does the ERN originate? Using functional magnetic resonance imaging (fMRI), among other imaging methods, researchers have repeatedly found that error recognition takes place in the medial frontal cortex, a region on the surface of the brain in the middle of the frontal lobe that includes the anterior cingulate. Such studies implicate this brain region as a monitor of negative feedback, action errors and decision uncertainty, and thus as an overall supervisor of human performance.
In a 2005 paper, psychologist Stefan Debener of the Institute of Hearing Research in Southampton, England, our colleagues and I showed that the medial frontal cortex is the probable source of the ERN. In this study, subjects performed a so-called flanker task, in which they specified the direction of a central target arrow amid surrounding decoy arrows while we monitored their brain activity using EEG and fMRI simultaneously. We found that as soon as an ERN occurs, activity in the medial frontal cortex increases, and that the bigger the ERN, the stronger the fMRI signal, suggesting that this brain region does indeed generate the classic error signal.
Learning from Lapses
In addition to recognizing errors, the brain must have a way of adaptively responding to them. In the 1970s psychologist Patrick Rabbitt of the University of Manchester in England, one of the first to systematically study such reactions, observed that typing misstrikes are made with slightly less keyboard pressure than are correct strokes, as if the typist were attempting to hold back at the last moment.
More generally, people often react to errors by slowing down after a mistake, presumably to more carefully analyze a problem and to switch to a different strategy for tackling a task. Such behavioral changes represent ways in which we learn from our mistakes in hopes of avoiding similar slipups in the future.
The medial frontal cortex seems to govern this process as well. Imaging studies show that neural activity in this region increases, for example, before a person slows down after an action error. Moreover, researchers have found responses from individual neurons in the medial frontal cortex in monkeys that implicate these cells in an animal’s behavioral response to negative feedback, akin to that which results from an error.
In 1998 neuroscientists Keisetsu Shima and Jun Tanji of the Tohoku University School of Medicine in Sendai, Japan, trained three monkeys to either push or turn a handle in response to a visual signal. A monkey chose its response based on the reward it expected: it would, say, push the handle if that action had been consistently followed by a reward. But when the researchers successively reduced the reward for pushing—a type of negative feedback or error signal—the animals would within a few trials switch to turning the handle instead. Meanwhile researchers were recording the electrical activity of single neurons in part of the monkeys’ cingulate.
Shima and Tanji found that four types of neurons altered their activity after a reduced reward, but only if the monkey used that reduction as a cue to push instead of turn, or vice versa. These neurons did not flinch if the monkey decided not to switch actions or if it switched in response to a tone rather than to a lesser reward. And when the researchers temporarily deactivated neurons in this region, the monkey no longer switched movements after a dip in its incentive. Thus, these neurons relay information about the degree of reward for the purpose of altering behavior and can use negative feedback as a guide to improvement.
In 2004 neurosurgeon Ziv M. Williams and his colleagues at Massachusetts General Hospital reported finding a set of neurons in the human anterior cingulate with similar properties. The researchers recorded from these neurons in five patients who were scheduled for surgical removal of that brain region. While these neurons were tapped, the patients did a task in which they had to choose one of two directions to move a joystick based on a visual cue that also specified a monetary reward: either nine or 15 cents. On the nine-cent trials, participants were supposed to change the direction in which they moved the joystick.
Similar to the responses of monkey neurons, activity among the anterior cingulate neurons rose to the highest levels when the cue indicated a reduced reward along with a change in the direction of movement. In addition, the level of neuronal activity predicted whether a person would act as instructed or make an error. After surgical removal of those cells, the patients made more errors when they were cued to change their behavior in the face of a reduced payment. These neurons, therefore, seem to link information about rewards to behavior. After detecting discrepancies between actual and desired outcomes, the cells determine the corrective action needed to optimize reward.
But unless instructed to do so, animals do not generally alter their behavior after just one mishap. Rather they change strategies only after a pattern of failed attempts. The anterior cingulate also seems to work in this more practical fashion in arbitrating the response to errors. In a 2006 study, experimental psychologists Stephen Kennerley and Matthew Rushworth and their colleagues at the University of Oxford taught rhesus monkeys to pull a lever to get food. After 25 trials, the researchers changed the rules, dispensing treats when the monkeys turned the lever instead of pulling it. The monkeys adapted and switched to turning the lever. After a while, the researchers changed the rules once more, and the monkeys again altered their behavior.
Each time the monkeys did not immediately switch actions, but did so only after a few false starts, using the previous four or five trials as a guide. After damage to the anterior cingulate, however, the animals lost this longer-term view and instead used only their most recent success or failure as a guide. Thus, the anterior cingulate seems to control an animal’s ability to evaluate a short history of hits and misses as a guide to future decisions.
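That short-history strategy can be sketched as a simple rule: tally hits and misses over the last few trials and switch actions when the recent success rate turns unfavorable. The window length and threshold below are illustrative assumptions, not values measured in the study:

```python
from collections import deque

# Sketch of the "short history" strategy described above: keep acting the
# same way while the recent record of rewards looks good, and switch only
# when the last few trials turn unfavorable.

def should_switch(outcomes, window=5, threshold=0.5):
    """Switch when the success rate over the last `window` trials falls
    below `threshold`. `outcomes` holds 1 (reward) or 0 (no reward)."""
    recent = list(outcomes)[-window:]
    return sum(recent) / len(recent) < threshold

history = deque([1, 1, 1, 1, 1], maxlen=5)
print(should_switch(history))  # recent record is good: keep current action

for outcome in (0, 0, 0):      # the rules change and rewards dry up
    history.append(outcome)
print(should_switch(history))  # only 2 hits in the last 5: time to switch
```

A lesioned animal, by contrast, behaves as if `window` were 1, flip-flopping on the basis of the single most recent outcome.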
Chemical Incentive
Such evaluations may depend on dopamine, which conveys success signals in the brain. Neurophysiologist Wolfram Schultz, now at the University of Cambridge, and his colleagues have shown over the past 15 years that dopamine-producing nerve cells alter their activity when a reward is either greater or less than anticipated. When a monkey is rewarded unexpectedly, say, for a correct response, the cells become excited, releasing dopamine, whereas their activity drops when the monkey fails to get a treat after an error. And because dopamine can stably alter the strength of connections between nerve cells, its differential release could promote learning from both successes and failures.
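Schultz’s observation is commonly formalized as a reward prediction error: dopamine activity tracks the difference between the reward received and the reward expected. A minimal sketch of that idea, in which the learning rate and reward values are illustrative assumptions rather than measurements:

```python
# Reward-prediction-error model of dopamine signaling (Rescorla-Wagner style).
# delta > 0 resembles a dopamine burst (better than expected);
# delta < 0 resembles a dip (worse than expected).

def update_expectation(expected, reward, learning_rate=0.1):
    """Return (prediction_error, new_expectation) for one trial."""
    delta = reward - expected           # dopamine-like error signal
    expected += learning_rate * delta   # expectation drifts toward experience
    return delta, expected

expected = 0.0
# Unexpected rewards at first: large positive errors that shrink as the
# reward becomes predictable.
for trial in range(5):
    delta, expected = update_expectation(expected, reward=1.0)
    print(f"trial {trial}: delta={delta:+.3f}, expected={expected:.3f}")

# Reward suddenly omitted (as after an error): a negative error, the dip.
delta, expected = update_expectation(expected, reward=0.0)
print(f"omission: delta={delta:+.3f}")
```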
Indeed, changes in dopamine levels may help to explain how we learn from positive as well as negative reinforcement. Dopamine excites the brain’s so-called Go pathway, which promotes a response while also inhibiting the action-suppressing “NoGo” pathway. Thus, bursts of dopamine resulting from positive reinforcement promote learning by both activating the Go channel and blocking NoGo. In contrast, dips in dopamine after negative outcomes should promote avoidance behavior by inactivating the Go pathway while releasing inhibition of NoGo.
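The Go/NoGo account above can be sketched as a simple update rule: a dopamine burst strengthens an action’s Go weight and weakens its NoGo weight, and a dip does the reverse. The starting weights, learning rate and clamping at zero are illustrative assumptions, not parameters from the studies described here:

```python
# Sketch of the Go/NoGo account: each action has a "Go" (approach) weight
# and a "NoGo" (avoid) weight, pushed in opposite directions by dopamine.

def go_nogo_update(go, nogo, dopamine_delta, rate=0.2):
    """Update one action's Go/NoGo weights from a dopamine signal."""
    go = max(0.0, go + rate * dopamine_delta)      # burst -> stronger Go
    nogo = max(0.0, nogo - rate * dopamine_delta)  # burst -> weaker NoGo
    return go, nogo

go, nogo = 0.5, 0.5
go, nogo = go_nogo_update(go, nogo, dopamine_delta=+1.0)  # rewarded outcome
print(go, nogo)  # Go strengthened, NoGo suppressed: action more likely

go, nogo = go_nogo_update(go, nogo, dopamine_delta=-1.0)  # error or omission
print(go, nogo)  # the dip reverses the change, favoring avoidance again
```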
In 2004 psychologist Michael J. Frank, then at the University of Colorado at Boulder, and his colleagues reported evidence for dopamine’s influence on learning in a study of patients with Parkinson’s disease, who produce too little of the neurotransmitter. Frank theorized that Parkinson’s patients may have trouble generating the dopamine needed to learn from positive feedback but that their low dopamine levels may facilitate training based on negative feedback.
In the study the researchers displayed pairs of symbols on a computer screen and asked 19 healthy people and 30 Parkinson’s patients to choose one symbol from each pair. The word “correct” appeared whenever a subject had chosen an arbitrarily correct symbol, whereas the word “incorrect” flashed after every “wrong” selection. No symbol was invariably correct or incorrect: in one pair, one symbol was deemed right 80 percent of the time and the other only 20 percent; for other pairs, the probabilities were 70:30 and 60:40. The subjects were expected to learn from this feedback and thereby increase the number of correct choices in later test runs.
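A sketch of how such a probabilistic feedback schedule behaves, paired with a simple feedback-driven learner. The greedy choice rule, learning rate and function names are illustrative assumptions, not the procedure Frank’s group actually used:

```python
import random

# Probabilistic feedback as described above: in each pair, the "better"
# symbol is called correct with some probability (80/20, 70/30, 60/40).

def feedback(chose_better, p_better=0.8, rng=random):
    """Return True ("correct") or False ("incorrect") for one trial."""
    p = p_better if chose_better else 1.0 - p_better
    return rng.random() < p

def run_pair(n_trials=200, p_better=0.8, rate=0.1, seed=1):
    """Let a simple learner track each symbol's value from feedback."""
    rng = random.Random(seed)
    value = {"better": 0.5, "worse": 0.5}   # learned value of each symbol
    for _ in range(n_trials):
        choice = max(value, key=value.get)  # greedy: pick higher-valued symbol
        correct = feedback(choice == "better", p_better, rng)
        value[choice] += rate * ((1.0 if correct else 0.0) - value[choice])
    return value

v = run_pair()
print(v)  # the 80 percent symbol ends up with the higher learned value
```

In Frank’s account, dopamine level biases whether the “correct” and “incorrect” feedback events carry equal weight in such an update, which is what separates the patients from the healthy controls in the results below.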
As expected, the healthy people learned to prefer the correct symbols and avoid the incorrect ones with about equal proficiency. Parkinson’s patients, on the other hand, showed a stronger tendency to reject negative symbols than to select the positive ones—that is, they learned more from their errors than from their hits, showing that the lack of dopamine did bias their learning in the expected way. In addition, the patients’ ability to learn from positive feedback outpaced that from negative feedback after they took medication that boosted brain levels of dopamine, underscoring the importance of dopamine in positive reinforcement.
Dopamine-based discrepancies in learning ability also appear within the healthy population. Last December, along with psychology graduate student Tilmann A. Klein and our colleagues, I showed that such variations are partly based on individual differences in a gene for the D2 dopamine receptor. A variant of this gene, called A1, results in up to a 30 percent reduction in the density of those receptors on nerve cell membranes.
We asked 12 males with the A1 variant and 14 males who had the more common form of this gene to perform a symbol-based learning test like the one Frank used. We found that A1 carriers were less able to remember, and avoid, the negative symbols than were the participants who did not have this form of the gene. The A1 carriers also avoided the negative symbols less often than they picked the positive ones. Noncarriers learned about equally well from the good and bad symbols.
Thus, fewer D2 receptors may impair a person’s ability to learn from mistakes or negative outcomes. (This molecular quirk is just one of many factors that influence such learning.) Accordingly, our fMRI results show that the medial frontal cortex of A1 carriers generates a weaker response to errors than it does in other people, suggesting that this brain area is one site at which dopamine exerts its effect on learning from negative feedback.
But if a shortage of D2 receptors impairs avoidance learning, why do drugs that boost dopamine signaling produce the same impairment in Parkinson’s patients? In both scenarios, dopamine signaling may, in fact, be increased through other dopamine receptors; research indicates that A1 carriers produce an unusually large amount of dopamine, perhaps as a way to compensate for their lack of D2 receptors. Whatever the reason, insensitivity to unpleasant consequences may contribute to the slightly higher rates of obesity, compulsive gambling and addiction among A1 carriers than in the general population.
Foreshadowing Faults
Although learning from mistakes may help us avoid future missteps, inexperience or inattention can still lead to errors. Many such goofs turn out to be predictable, however, foreshadowed by telltale changes in brain metabolism, according to research my team published in April in the Proceedings of the National Academy of Sciences USA.
Along with cognitive neuroscientist Tom Eichele of the University of Bergen in Norway and several colleagues, I asked 13 young adults to perform a flanker task while we monitored their brain activity using fMRI. Starting about 30 seconds before our subjects made an error, we found distinct but gradual changes in the activation of two brain networks.
One of the networks, called the default mode region, is usually more active when a person is at rest and quiets down when a person is engaged in a task. But before an error, the posterior part of this network—which includes the retrosplenial cortex, located near the center of the brain at the surface—became more active, indicating that the mind was relaxing. Meanwhile activity declined in areas of the frontal lobe that spring to life whenever a person is working hard at something, suggesting that the person was also becoming less engaged in the task at hand.
Our results show that errors are the product of gradual changes in the brain rather than unpredictable blips in brain activity. Such adjustments could be used to foretell errors, particularly those that occur during monotonous tasks. In the future, people might wear portable devices that monitor these brain states as a first step toward preventing mistakes where they are most likely to occur—and when they matter most.
Editor's Note: This story was originally published with the title "Minding Mistakes"
ABOUT THE AUTHOR(S)
Markus Ullsperger is a physician and head of the cognitive neurology research group at the Max Planck Institute for Neurological Research in Cologne, Germany.