
Century of Science: The science of us

Clashing approaches

One of the most infamous psychology experiments ever conducted involved a carefully planned form of child abuse. The study rested on a simple scheme that would never get approved or funded today. In 1920, two researchers reported that they had repeatedly startled an unsuspecting infant, who came to be known as Little Albert, to see if he could be conditioned like Pavlov’s dogs. The scientists viewed their laboratory fearfest as a step toward strengthening a branch of natural science able to predict and control the behavior of people and other animals.

No one could accuse the boy’s self-appointed trainers of lacking ambition or being sticklers for ethical research.

Pic of Watson
Psychologist John Watson, who led the Little Albert experiments, pioneered “behaviorism,” the study of people’s external reactions to specific sensations and situations. Granger

Psychologist John Watson of Johns Hopkins University and his graduate student Rosalie Rayner first observed that a 9-month-old boy, identified as Albert B., sat placidly when the researchers placed a white rat in front of him. In tests two months later, one of the researchers presented the rodent, and just as the child brought his hand to pet it, the other scientist stood behind Albert and clanged a metal rod with a hammer. Their goal: to see if a human child could be conditioned to associate an emotionally neutral white rat with a scary noise, just as Russian physiologist Ivan Pavlov had trained dogs to associate the meaningless clicks of a metronome with the joy of being fed.

Pavlov’s dogs slobbered at the mere sound of a metronome. Likewise, Little Albert eventually cried and recoiled at the mere sight of a white rat. The boy’s conditioned fear wasn’t confined to rodents. He got upset when presented with other furry things — a rabbit, a dog, a fur coat and a Santa Claus mask with a fuzzy beard.

Little Albert and rabbit
Infamous experiments conducted around a century ago conditioned an infant nicknamed Little Albert to fear furry animals and objects. J.B. Watson

Crucial details of the Little Albert experiment remain unclear or in dispute, such as who the child was, whether he had any neurological conditions and why the boy was removed from the experiment, possibly by his mother, before the researchers could attempt to reverse his learned fears. Also uncertain is whether he suffered any long-term effects from the ordeal.

Although experimental psychology originated in Germany in 1879, Watson’s notorious study foreshadowed a messy, contentious approach to the “science of us” that has played out over the past 100 years. Warring scientific tribes armed with clashing assumptions about how people think and behave have struggled for dominance in psychology and other social sciences. Some have achieved great influence and popularity, at least for a while. Others have toiled in relative obscurity. Competing tribes have rarely joined forces to develop or integrate theories about how we think or why we do what we do; such efforts don’t attract much attention.

In the late 1940s, psychologist Edward Tolman reported that rats — and probably people — construct “cognitive maps” of their environments during learning. Science News

But Watson, who had a second career as a successful advertising executive, knew how to grab the spotlight. He pioneered a field dubbed behaviorism, the study of people’s external reactions to specific sensations and situations. Only behavior counted in Watson’s science. Unobservable thoughts didn’t concern him.

Even as behaviorism took center stage — Watson wrote a best-selling book on how to raise children based on conditioning principles — some psychologists addressed mental life. In work published in 1948, American psychologist Edward Tolman concluded that rats learned the spatial layout of mazes by constructing a “cognitive map” of their surroundings. Beginning in the 1910s, Gestalt psychologists studied how we perceive wholes differently than the sum of their parts, such as, depending on your perspective, seeing either a goblet or the profiles of two faces in the foreground of a drawing.

Pic of Freud
Sigmund Freud’s ideas about unconscious conflicts, neuroses and psychoses relied on analyses of himself and his patients, not lab studies. Authenticated News/Getty Images

And starting at the turn of the 20th century, Sigmund Freud, the founder of psychoanalysis, exerted a major influence on the treatment of psychological ailments through his writings on topics such as unconscious conflicts, neuroses and psychoses. Freud’s often controversial ideas — consider the Oedipus complex and the death instinct — hinged on analyses of himself and his patients, not lab studies. Psychoanalytically inspired research came much later, exemplified by British psychologist John Bowlby’s work in the 1940s through ’60s on children’s styles of emotional attachment to their caregivers.

Bowlby’s findings appeared around the time that Freudian clinicians guided the drafting of the American Psychiatric Association’s first official classification system for mental disorders. Later editions of the psychiatric “bible” dropped Freudian concepts as unscientific. Dissatisfaction with the current manual, which groups ailments by sets of often overlapping symptoms, has motivated a growing line of research on how best to classify mental ailments.

Skinner and pigeons
Psychologist B.F. Skinner studied how rewards and punishments shape new behaviors in pigeons and other animals. Science History Images/Alamy Stock Photo

Shortly after Freud’s intellectual star rose, so did that of a Harvard University psychologist named B.F. Skinner. Skinner could trace his academic lineage back to John Watson’s behaviorism. By placing rats and pigeons in conditioning chambers known as Skinner boxes, Skinner studied how the timing and rate of rewards or punishments affect animals’ ability to learn new behaviors. He found, for instance, that regular rewards speed up learning, whereas intermittent rewards produce behavior that’s hard to extinguish in the lab.

Skinner regarded human behavior as resulting from past patterns of reinforcement, which in his view rendered free will an illusion. In his 1948 novel Walden Two, Skinner imagined a post-World War II utopian community in which rewards were doled out to produce well-behaved members.

Skinner’s ideas, and behaviorism in general, lost favor by the late 1960s. Scientists began to entertain the idea that computations, or statistical calculations, in the brain might enable thinking.

Some psychologists suspected that human judgments relied on faulty mental shortcuts rather than computer-like data crunching. Research on allegedly rampant flaws in how people make decisions individually and in social situations shot to prominence in the 1970s and remains popular today. In the last few decades, an opposing line of research has reported that instead, people render good judgments by using simple rules of thumb tailored to relevant situations.

Starting in the 1990s, the science of us branched out in new directions. Progress has been made in studying how emotional problems develop over decades, how people in non-Western cultures think and why deaths linked to despair have steadily risen in the United States. Scientific attention has also been redirected to finding new, more precise ways to define mental disorders.

No unified theory of mind and behavior unites these projects. For now, as social psychologists William Swann of the University of Texas at Austin and Jolanda Jetten of the University of Queensland in Australia wrote in 2017, perhaps scientists should broaden their perspectives to “witness the numerous striking and ingenious ways that the human spirit asserts itself.”
— Bruce Bower


Revolution and rationality

Today’s focus on studying people’s thoughts and feelings as well as their behaviors can be traced to a “cognitive revolution” that began in the mid-20th century.

The rise of increasingly powerful computers motivated the idea that complex programs in the brain guide “information processing” so that we can make sense of the world. These neural programs, or sets of formal rules, provide frameworks for remembering what we’ve done, learning a native language and performing other mental feats, a new breed of cognitive and computer scientists argued.

SN coverage of brain computer link
A “cognitive revolution” in the mid-20th century, as reported in Science News, championed the idea that the brain processes the world like a computer, following sets of formal rules. Science News

Economists adapted the cognitive science approach to their own needs. They were already convinced that individuals calculate costs and benefits of every transaction in the most self-serving ways possible — or should do so but can’t due to human mental limitations. Financial theorists bought into the latter argument and began creating cost-benefit formulas for investing money that are far too complex for anyone to think up, much less calculate, on their own. Economist Harry Markowitz won the 1990 Nobel Memorial Prize in economic sciences for his set of mathematical rules, introduced in 1952, for allocating an investor’s money among different assets, with more cash going to better and safer bets.

But in the 1970s, psychologists began conducting studies documenting that people rarely think according to rational rules of logic beloved by economists. Psychologists Daniel Kahneman of Princeton University, who received the Nobel Memorial Prize in economic sciences in 2002, and Amos Tversky of Stanford University founded that area of research, at first called heuristics (meaning mental shortcuts) and biases.

Kahneman and Tversky’s demonstrations of seemingly uncontrollable, irrational thinking struck a chord with scientists and the broader culture. In one experiment, participants given a description of a single, outspoken, politically active woman were more likely to deem her a bank teller who is active in the feminist movement than simply a bank teller. But the probability of both being true is less than the probability of either one alone. So based on this classic logical formula, which treats as irrelevant the social context that people typically use to categorize others, the participants were wrong.
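The rule at work in that bank-teller example is the conjunction rule of probability: two things being true together can never be more likely than either one alone. Here is a minimal sketch in Python; the numbers are assumptions chosen for illustration, not figures from Kahneman and Tversky’s study.

```python
# Minimal sketch of the conjunction rule behind the bank-teller example.
# The joint probability of two events can never exceed either one alone:
# P(teller and feminist) = P(teller) * P(feminist | teller) <= P(teller).
# The values below are illustrative assumptions, not data from the study.
p_teller = 0.05                   # assumed chance the woman is a bank teller
p_feminist_given_teller = 0.60    # assumed chance she is a feminist, given that
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller         # holds for any probabilities between 0 and 1
print(p_teller, round(p_both, 3))  # 0.05 0.03
```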

Tversky and Kahneman
Starting in the 1970s, psychologists Amos Tversky, left, and Daniel Kahneman, right, led a popular line of research aimed at uncovering ways in which human decisions go awry. Courtesy of Barbara Tversky

Kahneman and Tversky popularized the notion that decision makers rely on highly fallible mental shortcuts that can have dire consequences. For instance, people bet themselves into bankruptcy at blackjack tables based on what they easily remember — big winners — rather than on the vast majority of losers. University of Chicago economist Richard Thaler applied that idea to the study of financial behavior in the 1980s. He was awarded the 2017 Nobel Memorial Prize in economic sciences for his contributions to the field of behavioral economics, which incorporated previous heuristics and biases research. Thaler has championed the practice of nudging, in which government and private institutions find ways to prod people to make decisions deemed to be in their best interest.


Better to nudge, behavioral economists argue, than to leave people to their potentially disastrous mental shortcuts. Nudges have been used, for instance, to enroll employees automatically in retirement savings plans unless they opt out. That tactic is aimed at preventing delays in saving money during prime work years that lead to financial troubles later in life.

Another nudge tactic has attempted to reduce overeating of sweets and other unhealthy foods, and perhaps rising obesity rates as well, by redesigning cafeterias and grocery stores so that vegetables and other nutritious foods are easiest to see and reach.

As nudging gained in popularity, Kahneman and Tversky’s research also stimulated the growth of an opposing research camp, founded in the 1990s by psychologist Gerd Gigerenzer, now director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Gigerenzer and his colleagues study simple rules of thumb that, when geared toward crucial cues in real-world situations, work remarkably well for decision making. Their approach builds on ideas on decision making in organizations that won economist Herbert Simon the 1978 Nobel Memorial Prize in economic sciences.

Grocery store pic
Simple nudges can help people, who often think irrationally, to make better choices, according to one school of thought. Based on this idea, some grocery stores have been redesigned so that vegetables and other nutritious foods are easiest to see and reach. Rob Maxwell/Unsplash

In the real world, people typically possess limited information and have little time to make decisions, Gigerenzer argues. Precise risks can’t be known in advance or calculated based on what’s happened in the past because many interacting factors can trigger unexpected events in, for example, one’s life or the world economy. Amid so much uncertainty, simple but powerful decision tactics can outperform massive number-crunching operations such as Markowitz’s investment formula. Using 40 years of U.S. stock market data to predict future returns, one study found that simply distributing money evenly among either 25 or 50 stocks usually yielded more money than 14 complex investment strategies, including Markowitz’s.

When there’s a lot of uncertainty, such as with the stock market, simple rules of thumb can outperform more complex decision-making strategies. xPACIFICA/Getty Images

Unlike Markowitz’s procedure, dividing funds equally among diverse buys spreads out investment risks without mistaking accidental and random financial patterns in the past for good bets.
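To make that comparison concrete, the Python sketch below pits the 1/N rule against a minimum-variance portfolio computed from an estimated covariance matrix, used here as a simplified stand-in for Markowitz-style optimization. The simulated returns are purely illustrative, not the 40 years of market data used in the study mentioned above.

```python
# Minimal sketch: the 1/N heuristic versus a minimum-variance portfolio
# (a simplified stand-in for Markowitz-style optimization, not the exact
# strategies tested in the study cited above). All returns are simulated.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.01, 0.05, size=(120, 25))  # 120 months, 25 hypothetical stocks
n_assets = returns.shape[1]

# 1/N heuristic: split money evenly, nothing to estimate or tune.
w_equal = np.full(n_assets, 1.0 / n_assets)

# Minimum-variance weights from the sample covariance; estimation error in
# that covariance is what can make the "smarter" portfolio fragile.
cov = np.cov(returns, rowvar=False)
inv_cov = np.linalg.inv(cov)
ones = np.ones(n_assets)
w_minvar = inv_cov @ ones / (ones @ inv_cov @ ones)

for name, w in [("1/N", w_equal), ("min-variance", w_minvar)]:
    portfolio = returns @ w
    print(f"{name}: mean={portfolio.mean():.4f}, std={portfolio.std():.4f}")
```

The design point is that the heuristic requires no estimation at all, which is precisely why it sidesteps the errors that creep into the covariance estimate when past data are noisy.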

Gigerenzer and other investigators of powerful rules of thumb emphasize public education in statistical literacy and effective thinking strategies over nudging schemes. Intended effects of nudges are often weak and short-lived, they contend. Unintended effects can also occur, such as regrets over having accepted the standard investment rate in a company’s savings plan because it turns out to be too low for one’s retirement needs. “Nudging people without educating them means infantilizing the public,” Gigerenzer wrote in 2015.
— Bruce Bower



Doing harm

As studies of irrational decision making took off around 50 years ago, so did a field of research with especially troubling implications. Social psychologists put volunteers into experimental situations that, in their view, exposed a human weakness for following the crowd and obeying authority. With memories of the Nazi campaign to exterminate Europe’s Jews still fresh, two such experiments became famous for showing the apparent ease with which people abide by heinous orders and abuse power.

SN Milgram coverage
Stanley Milgram’s experiments, reported in Science News above, appeared to show that people are often willing to physically hurt others badly when commanded to do so, though researchers have since questioned those conclusions. Science News

First, Yale psychologist Stanley Milgram reported in 1963 that 65 percent of volunteers obeyed an experimenter’s demands to deliver what they thought were increasingly powerful and possibly lethal electric shocks to an unseen person — who was actually working with Milgram — as punishments for erring on word-recall tests. This widely publicized finding appeared to unveil a frightening willingness of average folks to carry out the commands of evil authorities.

A disturbing follow-up to Milgram’s work was the 1971 Stanford Prison Experiment, which psychologist Philip Zimbardo halted after six days due to escalating chaos among participants. Male college students assigned to play guards in a simulated prison had increasingly abused mock prisoners, stripping them naked and denying them food. Student “prisoners” became withdrawn and depressed.

Zimbardo argued that extreme social situations, such as assuming the role of a prison guard, overwhelm self-control. Even mild-mannered college kids can get harsh when clad in guards’ uniforms and turned loose on their imprisoned peers, he said.

Milgram ad
Stanley Milgram recruited participants, who thought they were taking part in a memory experiment, through newspaper ads. Granger

Milgram’s and Zimbardo’s projects contained human drama and conflict that had widespread, and long-lasting, public appeal. A 1976 made-for-television movie based on Milgram’s experiment, titled The Tenth Level, starred William Shatner — formerly Captain Kirk of Star Trek. Books examining the contested legacy of Milgram’s shock studies continue to draw readers. A 2010 movie inspired by the Stanford Prison Experiment, simply called The Experiment, starred Academy Award winners Adrien Brody and Forest Whitaker.

Despite the lasting cultural impact of the obedience-to-authority and prison experiments, some researchers have questioned Milgram’s and Zimbardo’s conclusions. Milgram conducted 23 obedience experiments, although only one was publicized. Overall, volunteers usually delivered the harshest shocks when encouraged to identify with Milgram’s scientific mission to understand human behavior. No one followed the experimenter’s order, “You have no other choice, you must go on.”

Indeed, people who follow orders to harm others are most likely to do so because they identify with a collective cause that morally justifies their actions, argued psychologists S. Alexander Haslam of the University of Queensland and Stephen Reicher of the University of St. Andrews in Scotland 40 years after the famous obedience study. Rather than blindly following orders, Milgram’s volunteers cooperated with an experimenter when they viewed participation as scientifically important — even if, as many later told Milgram, they didn’t want to deliver shocks and felt bad about doing so afterward.

Milgram experiment pic
Volunteers in Stanley Milgram’s “obedience to authority” experiments thought they were giving shocks to “learners,” such as the seated man here, who were actually working with Milgram. © 1968 BY STANLEY MILGRAM, © RENEWED 1993 BY ALEXANDRA MILGRAM. FROM THE FILM OBEDIENCE DISTRIBUTED BY ALEXANDER STREET PRESS

Data from the 1994 ethnic genocide in the African nation of Rwanda supported that revised take on Milgram’s experiment. In a 100-day span, members of Rwanda’s majority Hutu population killed roughly 800,000 ethnic Tutsis. Researchers who later examined Rwandan government data on genocide perpetrators estimated that only about 20 percent of Hutu men and a much smaller percentage of Hutu women seriously injured or killed at least one person during the bloody episode. Many of those who did were ideological zealots or sought political advancement. Other genocide participants thought they were defending Rwanda from enemies or wanted to steal valuable possessions from Tutsi neighbors.

But most Hutus rejected pressure from political and community leaders to join the slaughter.

Historical pic of experiment
College students who assumed the roles of guards, left, and prisoners, right, in the 1971 Stanford Prison Experiment descended into a chaotic situation within a few days. Zimbardo, Philip G./Stanford University

Neither did Zimbardo’s prisoners and guards passively accept their assigned roles. Prisoners at first challenged and rebelled against guards. When prisoners learned from Zimbardo that they would have to forfeit any money they’d already earned if they left before the experiment ended, their solidarity plummeted, and the guards crushed their resistance. Still, a majority of guards refused to wield power tyrannically, favoring tough-but-fair or friendly tactics.

In a second prison experiment conducted by Haslam and Reicher in 2001, guards were allowed to develop their own prison rules rather than being told to make prisoners feel powerless, as Zimbardo had done. In a rapid chain of events, conflict broke out between one set of guards and prisoners who formed a communal group that shared power and another with guards and prisoners who wanted to institute authoritarian rule. Morale in the communal group sank rapidly. Haslam stopped the experiment after eight days. “It’s the breakdown of groups and resulting sense of powerlessness that creates the conditions under which tyranny can triumph,” Haslam concluded.

Prisoners at a table
A 2002 prison experiment conducted in England resulted in some guards and prisoners forming a communal group and others forming an authoritarian group. A. Haslam, BBC

Milgram’s and Zimbardo’s experiments set the stage for further research alleging that people can’t control certain harmful attitudes and behaviors. A test of the speed with which individuals identify positive or negative words and images after being shown white and Black faces has become popular as a marker of unconscious racial bias. Some investigators regard that test as a window into hidden prejudice — and implicit bias training has become common in many workplaces. But other scientists have challenged whether it truly taps into underlying bigotry. Likewise, stereotype threat, the idea that people automatically act consistently with negative beliefs about their race, sex or other traits when subtly reminded of those stereotypes, has also attracted academic supporters and critics.
— Bruce Bower


Diagnostic disarray

If the Stanford Prison Experiment left volunteers emotionally frazzled, thumbing through the official manual of psychiatric diagnoses for guidance would only have further confused them. Since its introduction nearly 70 years ago, psychiatry’s “bible” of mental disorders has created an unholy mess for anyone trying to define and understand mental ailments.

DSM manual cover
Early versions of the Diagnostic and Statistical Manual of Mental Disorders, including the first one shown here, leaned on psychoanalytic ideas, such as dividing ailments into less-severe neuroses and more-severe psychoses. American Psychiatric Association

From 1952 to 1980, the Diagnostic and Statistical Manual of Mental Disorders, or DSM, leaned on psychoanalytic ideas. Ailments not caused by a clear brain disease were divided into those involving less-debilitating neuroses and more-debilitating psychoses. Other conditions were grouped under psychosomatic illnesses, personality disorders, and brain or nervous system problems.

Growing frustration with the imprecision of DSM labels, including those for psychiatric disorders such as schizophrenia and depression, led to a major revision of the manual in 1980. Titled DSM-III, this guidebook consisted of an expanded number of mental disorders, defined by official teams of psychiatrists as sets of symptoms that regularly occurred together. Architects of DSM-III wanted psychiatry to become a biological science. Their ascendant movement emphasized medications over psychotherapy for treating mental disorders.

But diagnostic confusion still reigned as the American Psychiatric Association published new variations on the DSM-III theme. Many symptoms characterized more than one mental disorder. For instance, psychotic delusions could occur in schizophrenia, bipolar disorder or other mood disorders. People receiving mental health treatment typically got diagnosed with two or more conditions, based on symptoms. Given so much overlap in disorders’ definitions, clinicians often disagreed on which DSM-III label best fit individuals in clear distress.

By the time DSM-5 appeared in 2013, an organized scientific rebellion was underway. In 2010, the National Institute of Mental Health in Bethesda, Md., launched a decade-plus project to fund research on alternative ways to define mental disorders, based on behavioral and brain measures. This move was welcomed by investigators who had long argued that DSM personality disorders should be rated on a sliding scale from moderate to severe, using measures of a handful of personality traits. Psychiatrists distinguish mental disorders such as depression and schizophrenia from personality disorders, which include narcissism and antisocial behavior.

St. Elizabeths pic
As standardized definitions of mental disorders gained ground in the 1950s, large psychiatric facilities such as St. Elizabeths Hospital in Washington, D.C., still treated many cases of severe mental illness. Library of Congress

Pioneering studies in New Zealand, the United States and Switzerland that tracked children into adulthood also suggested that definitions of mental disorders needed a big rethink. Glaringly, almost everyone in those investigations qualified for temporary or, less frequently, long-lasting mental disorders at some point in their lives. Only about 17 percent of New Zealanders who grew up in Dunedin stayed mentally healthy from age 11 to 38, for example. Those who managed that feat usually possessed advantageous personality traits from childhood on. People who in childhood rarely displayed strongly negative emotions, had lots of friends and showed superior self-control — but not necessarily an exceptional sense of well-being — stood out as Kiwis who avoided mental disorders. But those same people did not always report being satisfied with their lives as adults.

In 2014, researchers involved in the Dunedin project released a self-report questionnaire aimed at measuring an individual’s susceptibility to mental illness in general. Symptoms from the vast array of DSM mental disorders are folded into a single score. This assessment of “general psychopathology,” called p for short, parallels the g score of general intelligence derived from IQ tests.

Studies of how best to measure p are still in the early stages. A p score is thought to reflect a person’s “internalizing” liability to develop anxiety and mood disorders, an “externalizing” liability, such as to abuse drugs and break laws, and a propensity to delusions and other forms of psychotic thinking.
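As a rough illustration of what folding many symptom ratings into one number can look like, the sketch below scores simulated symptom data on their first principal component. This is only an illustrative stand-in built on made-up data; the actual p research relies on formal factor models fit to real clinical assessments.

```python
# Rough illustration of collapsing many symptom ratings into a single
# summary score, using the first principal component as a stand-in for the
# formal factor models behind p. All data here are simulated.
import numpy as np

rng = np.random.default_rng(1)
symptoms = rng.integers(0, 4, size=(500, 30)).astype(float)  # 500 people, 30 hypothetical items

centered = symptoms - symptoms.mean(axis=0)      # remove each item's average rating
_, _, vt = np.linalg.svd(centered, full_matrices=False)
p_scores = centered @ vt[0]                      # one general score per person

print(p_scores.shape)                            # (500,)
```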

The goal is to develop a p score that estimates a liability to DSM disorders based on a range of risk factors, including having experienced past child abuse or specific brain disturbances. If researchers eventually climb that mountain, they can try using p scores to evaluate how well psychotherapies and psychoactive medications treat and prevent mental disorders.
— Bruce Bower



Anthropologists have lived among and observed other cultures since the mid-1800s. At least into the early 20th century, hunter-gatherers and members of other small-scale societies were described as living in a “primitive” state or as “savages” divorced from what was regarded as the advanced thinking of people in modern civilizations.

In the early 1900s, anthropologist Franz Boas launched an opposing school of thought. Human cultures teach people to interact with the world in locally meaningful and helpful ways, Boas argued. He thus rejected any ranking of cultures from primitive to advanced.

Margaret Mead pic
Rather than ranking cultures from primitive to advanced, Margaret Mead, shown here with children on Manus Island in Papua New Guinea, emphasized commonalities among cultures. Fotosearch/Getty Images

Following that lead, anthropologist and former Boas student Margaret Mead emphasized commonalities that underlie cultural differences among populations. Mead’s 1928 book about her observations of Samoans controversially argued that casual sexuality and other features of their culture enabled a smoother adolescence for Samoan girls than what American teens experience. Controversy over Mead’s findings and her elevation of nurture over nature as having the most influence over a person’s development — a rebuke of then-popular eugenic ideas — lasted for decades.

Eugenicists believed that selective breeding among members of groups with desirable traits that were considered largely genetic, including intelligence and good physical health, would improve the quality of humankind. Thus, eugenicists controversially advocated preventing reproduction among people with mental or physical disabilities, criminals and members of disfavored racial and minority groups.

As the 20th century wound down, Mead’s cross-cultural focus reasserted itself among a school of social scientists that deemed it critical to conduct research outside societies dubbed WEIRD, short for Western, educated, industrialized, rich and democratic.

Economists’ cherished assumption that people are naturally selfish, based on personal convictions far more than on evidence, took a hard fall when investigators studied sharing in and outside the WEIRD world. Cultural standards of fairness, driven by a general desire to cooperate and share, determined how individuals everywhere, including the United States, divvied up money or other valuables in experimental games, researchers found.

Scientist and Hadza pic
Experimental sharing games conducted between 2010 and 2016 with Hadza hunter-gatherers in Tanzania were inspired by a new research focus on how people think in traditional, small-scale societies. Eduardo Azevedo

A path-breaking project conducted experimental games with pairs of people from hunter-gatherer groups, herding populations and other small-scale societies around the world. In those transactions, one person could give any part of a sum of money or other valuables to a biologically unrelated partner. The partner could accept the offer or turn it down, leaving both players with nothing.

Members of societies that bargained and bartered a lot often split the experimental pot nearly evenly. Offers fell to 25 percent of the pot or less in communities consisting of relatively isolated families. Players on the receiving end in most societies frequently accepted low offers.
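The payoff rule of that one-shot game is simple enough to express in a few lines of code. The Python sketch below is a generic ultimatum-style game with made-up stakes, not the researchers’ exact protocol or data.

```python
# Minimal sketch of the one-shot sharing game described above (an
# ultimatum-style game). Stakes and offers are illustrative assumptions.
def play_round(pot: float, offer: float, accept: bool) -> tuple[float, float]:
    """Return (proposer payoff, responder payoff) for one round."""
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if accept:
        return pot - offer, offer   # accepted: the pot is split as proposed
    return 0.0, 0.0                 # rejected: both players walk away with nothing

print(play_round(10.0, 5.0, accept=True))  # near-even split, common in market-integrated groups
print(play_round(10.0, 2.0, accept=True))  # low offer accepted, as reported in more isolated groups
```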

Cross-cultural research in the past few decades suggests that a willingness to deal fairly with strangers expanded greatly over the past 10,000 years. The growth of market economies, in which people purchased food rather than hunting or growing it, encouraged widespread interest in making fair deals with outsiders. So did the replacement of local religions with organized religions, such as Christianity and Islam, that require believers to treat others as they would want others to treat them.

Hadza smoking bees
Hadza hunter-gatherers in Tanzania gather honey by smoking bees out of their nests. Studies there are helping to offer a cross-cultural perspective on how people share resources. Robin Hammond/Panos Pictures/Redux

Cross-cultural research has now shifted toward studying how groups shape the willingness to share. For example, Africa’s Hadza hunter-gatherers live in camps that have a range of standards about how much food to share with strangers. In experimental cooperation games, Hadza individuals who circulated among camps adjusted the number of honey sticks they contributed to a communal pot based on whether their current camp favored sharing a lot or a little.
— Bruce Bower


Lives and life spans

It has taken a public health crisis to stimulate a level of cooperation across disciplines within and outside the social sciences rarely reached in the past century. Life spans of Americans have declined in recent years, fueled by drug overdoses and other “deaths of despair” among poor and working-class people plagued by job losses and dim futures.

This deadly turning point followed a long stretch of increasing longevity. Throughout the 20th century, average life expectancy at birth in the United States increased from about 48 to 76 years. By mid-century, scientists had tamed infectious diseases that hit children especially hard, such as pneumonia and polio, in no small part due to public health innovations including vaccines and antibiotics. Public health efforts starting in the 1960s, including preventive treatments for heart disease and large reductions in cigarette smoking, helped to lengthen adults’ lives.

But at the end of the 20th century, U.S. life spans reversed course. Economists, psychologists, psychiatrists, sociologists, epidemiologists and physicians have begun to explore potential reasons for recent longevity losses, with an eye toward stemming a rising tide of early deaths.

Two Princeton University economists, Anne Case and Angus Deaton, highlighted this disturbing trend in 2015. After combing through U.S. death statistics, Case and Deaton observed that mortality rose sharply among middle-aged, non-Hispanic white people starting in the late 1990s. In particular, white, working-class people ages 45 to 54 were increasingly drinking themselves to death with alcohol, succumbing to opioid overdoses and committing suicide.

Job losses that resulted as mining declined and manufacturing plants moved offshore, high health care costs, disintegrating families and other stresses rendered more people than ever susceptible to deaths of despair, the economists argued. On closer analysis, they found that a similar trend had stoked deaths among inner-city Black people in the 1970s and 1980s.

Psychologists and other mental health investigators took note.

If Case and Deaton were right, then researchers urgently needed to find a way to measure despair. Two big ideas guided their efforts. First, don’t assume depression or other diagnoses correspond to despair. Instead, treat despair as a downhearted state of mind. Tragic life circumstances beyond one’s control, from sudden unemployment to losses of loved ones felled by COVID-19, can trigger demoralization and grief that have nothing to do with preexisting depression or any other mental disorder.

Second, study people throughout their lives to untangle how despair develops and prompts early deaths. It’s reasonable to wonder, for instance, if opioid addiction and overdoses more often afflict young adults who have experienced despair since childhood, versus those who first faced despair in the previous year.

One preliminary despair scale consists of seven indicators of this condition, including feeling hopeless and helpless, feeling unloved and worrying often. In a sample of rural North Carolina youngsters tracked into young adulthood, this scale has shown promise as a way to identify those who are likely to think about or attempt suicide and to abuse opioids and other drugs.

Deaths of despair belong to a broader public health and economic crisis, concluded a 12-member National Academies of Sciences, Engineering and Medicine committee in 2021. Since the 1990s, drug overdoses, alcohol abuse, suicides and obesity-related conditions caused the deaths of nearly 6.7 million U.S. adults ages 25 to 64, the committee found.

Deaths from those causes hit racial minorities and working-class people of all races especially hard from the start. The COVID-19 pandemic further inflamed that mortality trend because people with underlying health conditions were especially vulnerable to the virus.


Perhaps findings with such alarming public health implications can inform policies that go viral, in the best sense of that word. Obesity-prevention programs for young people, expanded drug abuse treatment and stopping the flow of illegal opioids into the United States would be a start.

Whatever the politicians decide, the science of us has come a long way from Watson and Rayner instilling ratty fears in an unsuspecting infant. If Little Albert were alive today, he might smile, no doubt warily, at researchers working to extinguish real-life anguish.
— Bruce Bower


From the archive

  • What babies see

    Developmental psychologist Jean Piaget contends that babies see the world as a series of pictures that have no reality after passing out of sight.

  • Pigeons Play Ping-Pong

    In early conditioning experiments, psychologist B.F. Skinner trains pigeons to play table tennis and peck out simple tunes on a seven-key piano.

  • LSD Helps Severely Disturbed Children

    The psychedelic drug LSD shows promise as a treatment for severely disturbed children who cannot speak or relate to others, UCLA researchers say. Psychedelics still draw scientific interest as possible treatments today.

  • A controversial claim in the IQ debate

    Psychologist Arthur Jensen argues that heredity largely explains individual, social class and racial differences in IQ scores. Ironically, the work helps inspire research on how growing up in a wealthy family and other environmental advantages boost IQs.

  • Hope for people with schizophrenia

    Intensive job and psychological rehabilitation after release from the hospital leads to marked improvement in many people with schizophrenia 10 to 20 years later. But psychoactive medications remain the primary treatment.

  • Objective Visions

    Historians tracked how notions of scientific objectivity and its usefulness have changed over the past few centuries, informing a debate about what scientists can know about the world.

  • Psychology’s Tangled Web

    Science News writer Bruce Bower explores the long-running debate over whether psychologists should use deceptive methods in the name of science.

  • 9/11’s Fatal Road Toll

    Fear of flying after the airborne terrorist attacks of 9/11 led to excess deaths in car crashes on U.S. roads during the last three months of 2001.

  • Night of the Crusher

    Researchers suspect that a strange type of waking nightmare called sleep paralysis, which includes a sensation of chest compression, helps to explain worldwide beliefs in evil spirits and ghosts.

  • Pi master’s storied recall

    A man who recited more than 60,000 decimals of pi from memory revealed the power of practice and storytelling for his world-record feat.

  • Hallucinated voices’ attitudes vary with culture

    Depending on whether they live in the United States, India or Ghana, people with schizophrenia hear voices that are either hostile or soothing, suggesting that cultural expectations help produce some schizophrenia symptoms.
