The Guest Alternate View

Baby Talk

By Richard A. Lovett

Not all that long ago, the idea of talking to your unborn baby sounded silly. Sure, people have undoubtedly done so since the dawn of humanity, telling their unborn children how they will grow up strong, wise, talented, or into great leaders who will save the world (or at least the tribe). But that’s more like prophecy than communication.

It turns out, however, that unborn children can actually hear what we say. In 1994, a team from Queen’s University of Belfast, Northern Ireland, placed speakers on mothers’ abdomens and played sounds, ranging from 100 Hz (a pleasant, low-frequency hum) to 3,000 Hz (an irritating, high-pitched whine). If the baby moved in response to the sound, the researchers figured it heard it. If not, they figured it hadn’t.1

It wasn’t the most delicate of methodologies—basically akin to banging on an animal’s cage in the hope of getting a reaction—but it proved that as early as nineteen weeks’ gestation, a developing fetus can hear tones in the middle of the range—500 Hz, roughly a musical B—and that not long afterward, it can hear a much wider range of sounds.

Nor do these sounds have to be all that loud for the fetus to hear them. In 1992, researchers at the University of Florida inserted hydrophones (underwater microphones) into the uteruses of pregnant mothers (with permission from the mothers, of course) and eavesdropped on the sounds reaching the developing babies.

Not surprisingly to anyone who’s ever covered their ears to better hear their own voice, the researchers found that the mother’s voice was 5.2 decibels louder in the womb than outside it. But external sounds also reached the developing baby easily enough, with male and female voices being attenuated by only 2.1 and 3.2 decibels, respectively. In other words, if the mother can hear something, so can the baby.
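For a sense of scale, those decibel figures translate into fairly modest changes in sound pressure. The short Python sketch below is purely illustrative and assumes the published numbers are ordinary sound-pressure-level differences; it simply converts them into pressure ratios.

```python
# Illustrative only: convert the decibel differences reported by the 1992
# hydrophone study into sound-pressure ratios. Assumes the figures are
# plain sound-pressure-level differences (a simplification).

def db_to_pressure_ratio(level_change_db: float) -> float:
    """A level change of `level_change_db` decibels corresponds to this pressure ratio."""
    return 10 ** (level_change_db / 20)

# Mother's own voice: +5.2 dB inside the womb relative to outside.
print(f"Mother's voice: x{db_to_pressure_ratio(5.2):.2f} the outside pressure")

# External voices: attenuated by 2.1 dB (male) and 3.2 dB (female).
print(f"Male voice reaching the fetus:   {db_to_pressure_ratio(-2.1):.0%} of its outside level")
print(f"Female voice reaching the fetus: {db_to_pressure_ratio(-3.2):.0%} of its outside level")
```

Run it and the external voices come out at roughly 70–80% of their outside sound pressure, which is the quantitative version of the point above: a couple of decibels is a barely noticeable difference.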

At the simplest level, what this means is that third-trimester women might want to steer clear of super-loud activities like rock concerts, loading baggage on jetliners, or working with jackhammers, because even if the mother wears hearing protection, that won’t help the baby. But quieter sounds might actually be more important, because voices reaching the developing baby’s ears might play a major role in jump-starting the growth of its hearing and, ultimately, its brain’s ability to process speech.2

*   *   *

All of that comes from decades-old technology. Today, a technique called fetal magnetoencephalography (fMEG) can directly measure the brain activity of developing fetuses as they listen to sounds—a big step up from beaming noises at them and hoping they’ll twitch in response. Using this technique in 2018, a team of German researchers exposed the fetuses of 55 women, 31 to 40 weeks pregnant, to pure tones and short bursts of the type that ultimately help us learn to distinguish bat from pat, cat, fat, hat, mat, gnat, sat, vat, etcetera.3 The study buried its findings in acoustical jargon, but the bottom line, says Brian Monson, an acoustic physicist at the University of Illinois at Urbana-Champaign, is that unborn children are very definitely learning about language many weeks before they are born. “So, watch what you say,” he quips.

Monson is joking, but expectant mothers really can find websites urging them to read baby books to their unborn children as a way of spurring speech development. There are even suggestions to read them Shakespeare or to learn a second language in order to introduce the unborn child to its sounds and rhythms.

I myself am dubious about trying to jump-start a baby’s education in the womb. Shakespeare was challenging enough in high school. I doubt I would have found him easier—or pre-learned any of the important lessons found in his works—if my mother had spent her pregnancy reciting things like, “The fool doth think he is wise, but the wise man knows himself to be a fool.” Sure, that’s a stunning encapsulation of what is now known as the Dunning-Kruger effect, written centuries before social psychologists David Dunning and Justin Kruger published their better-researched version.4 But to an unborn baby, it’s hard to see how it’s more important than “Twinkle, Twinkle, Little Star.” The issue, Monson says, isn’t so much what is being said as how language exposure in the womb may affect brain and auditory development later in life.

Monson’s own research began by asking a dozen pregnant women to wear audio recorders for twenty-four hours at a time, twice a week during their third trimesters. He then ran the resulting 6,000 hours of recordings through a computer program that determined how much time each day the baby was exposed to language, reporting his results at a 2019 meeting of the Acoustical Society of America in Louisville, Kentucky.

The average mother, he found, was chatty (or lived and worked in a chatty environment), exposing her baby to a full five hours of language per day. “That translates to about 24,000 words per day,” he says. That works out to roughly eighty words a minute across those five hours. But even in his pilot study (which he eventually hopes to expand to 100 women), exposure varied by a factor of two.

For privacy reasons, Monson says, there was no attempt to transcribe what the unborn babies were hearing: the objective was simply to distinguish “speech” from “not-speech.”

Monson’s primary goal is to assist doctors and nurses in neonatal intensive care units (neonatal ICUs), who are working with babies that, if they had not been born prematurely, would still be secure in the womb. Other researchers have found that excessive noise in neonatal ICUs is harmful,5 but Monson’s research suggests that it may also be important to ensure that premature babies are exposed to the same amounts of speech they would normally have heard before birth. That is, it may be important for someone to talk to them—not just occasionally, but a lot.

But Monson also plans to test each of his study’s babies—as well as the dozens more he hopes to add in the future—at least three times after birth, starting at three months, then again at two and five years, in order to see if there is any correlation between the amount of language they hear in utero and their subsequent development.

Depending on his results, expectant mothers might someday be using Siri and Alexa to track the amount of speech they (and their babies) are exposed to, just as health-conscious people today use smartphone apps as step counters in an effort to get in the recommended ten thousand paces per day. Expose your baby to less than some magic number of words, and you’re in the red zone: not so good. Talk more, and you move into orange, yellow, and eventually green zones, with on-screen cheers and digital fireworks as you pass each benchmark. If you’re too busy, maybe you can paste a thin, bandage-like Bluetooth speaker to your belly and have the app talk directly to your baby.
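Purely as an illustration of the kind of feedback such an app might give, here is a minimal sketch of the zone logic in Python. The daily word-count thresholds are invented for the example; no study has yet established a real target.

```python
# Hypothetical sketch of the "zone" feedback imagined above. The daily
# word-count thresholds are made up for illustration; research has not
# established any recommended target.

ZONES = [              # (minimum words heard per day, zone)
    (24_000, "green"),
    (18_000, "yellow"),
    (12_000, "orange"),
    (0,      "red"),
]

def speech_zone(words_heard_today: int) -> str:
    """Map a day's estimated word exposure onto a feedback zone."""
    for threshold, zone in ZONES:
        if words_heard_today >= threshold:
            return zone
    return "red"   # fallback for out-of-range input

print(speech_zone(26_500))   # green: cue the on-screen cheers and fireworks
print(speech_zone(9_000))    # red
```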

If this doesn’t make you think of Isaac Asimov’s 1940 classic short story, “Robbie,” winner of the 1941 retrospective Hugo, hunt it up and read it. Asimov’s robotic nanny was—and still is—technologically far ahead of its time. But a digital prenatal nanny of the type we’re talking about here is eminently feasible. It could even be programmed to read your baby Shakespeare or talk to it in dozens of different languages. It might even be able to mimic the sound of your own voice.

*   *   *

Prenatal exposure to voices isn’t the only thing the evolving science of childhood speech development is examining.

In another study reported at the 2019 acoustics meeting, Mark VanDam, a speech and hearing researcher at Washington State University, Spokane, placed voice recorders on seventeen toddlers (average age thirty-four months) and collected more than 1,400 hours of recordings of their conversations with their parents. What he found won’t be a surprise to any parent whose toddler’s favorite question starts with why, but it very much upset the applecart for researchers who’d assumed that mothers were the ones who most frequently instigated conversations with their children.

VanDam’s team found that the toddlers themselves were the ones who most frequently got the conversational ball rolling. “It turns out that children initiate about half the conversations,” VanDam says. “Mothers [initiate] about 40%. Dads about 10%.”

“This is exciting,” he adds, “because [it shows that] kids are active participants in shaping their own world. They have the freedom to pick what they talk about, and when.” It also fits with nontraditional views of early childhood education like those used in Montessori schools throughout the world. “This is completely Montessori,” says Liana Bernard, who spent two school years teaching English to first-graders in the Philippines. “Basically, the child has the capacity to shape their own reality based on their own experiences with the world and how they interpret things, rather than an adult telling them how they should see things.”

The low rate of father-initiated conversations, VanDam adds, applied to every family in his study. “We have seventeen families, and all of the fathers initiated fewer conversations [with the toddler] than the mother.” Nor does it matter whether fathers are talking to sons or daughters. “Dads are equal-opportunity non-conversationalists,” he says.

As with Monson’s study, VanDam’s initial goal is simply to take advantage of the new speech-recognition technologies to collect what scientists think of as “ecological” information—i.e., to study families “in the wild” and see how they behave. But with a larger sample set, VanDam says, it may also become possible to better understand disorders that affect communication, including such problems as autism. “We could use this information to develop new, targeted therapies to help children that have language delay,” he says.

“This type of work would have been impossible a few decades ago,” he adds.

*   *   *

Yet another study has found that however much young children may attend to their mothers’ voices, a leading driver of their speech development isn’t so much their parents as it is other children. Not that this is counterintuitive. There are lots of stories about toddlers who somehow learn the “f-word” and start using it unpredictably and sometimes embarrassingly—even when there’s no way it could have been the parents from whom they learned it.

In fact, say speech and hearing researchers Yuanyuan Wang of Ohio State University and Amanda Seidl of Purdue University (who also presented their research at the 2019 acoustics meeting), young children best learn new words (desired or otherwise) if taught them by other children, especially ones who are slightly older. In a series of experiments, Wang and Seidl attempted to teach two-year-olds new words, using speakers of different ages. They found that the toddlers learned them most effectively from other children, with a sweet spot in the learning curve coming from “instructors” who were about eight to ten years old.

What this means for single-child families, or for the development of peer pressure at later ages, isn’t clear. Perhaps it simply means that children often grow by socializing with slightly older children—ones they recognize as more like themselves than the adults, but still as role models from whom they can learn. If so, it’s another line of support for Montessori-like programs, which often mix children of different ages, somewhat like old-fashioned one-room schoolhouses. Bernard says she sees it when kindergartners are mixed with first-graders. “The first-graders serve as big influencers or examples,” she says. “If a kindergartner sees a first-grader doing something, it’s like it gives the kindergartner permission to do the same.”

It may also mean that the ideal AI nanny wouldn’t be an artificial Mary Poppins. Instead, it might incorporate attributes of an older child, albeit with greater wisdom and self-control—something Siri and Alexa may soon be quite capable of mimicking.

Maybe, instead of Asimov’s robot, Robbie, all that’s needed is a doll through which such an artificial intelligence can speak—one that could be steadily “improved” as both the child and its ideal companion grow.

It might even be possible for the doll to recite Shakespeare in a dozen languages.

*   *   *

Endnotes: 

1   Hepper PG, Shahidullah BS. Development of fetal hearing. Archives of Disease in Childhood. 1994 Sep; 71(2):F81–F87. doi: 10.1136/fn.71.2.f81.

2     Richards DS, Frentzen B, Gerhardt KJ, McCann ME, Abrams RM. Sound levels in the human uterus. Obstetrics & Gynecology. 1992 Aug; 80(2):186–190.

3     Draganova R, et al. Fetal auditory evoked responses to onset of amplitude modulated sounds. A fetal magnetoencephalography (fMEG) study. Hearing Research. 2018 Jun; 363:70–77. doi: 10.1016/j.heares.2018.03.005.

4   Kruger J, Dunning D. Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology. 1999 Dec; 77(6):1121–1134. doi: 10.1037/0022-3514.77.6.1121.

5     Weber JR, Ryherd EE. Quiet time impacts on the neonatal intensive care unit soundscape and patient outcomes. Journal of the Acoustical Society of America. 2019; 145:1657. doi: 10.1121/1.5101086.

Copyright © 2019 Richard A. Lovett