Why Hannah Talks and Alyssa Doesn’t II

The irony here only deepens. One might have noticed that all of these scholars are at the University of Washington. Kuhl and Meltzoff are Co-Directors of the same lab. So when Disney CEO Iger attacked the Pediatrics scholars, he was attacking the very laboratory and institution that Baby Einstein had hailed, when its Language Nursery DVD was first released.

So why does an infant need a live human speaker to learn language from? Why do babies learn nothing from the audio track of a baby DVD, while exposure to regular TV doesn’t impair their language?

The evidence suggests one factor is that baby DVDs rely on disembodied audio voice-overs, unrelated to the abstract imagery of the video track. Meanwhile, grown-up television shows live actors, usually close up—kids can see their faces as they talk. Studies have repeatedly shown that seeing a person’s face makes a huge difference.

Babies learn to decipher speech partly by lip-reading: they watch how people move their lips and mouths to produce sounds. One of the first things that babies must learn—before they can comprehend any word meanings—is when one word ends and another begins. Without segmentation, an adult’s words probably sound about the same to an infant as does his own babbling. At 7.5 months, babies can segment the speech of people they see speaking. However, if the babies hear speech while looking at an abstract shape, instead of a face, they can’t segment the sounds: the speech once again is just endless gibberish. (Even for adults, seeing someone’s lips as he speaks is the equivalent of a 20-decibel increase in volume.)

When a child sees someone speak and hears his voice, there are two sensory draws—two simultaneous events both telling the child to pay attention to this single object of interest—this moment of human interaction. The result is that the infant is more focused, remembers the event, and learns more. Contrast that to the disconnected voice-overs and images of the baby videos. The sensory inputs don’t build on each other. Instead, they compete.

Would baby DVDs work better if they showed human faces speaking? Possibly. But there’s another reason—a more powerful reason—why language learning can’t be left to DVDs. Video programming can’t interact with the baby, responding to the sounds she makes. Why this is so important requires careful explanation.

Wondering what parents’ prevailing assumptions about language acquisition were, we polled some parents, asking them why they thought one kid picked up language far faster than another. Specifically, we were asking about two typically developing kids, without hearing or speech impairments.

Most parents admitted they didn’t know, but they had absorbed a little information here and there to inform their guesses. One of these parents was Anne Frazier, mother to ten-month-old Jon and a litigator at a prestigious Chicago law firm; she was working part-time until Jon turned one. Frazier had a Chinese client base and, before having Jon, occasionally traveled to Asia. She’d wanted to learn Mandarin, but her efforts were mostly for naught. She had decided that she was too old—her brain had lost the necessary plasticity—so she was determined to start her son young. When she was dressing or feeding her baby, she had Chinese-language news broadcasts playing on the television in the background. They never sat down just to watch television—she didn’t think that would be good for Jon—but Frazier did try to make sure her child heard twenty minutes of Mandarin a day. She figured it couldn’t hurt.

Frazier also assumed that Jon would prove to have some level of innate verbal ability—but this would be affected by the sheer amount of language Jon was exposed to. Having a general sense that she needed to constantly talk to her child, Frazier was submitting her kid to a veritable barrage of words.

“Nonstop chatter throughout the day,” she affirmed. “As we run errands, or take a walk, I describe what’s on the street—colors, everything I see. It’s very easy for a mother to lose her voice.”

She sounded exhausted describing it. “It’s hard to keep talking to myself all the time,” Frazier confessed. “Infants don’t really contribute anything to the conversation.”

Frazier’s story was similar to many we heard. Parents were vague on the details, but word had gotten out that innate ability wasn’t the only factor: children raised in a more robust, language-intensive home will hit developmental milestones quicker. This is also the premise of popular advice books for parents of newborns, which usually devote a page to reminding parents to talk a lot to their babies, and around their babies. One fast-selling new product is the $699 “verbal pedometer,” a sophisticated gadget the size of a cell phone that can be slipped into the baby’s pocket or car seat. It counts the number of words the baby hears during an hour or day.

The verbal pedometer is actually used by many researchers who study infants’ exposure to language. The inspiration behind such a tool is a famous longitudinal study by Drs. Betty Hart and Todd Risley, from the University of Kansas, published in 1994.

Hart and Risley went into the homes of a variety of families with a seven- to nine-month-old infant. They videotaped an hour of interactions while the parent was feeding the baby or doing chores with the baby nearby—and they repeated this once a month until the children were three. Painstakingly breaking down those tapes into data, Hart and Risley found that infants in welfare families heard about 600 words per hour. Meanwhile, the infants of working-class families heard 900 words per hour, and the infants of professional-class families heard 1,500 words per hour. These gaps only increased when the babies turned into toddlers—not because the parents spoke to their children more often, but because they communicated in more complex sentences, adding to the word count.

This richness of language exposure had a very strong correlation to the children’s resulting vocabulary. By their third birthday, children of professional parents had spoken vocabularies of 1,100 words, on average, while the children of welfare families were less than half as articulate—speaking only 525 words, on average.

The complexity, variety, and sheer amount of language a child hears is certainly one driver of language acquisition. But it’s not scientifically clear that merely hearing lots of language is the crucial, dominant factor. For their part, Hart and Risley wrote pages listing many other variables at play, all of which had correlations with the resulting rate at which the children learned to speak.

In addition, the words in the English language that children hear most often are words like “was,” “of,” “that,” “in,” and “some”—these are termed “closed class” words. Yet children learn these words the most slowly—usually not until after their second birthday. By contrast, children learn nouns first, even though nouns are the least commonly-occurring words in parents’ natural speech to children.

The basic paradigm, that a child’s language output is a direct function of the sheer volume of input, also doesn’t explain why two children, both of whom have similar home experiences (they might both have highly educated, articulate mothers, for instance), can acquire language on vastly divergent timelines.

A decade ago, Hart and Risley’s work was the cutting edge of language research. It’s still one of the most quoted and cited studies in all of social science. But in the last decade, other scholars have been flying under the radar, teasing out exactly what’s happening in a child’s first two years that pulls her from babble to fluent speech.

If there’s one main lesson from this newest science, it’s this: the basic paradigm has been flipped. The information flow that matters most runs in the opposite direction from what we previously assumed. The central role of the parent is not to push massive amounts of language into the baby’s ears; rather, it is to notice what’s coming from the baby—from his mouth, his eyes, and his fingers—and to respond accordingly. If, like Anne Frazier, you think a baby isn’t contributing to the conversation, you’ve missed something really important.

In fact, one of the mechanisms helping a baby to talk isn’t a parent’s speech at all—it’s not what a child hears from a parent, but what a parent accomplishes with a well-timed, loving caress.