
Speech Segmentation and Its Underpinnings

In our lab we have conducted numerous studies looking at how infants find words in continuous speech. Unlike words in text, words in speech aren't separated by blank spaces. If you think about what it's like listening to a foreign language, you might recall that it often sounds like people are speaking very, very quickly. That is because if you don't know any words in a language, it is difficult to tell where words start and stop. This is basically the problem infants face when hearing their own language for the first time.

Most of the words your child hears aren't produced in isolation. For example, your child doesn't usually hear "doggy" all by itself; instead, the words you might want your child to learn are often embedded in sentences like "Where's the doggy? Do you want to take the doggy to the park?"

From both artificial and natural language studies we know that infants are able to pick up on the likelihood that one sound will follow another. The first syllable of a word tends to be a good predictor of what's going to come next: hearing the syllable "ba" predicts that the syllable "by" will come next. Across word boundaries there is much less predictability, so in a phrase like "happy baby," hearing "happy" does not predict that "baby" will come next. In a set of ongoing studies we are investigating infants' ability to track these language patterns and how that ability may correlate with other factors, like memory, speech-processing efficiency, and preferences for infant-directed speech.
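The idea of tracking how well one syllable predicts the next can be made concrete by computing transitional probabilities, P(next syllable | current syllable), over a stream of syllables. The sketch below is purely illustrative (the toy syllable stream and function name are our own invention, not stimuli from any actual study): within a word like "baby," the transition "ba" → "by" is highly predictable, while the transitions out of a word-final syllable like "py" are spread across several possibilities.

```python
from collections import Counter, defaultdict

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from a syllable stream."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(syllables, syllables[1:]):
        pair_counts[cur][nxt] += 1
    probs = {}
    for cur, counter in pair_counts.items():
        total = sum(counter.values())
        probs[cur] = {nxt: n / total for nxt, n in counter.items()}
    return probs

# Toy stream built from "happy", "baby", and "doggy":
# "ba" is always followed by "by" (a within-word transition),
# while "py" (end of "happy") is followed by different syllables
# depending on which word happens to come next.
stream = "hap py ba by hap py dog gy hap py ba by dog gy ba by".split()
tp = transitional_probabilities(stream)
print(tp["ba"]["by"])  # within-word transition: probability 1.0
print(tp["py"])        # between-word transitions: probability split across words
```

In this toy stream, "ba" → "by" has probability 1.0, whereas "py" leads to "ba" or "dog" with lower probability each; an infant (or model) tracking these statistics could posit word boundaries wherever the transitional probability dips.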