Anne Smith, Ph.D., Winter 1999-2000
Everyone knows that stuttering, like many human behaviors, is complex. However, people who stutter and clinicians who treat them should be very encouraged. Recent theoretical advances and new technologies to "see" the processes underlying complex human behavior have helped us to make considerable progress in understanding stuttering.
Most scientists interested in fluency agree that there is not a single cause of stuttering; rather, it is the result of the interaction of several factors. The central issue, then, is: how and when do these factors interact to produce stuttering? Our model starts with a simple point: to produce speech, the brain must generate sets of neural commands that produce the right amount and timing of muscle activity in a large number of muscles, including those that control breathing, voice, and oral movements. During the disfluent speech of children and adults who stutter, the brain clearly fails to accomplish this task.
Our research, in combination with that from other laboratories, suggests that, although stuttering is expressed as a failure of the motor areas of the brain to generate the right muscle commands for speech to proceed, the explanation of why this happens involves the interaction of the brain's motor areas with other brain systems, including those involved in emotional, cognitive, and linguistic processing. Thus our experiments are designed to test hypotheses such as this: if linguistic processing demands are great (or emotional arousal is high or memory load is great), the motor areas of the brain cannot perform as well in generating muscle command signals. The next phase of our NIH project on stuttering focuses on the interaction of language and motor factors.
How can we test hypotheses such as these? How can we "see" the brain's motor command signals or get information about the activity of the neural systems involved in linguistic processing? To study the motor commands, we place small infrared lights on the lips or jaw, and a digital camera tracks oral movements during speech. This system is completely noninvasive, and we have tested children as young as 4 years of age. By analyzing speech movements, we can obtain a good index of how well the brain is doing at generating the muscle command signals. The beauty of this technique is that the person does not have to be disfluent: we can see a range of performance even during fluent speech. We know, for example, that as children mature, their brains get better at generating muscle command signals for speech, and their performance does not match that of adults until the teen years. We also have found that people who stutter can perform just like their normally fluent controls, but that when the speaking task demands are increased (by making the sentence more complex), the muscle command signals of people who stutter (but not those of the normally fluent adults) start to deteriorate.
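The article does not spell out how such an index is computed, but one common approach in speech motor control research is to have a person repeat the same sentence many times, time- and amplitude-normalize each recorded movement trajectory, and then measure how much the trajectories vary across repetitions: more variable movements suggest less stable motor commands. The sketch below is purely illustrative of that idea; the function names, parameters, and data are hypothetical, not the lab's actual analysis code.

```python
# Illustrative sketch (not the lab's code): quantify trial-to-trial
# consistency of a repeated speech movement trajectory.

def resample(trajectory, n_points=50):
    """Linearly resample a trajectory to n_points (time-normalization),
    so trials of different durations can be compared point by point."""
    m = len(trajectory)
    out = []
    for i in range(n_points):
        pos = i * (m - 1) / (n_points - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, m - 1)
        out.append(trajectory[lo] * (1 - frac) + trajectory[hi] * frac)
    return out

def zscore(xs):
    """Amplitude-normalize a trajectory to zero mean and unit SD."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def movement_variability_index(trials, n_points=50):
    """Sum of across-trial standard deviations after normalizing each
    trial; lower values indicate more consistent (stable) movements."""
    norm = [zscore(resample(t, n_points)) for t in trials]
    total = 0.0
    for i in range(n_points):
        vals = [t[i] for t in norm]
        mean = sum(vals) / len(vals)
        total += (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return total
```

Under this scheme, a set of identical repetitions yields an index of zero, and the index grows as repetitions diverge, which is one way a "deterioration" of the command signals under harder tasks could show up numerically.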
As a window on the brain's activity during linguistic processing, we use a system to record the electrical activity of the brain. The subject wears a "bathing cap" with 32 electrodes embedded in it. These electrodes record the brain's activity during linguistic processing tasks. In earlier studies, scientists have found very distinctive signatures of brain activity for various types of linguistic processing. For example, decoding the meaning of a sentence is characterized by a different pattern of brain activity in space and time compared to processing the grammar of the sentence. We intend to find out if people who stutter (or a subgroup of them) have different neural processing of language even when they are not speaking, or if they have basically normal linguistic processing abilities. In the same people, we will record oral movements during speech to see if "loading" the linguistic processing system produces negative effects on the brain's ability to send the "right" command signals to the speech muscles. We will then know if people who stutter (or a subgroup of them) have either (a) normal language processes or (b) atypical language processes that interfere with motor commands for speech. We also will be studying children, as we believe that the potential interaction of language and motor factors in fluency could change over the life span.
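The "signatures" described above come from averaging: a single trial of scalp EEG is dominated by noise, but averaging many trials time-locked to the same event (say, the onset of a word) cancels the noise and leaves the event-related potential. The toy sketch below shows this principle only; the signal shape and trial counts are made up for illustration and are not data from the study.

```python
import random

def average_epochs(epochs):
    """Average EEG epochs sample by sample; random noise tends to cancel,
    leaving the event-related potential (ERP)."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

# Hypothetical example: a fixed ERP shape buried in heavy noise is
# recovered once enough trials are averaged.
random.seed(0)
signal = [0, 1, 3, 5, 3, 1, 0]  # made-up ERP waveform (arbitrary units)
epochs = [[s + random.gauss(0, 2) for s in signal] for _ in range(500)]
erp = average_epochs(epochs)
```

With 500 trials and noise of standard deviation 2, the residual noise in the average shrinks to roughly 2/√500 ≈ 0.09 units, which is why distinct spatial-temporal patterns for meaning versus grammar become visible at all.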
Our research team (Anne Smith and Christine Weber-Fox from the Department of Audiology and Speech Sciences, Howard Zelaznik from the Department of Health, Kinesiology, and Leisure Studies at Purdue University, and Janet Nichol, Department of Linguistics, University of Arizona) is extremely excited about the next phase of our research on stuttering. We are grateful to the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health for support of the project (Physiological Correlates of Stuttering, R01 00559).