Cerebral mechanisms of prosodic sensory integration using low-frequency bands of connected speech.

Isabelle Hesling, Bixente Dilharreguy, Sylvain Clément, Martine Bordessoules, Michèle Allard
Human Brain Mapping. 2005-05-31; 26(3): 157-169
DOI: 10.1002/hbm.20147

Although speech perception has been reported to involve both hemispheres, converging data have posited a functional asymmetry at the level of the secondary auditory cortices. Using fMRI in 12 right-handed French men listening passively to long connected speech stimuli, we addressed the question of the neuronal networks involved in the integration of low-frequency bands of speech by comparing 1) differences in brain activity between two listening conditions (FN and NF) that differ in the integration of pitch modulations (in FN, the low frequencies, obtained with a low-pass filter, are delivered to the left ear while the whole acoustic message is simultaneously delivered to the right ear; NF is the reverse arrangement); 2) differences in brain activity induced by high and low degrees of prosodic expression (expressive vs. flat); and 3) effects of the same connected speech stimulus in the two listening conditions. Each stimulus induced a specific cerebral network, the flat stimulus weakening activations, which were reduced mainly to the bilateral STG in both listening conditions. In the expressive condition, the specific sensory integration FN resulted in an increased involvement of the articulatory loop and in new recruitments such as right BA6-44, left BA39-40, the left posterior insula, and bilateral BA30. This finding may be accounted for by the existence of temporal windows differing both in length and in the acoustic cues they decode, strengthening the "asymmetric sampling in time" hypothesis posited by Poeppel (Speech Commun 2003; 41:245–255). Such an improvement in prosodic integration could find applications in the rehabilitation of some speech disturbances. Hum Brain Mapp, 2005. © 2005 Wiley-Liss, Inc.