The research described in this thesis focuses on the roles of top-down semantic information from the sentence context and of bottom-up acoustic information in word perception. It investigates how and when listeners use acoustic and semantic information during spoken-word processing.
To this end, a rhyme-monitoring experiment was performed using lexically ambiguous speech fragments, which are consistent with more than one lexical parse. An example is the fragment [plesAIn], which could be perceived as ‘play sign’ or as ‘place sign’. The duration of the boundary consonant constitutes a phonetic cue to the intended segmentation. Such ambiguous fragments were embedded in neutral and in constraining contexts.
The results of this study show that semantic information was used early on during the processing of these fragments. Listeners often perceived the contextually appropriate alternative, even before the duration of the boundary consonant could have contributed to their perception. This means that semantic information can influence spoken-word recognition while the acoustic signal is still ambiguous between multiple lexical alternatives.
In addition, the results showed that the temporal-acoustic cues to the intended segmentation were not always decisive for the perceived segmentation in the presence of a constraining semantic context, although these cues were used effectively when no semantic information was available. This means that not all types of clear acoustic information are decisive for word perception.
This study is of interest to researchers working in the field of spoken-word recognition, as well as to cognitive psychologists and experimental phoneticians.