Abstract

One critical step when trying to comprehend a spoken message is to identify the words that the speaker intended. To recognize spoken words, listeners continuously attempt to map the incoming speech signal onto lexical representations stored in memory (McClelland and Elman, 1986; Norris, 1994): Words that partially overlap with the signal are activated until the lexical candidate that best matches the input wins over its competitors, a process known as lexical competition. Models of spoken-word recognition, most of which are based on native listener behavior, assume that lexical representations are stable and contain at least the phonological citation form of words. While lexical representations likely also contain other forms, for example the reduced forms found in conversational speech, it is a matter of debate whether native listeners encode spoken words exclusively as phonetically detailed exemplars (Johnson, 1997; Goldinger, 1998) or whether phonological abstraction also takes place (McQueen et al., 2006). Another assumption of models of native spoken-word recognition is that, under normal circumstances, listeners' perception of the input is optimal and faithful to the signal: Accurate lexical representations are easily contacted, and an optimal set of candidates is activated for quick lexical selection.
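
To make the lexical-competition process described above concrete, the toy Python sketch below keeps every candidate whose onset overlaps with the unfolding input active and finally selects the candidate that best matches the complete signal. It is a hypothetical simplification for illustration only, not an implementation of TRACE, Shortlist, or any of the cited models; the lexicon and scoring rule are invented for this example.

```python
# A minimal, illustrative sketch of lexical competition (a toy model, not an
# implementation of TRACE or Shortlist): as the spoken input unfolds segment
# by segment, candidates whose onsets overlap with the input so far remain
# active, and the candidate that best matches the complete input is selected.
# The lexicon and scoring rule are hypothetical simplifications.

LEXICON = ["cap", "cat", "captain", "capital", "candle"]


def onset_overlap(word: str, heard: str) -> int:
    """Count how many initial segments of the word match the input heard so far."""
    count = 0
    for w, h in zip(word, heard):
        if w != h:
            break
        count += 1
    return count


def recognize(signal: str, lexicon=LEXICON) -> str:
    """Activate partially overlapping candidates as the signal unfolds,
    then select the candidate that best matches the full input."""
    heard = ""
    for segment in signal:
        heard += segment
        # Candidates stay in the competition as long as their onset is
        # consistent with everything heard so far (including embedded words).
        active = [w for w in lexicon
                  if onset_overlap(w, heard) == min(len(w), len(heard))]
        print(f"heard '{heard}': active candidates -> {active}")
    # Lexical selection: the best match to the full input wins; a length
    # mismatch counts against a candidate.
    return max(lexicon,
               key=lambda w: onset_overlap(w, signal) - abs(len(w) - len(signal)))


if __name__ == "__main__":
    print("recognized:", recognize("captain"))
```

Running the sketch on the input "captain" shows competitors such as "cat" and "candle" dropping out as soon as the signal mismatches them, while "cap" and "capital" remain active longer before "captain" wins the competition.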
