Self-organization in anticipatory language contexts: A new view of top-down and bottom-up constraint integration during online sentence processing

Date of Completion

January 2011


Psychology, Cognitive




Research on anticipatory and bottom-up effects in human sentence processing has revealed two important, though seemingly conflicting, insights about language processing. Anticipatory effects (e.g., eye movements to predictably edible items upon hearing "The boy will eat the…"; Altmann & Kamide, 1999) support a language system that robustly integrates all available information (e.g., from the preceding language/discourse, the visual context, etc.) in order to anticipate upcoming linguistic elements that best satisfy the available constraints. By contrast, bottom-up effects support a language system that is robustly sensitive to ("bottom-up") structure in the language that conflicts with the larger context, suggesting that the system does not immediately integrate the bottom-up input with the contextual constraints. For example, readers are disrupted by "locally coherent" phrases in the bottom-up input that conflict with the larger sentence (e.g., grammatical) context (e.g., "the player tossed the Frisbee" in "The coach smiled at the player tossed the Frisbee"; Tabor, Galantucci, & Richardson, 2004). Here, I report on a self-organizing artificial neural network simulation that predicts simultaneous anticipatory and bottom-up effects, and on five experiments in the visual world paradigm that tested the network's predictions. I found simultaneous anticipatory and bottom-up effects at both the lexical level (Experiments 1 and 2; but see Experiment 3) and the multi-word level (Experiment 4), as well as individual differences in these effects (Experiment 5). I describe the implications of these findings for self-organization, as well as for various other theories of language processing.