The vast expressive power of language is made possible by two principles: the arbitrary sound-meaning pairing underlying words, and the discrete combinatorial system underlying grammar. These principles implicate distinct cognitive mechanisms: associative memory and symbol-manipulating rules. The distinction may be seen in the difference between regular inflection (e.g., walk-walked), which is productive and open-ended and hence implicates a rule, and irregular inflection (e.g., come-came), which is idiosyncratic and closed and hence implicates individually memorized words. Nonetheless, two very different theories have attempted to collapse the distinction: generative phonology invokes minor rules to generate irregular as well as regular forms, and connectionism invokes a pattern-associator memory to store and retrieve regular as well as irregular forms. I present evidence from three disciplines that supports the traditional word/rule distinction, though with an enriched conception of lexical memory that has some of the properties of a pattern associator. Rules, nonetheless, are distinct from pattern association, because a rule concatenates a suffix to a symbol for verbs; it therefore does not require access to memorized verbs or their sound patterns, but applies as the "default" whenever memory access fails. I present a dozen such circumstances, including novel, unusual-sounding, and rootless and headless derived words, in which people inflect the words regularly (explaining quirks like flied out, low-lifes, and Walkmans). A comparison of English with other languages shows that, contrary to the connectionist account, default suffixation is not due to numerous regular words reinforcing a pattern in associative memory but to a memory-independent, symbol-concatenating mental operation.
Language comprises a lexicon for storing words and a grammar for generating rule-governed forms. Evidence is presented that the lexicon is part of a temporal-parietal/medial-temporal "declarative memory" system and that grammatical rules are processed by a frontal/basal-ganglia "procedural" system. Patients produced past tenses of regular and novel verbs (looked and plagged), which require an -ed-suffixation rule, and irregular verbs (dug), which are retrieved from memory. Word-finding difficulties in posterior aphasia, and the general declarative memory impairment in Alzheimer's disease, led to more errors with irregular than regular and novel verbs. Grammatical difficulties in anterior aphasia, and the general impairment of procedures in Parkinson's disease, led to the opposite pattern. In contrast to the Parkinson's patients, who showed suppressed motor activity and rule use, Huntington's disease patients showed excess motor activity and rule use, underscoring a role for the basal ganglia in grammatical processing.
By definition, visual image representations are organized around spatial properties. However, we know very little about how these representations use information about location, one of the most important spatial properties. Three experiments explored how location information is incorporated into image representations. All of these experiments used a mental rotation task in which the location of the stimulus varied from trial to trial. If images are location-specific, these changes should affect the way images are used. The effects from image representations were separated from those of general spatial attention mechanisms by comparing performance with and without advance knowledge of the stimulus shape. With shape information, subjects could use an image as a template, and they recognized the stimulus more quickly when it was at the same location as the image. Experiment 1 demonstrated that subjects were able to use visual image representations effectively without knowing where the stimulus would appear, but left open the possibility that image location must be adjusted before use. In Experiment 2, distance between the stimulus location and the image location was varied systematically, and response time increased with distance. Therefore image representations appear to be location-specific, though the represented location can be adjusted easily. In Experiment 3, a saccade was introduced between the image cue and the test stimulus, in order to test whether subjects responded more quickly when the test stimulus appeared at the same retinotopic location or same spatiotopic location as the cue. The results suggest that location is coded retinotopically in image representations. This finding has implications not only for visual imagery but also for visual processing in general, because it suggests that there is no spatiotopic transform in the early stages of visual processing.
When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on experiments and simulations of morphological pattern generation, independently calls for a hybrid of the two theories.
Children extend regular grammatical patterns to irregular words, resulting in overregularizations like comed, often after a period of correct performance ("U-shaped development"). The errors seem paradigmatic of rule use, hence bear on central issues in the psychology of rules: how creative rule application interacts with memorized exceptions in development, how overgeneral rules are unlearned in the absence of parental feedback, and whether cognitive processes involve explicit rules or parallel distributed processing (connectionist) networks. We remedy the lack of quantitative data on overregularization by analyzing 11,521 irregular past tense utterances in the spontaneous speech of 83 children. Our findings are as follows. (1) Overregularization errors are relatively rare (median 2.5% of irregular past tense forms), suggesting that there is no qualitative defect in children's grammars that must be unlearned. (2) Overregularization occurs at a roughly constant low rate from the 2s into the school-age years, affecting most irregular verbs. (3) Although overregularization errors never predominate, one aspect of their purported U-shaped development was confirmed quantitatively: an extended period of correct performance precedes the first error. (4) Overregularization does not correlate with increases in the number or proportion of regular verbs in parental speech, children's speech, or children's vocabularies. Thus, the traditional account in which memory operates before rules cannot be replaced by a connectionist alternative in which a single network displays rotelike or rulelike behavior in response to changes in input statistics. (5) Overregularizations first appear when children begin to mark regular verbs for tense reliably (i.e., when they stop saying Yesterday I walk). (6) The more often a parent uses an irregular form, the less often the child overregularizes it. 
(7) Verbs are protected from overregularization by similar-sounding irregulars, but they are not attracted to overregularization by similar-sounding regulars, suggesting that irregular patterns are stored in an associative memory with connectionist properties, but that regulars are not. We propose a simple explanation. Children, like adults, mark tense using memory (for irregulars) and an affixation rule that can generate a regular past tense form for any verb. Retrieval of an irregular blocks the rule, but children's memory traces are not strong enough to guarantee perfect retrieval. When retrieval fails, the rule is applied, and overregularization results.
How do speakers predict the syntax of a verb from its meaning? Traditional theories posit that syntactically relevant information about semantic arguments consists of a list of thematic roles like "agent", "theme", and "goal", which are linked onto a hierarchy of grammatical positions like subject, object, and oblique object. For verbs involving motion, the entity caused to move is defined as the "theme" or "patient" and linked to the object. However, this fails for many common verbs, as in *fill water into the glass and *cover a sheet onto the bed. In more recent theories, verbs' meanings are multidimensional structures in which the motions, changes, and other events can be represented in separate but connected substructures; linking rules are sensitive to the position of an argument in a particular configuration. The verb's object would be linked not to the moving entity but to the argument specified as "affected" or caused to change as the main event in the verb's meaning. The change can either be one of location, resulting from motion in a particular manner, or of state, resulting from accommodating or reacting to a substance. For example, pour specifies how a substance moves (downward in a stream), so its substance argument is the object (pour the water/*glass); fill specifies how a container changes (from not full to full), so its stationary container argument is the object (fill the glass/*water). The newer theory was tested in three experiments. Children aged 3;4-9;4 and adults were taught made-up verbs, presented in a neutral syntactic context (this is mooping), referring to a transfer of items to a surface or container. Subjects were tested on their willingness to encode the moving items or the surface as the verb's object.
For verbs where the items moved in a particular manner (e.g., zig-zagging), people were more likely to express the moving items as the object; for verbs where the surface changed state (e.g., shape, color, or fullness), people were more likely to express the surface as the object. This confirms that speakers are not confined to labeling moving entities as "themes" or "patients" and linking them to the grammatical object; when a stationary entity undergoes a state change as the result of a motion, it can be represented as the main affected argument and thereby linked to the grammatical object instead.
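The configuration-sensitive linking rule described above can be rendered as a toy lookup. This is a simplified sketch, not the authors' formalism: the dictionary entries and feature names are invented for illustration, compressing the semantic structures the theory actually posits.

```python
# Toy semantic entries: each verb names its main event type and which
# argument is "affected" by that event (invented, illustrative values).
VERB_SEMANTICS = {
    "pour": {"main_event": "manner_of_motion", "affected": "substance"},
    "fill": {"main_event": "change_of_state", "affected": "container"},
}

def grammatical_object(verb):
    """Linking rule sketch: the argument specified as 'affected' in the
    verb's main event is linked to the grammatical-object position."""
    return VERB_SEMANTICS[verb]["affected"]
```

So pour, a manner-of-motion verb, links its moving substance to object position, while fill, a change-of-state verb, links its stationary container there instead, matching the subjects' behavior with the made-up verbs.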
How are words represented in the mind and woven into sentences? How do children learn how to use words? Currently there is a tremendous resurgence of interest in lexical semantics. Word meanings have become increasingly important in linguistic theories because syntactic constructions are sensitive to the words they contain. In computational linguistics, new techniques are being applied to analyze words in texts, and machine-readable dictionaries are being used to build lexicons for natural language systems. These technologies provide large amounts of data and powerful data-analysis techniques to theoretical linguists, who can repay the favor to computer science by describing how one efficient lexical system, the human mind, represents word meanings. Lexical semantics provides crucial evidence to psychologists, too, about the innate stuff out of which concepts are made. Finally, it has become central to the study of child language acquisition. Infants are not born knowing a language, but they do have some understanding of the conceptual world that their parents describe in their speech. Since concepts are intimately tied to word meanings, knowledge of semantics might help children break into the rest of the language system. Lexical and Conceptual Semantics offers views from a variety of disciplines of these sophisticated new approaches to understanding the mental dictionary.
Language and cognition have been explained as the products of a homogeneous associative memory structure or, alternatively, of a set of genetically determined computational modules in which rules manipulate symbolic representations. Intensive study of one phenomenon of English grammar and how it is processed and acquired suggests that both theories are partly right. Regular verbs (walk-walked) are computed by a suffixation rule in a neural system for grammatical processing; irregular verbs (run-ran) are retrieved from an associative memory.