Most evidence for the role of regular inflection as a default operation comes from languages that confound the morphological properties of regular and irregular forms with their phonological characteristics. For instance, regular plurals tend to faithfully preserve the base’s phonology (e.g., rat-rats), whereas irregular nouns tend to alter it (e.g., mouse-mice). The distinction between regular and irregular inflection may thus be an epiphenomenon of phonological faithfulness. In Hebrew noun inflection, however, morphological regularity and phonological faithfulness can be distinguished: nouns whose stems change in the plural, and nouns whose stems are preserved, may each take either a regular or an irregular suffix. We use this dissociation to examine two hallmarks of default inflection: its lack of dependence on analogies from similar regular nouns, and its application to nonroots such as names. We show that these hallmarks of regularity may be found whether or not the plural form preserves the stem faithfully: people apply the regular suffix to novel nouns that do not resemble existing nouns, and to names that sound like irregular nouns, regardless of whether the stem is ordinarily preserved in the plural of that family of nouns. Moreover, when they pluralize names (e.g., the Barak-Barakim), they do not apply the stem changes that are found in their homophonous nouns (e.g., barak-brakim “lightning”), replicating an effect found in English and German. These findings show that the distinction between regular and irregular phenomena cannot be reduced to differences in the kinds of phonological changes associated with those phenomena in English. Instead, regularity and irregularity must be distinguished in terms of the kinds of mental computations that effect them: symbolic operations versus memorized idiosyncrasies.
A corollary is that complex words are not generally dichotomizable as “regular” or “irregular”; different aspects of a word may be regular or irregular depending on whether they violate the rule for that aspect and hence must be stored in memory.
What is the interaction between storage and computation in language processing? What is the psychological status of grammatical rules? What are the relative strengths of connectionist and symbolic models of cognition? How are the components of language implemented in the brain? The English past tense has served as an arena for debates on these issues. We defend the theory that irregular past-tense forms are stored in the lexicon, a division of declarative memory, whereas regular forms can be computed by a concatenation rule, which requires the procedural system. Irregulars have the psychological, linguistic and neuropsychological signatures of lexical memory, whereas regulars often have the signatures of grammatical processing. Furthermore, because regular inflection is rule-driven, speakers can apply it whenever memory fails.
According to the ‘word/rule’ account, regular inflection is computed by a default, symbolic process, whereas irregular inflection is achieved by associative memory. Conversely, pattern-associator accounts attribute both regular and irregular inflection to an associative process. The acquisition of the default is ascribed to the asymmetry in the distribution of regular and irregular tokens: irregular tokens tend to form tight, well-defined phonological clusters (e.g. sing-sang, ring-rang), whereas regular forms are diffusely distributed throughout the phonological space. On this account, the distributional asymmetry is necessary and sufficient for the acquisition of a regular default. Hebrew nominal inflection challenges this account. We demonstrate that Hebrew speakers use the regular masculine inflection as a default despite the overlap in the distribution of regular and irregular Hebrew masculine nouns. Specifically, Experiment 1 demonstrates that regular inflection is productively applied to novel nouns regardless of their similarity to existing regular nouns. In contrast, the inflection of irregular-sounding nouns is strongly sensitive to their similarity to stored irregular tokens. Experiment 2 establishes the generality of the regular default for novel words that are phonologically idiosyncratic. Experiment 3 demonstrates that Hebrew speakers assign the default regular inflection to borrowings and names that are identical to existing irregular nouns. The existence of default inflection in Hebrew is incompatible with the distributional asymmetry hypothesis. Our findings also lend no support to a type-frequency account. The convergence of the circumstances triggering default inflection in Hebrew, German and English suggests that the capacity for default inflection may be general.
The vast expressive power of language is made possible by two principles: the arbitrary sound-meaning pairing underlying words, and the discrete combinatorial system underlying grammar. These principles implicate distinct cognitive mechanisms: associative memory and symbol-manipulating rules. The distinction may be seen in the difference between regular inflection (e.g., walk-walked), which is productive and open-ended and hence implicates a rule, and irregular inflection (e.g., come-came), which is idiosyncratic and closed and hence implicates individually memorized words. Nonetheless, two very different theories have attempted to collapse the distinction: generative phonology invokes minor rules to generate irregular as well as regular forms, and connectionism invokes a pattern-associator memory to store and retrieve regular as well as irregular forms. I present evidence from three disciplines that supports the traditional word/rule distinction, though with an enriched conception of lexical memory that has some of the properties of a pattern associator. Rules are nonetheless distinct from pattern association: because a rule concatenates a suffix to a symbol for verbs, it does not require access to memorized verbs or their sound patterns, but applies as the "default" whenever memory access fails. I present a dozen such circumstances, including novel, unusual-sounding, and rootless and headless derived words, in which people inflect the words regularly (explaining quirks like flied out, low-lifes, and Walkmans). A comparison of English to other languages shows that, contrary to the connectionist account, default suffixation is not due to numerous regular words reinforcing a pattern in associative memory, but to a memory-independent, symbol-concatenating mental operation.
Language comprises a lexicon for storing words and a grammar for generating rule-governed forms. Evidence is presented that the lexicon is part of a temporal-parietal/medial-temporal “declarative memory” system and that grammatical rules are processed by a frontal/basal-ganglia “procedural” system. Patients produced past tenses of regular and novel verbs (looked and plagged), which require an -ed-suffixation rule, and irregular verbs (dug), which are retrieved from memory. Word-finding difficulties in posterior aphasia, and the general declarative memory impairment in Alzheimer's disease, led to more errors with irregular than regular and novel verbs. Grammatical difficulties in anterior aphasia, and the general impairment of procedures in Parkinson's disease, led to the opposite pattern. In contrast to the Parkinson's patients, who showed suppressed motor activity and rule use, Huntington's disease patients showed excess motor activity and rule use, underscoring a role for the basal ganglia in grammatical processing.
By definition, visual image representations are organized around spatial properties. However, we know very little about how these representations use information about location, one of the most important spatial properties. Three experiments explored how location information is incorporated into image representations. All of these experiments used a mental rotation task in which the location of the stimulus varied from trial to trial. If images are location-specific, these changes should affect the way images are used. The effects from image representations were separated from those of general spatial attention mechanisms by comparing performance with and without advance knowledge of the stimulus shape. With shape information, subjects could use an image as a template, and they recognized the stimulus more quickly when it was at the same location as the image. Experiment 1 demonstrated that subjects were able to use visual image representations effectively without knowing where the stimulus would appear, but left open the possibility that image location must be adjusted before use. In Experiment 2, distance between the stimulus location and the image location was varied systematically, and response time increased with distance. Therefore image representations appear to be location-specific, though the represented location can be adjusted easily. In Experiment 3, a saccade was introduced between the image cue and the test stimulus, in order to test whether subjects responded more quickly when the test stimulus appeared at the same retinotopic location or same spatiotopic location as the cue. The results suggest that location is coded retinotopically in image representations. This finding has implications not only for visual imagery but also for visual processing in general, because it suggests that there is no spatiotopic transform in the early stages of visual processing.
When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former, rule-only theories with the latter. Linguistic and psycholinguistic evidence, drawn from experiments and simulations of morphological pattern generation, independently calls for a hybrid of the two theories.
Children extend regular grammatical patterns to irregular words, resulting in overregularizations like comed, often after a period of correct performance ("U-shaped development"). The errors seem paradigmatic of rule use, hence bear on central issues in the psychology of rules: how creative rule application interacts with memorized exceptions in development, how overgeneral rules are unlearned in the absence of parental feedback, and whether cognitive processes involve explicit rules or parallel distributed processing (connectionist) networks. We remedy the lack of quantitative data on overregularization by analyzing 11,521 irregular past tense utterances in the spontaneous speech of 83 children. Our findings are as follows. (1) Overregularization errors are relatively rare (median 2.5% of irregular past tense forms), suggesting that there is no qualitative defect in children's grammars that must be unlearned. (2) Overregularization occurs at a roughly constant low rate from the 2s into the school-age years, affecting most irregular verbs. (3) Although overregularization errors never predominate, one aspect of their purported U-shaped development was confirmed quantitatively: an extended period of correct performance precedes the first error. (4) Overregularization does not correlate with increases in the number or proportion of regular verbs in parental speech, children's speech, or children's vocabularies. Thus, the traditional account in which memory operates before rules cannot be replaced by a connectionist alternative in which a single network displays rotelike or rulelike behavior in response to changes in input statistics. (5) Overregularizations first appear when children begin to mark regular verbs for tense reliably (i.e., when they stop saying Yesterday I walk). (6) The more often a parent uses an irregular form, the less often the child overregularizes it. 
(7) Verbs are protected from overregularization by similar-sounding irregulars, but they are not attracted to overregularization by similar-sounding regulars, suggesting that irregular patterns are stored in an associative memory with connectionist properties, but that regulars are not. We propose a simple explanation. Children, like adults, mark tense using memory (for irregulars) and an affixation rule that can generate a regular past tense form for any verb. Retrieval of an irregular blocks the rule, but children's memory traces are not strong enough to guarantee perfect retrieval. When retrieval fails, the rule is applied, and overregularization results.
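The blocking mechanism proposed above (retrieve an irregular from memory; if retrieval fails, apply the affixation rule) can be sketched as a toy model. This is a minimal illustration, not the authors' implementation; the lexicon, the `retrieval_strength` parameter, and all function names are assumptions introduced here, and the spelling rule is deliberately simplistic:

```python
import random

# Toy lexicon of memorized irregular past-tense forms (illustrative subset).
IRREGULARS = {"come": "came", "go": "went", "dig": "dug", "run": "ran"}

def regular_past(verb: str) -> str:
    """Default rule: concatenate the -ed suffix (minimal spelling adjustment only)."""
    return verb + "d" if verb.endswith("e") else verb + "ed"

def past_tense(verb: str, retrieval_strength: float = 1.0, rng=random) -> str:
    """Retrieve an irregular if memory succeeds; otherwise apply the default rule.

    Successful retrieval blocks the rule. With retrieval_strength < 1, retrieval
    of a stored irregular sometimes fails, and applying the rule to the bare stem
    yields an overregularization such as 'comed'.
    """
    if verb in IRREGULARS and rng.random() < retrieval_strength:
        return IRREGULARS[verb]  # memory lookup succeeded: rule is blocked
    return regular_past(verb)    # memory failed or no entry: default rule applies
```

On this sketch, a child's weaker memory traces correspond to a lower `retrieval_strength`: `past_tense("come", retrieval_strength=0.0)` yields the overregularized "comed", while adult-strength retrieval yields "came".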
How do speakers predict the syntax of a verb from its meaning? Traditional theories posit that syntactically relevant information about semantic arguments consists of a list of thematic roles like "agent", "theme", and "goal", which are linked onto a hierarchy of grammatical positions like subject, object and oblique object. For verbs involving motion, the entity caused to move is defined as the "theme" or "patient" and linked to the object. However, this fails for many common verbs, as in fill water into the glass and cover a sheet onto the bed. In more recent theories verbs' meanings are multidimensional structures in which the motions, changes, and other events can be represented in separate but connected substructures; linking rules are sensitive to the position of an argument in a particular configuration. The verb's object would be linked not to the moving entity but to the argument specified as "affected" or caused to change as the main event in the verb's meaning. The change can either be one of location, resulting from motion in a particular manner, or of state, resulting from accommodating or reacting to a substance. For example, pour specifies how a substance moves (downward in a stream), so its substance argument is the object (pour the water/glass); fill specifies how a container changes (from not full to full), so its stationary container argument is the object (fill the glass/water). The newer theory was tested in three experiments. Children aged 3;4-9;4 and adults were taught made-up verbs, presented in a neutral syntactic context (this is mooping), referring to a transfer of items to a surface or container. Subjects were tested on their willingness to encode the moving items or the surface as the verb's object. 
For verbs where the items moved in a particular manner (e.g., zig-zagging), people were more likely to express the moving items as the object; for verbs where the surface changed state (e.g., shape, color, or fullness), people were more likely to express the surface as the object. This confirms that speakers are not confined to labeling moving entities as "themes" or "patients" and linking them to the grammatical object; when a stationary entity undergoes a state change as the result of a motion, it can be represented as the main affected argument and thereby linked to the grammatical object instead.
Language and cognition have been explained as the products of a homogeneous associative memory structure or, alternatively, of a set of genetically determined computational modules in which rules manipulate symbolic representations. Intensive study of one phenomenon of English grammar and how it is processed and acquired suggests that both theories are partly right. Regular verbs (walk-walked) are computed by a suffixation rule in a neural system for grammatical processing; irregular verbs (run-ran) are retrieved from an associative memory.
How do people recognize an object in different orientations? One theory is that the visual system describes the object relative to a reference frame centered on the object, resulting in a representation that is invariant across orientations. Chronometric data show that this is true only when an object can be identified uniquely by the arrangement of its parts along a single dimension. When an object can only be distinguished by an arrangement of its parts along more than one dimension, people mentally rotate it to a familiar orientation. This finding suggests that the human visual reference frame is tied to egocentric coordinates.