By definition, visual image representations are organized around spatial properties. However, we know very little about how these representations use information about location, one of the most important spatial properties. Three experiments explored how location information is incorporated into image representations. All of these experiments used a mental rotation task in which the location of the stimulus varied from trial to trial. If images are location-specific, these changes should affect the way images are used. The effects from image representations were separated from those of general spatial attention mechanisms by comparing performance with and without advance knowledge of the stimulus shape. With shape information, subjects could use an image as a template, and they recognized the stimulus more quickly when it was at the same location as the image. Experiment 1 demonstrated that subjects were able to use visual image representations effectively without knowing where the stimulus would appear, but left open the possibility that image location must be adjusted before use. In Experiment 2, distance between the stimulus location and the image location was varied systematically, and response time increased with distance. Therefore image representations appear to be location-specific, though the represented location can be adjusted easily. In Experiment 3, a saccade was introduced between the image cue and the test stimulus, in order to test whether subjects responded more quickly when the test stimulus appeared at the same retinotopic location or same spatiotopic location as the cue. The results suggest that location is coded retinotopically in image representations. This finding has implications not only for visual imagery but also for visual processing in general, because it suggests that there is no spatiotopic transform in the early stages of visual processing.
When it comes to explaining English verbs' patterns of regular and irregular generalization, single-network theories have difficulty with the former and rule-only theories with the latter. Linguistic and psycholinguistic evidence, drawn from observation during experiments and from simulations of morphological pattern generation, independently calls for a hybrid of the two theories.
Children extend regular grammatical patterns to irregular words, resulting in overregularizations like comed, often after a period of correct performance ("U-shaped development"). The errors seem paradigmatic of rule use, hence bear on central issues in the psychology of rules: how creative rule application interacts with memorized exceptions in development, how overgeneral rules are unlearned in the absence of parental feedback, and whether cognitive processes involve explicit rules or parallel distributed processing (connectionist) networks. We remedy the lack of quantitative data on overregularization by analyzing 11,521 irregular past tense utterances in the spontaneous speech of 83 children. Our findings are as follows. (1) Overregularization errors are relatively rare (median 2.5% of irregular past tense forms), suggesting that there is no qualitative defect in children's grammars that must be unlearned. (2) Overregularization occurs at a roughly constant low rate from the 2s into the school-age years, affecting most irregular verbs. (3) Although overregularization errors never predominate, one aspect of their purported U-shaped development was confirmed quantitatively: an extended period of correct performance precedes the first error. (4) Overregularization does not correlate with increases in the number or proportion of regular verbs in parental speech, children's speech, or children's vocabularies. Thus, the traditional account in which memory operates before rules cannot be replaced by a connectionist alternative in which a single network displays rotelike or rulelike behavior in response to changes in input statistics. (5) Overregularizations first appear when children begin to mark regular verbs for tense reliably (i.e., when they stop saying Yesterday I walk). (6) The more often a parent uses an irregular form, the less often the child overregularizes it. 
(7) Verbs are protected from overregularization by similar-sounding irregulars, but they are not attracted to overregularization by similar-sounding regulars, suggesting that irregular patterns are stored in an associative memory with connectionist properties, but that regulars are not. We propose a simple explanation. Children, like adults, mark tense using memory (for irregulars) and an affixation rule that can generate a regular past tense form for any verb. Retrieval of an irregular blocks the rule, but children's memory traces are not strong enough to guarantee perfect retrieval. When retrieval fails, the rule is applied, and overregularization results.
How do speakers predict the syntax of a verb from its meaning? Traditional theories posit that syntactically relevant information about semantic arguments consists of a list of thematic roles like "agent", "theme", and "goal", which are linked onto a hierarchy of grammatical positions like subject, object, and oblique object. For verbs involving motion, the entity caused to move is defined as the "theme" or "patient" and linked to the object. However, this scheme fails for many common verbs, yielding ungrammatical forms such as fill water into the glass and cover a sheet onto the bed. In more recent theories verbs' meanings are multidimensional structures in which the motions, changes, and other events can be represented in separate but connected substructures; linking rules are sensitive to the position of an argument in a particular configuration. The verb's object would be linked not to the moving entity but to the argument specified as "affected" or caused to change as the main event in the verb's meaning. The change can either be one of location, resulting from motion in a particular manner, or of state, resulting from accommodating or reacting to a substance. For example, pour specifies how a substance moves (downward in a stream), so its substance argument is the object (pour the water, not pour the glass); fill specifies how a container changes (from not full to full), so its stationary container argument is the object (fill the glass, not fill the water). The newer theory was tested in three experiments. Children aged 3;4-9;4 and adults were taught made-up verbs, presented in a neutral syntactic context (this is mooping), referring to a transfer of items to a surface or container. Subjects were tested on their willingness to encode the moving items or the surface as the verb's object.
For verbs where the items moved in a particular manner (e.g., zig-zagging), people were more likely to express the moving items as the object; for verbs where the surface changed state (e.g., shape, color, or fullness), people were more likely to express the surface as the object. This confirms that speakers are not confined to labeling moving entities as "themes" or "patients" and linking them to the grammatical object; when a stationary entity undergoes a state change as the result of a motion, it can be represented as the main affected argument and thereby linked to the grammatical object instead.
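The linking rule described above can be summarized in a small sketch. The dictionary representation of verb meaning and the field names are hypothetical simplifications for illustration, not a representation proposed in the studies:

```python
def link_object(verb_meaning):
    """Link the grammatical object to the main affected argument:
    if the verb's main event is motion in a particular manner, the
    moving entity is affected (pour -> the water); if the main event
    is a change of state in the goal, the container or surface is
    affected (fill -> the glass)."""
    if verb_meaning["main_event"] == "motion_in_manner":
        return verb_meaning["moving_entity"]
    if verb_meaning["main_event"] == "change_of_state":
        return verb_meaning["goal"]
    raise ValueError("main event not specified in verb meaning")

# Hypothetical meaning representations for two verbs of transfer.
pour = {"main_event": "motion_in_manner",
        "moving_entity": "water", "goal": "glass"}
fill = {"main_event": "change_of_state",
        "moving_entity": "water", "goal": "glass"}
```

The point of the sketch is only that the object is chosen by the configuration of the verb's semantic structure, not by a fixed "moving entity = object" mapping.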
Language and cognition have been explained as the products of a homogeneous associative memory structure or, alternatively, of a set of genetically determined computational modules in which rules manipulate symbolic representations. Intensive study of one phenomenon of English grammar and how it is processed and acquired suggests that both theories are partly right. Regular verbs (walk-walked) are computed by a suffixation rule in a neural system for grammatical processing; irregular verbs (run-ran) are retrieved from an associative memory.
How are words represented in the mind and woven into sentences? How do children learn how to use words? Currently there is a tremendous resurgence of interest in lexical semantics. Word meanings have become increasingly important in linguistic theories because syntactic constructions are sensitive to the words they contain. In computational linguistics, new techniques are being applied to analyze words in texts, and machine-readable dictionaries are being used to build lexicons for natural language systems. These technologies provide large amounts of data and powerful data-analysis techniques to theoretical linguists, who can repay the favor to computer science by describing how one efficient lexical system, the human mind, represents word meanings. Lexical semantics provides crucial evidence to psychologists, too, about the innate stuff out of which concepts are made. Finally, it has become central to the study of child language acquisition. Infants are not born knowing a language, but they do have some understanding of the conceptual world that their parents describe in their speech. Since concepts are intimately tied to word meanings, knowledge of semantics might help children break into the rest of the language system. Lexical and Conceptual Semantics offers views from a variety of disciplines of these sophisticated new approaches to understanding the mental dictionary.
How do people recognize an object in different orientations? One theory is that the visual system describes the object relative to a reference frame centered on the object, resulting in a representation that is invariant across orientations. Chronometric data show that this is true only when an object can be identified uniquely by the arrangement of its parts along a single dimension. When an object can only be distinguished by an arrangement of its parts along more than one dimension, people mentally rotate it to a familiar orientation. This finding suggests that the human visual reference frame is tied to egocentric coordinates.
In a recent paper, Chambers and Reisberg (1985) showed that people cannot reverse classical ambiguous figures in imagery (such as the Necker cube, duck/rabbit, or Schroeder staircase). In three experiments, we refute one kind of explanation for this difficulty: that visual images do not contain information about the geometry of a shape necessary for reinterpreting it, or that people cannot apply shape classification procedures to the information in imagery. We show that, given suitable conditions, people can assign novel interpretations to ambiguous images which have been constructed out of parts or mentally transformed. For example, when asked to imagine the letter “D” on its side, affixed to the top of the letter “J”, subjects spontaneously report “seeing” an umbrella. We also show that these reinterpretations are not the result of guessing strategies, and that they speak directly to the issue of whether or not mental images of ambiguous figures can be reconstrued. Finally, we show that arguments from the philosophy literature on the relation between images and descriptions are not relevant to the issue of whether images can be reinterpreted, and we suggest possible explanations for why classical ambiguous figures do not spontaneously reverse in imagery.
"A monumental study that sets a new standard for work on learnability." —Ray Jackendoff
In tackling a learnability paradox that has challenged scholars for more than a decade—how children acquire predicate-argument structures in their language—Steven Pinker synthesizes a vast literature in the fields of linguistics and psycholinguistics, and outlines explicit theories of the mental representation, the learning, and the development of verb meaning and verb syntax. He describes a new theory that has some surprising implications for the relation between language and thought.
Does intelligence result from the manipulation of structured symbolic expressions? Or is it the result of the activation of large networks of densely interconnected simple units? Connections and Symbols provides the first systematic analysis of the explosive new field of connectionism that is challenging the basic tenets of cognitive science. These lively discussions by Jerry A. Fodor, Zenon W. Pylyshyn, Steven Pinker, Alan Prince, Joel Lachter, and Thomas G. Bever raise issues that lie at the core of our understanding of how the mind works: Does connectionism offer a truly new scientific model or does it merely cloak the old notion of associationism as a central doctrine of learning and mental functioning? Which of the new empirical generalizations are sound and which are false? And which of the many ideas such as massively parallel processing, distributed representation, constraint satisfaction, and subsymbolic or microfeatural analyses belong together, and which are logically independent? Now that connectionism has arrived with full-blown models of psychological processes as diverse as Pavlovian conditioning, visual recognition, and language acquisition, the debate is on. Common themes emerge from all the contributors to Connections and Symbols: criticism of connectionist models applied to language or the parts of cognition employing language-like operations; and a focus on what it is about human cognition that supports the traditional physical symbol system hypothesis. While criticizing many aspects of connectionist models, the authors also identify aspects of cognition that could be explained by the connectionist models.
The acquisition of the passive in English poses a learnability problem. Most transitive verbs have passive forms (e.g., kick/was kicked by), tempting the child to form a productive rule of passivization deriving passive participles from active forms. However, some verbs cannot be passivized (e.g., cost/was cost by). Given that children do not receive negative evidence telling them which strings are ungrammatical, what prevents them from overgeneralizing a productive passive rule to the exceptional verbs (or if they do incorrectly passivize such verbs, how do they recover)? One possible solution is that children are conservative: they only generate passives for those verbs that they have heard in passive sentences in the input. We show that this proposal is incorrect: in children's spontaneous speech, they utter passive participles that they could not have heard in parental input, and in four experiments in which 3–8-year-olds were taught novel verbs in active sentences, they freely uttered passivized versions of them when describing new events. An alternative solution is that children at some point come to possess a semantic constraint distinguishing passivizable from nonpassivizable verbs. In two of the experiments, we show that children do not have an absolute constraint forbidding them to passivize nonactional verbs of perception or spatial relationships, although they passivize them somewhat more reluctantly than they do actional verbs. In two other experiments, we show that children's tendency to passivize depends on the mapping between thematic roles and grammatical functions specified by the verb: they selectively resist passivizing made-up verbs whose subjects are patients and whose objects are agents; and they are more likely to passivize spatial relation verbs with location subjects than with theme subjects. These trends are consistent with Jackendoff's “Thematic Hierarchy Condition” on the adult passive.
However, we argue that the constraint on passive that adults obey, and that children approach, is somewhat different: passivizable verbs must have object arguments that are patients, either literally for action verbs, or in an extended abstract sense that individual languages can define for particular classes of nonactional verbs.
How do we recognize objects? How do we reason about objects when they are absent and only in memory? How do we conceptualize the three dimensions of space? Do different people do these things in different ways? And where are these abilities located in the brain? During the past decade cognitive scientists have devised new experimental techniques; researchers in artificial intelligence have developed new ways of modeling cognitive processes on computers; neuropsychologists are testing new models of brain organization. Many of these developments are represented in this collection of essays. The papers, though reporting work at the cutting edge of their fields, do not assume a highly technical background on the part of readers, and the volume begins with a tutorial introduction by the editor, making the book suitable for specialists and non-specialists alike.
Research is reviewed that addresses human language learning by developing precise, mechanistic models that are capable in principle of acquiring languages on the basis of exposure to linguistic data. Such research includes theorems on language learnability from mathematical linguistics, computer models of language acquisition from cognitive simulation and artificial intelligence, and models of transformational grammar acquisition from theoretical linguistics. It is argued that such research bears strongly on major issues in developmental psycholinguistics, in particular nativism and empiricism, the role of semantics and pragmatics in language learning, cognitive development, and the importance of the simplified speech addressed to children.