How are words represented in the mind and woven into sentences? How do children learn how to use words? Currently there is a tremendous resurgence of interest in lexical semantics. Word meanings have become increasingly important in linguistic theories because syntactic constructions are sensitive to the words they contain. In computational linguistics, new techniques are being applied to analyze words in texts, and machine-readable dictionaries are being used to build lexicons for natural language systems. These technologies provide large amounts of data and powerful data-analysis techniques to theoretical linguists, who can repay the favor to computer science by describing how one efficient lexical system, the human mind, represents word meanings. Lexical semantics provides crucial evidence to psychologists, too, about the innate stuff out of which concepts are made. Finally, it has become central to the study of child language acquisition. Infants are not born knowing a language, but they do have some understanding of the conceptual world that their parents describe in their speech. Since concepts are intimately tied to word meanings, knowledge of semantics might help children break into the rest of the language system. Lexical and Conceptual Semantics offers views from a variety of disciplines of these sophisticated new approaches to understanding the mental dictionary.
How do people recognize an object in different orientations? One theory is that the visual system describes the object relative to a reference frame centered on the object, resulting in a representation that is invariant across orientations. Chronometric data show that this is true only when an object can be identified uniquely by the arrangement of its parts along a single dimension. When an object can only be distinguished by an arrangement of its parts along more than one dimension, people mentally rotate it to a familiar orientation. This finding suggests that the human visual reference frame is tied to egocentric coordinates.
In a recent paper, Chambers and Reisberg (1985) showed that people cannot reverse classical ambiguous figures in imagery (such as the Necker cube, duck/rabbit, or Schroeder staircase). In three experiments, we refute one kind of explanation for this difficulty: that visual images do not contain the information about the geometry of a shape necessary for reinterpreting it, or that people cannot apply shape classification procedures to the information in imagery. We show that, given suitable conditions, people can assign novel interpretations to ambiguous images that have been constructed out of parts or mentally transformed. For example, when asked to imagine the letter “D” on its side, affixed to the top of the letter “J”, subjects spontaneously report “seeing” an umbrella. We also show that these reinterpretations are not the result of guessing strategies, and that they speak directly to the issue of whether mental images of ambiguous figures can be reconstrued. Finally, we show that arguments from the philosophy literature on the relation between images and descriptions are not relevant to the issue of whether images can be reinterpreted, and we suggest possible explanations for why classical ambiguous figures do not spontaneously reverse in imagery.
"A monumental study that sets a new standard for work on learnability." —Ray Jackendoff
In tackling a learnability paradox that has challenged scholars for more than a decade—how children acquire predicate-argument structures in their language—Steven Pinker synthesizes a vast literature in the fields of linguistics and psycholinguistics, and outlines explicit theories of the mental representation, the learning, and the development of verb meaning and verb syntax. He describes a new theory that has some surprising implications for the relation between language and thought.
Does intelligence result from the manipulation of structured symbolic expressions? Or is it the result of the activation of large networks of densely interconnected simple units? Connections and Symbols provides the first systematic analysis of the explosive new field of connectionism that is challenging the basic tenets of cognitive science. These lively discussions by Jerry A. Fodor, Zenon W. Pylyshyn, Steven Pinker, Alan Prince, Joel Lachter, and Thomas G. Bever raise issues that lie at the core of our understanding of how the mind works: Does connectionism offer a truly new scientific model, or does it merely cloak the old notion of associationism as a central doctrine of learning and mental functioning? Which of the new empirical generalizations are sound and which are false? And which of the many ideas such as massively parallel processing, distributed representation, constraint satisfaction, and subsymbolic or microfeatural analyses belong together, and which are logically independent? Now that connectionism has arrived with full-blown models of psychological processes as diverse as Pavlovian conditioning, visual recognition, and language acquisition, the debate is on. Common themes emerge from all the contributors to Connections and Symbols: criticism of connectionist models applied to language or the parts of cognition employing language-like operations, and a focus on what it is about human cognition that supports the traditional physical symbol system hypothesis. While criticizing many aspects of connectionist models, the authors also identify aspects of cognition that could be explained by the connectionist models.
The acquisition of the passive in English poses a learnability problem. Most transitive verbs have passive forms (e.g., kick/was kicked by), tempting the child to form a productive rule of passivization deriving passive participles from active forms. However, some verbs cannot be passivized (e.g., cost/was cost by). Given that children do not receive negative evidence telling them which strings are ungrammatical, what prevents them from overgeneralizing a productive passive rule to the exceptional verbs (or, if they do incorrectly passivize such verbs, how do they recover)? One possible solution is that children are conservative: they only generate passives for those verbs that they have heard in passive sentences in the input. We show that this proposal is incorrect: in children's spontaneous speech, they utter passive participles that they could not have heard in parental input, and in four experiments in which 3–8-year-olds were taught novel verbs in active sentences, they freely uttered passivized versions of them when describing new events. An alternative solution is that children at some point come to possess a semantic constraint distinguishing passivizable from nonpassivizable verbs. In two of the experiments, we show that children do not have an absolute constraint forbidding them to passivize nonactional verbs of perception or spatial relationships, although they passivize them somewhat more reluctantly than they do actional verbs. In two other experiments, we show that children's tendency to passivize depends on the mapping between thematic roles and grammatical functions specified by the verb: they selectively resist passivizing made-up verbs whose subjects are patients and whose objects are agents; and they are more likely to passivize spatial relation verbs with location subjects than with theme subjects. These trends are consistent with Jackendoff's “Thematic Hierarchy Condition” on the adult passive. However, we argue that the constraint on passive that adults obey, and that children approach, is somewhat different: passivizable verbs must have object arguments that are patients, either literally for action verbs, or in an extended abstract sense that individual languages can define for particular classes of nonactional verbs.
How do we recognize objects? How do we reason about objects when they are absent and only in memory? How do we conceptualize the three dimensions of space? Do different people do these things in different ways? And where are these abilities located in the brain? During the past decade cognitive scientists have devised new experimental techniques; researchers in artificial intelligence have devised new ways of modeling cognitive processes on computers; and neuropsychologists are testing new models of brain organization. Many of these developments are represented in this collection of essays. The papers, though reporting work at the cutting edge of their fields, do not assume a highly technical background on the part of readers, and the volume begins with a tutorial introduction by the editor, making the book suitable for specialists and non-specialists alike.
Research is reviewed that addresses itself to human language learning by developing precise, mechanistic models that are capable in principle of acquiring languages on the basis of exposure to linguistic data. Such research includes theorems on language learnability from mathematical linguistics, computer models of language acquisition from cognitive simulation and artificial intelligence, and models of transformational grammar acquisition from theoretical linguistics. It is argued that such research bears strongly on major issues in developmental psycholinguistics, in particular, nativism and empiricism, the role of semantics and pragmatics in language learning, cognitive development, and the importance of the simplified speech addressed to children.