Publications

1994
Cave, K., Pinker, S., Giorgi, L., Thomas, C., Heller, L., Wolfe, J., & Lin, H. (1994). The Representation of Location in Visual Images. Cognitive Psychology, 26, 1-32.
Abstract

By definition, visual image representations are organized around spatial properties. However, we know very little about how these representations use information about location, one of the most important spatial properties. Three experiments explored how location information is incorporated into image representations. All of these experiments used a mental rotation task in which the location of the stimulus varied from trial to trial. If images are location-specific, these changes should affect the way images are used. The effects from image representations were separated from those of general spatial attention mechanisms by comparing performance with and without advance knowledge of the stimulus shape. With shape information, subjects could use an image as a template, and they recognized the stimulus more quickly when it was at the same location as the image. Experiment 1 demonstrated that subjects were able to use visual image representations effectively without knowing where the stimulus would appear, but left open the possibility that image location must be adjusted before use. In Experiment 2, distance between the stimulus location and the image location was varied systematically, and response time increased with distance. Therefore image representations appear to be location-specific, though the represented location can be adjusted easily. In Experiment 3, a saccade was introduced between the image cue and the test stimulus, in order to test whether subjects responded more quickly when the test stimulus appeared at the same retinotopic location or same spatiotopic location as the cue. The results suggest that location is coded retinotopically in image representations. This finding has implications not only for visual imagery but also for visual processing in general, because it suggests that there is no spatiotopic transform in the early stages of visual processing.

PDF
The Language Instinct
Pinker, S. (1994). The Language Instinct. New York, NY: Harper Perennial Modern Classics.
Abstract

"A brilliant, witty, and altogether satisfying book."
—Michael Coe, New York Times Book Review

Everyone has questions about language. Some are from everyday experience: Why do immigrants struggle with a new language, only to have their fluent children ridicule their grammatical errors? Why can't computers converse with us? Why is the hockey team in Toronto called the Maple Leafs, not the Maple Leaves? Some are from popular science: Have scientists really reconstructed the first language spoken on earth? Are there genes for grammar? Can chimpanzees learn sign language? And some are from our deepest ponderings about the human condition: Does our language control our thoughts? How could language have evolved? Is language deteriorating?

Today laypeople can chitchat about black holes and dinosaur extinctions, but their curiosity about their own speech has been left unsatisfied—until now. In The Language Instinct, Steven Pinker, one of the world's leading scientists of language and the mind, lucidly explains everything you always wanted to know about language: how it works, how children learn it, how it changes, how the brain computes it, how it evolved.

But The Language Instinct is no encyclopedia. With wit, erudition, and deft use of everyday examples of humor and wordplay, Pinker weaves our vast knowledge of language into a compelling theory: that language is a human instinct, wired into our brains by evolution like web-spinning in spiders or sonar in bats.

The theory not only challenges conventional wisdom about language itself (especially from the self-appointed "experts" who claim to be safeguarding the language but who understand it less well than a typical teenager); it is also part of a whole new vision of the human mind: not a general-purpose computer, but a collection of instincts adapted to solving evolutionarily significant problems—the mind as a Swiss Army knife.

Entertaining, insightful, provocative, The Language Instinct will change the way you talk about talking and think about thinking.

New in 2007: The new “PS” edition contains an update on the science of language since the book was first published, an autobiography, an account of how the book was written, frequently asked questions, and suggestions for further reading.

REVIEWS
Review Excerpts

AVAILABLE AT:
Amazon
Amazon UK
Barnes & Noble
IndieBound

1993
Prasada, S., & Pinker, S. (1993). Generalizations of regular and irregular morphology. Language and Cognitive Processes, 8 (1), 1-56.
Abstract

When it comes to explaining how English verbs generalize their regular and irregular patterns, single-network theories have difficulty with the former and rule-only theories with the latter. Linguistic and psycholinguistic evidence, based on observations from experiments and simulations of morphological pattern generation, independently calls for a hybrid of the two theories.

PDF
1992
Marcus, G., Pinker, S., Ullman, M., Hollander, M., Rosen, T., Xu, F., & Clahsen, H. (1992). Overregularization in language acquisition. Monographs of the Society for Research in Child Development, 57, 1-178.
Abstract

Children extend regular grammatical patterns to irregular words, resulting in overregularizations like comed, often after a period of correct performance ("U-shaped development"). The errors seem paradigmatic of rule use, hence bear on central issues in the psychology of rules: how creative rule application interacts with memorized exceptions in development, how overgeneral rules are unlearned in the absence of parental feedback, and whether cognitive processes involve explicit rules or parallel distributed processing (connectionist) networks. We remedy the lack of quantitative data on overregularization by analyzing 11,521 irregular past tense utterances in the spontaneous speech of 83 children. Our findings are as follows. (1) Overregularization errors are relatively rare (median 2.5% of irregular past tense forms), suggesting that there is no qualitative defect in children's grammars that must be unlearned. (2) Overregularization occurs at a roughly constant low rate from the 2s into the school-age years, affecting most irregular verbs. (3) Although overregularization errors never predominate, one aspect of their purported U-shaped development was confirmed quantitatively: an extended period of correct performance precedes the first error. (4) Overregularization does not correlate with increases in the number or proportion of regular verbs in parental speech, children's speech, or children's vocabularies. Thus, the traditional account in which memory operates before rules cannot be replaced by a connectionist alternative in which a single network displays rotelike or rulelike behavior in response to changes in input statistics. (5) Overregularizations first appear when children begin to mark regular verbs for tense reliably (i.e., when they stop saying Yesterday I walk). (6) The more often a parent uses an irregular form, the less often the child overregularizes it. (7) Verbs are protected from overregularization by similar-sounding irregulars, but they are not attracted to overregularization by similar-sounding regulars, suggesting that irregular patterns are stored in an associative memory with connectionist properties, but that regulars are not. We propose a simple explanation. Children, like adults, mark tense using memory (for irregulars) and an affixation rule that can generate a regular past tense form for any verb. Retrieval of an irregular blocks the rule, but children's memory traces are not strong enough to guarantee perfect retrieval. When retrieval fails, the rule is applied, and overregularization results.
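The blocking account proposed at the end of this abstract can be illustrated with a small sketch. The Python fragment below is a hypothetical toy, not the authors' model: an associative memory of irregular forms is probed first, a default -ed suffixation rule applies whenever retrieval fails, and overregularizations such as comed emerge from retrieval failures. The verbs and retrieval strengths are invented purely for illustration.

```python
# Hypothetical sketch of the blocking account: retrieval of an irregular form
# blocks the regular rule; when retrieval fails, the rule applies and an
# overregularization results. Retrieval strengths stand in for the strength
# of the child's memory traces and are invented for illustration.
import random

IRREGULAR_MEMORY = {
    "come": ("came", 0.6),
    "go":   ("went", 0.9),
    "run":  ("ran",  0.5),
}

def regular_past(verb: str) -> str:
    """Default suffixation rule: add -ed (with a trivial spelling adjustment)."""
    return verb + "d" if verb.endswith("e") else verb + "ed"

def produce_past(verb: str) -> str:
    """Retrieve an irregular form if memory succeeds; otherwise apply the rule."""
    if verb in IRREGULAR_MEMORY:
        form, strength = IRREGULAR_MEMORY[verb]
        if random.random() < strength:
            return form           # successful retrieval blocks the rule
    return regular_past(verb)     # retrieval failure yields "comed", "goed", ...

if __name__ == "__main__":
    for v in ("come", "go", "walk"):
        print(v, "->", produce_past(v))
```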

PDF
1991
Kim, J., Pinker, S., Prince, A., & Prasada, S. (1991). Why no mere mortal has ever flown out to center field. Cognitive Science, 15 (2), 173-218. PDF
Gropen, J., Pinker, S., Hollander, M., & Goldberg, R. (1991). Affectedness and Direct Objects: The Role of Lexical Semantics in the Acquisition of Verb Argument Structure. Cognition, 41 (1-3), 153-195.
Abstract

How do speakers predict the syntax of a verb from its meaning? Traditional theories posit that syntactically relevant information about semantic arguments consists of a list of thematic roles like "agent", "theme", and "goal", which are linked onto a hierarchy of grammatical positions like subject, object and oblique object. For verbs involving motion, the entity caused to move is defined as the "theme" or "patient" and linked to the object. However, this fails for many common verbs, as in fill water into the glass and cover a sheet onto the bed. In more recent theories verbs' meanings are multidimensional structures in which the motions, changes, and other events can be represented in separate but connected substructures; linking rules are sensitive to the position of an argument in a particular configuration. The verb's object would be linked not to the moving entity but to the argument specified as "affected" or caused to change as the main event in the verb's meaning. The change can either be one of location, resulting from motion in a particular manner, or of state, resulting from accommodating or reacting to a substance. For example, pour specifies how a substance moves (downward in a stream), so its substance argument is the object (pour the water/glass); fill specifies how a container changes (from not full to full), so its stationary container argument is the object (fill the glass/water). The newer theory was tested in three experiments. Children aged 3;4-9;4 and adults were taught made-up verbs, presented in a neutral syntactic context (this is mooping), referring to a transfer of items to a surface or container. Subjects were tested on their willingness to encode the moving items or the surface as the verb's object. For verbs where the items moved in a particular manner (e.g., zig-zagging), people were more likely to express the moving items as the object; for verbs where the surface changed state (e.g., shape, color, or fullness), people were more likely to express the surface as the object. This confirms that speakers are not confined to labeling moving entities as "themes" or "patients" and linking them to the grammatical object; when a stationary entity undergoes a state change as the result of a motion, it can be represented as the main affected argument and thereby linked to the grammatical object instead.
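The linking account described in this abstract can likewise be sketched informally. The fragment below is a hypothetical illustration, not the authors' formalism: each verb's semantic structure marks one argument as the "affected" one (the moving substance for pour, the changing container for fill), and a linking rule maps that argument to the grammatical object. The verb entries are invented for illustration.

```python
# Hypothetical sketch of the linking rule: the argument specified as "affected"
# (caused to change) in a verb's semantic structure is linked to the object.
VERBS = {
    # verb: semantic arguments, plus which one is specified as "affected"
    "pour": {"theme": "water", "goal": "glass", "affected": "theme"},  # manner of motion specified
    "fill": {"theme": "water", "goal": "glass", "affected": "goal"},   # change of state specified
}

def direct_object(verb: str) -> str:
    """Link the verb's 'affected' argument to direct-object position."""
    entry = VERBS[verb]
    return entry[entry["affected"]]

if __name__ == "__main__":
    for v in ("pour", "fill"):
        print(f"{v} the {direct_object(v)}")  # -> "pour the water", "fill the glass"
```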

PDF
Pinker, S. (1991). Rules of Language. Science, 253, 530-535.
Abstract

Language and cognition have been explained as the products of a homogeneous associative memory structure or, alternatively, of a set of genetically determined computational modules in which rules manipulate symbolic representations. Intensive study of one phenomenon of English grammar and how it is processed and acquired suggests that both theories are partly right. Regular verbs (walk-walked) are computed by a suffixation rule in a neural system for grammatical processing; irregular verbs (run-ran) are retrieved from an associative memory.

Lexical and Conceptual Semantics
Pinker, S., & Levin, B. (1991). Lexical and Conceptual Semantics. Cambridge, MA: MIT Press.
Abstract

How are words represented in the mind and woven into sentences? How do children learn how to use words? Currently there is a tremendous resurgence of interest in lexical semantics. Word meanings have become increasingly important in linguistic theories because syntactic constructions are sensitive to the words they contain. In computational linguistics, new techniques are being applied to analyze words in texts, and machine-readable dictionaries are being used to build lexicons for natural language systems. These technologies provide large amounts of data and powerful data-analysis techniques to theoretical linguists, who can repay the favor to computer science by describing how one efficient lexical system, the human mind, represents word meanings. Lexical semantics provides crucial evidence to psychologists, too, about the innate stuff out of which concepts are made. Finally, it has become central to the study of child language acquisition. Infants are not born knowing a language, but they do have some understanding of the conceptual world that their parents describe in their speech. Since concepts are intimately tied to word meanings, knowledge of semantics might help children break into the rest of the language system. Lexical and Conceptual Semantics offers views from a variety of disciplines of these sophisticated new approaches to understanding the mental dictionary.

AVAILABLE AT:
Amazon
Amazon UK
Barnes & Noble

1990
Pinker, S., & Bloom, P. (1990). Natural Language and Natural Selection. Behavioral and Brain Sciences, 13 (4), 707-784. PDF
Tarr, M., & Pinker, S. (1990). When does human object recognition use a viewer-centered reference frame? Psychological Science, 1 (4), 253-256.
Abstract

How do people recognize an object in different orientations? One theory is that the visual system describes the object relative to a reference frame centered on the object, resulting in a representation that is invariant across orientations. Chronometric data show that this is true only when an object can be identified uniquely by the arrangement of its parts along a single dimension. When an object can only be distinguished by an arrangement of its parts along more than one dimension, people mentally rotate it to a familiar orientation. This finding suggests that the human visual reference frame is tied to egocentric coordinates.

PDF
1989
Tarr, M. J., & Pinker, S. (1989). Mental rotation and orientation-dependence in shape recognition. Cognitive Psychology, 21, 233-282.
Finke, R., Pinker, S., & Farah, M. (1989). Reinterpreting Visual Patterns in Mental Imagery. Cognitive Science, 13 (1), 51-78.
Abstract

In a recent paper, Chambers and Reisberg (1985) showed that people cannot reverse classical ambiguous figures in imagery (such as the Necker cube, duck/rabbit, or Schroeder staircase). In three experiments, we refute one kind of explanation for this difficulty: that visual images do not contain information about the geometry of a shape necessary for reinterpreting it, or that people cannot apply shape classification procedures to the information in imagery. We show that, given suitable conditions, people can assign novel interpretations to ambiguous images which have been constructed out of parts or mentally transformed. For example, when asked to imagine the letter “D” on its side, affixed to the top of the letter “J”, subjects spontaneously report “seeing” an umbrella. We also show that these reinterpretations are not the result of guessing strategies, and that they speak directly to the issue of whether or not mental images of ambiguous figures can be reconstrued. Finally, we show that arguments from the philosophy literature on the relation between images and descriptions are not relevant to the issue of whether images can be reinterpreted, and we suggest possible explanations for why classical ambiguous figures do not spontaneously reverse in imagery.

PDF
Gropen, J., Pinker, S., Hollander, M., Goldberg, R., & Wilson, R. (1989). The Learnability and Acquisition of the Dative Alternation in English. Language, 65 (2), 203-257. PDF
Learnability and Cognition: The Acquisition of Argument Structure
Pinker, S. (1989). Learnability and Cognition: The Acquisition of Argument Structure. Cambridge, MA: MIT Press.
Abstract

"A monumental study that sets a new standard for work on learnability."
—Ray Jackendoff

In tackling a learnability paradox that has challenged scholars for more than a decade—how children acquire predicate-argument structures in their language—Steven Pinker synthesizes a vast literature in the fields of linguistics and psycholinguistics, and outlines explicit theories of the mental representation, the learning, and the development of verb meaning and verb syntax. He describes a new theory that has some surprising implications for the relation between language and thought.

REVIEWS
Review Excerpts

AVAILABLE AT:
Amazon
Amazon UK

1988
Connections and Symbols
Pinker, S., & Mehler, J. (1988). Connections and Symbols. Cambridge, MA: MIT Press.
Abstract

Does intelligence result from the manipulation of structured symbolic expressions? Or is it the result of the activation of large networks of densely interconnected simple units? Connections and Symbols provides the first systematic analysis of the explosive new field of connectionism that is challenging the basic tenets of cognitive science. These lively discussions by Jerry A. Fodor, Zenon W. Pylyshyn, Steven Pinker, Alan Prince, Joel Lachter, and Thomas G. Bever raise issues that lie at the core of our understanding of how the mind works: Does connectionism offer a truly new scientific model or does it merely cloak the old notion of associationism as a central doctrine of learning and mental functioning? Which of the new empirical generalizations are sound and which are false? And which of the many ideas such as massively parallel processing, distributed representation, constraint satisfaction, and subsymbolic or microfeatural analyses belong together, and which are logically independent? Now that connectionism has arrived with full-blown models of psychological processes as diverse as Pavlovian conditioning, visual recognition, and language acquisition, the debate is on. Common themes emerge from all the contributors to Connections and Symbols: criticism of connectionist models applied to language or the parts of cognition employing language-like operations; and a focus on what it is about human cognition that supports the traditional physical symbol system hypothesis. While criticizing many aspects of connectionist models, the authors also identify aspects of cognition that could be explained by the connectionist models.

AVAILABLE AT:
Amazon
Amazon UK
Barnes & Noble
IndieBound

1987
Pinker, S., Lebeaux, S., & Frost, L. A. (1987). Productivity and Constraints in the Acquisition of the Passive. Cognition, 26 (3), 195-267.
Abstract
The acquisition of the passive in English poses a learnability problem. Most transitive verbs have passive forms (e.g., kick/was kicked by), tempting the child to form a productive rule of passivization deriving passive participles from active forms. However, some verbs cannot be passivized (e.g. cost/was cost by). Given that children do not receive negative evidence telling them which strings are ungrammatical, what prevents them from overgeneralizing a productive passive rule to the exceptional verbs (or if they do incorrectly passivize such verbs, how do they recover)? One possible solution is that children are conservative: they only generate passives for those verbs that they have heard in passive sentences in the input. We show that this proposal is incorrect: in children's spontaneous speech, they utter passive participles that they could not have heard in parental input, and in four experiments in which 3–8-year-olds were taught novel verbs in active sentences, they freely uttered passivized versions of them when describing new events. An alternative solution is that children at some point come to possess a semantic constraint distinguishing passivizable from nonpassivizable verbs. In two of the experiments, we show that children do not have an absolute constraint forbidding them to passivize nonactional verbs of perception or spatial relationships, although they passivize them somewhat more reluctantly than they do actional verbs. In two other experiments, we show that children's tendency to passivize depends on the mapping between thematic roles and grammatical functions specified by the verb: they selectively resist passivizing made-up verbs whose subjects are patients and whose objects are agents; and they are more likely to passivize spatial relation verbs with location subjects than with theme subjects. These trends are consistent with Jackendoff's “Thematic Hierarchy Condition” on the adult passive. However, we argue that the constraint on passive that adults obey, and that children approach, is somewhat different: passivizable verbs must have object arguments that are patients, either literally for action verbs, or in an extended abstract sense that individual languages can define for particular classes of nonactional verbs.
PDF
1986
Visual Cognition: Computational Models of Cognition and Perception
Pinker, S. (1986). Visual Cognition: Computational Models of Cognition and Perception. Cambridge, MA: MIT Press.
Abstract

How do we recognize objects? How do we reason about objects when they are absent and only in memory? How do we conceptualize the three dimensions of space? Do different people do these things in different ways? And where are these abilities located in the brain? During the past decade cognitive scientists have devised new experimental techniques; researchers in artificial intelligence have devised new ways of modeling cognitive processes on computers; neuropsychologists are testing new models of brain organization. Many of these developments are represented in this collection of essays. The papers, though reporting work at the cutting edge of their fields, do not assume a highly technical background on the part of readers, and the volume begins with a tutorial introduction by the editor, making the book suitable for specialists and non-specialists alike.

AVAILABLE AT:
Amazon
Amazon UK
Barnes & Noble
IndieBound

1984
Language Learnability and Language Development
Pinker, S. (1984). Language Learnability and Language Development. Cambridge, MA: Harvard University Press.
Abstract

"A fiercely reasoned, bently written landmark of psychological science."
—Roger Brown

This classic study is still the only comprehensive theory of child language acquisition—one that begins with the infant, proceeds step by step according to explicit learning algorithms, mirrors children's development, and ends up with adult grammatical competence. Now reprinted with new commentary by the author that updates every section, Language Learnability and Language Development continues to be an indispensable resource in developmental psycholinguistics.

REVIEWS
Review Excerpts

AVAILABLE AT:
Amazon
Amazon UK
Barnes & Noble

1979
Pinker, S. (1979). Formal Models of Language Learning. Cognition, 7, 217-283.
Abstract
Research is reviewed that addresses itself to human language learning by developing precise, mechanistic models that are capable in principle of acquiring languages on the basis of exposure to linguistic data. Such research includes theorems on language learnability from mathematical linguistics, computer models of language acquisition from cognitive simulation and artificial intelligence, and models of transformational grammar acquisition from theoretical linguistics. It is argued that such research bears strongly on major issues in developmental psycholinguistics, in particular, nativism and empiricism, the role of semantics and pragmatics in language learning, cognitive development, and the importance of the simplified speech addressed to children.
PDF
