The role of Broca’s area in grammatical computation is unclear, because syntactic processing is often confounded with working memory, articulation, or semantic selection. Morphological processing potentially circumvents these problems. Using event-related functional magnetic resonance imaging (fMRI), we had 18 subjects silently inflect words or read them verbatim. Subtracting the activity pattern for reading from that for inflection, which indexes processes involved in inflection (holding constant lexical processing and articulatory planning), highlighted left Brodmann area (BA) 44/45 (Broca’s area), BA 47, anterior insula, and medial supplementary motor area. Subtracting activity during zero inflection (the hawk; they walk) from that during overt inflection (the hawks; they walked), which highlights manipulation of phonological content, implicated subsets of the regions engaged by inflection as a whole. Subtracting activity during verbatim reading from activity during zero inflection (which highlights the manipulation of inflectional features) implicated distinct regions of BA 44, BA 47, and a premotor region (thereby tying these regions to grammatical features), but failed to implicate the insula or BA 45 (thereby tying these to articulation). These patterns were largely similar for nouns and verbs and for regular and irregular forms, suggesting that these regions implement inflectional features cutting across word classes. Greater activity was observed for irregular than for regular verbs in the anterior cingulate and supplementary motor area (SMA), possibly reflecting the blocking of regular or competing irregular candidates. The results confirm a role for Broca’s area in abstract grammatical processing, and are interpreted in terms of a network of regions in left prefrontal cortex (PFC) that are recruited for processing abstract morphosyntactic features and overt morphophonological content.
2006
2005
The distinction between singular and plural enters into linguistic phenomena such as morphology, lexical semantics, and agreement and also must interface with perceptual and conceptual systems that assess numerosity in the world. Three experiments examine the computation of semantic number for singulars and plurals from the morphological properties of visually presented words. In a Stroop-like task, Hebrew speakers were asked to determine the number of words presented on a computer screen (one or two) while ignoring their contents. People took longer to respond if the number of words was incongruent with their morphological number (e.g., they were slower to determine that one word was on the screen if it was plural, and in some conditions, that two words were on the screen if they were singular, compared to neutral letter strings), suggesting that the extraction of number from words is automatic and yields a representation comparable to the one computed by the perceptual system. In many conditions, the effect of number congruency occurred only with plural nouns, not singulars, consistent with the suggestion from linguistics that words lacking a plural affix are not actually singular in their semantics but unmarked for number.
In my book How the Mind Works, I defended the theory that the human mind is a naturally selected system of organs of computation. Jerry Fodor claims that ‘the mind doesn’t work that way’ (in a book with that title) because (1) Turing Machines cannot duplicate humans’ ability to perform abduction (inference to the best explanation); (2) though a massively modular system could succeed at abduction, such a system is implausible on other grounds; and (3) evolution adds nothing to our understanding of the mind. In this review I show that these arguments are flawed. First, my claim that the mind is a computational system is different from the claim Fodor attacks (that the mind has the architecture of a Turing Machine); therefore the practical limitations of Turing Machines are irrelevant. Second, Fodor identifies abduction with the cumulative accomplishments of the scientific community over millennia. This is very different from the accomplishments of human common sense, so the supposed gap between human cognition and computational models may be illusory. Third, my claim about biological specialization, as seen in organ systems, is distinct from Fodor’s own notion of encapsulated modules, so the limitations of the latter are irrelevant. Fourth, Fodor’s arguments dismissing the relevance of evolution to psychology are unsound.
We examine the question of which aspects of language are uniquely human and uniquely linguistic in light of recent suggestions by Hauser, Chomsky, and Fitch that the only such aspect is syntactic recursion, the rest of language being either specific to humans but not to language (e.g. words and concepts) or not specific to humans (e.g. speech perception). We find the hypothesis problematic. It ignores the many aspects of grammar that are not recursive, such as phonology, morphology, case, agreement, and many properties of words. It is inconsistent with the anatomy and neural control of the human vocal tract. And it is weakened by experiments suggesting that speech perception cannot be reduced to primate audition, that word learning cannot be reduced to fact learning, and that at least one gene involved in speech and language was evolutionarily selected in the human lineage but is not specific to recursion. The recursion-only claim, we suggest, is motivated by Chomsky’s recent approach to syntax, the Minimalist Program, which de-emphasizes the same aspects of language. The approach, however, is sufficiently problematic that it cannot be used to support claims about evolution. We contest related arguments that language is not an adaptation, namely that it is “perfect,” non-redundant, unusable in any partial form, and badly designed for communication. The hypothesis that language is a complex adaptation for communication which evolved piecemeal avoids all these problems.
In a continuation of the conversation with Fitch, Chomsky, and Hauser on the evolution of language, we examine their defense of the claim that the uniquely human, language-specific part of the language faculty (the “narrow language faculty”) consists only of recursion, and that this part cannot be considered an adaptation to communication. We argue that their characterization of the narrow language faculty is problematic for many reasons, including its dichotomization of cognitive capacities into those that are utterly unique and those that are identical to nonlinguistic or nonhuman capacities, omitting capacities that may have been substantially modified during human evolution. We also question their dichotomy of the current utility versus original function of a trait, which omits traits that are adaptations for current use, and their dichotomy of humans and animals, which conflates similarity due to common function with similarity due to inheritance from a recent common ancestor. We show that recursion, though absent from other animals’ communication systems, is found in visual cognition, and hence cannot be the sole evolutionary development that granted language to humans. Finally, we note that despite Fitch et al.’s denial, their view of language evolution is tied to Chomsky’s conception of language itself, which identifies combinatorial productivity with a core of “narrow syntax.” An alternative conception, in which combinatoriality is spread across words and constructions, has both empirical advantages and greater evolutionary plausibility.
2004
2002
What is the interaction between storage and computation in language processing? What is the psychological status of grammatical rules? What are the relative strengths of connectionist and symbolic models of cognition? How are the components of language implemented in the brain? The English past tense has served as an arena for debates on these issues. We defend the theory that irregular past-tense forms are stored in the lexicon, a division of declarative memory, whereas regular forms can be computed by a concatenation rule, which requires the procedural system. Irregulars have the psychological, linguistic and neuropsychological signatures of lexical memory, whereas regulars often have the signatures of grammatical processing. Furthermore, because regular inflection is rule-driven, speakers can apply it whenever memory fails.
Most evidence for the role of regular inflection as a default operation comes from languages that confound the morphological properties of regular and irregular forms with their phonological characteristics. For instance, regular plurals tend to faithfully preserve the base’s phonology (e.g., rat-rats), whereas irregular nouns tend to alter it (e.g., mouse-mice). The distinction between regular and irregular inflection may thus be an epiphenomenon of phonological faithfulness. In Hebrew noun inflection, however, morphological regularity and phonological faithfulness can be distinguished: nouns whose stems change in the plural may take either a regular or an irregular suffix, and likewise for nouns whose stems are preserved in the plural. We use this dissociation to examine two hallmarks of default inflection: its lack of dependence on analogies from similar regular nouns, and its application to nonroots such as names. We show that these hallmarks of regularity may be found whether or not the plural form preserves the stem faithfully: people apply the regular suffix to novel nouns that don’t resemble existing nouns, and to names that sound like irregular nouns, regardless of whether the stem is ordinarily preserved in the plural of that family of nouns. Moreover, when they pluralize names (e.g., the Barak-Barakim), they do not apply the stem changes that are found in their homophonous nouns (e.g., barak-brakim “lightning”), replicating an effect found in English and German. These findings show that the distinction between regular and irregular phenomena cannot be reduced to differences in the kinds of phonological changes associated with those phenomena in English. Instead, regularity and irregularity must be distinguished in terms of the kinds of mental computations that effect them: symbolic operations versus memorized idiosyncrasies.
A corollary is that complex words are not generally dichotomizable as “regular” or “irregular”; different aspects of a word may be regular or irregular depending on whether they violate the rule for that aspect and hence must be stored in memory.